
Can ChatGPT flag potential terrorists? Study uses automated tools and AI to profile violent extremists

Technology such as ChatGPT could play a complementary role in profiling terrorists and assessing the likelihood of their engaging in extremist activity, according to a groundbreaking study that could make anti-terrorism efforts more efficient.

Charles Darwin University (CDU) researchers fed 20 post-9/11 public statements made by international terrorists into the Linguistic Inquiry and Word Count (LIWC) software.
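LIWC works by counting a text's words against category dictionaries. As an illustration only — the real LIWC dictionary is proprietary, and the categories and word lists below are invented stand-ins — a minimal Python sketch of that dictionary-based counting looks like this:

```python
import re
from collections import Counter

# Invented mini-dictionary standing in for LIWC's proprietary categories.
CATEGORY_WORDS = {
    "anger": {"fight", "enemy", "attack", "destroy"},
    "religion": {"faith", "sacred", "believers"},
    "power": {"rule", "control", "authority"},
}

def category_counts(text: str) -> Counter:
    """Count how many tokens in `text` fall into each category."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for token in tokens:
        for category, vocab in CATEGORY_WORDS.items():
            if token in vocab:
                counts[category] += 1
    return counts
```

For example, `category_counts("They attack our sacred faith")` would report two "religion" hits and one "anger" hit; LIWC's real output is far richer, but the counting principle is the same.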

They then provided ChatGPT with a sample of statements from four terrorists within this dataset and asked the technology two questions: What are the main themes or topics in the text, and what grievances are behind the communicated messages?
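That two-question step can be sketched in Python. The prompt wording and the commented-out API call are assumed reconstructions for illustration, not the researchers' exact setup:

```python
# The study's two analysis questions, posed one at a time about a statement.
# Prompt framing is an assumption, not the researchers' exact text.
QUESTIONS = [
    "What are the main themes or topics in the text?",
    "What grievances are behind the communicated messages?",
]

def build_messages(statement: str) -> list[dict]:
    """Return one chat message per question, each carrying the statement."""
    return [
        {"role": "user", "content": f"{question}\n\nText:\n{statement}"}
        for question in QUESTIONS
    ]

# The messages could then be sent to a chat-completion API, e.g. (not run here):
# from openai import OpenAI
# client = OpenAI()
# for message in build_messages(sample_statement):
#     reply = client.chat.completions.create(model="gpt-4o", messages=[message])
```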

ChatGPT identified the central themes in the selected texts by the four terrorists, revealing clues to each individual’s motivations and the purpose of their writing, and produced reasonably accurate thematic and semantic categories.

Themes included retaliation and self-defence, rejecting democratic systems, opposition to secularism and apostate rulers, struggle and martyrdom, dehumanisation of opponents, criticism of mass immigration, opposition to multiculturalism, and more.

ChatGPT also identified clues to the motivations for violence, including a desire for retribution and justice, anti-Western sentiment, oppression and aggression by enemies, religious grievance, and fear of racial and cultural replacement.

The themes were also mapped onto the Terrorist Radicalisation Assessment Protocol-18 (TRAP-18), a tool used by authorities to assess individuals who might engage in terrorism, and were found to match TRAP-18 indicators of threatening behaviour.
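The mapping step can be pictured as a lookup from extracted themes to TRAP-18 indicator labels. The pairings below are illustrative guesses for a handful of indicators, not the study's published mapping:

```python
# Illustrative theme -> TRAP-18 indicator lookup; these pairings are
# assumptions for demonstration, not the study's actual mapping.
THEME_TO_TRAP18 = {
    "retaliation and self-defence": "personal grievance and moral outrage",
    "struggle and martyrdom": "framed by an ideology",
    "dehumanisation of opponents": "identification",
    "fear of racial and cultural replacement": "personal grievance and moral outrage",
}

def map_themes(themes: list[str]) -> dict[str, str]:
    """Return only the extracted themes that hit a known TRAP-18 indicator."""
    return {t: THEME_TO_TRAP18[t] for t in themes if t in THEME_TO_TRAP18}
```

A theme with no entry in the lookup simply falls through, so the result contains only the matches an analyst would then review by hand.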

Lead author Dr Awni Etaywe, a leading expert in forensic linguistics focusing on terrorism, says the advantage of large language models (LLMs) such as ChatGPT is that they can serve as complementary tools that do not require specific training.

“While LLMs cannot replace human judgement or close-text analysis, they offer valuable investigative clues, accelerating suspicion and enhancing our understanding of the motivations behind terrorist discourse,” Dr Etaywe said.

“Despite concerns about the potential weaponisation of AI tools like ChatGPT as raised by Europol, this study has demonstrated that future work aimed at enhancing proactive forensic profiling capabilities can also apply machine learning to cyberterrorist text categorisation.”

The paper was co-authored by CDU International Relations and Political Science Senior Lecturer Dr Kate Macfarlane, and CDU Information Technology Professor Mamoun Alazab.

Dr Etaywe said further study is needed to improve the accuracy and reliability of analyses by LLMs, including ChatGPT.

“We need to ensure it becomes a practical aid in identifying potential threats while considering the socio-cultural contexts of terrorism,” Dr Etaywe said.

“These large language models thus far have an investigative but not evidential value.”

The paper, “A cyberterrorist behind the keyboard: An automated text analysis for psycholinguistic profiling and threat assessment”, was published in the Journal of Language Aggression and Conflict.

The study was conducted by Charles Darwin University researchers Dr Awni Etaywe, Dr Kate Macfarlane and Professor Mamoun Alazab.
