Can ChatGPT flag potential terrorists? Study uses automated tools and AI to profile violent extremists
Technology such as ChatGPT could play a complementary role in profiling terrorists and assessing how likely they are to engage in extremist activity, according to a groundbreaking study that could make counter-terrorism efforts more efficient.
Charles Darwin University (CDU) researchers fed 20 post-9/11 public statements made by international terrorists into the Linguistic Inquiry and Word Count (LIWC) software.
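LIWC's core technique is dictionary-based word counting: each word in a text is checked against lists of words belonging to psycholinguistic categories. As a rough illustration of that idea only (the categories and word lists below are invented examples; the real LIWC dictionaries are proprietary and far more extensive), a category count might look like:

```python
# Illustrative sketch of dictionary-based category counting, the core idea
# behind LIWC. The category names and word lists here are made-up examples,
# not the actual LIWC dictionaries.
import re
from collections import Counter

CATEGORIES = {
    "anger":    {"attack", "enemy", "destroy", "fight"},
    "power":    {"control", "rule", "order", "force"},
    "religion": {"faith", "sacred", "martyr", "believe"},
}

def category_counts(text: str) -> Counter:
    """Count how many tokens in `text` fall into each category."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for token in tokens:
        for category, words in CATEGORIES.items():
            if token in words:
                counts[category] += 1
    return counts

print(category_counts("We will fight and destroy the enemy with faith."))
```

Tools like LIWC then report these raw counts as percentages of total words, giving a psycholinguistic profile of the text.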
They then provided ChatGPT with a sample of statements from four terrorists within this dataset and asked the technology two questions: What are the main themes or topics in the text, and what grievances are behind the communicated messages?
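In practice, the prompting step amounts to pairing each statement with the study's two questions and sending the result to a chat model. A minimal sketch of that shape (the prompt wording, client, and model used by the researchers are not public, so everything beyond the two questions is an assumption):

```python
# Sketch of the prompting step: pair one statement with the study's two
# questions. The actual model, client, and prompt wording used by the
# researchers are not public; this only illustrates the shape of the query.
QUESTIONS = (
    "What are the main themes or topics in the text?",
    "What grievances are behind the communicated messages?",
)

def build_messages(statement: str) -> list[dict]:
    """Build a chat-style message list asking both questions about one text."""
    prompt = f"Text:\n{statement}\n\n" + "\n".join(QUESTIONS)
    return [{"role": "user", "content": prompt}]

# A real run would pass the result of build_messages() to a chat-completion
# API client; that call is omitted here.
messages = build_messages("Example extremist statement goes here.")
print(messages[0]["content"])
```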
ChatGPT identified the central themes of the selected texts by the four terrorists, revealing clues to each individual’s motivations and the purpose of their texts, and it produced reasonably sound thematic and semantic categories.
The themes included retaliation and self-defence, rejection of democratic systems, opposition to secularism and apostate rulers, struggle and martyrdom, dehumanisation of opponents, criticism of mass immigration, opposition to multiculturalism, and more.
ChatGPT also identified clues to motivations of violence, including desire for retribution and justice, anti-Western sentiment, oppression and aggression by enemies, religious grievance, and fear of racial and cultural replacement.
The themes were also mapped onto the Terrorist Radicalisation Assessment Protocol-18 (TRAP-18), a tool used by authorities to assess individuals who may engage in terrorism, and were found to match TRAP-18 indicators of threatening behaviour.
Lead author Dr Awni Etaywe, a leading expert in forensic linguistics focusing on terrorism, says the advantage of large language models (LLMs) like ChatGPT is that they can be used as complementary tools which do not require specific training.
“While LLMs cannot replace human judgement or close-text analysis, they offer valuable investigative clues, accelerating suspicion and enhancing our understanding of the motivations behind terrorist discourse,” Dr Etaywe said.
“Despite concerns about the potential weaponisation of AI tools like ChatGPT as raised by Europol, this study has demonstrated that future work aimed at enhancing proactive forensic profiling capabilities can also apply machine learning to cyberterrorist text categorisation.”
The paper was co-authored by CDU International Relations and Political Science Senior Lecturer Dr Kate Macfarlane, and CDU Information Technology Professor Mamoun Alazab.
Dr Etaywe said further study is needed to improve the accuracy and reliability of analyses by LLMs, including ChatGPT.
“We need to ensure it becomes a practical aid in identifying potential threats while considering the socio-cultural contexts of terrorism,” Dr Etaywe said.
“These large language models thus far have an investigative but not evidential value.”
The paper, “A cyberterrorist behind the keyboard: An automated text analysis for psycholinguistic profiling and threat assessment”, was published in the Journal of Language Aggression and Conflict.
5 comments
I would like to see AI do a similar analysis on Dutton and Trump.
“ChatGPT also identified clues to motivations of violence, including desire for retribution” – let me guess, ChatGPT blew a gasket when statements from the Zionist IDF were given for analysis. GIGO. Rubbish programming of AI will give rubbish results.
The way things are going with AI, expression of any anti-govt sentiment means you will be labelled a terrorist. Lucky we have a MAD Bill in the wings so dissidents can be advised of their wrong-think and be given an opportunity to do some online re-education at home. If that fails then there are always the unused Quarantine Camps that can be repurposed. That works in China, why not here?
More of the surveillance state in the hands of the feckless Big Tech profiteers and their infantile minions. Such surveillance and linguistic jiggery-pokery serves to entrench state paranoia, and with it oppression, and with all that a concomitant cycle of backlash.
Thinking that it won’t become a predominant tool of criminals, to me, is wrong thinking. That process is already well underway.
I wonder what the outcome would be if the Netanyahu emails from before October 7 were analysed, before the invasion of Gaza and the genocide of Palestinians commenced.
Very, very wary of this tech. Like most new ideas, it will doubtless be used for the wrong purposes.