Prediction of Pre-Radicalism Leading to Hate Speech in Social Media Accounts Using Machine Learning and Intelligence Data Gathering Frameworks
DOI: https://doi.org/10.69980/ajpr.v28i1.151
Keywords: Prediction, Pre-Radicalism, Hate Speech, Machine Learning, BERT, Model
Abstract
The spread of extremist content on social media has raised concerns about radicalization occurring through online means. Digital platforms benefit extremist organizations by helping them recruit new members, coordinate their activities, and distribute propaganda. The designed system uses BERT (Bidirectional Encoder Representations from Transformers) to detect indicators of pre-radicalization, so that the onset of extremist behavior and hateful speech can be identified early. The system draws on Twitter data collected through intelligence data gathering frameworks and undergoes continuous retraining to improve its operational efficiency and adaptability. Preprocessing begins with text normalization, followed by tokenization and feature extraction. The model is evaluated using precision, recall, accuracy, and F1-score. The deep learning approach identifies early warning signs of radicalization more effectively than traditional systems. Because AI systems for radicalization detection require continuous training, their detection capabilities improve over time, enabling applications in counterterrorism operations, programs addressing political extremism, and online hate speech monitoring.
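The preprocessing and evaluation steps described above can be illustrated with a minimal sketch. This is not the paper's implementation: the full system tokenizes with a BERT tokenizer and classifies with a fine-tuned BERT model, while the sketch below covers only the text-normalization step and the stated evaluation metrics (precision, recall, accuracy, F1) using scikit-learn; the example labels are hypothetical.

```python
import re

from sklearn.metrics import accuracy_score, precision_recall_fscore_support


def normalize(text: str) -> str:
    """Text normalization for tweets: lowercase, strip URLs, @mentions, whitespace."""
    text = text.lower()
    text = re.sub(r"https?://\S+", "", text)  # remove URLs
    text = re.sub(r"@\w+", "", text)          # remove @mentions
    return re.sub(r"\s+", " ", text).strip()


# Hypothetical gold labels and model predictions:
# 1 = pre-radicalization indicators present, 0 = benign.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary"
)
accuracy = accuracy_score(y_true, y_pred)
print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")
```

In the full pipeline, the normalized text would be passed to a BERT tokenizer for subword tokenization and feature extraction before classification; the metrics above are then computed on a held-out test set after each retraining cycle.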
License
Copyright (c) 2025 American Journal of Psychiatric Rehabilitation

This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.