Evaluating the Efficiency of AI-Driven Person Identification Systems for Enhancing Campus Security: A Comparative Study of Real-Time Surveillance Technologies

Authors

  • Sonia Victor Soans
  • Soumya Suvarna
  • Sufola Das Chagas Silva E Araujo

DOI:

https://doi.org/10.69980/ajpr.v28i4.332

Keywords:

AI-driven, Campus security, Real-time surveillance, Facial recognition, Gait analysis, Behavior detection, Data privacy, Algorithmic bias

Abstract

INTRODUCTION:

The rapid advancement of artificial intelligence (AI) has fundamentally altered surveillance systems, particularly those employed for campus security. Because campuses are dynamic environments with large populations, they face challenges that traditional security methods cannot effectively address. AI-driven person identification systems offer solutions through proactive threat detection, improved accuracy, and real-time monitoring. By analyzing large volumes of data swiftly and efficiently, these technologies can significantly improve the safety and security of educational institutions. Notwithstanding their potential, these systems raise ethical concerns about bias and data privacy, so they must be deployed carefully and responsibly. This paper compares the benefits, drawbacks, and ethical issues of several AI-powered monitoring systems. By addressing these problems, the study aims to provide insight into how AI can be applied effectively to campus security while upholding transparency and trust.

OBJECTIVES:

  1. To assess how effectively AI-powered person identification systems perform in real-time surveillance.
  2. To evaluate the scalability, accuracy, and speed of various AI-based surveillance solutions.
  3. To assess these systems' effects on campus privacy and security.
  4. To determine the most effective ways to deploy AI-powered monitoring in academic settings.
  5. To investigate ethical considerations, including data management and algorithmic bias mitigation.

METHODOLOGY:

This study employed a mixed-methods approach combining quantitative and qualitative analysis. Data on a variety of factors, including lighting, population density, and potential security breaches, were collected over six months from camera footage and access logs at multiple campuses. The study evaluated three AI-powered systems: facial recognition, gait analysis, and behavior detection. Performance metrics such as processing speed, scalability, accuracy, and false positive/negative rates were used to assess each system's efficiency. Campus security personnel and students provided qualitative observations through online surveys and structured interviews that addressed privacy concerns and documented user experiences. A comparative framework focusing on both operational outcomes and ethical issues was developed to systematically evaluate the usability and effectiveness of these systems.
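To make the quantitative metrics concrete, the sketch below shows one minimal way such figures could be computed from labeled, per-frame outputs. It is illustrative only and assumes hypothetical names (FrameResult, evaluate) and a simple binary ground-truth labeling; it is not the study's actual evaluation tooling.

```python
from dataclasses import dataclass

@dataclass
class FrameResult:
    """Outcome of one surveillance frame, judged against ground truth (hypothetical structure)."""
    predicted_positive: bool   # system flagged a person/anomaly in this frame
    actually_positive: bool    # ground-truth label for the frame
    latency_s: float           # processing time for this frame, in seconds

def evaluate(results: list[FrameResult]) -> dict:
    """Compute the metrics named above: accuracy, false positive/negative rates,
    and mean per-frame processing time."""
    tp = sum(r.predicted_positive and r.actually_positive for r in results)
    tn = sum(not r.predicted_positive and not r.actually_positive for r in results)
    fp = sum(r.predicted_positive and not r.actually_positive for r in results)
    fn = sum(not r.predicted_positive and r.actually_positive for r in results)
    total = len(results)
    return {
        "accuracy": (tp + tn) / total,
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
        "false_negative_rate": fn / (fn + tp) if (fn + tp) else 0.0,
        "mean_latency_s": sum(r.latency_s for r in results) / total,
    }
```

The same function could be run once per system (facial recognition, gait analysis, behavior detection), which is what makes the side-by-side comparison in the results possible.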

RESULTS:

The study found that AI-powered person identification systems significantly improved campus security. Under ideal lighting conditions, facial recognition accuracy averaged 95%; under low-light conditions, it fell to 78%. Gait analysis proved useful in difficult conditions, consistently maintaining an accuracy of 85%. Behavior detection systems identified anomalies with 88% accuracy, although they occasionally flagged non-threatening actions. Facial recognition had the fastest processing speed at 0.6 seconds per frame, followed by gait analysis at 0.8 seconds and behavior detection at 1.2 seconds. Feedback emphasized how easily facial recognition systems integrated with existing infrastructure, while behavior detection technologies raised questions about consent and transparency.
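For quick side-by-side reading, the snippet below simply restates the headline figures reported in this section (facial recognition accuracy is the ideal-lighting value) and ranks the systems by per-frame latency; the variable names are illustrative and not part of the study's tooling.

```python
# Headline figures reported above; accuracies as fractions, latency in seconds per frame.
reported = {
    "facial_recognition": {"accuracy": 0.95, "low_light_accuracy": 0.78, "latency_s": 0.6},
    "gait_analysis":      {"accuracy": 0.85, "latency_s": 0.8},
    "behavior_detection": {"accuracy": 0.88, "latency_s": 1.2},
}

# Print the systems fastest-first to highlight the speed/accuracy trade-off.
for name, stats in sorted(reported.items(), key=lambda kv: kv[1]["latency_s"]):
    print(f"{name}: accuracy={stats['accuracy']:.0%}, latency={stats['latency_s']}s/frame")
```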

Furthermore, 85% of survey participants supported anonymization methods to safeguard personal data and underlined the importance of explicit standards for data usage. Ethical issues around data privacy and AI bias highlighted the need for strong governance and privacy-preserving measures. Although the systems improved security overall, further refinement is needed to adequately address these ethical concerns.
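One privacy-preserving measure consistent with this preference is pseudonymizing identities before they are written to surveillance logs. The sketch below is a minimal illustration, assuming a hypothetical pseudonymize helper and an institution-held secret key; it is not the anonymization scheme evaluated in the study.

```python
import hashlib
import hmac

def pseudonymize(person_id: str, secret_key: bytes) -> str:
    """Replace a raw identifier with a keyed one-way hash so log entries can be
    linked per person without exposing who that person is."""
    return hmac.new(secret_key, person_id.encode("utf-8"), hashlib.sha256).hexdigest()

# Example: a hypothetical access-log entry stored with a pseudonymous token.
SECRET = b"institution-held-secret"  # in practice, kept in a key-management service
log_entry = {
    "camera": "gate-03",
    "timestamp": "2024-11-02T08:15:00",
    "person_token": pseudonymize("student-12345", SECRET),
}
print(log_entry)
```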


CONCLUSION:

Because AI-driven person identification systems improve efficiency, accuracy, and real-time monitoring capabilities, they have the potential to transform campus security. Nonetheless, the study emphasizes how critical it is to address operational and ethical issues such as integration costs, system bias, and data privacy. While gait analysis provides a less invasive option with reliable performance, facial recognition technologies, despite their high accuracy, draw criticism over data exploitation and privacy. Behavior detection systems excel at proactively identifying threats but require greater transparency and user consent. To preserve confidence and system integrity going forward, institutions must establish transparent governance frameworks, apply privacy-preserving AI strategies, and ensure ongoing monitoring.

By leveraging these technologies' strengths while addressing their drawbacks, institutions can create safer and more secure environments for their students and staff.

Author Biographies

Sonia Victor Soans

Research Scholar, Institute of Computer and Information Sciences, Srinivas University, Mangalore, Karnataka, India. ORCID iD: 0000-0002-4964-1197

Soumya Suvarna

Professor, Institute of Computer and Information Sciences, Srinivas University, Mangalore, Karnataka, India. ORCID iD: 0000-0002-5431-1977

Sufola Das Chagas Silva E Araujo

Assistant Professor, Computer Science and Engineering, Padre Conceicao College of Engineering, Goa, India. ORCID iD: 0000-0003-4933-9761



Published

2025-05-12