Security and Privacy Concerns in AI Models

Muhammad Muneer, Faisal Rehman, Muhammad Hamza Sajjad, Muhammad Anwar, Kashif Naseer Qureshi

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

The growing application of Artificial Intelligence (AI) offers an opportunity to address several social, economic, and environmental challenges. To fully realize this potential, the security of AI technologies must be ensured. Researchers have recently turned their attention to adversarial AI, since AI models are susceptible to sophisticated attack techniques. The objective is to build robust Machine Learning (ML) and deep learning models that can withstand a variety of adversarial attacks. In the context of AI applications, this chapter presents a thorough analysis of cyber security, covering topics such as adversarial knowledge, attack strategies, and defensive models. The chapter also discusses mathematical models for AI, with emphasis on recent variants of federated learning and reinforcement learning, and shows how attack vectors can be used to expose flaws in AI systems. It explores several cyber-protection approaches against such attacks and presents an organized, systematic methodology for attacking AI-based applications. Adaptive defenses must be developed to secure AI applications, especially in light of recent attacks on commercial applications. Finally, the chapter reviews the major open issues and potential directions for future research on the security and privacy of AI technologies.
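
To illustrate the kind of adversarial attack the abstract refers to, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM), a standard gradient-based evasion attack. The toy model, input shape, and epsilon value are illustrative assumptions, not code from the chapter.

```python
# Minimal FGSM sketch: perturb an input in the direction that increases the loss.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier standing in for any vulnerable AI model (assumed, not from the chapter).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20, requires_grad=True)   # clean input
y = torch.tensor([1])                        # true label

# Forward/backward pass to obtain the gradient of the loss w.r.t. the input.
loss = loss_fn(model(x), y)
loss.backward()

# FGSM step: add a small perturbation along the sign of the input gradient.
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

Defenses surveyed in work of this kind, such as adversarial training, typically reuse exactly this perturbation step to augment the training data.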

Original language: English
Title of host publication: Next Generation AI Language Models in Research
Subtitle of host publication: Promising Perspectives and Valid Concerns
Publisher: CRC Press
Pages: 293-325
Number of pages: 33
ISBN (Electronic): 9781040157329
ISBN (Print): 9781032667935
DOIs
Publication status: Published - 1 Jan 2024
