Abstract
The growing application of Artificial Intelligence (AI) offers an opportunity to address a range of social, economic, and environmental challenges. To fully exploit these technologies, the security of AI itself must be ensured. Because AI models are susceptible to sophisticated attack techniques, academics have recently begun to concentrate on adversarial AI, with the goal of building robust Machine Learning (ML) and deep learning models that can withstand a variety of adversarial attacks. This chapter presents a thorough analysis of cyber security in the context of AI applications, covering topics such as adversarial knowledge, attack strategies, and defensive models. It also discusses mathematical models for AI, with an emphasis on recent variants of federated learning and reinforcement learning, and shows how potential attack routes can expose flaws in AI systems. It then explores several cyber-protection approaches against such attacks and presents an organized, systematic methodology for attacking AI-enabled applications. Adaptive defenses must be developed to protect the security of AI applications, especially in light of recent attacks on commercial applications. Finally, this chapter reviews the major open issues and promising directions for future research on the security and privacy of AI technologies.
Original language | English |
---|---|
Title of host publication | Next Generation AI Language Models in Research |
Subtitle of host publication | Promising Perspectives and Valid Concerns |
Publisher | CRC Press |
Pages | 293-325 |
Number of pages | 33 |
ISBN (Electronic) | 9781040157329 |
ISBN (Print) | 9781032667935 |
DOIs | |
Publication status | Published - 1 Jan 2024 |