From Data Poisoning to AI Agents: The Next Wave of Cyber Threats
As artificial intelligence (AI) tools proliferate across industries, they offer organizations unprecedented opportunities for efficiency and growth. This rapid adoption, however, also brings a surge in cyber risk: while AI can enhance security, it is simultaneously being exploited by cybercriminals.
The World Economic Forum reported a staggering 223% increase in generative AI applications on the dark web between 2023 and 2024. Additionally, a March 2025 survey by cybersecurity training firm SoSafe found that 87% of security professionals had encountered AI-powered attacks within their organizations.
Adversarial Machine Learning: Attacks on AI Models
One significant concern in AI security is adversarial machine learning: cyberattacks that target AI models across their life cycle, from training through deployment. Two notable forms of these attacks are poisoning attacks and privacy attacks.
Poisoning Attacks
Data poisoning attempts to disrupt an AI model's outputs by manipulating its training data. This can happen actively, when an attacker inserts manipulated samples into a training dataset, or passively, when poisoned data is ingested without the developer's knowledge, for example through web scraping.
For instance, in 2023, researchers released a tool that embeds imperceptible perturbations into digital artwork. The alterations are virtually undetectable to the eye, yet if the images are swept into AI training data, they can significantly degrade model performance. Researchers have also demonstrated that poisoning just 0.01% of a popular web-scale training dataset can cost as little as $60.
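To make the mechanics concrete, here is a minimal sketch of the simplest form of poisoning, label flipping, written in Python and assuming scikit-learn is installed. The dataset and poisoning rates are illustrative and are not drawn from the studies above.

```python
# Minimal illustration of a label-flipping poisoning attack.
# Assumes scikit-learn; the dataset and poisoning rates are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for a real training set.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(poison_rate: float) -> float:
    """Flip the labels of `poison_rate` of the training set, then train."""
    y_poisoned = y_train.copy()
    n_poison = int(poison_rate * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # attacker flips these labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for rate in (0.0, 0.01, 0.05, 0.20):
    print(f"poison rate {rate:>4.0%}: test accuracy {accuracy_with_poisoning(rate):.3f}")
```

Running this shows test accuracy eroding as the poisoned fraction grows; real attacks are far subtler, targeting specific inputs rather than overall accuracy.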
Two critical features make poisoning attacks particularly concerning:
- Vulnerability across model types – Various models, including natural language processing and machine vision systems, can be compromised.
- Concentration of training data – Most developers depend on a limited number of datasets; targeting these can affect a wide array of AI models.
Federated learning, in which multiple organizations collaboratively train a model while each retains control of its own data, is also at risk: if even one participant is compromised, the shared model can become corrupted.
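A toy sketch of that failure mode, using plain NumPy (the federated-averaging setup and the attacker's update are illustrative assumptions, not a real deployment):

```python
# Toy federated averaging: one compromised participant skews the global model.
# Pure NumPy; the setup and the attacker's update are illustrative.
import numpy as np

rng = np.random.default_rng(1)
true_weights = np.array([1.0, -2.0, 0.5])  # what honest training converges toward

# Nine honest clients each submit an update near the true weights...
honest_updates = [true_weights + rng.normal(0, 0.05, 3) for _ in range(9)]

# ...while one compromised client submits a large, adversarial update.
malicious_update = np.array([50.0, 50.0, 50.0])

# Naive federated averaging treats every participant's update equally.
global_model = np.mean(honest_updates + [malicious_update], axis=0)

print("honest consensus:", np.round(np.mean(honest_updates, axis=0), 2))
print("poisoned global model:", np.round(global_model, 2))
```

Robust aggregation rules, such as a coordinate-wise median, blunt this particular attack, which is why vetting both the participants and the aggregation scheme matters.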
Privacy Attacks
Privacy attacks target models that have already been trained and deployed. They can extract sensitive training data, reveal details of how a model works, or even replicate the model itself. The implications are severe, as AI models often encode personally identifiable information, intellectual property, and trade secrets.
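A classic example is membership inference: deciding whether a particular record was used to train a model. The sketch below (Python with scikit-learn; the data, model, and threshold rule are illustrative assumptions) exploits the fact that overfit models tend to be more confident on records they were trained on.

```python
# Toy membership inference via a confidence threshold.
# Assumes scikit-learn; the data and decision rule are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# An overfit model is more confident on its own training data ("members").
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_in, y_in)

def confidence(model, X, y):
    """Model's predicted probability for the true label of each record."""
    probs = model.predict_proba(X)
    return probs[np.arange(len(y)), y]

conf_members = confidence(model, X_in, y_in)       # records seen in training
conf_nonmembers = confidence(model, X_out, y_out)  # records never seen

# Attacker's rule: guess "member" when confidence exceeds a threshold.
threshold = 0.9
tpr = np.mean(conf_members > threshold)     # members correctly flagged
fpr = np.mean(conf_nonmembers > threshold)  # non-members wrongly flagged
print(f"flagged as members: {tpr:.0%} of members vs {fpr:.0%} of non-members")
```

The gap between those two rates is what lets an attacker infer, record by record, whether sensitive data was in the training set.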
One significant risk is data seepage, which occurs when AI systems inadvertently expose sensitive information. For example, in 2023, a transcription tool mistakenly distributed confidential meeting notes to unintended participants, highlighting the potential for human error in AI interactions.
Corporate governance around AI remains uneven, with only 37% of organizations having any AI governance in place, so incidents of data seepage are likely to become more common.
The Rise of AI Agents and Associated Cyber Risks
The emergence of AI agents, systems that extend the capabilities of large language models, introduces another layer of risk. These agents operate autonomously, performing tasks such as browsing the web, accessing datasets, and executing code.
While AI agents can perform complex functions, their autonomy carries significant risks: errors and hallucinations can expose information, and the more freedom an agent is granted, the greater the risk of misuse or a data breach. There is also a danger that attackers could impersonate AI agents inside corporate systems, further complicating defenses.
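One common mitigation is to deny agents open-ended access and instead route every action through an explicit allowlist with an audit trail. A minimal sketch in plain Python follows; the tool names and policy are hypothetical and not taken from any specific agent framework.

```python
# Minimal allowlist gate for AI-agent tool calls.
# Tool names and policy are hypothetical; real frameworks differ.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-gate")

ALLOWED_TOOLS = {"search_docs", "read_dataset"}  # no code execution, no open web

def run_tool(tool_name: str, args: dict) -> str:
    # Stub standing in for the real tool implementations.
    return f"{tool_name} executed with {args}"

def dispatch(tool_name: str, args: dict) -> str:
    """Execute a tool call only if it is explicitly allowlisted; log everything."""
    if tool_name not in ALLOWED_TOOLS:
        log.warning("blocked tool call: %s %s", tool_name, args)
        return f"ERROR: tool '{tool_name}' is not permitted"
    log.info("allowed tool call: %s %s", tool_name, args)
    return run_tool(tool_name, args)

# An agent that "decides" to execute arbitrary code is stopped at the gate.
print(dispatch("read_dataset", {"name": "q3_claims"}))
print(dispatch("execute_code", {"source": "import os"}))
```

The audit log also makes impersonation easier to detect, since every action attributed to an agent leaves a reviewable trace.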
Strategies for Brokers to Address AI Risks
As these cyber risks evolve, clients will increasingly seek guidance from brokers on managing AI-related exposures. Here are some practical recommendations for brokers:
- Inquire about AI use cases – Encourage clients to identify where AI is utilized within their organization to understand potential exposure points.
- Examine governance and controls – Help clients recognize the importance of formal oversight, including acceptable-use policies and employee training.
- Highlight supply chain risks – Discuss the dependency on third-party datasets and open-source models, ensuring proper vetting and monitoring controls are in place.
- Review cyber policy language – Standard cyber policies may not adequately address risks associated with adversarial machine learning, AI agents, or data seepage.
- Position risk management as a differentiator – Clients demonstrating strong AI governance and data hygiene are likely to receive better insurance terms and pricing.
As the landscape of cyber threats continues to evolve with the advancement of AI, it is essential for organizations to stay informed and proactive in their risk management strategies.