Emerging Cyber Threats: AI Risks and Solutions for Brokers

From Data Poisoning to AI Agents: The Next Wave of Cyber Threats

As artificial intelligence (AI) tools proliferate across various industries, they provide organizations with unprecedented opportunities for efficiency and growth. However, this rapid adoption also brings a surge in cyber risks. While AI can enhance security, it is simultaneously being exploited by cybercriminals.

The World Economic Forum reported a staggering 223% increase in generative AI applications on the dark web between 2023 and 2024. Additionally, a March 2025 survey by cybersecurity training firm SoSafe found that 87% of security professionals had encountered AI-powered attacks within their organizations.

Adversarial Machine Learning: Attacks on AI Models

One significant concern in AI security is adversarial machine learning, a class of cyberattacks that target AI models throughout their life cycle, from training through deployment. Two notable forms of these attacks are poisoning attacks and privacy attacks.

Poisoning Attacks

Data poisoning is an attempt to corrupt an AI model's outputs by manipulating the data used to train it. Attackers can do this actively, by inserting corrupted files directly into a training dataset, or passively, by seeding poisoned data into public sources that are later scraped into training sets unnoticed.

For instance, in 2023, researchers released a tool that embeds minute amounts of corrupted data into digital artwork. The alterations are virtually undetectable to the human eye, yet if the images are swept into AI training, they can significantly degrade model performance. In a separate study, researchers demonstrated that they could poison 0.01% of a popular web-scale training dataset for as little as $60.
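To make the mechanism concrete, here is a minimal sketch of a label-flipping poisoning attack using scikit-learn. It is a toy under simplified assumptions: real-world poisoning is far more targeted and subtle, but the pattern is the same. Corrupt a slice of the training data, retrain, and measure the damage.

```python
# A toy label-flipping poisoning demo (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real training corpus.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for rate in (0.0, 0.05, 0.20):
    # Flip the labels of a `rate` fraction of training records, mimicking
    # corrupted samples that slipped into a scraped dataset.
    y_poisoned = y_train.copy()
    idx = rng.choice(len(y_train), size=int(rate * len(y_train)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]

    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    print(f"poison rate {rate:4.0%} -> test accuracy {model.score(X_test, y_test):.3f}")
```

Even random flips move the needle in this toy setting; carefully crafted poison samples can do far more damage with far less data, which is why the 0.01% result above is so alarming.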

Two critical features make poisoning attacks particularly concerning:

  1. Vulnerability across model types – Various models, including natural language processing and machine vision systems, can be compromised.
  2. Concentration of training data – Most developers depend on a limited number of datasets; targeting these can affect a wide array of AI models.

Federated learning, in which multiple organizations collaboratively train a shared model while each retains control of its own data, is also at risk: if even one participant is compromised, the shared model can be corrupted, as the sketch below illustrates.
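The following toy NumPy example shows why. Plain federated averaging trusts every participant's update equally, so a single malicious client submitting a scaled, inverted update can drag the shared model off course. All numbers and update rules here are illustrative, not drawn from any real deployment.

```python
# A toy federated-averaging round in NumPy. Three honest clients and one
# compromised client; all update rules and numbers are illustrative.
import numpy as np

global_weights = np.zeros(4)
target = np.ones(4)  # the "correct" model the honest clients converge toward

def honest_update(w):
    # Honest clients nudge the model a small step toward the target.
    return w + 0.1 * (target - w)

def malicious_update(w):
    # The compromised client pushes hard in the opposite direction.
    return w - 5.0 * (target - w)

updates = [honest_update(global_weights) for _ in range(3)]
updates.append(malicious_update(global_weights))

# Plain federated averaging weighs every participant equally, so one bad
# update is enough to drag the shared model far off course.
global_weights = np.mean(updates, axis=0)
print(global_weights)  # ~[-1.175 ...] instead of creeping toward [1, 1, 1, 1]
```

Robust aggregation schemes such as coordinate-wise medians or trimmed means exist precisely because naive averaging is this fragile.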

Privacy Attacks

Privacy attacks target models that have already been trained and deployed. They can extract sensitive training data, reveal how a model works, or even replicate the model itself. The implications are severe, as AI models often encode personally identifiable information, intellectual property, and trade secrets.
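One of the simplest privacy attacks is membership inference: determining whether a specific record was in a model's training set, which alone can expose sensitive facts (for example, presence in a medical dataset). Below is a minimal sketch of the core intuition using scikit-learn; the model is deliberately overfit so the signal is easy to see.

```python
# A toy membership-inference probe (illustrative only): the model is
# deliberately overfit so the training-membership signal is easy to see.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
# "Members" were used for training; "non-members" were not.
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=1)

model = RandomForestClassifier(n_estimators=50, random_state=1).fit(X_mem, y_mem)

def true_label_confidence(X, y):
    # Probability the model assigns to each record's true label.
    probs = model.predict_proba(X)
    return probs[np.arange(len(y)), y]

print("avg confidence, training members:", true_label_confidence(X_mem, y_mem).mean())
print("avg confidence, non-members:     ", true_label_confidence(X_non, y_non).mean())
# The gap between these two numbers is the leakage an attacker thresholds on
# to guess whether a specific record was in the training set.
```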

One significant risk is data seepage, which occurs when AI systems inadvertently expose sensitive information. For example, in 2023, a transcription tool mistakenly distributed confidential meeting notes to unintended recipients, highlighting how easily sensitive information can escape through routine AI interactions.

Corporate governance around AI remains uneven, with only 37% of organizations reporting any AI governance in place. As a result, incidents of data seepage are likely to become more common.

The Rise of AI Agents and Associated Cyber Risks

The emergence of AI agents, systems that extend the capabilities of large language models by acting autonomously, introduces another layer of risk. These agents can browse the web, access datasets, and execute code with little or no human oversight.

While AI agents can perform complex functions, their autonomy carries significant risks, including errors and hallucinations that can expose sensitive information. The more freedom an agent is granted, the greater the risk of misuse or data breach. There is also a danger that hackers could impersonate AI agents within corporate systems, further complicating security. One common mitigation is to grant agents tools on a deny-by-default basis, as sketched below.
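The snippet below sketches that deny-by-default idea in plain Python. The ToolGate class and the tool names are hypothetical, invented for this illustration; real agent frameworks implement permissioning differently, but the principle of an explicit allowlist is the same.

```python
# A deny-by-default permission gate for agent tool calls. ToolGate and the
# tool names are hypothetical, invented for this sketch.
from typing import Callable

class ToolGate:
    def __init__(self, allowed: set[str]) -> None:
        self.allowed = allowed
        self.tools: dict[str, Callable[[str], str]] = {}

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        self.tools[name] = fn

    def call(self, name: str, arg: str) -> str:
        # Refuse anything not explicitly granted, whether the request came
        # from a hallucinating model or a hijacked session.
        if name not in self.allowed or name not in self.tools:
            return f"DENIED: tool '{name}' is not permitted for this agent"
        return self.tools[name](arg)

gate = ToolGate(allowed={"search_docs"})
gate.register("search_docs", lambda q: f"results for {q!r}")
gate.register("run_shell", lambda cmd: "(should never execute)")

print(gate.call("search_docs", "policy renewal terms"))  # permitted
print(gate.call("run_shell", "rm -rf /"))                # refused
```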

Strategies for Brokers to Address AI Risks

As these cyber risks evolve, clients will increasingly seek guidance from brokers on managing AI-related exposures. Here are some practical recommendations for brokers:

  • Inquire about AI use cases – Encourage clients to identify where AI is utilized within their organization to understand potential exposure points.
  • Examine governance and controls – Help clients recognize the importance of formal oversight, including acceptable-use policies and employee training.
  • Highlight supply chain risks – Discuss the dependency on third-party datasets and open-source models, ensuring proper vetting and monitoring controls are in place.
  • Review cyber policy language – Standard cyber policies may not adequately address risks associated with adversarial machine learning, AI agents, or data seepage.
  • Position risk management as a differentiator – Clients demonstrating strong AI governance and data hygiene are likely to receive better insurance terms and pricing.

As the landscape of cyber threats continues to evolve with the advancement of AI, it is essential for organizations to stay informed and proactive in their risk management strategies.
