Emerging Cyber Threats: AI Risks and Solutions for Brokers

From Data Poisoning to AI Agents: The Next Wave of Cyber Threats

As artificial intelligence (AI) tools proliferate across various industries, they provide organizations with unprecedented opportunities for efficiency and growth. However, this rapid adoption also brings a surge in cyber risks. While AI can enhance security, it is simultaneously being exploited by cybercriminals.

The World Economic Forum reported a staggering 223% increase in the trade of generative AI tools on the dark web between 2023 and 2024. Additionally, in a March 2025 survey by cybersecurity training firm SoSafe, 87% of security professionals reported encountering AI-powered attacks within their organizations.

Adversarial Machine Learning: Attacks on AI Models

One significant concern in the realm of AI security is adversarial machine learning, which encompasses cyberattacks that target AI models across their life cycle, from training through deployment. Two notable forms of these attacks are poisoning attacks and privacy attacks.

Poisoning Attacks

Data poisoning is an attempt to corrupt an AI model's outputs by manipulating the data used to train it. This can be done actively, by inserting corrupted files into a training dataset, or passively, when poisoned data published elsewhere is unknowingly scraped into one.

For instance, in 2023, researchers created a tool that embeds minute amounts of corrupted data in digital artwork. The alterations are virtually undetectable to the human eye, yet if the images are swept into AI training, they can significantly degrade model performance. In a separate demonstration, researchers showed that poisoning just 0.01% of a popular web-scale training dataset could cost as little as $60.
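
To make the mechanism concrete, the sketch below simulates a simple label-flipping poisoning attack in Python with scikit-learn. The dataset, model, and flipped fraction are all illustrative assumptions, and the fraction is set far above the 0.01% cited above so the damage is visible on a small synthetic dataset.

    # Hypothetical label-flipping poisoning sketch; dataset and model are
    # illustrative, and the flip rate is exaggerated for visibility.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Targeted attack: relabel a chunk of class-1 training examples as
    # class 0, so the model systematically under-predicts class 1.
    rng = np.random.default_rng(0)
    ones = np.where(y_train == 1)[0]
    poison_idx = rng.choice(ones, size=int(0.4 * len(ones)), replace=False)
    y_poisoned = y_train.copy()
    y_poisoned[poison_idx] = 0

    poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

    print("clean model accuracy:   ", clean.score(X_test, y_test))
    print("poisoned model accuracy:", poisoned.score(X_test, y_test))

In real incidents the poisoned fraction is far smaller, but the effect compounds when, as noted below, many developers train on the same few datasets.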

Two critical features make poisoning attacks particularly concerning:

  1. Vulnerability across model types – Various models, including natural language processing and computer vision systems, can be compromised.
  2. Concentration of training data – Most developers depend on a limited number of datasets; targeting these can affect a wide array of AI models.

Federated learning, where multiple organizations collaboratively train a model while each retains control of its own data, is also at risk. If even one participant is compromised, the shared model can become corrupted.
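
A minimal sketch of that failure mode, assuming a toy federated-averaging setup in which the model's weights are reduced to a NumPy vector (the names and update rule are illustrative, not drawn from any specific federated learning framework):

    import numpy as np

    def local_update(global_w, local_grad, lr=0.1):
        # Each participant nudges the shared weights toward its own data.
        return global_w - lr * local_grad

    honest_grads = [np.array([0.10, -0.20]), np.array([0.12, -0.18])]
    malicious_grad = np.array([-50.0, 50.0])  # one compromised participant

    global_w = np.zeros(2)
    updates = [local_update(global_w, g) for g in honest_grads + [malicious_grad]]

    # The server averages all updates, so a single outsized contribution
    # drags the shared model far from what honest participants intend.
    global_w = np.mean(updates, axis=0)
    print(global_w)

Defenses such as clipping updates or using robust aggregation rules (medians rather than means, for example) exist, but they have to be designed in from the start.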

Privacy Attacks

Privacy attacks target models that have already been trained and deployed. These attacks can extract sensitive training data, reveal how a model functions, or even replicate the model itself. The implications are severe, as AI models are often trained on personally identifiable information, intellectual property, and trade secrets.
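
One well-known privacy attack is membership inference, in which an attacker queries a deployed model and guesses whether a given record was in its training set. The sketch below, which assumes an overfit classifier and a simple confidence threshold, illustrates the idea; real attacks are considerably more sophisticated.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
    X_member, X_outside, y_member, y_outside = train_test_split(
        X, y, test_size=0.5, random_state=1)

    # An overfit model is unusually confident on its own training data.
    model = RandomForestClassifier(n_estimators=50, random_state=1)
    model.fit(X_member, y_member)

    def flagged_as_member(samples, threshold=0.95):
        # Guess "member" whenever the model's top predicted probability
        # exceeds the threshold.
        return model.predict_proba(samples).max(axis=1) >= threshold

    print("flagged among training records:", flagged_as_member(X_member).mean())
    print("flagged among unseen records:  ", flagged_as_member(X_outside).mean())

The gap between the two rates is the leak: it tells the attacker which records the model has already seen.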

One significant risk is data seepage, which occurs when AI systems inadvertently expose sensitive information. For example, in 2023, a transcription tool mistakenly distributed confidential meeting notes to unintended participants, a reminder that ordinary human error in AI interactions can be as damaging as a deliberate attack.

Corporate governance around AI is uneven, with only 37% of organizations reporting any formal AI governance in place. As a result, incidents of data seepage are expected to become more common.

The Rise of AI Agents and Associated Cyber Risks

The emergence of AI agents, systems that extend the capabilities of large language models, introduces another layer of risk. These agents operate autonomously, performing tasks such as browsing the web, accessing datasets, and executing code.

While AI agents can perform complex functions, their autonomy raises significant risks, including the potential for errors and hallucinations that can lead to information exposure. The more freedom granted to an AI agent, the greater the risk of misuse or data breach. Additionally, there is a danger that hackers could impersonate AI agents within corporate systems, further complicating security measures.
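
One practical mitigation is to grant an agent the narrowest possible set of tools and deny everything else by default. The sketch below shows a hypothetical least-privilege dispatcher; the tool names and executor are placeholders, not part of any real agent framework.

    # Hypothetical deny-by-default guard for an AI agent's tool calls.
    ALLOWED_TOOLS = {"search_docs", "summarize"}  # low-risk, read-only actions

    def run_tool(tool_name, payload):
        # Stub executor; a real system would invoke the actual tool here.
        return f"ran {tool_name} with {payload}"

    def dispatch(tool_name, payload):
        # Anything not explicitly allowlisted is refused (and could be
        # logged for human review), limiting what a misbehaving or
        # impersonated agent can actually do.
        if tool_name not in ALLOWED_TOOLS:
            raise PermissionError(f"blocked tool request: {tool_name}")
        return run_tool(tool_name, payload)

    print(dispatch("search_docs", {"query": "policy renewal terms"}))
    # dispatch("execute_code", {"cmd": "rm -rf /"}) would raise PermissionError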

Strategies for Brokers to Address AI Risks

As these cyber risks evolve, clients will increasingly seek guidance from brokers on managing AI-related exposures. Here are some practical recommendations for brokers:

  • Inquire about AI use cases – Encourage clients to identify where AI is utilized within their organization to understand potential exposure points.
  • Examine governance and controls – Help clients recognize the importance of formal oversight, including acceptable-use policies and employee training.
  • Highlight supply chain risks – Discuss the dependency on third-party datasets and open-source models, ensuring proper vetting and monitoring controls are in place.
  • Review cyber policy language – Standard cyber policies may not adequately address risks associated with adversarial machine learning, AI agents, or data seepage.
  • Position risk management as a differentiator – Clients demonstrating strong AI governance and data hygiene are likely to receive better insurance terms and pricing.

As the landscape of cyber threats continues to evolve with the advancement of AI, it is essential for organizations to stay informed and proactive in their risk management strategies.
