Emerging Cyber Threats: AI Risks and Solutions for Brokers

From Data Poisoning to AI Agents: The Next Wave of Cyber Threats

As artificial intelligence (AI) tools proliferate across various industries, they provide organizations with unprecedented opportunities for efficiency and growth. However, this rapid adoption also brings a surge in cyber risks. While AI can enhance security, it is simultaneously being exploited by cybercriminals.

The World Economic Forum reported a staggering 223% increase in generative AI applications on the dark web between 2023 and 2024. Additionally, a March 2025 survey by cybersecurity training firm SoSafe found that 87% of security professionals had encountered AI-powered attacks within their organizations.

Adversarial Machine Learning: Attacks on AI Models

One significant concern in the realm of AI security is adversarial machine learning, which encompasses cyberattacks targeting AI models throughout their development stages. Two notable forms of these attacks are poisoning attacks and privacy attacks.

Poisoning Attacks

Data poisoning involves attempts to disrupt an AI model’s outputs by manipulating the data used for training. This can be done actively, by inserting corrupted files into a training dataset, or passively, when poisoned data published elsewhere is unknowingly swept into one.

For instance, in 2023, researchers created a tool that embeds minute amounts of corrupted data into digital artwork. The alterations are virtually undetectable to the human eye, yet if the images are swept into AI training sets, they can significantly degrade model performance. In separate work, researchers demonstrated that they could poison 0.01% of a popular training dataset for as little as $60.
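
To make the mechanics concrete, here is a minimal Python sketch of a backdoor-style poisoning attack on a toy classifier. The dataset, the trigger feature, and the 1% poisoning rate are illustrative assumptions, not details from the research cited above.

    # Backdoor-style data poisoning on a toy classifier.
    # Assumes NumPy and scikit-learn; all details are illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Clean training data: the label depends (noisily) on the first feature.
    X = rng.normal(size=(5000, 20))
    y = (X[:, 0] + rng.normal(size=5000) > 0).astype(int)

    # The attacker poisons 1% of rows: stamp a trigger value on the last
    # feature and force the label to the attacker's target class (1).
    poison_idx = rng.choice(len(X), size=int(0.01 * len(X)), replace=False)
    X[poison_idx, -1] = 8.0
    y[poison_idx] = 1

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # The model still performs plausibly on clean test data...
    X_test = rng.normal(size=(1000, 20))
    y_test = (X_test[:, 0] + rng.normal(size=1000) > 0).astype(int)
    print("clean accuracy:", model.score(X_test, y_test))

    # ...but stamping the trigger flips most class-0 inputs to class 1.
    X_trig = X_test[y_test == 0].copy()
    X_trig[:, -1] = 8.0
    print("triggered inputs predicted as class 1:", model.predict(X_trig).mean())

The point is not the specific numbers but the asymmetry: a tiny, targeted fraction of the training data can buy an attacker reliable control over the model's behavior at inference time.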

Two critical features make poisoning attacks particularly concerning:

  1. Vulnerability across model types – Various models, including natural language processing and machine vision systems, can be compromised.
  2. Concentration of training data – Most developers depend on a limited number of datasets; targeting these can affect a wide array of AI models.

Federated learning, in which multiple organizations collaboratively train a model while each retains control of its own data, is also at risk: if even one participant is compromised, the shared model can become corrupted.
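
A toy federated-averaging loop in plain NumPy illustrates the dynamic. The linear model, the honest clients’ data, and the attacker’s scaled update are hypothetical simplifications, not a real federated-learning protocol.

    # Model poisoning in naive federated averaging; all details are toy.
    import numpy as np

    rng = np.random.default_rng(1)
    true_w = np.array([2.0, -1.0])  # weights every honest client estimates

    def local_update(w_global, malicious=False, n=200):
        """One round of local training, returned as a weight delta."""
        if malicious:
            # A single compromised participant submits a scaled delta that
            # drags the shared model toward an attacker-chosen target.
            return 5.0 * (np.array([-5.0, 5.0]) - w_global)
        X = rng.normal(size=(n, 2))
        y = X @ true_w + 0.1 * rng.normal(size=n)
        w_local, *_ = np.linalg.lstsq(X, y, rcond=None)
        return w_local - w_global

    w = np.zeros(2)
    for rnd in range(5):
        deltas = [local_update(w, malicious=(c == 0)) for c in range(10)]
        w = w + np.mean(deltas, axis=0)  # naive FedAvg: plain average
        print(f"round {rnd}: global weights = {np.round(w, 2)}")

    # Nine honest clients cannot save the model: nothing bounds the one
    # malicious contribution, so the average settles far from true_w.

Defenses such as update clipping and robust aggregation exist precisely because a plain average gives every participant, honest or not, unbounded influence.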

Privacy Attacks

Privacy attacks focus on models that have already been trained and deployed. These attacks can extract sensitive training data, reveal how a model functions, or even replicate the model itself. The implications are severe, as AI models often contain personally identifiable information, intellectual property, and trade secrets.
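
One illustrative privacy attack is membership inference, in which an attacker probes a deployed model to guess whether a specific record was in its training set. The sketch below, assuming scikit-learn, shows the core intuition with a deliberately overfit model; real attacks are considerably more sophisticated, for example using shadow models.

    # Confidence-based membership inference against an overfit model.
    # Assumes scikit-learn; data, model, and threshold are illustrative.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(2)

    def make_records(n):
        """Records from one population, with deliberately noisy labels."""
        X = rng.normal(size=(n, 10))
        labels = (X[:, 0] > 0).astype(int)
        flips = rng.random(n) < 0.3  # label noise the model will memorize
        return X, labels ^ flips

    X_members, y_members = make_records(300)  # the "sensitive" training set
    X_outside, y_outside = make_records(300)  # unseen, same population

    # Fully grown trees effectively memorize the training records.
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_members, y_members)

    def confidence(X):
        """The model's confidence in its own prediction for each row."""
        return model.predict_proba(X).max(axis=1)

    # Training-set members tend to receive noticeably higher confidence,
    # so simply thresholding confidence becomes a membership test.
    print("mean confidence, members:  ", confidence(X_members).mean())
    print("mean confidence, outsiders:", confidence(X_outside).mean())
    print("flagged as members (true members):",
          (confidence(X_members) > 0.85).mean())
    print("flagged as members (outsiders):  ",
          (confidence(X_outside) > 0.85).mean())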

One significant risk is data seepage, which occurs when AI systems inadvertently expose sensitive information. For example, in 2023, a transcription tool mistakenly distributed confidential meeting notes to unintended recipients, highlighting how easily automated AI workflows can leak information.

Corporate governance around AI remains uneven, with only 37% of organizations reporting any formal AI governance in place. As a result, incidents of data seepage are likely to become more common.
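
Basic data hygiene can blunt seepage before it starts. The sketch below, using Python’s standard re module, scrubs a few obvious categories of personally identifiable information from text before it is handed to an external AI service; the patterns are illustrative and far from exhaustive.

    # Redacting obvious PII before text reaches an external AI service.
    # The patterns are illustrative; production redaction needs far
    # broader coverage (names, addresses, account numbers, and so on).
    import re

    PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace matches of each PII pattern with a typed placeholder."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    note = "Follow up with jane.doe@example.com, SSN 123-45-6789, 555-867-5309."
    print(redact(note))
    # -> Follow up with [EMAIL], SSN [SSN], [PHONE].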

The Rise of AI Agents and Associated Cyber Risks

The emergence of AI agents, systems that extend the capabilities of large language models, introduces another layer of risk. These agents operate autonomously, performing tasks such as browsing the web, querying datasets, and executing code.

While AI agents can perform complex functions, their autonomy raises significant risks, including the potential for errors and hallucinations that can lead to information exposure. The more freedom granted to an AI agent, the greater the risk of misuse or data breach. Additionally, there is a danger that hackers could impersonate AI agents within corporate systems, further complicating security measures.
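
Least-privilege controls help contain these risks. The following Python sketch gates an agent’s tool calls through a per-agent allowlist, requires human approval for the most sensitive actions, and writes every attempt to an audit log; the tool names and policy table are hypothetical rather than drawn from any particular agent framework.

    # Least-privilege gating and auditing of an AI agent's tool calls.
    # Tool names, policy, and log format are hypothetical.
    from datetime import datetime, timezone

    TOOLS = {
        "search_web": lambda q: f"results for {q!r}",
        "read_dataset": lambda name: f"rows from {name}",
        "execute_code": lambda src: "<ran code>",
    }

    # Grant each agent only the capabilities its task requires, and
    # flag the riskiest ones for human sign-off.
    POLICY = {
        "research_agent": {
            "allowed": {"search_web", "read_dataset"},
            "needs_approval": {"read_dataset"},
        },
    }

    audit_log = []

    def call_tool(agent, tool, arg, approved=False):
        """Check every tool invocation against the policy and record it."""
        entry = {"ts": datetime.now(timezone.utc).isoformat(),
                 "agent": agent, "tool": tool, "arg": arg}
        rules = POLICY.get(agent)
        if rules is None or tool not in rules["allowed"]:
            entry["outcome"] = "denied"  # unknown agent or forbidden tool
            audit_log.append(entry)
            raise PermissionError(f"{agent} may not call {tool}")
        if tool in rules["needs_approval"] and not approved:
            entry["outcome"] = "pending"  # human-in-the-loop checkpoint
            audit_log.append(entry)
            raise PermissionError(f"{tool} requires human approval")
        entry["outcome"] = "allowed"
        audit_log.append(entry)
        return TOOLS[tool](arg)

    print(call_tool("research_agent", "search_web", "data poisoning"))
    try:
        call_tool("research_agent", "execute_code", "rm -rf /")
    except PermissionError as err:
        print("blocked:", err)

The same log that blocks a runaway agent also helps detect the impersonation scenario: an "agent" calling tools outside its granted profile becomes immediately visible.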

Strategies for Brokers to Address AI Risks

As these cyber risks evolve, clients will increasingly seek guidance from brokers on managing AI-related exposures. Here are some practical recommendations for brokers:

  • Inquire about AI use cases – Encourage clients to identify where AI is utilized within their organization to understand potential exposure points.
  • Examine governance and controls – Help clients recognize the importance of formal oversight, including acceptable-use policies and employee training.
  • Highlight supply chain risks – Discuss the dependency on third-party datasets and open-source models, ensuring proper vetting and monitoring controls are in place.
  • Review cyber policy language – Standard cyber policies may not adequately address risks associated with adversarial machine learning, AI agents, or data seepage.
  • Position risk management as a differentiator – Clients demonstrating strong AI governance and data hygiene are likely to receive better insurance terms and pricing.

As the landscape of cyber threats continues to evolve with the advancement of AI, it is essential for organizations to stay informed and proactive in their risk management strategies.
