Open-Source AI Systems: A Hidden Threat of Criminal Misuse

Researchers Flag 7.5% of Exposed Open-Source AI Deployments for Potential Criminal Use

Kenya, 29 January 2026 – Researchers are warning that a growing number of open-source artificial intelligence models are being deployed outside the safety controls of major technology platforms, creating what they describe as a largely unseen layer of potential criminal misuse.

Analysis Overview

The findings, shared by cybersecurity firms SentinelOne and Censys, are based on a 293-day analysis of publicly accessible deployments of open-source large language models. The researchers said many of the systems they observed were running on internet-exposed servers with limited or no safeguards, leaving them open to abuse by hackers and other malicious actors.

Risks of Self-Hosted AI Models

According to the researchers, self-hosted AI models can be repurposed to:

  • Generate phishing content
  • Automate spam operations
  • Support disinformation campaigns
  • Assist in other illicit activities

Unlike commercial AI platforms, which operate under centralized rules and monitoring, open-source models allow operators to modify system instructions and remove guardrails entirely.
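The mechanism behind this is straightforward: on a typical self-hosted model server, the system prompt is just another field in the client's request, so whoever can reach the endpoint controls the guardrails. A minimal sketch of what such a request body looks like, assuming an Ollama-style `/api/chat` interface (the model name and prompts here are hypothetical):

```python
import json

def build_chat_request(model: str, system_prompt: str, user_prompt: str) -> str:
    """Build the JSON body for an Ollama-style /api/chat call.

    On a self-hosted server there is no platform-side control over the
    "system" message: the caller supplies it, so the caller can rewrite
    or simply blank out the model's behavioral instructions.
    """
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "stream": False,
    }
    return json.dumps(payload)

# Any client can submit any system prompt it likes -- including an empty one.
body = build_chat_request("llama3", "", "Summarize this document.")
```

Commercial platforms inject and enforce the system-level instructions server-side before the request reaches the model; in a self-hosted deployment that layer simply does not exist unless the operator builds it.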

Findings from the Analysis

The analysis revealed that while thousands of open-source language model variants exist, a significant share of internet-accessible deployments were based on well-known models such as Meta’s Llama and Google DeepMind’s Gemma. In hundreds of cases, researchers identified configurations where safety controls had been explicitly disabled.

Researchers were able to view system prompts, which shape a model’s behavior, in roughly a quarter of the deployments they examined. Of those, about 7.5% were assessed as potentially enabling harmful activity, including scams, harassment, and data theft.

Geographical Distribution of Exposed Systems

Geographically, about 30% of the exposed systems were operating from China, with roughly 20% located in the United States, underscoring the global nature of the issue.

Insights from AI Governance Experts

Juan Andres Guerrero-Saade, executive director for intelligence and security research at SentinelOne, noted that discussions around AI security often overlook the scale of open-source deployments. He compared the situation to an iceberg, where visible, regulated platforms account for only a small portion of real-world AI use.

AI governance experts emphasize that the findings highlight the limits of platform-based safety measures. Rachel Adams, chief executive of the Global Center on AI Governance, stated that responsibility for managing risks becomes shared once models are released, including obligations on developers to document foreseeable harms and provide mitigation guidance.

The Challenge for Regulators

Technology companies, including Microsoft, have stated that open-source models play an important role in innovation but acknowledge the need for safeguards to prevent misuse. However, other firms referenced in the research did not respond to requests for comment.

The researchers concluded that the results point to a growing challenge for regulators as AI use expands beyond centralized platforms into decentralized, self-hosted environments that are harder to monitor and control.
