Researchers Flag 7.5% of Open-Source AI Systems for Potential Criminal Use
Kenya, 29 January 2026 – Researchers are warning that a growing number of open-source artificial intelligence models are being deployed outside the safety controls of major technology platforms, creating what they describe as a largely unseen layer of potential criminal misuse.
Analysis Overview
The findings, shared by cybersecurity firms SentinelOne and Censys, are based on a 293-day analysis of publicly accessible deployments of open-source large language models. The researchers said that many of the systems they observed were running on internet-exposed servers with limited or no safeguards, leaving them vulnerable to abuse by hackers and other malicious actors.
Risks of Self-Hosted AI Models
According to the researchers, self-hosted AI models can be repurposed to:
- Generate phishing content
- Automate spam operations
- Support disinformation campaigns
- Assist in other illicit activities
Unlike commercial AI platforms, which operate under centralized rules and monitoring, open-source models allow operators to modify system instructions and remove guardrails entirely.
Findings from the Analysis
The analysis revealed that while thousands of open-source language model variants exist, a significant share of internet-accessible deployments were based on well-known models such as Meta’s Llama and Google DeepMind’s Gemma. In hundreds of cases, researchers identified configurations where safety controls had been explicitly disabled.
Researchers were able to view system prompts, which shape a model’s behavior, in roughly a quarter of the deployments they examined. Of those, about 7.5% were assessed as potentially enabling harmful activity, including scams, harassment, and data theft.
Geographical Distribution of Exposed Systems
Geographically, about 30% of the exposed systems were operating from China, with roughly 20% located in the United States, underscoring the global nature of the issue.
Insights from AI Governance Experts
Juan Andres Guerrero-Saade, executive director for intelligence and security research at SentinelOne, noted that discussions around AI security often overlook the scale of open-source deployments. He compared the situation to an iceberg, where visible, regulated platforms account for only a small portion of real-world AI use.
AI governance experts emphasize that the findings highlight the limits of platform-based safety measures. Rachel Adams, chief executive of the Global Center on AI Governance, stated that responsibility for managing risks becomes shared once models are released, including obligations on developers to document foreseeable harms and provide mitigation guidance.
The Challenge for Regulators
Technology companies, including Microsoft, say that open-source models play an important role in innovation but acknowledge the need for safeguards to prevent misuse. Other firms referenced in the research did not respond to requests for comment.
The researchers concluded that the results point to a growing challenge for regulators as AI use expands beyond centralized platforms into decentralized, self-hosted environments that are harder to monitor and control.