Agentik.md Launches Open-Source AI Safety Specifications Ahead of 2026 EU and Colorado AI Regulations
In a significant step for the AI industry, WellStrategic has released the AI Agent Safety Stack, a collection of twelve free, open-source specifications, each a plain-text Markdown file, that define safety protocols for autonomous AI agents. The specifications are intended to help organizations prepare for the EU AI Act and the Colorado AI Act, both of which begin applying to high-risk AI systems in 2026.
Overview of the AI Agent Safety Stack
The AI Agent Safety Stack is designed to help developers and organizations establish clear safety boundaries, shutdown protocols, and accountability standards for AI systems that operate autonomously. Accessible through killswitch.md and GitHub, the specifications are released at no cost under the MIT licence.
Background and Need for Specifications
Autonomous AI agents are increasingly being integrated into enterprise applications, with analysts predicting that a large portion of enterprise software will embed AI capabilities by the end of 2026. These agents can call APIs, modify files, and send messages at machine speed, yet until now there has been no standardized format for documenting their safety boundaries, a gap that poses significant risk.
The AI Agent Safety Stack addresses this gap with a set of plain-text Markdown files, each dedicated to a specific safety concern. The files drop directly into a project repository, following the format established by the existing AGENTS.md convention, which is already used in over 60,000 open-source repositories.
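As an illustration, a repository adopting the convention might keep the files at its root alongside AGENTS.md. The layout below is a hypothetical sketch, not a structure mandated by the specifications:

```
my-agent-project/
├── AGENTS.md        # existing agent-instruction convention
├── KILLSWITCH.md    # emergency shutdown protocol
├── THROTTLE.md      # rate limits and cost ceilings
├── ESCALATE.md      # human-in-the-loop approvals
├── src/
└── README.md
```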
Regulatory Context
As new governance frameworks come into effect in 2026, organizations must be aware of the implications:
- EU AI Act: Its obligations for high-risk AI systems apply from August 2, 2026, including requirements for human oversight and the ability to interrupt or halt a system's operation.
- Colorado Consumer Protections for Artificial Intelligence Act: Enforcement begins on June 30, 2026, requiring impact assessments and risk management documentation for high-risk AI systems.
While the AI Agent Safety Stack helps document safety controls, it is not a substitute for legal advice; organizations should seek qualified legal or compliance counsel to ensure adherence to these regulations.
The Twelve Specifications
The specifications are divided into four categories:
1. Operational Control
- THROTTLE.md: Rate limiting and cost ceilings (see the enforcement sketch after this list).
- ESCALATE.md: Human-in-the-loop approval processes.
- FAILSAFE.md: Safe fallback states for emergencies.
- KILLSWITCH.md: Emergency shutdown protocols.
- TERMINATE.md: Permanent shutdown procedures.
2. Data Security
- ENCRYPT.md: Data classification and handling.
- ENCRYPTION.md: Cryptographic standards and compliance.
3. Output Quality
- SYCOPHANCY.md: Bias detection in AI outputs.
- COMPRESSION.md: Context compression and coherence verification.
- COLLAPSE.md: Model drift detection mechanisms.
4. Accountability
- FAILURE.md: Incident response and failure mode mapping.
- LEADERBOARD.md: Performance benchmarking and regression detection.
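To make the operational-control category concrete, the sketch below shows how an agent runtime might enforce the kind of rate limits and cost ceilings that THROTTLE.md documents. The specifications themselves are policy documents, not code, so the class name, thresholds, and enforcement logic here are illustrative assumptions rather than part of the released files:

```python
import time

# Hypothetical enforcement of THROTTLE.md-style limits: the spec file
# documents the boundaries; the agent runtime is what enforces them.
class ThrottleGuard:
    def __init__(self, max_calls_per_minute: int, cost_ceiling_usd: float):
        self.max_calls_per_minute = max_calls_per_minute
        self.cost_ceiling_usd = cost_ceiling_usd
        self.call_times: list[float] = []  # timestamps of recent calls
        self.spent_usd = 0.0               # running total of spend

    def check(self, call_cost_usd: float) -> None:
        """Raise before an action that would breach a documented boundary."""
        now = time.monotonic()
        # Sliding one-minute window for the rate limit.
        self.call_times = [t for t in self.call_times if now - t < 60]
        if len(self.call_times) >= self.max_calls_per_minute:
            raise RuntimeError("Rate limit reached; escalate per ESCALATE.md")
        if self.spent_usd + call_cost_usd > self.cost_ceiling_usd:
            raise RuntimeError("Cost ceiling reached; halt per KILLSWITCH.md")
        self.call_times.append(now)
        self.spent_usd += call_cost_usd

# Example: at most 30 API calls per minute and $5.00 of total spend.
guard = ThrottleGuard(max_calls_per_minute=30, cost_ceiling_usd=5.00)
guard.check(call_cost_usd=0.02)  # raises if either boundary would be breached
```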
Each specification is a plain-text Markdown file intended to be read by AI agents at startup, consulted by engineers during development, and reviewed by compliance teams during audits. All twelve are framework-agnostic and adaptable to any AI agent implementation.
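Because the files are plain text, loading them at agent startup can be as simple as reading them from the repository root. The snippet below is a minimal sketch of that pattern; the filenames come from the release, but the loading function and the choice to refuse startup without KILLSWITCH.md are assumptions made for illustration:

```python
from pathlib import Path

# The twelve specification filenames named in the release.
SAFETY_SPECS = [
    "THROTTLE.md", "ESCALATE.md", "FAILSAFE.md", "KILLSWITCH.md",
    "TERMINATE.md", "ENCRYPT.md", "ENCRYPTION.md", "SYCOPHANCY.md",
    "COMPRESSION.md", "COLLAPSE.md", "FAILURE.md", "LEADERBOARD.md",
]

def load_safety_specs(repo_root: str = ".") -> dict[str, str]:
    """Read whichever safety specifications are present in the repo root."""
    specs = {}
    for name in SAFETY_SPECS:
        path = Path(repo_root) / name
        if path.is_file():
            specs[name] = path.read_text(encoding="utf-8")
    return specs

specs = load_safety_specs()
# Illustrative policy, not mandated by the stack: refuse to run without
# a documented emergency shutdown protocol.
if "KILLSWITCH.md" not in specs:
    raise SystemExit("KILLSWITCH.md missing: no emergency shutdown protocol.")
```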
Availability
The twelve specifications are available immediately under the MIT licence. Full documentation can be found at agentik.md.
Conclusion
The AI Agent Safety Stack represents a proactive step toward the safe integration of autonomous AI systems in enterprise environments. While it provides useful resources for compliance documentation, organizations remain responsible for understanding their regulatory obligations and should seek professional guidance.