Germany’s AI Implementation Act (KI-MIG): Who are the Responsible Supervisory Authorities?
On 10 February 2026, the Federal Government of Germany adopted its official government draft for the AI Market Surveillance and Innovation Promotion Act (KI-Marktüberwachungs- und Innovationsförderungs-Gesetz – KI-MIG). This act outlines Germany’s supervisory architecture, enforcement powers, and penalty regime for AI systems under the EU AI Act (Regulation (EU) 2024/1689).
This new draft updates an earlier ministerial draft from 11 September 2025, particularly regarding competent authorities and administrative fines, marking the official start of the legislative process.
Supervisory Architecture
Germany adopts a hybrid supervisory model, establishing a strong central authority without creating a new agency. The key components include:
- BNetzA as the Central Authority: The German Federal Network Agency (Bundesnetzagentur – BNetzA) will serve as the default market surveillance authority. It will be the single point of contact for the EU AI Office and the central complaints office. BNetzA will also manage at least one AI regulatory sandbox with priority access for SMEs, start-ups, and research institutions.
- Coordination and Competence Centre (KoKIVO): Established within BNetzA, KoKIVO pools AI expertise and makes it available to other competent authorities, ensuring that interpretive guidance flows from a single hub.
- Sector-Specific Authorities: Existing regulators for harmonized EU product legislation will retain competencies for AI systems related to those products.
- Media Service Providers: AI systems used by media service providers for journalistic or advertising purposes will be supervised by the state media authorities rather than BNetzA, in line with the constitutional requirement of state neutrality toward the media.
- BaFin for Financial Services: The Federal Financial Supervisory Authority (BaFin) will supervise AI systems linked to regulated financial activities, developing its own cybersecurity testing guidelines for high-risk systems.
- Independent AI Market Surveillance Chamber: For particularly sensitive high-risk AI systems, an independent chamber will be created within BNetzA to oversee specific areas such as biometric AI systems used in law enforcement and in democratic processes.
- Federal States Carve-Out: Where public bodies of the federal states place AI systems on the market, the relevant state authority, not BNetzA, will handle market surveillance.
Investigative and Enforcement Powers
International companies should be aware of the robust enforcement toolkit provided to authorities under the draft law:
- Extensive Inter-Agency Information Sharing: Authorities can exchange information, including personal data, to fulfill their tasks.
- Remote Access and External Experts: Authorities can conduct investigations remotely and hire external experts for assistance.
- Unannounced Inspections: Inspections can be conducted without notice during and outside regular business hours.
- Immediate Enforcement: Legal challenges against decisions by BaFin have no suspensive effect, so companies must comply immediately, even while an appeal is pending.
- Enforcement Tactics: Authorities will proactively police the market through anonymous test purchases and close cooperation with customs authorities.
Administrative Fines
The fines under the EU AI Act apply directly, while German administrative-offence procedure governs their imposition. The draft adds supplementary national fines of up to EUR 50,000 for specific violations not covered by the AI Act.
Strict Obligations Upon Ceasing Business
For international companies operating in Germany, the draft outlines requirements regarding the end of a business lifecycle. If a provider ceases activities, the obligation to retain AI Act-related documentation transfers to the liquidation or insolvency administrator.
Whistleblower Protection
The draft amends Germany’s Whistleblower Protection Act to cover violations of the EU AI Act, ensuring full protections against retaliation for whistleblowers.
Innovation Promotion
BNetzA will establish an AI Service Desk, deliver training programs, and advise public-sector bodies on AI classification. The AI regulatory sandbox will prioritize access for SMEs, start-ups, and research institutions, while a tacit approval mechanism allows real-world testing of high-risk AI systems to proceed.
Next Steps
The adoption of this government draft marks the beginning of the legislative process. Stakeholders should monitor potential amendments and the delineation of powers between BNetzA and other authorities as the Bundestag debates the draft.