Technical Sandboxes Enable Regulatory Learning for the EU AI Act Amid Rapid AI Development
The European Union’s forthcoming AI Act seeks to govern a rapidly evolving technological landscape, but its success hinges on its capacity for continuous adaptation and learning. This study investigates how such ‘regulatory learning’ can be implemented effectively, addressing a critical gap in the Act’s framework: the lack of clearly defined technical mechanisms for gathering and processing the information needed for informed policy adjustments.
Regulatory Learning Model
The authors propose a theoretical model that decomposes regulatory learning into micro, meso, and macro levels, identifying AI Technical Sandboxes (AITS) as vital components for generating the evidence needed to drive this process. This work bridges legal requirements and technical implementation, fostering a more productive dialogue between legal and technical experts and ultimately strengthening the EU’s approach to AI governance.
Adaptive Governance for AI Technologies
An adaptive approach is essential to govern artificial intelligence technologies, given their rapid development and unpredictable emerging capabilities. The AI Act embeds provisions for regulatory learning; however, these provisions currently operate within a complex network of actors and mechanisms lacking a clearly defined technical basis for scalable information flow.
Theoretical Model of the AI Act’s Regulatory Learning Space
This paper establishes a theoretical model of the EU AI Act’s regulatory learning space by mapping information flows between stakeholders. The authors map actors and their interactions, extending existing hierarchical analyses to model the dynamic interplay between enforcement and evidence aggregation.
The work leverages an extended ‘bathtub model’ to visualize this flow, showing how technical compliance demands from the EU AI Act exert pressure on AI system providers and developers, who constitute the micro level. The design and assessment activities of these actors generate the micro-level evidence needed to inform adaptation at the macro level, potentially leading to amendments of the AI Act itself or to implementing acts.
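To make this layered flow concrete, the sketch below encodes the levels and a handful of actors as a small directed chain along which evidence is aggregated upward. The actor names and edges are our own illustrative reading of the model, not definitions drawn from the AI Act or the paper.

```python
# Illustrative sketch of the micro/meso/macro evidence flow described by
# the extended 'bathtub model'. Actor names and edges are hypothetical
# simplifications, not terms defined in the AI Act.

# Each actor is assigned to one level of the regulatory learning space.
LEVELS = {
    "AI system provider": "micro",     # designs and assesses AI systems
    "AI Technical Sandbox": "micro",   # generates reproducible evidence
    "Member State Authority": "meso",  # compares sandbox engagements
    "AI Office": "meso",               # drafts guidelines, Codes of Practice
    "European Commission": "macro",    # amendments and implementing acts
}

# Directed edges: evidence flows upward; compliance pressure flows back down.
EVIDENCE_FLOWS = [
    ("AI system provider", "AI Technical Sandbox"),
    ("AI Technical Sandbox", "Member State Authority"),
    ("Member State Authority", "AI Office"),
    ("AI Office", "European Commission"),
]

def evidence_path(start: str) -> list[str]:
    """Follow the aggregation chain from an actor up to the macro level."""
    path, current = [start], start
    targets = dict(EVIDENCE_FLOWS)
    while current in targets:
        current = targets[current]
        path.append(current)
    return path

if __name__ == "__main__":
    for actor in evidence_path("AI system provider"):
        print(f"{actor:25s} ({LEVELS[actor]} level)")
```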
Challenges and Opportunities
The research highlights a disconnect between the AI Office’s legal status and its operational autonomy, identifying it as an example of ‘quasi-agencification’ within EU governance. To overcome this, the study pioneers a functional reasoning approach, tracing the top-down enforcement pipeline from legislation to technical assessments and defining three levels of abstraction (legislative, regulatory, and technical) at which learning can occur.
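As a rough illustration, these three abstraction levels can be read as a top-down pipeline of artifacts. Only the level names come from the study; the example artifacts listed below are our own assumptions.

```python
# Hypothetical mapping of the three abstraction levels to example
# artifacts in the enforcement pipeline; artifact names are illustrative.
ABSTRACTION_LEVELS = {
    "legislative": "EU AI Act articles and recitals",
    "regulatory":  "harmonised standards, guidelines, Codes of Practice",
    "technical":   "concrete assessment procedures and test results",
}

# Enforcement flows top-down; learning (evidence) flows bottom-up.
for level, artifact in ABSTRACTION_LEVELS.items():
    print(f"{level:12s} -> {artifact}")
```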
The analysis shows that SMEs whose systems fall under high-risk classifications must demonstrate compliance with Articles 8 to 27 of the AI Act, undertaking iterative assessments throughout their solution’s development lifecycle. Participation in structures such as standardization processes and advisory forums allows micro-level information and experience to propagate to the meso and macro levels.
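For illustration, such iterative assessments could be modeled as a simple compliance log keyed by lifecycle stage and article number, as in the minimal sketch below. The stage names, the logging interface, and the example outcomes are hypothetical assumptions; only the Article 8 to 27 range comes from the text above.

```python
# Hypothetical sketch of iterative high-risk compliance tracking across a
# development lifecycle. Stage names and the interface are assumptions,
# not prescribed by the AI Act.
from dataclasses import dataclass, field

HIGH_RISK_ARTICLES = range(8, 28)  # Articles 8-27 of the AI Act
LIFECYCLE_STAGES = ["design", "development", "validation", "deployment"]

@dataclass
class AssessmentLog:
    # Maps (lifecycle stage, article number) to the latest pass/fail result.
    results: dict = field(default_factory=dict)

    def assess(self, stage: str, article: int, passed: bool) -> None:
        """Record the outcome of one assessment iteration."""
        self.results[(stage, article)] = passed

    def open_gaps(self, stage: str) -> list[int]:
        """Articles not yet demonstrated compliant at the given stage."""
        return [a for a in HIGH_RISK_ARTICLES
                if not self.results.get((stage, a), False)]

log = AssessmentLog()
log.assess("design", 9, passed=True)    # e.g. Article 9: risk management
log.assess("design", 10, passed=False)  # e.g. Article 10: data governance
print("Open design-stage gaps:", log.open_gaps("design")[:5], "...")
```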
Transparency and Compliance
A consistent, reproducible methodology within an AITS makes AI system development transparent, potentially aiding the interpretation of legal requirements and assessment results. The study finds that applying AITS methodologies in engagements with Member State Authorities (MSAs) enables comparable assessments, allowing MSAs to gather evidence and refine their understanding of how high-level legislation translates into technical operationalization.
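A minimal sketch of what such a machine-readable, comparable assessment record could look like is shown below. Every field name here is our own assumption; the study specifies the need for machine-readable evidence rather than a concrete schema.

```python
# Sketch of a machine-readable assessment record an AITS could emit.
# All field names are illustrative assumptions, not a schema from the paper.
import json
from dataclasses import dataclass, asdict

@dataclass
class AssessmentRecord:
    engagement_id: str    # identifies one sandbox engagement
    article: int          # AI Act article being assessed (8-27)
    lifecycle_stage: str  # e.g. "design", "validation"
    methodology: str      # named, reproducible assessment procedure
    outcome: str          # "pass" | "fail" | "inconclusive"

record = AssessmentRecord("SB-2024-001", 10, "design",
                          "data-governance-check-v1", "pass")
print(json.dumps(asdict(record), indent=2))  # comparable across MSAs
```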
As the number of AI Regulatory Sandbox (AIRS) engagements grows, the machine-readable data they generate supports aggregation and scalable analysis at both the meso and macro levels. This allows the AI Office to design guidelines and Codes of Practice, and the Commission to evaluate the suitability of standards for legal force.
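Continuing the sketch, aggregation at the meso and macro levels could then be as simple as counting outcomes across engagements; the example records and the failure-count heuristic below are illustrative only and not drawn from the study.

```python
# Sketch of meso/macro-level aggregation over machine-readable sandbox
# records (plain dicts mirroring the hypothetical schema above).
from collections import Counter

records = [
    {"engagement_id": "SB-001", "article": 10, "outcome": "pass"},
    {"engagement_id": "SB-002", "article": 10, "outcome": "fail"},
    {"engagement_id": "SB-003", "article": 15, "outcome": "fail"},
]

# Count failures per article across all engagements: articles that fail
# often may signal unclear requirements or gaps in existing standards,
# which is the kind of evidence that could inform guidelines.
failures = Counter(r["article"] for r in records if r["outcome"] == "fail")
for article, n in failures.most_common():
    print(f"Article {article}: {n} failed assessment(s)")
```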
Conclusion
This work demonstrates that a robust technical foundation is necessary to support the AI Act’s ambition of future-proof regulation, moving beyond existing legal mechanisms for review and standardization. By applying social learning theory, the research highlights the importance of AITS in reproducibly generating technical evidence, while also outlining requirements for machine-readable solutions to ensure efficient data aggregation.
The authors acknowledge limitations including socio-political challenges such as regulatory capture and legislative inertia, which a technical framework alone cannot resolve. Future research will focus on implementing the components detailed within the study, potentially transforming the compliance process into a source of valuable regulatory insight for both companies and regulators.
The success of the AI Act ultimately depends on operationalizing this socio-technical infrastructure, and the proposed AITS represents a key step towards balancing governance with continued innovation.