EU Plans Extra Time for Companies to Meet AI Act Rules
The European Union is preparing to delay the enforcement of certain high-risk obligations under the AI Act until late 2027. The decision comes in response to pressure from global tech companies and business groups, who argue that firms need more time to comply with the stringent regulations.
Overview of the Delay
The European Commission announced that enforcement of the high-risk AI rules, which cover systems such as biometric identification, credit scoring, job screening, health services, law enforcement tools, and traffic systems, will now be postponed until December 2027. Originally, these rules were set to take effect by mid-2026.
This delay is part of a broader initiative called the Digital Omnibus package, aimed at simplifying overlapping digital regulations while maintaining their core principles. A Commission official emphasized that simplification does not mean deregulation, but rather a reassessment of the regulatory landscape.
Implications of the Delay
The high-risk systems governed by these rules have significant implications for individuals’ identities, access to services, and legal rights. The Commission justifies the delay by stating that companies need additional time to meet the technical and legal requirements associated with these tools.
Changes to Other Regulations
The Digital Omnibus package also proposes updates to existing regulations such as the GDPR, the e-Privacy Directive, and the Data Act. A particularly controversial aspect of the GDPR update would allow major companies, including Google and Meta, to use personal data from European citizens for training AI models, provided that clear rules and safeguards are established. Critics, however, argue that this could undermine protections that have been in place since 2018.
Simplifying Cookie Consent
Additionally, the proposal seeks to streamline cookie consent processes. Many users are overwhelmed by consent pop-ups while browsing the web, leading to what regulators call “banner fatigue.” The Commission aims to enable quicker choices, reduce repeated consent requests, and set clearer limits on how long cookie preferences remain valid, though some critics fear this may lead to increased tracking.
Rationale Behind the Changes
The Commission stresses that this slowdown does not diminish its commitment to AI safety. Officials argue that Europe requires a clearer set of rules that are practical for companies to follow, rather than an overly ambitious framework that is difficult to enforce. The delay is also influenced by external pressure from tech firms in the US and Asia, which contend that stringent regulations could deter investment and hinder AI research in Europe.
Criticism from Rights Groups
Digital rights advocates, such as European Digital Rights (EDRi), criticize the proposed changes, arguing that they could weaken data protection rights and make it easier for large corporations to collect personal data. They warn that delaying the high-risk rules could leave individuals exposed to biased or erroneous systems in critical areas like hiring and law enforcement.
Next Steps
The Digital Omnibus package awaits approval from the European Parliament and the Council, with lawmakers expected to debate the proposal in the coming months. The exact implementation dates for certain rules will depend on the readiness of supporting technical standards, adding another layer of uncertainty for companies.
As Europe navigates the balance between protecting its citizens from risky AI systems and maintaining a competitive tech sector, the coming months will reveal whether this new plan can achieve both objectives.