On May 7, 2026, the Council of the European Union and the European Parliament reached a provisional agreement on amendments to the landmark EU AI Act (Regulation 2024/1689). The deal, part of the Digital Omnibus package proposed by the European Commission in November 2025, delays key compliance deadlines, removes machinery from scope, bans AI-generated intimate deepfakes, and extends regulatory relief to mid-sized companies, all while preserving the Act's core risk-based framework.
This is a deep dive from official EU sources into what actually changed — and what it means in practice.
The Timeline: Everything Gets Delayed
The most significant impact of these amendments is temporal. Here’s the before and after:
Standalone high-risk AI systems (biometrics, employment screening, education, law enforcement, critical infrastructure, border management):
- Before: August 2, 2026
- After: December 2, 2027 (+16 months)
High-risk AI embedded in regulated products (medical devices, toys, lifts, watercraft):
- Before: August 2, 2027
- After: August 2, 2028 (+12 months)
National AI regulatory sandboxes:
- Before: operational by August 2, 2026
- After: August 2, 2027 (+12 months)
Watermarking and transparency of AI-generated content:
- New deadline: December 2, 2026
This is notably earlier than the February 2027 date in the Commission's own November 2025 proposal, showing that Parliament pushed for faster transparency rules.
Why the delays? The Commission’s explanatory memorandum (COM(2025) 836) cited four concrete problems: delayed designation of national competent authorities, missing conformity assessment bodies, no harmonised standards for high-risk requirements yet, and incomplete guidelines and compliance tools. Without these foundations, the Commission argues, businesses face unpredictable compliance costs.
Machinery: Fully Excluded
AI systems embedded in machinery are now completely exempt from the AI Act. They only need to comply with the Machinery Regulation — one regulatory framework instead of two.
Before this amendment, a factory robot with AI had to satisfy both the Machinery Regulation and the AI Act simultaneously — double the paperwork, double the cost. Now the Commission has the power to add AI-specific health and safety requirements directly into the Machinery Regulation via delegated acts, eliminating the overlap.
This was a direct result of lobbying from major industrial companies like Siemens and ASML, who argued that dual compliance was unsustainable.
Practical impact: Any company whose AI products fall under the Machinery Regulation can stop preparing for AI Act compliance. But watch for the Commission’s delegated acts — AI-specific requirements may be added to the Machinery Regulation itself.
The "Nudifier" Ban: New Explicit Prohibitions
The amendment adds two explicit bans to the AI Act’s prohibited practices list:
1. AI systems designed to create child sexual abuse material (CSAM)
2. AI systems that generate non-consensual sexual or intimate images of identifiable persons, colloquially known as "nudifier" apps
This covers images, video, and audio. The prohibition applies to:
- Placing such systems on the EU market
- Placing systems on the market without reasonable safety measures against such misuse
- Deploying systems for this purpose
Deadline: December 2, 2026.
This was a major priority for the European Parliament. Co-rapporteur Michael McNamara (Renew) called it "a key part of the Parliament's mandate." Dutch lawmaker Kim van Sparrentak emphasized the protection of women and girls from intimate deepfakes.
Small Mid-Caps: Regulatory Relief Expanded
The EU's regulatory relief now extends beyond traditional SMEs to the so-called small mid-caps (SMCs): companies with up to 3,000 employees and €2.2 billion in turnover. These companies now qualify for the same regulatory simplifications that previously applied only to traditional SMEs (≤250 employees):
- Simplified technical documentation requirements
- Special consideration in penalty applications
- Reduced administrative burden overall
This is a significant expansion. Thousands more companies now benefit from lighter compliance requirements, directly supporting the Commission’s stated goal of fostering European AI scaleups.
Safety Components: A Narrower Definition
The amendment narrows what qualifies as a «safety component» under the AI Act. AI functions that only assist users or optimise performance will no longer automatically trigger high-risk classification — unless their failure poses actual health or safety risks.
Before, any AI classified as a safety component of a regulated product was automatically deemed high-risk. The narrower definition reduces the compliance scope substantially for product manufacturers.
Centralised Enforcement: The AI Office
Oversight of AI systems built on general-purpose AI models is now centralised at the EU-level AI Office, housed within the European Commission. National authorities retain competence only for:
- Law enforcement AI
- Border management AI
- Judicial authority AI
- Financial institution AI
This means AI developers face one supervisor — not 27 different national authorities potentially interpreting rules differently. Less fragmentation, more predictability.
Bias Detection: Personal Data Now Permitted
A notable pro-innovation change: providers and deployers of all AI systems can now process special categories of personal data (sensitive data like race, health, religion, sexual orientation) where strictly necessary to detect and correct biases, provided appropriate safeguards are in place.
Previously, using sensitive data for bias testing required finding a legal basis under GDPR — legally uncertain territory. The amendment explicitly carves out an exception, making bias testing legally safe and encouraging better AI quality across the board.
Other Notable Changes
- Registration obligation reinstated: Providers must register AI systems in the EU high-risk database even if they claim exemption from high-risk classification. This closes a loophole where companies could avoid transparency by self-exempting.
- Sectoral overlap mechanism: A new mechanism allows the Commission to limit the AI Act’s application where sectoral laws already have equivalent AI-specific requirements — preventing future double regulation.
- AI literacy obligation shifted: Instead of imposing an unspecified obligation on providers and deployers, the duty to promote AI literacy now falls on the Commission and Member States.
- Post-market monitoring simplified: The requirement for a harmonised post-market monitoring plan was removed, giving companies flexibility in how they monitor AI systems after deployment.
The Political Framing
The Council presidency (Cyprus) framed this as a competitiveness move. Deputy Minister Marilena Raouna stated:
"Today's agreement on the AI Act significantly supports our companies by reducing recurring administrative costs. It ensures legal certainty and a smoother and more harmonised implementation of the rules across the Union, strengthening EU's digital sovereignty and overall competitiveness."
This is the first deliverable under the "One Europe, One Market" roadmap agreed by EU institutions. The broader political context is the 2024 Letta and Draghi reports, which warned that regulatory complexity was eroding Europe's competitiveness against the US and China.
What Comes Next
The provisional agreement still needs formal adoption by both the Council and the European Parliament. Both institutions have indicated they aim to complete this before August 2, 2026 — the original deadline for high-risk AI rules — to avoid any regulatory gap.
After adoption, the text undergoes legal and linguistic revision before being published in the Official Journal.
Sources
All information in this article comes from official EU sources.