Heralded as a model for responsible governance, the EU's AI Act has also ignited a fierce debate over whether regulation can coexist with innovation.
The European Union has staked its claim as the world’s foremost regulator of artificial intelligence. The AI Act — formally Regulation (EU) 2024/1689 — entered into force on 1 August 2024, making the EU the first jurisdiction anywhere to adopt a comprehensive, binding legal framework governing AI systems across all sectors of the economy. For Brussels, the legislation represents the culmination of years of deliberation and a clear statement of values: that technology must serve people, not the other way around.
At the heart of the Act is a risk-based approach, which classifies AI systems into four tiers according to the severity of harm they could inflict. At the top sits a category of “unacceptable risk” — AI applications so dangerous to fundamental rights and public safety that they are outright banned. These include social scoring systems, real-time remote biometric identification in public spaces for law enforcement purposes, and AI tools designed to manipulate individuals through subliminal techniques. These prohibitions came into force in February 2025, marking the first concrete enforcement milestone of the new regime.
| Risk Category | Regulatory Treatment | Key Examples |
| --- | --- | --- |
| Unacceptable Risk | Prohibited outright | Social scoring, mass biometric surveillance, manipulative AI |
| High Risk | Strict pre-market obligations | AI in recruitment, credit scoring, critical infrastructure |
| Transparency Risk | Disclosure requirements | Chatbots, deepfakes, AI-generated content |
| Minimal / No Risk | No specific rules | Spam filters, AI-enabled video games |
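The four-tier structure above can be sketched as a simple lookup table. This is purely illustrative: the tier names and examples follow the table, but the data structure and the `regulatory_treatment` function are hypothetical, not anything defined by the Act itself.

```python
# Illustrative sketch of the AI Act's four-tier risk taxonomy.
# Tier names, treatments, and examples mirror the table above;
# the mapping and function names are invented for illustration.

RISK_TIERS = {
    "unacceptable": {
        "treatment": "prohibited outright",
        "examples": ["social scoring", "mass biometric surveillance", "manipulative AI"],
    },
    "high": {
        "treatment": "strict pre-market obligations",
        "examples": ["recruitment", "credit scoring", "critical infrastructure"],
    },
    "transparency": {
        "treatment": "disclosure requirements",
        "examples": ["chatbots", "deepfakes", "AI-generated content"],
    },
    "minimal": {
        "treatment": "no specific rules",
        "examples": ["spam filters", "AI-enabled video games"],
    },
}


def regulatory_treatment(tier: str) -> str:
    """Return the regulatory treatment for a given risk tier."""
    return RISK_TIERS[tier]["treatment"]
```

The point of the sketch is the ordering: obligations scale down monotonically from prohibition to no specific rules as the assessed risk decreases.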
Below the prohibited tier, “high-risk” AI systems — those used in recruitment, credit scoring, medical devices, education, and law enforcement — face a demanding compliance regime before they can be placed on the market. Providers must conduct rigorous risk assessments, ensure the quality of training data, maintain detailed technical documentation, and implement robust human oversight mechanisms. These obligations are set to apply fully from August 2026. For General-Purpose AI (GPAI) models — the large foundation models underpinning tools like chatbots and image generators — specific transparency and copyright-related rules became enforceable in August 2025.
The EU has been explicit about its ambitions. The Act is designed not merely to constrain harmful AI, but to position Europe as a global standard-setter — a kind of regulatory “Brussels Effect” for the digital age. Alongside the Act, the Commission has launched the AI Pact, a voluntary initiative encouraging companies to comply ahead of the legal deadlines, and has established the European AI Office to coordinate enforcement and develop codes of practice for GPAI providers.
Yet the Act has attracted sustained criticism, and not only from the technology industry. Scholars and policymakers have identified a fundamental tension at its core. By focusing primarily on output regulation — what AI systems must not do — rather than on cultivating the inputs that drive competitiveness, such as access to capital, computing infrastructure, and talent, the EU risks what analysts have called a loss of “cognitive sovereignty”. The fear is that non-European values and technological standards will become embedded in AI systems deployed across the continent, shaping imagination, culture, and ultimately democratic discourse — all while European firms struggle to compete.
Critics also point to the sheer complexity of the legislation. With 180 recitals, 113 articles, and 13 annexes, the AI Act is the most extensive regulatory instrument in the EU’s digital ecosystem. Its implementation is further layered with guidelines, codes of practice, and technical standards — a density that risks undermining the legal certainty it was designed to provide. The 2024 Draghi Report on EU competitiveness warned explicitly that excessive regulatory density could deter investment and drive AI talent to jurisdictions with more permissive environments.
In response, the European Commission proposed a “Digital Omnibus” in late 2025, framed as a simplification exercise. However, the proposal has itself become controversial, with experts warning that introducing major amendments just months before key provisions take effect would create new uncertainty and potentially legitimise previously non-compliant datasets, entrenching the advantages of dominant players.
The effectiveness of the AI Act will ultimately depend less on its textual ambition than on the quality of its implementation. National authorities across the EU’s 27 member states must now build the institutional capacity to enforce a technically complex law in a fast-moving technological environment. The stakes could not be higher: the choices Europe makes today about how to govern AI will not only shape its own digital future but also set a template — or a cautionary tale — for the rest of the world.