The European Union (EU) stands poised to roll out its groundbreaking AI Act, the first risk-based regulatory framework for artificial intelligence applications. Under the terms of the approved legislation, the law will go into effect on August 2. It addresses major AI use cases and specifically prohibits applications deemed to pose an “unacceptable risk,” including cognitive behavioral manipulation and social scoring, practices that raise ethical questions about whether they can benefit either individuals or society.
The EU Commission remains committed to a definitive timeline for implementing the AI Act. It has reiterated that the rules should focus on builders of general-purpose AI models that could pose systemic risks. Major companies such as OpenAI, Anthropic, Google, and Meta fall under this regulatory umbrella. In practice, these organizations must comply with the new standards by August 2, 2027, a requirement that extends to any individual or entity that made general-purpose AI models available on the market before that date.
The legislation has prompted considerable debate about its breadth and reach. Critics claim that the requirements in the AI Act go too far, creating legal uncertainty for model developers, potentially extending to criminal liability. Joel Kaplan, a senior Meta executive, voiced concerns about the bill’s far-reaching implications.
“Europe is heading down the wrong path on AI.” – Joel Kaplan
Kaplan warned that such heavy-handed rules would hold back the development of frontier AI models in Europe, cautioning that the limitations might “throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them.”
With the compliance deadline fast approaching, businesses are still working out what these regulations mean for them. Whether the AI Act adequately protects individuals from high-risk AI applications, while setting clear, predictable guardrails for companies pushing the bounds of AI, remains to be seen. Striking that balance between protecting public safety and supporting entrepreneurial innovation is the key test ahead.
The EU’s determination to bring the AI Act into force remains steadfast, even amid widespread pushback from civil society and stakeholders across the tech industry. As the deadline nears, nonprofits and businesses alike are being pressed to prepare to comply with the new regulations.