The EU AI Act, landmark legislation regulating artificial intelligence (AI) in the European Union, has come into effect. The law aims to ensure that AI systems used and developed in the EU are safe, trustworthy, and respectful of the fundamental rights of EU citizens. It follows a risk-based approach to regulating AI, with stricter rules for high-risk systems.
Understanding the EU AI Act
What is the EU AI Act?
The EU AI Act is a piece of European Union legislation, proposed by the European Commission in 2020, that regulates artificial intelligence models, especially those posing systemic risks. Its primary goal is to mitigate risks and safeguard user rights while giving companies investing in AI development clear rules to follow. According to Mathieu Michel, Belgian state secretary for digitalization, “This landmark law addresses a global technological challenge that also creates opportunities for our societies and economies.” The law harmonizes rules on AI use and development across the EU’s single market.
The EU AI Act defines general-purpose AI models as those “capable of generating text, images, and other content.” Models in this category that are not considered to pose systemic risks face only limited requirements and obligations, such as transparency about how the models are trained. However, such models still present challenges to artists, authors, and other creators regarding copyright law.
Key Objectives of the EU AI Act
The key objectives of the EU AI Act include ensuring that AI systems used and developed in the EU are safe and trustworthy. The law aims to ensure that these systems respect existing laws on the fundamental rights of EU citizens while boosting investment and innovation in the bloc.
To achieve these goals, high-risk applications face stringent requirements under the EU AI Act: regular activity tracking, mandatory use of high-quality training datasets to reduce bias, thorough risk assessment and mitigation measures, and model documentation shared with authorities for compliance evaluation.
Systems posing “unacceptable” risk, defined as any that undermine fundamental rights or produce discriminatory outcomes, are banned outright by the law. Open-source projects are not exempt unless their model parameters are made fully available to the public.
Scope and Applicability of the EU AI Act
Who is Affected by the EU AI Act?
The EU AI Act will have a major impact on global tech companies, whether they are based inside or outside Europe. Companies such as Microsoft, Meta, Google, and Amazon could face substantial fines for breaches, even if they are headquartered outside Europe, so long as they provide services there.
Jamil Jiva argued that imposing significant fines on non-compliant companies is what makes regulation impactful. Charlie Thompson noted that the law would bring much more scrutiny to tech giants’ operations in the European market and their use of citizen data. These large corporations must now comply with a new set of rules, governed from Brussels, designed to protect the interests of European users.
| Risk Level | Description | Obligations |
|---|---|---|
| Unacceptable | Undermines fundamental rights or produces discriminatory outcomes | Banned |
| High | Stricter rules apply | Regular activity tracking, high-quality training datasets, thorough risk assessment and mitigation, model documentation shared with authorities, compliance evaluation |
| Limited | Transparency only | Disclosure obligations (e.g., stating that content is AI-generated) |
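To illustrate how a compliance team might model the tiered obligations above in software, here is a minimal sketch. All names and the exact obligation strings are illustrative assumptions, not terminology from the Act itself:

```python
# Illustrative sketch: mapping the Act's risk tiers to their obligations.
# Tier names and obligation lists are simplified summaries, not legal text.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # stringent requirements
    LIMITED = "limited"            # transparency obligations only


# Obligations per tier, as summarized in the table above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: [
        "regular activity tracking",
        "high-quality training datasets",
        "risk assessment and mitigation",
        "model documentation shared with authorities",
        "compliance evaluation",
    ],
    RiskTier.LIMITED: ["transparency obligations"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the summarized obligations for a given risk tier."""
    return OBLIGATIONS[tier]


print(obligations_for(RiskTier.LIMITED))  # ['transparency obligations']
```

A real compliance workflow would of course start from the Act's own system classifications; this sketch only shows how the tier-to-obligation structure of the table translates into a simple lookup.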
Types of AI Systems Covered by the EU AI Act
The EU AI Act covers various types of AI systems, including general-purpose models such as OpenAI’s ChatGPT and Google Gemini, generative technologies such as Midjourney, autonomous vehicles, medical devices, loan decision systems, educational scoring, and remote biometric identification, as well as banned practices such as cognitive behavioral manipulation, social scoring, predictive policing, and biometric categorization based on sensitive characteristics such as race or sexual orientation.
- General-purpose models
- Generative technologies
- Autonomous vehicles
- Medical devices
- Loan decision systems
- Educational scoring
- Remote biometric identification systems