In a landmark move to regulate artificial intelligence (AI) within the European Union, Members of the European Parliament (MEPs) have approved a provisional agreement on the Artificial Intelligence Act at the committee level.
This pivotal legislation, aimed at ensuring safety and compliance with fundamental rights, received robust support in the joint vote of the Internal Market and Civil Liberties Committees, passing with 71 votes in favor, 8 against, and 7 abstentions.
The committee endorsement comes three months after the provisional agreement was reached. The Artificial Intelligence Act is designed to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI applications. At the same time, it seeks to foster innovation and position Europe as a leader in the AI domain by establishing obligations for AI based on its potential risks and level of impact.
Key Provisions of the Artificial Intelligence Act
- Banned Applications: The Act prohibits certain AI applications that pose threats to citizens’ rights, including biometric categorization systems based on sensitive characteristics, untargeted scraping of facial images to create facial recognition databases, emotion recognition in workplaces and schools, social scoring, predictive policing based solely on profiling, AI that manipulates human behavior, and AI that exploits people’s vulnerabilities.
- Law Enforcement Exemptions: The use of real-time biometric identification systems by law enforcement is generally prohibited, except in narrowly defined situations that are limited in time and geographic scope and require prior judicial or administrative authorization.
- Obligations for High-Risk Systems: The legislation imposes clear obligations on high-risk AI systems that could significantly impact health, safety, fundamental rights, the environment, democracy, and the rule of law. High-risk uses include critical infrastructure, education, employment, essential services, law enforcement, migration and border management, justice, and democratic processes, and citizens will have the right to launch complaints about AI systems that affect their rights.
- Transparency Requirements: General-purpose AI systems, and the models they are based on, must meet transparency requirements, including compliance with EU copyright law for the content used in training. More powerful models posing systemic risks will face additional evaluation, risk assessment, and reporting obligations. Additionally, artificial or manipulated image, audio, or video content (“deepfakes”) must be clearly labeled.
- Support for Innovation and SMEs: The Act establishes regulatory sandboxes and real-world testing initiatives at the national level, offering SMEs and startups opportunities to develop and train innovative AI solutions before market placement.
Although the provisional agreement has been endorsed at the committee level, it awaits formal adoption in an upcoming plenary session of the European Parliament and final endorsement by the Council. Once fully adopted, the Artificial Intelligence Act will apply 24 months after its entry into force. Certain provisions follow different timelines: bans on prohibited practices will apply 6 months after entry into force, while obligations for high-risk systems will apply after 36 months.