Members of the European Parliament (MEPs) have provisionally agreed on the world’s first rulebook for artificial intelligence (AI), known as the AI Act. This legislation aims to regulate AI based on its potential for harm. The formalization of the Parliament’s position is imminent, with a committee vote scheduled for 11 May and a plenary vote in mid-June.
Key points from the Act include:
- General Purpose AI: The Act imposes stricter obligations on foundation models, i.e. AI systems trained without a specific intended purpose, such as the model underlying ChatGPT. Generative AI models would need to comply with EU law and fundamental rights, including freedom of expression.
- Prohibited practices: Certain AI applications deemed to pose unacceptable risks are banned. These include AI-powered tools for general monitoring of interpersonal communications, biometric identification software (with certain exceptions for serious crimes), purposeful manipulation, emotion recognition software in certain domains, and predictive policing for administrative offenses.
- High-risk classification: AI solutions that pose a significant risk of harm to health, safety, or fundamental rights will be classified as high-risk, requiring them to follow stricter rules on risk management, transparency, and data governance. AI used to manage critical infrastructure will also be deemed high-risk if it presents a severe environmental risk.
- Detecting biases: Providers of high-risk AI models can process sensitive data to detect negative biases, but under strict conditions: the processing must happen in a controlled environment, the data must not be shared with other parties, and it must be deleted after the assessment (see the illustrative sketch after this list).
- General principles: All AI models should adhere to principles including human agency and oversight, technical robustness and safety, privacy and data governance, transparency, social and environmental well-being, diversity, non-discrimination, and fairness.
- Sustainability of high-risk AI: High-risk AI systems and foundation models will have to comply with European environmental standards and keep records of their environmental footprint.
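To make the bias-detection point concrete, here is a minimal, hypothetical Python sketch of the kind of check a provider might run inside a controlled environment. The demographic-parity metric and all names (`demographic_parity_gap`, `audit_sample`) are illustrative assumptions on my part; the Act specifies the conditions for processing sensitive data, not any particular metric or implementation.

```python
# Hypothetical sketch of a bias check a high-risk AI provider might run.
# The Act prescribes conditions (controlled environment, no sharing,
# deletion after assessment), not a specific fairness metric.

from collections import defaultdict

def demographic_parity_gap(records):
    """Return the largest gap in positive-outcome rates across groups,
    plus the per-group rates.

    records: iterable of (sensitive_group, predicted_label) pairs, where
    predicted_label is 1 for a positive decision and 0 otherwise.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, label in records:
        totals[group] += 1
        positives[group] += label
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Synthetic audit sample; in practice this would be the sensitive data
    # processed only inside the controlled environment.
    audit_sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    gap, rates = demographic_parity_gap(audit_sample)
    print(f"per-group positive rates: {rates}, max gap: {gap:.2f}")
    # Per the Act's conditions, the sensitive records would be deleted
    # once the assessment is complete.
    del audit_sample
```

Demographic parity is only one of several group-fairness measures; a provider would plausibly run multiple metrics and document the results before deleting the data, in line with the Act's conditions.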
The EU AI Act Newsletter #28 offers up-to-date developments and analyses of the proposed EU artificial intelligence law.
The U.S., by contrast, has largely been standing still on enacting AI legislation, and much of what has been adopted has roots in EU legislation. Below is a high-level comparison of the EU and U.S. positions on AI regulation.
EU and U.S. Positions on AI Regulation: A Comparison
EU Approach:
- Comprehensive legislation tailored to specific digital environments
- New requirements planned for high-risk AI in socioeconomic processes, government use of AI, and regulated consumer products
- Emphasizes public transparency and influence over AI system design in social media and e-commerce
U.S. Approach:
- Highly distributed across federal agencies without new legal authorities
- Investments in non-regulatory infrastructure, such as AI risk management framework and evaluations of facial recognition software
- Risk-based in principle, but lacks a consistent federal approach to AI risks
Alignment and Misalignment:
- Conceptual alignment on risk-based approach, key principles of trustworthy AI, and importance of international standards
- Significant differences in AI risk management regimes, especially in socioeconomic processes and online platforms
Collaboration:
- EU-U.S. Trade and Technology Council: Successful collaboration on metrics, methodologies, and international AI standards
- Joint efforts in studying emerging AI risks and applications
Recommendations for Alignment:
- U.S.: Execute federal agency AI regulatory plans, design strategic AI governance with EU-U.S. alignment, establish legal framework for online platform governance
- EU: Create flexibility in sectoral implementation of the EU AI Act, refine the law to support future EU-U.S. cooperation
The Brookings Institution offers this framing of the U.S. approach:
The U.S. federal government's approach to AI risk management is characterized as risk-based, sectorally specific, and highly distributed across federal agencies. However, the development of AI policies in the U.S. has been uneven.
While there are guiding federal documents on AI harms, they have not produced a consistent approach to AI risks. Federal agencies have not fully developed the required AI regulatory plans; only a few have comprehensive plans in response to the requirements.
The Biden administration has shifted focus from implementing Executive Order 13859 to the Blueprint for an AI Bill of Rights (AIBoR), developed by the White House Office of Science and Technology Policy (OSTP).
The AIBoR endorses a sectorally specific approach to AI governance, relying on associated federal agency actions rather than centralized action. However, the AIBoR is nonbinding guidance.