New Regulations Impact U.S. Businesses and the AI Industry

As of August 1, 2024, the European Union’s AI Act has officially come into force, ushering in a new era of artificial intelligence regulation. The landmark legislation imposes significant compliance requirements with broad extraterritorial reach, backed by penalties of up to seven percent of global annual turnover. The Act’s focus on risk-based governance is expected to have a profound impact on U.S. companies involved in the development, use, and distribution of AI systems. Over the next 24 months, businesses will need to adapt to new rules aimed at mitigating risks, enhancing transparency, and ensuring the accuracy and security of AI systems.

Defining AI Under the New Legislation

The EU AI Act defines an "AI System" as a machine-based system designed to operate with varying levels of autonomy and capable of adapting after deployment. These systems infer from input data how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. This broad definition ensures that the Act applies across diverse technologies and industries, and it distinguishes AI from traditional software, which lacks adaptive learning capabilities. The law also covers general-purpose AI models, including ChatGPT and Meta’s Llama, which will be explored in future updates.

Classification and Requirements for High-Risk AI Systems

Under the EU AI Act, AI systems are categorized into four risk levels: Prohibited, High-Risk, Limited-Risk, and Minimal-Risk AI. The Act emphasizes "High-Risk AI Systems," which are detailed in over half of its 113 articles. AI systems are deemed "High-Risk" if they fall into one of two categories:

1. Regulated Products and Safety Components: AI systems that are either a regulated product under EU legislation (e.g., medical devices, vehicles, aircraft, toys, machinery) or a safety component of these products.
2. Specific Use Cases: High-Risk AI systems are also classified based on eight key areas listed in Annex III of the Act, including biometric identification, critical infrastructure, education, employment, essential services, law enforcement, migration, and judicial processes.

Implications for U.S. Companies

The EU AI Act applies to U.S. companies engaged anywhere in the AI value chain, whether they develop, use, import, or distribute AI systems within the EU market. It also reaches U.S. companies that use AI systems for purposes such as job screening or online proctoring where the system’s output is used in the EU. For example, a U.S. automaker incorporating AI for self-driving features in vehicles sold in the EU will be subject to the Act’s regulations.

Compliance Obligations for High-Risk AI Systems

The EU AI Act sets out extensive compliance requirements for High-Risk AI Systems to promote transparency and accountability. These include:

1. Developing a risk management system to identify and mitigate risks throughout the AI lifecycle.
2. Implementing robust data governance to ensure high-quality data use and validation.
3. Providing detailed technical documentation on the AI system’s purposes, design, and human oversight.
4. Maintaining records of AI functionalities and performance.
5. Ensuring transparency and clear instructions for users to interpret AI outputs.
6. Guaranteeing human oversight to protect health, safety, and fundamental rights.
7. Ensuring accuracy, robustness, and cybersecurity to prevent errors and external exploitation.

High-Risk AI systems categorized as regulated products will also need to complete a conformity assessment to certify compliance with the EU AI Act. AI providers must follow the relevant product conformity procedures as the Act’s integration with existing EU regulations progresses.

Future Outlook and Compliance Timeline

The EU AI Act took effect on August 1, 2024, and most requirements for High-Risk AI systems classified under "Specific Use Cases" will apply after August 1, 2026. Compliance requirements for High-Risk AI systems that are regulated products will follow after August 1, 2027, affecting a broad range of products, including medical devices and machinery.

Key Considerations for U.S. Companies

U.S. companies involved with AI should prepare for these regulatory changes by:

1. Evaluating AI use for both internal and external purposes.
2. Compiling a comprehensive inventory of AI use cases and associated suppliers.
3. Assessing whether these use cases fall under the "High-Risk" classifications of the EU AI Act or other relevant laws (see the sketch after this list).
4. Reviewing data quality and permissions to ensure compliance.
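To make the inventory and classification steps concrete, below is a minimal, hypothetical Python sketch of an internal AI use-case register with a rough first-pass risk triage. The names and logic here (AIUseCase, preliminary_tier, the paraphrased Annex III area strings) are illustrative assumptions, not terms from the Act, and a flag like this is a starting point for review, not a legal determination.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    """The Act's four risk levels (names are our own shorthand)."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Paraphrased Annex III areas, used here only as a simple lookup.
ANNEX_III_AREAS = {
    "biometric identification", "critical infrastructure", "education",
    "employment", "essential services", "law enforcement",
    "migration", "judicial processes",
}

@dataclass
class AIUseCase:
    """One entry in an internal inventory of AI use cases and suppliers."""
    name: str
    supplier: str
    annex_iii_area: Optional[str] = None  # e.g. "employment" for job screening
    is_regulated_product: bool = False    # e.g. medical device, vehicle
    output_used_in_eu: bool = False       # extraterritorial trigger

def preliminary_tier(uc: AIUseCase) -> RiskTier:
    """Rough first-pass triage only; not a legal classification."""
    if uc.is_regulated_product or uc.annex_iii_area in ANNEX_III_AREAS:
        return RiskTier.HIGH
    return RiskTier.MINIMAL  # refining Limited vs. Minimal requires counsel

inventory = [
    AIUseCase("resume screener", "Vendor A",
              annex_iii_area="employment", output_used_in_eu=True),
    AIUseCase("internal meeting summarizer", "Vendor B"),
]

for uc in inventory:
    print(f"{uc.name}: {preliminary_tier(uc).value} "
          f"(EU output: {uc.output_used_in_eu})")
```

A register along these lines also gives compliance teams a single place to track the suppliers and data permissions reviewed in steps 2 and 4.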

The enactment of the EU AI Act marks a significant shift in global AI regulation. The next 24 months will be crucial for U.S. companies to align with these new standards and adapt to the evolving regulatory landscape.

Reach out to our regulation experts for guidance on chemical and product regulatory compliance.