How the EU AI Act May Accelerate a Compliance Regime for U.S. Enterprises

The regulation’s risk-based hierarchy may affect organizations in the United States whose products and services reach people in the EU

Pending formal adoption by the European Parliament in April following a provisional agreement in late 2023, the final EU AI Act is expected to apply to any provider, deployer, or distributor of AI whose services or products reach the EU market. “These are broad-reaching requirements that are expected to apply to providers and users of AI systems outside the EU if the output of the AI system affects people located in the EU,” says Tim Davis, a principal with Deloitte Risk & Financial Advisory at Deloitte & Touche LLP.

Similar to the EU’s General Data Protection Regulation, which applies to entities around the world, the EU AI Act has a global reach, creating specific requirements and new obligations for organizations across sectors and throughout value chains that develop, place, or use AI in the EU. “The pending law is an important regulatory milestone in the technology sector, both for companies that provide digital platforms and for entities that use those platforms to help deliver their goods and services,” says Davis.

According to the European Commission, the EU AI Act is intended to provide for the safety and fundamental rights of people and businesses while strengthening investment and innovation across EU countries. Along with the EU’s Digital Services Act (DSA), which recently took effect, the EU AI Act adds pressure on entities to improve protection, safety, and fundamental rights for individuals in an increasingly digitalized society.

Risk Hierarchy

“The Act takes a risk-based approach to regulating AI, establishing progressively increasing restrictions on uses deemed to produce greater risk,” says Mosche Orth, a manager with the EU Policy Centre at Deloitte Global.

For example, AI applications that might be deployed in industrial settings to provide predictive maintenance insights are seen as having minimal or no risk, so they are permitted without restriction, says Orth. “Systems that interact with humans such as chatbots and systems that generate or manipulate content are regarded as having limited risks, so they would be permitted subject to information or transparency obligations,” he says.

Systems regarded as high risk under the provisional EU law are those that might be used in safety features, such as in aviation, cars, medical devices, and toys. High-risk applications also include those used for remote biometric identification; management of critical infrastructure; education; employment, including worker management and recruiting; essential private and public services and benefits, such as creditworthiness assessments or social welfare; law enforcement; and administration of justice and democratic processes. “These uses would be permitted but subject to compliance with conformity assessments that would be required before the AI use is put on the market,” says Orth.

The provisional law bans applications that are regarded as entailing unacceptable risk levels. These include uses that would lead to manipulation of human behavior, opinions, and decisions; classification of people based on their social behavior; and real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except in a limited number of cases.

Source: European Council
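The tiered structure described above can be sketched as a simple lookup. This is a hypothetical illustration only, not legal text; the tier names and obligations are paraphrased from the hierarchy described in this article:

```python
# Hypothetical sketch of the EU AI Act's four-tier risk hierarchy as
# described above. Tier names and obligations are paraphrased from the
# article and are illustrative, not legal definitions.
RISK_TIERS = {
    "unacceptable": "prohibited",                         # e.g., social scoring
    "high": "conformity assessment before market entry",  # e.g., biometric ID, hiring
    "limited": "transparency obligations",                # e.g., chatbots
    "minimal": "no additional restrictions",              # e.g., predictive maintenance
}

def obligations_for(tier: str) -> str:
    """Return the paraphrased compliance obligation for a given risk tier."""
    return RISK_TIERS[tier.lower()]
```

In practice an organization's inventory would map each AI system to one of these tiers before determining which obligations apply.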

For entities outside the EU, the law is expected to require an added step to compliance, says Orth. “Under the draft language, before placing a high-risk system or general-purpose AI model in the EU market, providers outside the EU must appoint an authorized representative established in the EU to give EU authorities access to someone with the necessary information on compliance of an AI system,” he says.

Additionally, general-purpose AI models that present systemic risks will be subject to further requirements, including risk assessments and adversarial testing, as well as reporting of serious incidents. “These models would also be under the direct supervision of a new AI Office at the European Commission,” he says.

Compliance Requirements

For permitted uses, the AI Act provisional language includes several key requirements, which apply depending on the risk classification of the AI system. The requirements for systems designated high-risk focus on data and governance, technical documentation to enable traceability of decision-making, human oversight, and transparency to provide information to users of AI-based systems. The requirements also include provisions intended to ensure levels of accuracy, robustness, and security.

Around the world, many governments and other organizations are actively developing and enacting AI governance laws and standards, attempting to keep pace with the rapid and varied growth of AI technologies. The efforts include broad legislation, regulations for specific applications, guidelines, and standards, producing a variety of regulatory frameworks. In addition to the EU AI Act, global approaches to AI governance, such as the voluntary G7 Code of Conduct and the Bletchley Declaration on AI Safety signed at the AI Safety Summit, join existing and emerging country-specific regulations and policies to produce a host of requirements and nonbinding guidelines globally. In the United States, the AI Executive Order issued in October 2023 directs federal government agencies to develop guidance and leading practices, but it is not a regulation or a policy.

“As the technology sector becomes a more regulated industry, companies operating in this space may face operational and cultural shifts while they establish governance, risk, and compliance processes over their everyday operations to a much greater degree than they have experienced historically,” Davis says. Organizations in health care and life sciences, financial services, and housing may also face heightened scrutiny due to evolving AI regulations.

The final requirements are expected to take effect in phases. Once the law enters into force, organizations will have six months to comply with rules on prohibited systems, 12 months for foundation models, 24 months for most other requirements, and 36 months for high-risk AI safety components.
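The phased deadlines can be computed from an entry-into-force date with simple month arithmetic. The sketch below is illustrative only; the starting date in the usage example is an assumption, and the phase lengths are those stated above:

```python
from datetime import date

# Hypothetical sketch: compute phased compliance deadlines from an assumed
# entry-into-force date. Phase lengths (in months) are those cited in the
# article; the entry-into-force date is not yet fixed.
PHASE_MONTHS = {
    "prohibited systems": 6,
    "foundation models": 12,
    "most other requirements": 24,
    "high-risk AI safety components": 36,
}

DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def add_months(start: date, months: int) -> date:
    """Add whole months to a date, clamping the day to the target month's length."""
    month_index = start.month - 1 + months
    year = start.year + month_index // 12
    month = month_index % 12 + 1
    max_day = DAYS_IN_MONTH[month - 1]
    if month == 2 and year % 4 == 0 and (year % 100 != 0 or year % 400 == 0):
        max_day = 29  # leap-year February
    return date(year, month, min(start.day, max_day))

def compliance_deadlines(entry_into_force: date) -> dict:
    """Map each compliance phase to its deadline date."""
    return {phase: add_months(entry_into_force, m) for phase, m in PHASE_MONTHS.items()}
```

For example, if the law hypothetically entered into force on August 1, 2024, rules on prohibited systems would apply from February 1, 2025, and high-risk safety-component rules from August 1, 2027.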

Actions to Prepare

U.S.-based entities that might be affected by the new EU AI Act when it is enacted can consider at least four critical actions to begin preparing, says Jennifer McMillan, a senior manager in the Technology, Media, and Entertainment Regulatory and Legal Support practice at Deloitte & Touche LLP. “These actions are focused on preparing not only for the EU AI Act, but also other laws that are likely to follow as many jurisdictions globally develop or consider developing AI-related regulations,” she says.

Take stock. Because AI is not new to many organizations, leaders can begin by gathering information on existing AI ethics and governance processes. This may include systems, tools, and frameworks used in various parts of the enterprise, both functionally and geographically, and across domains such as security, privacy, trust, and safety.

The process can include identifying systems across the organization to create an inventory, understanding each system’s use, and evaluating how each might be classified under the EU Act’s risk hierarchy. It can also include an initial assessment of what governance, reporting, or other compliance requirements might apply to each use, including whether the use might be prohibited. Organizations that are affected by the DSA can consider similar activities that have been taken or are soon to be taken to comply with those requirements.

Plan and mobilize. To manage AI risks and comply with not only the currently evolving regulations but also newer ones that are likely to emerge, it may be important for organizations to develop a coordinated, integrated, and centralized strategy and approach. For example, processes for monitoring the development of new AI-focused regulations and managing emerging obligations may be key. A rationalized obligation management register may become a critical tool for entities to continually learn about, prepare for, and comply with existing and evolving regulations as they develop.

A rationalized register can help organizations scale and manage their approach to AI governance and risk management globally to enable compliance with multiple regulations across jurisdictions, continually tracking new requirements, mapping these requirements to risks and controls, and enabling processes across the program.
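A register of this kind might be modeled minimally as follows. This is a hypothetical sketch of mapping regulatory requirements to risks and controls; all class names, field names, and example entries are illustrative assumptions, not a prescribed design:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a rationalized obligation management register:
# each entry maps a regulatory requirement to its risk classification and
# internal controls. Field names and entries are illustrative assumptions.
@dataclass
class Obligation:
    regulation: str                                  # e.g., "EU AI Act"
    requirement: str                                 # the specific requirement
    risk_tier: str                                   # classification under the regulation
    controls: list = field(default_factory=list)     # mapped internal controls

class ObligationRegister:
    """Central register for tracking obligations across multiple regulations."""

    def __init__(self):
        self._entries = []

    def add(self, obligation: Obligation) -> None:
        self._entries.append(obligation)

    def by_regulation(self, regulation: str) -> list:
        """List all tracked requirements under one regulation."""
        return [o for o in self._entries if o.regulation == regulation]
```

The value of centralizing entries this way is that new requirements from any jurisdiction land in one place, where they can be mapped to existing risks and controls rather than handled regulation by regulation.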

Assign cross-functional responsibility. A centralized and scalable approach often necessitates a dedicated, cross-functional team that assumes responsibility for supporting risk management, compliance, and decision-making. The team can leverage existing processes as a starting point, then build out operating and governance models, informed by historic challenges, to establish new processes, controls, and governance forums for managing emerging risks and compliance requirements. Early in its efforts, the team can engage senior leaders across the enterprise to build consensus on the importance of a comprehensive approach and to secure top-down support for a culture of compliance.

Adopt a framework. With an understanding of existing practices, a centralized approach, and a cross-functional team established, the organization can consider adopting a framework for how it intends to manage AI risks, ethics, and compliance across the enterprise and strengthen trust with customers. Some frameworks already exist that organizations can adopt or customize to their own circumstances to set guardrails and develop a rationalized approach for managing AI-related risks and controls comprehensively.

Entities are likely well-served by choosing a trusted framework grounded in their existing principles, while also taking into consideration externally published frameworks and regulatory requirements that can help them manage AI-related risks and compliance, both as they exist today and as they evolve across sectors, geographies, and domains in the coming years.

“Efforts to regulate AI are in early stages globally, and the advent of generative AI has accelerated these efforts,” says Davis. “Many companies are likely to find even if they are only making limited use of AI that their uses may be subject to the EU AI Act, and this regulation is the tip of the spear in terms of similar requirements that are expected to emerge in jurisdictions around the world.”

—by Tammy Whitehouse, senior writer, Executive Perspectives in The Wall Street Journal, Deloitte Services LP

Published on Feb 13, 2024 at 3:00 PM

This publication contains general information only and Deloitte is not, by means of this publication, rendering accounting, business, financial, investment, legal, tax, or other professional advice or services. This publication is not a substitute for such professional advice or services, nor should it be used as a basis for any decision or action that may affect your business. Before making any decision or taking any action that may affect your business, you should consult a qualified professional advisor.

Deloitte shall not be responsible for any loss sustained by any person who relies on this publication.

About Deloitte

Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee (“DTTL”), its network of member firms, and their related entities. DTTL and each of its member firms are legally separate and independent entities. DTTL (also referred to as “Deloitte Global”) does not provide services to clients. In the United States, Deloitte refers to one or more of the US member firms of DTTL, their related entities that operate using the “Deloitte” name in the United States and their respective affiliates. Certain services may not be available to attest clients under the rules and regulations of public accounting. Please see www.deloitte.com/about to learn more about our global network of member firms.

Copyright © 2024 Deloitte Development LLC. All rights reserved.