European AI Act: A Necessary Framework, but Insufficient in the Global Race for AI?

Article AI Privacy 21.03.2024
By Adrien Hug-Korda

An analysis by Adrien Hug-Korda, Director of Privacy & Compliance and Data Protection Officer, of the AI Act debated in the European Parliament during the plenary session of March 13, 2024

Key Takeaways

  • The AI Act is expected to come into force in the coming weeks and will regulate the marketing and deployment of AI systems through a risk-based approach.
  • The regulation bans AI systems that are incompatible with the fundamental values of the Union and provides a more flexible framework for technologies deemed low-risk.
  • Between these two extremes, the text imposes significant obligations on companies involved in the distribution and use of AI systems classified as “high-risk.”
  • This legislation marks a strategic turning point for the EU, asserting its role as a pioneer in establishing international standards for responsible digital development despite a considerable technological lag in AI.
  • However, the text suffers from ambiguities, and its implementation by companies could prove challenging.

 

The conclusion of the legislative process initiated in 2021 is near: the AI Act is likely to come into force in a few weeks. The text, which reflects the European Union’s ambition to create the first international regulatory framework for the development and use of artificial intelligence, aims primarily to regulate the marketing of AI systems in Europe through a risk-based assessment built around the protection of individuals and their data, transparency, security, and traceability. Thus, AI systems whose purpose is fundamentally incompatible with the core values of the Union will simply be banned, while those serving a legitimate purpose but likely to pose significant risks to health, safety, and fundamental rights will require substantial safeguards from their developers and, to a lesser extent, from user companies.

At the same time, AI systems posing only a “limited risk,” such as chatbots, deepfakes, or generative AI—technologies capable of generating particularly realistic text, images, videos, and sounds—will be subject to transparency obligations to mitigate the risk of user confusion.

Is this text suitable for the current market situation? Does it contain flaws? Does it represent a hindrance?

 

Just in time for the AI Act…

The adoption of the AI Act comes at a strategic moment. Confronted with the significant technological lead of the United States in the field of AI, the EU has an imperative to quickly establish a robust regulatory framework with extraterritorial reach, in order to assert its position as a leader in setting international standards for responsible digital development. The risk-based approach of the AI Act, coupled with various provisions for future updates—particularly through the extension of the list of AI systems deemed “high-risk”—makes the regulation relatively future-proof. Furthermore, the text avoids certain pitfalls; in particular, it allows the use of “sensitive” data—within the meaning of the GDPR—to identify and counter potential algorithmic biases.

The fact that the text focuses on the marketing and deployment of AI systems while providing a relative exemption for research and development phases—along with reduced fines for smaller entities in case of non-compliance—will also allow companies to continue innovating without imposing an excessive administrative burden. Additionally, the obligations of the AI Act are framed relatively generically, offering companies some flexibility in their practical compliance, notably allowing them to leverage existing processes such as those related to personal data protection.

 

…provided that the flaws are anticipated

However, the text approved by the EU Council is far from perfect. Although it is technically still a draft regulation, the final version is unlikely to change significantly before its publication in the Official Journal in a few weeks. In particular, several key definitions lack precision and are open to interpretation, such as that of an “artificial intelligence system,” which relies primarily on the ambiguous notion of “level of autonomy,” or, even more unclear, that of a “general-purpose AI model,” a late addition at the Parliament’s initiative, which refers to any “AI model” exhibiting “significant generality” and capable of competently executing a “broad range” of tasks—without the regulation clarifying the meaning of any of these terms.

Similarly, the classification criteria do not consistently allow for a definitive determination of whether an AI system should be considered “high-risk.” Some studies have even shown that nearly 40% of AI applications commonly used by businesses could fall into a “gray area.” Furthermore, the sometimes awkward drafting of the regulation introduces inconsistencies and potential loopholes, enabling actors, particularly those based in third countries, to evade the application of the AI Act in a manner that is likely contrary to the spirit of the text.

It is to be hoped that some of these imperfections will be corrected before the publication of the final version of the regulation. Otherwise, it will be essential to wait for the competent national authorities—or the European AI Board—to adopt clear guidelines to address the gaps in the text… Nevertheless, since the AI Act allows member states to designate authorities with heterogeneous profiles—and therefore potentially divergent perspectives—a uniform approach at the Union level is far from guaranteed.

Finally, from a political perspective, while the AI Act clearly reflects the EU’s desire to establish itself as a leader in regulating digital practices, the projected timeline for the text, which will only be fully applicable thirty-six months after its entry into force, or in 2027 at the earliest, provides ample time for other major powers to catch up. In particular, the United States sent a strong signal with the publication of Executive Order 14110 on October 30, 2023, which, while far from providing a framework as comprehensive as the European AI Act, imposes obligations on American agencies starting this year.

 

By Adrien Hug-Korda

Director of Privacy & Compliance, Data Protection Officer