Decoding the EU AI Act for Business

Daniel: Hello and welcome to this latest Reskill masterclass from the HEC Paris campus. I'm the school's Head of Research Communication, Daniel Brown. Today it's our great pleasure to welcome Pablo Baquero, Assistant Professor in the Law and Tax Department at HEC Paris. Pablo, it's a real pleasure to welcome you to this masterclass.

Pablo: My pleasure.

Daniel: Pablo is going to discuss the impact of the EU AI Act on companies operating both within and beyond Europe. After your 20-minute presentation, Pablo, you're going to answer the numerous questions that our viewers have been sending in on this very topical subject. Well, Pablo, the floor is yours.

Pablo: Thank you very much, Daniel. It's a pleasure to be here to discuss with you the impact of the European Union Artificial Intelligence Act on businesses.

There is much concern in the business world today about the potential chilling effects that regulation might have on innovation, and particularly on technology adoption by firms. We ran a poll prior to this session and asked you the following question: does the European AI Act risk slowing Europe down compared to the US and China in the development of AI technologies? The largest group of you said yes, the AI Act is too burdensome and slow. Thirty-six percent said that this is actually a false choice, and that innovation and regulation can align. Seventeen percent said that even if the regulation slows development down, the European Union should give precedence to its ethical values and enforce those rules. As you can see, opinions are very divided. Conflicting views on this topic are not surprising at all, because the Artificial Intelligence Act of the European Union is the first comprehensive framework specific to artificial intelligence created by a major regulator.

The AI Act has significant impacts on businesses, some of which we will discuss throughout this session, but it is not the only piece of legislation in the EU governing artificial intelligence. There are others, such as the Digital Services Act, the Digital Markets Act, and the General Data Protection Regulation, among other regulations and directives. Still, the EU AI Act is often considered the centerpiece of AI regulation in Europe, and it has influenced how other countries draft and design their own AI laws.

The AI Act entered into force in August 2024. Some of its rules have already become applicable, particularly those on prohibited AI systems, which are banned in the European Union. Many others are still to come into effect: in August 2025, the rules on general-purpose artificial intelligence will apply; in 2026, the rules on high-risk AI systems will become applicable; and in 2027, additional rules, still focused on high-risk AI systems, will come into force. Implementing the Act is thus a complex, gradual process.

Precisely because the Act is entering into force gradually, the debate about regulation discouraging innovation has become pressing. In the European Union, this discussion is combined with calls for greater competitiveness of the European bloc, particularly because there is a sense that Europe may be falling behind the United States and China in technology development. Before we deepen this discussion, we must understand the regulatory approach of the AI Act and the issues it was designed to address.
Perhaps the best way to see the problems the Act seeks to address is to look at major cases of harm caused by AI technologies, cases discussed in the news and in the academic literature that were likely on the minds of EU regulators when they drafted the AI Act.

The first case involves Clearview, a company that developed software sold to private security firms and police departments around the world. The software scraped billions of publicly available images containing individuals' facial features and compared them with images of suspects or persons of interest to find matches. As you might imagine, this technology has been very controversial; even before the AI Act, it was sanctioned in countries such as France and the UK. A major concern was the lack of data protection: by collecting those images without authorization, the company violated privacy and data protection rules. This is one of the major harms AI technologies can cause, the breach of privacy.

The second case involves an algorithm deployed by the UK government during the pandemic to predict student performance and allocate university offers. Normally this assessment would be based on A-level exams, but since students could not sit the tests due to social distancing, the UK government used an algorithm to predict their results. When the results came out, there was controversy: the algorithm was found to be biased against students from public schools and to favor students from affluent areas and private schools. This is a second major problem AI technologies can cause: discrimination or bias against certain groups.

The third case involves the Horizon software used by the UK Post Office. The software monitored accounting across local branches managed by third parties, looking for errors or fraud. Based on its flawed results, many branch managers and employees were wrongly accused, prosecuted, and punished. The consequences were devastating: some went to prison, others suffered depression, and there were even cases of suicide, all due to the software's errors. Lack of accuracy and reliability is therefore a third major problem AI technologies can cause.

These cases illustrate the main issues that likely motivated the European legislator to create binding rules in the AI Act. The AI Act is structured as a product safety regulation: it imposes requirements on products and services implementing AI systems so that compliance is ensured before they are placed on the market and monitored throughout their lifecycle. The regulation harmonizes rules across EU member states to facilitate free trade and innovation while protecting citizens' fundamental rights; balancing these two goals is one of the Act's core challenges.

The AI Act applies to various actors in the AI supply chain: developers or providers of AI technologies, companies deploying them in products and services, distributors, and importers. Importantly, it also applies to foreign companies if their AI systems are used on the European market.

Since the Act regulates artificial intelligence, we must understand how it defines the term. Article 3(1) defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." This is a broad definition, arguably covering simple software and not only advanced machine learning systems.
But even simple systems can create risks, and the Act adopts a risk-based regulatory approach: different AI systems are subject to different rules depending on their level of risk. Some are strictly regulated, while others face no requirements because they pose minimal risk. There are four levels of risk.

The first level comprises prohibited AI systems, those presenting unacceptable risks and banned from the EU market, such as algorithms predicting criminal behavior or real-time facial recognition in publicly accessible spaces. The second level covers high-risk systems, which may be placed on the market but must meet strict requirements before doing so; most provisions of the AI Act focus on these systems. Examples include algorithms used in education or vocational training to determine access to universities, algorithms deciding eligibility for public benefits, and algorithms used in hiring. The third level covers limited-risk systems, which are subject mainly to transparency requirements: users interacting with such systems must be informed that they are dealing with an AI system rather than a human. The fourth level comprises minimal-risk systems, which face no obligations under the AI Act, although industry codes of conduct may guide their development.

High-risk AI systems must fulfill several requirements before entering the market. They must comply with data governance rules, ensuring that data is lawfully collected and that sensitive personal data is not used unlawfully. They must avoid discrimination and bias based on factors such as sex, religion, ethnicity, nationality, sexual orientation, or political views. They must ensure transparency, explainability, and reliability. Finally, they must allow for human oversight and include safeguards for accountability and cybersecurity.

Many of these requirements are broad, generic, and abstract. Implementing concepts such as transparency, non-discrimination, and accuracy in practice requires technical standards, and standardization bodies around the world, and in Europe in particular, are developing standards that specify how these general concepts should be implemented. In the European Union, three recognized standardization bodies (CEN, CENELEC, and ETSI) are involved in creating AI governance standards. There is an ongoing discussion about the role these institutions play: since they are mostly composed of industry actors, questions arise about transparency and about civil society's participation in producing the standards. This affects the legitimacy both of the standards themselves and of the AI Act's general functioning.

The AI Act also establishes penalties for companies that violate its obligations, most importantly fines, whose amount depends on the type of violation. For businesses today, particularly those in technology or wishing to adopt AI, it is crucial to understand the level of risk their AI technologies pose, the regulatory requirements they must comply with, and the potential penalties for violations. It is important to note that this is still a developing field, with many technical standards yet to be produced that will determine how the AI Act functions in practice.

Daniel: Many questions have come in since we invited our viewers to submit them on this topic. Marie asked about generative AI systems and whether they are covered by the AI Act.

Pablo: This is an extremely relevant question. When the AI Act was first drafted in 2021, it contained no specific rules on generative AI.
At that point, ChatGPT (then based on GPT-3.5) had not yet been launched, and widespread adoption of these models came later. After its launch in late 2022, ChatGPT reached an estimated 100 million users in about two months; generative AI had become too significant to ignore, and regulators added specific rules for these systems in subsequent versions of the Act. These rules refer to general-purpose AI systems.

The concern with general-purpose AI systems is that they can be used for almost anything: they can produce creative content, provide guidance on safety issues, or generate legal briefs, depending on the task. This makes it difficult to assess the risks of each use. General-purpose AI systems are subject to the general rules of the risk-based framework mentioned earlier: if they qualify as high-risk systems, they must comply with the corresponding requirements. In addition, all generative or general-purpose AI systems must comply with copyright rules, including ensuring that their training data is legally sourced, and must provide a summary explaining the sources of that data and transparency on how the system works. Systems presenting a systemic risk carry further obligations, such as model testing and reporting incidents to regulators.

Daniel: Pedro from Brazil asked about consumer rights under the AI Act.

Pablo: Users or consumers affected by AI technologies have limited rights under the AI Act. They can complain to the market surveillance authority and request action against companies, and they can request transparency and clarification regarding AI outputs. However, the AI Act does not give consumers additional rights to claim compensation. Compensation must be pursued under other legal mechanisms, such as the EU Product Liability Directive, which treats software as a type of product and covers defects and harms arising from AI systems. For violations of fundamental rights, personality rights, or discrimination, individuals must turn to their national legal systems. An AI Liability Directive was proposed in the EU but withdrawn in February 2025, so liability issues are not yet fully harmonized.

Daniel: Amadou from Niger asked whether the AI Act could deter innovation.

Pablo: This is a complex question. The detailed obligations of the AI Act are still being defined, and the technical standards that implement requirements such as transparency, non-discrimination, and accuracy are still in development. The risk-based approach of the Act aims to balance consumer protection and innovation. The evidence suggests that Europe's slower pace of innovation owes more to structural factors than to regulation: difficulties in attracting international talent, strict insolvency laws, and limited venture capital. A culture that is less tolerant of business failure than that of the United States also plays a role. Startups in Europe face higher insolvency risks and fewer opportunities to rebuild after failure, and this can discourage innovation more than AI regulation itself.

Daniel: Francois asked whether the penalties under the AI Act are sufficient.

Pablo: The AI Act establishes significant fines for violations, up to 7 percent of a company's global annual turnover for deploying prohibited AI systems. To give a sense of scale, for a company with 10 billion euros in annual turnover, that would mean a fine of up to 700 million euros. While this can be substantial, for the largest companies the profits from a violation may still outweigh the fine. Complementary pressures include reputational consequences and enforcement under other regulations such as the GDPR. As the AI Act is implemented and enforced, adjustments may be needed to ensure compliance and responsible AI development.
Daniel: This masterclass and Q&A session have given us a comprehensive overview of the AI Act: its complexity, its requirements, the developments still underway, and its implications for businesses and consumers. Questions remain, and the discussion will continue as the field evolves. Thank you, Pablo Baquero, for a rich and informative exchange on the EU AI Act. To our viewers, please keep sending in your questions; Pablo will be happy to answer them and provide further insights into this complex topic.