With the AI Act, the European Union (EU) has passed the world’s first comprehensive law to regulate artificial intelligence (AI). The provisions establish uniform rules for the AI market in all EU member states. In this way, the EU wants to ensure that AI is always developed and applied in accordance with the values, fundamental rights and principles of the EU. The AI Act pursues a dual objective: to promote innovation and at the same time provide legal certainty. Above all, the Act is intended to ensure a high level of protection for fundamental rights, safety, health and the environment against the potentially harmful effects of AI systems.

 


 

Who is affected by the AI Act?

The AI Act affects almost all companies operating in the EU. This is because the provisions not only apply to the providers of AI systems, but are also binding for the users of such systems. The regulations apply whenever AI-based systems are used within the EU – regardless of where the operators are based.

 

Companies must therefore take a closer look at the requirements in the medium term, especially if initial use cases are already being tested or even used productively. Otherwise they face avoidable risks, additional costs and follow-up investments – and, in the long term, possibly even legal consequences.

 

In order to prepare for the AI Act, companies should catalog their AI use cases and categorize them according to the risk classes defined in the Act. This makes it easier to determine the associated legal obligations, and risks can be identified and minimized systematically. Applications that the AI Act will prohibit should be phased out as soon as possible, as the ban comes into effect just six months after the AI Act enters into force (see section “When does the AI Act apply?”).

 

What provisions does the AI Act contain?

The use of artificial intelligence is associated with social and environmental benefits and can give companies and the European economy as a whole a decisive competitive edge. Effective AI measures prove to be particularly beneficial for issues with a major impact on the common good – such as climate change, the environment and health – as well as for the public sector and the areas of finance, mobility and agriculture. Key outcomes include more accurate predictions, improved processes, optimized use of resources and personalized services.

 

In order to minimize or completely eliminate the potential risks associated with AI, the AI Act sets out strict rules for its use. The specific requirements that an AI system must meet depend primarily on its risk class. The AI Act defines four risk classes:

  • Unacceptable risk

  • High risk

  • Limited risk

  • Minimal risk

 

The greater the potential risk associated with the use of an AI system, the more extensive the associated legal requirements. Applications that are not expressly prohibited and are not considered high-risk remain largely unregulated. This blog post can only provide a rough overview of the provisions of the AI Act. The full details and any special provisions should be looked up directly in the legal text.
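
To make this concrete: the risk-based logic lends itself to a simple triage of an internal AI use-case inventory, as recommended above. The following sketch is purely illustrative – the four risk classes come from the AI Act, but the `UseCase` structure, the heavily abridged obligation summaries and the example classifications are simplified assumptions for illustration, not legal advice.

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    UNACCEPTABLE = 0  # prohibited practices
    HIGH = 1          # strict requirements plus conformity assessment
    LIMITED = 2       # transparency obligations
    MINIMAL = 3       # no mandatory requirements

# Strongly abridged obligation summaries (illustrative, not exhaustive).
OBLIGATIONS = {
    RiskClass.UNACCEPTABLE: "Prohibited - phase out before the ban takes effect",
    RiskClass.HIGH: "Data quality, human oversight, documentation, conformity assessment",
    RiskClass.LIMITED: "Disclose AI interaction, document training data and tests",
    RiskClass.MINIMAL: "No legal requirements; voluntary codes of conduct possible",
}

@dataclass
class UseCase:
    name: str
    risk_class: RiskClass

def triage(catalog: list[UseCase]) -> None:
    # List the most heavily regulated use cases first.
    for uc in sorted(catalog, key=lambda u: u.risk_class.value):
        print(f"{uc.name}: {uc.risk_class.name} -> {OBLIGATIONS[uc.risk_class]}")

triage([
    UseCase("Spam filter", RiskClass.MINIMAL),
    UseCase("Customer service chatbot", RiskClass.LIMITED),
    UseCase("CV screening for recruiting", RiskClass.HIGH),
])
```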

 

Unacceptable risk

The AI Act prohibits AI systems that are considered a clear threat to the safety, rights or livelihood of people. This includes, for example, systems that carry out large-scale facial recognition in public spaces (with exceptions for law enforcement agencies), technologies for recognizing emotions in the workplace and in education, and the development of social scoring systems such as the one established in China. The ban also extends in particular to practices with the potential to manipulate people beyond their awareness and to negatively influence the behavior of particularly vulnerable groups such as children and people with disabilities.

 

High risk

High-risk AI applications must meet strict requirements – in terms of the data used, security, functionality, documentation as well as quality and risk management. Among other things, it must be ensured that the training data does not put certain groups at a disadvantage. In addition, decisions made by AI systems must always be monitored by humans. Providers of high-risk AI systems are obliged to carry out a conformity assessment and issue a declaration of conformity.

 

Limited risk

According to the definition in the AI Act, AI systems that interact with humans pose a limited risk. Chatbots are a typical example. Anyone offering such applications is subject to a transparency obligation: a natural person must be informed that they are interacting with an AI system. In addition, technical documentation of the training data and test procedures is required, and compliance with applicable copyright law must be demonstrated.

 

Minimal risk

AI systems that cannot be assigned to any of the other risk categories are classified as “minimal risk”. The spectrum of applications ranges from spam filters to systems for the predictive maintenance of machines. Such AI applications do not have to meet any legal requirements. However, companies have the option of defining and applying voluntary rules of conduct for this type of AI system.

 

Special rules for General Purpose AI (GPAI)

In addition to the various risk classes, the AI Act also defines specific regulations for so-called General Purpose AI (GPAI), i.e. AI models with a general purpose. These are AI systems that can be used for a wide variety of tasks – for example to generate content such as text or images – either as stand-alone systems or as an integral part of other systems.

 

GPAI models are generally regarded as models with limited risk and must therefore comply with the applicable transparency obligations. In addition, the AI Act introduces the category “GPAI models with systemic risk”. This includes GPAI models with a high potential impact, measured in terms of the cumulative computational effort required for training. They are subject to additional obligations regarding risk management and cyber security.
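
The Act pins down “high potential impact” with a compute threshold: a GPAI model is presumed to pose systemic risk if the cumulative compute used for its training exceeds 10^25 floating-point operations (a threshold the Commission can adjust over time). A minimal sketch of that presumption check – the function name and inputs are chosen here purely for illustration:

```python
# Presumption threshold from the AI Act: cumulative training compute
# above 10^25 FLOPs marks a GPAI model as having systemic risk.
SYSTEMIC_RISK_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Illustrative check only; the legal classification involves further criteria."""
    return training_flops > SYSTEMIC_RISK_FLOPS

# A model trained with roughly 5e25 FLOPs would fall under the additional
# risk-management and cybersecurity obligations.
print(presumed_systemic_risk(5e25))  # True
print(presumed_systemic_risk(3e24))  # False
```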

 

What else does the AI Act stipulate?

The AI Act provides for the establishment of a dedicated AI authority within the European Commission to monitor compliance with the rules. At national level, the relevant authorities will ensure that the provisions are complied with. They meet regularly in an EU committee to ensure a uniform interpretation of the law in the member states. Consumers have the right to complain to the authorities about inappropriate use of AI by companies.

 

Member states are obliged to set up so-called regulatory sandboxes in which providers can test AI innovations during development. The law is intended to create the conditions for German and European companies and start-ups to be able to operate on an equal footing with the strong international players in the field of artificial intelligence.

 

How are violations of the AI Act punished?

Violations of the provisions of the AI Act can result in severe penalties. The use of AI systems prohibited in the EU (“unacceptable risk”) can incur fines of up to 35 million euros or 7% of annual global turnover. Companies that do not comply with their legal obligations can expect fines of up to 15 million euros or 3% of turnover, and those who provide incorrect information about their AI model face fines of up to 7.5 million euros or 1.5% of turnover. Lower penalties are planned for start-ups and medium-sized companies.
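
For companies, the fixed amount and the turnover percentage are alternative caps; under the adopted text the higher of the two is reported to apply (for small and medium-sized enterprises, the lower). A small worked sketch under that assumption – the function is illustrative, not a legal calculator:

```python
def fine_cap(turnover_eur: float, pct: float, fixed_eur: float, sme: bool = False) -> float:
    """Upper bound of a fine tier: percentage of worldwide annual turnover
    versus the fixed amount - whichever is higher (lower for SMEs)."""
    by_turnover = pct * turnover_eur
    return min(by_turnover, fixed_eur) if sme else max(by_turnover, fixed_eur)

# Prohibited-practice tier (7% / 35 million euros) for a company with
# 2 billion euros annual turnover: 7% of 2e9 = 140 million euros > 35 million.
print(fine_cap(2_000_000_000, 0.07, 35_000_000))  # 140000000.0
```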

 

When does the AI Act apply?

The European Parliament adopted the AI Act on March 13, 2024. Formal approval by the Council of the European Union, in which the governments of the 27 member states are represented, is still pending. The AI Act will then be published in the Official Journal of the EU and will officially enter into force 20 days later.

 

There is a transitional period of 24 months before most of the provisions become binding. However, some provisions will apply earlier: the ban on AI systems with unacceptable risk applies just six months after the AI Act comes into force, and the regulations on General Purpose AI models take effect after twelve months. Given these deadlines, companies are well advised to engage with the content of the AI Act – and the obligations that arise from it – as early as possible.
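
The staggered deadlines are easy to project once the date of entry into force is known. A sketch with an assumed, purely hypothetical entry-into-force date:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months (result pinned to the 1st for simplicity)."""
    total = d.year * 12 + (d.month - 1) + months
    year, month0 = divmod(total, 12)
    return date(year, month0 + 1, 1)

entry_into_force = date(2024, 8, 1)  # hypothetical placeholder date

milestones = {
    "Ban on unacceptable-risk AI systems": add_months(entry_into_force, 6),
    "Rules for General Purpose AI take effect": add_months(entry_into_force, 12),
    "Most remaining provisions become binding": add_months(entry_into_force, 24),
}
for label, deadline in milestones.items():
    print(f"{label}: {deadline.isoformat()}")
```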

 
