The AI Act: new regulations and their significance for companies
1. Who does the AI Act apply to?
The AI Act applies primarily to providers (organisations that develop an AI system and place it on the market or put it into service) and operators (organisations that use AI systems under their own authority in a professional context; the regulation itself calls them "deployers") of AI systems that are placed on the market or used in the EU, regardless of whether those organisations are based in the EU or in a third country. As a rule, open-source AI systems that do not fall into the two highest risk categories are exempt (see below). In addition, importers and distributors are subject to obligations under the AI Act.
In short, the AI Act imposes obligations on all actors involved in the development, placing on the market, import, distribution or use of AI systems.
2. Classification of AI systems
The AI Act essentially follows a risk-based approach. It categorises so-called single-purpose AI systems (AI systems with a specific intended use) into four risk categories based on their area of application, each of which carries specific prohibitions or compliance and information requirements (a simplified sketch of this tiered logic follows the list):
Unacceptable risk (Art. 5): Systems that are considered a clear threat to people's rights and safety will be prohibited in the European Union in the future. This includes, for example, social scoring systems, manipulative AI as well as emotion recognition in the workplace and in educational institutions.
High risk (Art. 6-27): High-risk systems, which are the main focus of the regulation, are considered to have a significant impact on the rights and safety of citizens. They are subject to a variety of compliance requirements (see below for details). High-risk systems are divided into two categories:
- Systems for products that are subject to EU safety regulations, e.g. toys, medical devices or lifts (Art. 6(1) in conjunction with Annex I).
- Systems that are operated in certain areas, for example in critical infrastructure, human resources management, education or medical diagnosis (Art. 6(2) in conjunction with Annex III).
Limited risk (Art. 50): The risks of these systems stem from a lack of transparency, which is why specific disclosure obligations apply to them. For example, users must be informed before interacting with a chatbot that they are dealing with an AI. Providers must also ensure that AI-generated content, in particular deepfakes, can be identified as such.
Minimal risk (Art. 4, 95): Many current AI applications, such as AI-powered video games, spam filters and recommendation services, fall into this category. No special regulations apply to them and they can continue to be used freely.
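To make the tiered logic tangible for technical teams, the following is a minimal, purely illustrative Python sketch. The tier names mirror the four categories above, but the keyword mapping and the `classify` helper are hypothetical simplifications of ours: the actual legal test depends on a system's intended purpose and the Act's Annexes, and nothing here substitutes for a legal assessment.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the AI Act's risk-based approach."""
    UNACCEPTABLE = "prohibited (Art. 5)"
    HIGH = "strict compliance duties (Art. 6-27)"
    LIMITED = "transparency duties (Art. 50)"
    MINIMAL = "no specific obligations (Art. 4, 95)"

# Hypothetical, heavily simplified mapping of example use cases to tiers.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "workplace emotion recognition": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "medical diagnosis support": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up an example use case; unknown cases default to MINIMAL.

    A real assessment must never default silently: systems that are not
    clearly classified need individual review against Art. 5, 6 and 50.
    """
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)

for case in EXAMPLE_TIERS:
    print(f"{case}: {classify(case).value}")
```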
3. Requirements for providers and operators of high-risk AI systems
The majority of the obligations in the AI Act relate to high-risk AI systems, whose development and use will in future be subject to a wide range of compliance requirements. These obligations include, among others, the following (a simple status-tracking sketch follows the list):
Conformity assessment and registration: Before being placed on the market, the system must undergo a conformity assessment and must also be registered in an EU database.
Risk management: Providers must establish an appropriate risk management system and carry out a risk assessment over the entire life cycle of the AI system.
Data governance: A high quality of the training, validation and test data sets must be ensured in order to avoid bias and discriminatory results.
Technical documentation: Before the system is placed on the market or put into operation, technical documentation must be drawn up from which the competent authorities can clearly and comprehensibly see that the system fulfils the requirements of the regulation.
Human oversight: High-risk AI systems must be designed in such a way that effective supervision by a human is ensured for the duration of their use.
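In practice, companies often track these obligations per system. The following sketch shows one conceivable way to do so in Python; the field names paraphrase the obligations listed above and are an illustrative selection of ours, not an exhaustive or authoritative legal checklist.

```python
from dataclasses import dataclass

@dataclass
class HighRiskCompliance:
    """Per-system status record for the core provider obligations above.

    Illustrative only: the selection and naming of fields is ours,
    not a format prescribed by the AI Act.
    """
    system_name: str
    conformity_assessment_passed: bool = False
    registered_in_eu_database: bool = False
    risk_management_established: bool = False
    data_governance_verified: bool = False
    technical_documentation_complete: bool = False
    human_oversight_designed: bool = False

    def open_items(self) -> list[str]:
        """Names of all obligations that are not yet fulfilled."""
        return [name for name, done in vars(self).items()
                if isinstance(done, bool) and not done]

record = HighRiskCompliance("cv-screening-tool")
record.risk_management_established = True
print(record.open_items())  # everything except risk management
```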
4. Special requirements for “General Purpose AI” (GPAI)
General-purpose AI models that can be used for a variety of purposes ("GPAI models") have the potential for widespread use thanks to their flexible fields of application, especially via APIs that embed them in other applications. At the same time, their possible uses are difficult to anticipate. Examples of such GPAI models include GPT-4, DALL-E and Midjourney.
These models are subject to a separate classification framework (Art. 51 et seq.), which again follows a tiered approach. All GPAI models are subject to transparency requirements, including, for example, technical documentation and instructions for use. More extensive risk management obligations apply to particularly powerful and influential models with "systemic risk" (Art. 55); these include a comprehensive risk analysis, ongoing monitoring and cybersecurity requirements.
5. Implementation and transition periods
The AI Act will enter into force 20 days after its publication in the Official Journal of the EU, which is expected by the end of June 2024. Its provisions will then apply in stages (the sketch after this list derives the approximate dates):
- 6 months after entry into force (approx. end of December 2024): AI systems with unacceptable risk are banned and must be withdrawn from the market.
- 12 months (approx. June 2025): The provisions on GPAI take effect.
- 24 months (approx. June 2026): All provisions for which no different deadline is stipulated become applicable.
- 36 months (approx. June 2027): The requirements for high-risk systems under Art. 6(1) (products covered by Annex I) become effective.
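The staggered deadlines are simple calendar arithmetic from the entry-into-force date. The sketch below assumes a hypothetical entry-into-force date of 1 July 2024; the actual date depends on when the Act is published in the Official Journal.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date by whole calendar months (naive: assumes the day
    number exists in the target month, which holds for the 1st)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, d.day)

ENTRY_INTO_FORCE = date(2024, 7, 1)  # assumption, not the official date

MILESTONES = {
    "Prohibitions on unacceptable-risk systems": 6,
    "GPAI provisions": 12,
    "General applicability": 24,
    "High-risk rules for Annex I products": 36,
}

for label, months in MILESTONES.items():
    print(f"{label}: from {add_months(ENTRY_INTO_FORCE, months)}")
```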
6. Sanctions and compliance
The AI Act is enforced through a dual system at the administrative level. At EU level, the new "AI Office", an authority within the European Commission, is responsible for supervising high-impact GPAI models and for coordinating implementation across the Member States. In addition, each Member State must set up national authorities responsible for enforcing the regulation.
Depending on their severity, violations of the AI Act can lead to significant fines. For infringements involving prohibited AI systems, these can amount to up to EUR 35 million or 7 % of the company's global annual turnover, whichever is higher; breaches of most other obligations can be fined with up to EUR 15 million or 3 % of annual turnover, and supplying incorrect or misleading information to the authorities with up to EUR 7.5 million or 1 %. The regulation provides for lower fines for SMEs and start-ups. It should also be noted that the supervisory authorities can require providers to withdraw non-compliant AI systems from the market.
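For companies, the cap is the higher of the fixed amount and the turnover percentage; for SMEs and start-ups, the Act instead takes the lower of the two. A minimal sketch of that arithmetic, using the figures cited above:

```python
def max_fine_eur(global_turnover_eur: float, tier: str,
                 sme: bool = False) -> float:
    """Upper limit of a fine under the AI Act's penalty regime.

    Illustrates only the cap arithmetic cited above; how authorities
    set an actual fine within the cap is a separate question.
    """
    tiers = {
        "prohibited_practices": (35_000_000, 0.07),
        "other_obligations": (15_000_000, 0.03),
        "misleading_information": (7_500_000, 0.01),
    }
    fixed, pct = tiers[tier]
    pick = min if sme else max  # SMEs/start-ups: the lower of the two
    return pick(fixed, pct * global_turnover_eur)

# A company with EUR 2 billion global annual turnover:
print(max_fine_eur(2_000_000_000, "prohibited_practices"))  # 140000000.0
```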
7. Need for action and advice for companies
Companies should use the time remaining before the AI Act's obligations take effect to prepare for the new requirements at an early stage. The following steps are particularly important:
- Inventory: Companies should first establish internally which AI systems they use, develop or procure from external providers. If such an overview does not yet exist, it is advisable to create a continuously updated directory (see the registry sketch after this list), as the use of AI is expected to increase.
- Classification: The identified AI systems can then be categorised according to their risk. This categorisation can sometimes be complex, but must be carried out carefully due to the widely diverging requirements.
- Preparation: Once the requirements have been clarified, the concrete implementation of the respective obligations can begin. This includes, for example, the creation of technical documentation, the introduction of a risk management system and the establishment of a governance structure. In addition, it may be advisable to draw up internal guidelines for dealing with AI systems and to raise employee awareness of the new regulations by providing information and training.
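For the inventory step, a lightweight internal registry is often enough to start with. The sketch below suggests one possible record structure; the field selection is our pragmatic assumption, since the AI Act prescribes an EU database for high-risk systems but no particular format for a company's internal directory.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in a continuously maintained internal AI inventory."""
    name: str
    vendor: str              # in-house development or external provider
    intended_purpose: str
    risk_tier: str           # e.g. "high", "limited", "minimal"
    owner: str               # responsible department or person
    last_reviewed: date

inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="applicant-ranking",
        vendor="external SaaS",
        intended_purpose="pre-sorting job applications",
        risk_tier="high",    # HR management is an Annex III area
        owner="HR",
        last_reviewed=date(2024, 6, 1),
    ),
]

# Classification reviews and compliance work can then be driven off the list:
print([r.name for r in inventory if r.risk_tier == "high"])
```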
8. Conclusion
The AI Act is a milestone in AI regulation, but also a very extensive and complex set of regulations, the content of which can only be touched on in this article. Companies are required to carefully review their AI systems and processes and adapt them to the new legal requirements within the transition periods. Early preparation and close collaboration with technical and legal experts are crucial in order to ensure compliance and utilise the innovative potential within the new legal framework.
Our law firm is here to advise you. We support you in mastering the scope and complexity of the AI Act and in preparing optimally for the upcoming requirements.