28.10.2024
AI expert Helmut van Rinsum breaks down the essentials of the new EU AI Act, detailing its impact on businesses that create, use, or distribute AI in the EU. From risk classifications to compliance timelines, this guide highlights what you need to know to keep your AI strategy compliant and competitive.
Helmut van Rinsum
Guest Author & AI Expert
The EU AI Act came into force on August 1, 2024, marking the end of a multi-year decision-making phase on how to regulate the use of artificial intelligence in Europe. Since 2019, various EU institutions have been fine-tuning the regulations, which have grown to several hundred pages over time. The new regulation now affects companies that develop, distribute, or even just use AI systems — in other words, almost everyone.
The AI Act essentially takes a risk-based approach. AI systems deemed to pose an unacceptable risk, such as those that deliberately manipulate people or perform "social scoring" (awarding points for desirable behavior), are prohibited. Systems used in sensitive areas such as healthcare must meet a number of strict requirements. AI that poses only limited risk is subject to transparency obligations, while minimal-risk systems remain largely unregulated. Text and image generation, including AI-based product descriptions, typically fall into these lower-risk categories.
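To make the tier model concrete, here is a minimal Python sketch of how an internal AI inventory might encode the four risk categories. The tier names follow the Act, but the example use-case mapping is purely illustrative and no substitute for a legal assessment.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act (simplified labels)."""
    UNACCEPTABLE = "prohibited"
    HIGH = "strict requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Illustrative examples only; real classification requires legal review
# of the Act's prohibited-practice and high-risk lists.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "diagnostic support in healthcare": RiskTier.HIGH,
    "AI-generated product descriptions": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```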
Depending on the risk category, there are different transition periods. For example, from February 2025, AI systems with manipulative intent will be prohibited. From August 2025, transparency and documentation requirements will apply to so-called "general-purpose AI". This will affect companies that use generative AI, which is the case for many online stores that rely on chatbots, personalized recommendations, virtual fitting rooms, or AI-based text and image generation.

Fritz-Ulli Pieper, a lawyer specializing in IT law at Taylor Wessing and head of the Artificial Intelligence department at the German digital association BVDW, advises getting an overview as soon as possible. The best way to do this is to create an interdisciplinary AI team. Its first and most urgent task should be to clarify where and what types of AI systems are in use. Pieper: "The main focus must be on the classification of the AI system in the risk categories of the AI Act. Only when this is clear can risks, responsibilities and obligations be derived."
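As a sketch of what that first inventory task might look like in practice, the following hypothetical Python snippet records each system together with its classification and owner. The system names and fields are invented for illustration, and the key dates should always be verified against the official text of the Regulation.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in a company-wide AI inventory."""
    name: str          # internal identifier, e.g. "shop-chatbot"
    purpose: str       # what the system does in the business
    risk_tier: str     # outcome of the classification step
    owner: str         # team or person responsible

# Transition dates discussed in the article; verify against the
# official text of the Regulation before relying on them.
KEY_DEADLINES = {
    "prohibited practices banned": date(2025, 2, 2),
    "general-purpose AI obligations": date(2025, 8, 2),
}

inventory = [
    AISystemRecord("shop-chatbot", "customer support", "limited", "e-commerce team"),
    AISystemRecord("description-gen", "product texts", "limited", "marketing"),
]

for record in inventory:
    print(f"{record.name}: {record.risk_tier} risk, owner: {record.owner}")
```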
A number of tools are now available to help companies with this initial assessment. These include the TÜV Association's AI Risk Navigator, a free online tool for classifying the risks of AI systems and models. “Testing AI systems creates trust and is already a competitive advantage today,” said Joachim Bühler, managing director of the TÜV Association. “Companies would be well advised to familiarize themselves with the requirements now, especially with regard to the transition periods. It is important to assess how and where the AI law will affect their activities.”
The next step is to establish governance and risk-management structures, or to expand existing ones to cover all relevant AI issues. These can often build on what is already in place, such as data protection compliance, explains legal expert Pieper. "In addition, guidelines and awareness strategies should be developed that, in conjunction with training, sensitize employees to the use of AI and promote its use." Establishing audit structures could also be useful. In his view, the biggest enemies of implementation are ignorance at management level and unclear ideas about where AI is even being used. If competent personnel and expertise are also lacking, obligations and regulations can easily be overlooked. Companies cannot afford that: the EU AI Act also provides for fines, and they can be very high.
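A governance process like the one Pieper describes ultimately has to turn each classification into concrete duties. The Python sketch below shows one simplified way to encode that mapping; the checklist items paraphrase well-known AI Act requirements but are illustrative, not a complete or authoritative list.

```python
# Illustrative sketch: deriving a first-pass checklist of duties from the
# risk tier assigned during the inventory step.
BASELINE_OBLIGATIONS = {
    "unacceptable": ["phase out or redesign the system before the ban applies"],
    "high": [
        "set up a risk management system",
        "prepare technical documentation",
        "ensure human oversight",
        "enable logging and traceability",
    ],
    "limited": [
        "inform users they are interacting with AI",
        "label AI-generated content",
    ],
    "minimal": ["consider voluntary codes of conduct"],
}

def obligations_for(risk_tier: str) -> list[str]:
    """Return the baseline duties for a classified system, or escalate."""
    return BASELINE_OBLIGATIONS.get(
        risk_tier, ["classification unclear: escalate to the AI team"]
    )

print(obligations_for("high"))
```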
This is not the only reason why tech start-ups like Frontnow are keeping a close eye on the new regulations. Compliance is seen as a competitive factor. By taking proactive measures and adapting to regulatory requirements at an early stage, the company aims to position itself as a reliable partner in an increasingly complex market environment. For this reason, laws and regulations that go beyond the European Union are also being closely monitored, even if the AI Act has set a standard here for the time being.
As comprehensive as the EU AI Act is, it should not deter anyone. Companies have overcome similar hurdles with other legislative projects, such as the GDPR. Back then, too, the regulations were complex and in many cases difficult to interpret. The administrative effort was high and the technical challenges were considerable. Processes were overhauled, compliance policies were adopted, training was provided, and IT systems were adapted.
Similar to the GDPR, the EU AI Act requires companies' various specialist areas to work together, from IT and sales to marketing, controlling, production, and purchasing. This is because AI is a cross-functional technology. It must also be clear that AI compliance goes beyond the AI Act itself, emphasizes Pieper. After all, data protection and copyright also play a role. Pieper: "This has to be considered from the very beginning."