8.8.2023

Decoding EU AI Act: Regulations & Implications

Discover the key details of the EU AI Act in this article. The Act introduces risk-based regulations that prioritize transparency, accountability, and responsible AI use in order to safeguard fundamental rights.

Marie Berg

Content Manager

Artificial Intelligence

From mesmerizing ChatGPT conversations to intimidating facial recognition, artificial intelligence (AI) is captivating the world with its boundless potential. As AI systems increasingly influence our society, we find ourselves at the forefront of a technological revolution. With AI news flooding our screens and advancements unfolding at lightning speed, one question lingers: What lies ahead in this extraordinary journey?

To address this, the European Union (EU) is implementing the AI Act. This act aims to create clear rules and structures that regulate the use and development of AI systems.

In this article, we will explore what is behind the EU AI Act and which rules could become important in the future.

What is the AI Act of the EU?

The European Union (EU) has proposed the EU AI Act, which aims to regulate the development and use of artificial intelligence (AI) systems across the EU. The EU has been working on the corresponding legislative project, the Artificial Intelligence Act (AIA), for some time. However, technological developments move rapidly and sometimes overtake the legislative discussion, which has made the AIA difficult to finalize.

Now, the AI Act has been adopted by the European Parliament, but it still needs to be agreed upon with the EU Commission and the member states in the so-called trilogue before it can officially come into force. An agreement is expected by the end of the year. After that, companies will have two years to adapt to the changed framework conditions.

The act's goal is to create a trusted framework in which businesses can benefit from the development of AI systems while ensuring the rights and safety of people.

The four risk stages of the AI Act

The EU AI Act proposes a risk-based approach to regulating AI, with different levels of regulation depending on the risk that a particular AI application poses to individuals and society as a whole.

Unacceptable Risk: AI systems that pose a clear threat to people's safety, livelihoods, and rights will be banned. Examples include social scoring by governments (for instance, classifying people based on characteristics such as gender, origin, or skin color) and voice assistance systems that encourage dangerous behavior.

High Risk: AI systems that are classified as high risk are subject to strict requirements before they can be brought to market. These include, for example, critical infrastructure that can endanger human life and health, assessment of exams in the education system, application of AI in robotic surgery, credit scoring, and AI in law enforcement.

Limited Risk: AI systems with limited risk are subject to specific transparency obligations. For example, users interacting with a chatbot should be made aware that they are talking to a machine, so they can make an informed decision about whether to continue.

Minimal or No Risk: For AI systems with minimal or no risk, the AI Act proposal allows free use. Examples include AI-powered video games and spam filters. The vast majority of AI systems used in the EU fall into this category.

A pyramid visualizing the four risk stages of the EU AI Act (Source: European Commission)
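
To make the risk-based approach more tangible, here is a minimal sketch in Python of how the four tiers described above could be represented in code. The tier names follow this article; the example use cases and their assignments are illustrative assumptions, not an official or exhaustive classification under the AI Act.

```python
# Illustrative sketch of the four risk tiers described in this article.
# The example use cases are assumptions for demonstration purposes only,
# not an official or exhaustive classification under the AI Act.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict requirements before market entry"
    LIMITED = "transparency obligations"
    MINIMAL = "free use"


# Hypothetical mapping of example use cases to tiers, based on the
# examples given in this article.
EXAMPLE_CLASSIFICATION = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "exam assessment in education": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "AI-powered video game": RiskTier.MINIMAL,
}


def describe(use_case: str) -> str:
    """Return a short note on how an example use case would be treated."""
    tier = EXAMPLE_CLASSIFICATION.get(use_case)
    if tier is None:
        return f"{use_case}: not covered by this illustrative mapping"
    return f"{use_case}: {tier.name} risk ({tier.value})"


if __name__ == "__main__":
    for case in EXAMPLE_CLASSIFICATION:
        print(describe(case))
```

The point of the sketch is simply that the obligations attach to the use case rather than to the underlying technology: the same model could fall into different tiers depending on how it is deployed.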

What is the Aim of the EU AI Act?

The European approach aims to provide rules that ensure well-functioning markets and a well-functioning public sector, while also protecting the safety and fundamental rights of individuals.

The AI Act aims to regulate a wide range of AI systems used in various sectors, including:

  • public administrations,
  • healthcare,
  • transport,
  • law enforcement,
  • financial services,
  • education, and
  • consumer products and services.

It covers both AI systems developed in the EU and those used within the EU, regardless of their place of origin. The proposed law also requires companies that develop or use AI to provide detailed information about the AI system, including its purpose, data sources, and accuracy. This information will be made publicly available to ensure transparency and accountability.
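
As a rough illustration of what such documentation could look like in practice, here is a minimal Python sketch of an internal record holding the information mentioned above (purpose, data sources, accuracy). The field names and structure are our own assumptions, not terminology or a format prescribed by the Act.

```python
# A minimal, hypothetical sketch of how a company might record the system
# information mentioned in this article (purpose, data sources, accuracy).
# Field names and structure are illustrative assumptions, not a format
# prescribed by the AI Act.
import json
from dataclasses import dataclass, asdict


@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str
    data_sources: list[str]
    reported_accuracy: float  # top-line evaluation metric, e.g. 0.0-1.0
    risk_tier: str            # one of the four tiers described above
    provider: str

    def to_public_summary(self) -> str:
        """Serialize the record as JSON, e.g. for a public transparency page."""
        return json.dumps(asdict(self), indent=2)


# Hypothetical example entry
record = AISystemRecord(
    name="ExampleRecommender",
    intended_purpose="Product recommendations in an online shop",
    data_sources=["anonymized click logs", "product catalog"],
    reported_accuracy=0.91,
    risk_tier="minimal",
    provider="Example GmbH",
)

print(record.to_public_summary())
```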

In addition, the EU AI Act includes provisions to protect fundamental rights, such as the right to privacy and non-discrimination. AI systems must be designed in a way that respects these rights, and companies that violate these provisions may be subject to significant fines.

Testing AI systems

Once the AI Act comes into force, an AI system must go through the following steps before it can be placed on the market in the European Union:

A visual representation of the four stages AI companies need to go through in order to bring an AI tool to market (Source: European Commission)

The Act proposes the establishment of a regulatory sandbox, where developers can test and experiment with AI systems in a controlled environment while complying with certain guidelines. Additionally, the Act calls for the creation of a European AI Board, which would serve as a regulatory body responsible for overseeing and coordinating the implementation of the Act across EU member states.


Penalties and Fines

Owners of unacceptable or high-risk AI systems may face enormous fines if they fail to comply with the regulations. These fines could reach up to €40 million or 7% of a company's worldwide annual turnover.
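
To get a feel for how such a fine could scale with company size, here is a rough Python sketch comparing the two ceilings. It assumes that the higher of the two amounts applies, as in comparable EU penalty regimes such as the GDPR; the final thresholds may change during the trilogue.

```python
# Rough, illustrative calculation of the maximum fine mentioned above:
# up to EUR 40 million or 7% of worldwide annual turnover. This sketch
# assumes the higher of the two amounts applies (as in comparable EU
# penalty regimes); the final rules may differ.
FIXED_CAP_EUR = 40_000_000
TURNOVER_SHARE = 0.07


def max_fine(worldwide_annual_turnover_eur: float) -> float:
    """Return the upper bound of the fine for a given annual turnover."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * worldwide_annual_turnover_eur)


# Hypothetical companies of different sizes
for turnover_eur in (100_000_000, 500_000_000, 2_000_000_000):
    print(f"Turnover EUR {turnover_eur:,}: maximum fine EUR {max_fine(turnover_eur):,.0f}")
```

Under these assumptions, the €40 million floor dominates for smaller companies, while for a company with a worldwide annual turnover above roughly €570 million the turnover-based cap becomes the binding limit.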

How will Frontnow be affected by the EU AI Act?

In light of the yet-to-be-enacted AI Act, we can only base our assessment on the information currently available. With that in mind, our co-founder and CTO, Cedric May, takes a stance and shares his predictions:

Quote by Cedric May, co-founder and CTO of Frontnow, expressing his support for the AI Act and noting that Frontnow's AI solution meets all legal requirements


Important Dates Regarding the AI Act

April 2021: On 21 April 2021, the European Commission publishes its proposal for the Artificial Intelligence Act.

November 2022: After a year of discussion among EU lawmakers and nearly five iterations of the regulation's text, a compromise version of the AI Act is agreed by the Council of the EU and submitted to the Transport, Telecommunications and Energy (TTE) Council.

June 2023: Lawmakers agree on a draft version of the Act, which will now be negotiated with the Council of the European Union and the EU member states before becoming law. Brando Benifei, a Member of the European Parliament working on the EU AI Act, said of the progress: "We have made history today."

An Outlook: What can we expect from the AI Act?

The European Union is taking significant steps to become the first in the world to establish comprehensive rules for artificial intelligence. In doing so, it hopes to set global standards and even outpace the USA.

Given the rapid development of AI technologies, the AI law could become the foundation for how we work with AI systems in the future. Additionally, clear boundaries could alleviate the uncertainties that many people feel about AI.

However, the EU faces a challenge in adapting to the swift advances in AI and formulating the AI Act in a universally applicable way that can function in the long term.


