What is the EU AI Act: The Ultimate Guide
Does your startup rely on LLMs or NLP? The European Parliament has officially approved the EU AI Act, and the law came into force in 2024.
Last updated
This is a living guide to help you navigate and learn more about the EU AI Act.

1. How did the EU AI Act come to be?
2. What is classified as an AI System under the Act?
3. How does the EU AI Act work?
4. Which sections of the AI Act are currently in force?
5. What is the timeline for enforcement of other sections of the AI Act?
6. What does this mean for companies?
In response to rapid advancements in how AI-driven technologies are used, the European Commission shared recommendations through white papers. Most notably, in 2020 the European Commission released a white paper on trustworthy AI. In April 2021, the European Commission published a proposal for a regulatory framework on AI, plus a coordinated plan to implement it. Subsequently, in 2023, likely in response to the popularity of models such as GPT-3, the European Parliament officially approved the text of the EU AI Act based on this framework.
The law then came into force in 2024.
Under the framework, the notion of artificial intelligence is narrowed to that of an 'AI system', defined as:
a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
The system is machine-based, and it generates outputs in pursuit of explicit or implicit objectives set by its provider or deployer.
AI systems covered by the Act operate with varying levels of independence: they may function fully or partially autonomously, performing tasks with or without direct human intervention.
An AI system may also adapt after deployment, continuing to improve how and what outputs it generates based on its learning capabilities.
AI systems covered by the Act use techniques that enable inference during their development. These include logic/knowledge-based approaches, which derive conclusions from inputs.
Unlike basic data processors, AI systems can learn, reason, and model complex scenarios, enhancing their decision-making capabilities.
In the Act, "environments" refer to the context in which AI systems operate. This differs from "outputs", which are the results the AI systems produce, such as predictions, content, recommendations, or decisions.
This definition covers a number of different use cases of LLMs, and addresses the techniques used to develop and build AI (machine learning models, logic-based approaches, and statistical approaches).
Now that we've addressed who the EU AI Act is designed to regulate, how does it determine the level of regulation for AI products and companies?
“The new AI regulation will make sure that Europeans can trust what AI has to offer. Proportionate and flexible rules will address the specific risks posed by AI systems and set the highest standard worldwide”
The framework proposes a risk-based model, with a different level of protection depending on the potential risk level of an AI’s usage.
Under the regulation, AI is sorted into four risk categories:
prohibited
high
limited, and
minimal risk
At the top of the scale, any AI that is seen as a threat to human life, rights or wellbeing will be considered an unacceptable risk and therefore banned. This includes:
Manipulative or deceptive techniques: Prohibited if they distort behavior, impair informed decision-making, and cause significant harm.
Exploiting vulnerabilities: Banned when related to age, disability, or socio-economic circumstances and results in significant harm.
Biometric categorization: Restricted for inferring sensitive attributes like race, religion, or sexual orientation, except in specific lawful cases.
Social scoring: Prohibited if it evaluates individuals based on behavior or traits, causing unfavorable treatment.
Criminal risk profiling: Banned when solely based on profiling or personality traits, except as part of verifiable, fact-based assessments linked to criminal activity.
Facial recognition databases: Prohibited if created via un-targeted scraping of online or CCTV facial images.
Emotion detection: Restricted in workplaces or educational settings, except for medical or safety purposes.
High-risk AI covers AI used in systems that are less risky than those prohibited, but still vital.
The list features industries where the safety of the software is vital, including critical infrastructure and healthcare robotics. It also covers areas where an incorrect outcome or a lack of transparency would be unfair or unjust, including law enforcement, employment, migration, and private and public services.
The use of AI which isn’t as hazardous has a lower threshold of oversight.
Limited risk covers AI used in products and services such as chatbots, and requires transparency that AI is being used.
Any other products are considered to be in the minimal risk category, with the example of AI-enabled video games or spam filters given by the European Commission.
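To make the four tiers concrete, here is a purely illustrative sketch (not legal advice, and not part of the Act itself) that triages example use cases mentioned in this guide into the four risk categories. The category keywords and labels are our own simplification for illustration only:

```python
# Illustrative only: a simplified triage of the EU AI Act's four risk tiers,
# using example use cases named in this guide. Real classification depends on
# the Act's detailed criteria, not keyword matching.

PROHIBITED = {"social scoring", "untargeted facial scraping"}
HIGH = {"critical infrastructure", "law enforcement", "employment", "migration"}
LIMITED = {"chatbot"}  # transparency obligations apply


def risk_tier(use_case: str) -> str:
    """Return a (simplified) EU AI Act risk tier for a use case."""
    if use_case in PROHIBITED:
        return "unacceptable (banned)"
    if use_case in HIGH:
        return "high (assessments, documentation, oversight)"
    if use_case in LIMITED:
        return "limited (transparency required)"
    return "minimal (e.g. video games, spam filters)"


print(risk_tier("social scoring"))  # unacceptable (banned)
print(risk_tier("chatbot"))         # limited (transparency required)
print(risk_tier("spam filter"))     # minimal (e.g. video games, spam filters)
```

In practice, a product's tier follows from the Act's annexes and definitions, not a lookup table; the sketch only mirrors the ordering of the scale described above.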
The EU AI Act came into force on August 1, 2024; however, implementation is staged.
(From February 2025) AI literacy: Providers and deployers of AI systems must ensure the AI literacy of the individuals who operate their AI systems. This includes training their teams to understand how AI works, the context in which the AI systems are to be used, and the persons or groups of persons on whom the AI systems are to be used.
(From February 2025) Prohibited systems: The prohibited AI systems are banned.
Notifying authorities: Each member state will designate notifying authorities responsible for assessing, designating, and monitoring the conformity assessment bodies that check high-risk AI systems.
General-purpose AI models: Providers of general-purpose AI models must ensure that their models comply with the Act's standards of transparency, safety, and ethics.
Governance: The European Artificial Intelligence Board will be established, responsible for facilitating the consistent application of the AI Act across member states.
Penalties: Fines for non-compliance will be enforced.
Confidentiality: Rules for the transfer and processing of data, including proprietary information and personal data, safeguard it against unauthorized disclosure.
(From August 2026) The AI Act will apply as a whole, except for certain articles concerning high-risk AI systems.
(From August 2027) The remaining rules for high-risk AI systems will take effect.
Technology companies and startups who develop AI will need to pay attention to how the regulation will be implemented across the European Union.
In particular the following types of companies should be aware of their EU AI Act obligations:
Companies with any LLM-powered product, whether the LLM is third-party or proprietary,
Companies building and training their own proprietary LLM,
Companies responsible for product safety mechanisms, healthcare products, or the functioning of critical infrastructure.
However, this is only the beginning for the framework: it is yet to be shown how local jurisdictions react and how other legal areas will intersect with the Act.
If a product utilises high-risk AI, it will be subject to risk assessments, documentation requirements, and even human oversight before being released to the public. The prohibited and high-risk categories in particular align with the European Commission's previous guidelines on trustworthy AI, where it was proposed that AI needs to be lawful, ethical, and robust.
Looking to find out if the EU AI Act applies to your company? Get your .