
The EU AI Act:
Pioneering the Future of AI Regulation

22 August 2024

By Josh Waller

What is the European Union Artificial Intelligence Act?

EU AI Act Summary:

As the use of Artificial Intelligence (AI) becomes more prominent, the European Union Parliament and Council have adopted the world's most ambitious regulation of the technology – the AI Act, which seeks to establish clear requirements and expectations regarding specific uses of AI.   

The landmark legislation has come a long way; it was initially proposed on 21 April 2021, approved by the EU's governing institutions between March and May 2024, and then officially published as the world's first comprehensive set of AI laws on 12 July 2024. The protection of citizens' fundamental rights and the safety of European society are at the heart of the law, but the AI Act also aims to position the EU as a global leader in AI development, governance, and enforcement. 

While the AI Act came into force on 1 August 2024, the majority of its provisions will not be enforced immediately. Instead, they will be phased in gradually: most provisions will apply from 2 August 2026, with full enforcement scheduled for 2 August 2027. Under the Act's risk-based approach, the obligations placed on AI systems in the market will be differentiated by their intended use.

Overall, the European Union Artificial Intelligence Act is part of a broader plan set out by the EU Commission, including the updated Coordinated Plan on AI. Together, the regulatory framework and the Coordinated Plan aim to safeguard the safety and rights of people and businesses as AI is rapidly integrated into everyday services. These efforts also aim to boost investment and innovation in AI across EU member states.

What are the requirements?  

The EU AI Act takes great care to thoroughly define the terms it references. One of the first things to understand is how lawmakers characterise AI in the context of the regulation; the Act states:

'AI system' means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. 

The European Union AI Act, Chapter I, Article 3

To make sense of the requirements placed on affected organisations, it's then important to understand the risk-based approach taken by the EU.

Specifically, the Act defines four levels of risk in AI (illustrated in the sketch that follows this list): 

  • Unacceptable risk – The highest level of risk within the regulation is unacceptable risk. This encompasses any and all AI systems that are considered a clear threat to the safety, livelihoods, and rights of people. Any AI system designated as unacceptable, such as social scoring systems or toys using voice assistance that encourage dangerous or unruly behaviour, will be banned in the EU under the new regulations. 
  • High risk – This category includes the use of AI in critical national infrastructure, administration of justice, democratic processes, and similar areas. Any AI system designated as "high risk" will be subject to stringent rules before it can go to market.  
  • Limited risk – AI systems classified as limited risk include those with specific transparency obligations, such as chatbots (including social messaging bots, menu bots, skill and keyword bots) and virtual assistants. 
  • Minimal or no risk – The lowest level of risk within the regulation is minimal or no risk. This category includes the use of AI in applications such as AI-enabled spam filters and AI-enabled video games (non-playable characters). The majority of AI use in the EU will fall under this category, and developers and users will have minimal obligations under the AI Act. 
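As a rough illustration of how this taxonomy might be captured in an internal system inventory, the Python sketch below maps some hypothetical example systems to the Act's four tiers. The tier names follow the list above; the example systems and helper function are our own illustrative assumptions, not anything prescribed by the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright, e.g. social scoring systems
    HIGH = "high"                  # strict obligations before going to market
    LIMITED = "limited"            # transparency obligations, e.g. chatbots
    MINIMAL = "minimal"            # few or no obligations, e.g. spam filters

# Hypothetical inventory entries for illustration only; real classification
# requires assessing each system's intended purpose against the Act itself.
EXAMPLE_SYSTEMS = {
    "social-scoring-engine": RiskTier.UNACCEPTABLE,
    "cv-screening-tool": RiskTier.HIGH,        # employment is a high-risk area
    "customer-support-chatbot": RiskTier.LIMITED,
    "email-spam-filter": RiskTier.MINIMAL,
}

def is_banned(system_name: str) -> bool:
    """True if the inventory marks the system as unacceptable risk."""
    return EXAMPLE_SYSTEMS.get(system_name) is RiskTier.UNACCEPTABLE

print(is_banned("social-scoring-engine"))  # True
```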

As noted above, high-risk applications face the most significant requirements under the Act. For providers of these systems, this includes the establishment of risk and quality management systems, data governance, human oversight, cyber security measures, post-market monitoring, and maintenance of the required technical documentation. Providers will also be required to undertake a third-party conformity assessment before high-risk AI systems are put into service or placed on the market. 

Importers and distributors of high-risk applications are obliged to ensure compliance before they place a system on the market. Meanwhile, users are required to take appropriate technical and organisational measures to minimise risks, such as ensuring human oversight, controlling input data, and informing workers if a high-risk AI system is being used.  

Providers of general-purpose AI (GPAI) models also face notable transparency obligations under the AI Act. This includes maintaining technical documentation and making available relevant information to those who intend to integrate the models into their AI systems. Where a GPAI model poses a systemic risk, providers face further requirements such as performing adversarial testing on their models, assessing and mitigating possible systemic risks, and ensuring an adequate level of cyber security protection.    

The EU’s AI Office, which is tasked with the overall implementation of the Act, will facilitate the creation of new codes of practice detailing how providers of GPAI models can comply. These codes are intended as interim measures while formal standards are developed.

Who will be affected by the EU AI Act? 

The Act regulates numerous organisations involved in developing, distributing, and deploying AI systems in the EU market, including providers, deployers, importers, and distributors.   

  • Providers – those who develop an AI system or model, or have one developed, and place it on the EU market 
  • Deployers – those who use an AI system under their authority; in other words, users of AI systems 
  • Importers – those located in the EU who place on the market an AI system bearing the name or trademark of an entity established outside the EU 
  • Distributors – anyone in the supply chain, other than a provider or importer, who makes an AI system available on the EU market 

Any organisation that places its product or model on the EU market falls within the scope of the regulation, mirroring other recent EU regulatory acts such as the EU GDPR (applicable since 2018) and DORA (applicable from January 2025). An organisation does not have to be located within the EU to fall within the Act's scope; this is known as extraterritorial effect.

AI Act compliance

EU AI Act compliance timeline

A crucial aspect of the Act's phased implementation is the introduction of a total ban on AI systems deemed to present an 'unacceptable risk.' This ban will come into effect on 2 February 2025. AI systems falling into this category include those that pose severe threats to safety, health, and fundamental rights and are thus deemed too dangerous to be used within the EU.   

The Act will become fully applicable in August 2027, with the majority of its provisions applying from August 2026. Some aspects will apply sooner still, such as the aforementioned ban on unacceptable-risk systems, the introduction of the Codes of Practice (May 2025), and the transparency and governance rules for general-purpose AI models (August 2025).
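For teams tracking these deadlines programmatically, a minimal sketch of the milestones described above might look like the following. The dates reflect the phased application set out in the Act; the dictionary structure and helper function are simply an illustrative convention.

```python
from datetime import date

# Key EU AI Act milestones, as described in the timeline above.
AI_ACT_MILESTONES = {
    date(2024, 8, 1): "Act enters into force",
    date(2025, 2, 2): "Ban on unacceptable-risk AI practices applies",
    date(2025, 5, 2): "Codes of Practice for GPAI providers due",
    date(2025, 8, 2): "Transparency and governance rules for GPAI models apply",
    date(2026, 8, 2): "Majority of the Act's provisions apply",
    date(2027, 8, 2): "Act fully applicable, including remaining high-risk rules",
}

def upcoming_milestones(today: date) -> list[tuple[date, str]]:
    """Return milestones that have not yet passed, soonest first."""
    return sorted((d, label) for d, label in AI_ACT_MILESTONES.items() if d >= today)

for deadline, label in upcoming_milestones(date(2024, 8, 22)):
    print(f"{deadline.isoformat()}: {label}")
```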

Penalties / impacts of non-compliance 

As with other recent EU regulations (such as DORA), the fines for non-compliance are substantial. The most serious breaches of the EU AI Act, such as violations of the prohibitions on unacceptable-risk practices, carry fines of up to €35 million or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher.

Further fines apply for non-compliance with several other provisions relating to operators or notified bodies (see Article 99(4) for further information): offenders can be fined up to €15 million, or up to 3% of total worldwide annual turnover for the preceding financial year, whichever is higher.

Finally, supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities in reply to a request will attract fines of up to €7.5 million, or up to 1% of total worldwide annual turnover for the preceding financial year, whichever is higher.
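Because each tier is "whichever is higher" of a fixed cap and a share of turnover, the maximum exposure is straightforward arithmetic. The sketch below is a back-of-the-envelope illustration of that calculation, not legal advice; actual fines are determined by the authorities based on the circumstances of the breach.

```python
# Maximum fine tiers under the Act: (fixed cap in EUR, share of total worldwide
# annual turnover). The applicable maximum is whichever amount is higher.
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "operator_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Upper bound of the fine for a tier, given worldwide annual turnover."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# Example: a company with EUR 2bn turnover breaching a prohibited practice
# faces up to max(EUR 35m, 7% of EUR 2bn) = EUR 140m.
print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000.0
```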

How are other countries regulating AI?

Outside of the EU, governments are pursuing a mix of legislative and voluntary approaches to setting safeguards for AI, for example:  

United Kingdom:  

The UK Government has announced plans to introduce legislation that strengthens AI safety frameworks for the most powerful AI models. However, it is expected that this will not be as wide-ranging as the EU AI Act.   

Australia:  

Australia has announced plans to consult on mandatory safeguards for those who develop or deploy AI systems in legitimate, high-risk settings.  

United States:  

The White House's Executive Order (EO 14110) marked a real step change in the US's approach to safeguarding AI, with Vice President Harris calling for legislation and arguing that "history has shown in the absence of regulation and strong government oversight, some technology companies choose to prioritise profit over the wellbeing of their customers, the security of our communities, and the stability of our democracies." However, as the executive branch cannot unilaterally create an AI law, the Federal Government is using its public procurement and soft power to ensure safety standards are met.

How to prepare for the EU AI Act

The AI Act will standardise AI regulation across the 27 EU Member States, with significant extraterritorial implications, covering all AI systems impacting people in the EU, regardless of their origin.  

Organisations within the scope of the EU AI Act should already be preparing. If your organisation hasn't started yet, consider seeking third-party assistance to ensure comprehensive compliance. Whether through gap analysis or other methods, beginning your compliance journey now is crucial to avoid non-compliance fines.  

Preparing for the EU AI Act involves several key considerations to ensure compliance and avoid penalties. Specifically, we recommend prioritising the following steps (a brief illustrative sketch follows the list):

  • First, identify whether your organisation is a provider, deployer, importer, or distributor of AI systems within the EU market to understand your specific obligations under the AI Act.  
  • Next, assess and classify your AI systems to determine which are affected by the Act, particularly those deemed high-risk or unacceptable risk. Conduct a thorough risk assessment to identify any systems that might pose threats to safety, health, or fundamental rights.  
  • Develop a comprehensive compliance strategy that outlines the steps your organisation needs to take to meet the Act's requirements. This should include engaging third-party assessors, mitigating identified risks, and planning around key compliance timelines.  
  • Maintain detailed documentation of your AI systems, including their design, development, deployment, and usage. Ensure that these records are up to date and readily available for inspection by regulatory authorities.  
  • Finally, implement transparency and reporting measures by disclosing necessary information about AI system functionality and risk management. Be prepared to report any incidents or non-compliance issues promptly to relevant authorities.  

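To make these steps concrete, the sketch below shows one hypothetical shape for an internal register of AI systems, covering role, risk classification, and documentation status. The field names and structure are our own illustrative assumptions, not terms mandated by the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal register of AI systems."""
    name: str
    our_role: str          # "provider", "deployer", "importer", or "distributor"
    risk_tier: str         # "unacceptable", "high", "limited", or "minimal"
    intended_purpose: str
    docs_up_to_date: bool = False
    last_reviewed: date | None = None
    open_actions: list[str] = field(default_factory=list)

register = [
    AISystemRecord(
        name="cv-screening-tool",
        our_role="deployer",
        risk_tier="high",
        intended_purpose="shortlisting job applicants",
        open_actions=["confirm human oversight", "verify provider conformity"],
    ),
]

# Surface high-risk systems with outstanding compliance actions first.
for record in register:
    if record.risk_tier == "high" and record.open_actions:
        print(record.name, "->", record.open_actions)
```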
Ensure your organisation is ready for the AI Act by identifying your role, assessing your AI systems, and developing a robust compliance strategy today.

Josh Waller

Security Consultant, NCC Group

Josh is part of the Consulting & Implementation team at NCC Group, delivering security consultancy, security architecture and design reviews, ISO 27001 implementation and strategy reviews, risk methodologies and assessments, and compliance programmes. 

He has a solid technical background, having specialised in software and web development and vulnerability testing. Josh is also an ISO 27001 Lead Implementer, a SWIFT Attestation Assessor, and a recently qualified PCI DSS AQSA & PCIP. 

His current role involves leading various UK Government and private sector compliance programmes and co-leading the Digital Operational Resilience Act (DORA) service line.

Ensure your organisation is ready for the full scope of the AI Act.

Get expert support to identify your role, assess your AI systems, and develop a robust compliance strategy today.