Concerns about AI and its potential dangers have been raised by industry professionals, prompting calls for action. More than 50,000 people signed an open letter in March urging an immediate halt to the development of “giant” AIs and the establishment of robust AI governance systems.

What is the EU Artificial Intelligence Act 2023?

The AI Act is a proposed European law on artificial intelligence (AI) – the first law on AI by a major regulator anywhere. The law assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, will be assessed before being put on the market and throughout their lifecycle, and will be subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated; these limited-risk AI systems need only comply with minimal transparency requirements that allow users to make informed decisions. Generative AI, such as ChatGPT, would also have to comply with certain transparency requirements.

Despite the concerns, there is not a unanimous consensus regarding the existential threat posed by AI systems that lack human control. Many within the tech industry argue for a more immediate focus, particularly on generative AI, which can produce realistic imitations of text, images, and voice.

What are the objectives of the EU AI Act?

  • To classify and regulate artificial intelligence applications based on their risk to cause harm
  • To provide rules and guidelines for organisations using AI

Strict new rules will curtail threats to people

The EU has already proposed new AI legislation that would curtail threats to individual rights and freedoms, including a proposed ban on deploying real-time facial recognition on European streets or at border posts.

The package of proposed measures could see firms fined up to €10m or barred from trading within the EU for breaches of the rules. The proposals would also ban “emotion recognition” AI, such as systems used by employers or police to identify tired workers or drivers.

European Parliament members have also sought to call time on AI that performs Chinese-style social scoring, predictive policing, algorithms that indiscriminately scrape the internet for photographs, and real-time biometric recognition in public spaces.

The draft act would also force developers of generative AI to be transparent about which original literature, scientific research, music and other copyrighted materials they use to train their models. This would enable performers, writers, and others whose work has been used by AI systems to sue if they believe copyright law has been breached.

Companies deploying generative AI tools such as ChatGPT would have to disclose whether their models have been trained on copyrighted material, making lawsuits more likely. Text and image generators, such as MidJourney, would also be required to identify themselves as machines and mark their content as artificially generated. They would also have to ensure that their tools do not produce child abuse material, terrorist content, hate speech, or any other content that violates EU law.

Some AI systems will be banned outright

Ultimately, the AI Act is designed to make sure that AI systems in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. The EU wants AI systems to be overseen by people, not machines.

The AI Act has different rules depending on the level of risk posed by artificial intelligence.

AI systems assessed as having an unacceptable risk and considered a threat to people will be banned. These include:

  • Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children
  • Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
  • Real-time and remote biometric identification systems, such as facial recognition

Any AI system that negatively affects safety or infringes on fundamental rights and freedoms will be considered high risk. This means the systems will have to be assessed before being put onto the market and will need to be continually assessed throughout the life of the product. They will have to be registered on an EU database as well.

The types of AI considered high risk will include:

  • AI tools used in products already subject to the EU’s product safety legislation such as medical devices and toys
  • AI used in the management and operation of critical infrastructure
  • Education, training and employment
  • Access to essential private and public services and benefits
  • Migration, law enforcement, legal interpretation and application of the law

AI systems assessed as having limited risk, which would probably include most generative AI systems like ChatGPT, will have to comply with a minimum set of transparency requirements. This means disclosing that content was generated by AI, ensuring the model does not generate illegal content, and publishing a summary of any copyrighted data used for training. Users will also have to be made aware that they are interacting with an AI system, so that things like deepfakes can be identified.
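The tiered structure described above can be summarised as a simple lookup. This is only an illustrative sketch: the tier names, example lists, and obligation strings paraphrase this article, not the legal text of the Act.

```python
# Illustrative summary of the AI Act's risk tiers as described above.
# Tier names, examples, and obligations are paraphrases, not legal text.

RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring", "real-time remote biometric identification",
                     "cognitive behavioural manipulation"],
        "obligation": "banned outright",
    },
    "high": {
        "examples": ["CV-scanning recruitment tools", "critical infrastructure",
                     "education and employment", "law enforcement"],
        "obligation": "pre-market assessment, lifecycle assessment, EU database registration",
    },
    "limited": {
        "examples": ["generative AI such as ChatGPT", "deepfake generators"],
        "obligation": ("transparency: disclose AI-generated content, avoid illegal "
                       "content, publish a summary of copyrighted training data"),
    },
}

def obligation_for(tier: str) -> str:
    """Return the paraphrased obligation attached to a risk tier."""
    return RISK_TIERS[tier]["obligation"]
```

The key point the sketch captures is that obligations attach to the tier, not to the individual technology: classify the system first, and the compliance burden follows.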

AI Act has some caveats, but also large fines

The EU rules would likely set the gold standard for AI regulation. However, the proposals have already been watered down due to lobbying from the industry: requirements for foundation models, which form the basis of tools like ChatGPT, to be audited by independent experts were removed.

As with the GDPR, the EU is getting serious about fines for violations of the AI Act:

  • €30 million or 6% of annual worldwide turnover (whichever is higher) – for use of a prohibited AI system under Art. 5 of the AI Act, or failure to meet the quality criteria for high-risk AI systems set out in Art. 10
  • €20 million or 4% of annual worldwide turnover (whichever is higher) – if the risk management system, technical documentation, or standards for high-risk AI systems concerning accuracy, robustness, and cybersecurity (Art. 9) do not meet the criteria
  • €10 million or 2% of annual worldwide turnover (whichever is higher) – if the competent authorities receive inaccurate, insufficient, or deceptive information in response to a request for information
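The “whichever is higher” rule is simple arithmetic: for each breach tier, the fine is the greater of the fixed amount and the turnover percentage. A minimal sketch, using the draft figures quoted above (the tier keys and function name are illustrative, not from the Act):

```python
# Illustrative sketch of the "whichever is higher" fine rule in the draft AI Act.
# Amounts are the draft figures quoted above; tier keys are hypothetical labels.

FINE_TIERS = {
    "prohibited_system": (30_000_000, 0.06),       # Art. 5 use / Art. 10 quality criteria
    "high_risk_requirements": (20_000_000, 0.04),  # Art. 9 risk management, docs, robustness
    "misleading_information": (10_000_000, 0.02),  # bad answers to competent authorities
}

def max_fine(breach: str, annual_worldwide_turnover: float) -> float:
    """Return the maximum fine for a breach: the fixed amount or the
    turnover percentage, whichever is higher."""
    fixed, pct = FINE_TIERS[breach]
    return max(fixed, pct * annual_worldwide_turnover)

# A company with €1bn turnover using a prohibited system:
# 6% of €1bn = €60m, which exceeds the €30m floor.
print(max_fine("prohibited_system", 1_000_000_000))  # 60000000.0
```

For large firms the percentage dominates, so the fixed amounts effectively act as floors for smaller companies.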

Has the EU AI Act passed?

The European Parliament approved its negotiating position on the text of the AI Act in June 2023; the law is not yet final.

When will the EU AI Act come into force?

The EU wants the legislation to come into force in late 2024 and, in the meantime, is encouraging tech companies to develop and promote trustworthy AI.