As AI becomes more prevalent in our modern world, governments are considering how it should be regulated. One such proposal, put forward by the European Union, is the Artificial Intelligence Act. But how will the AI Act regulate artificial intelligence, what effects will it have, and why has it angered OpenAI CEO Sam Altman?

What Is the EU AI Act?

It's important to note first that the EU's Artificial Intelligence Act has not yet come into force. It is still being fleshed out and amended so the European Parliament can reach a consensus on the final text. However, the act may become law in the near future, so it's worth understanding what it entails.

The EU AI Act is focused on regulating the development, release, and use of artificial intelligence within the EU. Once fully enacted, the EU AI Act will be the world's first set of official regulations placed upon the AI industry.

In May 2023, a draft was adopted by the EU's Internal Market Committee and the Civil Liberties Committee. The draft discussed the first set of rules in the AI Act, which garnered 84 votes for, seven votes against, and 12 abstentions. A European Parliament press release stated that the amended draft focuses on ensuring that "AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly."

As AI technology advances, governments, businesses, and individuals are concerned about how it will be used. Because AI has so much potential and may one day become incredibly powerful, it's no surprise that people have questions, and governments want to ensure development doesn't get out of hand.

However, this AI act still has people talking. If enacted, the EU AI Act will have a big effect on AI development and use within the EU.

As stated in the European Commission's AI Act proposal, the act aims to "ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values." The proposal also aims to "enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems."

Furthermore, the act "lays down a solid risk methodology to define 'high-risk' AI systems that pose significant risks to the health and safety or fundamental rights of persons."

While the EU AI Act has several objectives, its main aim is to rein in AI. This will be done by assessing the risk levels of AI systems, ensuring they comply with EU laws, and enforcing transparency requirements for AI systems.

Who Will Be Affected by the EU AI Act?


While the EU AI Act is still in the works, there are concerns over how it will affect AI researchers, developers, and users within the EU.

The official Artificial Intelligence Act website lays out the act's scope, which includes:

  • Providers putting AI systems into use.
  • Users of AI who are physically established or present in the EU.
  • Providers of AI present in a third country, where the system's output is used within the EU.
  • AI system importers and distributors.
  • Representatives of AI providers within the EU.
  • Product manufacturers putting AI systems into use within the EU under their own name or trademark.

Evidently, the scope is large, spanning the entire AI industry. Thousands of AI organizations may be affected by this act, including ChatGPT creator OpenAI. This has led to contention between the EU and OpenAI CEO Sam Altman; in fact, Altman has threatened to pull OpenAI, and therefore ChatGPT, out of the EU over the act. So, why has he made such a striking statement?

Why Is Sam Altman Threatening to Pull ChatGPT From the EU?

Image Credit: TechCrunch/Flickr

Sam Altman's threat to leave the EU stems from how the European Parliament will choose to regulate ChatGPT. You've likely heard of or used the ChatGPT chatbot already, as it's become the world's most popular AI-powered language processing tool. Today, millions of people use ChatGPT, but EU residents may see changes to how they can access and use this tool after the AI Act is implemented.

A particularly contentious part of the European Commission's EU AI Act proposal is the set of transparency requirements that would be placed on GPT (Generative Pre-trained Transformer) tools. If the act is enforced, GPT tools will have to follow transparency rules. For example, a given GPT model would have to be designed so that it does not produce illegal content, and GPT models would have to disclose when content was created by AI.

Altman hasn't outright stated that ChatGPT won't comply with these rules. In fact, the CEO says he would like to cooperate, but only if compliance is technically possible for OpenAI. Time reports that Altman said OpenAI would attempt to comply, though the company has criticized how the EU AI Act proposal is currently worded.

Interestingly, this threat came shortly after Altman advocated for further AI regulation within the US to mitigate the risks of AI development. We'll leave it to you to judge how that looks.

The EU's AI Act Could Alter AI Development

While many support the EU's proposed legal framework for AI regulation, this isn't the case across the board. There is concern surrounding the enforcement of the act and how it will challenge or restrict AI developers. Time will tell whether the EU's Artificial Intelligence Act will be a net positive or negative for the AI industry and its millions of users.