Mladá manažérka

How to regulate artificial intelligence?

“Artificial intelligence is too important not to be regulated – and too important not to be regulated well.” So says one of the most authoritative voices in the online world, Google CEO Sundar Pichai.

European approach

Back in April 2021, the European Commission presented the first draft of a comprehensive regulatory framework[1] for artificial intelligence (AI), known as the “AI Act”: a draft regulation with direct applicability in all EU Member States, similar to the GDPR. Weighing the benefits and risks of AI, the EU institutions defined the basic objective of the regulation as follows: “To promote the development, use and deployment of AI in the internal market, there is therefore a need for an EU legal framework that establishes harmonised rules in the field of AI, while at the same time meeting a high level of protection of public interests such as health and safety and the protection of fundamental rights. In order to achieve this objective, rules should be laid down governing the placing on the market and putting into service of certain artificial intelligence systems, thereby ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. The establishment of those rules in this Regulation will support the EU’s objective of becoming a world leader in the development of safe, trustworthy and ethical artificial intelligence.”[2] A clear summary of the whole proposal is available on the European Parliament’s website[3]. There is also a dedicated webpage on the AI Act[4], with useful information including a chronological overview of the legislative process, relevant documents and analyses.

The draft AI Act sets out obligations for AI providers and users according to the level of risk the AI poses, distinguishing four main levels: unacceptable risk, high risk, risk specific to generative AI, and limited risk.


[1] The Slovak version of this first draft, together with the explanatory memorandum, is publicly available here:
https://eur-lex.europa.eu/legal-content/SK/TXT/HTML/?uri=CELEX:52021PC0206
[2] Point 5 of the explanatory memorandum of the draft AI Act.
[3] Overview summary of the AI Act: https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf
[4] See: https://artificialintelligenceact.eu/

Unacceptable risk

AI systems posing an unacceptable risk are considered a threat to people and will be banned in the EU. These include:

  • cognitive manipulation of the behaviour of individuals or specific vulnerable groups: for example, voice-activated toys that encourage unsafe behaviour in children;
  • social scoring: classifying people based on behaviour, socio-economic status or personal characteristics;
  • real-time and remote biometric identification systems, such as facial recognition.

Exceptions, such as “post” remote biometric identification systems, where identification is carried out with a significant delay, will be allowed for the prosecution of serious crimes and only with court approval.

High risk

AI systems that negatively affect safety or fundamental rights will be considered high risk and will fall into two categories: (i) AI systems used in products covered by EU product safety legislation, including toys, aviation, cars, medical devices and lifts; and (ii) AI systems in eight specific areas that will have to be registered in an EU database:

  • biometric identification and categorisation of natural persons;
  • management and operation of critical infrastructure;
  • education and training;
  • employment, worker management and access to self-employment;
  • access to and use of essential private and public services and benefits;
  • law enforcement;
  • managing migration, asylum and border control;
  • assistance in the legal interpretation and application of the law.

All high-risk AI systems will be assessed before they are placed on the market and throughout their lifecycle.

Generative artificial intelligence

Generative artificial intelligence, such as ChatGPT, will need to meet transparency requirements, i.e.:

  • disclose that the content was generated by artificial intelligence;
  • design the model to prevent the creation of illegal content;
  • publish summaries of copyrighted data used for AI training.

Limited risk

Limited-risk AI systems should meet minimum transparency requirements that allow users to make informed decisions. Users should be informed that they are interacting with AI and, after interacting with an application, can decide whether they want to continue using it. This includes AI systems that generate or manipulate image, audio or video content (e.g. deepfakes).

Criticism of the EU approach and the questionable fate of the AI Act

In June 2023, more than 150 major European companies, including Renault, Heineken, Airbus and Siemens, signed a critical open letter[5]. In their view, the draft AI Act in its current form could eliminate the opportunity that AI technology offers Europe to “rejoin the technological cutting edge”. The signatories argue that the rules are too extreme and risk undermining the EU’s technological ambitions rather than providing an enabling environment for AI innovation, as intended. One of their main concerns relates to the strict rules applied to generative AI systems, a subset of AI models usually referred to as “foundation models”. Under the draft AI Act, providers of foundation models, regardless of their intended use, would have to register their product in the EU, undergo a risk assessment and comply with transparency requirements, such as disclosing any copyrighted data used to train their models. The open letter states that companies developing these foundation models would face disproportionate compliance costs and liability risks, which could prompt AI providers to withdraw from the European market altogether. “Europe cannot afford to remain on the sidelines,” says the letter, which calls on EU lawmakers to abandon strict compliance obligations for generative AI models and instead focus on “broad principles under a risk-based approach”.


[5] See: https://www.igizmo.it/wp-content/uploads/2023/06/Open-Letter-EU-AI-Act-and-Signatories.pdf

A premature end to the AI Act?

This criticism appears to have at least some factual basis, as subsequent events in the legislative process confirmed[6]. While the so-called trilogue at the end of October 2023 seemed to produce a political consensus, including on foundation models for generative AI, in early November 2023 several EU Member States, including France, Germany and Italy, rejected any regulation of foundation models at a meeting of the Council’s “Telecom” working party. Overregulation, it seems, has become a genuine political problem.

The European legislators, i.e. representatives of the European Parliament, the Council and the Commission, last met for a trilogue from 6 to 9 December 2023. According to published information[7], a preliminary consensus appears to have been reached in the end. The list of prohibited uses of AI will be extended to remote biometric identification in public spaces, albeit with an exception for law enforcement. The final text is not yet known, and further work and translations are still ongoing. Strong political pressure for a swift conclusion of the whole legislative process on the draft AI Act can be expected, in view of the change of Council Presidency in January 2024, when Belgium takes over from Spain, and of the upcoming European Parliament elections in June 2024.


[6] E.g. https://www.euractiv.com/section/artificial-intelligence/news/eus-ai-act-negotiations-hit-the-brakes-over-foundation-models/
[7] See: https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/

The American approach

Today, there is no specific AI regulation in the US, but intensive preparatory work is under way[8]. The most likely outcome for the United States is a patchwork of bottom-up executive action; unlike Europe, the US is unlikely to pass a comprehensive national AI law in the next few years. Any legislation that does pass is likely to focus on less controversial, targeted measures, such as funding for AI research or child safety, which will disappoint proponents of strong national AI regulation. The result will not be as systematic and will certainly contain loopholes, but that does not mean it cannot be effective.

We are likely to see a variety of actions by US agencies operating in the affected sectors, particularly healthcare, financial services, housing, the workforce and child safety, alongside further government regulation. This patchwork of rules, if well implemented, could draw on the expertise of individual agencies and be better tailored to innovation. The US federal government is likely to increase spending on AI and AI research, particularly in defence and intelligence, and to use its purchasing power to shape the market. Trade collisions with Europe over AI are likely to emerge, and private companies will pursue their own “responsible AI” initiatives while facing a fragmented global regulatory environment. Looming competition from China is escalating the debate on how not to be “left behind”. The Federal Trade Commission (FTC) and the Department of Justice (DOJ) are likely to act proactively to prevent excessive concentration of AI in the most prevalent technologies. There is also a real, though less likely, chance that a key US state (e.g. California) will enact significant AI-related legislation, or that a major AI-related disaster will occur[9]. The US also holds a presidential election in 2024, and stricter regulation of AI may become an issue there too, in the context of developments in China.


[8] See https://ai.gov/
[9] See: https://www.csis.org/blogs/strategic-technologies-blog/ai-regulation-coming-what-likely-outcome

China

In mid-August 2023, a new Chinese law regulating generative artificial intelligence came into effect. The law, the latest in a series of regulations covering various aspects of AI, is internationally groundbreaking as the first to specifically target generative AI. It imposes new restrictions on companies providing these services to consumers, covering both the training data used and the outputs produced. Compared with earlier versions of the legislative proposal, the approved text is considerably weaker: requirements to correct illegal content within a three-month period and to ensure that all training data and outputs are “truthful and accurate” have been removed, and it has been clarified that the rules apply only to publicly accessible generative AI systems. A new provision states that development and innovation should be given equal weight to the safety and governance of these systems. An obligation to watermark AI-generated content has also been introduced as a necessary tool to combat misinformation.

Global harmonisation in sight?

In September 2023, the G7 nations adopted a report on generative artificial intelligence[10]. It contains no promises or indications of global harmonisation in this area. Rather, the focus is to be on voluntary codes of conduct, with national regulation limited to areas where the public interest is strongest, such as human rights, health protection, the fight against misinformation, data protection, privacy and intellectual property protection.


[10] See: https://www.oecd-ilibrary.org/science-and-technology/g7-hiroshima-process-on-generative-artificial-intelligence-ai_bf3c0c60-en

Concluding remarks

Whether, and to what extent, to regulate generative AI (such as ChatGPT) at all will be a fault line in future legislative developments in all relevant jurisdictions, including the EU and the US. Several individual aspects of the development, use and deployment of AI are already more or less regulated, e.g. data protection, privacy and intellectual property; here, appropriate interpretative and methodological work by the relevant regulators and, in time, case law may be sufficient. The red lines for AI, however, should certainly be set by statutory regulation. The near future will show whether the EU’s comprehensive, proactive approach (the AI Act) or the more restrained, laissez-faire approach of the US proves more successful.

Published: 18 December 2023

Signum.legal

law firm
