
New AI regulation: who will be most affected?

In recent years, we have witnessed tremendous advances in the field of artificial intelligence (AI), which bring enormous opportunities on the one hand and great risks on the other.

As we reported in the previous issue, AI regulation is on the rise around the world, with the AI Act recently passed in the European Union after a lengthy legislative process.

The issue of finding the right rules for AI runs much deeper than it might seem at first glance, as the use of AI will be regulated in the EU even in areas where few would expect it, such as elevators, traffic signs or even some medical applications. The aim of this article is to introduce readers to the basic range of entities that will be affected by this regulation.

The Artificial Intelligence Act (AI Act) is based on a categorisation of the risks posed by a particular AI system. Four risk categories are distinguished:

  1. Prohibited systems: systems posing an unacceptable risk; these are banned outright.
  2. High-risk systems: systems with significant risk; these are the most extensively regulated under the AI Act.
  3. Limited-risk systems: subject mainly to transparency obligations, set out in particular in Article 50 of the AI Act.
  4. Minimal-risk systems: not specifically regulated, because their risk is very limited.
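
For readers who prefer to think in code, this tiered logic can be expressed as a small sketch. Everything below is purely illustrative and not legal advice: the tier names, the classify_risk helper and its example mappings are our own invention, not terminology or a test taken from the AI Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative model of the AI Act's four risk tiers (names are ours)."""
    PROHIBITED = 1  # unacceptable risk: banned outright (e.g. social scoring)
    HIGH = 2        # significant risk: the most extensively regulated tier
    LIMITED = 3     # transparency obligations under Article 50 (e.g. chatbots)
    MINIMAL = 4     # not specifically regulated (e.g. OCR, spam filters)

# Hypothetical lookup table built from the examples in this article;
# real classification always requires legal analysis of the concrete system.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.PROHIBITED,
    "remote biometric identification": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "ocr": RiskTier.MINIMAL,
}

def classify_risk(use_case: str) -> RiskTier | None:
    """Return the illustrative tier for a known example, else None (unknown)."""
    return EXAMPLE_TIERS.get(use_case.lower())

print(classify_risk("OCR"))  # RiskTier.MINIMAL, matching the example above
```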

Did you know that OCR technology is also considered an AI system? However, it probably carries only minimal risk. Conversely, a so-called social scoring system risks being banned outright.

Prohibited systems

Prohibited AI systems are those that can cause physical or psychological harm by manipulating behaviour or by exploiting the vulnerabilities of specific groups, such as children or the elderly. Social scoring and uses of AI that discriminate on the basis of behaviour or personal characteristics are also prohibited, as is real-time remote biometric identification of people in public places, except in very specific circumstances.

For illustrative purposes, here is a brief list of prohibited systems using AI:

  • Manipulative techniques to influence people’s behaviour without their knowledge: for example, an app that uses visual or audio cues to manipulate users into buying a particular product.
  • Exploiting people’s vulnerabilities based on age, disability, or social or economic situation to influence their behaviour: this may include an advertising system that targets ads for expensive medicines to older people or people with health problems.
  • Social scoring of people based on their social behaviour and personal characteristics with negative consequences: an example is a system that assigns social scores to citizens based on their online activity and affects their access to services such as housing or employment.
  • Profiling or personality-based crime risk assessment: an AI system that predicts the future criminal behaviour of individuals based solely on their past records and personality traits, without further evidence.
  • Creating or extending facial recognition databases through broad facial feature extraction: a system that collects and analyses people’s faces from public CCTV footage without their knowledge or consent.
  • Emotion recognition in the workplace and educational institutions (except for health or safety reasons): an AI system that monitors and analyses the emotions of employees during the working day for the purpose of evaluating their performance.
  • Biometric categorisation of persons based on biometric data to infer race, political views, trade union membership, religious beliefs, sex life or sexual orientation: a system that analyses biometric data to determine a person’s religious beliefs or political views and uses that data for discriminatory purposes.
  • Use of AI systems for real-time remote biometric identification in public spaces for law enforcement purposes (with exceptions for certain serious cases): a system that continuously monitors public spaces and identifies persons in real time, for example at an airport, without an adequate legal basis.

High-risk AI systems

In general, AI systems are considered high-risk under the AI Act based on their intended use. These uses are exhaustively enumerated in the AI Act, although the list may be amended over time by decision of the European Commission. In summary, AI systems used in the following areas are considered high-risk:

  • Biometrics
  • Critical infrastructure: security components in the management and operation of critical infrastructure, including digital infrastructure.
  • Education and training
  • Employment, management of workers and access to self-employment
  • Access to and use of essential private services and essential public services and benefits
  • Law enforcement: insofar as their use is permitted under relevant Union or national law.
  • Migration, asylum and border control management
  • Administration of justice and democratic processes

Another way to identify a high-risk system is to assess whether the system is a safety component of a product (or is itself such a product) covered by European Union harmonisation legislation. Such a product must undergo a third-party conformity assessment before it can be placed on the market or put into service under European rules. This test matters for operators and importers: if their product must undergo a third-party conformity assessment, it is likely a high-risk system and subject to many obligations.
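
As a rough mental model only, the test described above amounts to two yes/no questions. The sketch below is our own simplification, not the statutory wording; the type and field names are hypothetical, and a real assessment belongs in the hands of a lawyer.

```python
from dataclasses import dataclass

@dataclass
class ProductCheck:
    # Hypothetical fields mirroring the two questions in the paragraph above.
    ai_is_safety_component: bool           # is the AI a safety component (or the product itself)?
    third_party_assessment_required: bool  # does EU harmonisation law require third-party conformity assessment?

def likely_high_risk(p: ProductCheck) -> bool:
    """Rough heuristic for this second high-risk route; illustrative, not legal advice."""
    return p.ai_is_safety_component and p.third_party_assessment_required

# Example: an AI-based safety controller in a lift would tick both boxes.
print(likely_high_risk(ProductCheck(True, True)))  # True
```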

Examples of product areas in which high-risk systems may arise:

  • Toy safety
  • Recreational craft and jet skis
  • Lifts and safety components for lifts
  • Equipment and protective systems in potentially explosive atmospheres
  • Radio equipment
  • Pressure equipment
  • Cableway facilities
  • Personal protective equipment
  • Appliances burning gaseous fuels
  • Medical devices and in vitro diagnostic medical devices, including certain medical applications

These examples illustrate the wide range of areas in which an AI system may be considered high-risk and therefore subject to strict regulation and compliance assessments.

Who would have expected the AI Act to apply not only to highly advanced applications but also to elevators, personal protective equipment or even toys, whenever any part of the product relies on AI? And the definition of AI is quite broad. We would therefore like to point out that these obligations do not apply only to manufacturers: in the case of a high-risk system, they extend to almost all stakeholders in the supply chain.

Obliged entities

From a practical point of view, the most important thing is to assess what obligations arise once a particular application or business model has been identified as falling into an area considered high-risk, such as biometrics or toy safety. For example, the following persons will be subject to regulatory obligations:

  • AI product manufacturer: a person or company that develops and manufactures products containing artificial intelligence and markets them under its name or brand.
  • AI provider: a person or organisation that develops or has developed an AI system and markets it under its own name or brand. This can be a company, government office, agency or other entity that makes the AI system available for use, either for a fee or for free.
  • Deployer (AI user): a person or organisation that uses an AI system in the course of its work or activity, except where the AI system is used for personal, non-professional purposes.
  • Importer: a person or company located in the European Union that places on the EU market an AI system bearing the name or brand of a company established outside the EU.
  • Distributor: a person or company in the supply chain that makes an AI system available on the European Union market, but is not a provider or importer.

Each of these entities is subject to the AI Act and to many of the obligations it imposes.

The regulation of artificial intelligence through the AI Act represents a significant step towards the safe use of AI technologies. These new rules will touch on a wide range of areas and introduce obligations for manufacturers, providers, users, importers and distributors of AI systems, even in unexpected areas such as toys.

If you have identified that your application or business model may fall into the category of high-risk systems, it is important to take the necessary steps to comply with any obligations under the AI Act. Don’t hesitate to reach out to experts who can help you set up your system to comply with regulatory requirements.

The implementation of AI technologies brings enormous potential for innovation and growth, but only if it is done with a focus on the safety and security of individuals. The AI Act is a step in the right direction and compliance with it contributes to public confidence in AI and its positive benefits to society.

Published: 25 June 2024

Signum.legal
law firm
