If a company uses ChatGPT via an API on its platform, marks the system with its own brand or trademark, or materially changes the system or its intended purpose, it may be considered a provider of a high-risk AI system.
What is the AI Act?
The AI Act is the European Union's legislative framework for regulating artificial intelligence. Among other things, it imposes obligations on downstream providers who use or integrate AI systems created by others.
The legal obligations of downstream AI providers include, for example, ensuring transparency, data protection, and the safe use of AI systems. Even if they did not develop a given AI system themselves, they are responsible for ensuring that the technologies they use meet all legal requirements and do not put users at risk.
Downstream providers, who use or integrate artificial intelligence (AI) systems created by other entities, occupy a specific position under the AI Act. Their obligations depend on the nature and risk level of the AI system they use.
For example, if a company uses ChatGPT via an API on its platform and brands it with its own name or trademark, substantially modifies the system, or changes its intended purpose, it may be considered a provider of a high-risk AI system. In such a case, the downstream provider is subject to the obligations of a provider under Article 16 of the AI Act, which imposes a significant regulatory burden.
The downstream provider is then deemed to be the provider of that particular AI system, which means that it must also comply with the obligations under Articles 17, 18 and 20. These include detailed documentation of the design and performance of the system, the obligation to keep records and logs of the system's operation that regulators may request for review, and rules for monitoring and updating high-risk systems to ensure their safety throughout their lifecycle.
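To make the record-keeping obligation more concrete, here is a minimal sketch, in Python, of how a downstream provider might log each call to an integrated AI model so that the records can later be produced for review. The function and field names are purely illustrative assumptions, not requirements taken from the AI Act or from any particular vendor's SDK.

```python
import json
import time
import uuid
from pathlib import Path

# Hypothetical append-only event log kept by the downstream provider.
LOG_FILE = Path("ai_system_event_log.jsonl")

def log_ai_event(model_id: str, model_version: str,
                 request_summary: str, response_summary: str) -> None:
    """Append one structured record of an AI system call for later review."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_id": model_id,
        "model_version": model_version,
        "request_summary": request_summary,    # summaries only; avoid raw personal data
        "response_summary": response_summary,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example: record a single scoring call made through a third-party model API.
log_ai_event(
    model_id="fraud-detection-model",
    model_version="2024-10",
    request_summary="transaction batch of 500 records scored",
    response_summary="12 transactions flagged as suspicious",
)
```

An append-only, structured format such as JSON Lines keeps such records easy to retain and audit, but the actual scope and retention period of the logs should follow the legal assessment, not this sketch.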
A practical example often explains more than the theory itself, so let us consider the following situation:
Example: Microsoft and Bank XYZ
Microsoft provides an artificial intelligence model for risk analysis and fraud detection using its Azure AI product. It offers this sophisticated model to other companies via a cloud service and API, although it was not originally designed for highly regulated areas such as healthcare or banking.
Bank XYZ (a large European bank) uses this model as a downstream provider and integrates it into its fraud monitoring and prevention application. In this way, Bank XYZ uses the Azure AI model to analyze transactions and detect suspicious customer behavior. In this case, various situations may arise that affect Bank XYZ’s obligations under the AI Act:
- Marking the system with the bank’s brand
- If Bank XYZ brands the AI system with its name and integrates it directly into its banking services, it becomes a direct provider to end users. Even if it does not change the original AI model code, it must ensure compliance with the AI Act requirements, including data protection, accessibility and security.
- Substantial change to the system
- If Bank XYZ modifies the Azure AI model, for example by adding features that allow the system to make decisions about credit allocation or investment recommendations, this may be considered a substantial modification. This means that the bank must comply with all provider obligations under Article 16 of the AI Act. These include, for example, implementing a quality management system and keeping records of the system’s operation (logs).
- Change in the intended purpose of the system
- If Bank XYZ also used Azure AI to assess credit risks that would affect mortgage or consumer lending decisions, the model could be classified as a high-risk system. In this case, Bank XYZ would be subject to additional stringent obligations under Article 16 of the AI Act.
In all of these situations, Bank XYZ would be fully responsible for compliance with the AI Act, and it would need to assess carefully whether its changes push the system into the high-risk category; the sketch below outlines such a check in simplified form.
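As a purely illustrative aid, the following Python sketch encodes the three triggering situations described above as a simple screening check. The data class and its fields are hypothetical simplifications; a real determination requires a legal assessment of the specific integration, not a boolean test.

```python
from dataclasses import dataclass

@dataclass
class IntegrationFacts:
    """Hypothetical summary of how a company integrates a third-party AI system."""
    rebrands_under_own_name: bool         # puts its own name or trademark on the system
    makes_substantial_modification: bool  # e.g. adds credit-decision features
    changes_intended_purpose: bool        # e.g. repurposes the model for credit scoring

def likely_treated_as_provider(facts: IntegrationFacts) -> bool:
    """Simplified screening: any single trigger suggests provider obligations may apply."""
    return (
        facts.rebrands_under_own_name
        or facts.makes_substantial_modification
        or facts.changes_intended_purpose
    )

# Example: Bank XYZ rebrands the integrated model and adds credit-allocation features.
bank_xyz = IntegrationFacts(
    rebrands_under_own_name=True,
    makes_substantial_modification=True,
    changes_intended_purpose=False,
)
print(likely_treated_as_provider(bank_xyz))  # True -> carry out a full legal assessment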
You can read how to determine if a system is high risk in our previous article here.
Conclusion: important aspects of AI integration
The obligations of providers of high-risk AI systems are a broad topic that will not be covered in detail in this article. However, in the future we will prepare a separate article explaining this area of the AI Act in detail.
From a practical perspective, however, it is important to be cautious when deciding whether to integrate AI into a company’s processes or products. In particular, companies need to check whether the way they use or modify an AI system brings them within the requirements of the AI Act. This matters because, under this regulatory approach, a significant number of small and medium-sized enterprises will also fall within the regulation of high-risk systems.
Businesses have two basic options: they can avoid acting as providers by purchasing an off-the-shelf AI product from a third party, or they can develop their own AI system and meet all the associated regulatory requirements.