
AI Literacy under Article 4 of the AI Act: a practical guide for compliance

From driving licences to algorithms – and back again.

In February 2025, another of the significant obligations arising from Regulation (EU) 2024/1689 of the European Parliament and of the Council, laying down harmonised rules on artificial intelligence (known as the AI Act), became applicable.

While most attention so far has focused on prohibited practices (Article 5) and high-risk AI systems (Article 6 et seq.), Article 4 – less discussed, but crucial for compliance – is also taking effect in practice: the obligation to ensure an adequate level of AI literacy.

Article 4 originated as a parliamentary addition, i.e. it does not come from the original European Commission proposal. For a long time there was therefore no explanation of how its requirement is to be met. Only after the European AI Office was established did guidance begin to emerge: the Office now publishes a living repository – an open database of real-life examples from practice.

What does Article 4 actually say?

Paraphrased: Article 4 requires that all those involved in the design, development, implementation or use of AI systems have an appropriate level of AI knowledge and skills, to an extent appropriate to their role and the context in which the system is deployed. In other words: the developer should know more than the average employee, but the marketing department should not be left completely in the dark either.

The obligation is phrased in relative terms – providers and deployers must “take steps to ensure, as far as possible, a sufficient level of AI literacy among their employees and others” – which increases the importance of transparent documentation and of being able to demonstrate that the measures taken are adequate.

Although, as of the date of writing (14 May 2025), no supervisory authority has been officially designated for this area in Slovakia, Article 4 is fully valid and effective. This means that a future supervisory authority may (within legal limits, of course) retroactively review compliance with this obligation from the date it took effect.

It follows that ignoring the obligation imposed by Article 4 of the AI Act may not pay off. And perhaps even more important than the fine itself is the ability to demonstrate to the market that you are a trustworthy partner and to protect your business model. After all, artificial intelligence raises a number of challenges in terms of copyright protection – and you may end up with a customer who thanks you for the work you have supplied but refuses to pay royalties because it is not sufficiently clear that a licence could have arisen in the first place.

In other words, it is a double-edged sword: it is not enough to develop AI – it must also be used legally and responsibly, because an AI system failure can bring not only reputational damage but also lawsuits over the validity of the licensing relationship or claims for damages.

Driving licence for AI?

If you are looking for a metaphor for this obligation, the most common comparison in academic discourse is the driving licence. Just as a driver must:

  • know how a car works (basic mechanics) – the AI equivalent is a basic understanding of algorithms, inputs and outputs;
  • know the signs and traffic rules – in AI, this means distinguishing risk levels, obligations and prohibitions;
  • know how to drive – i.e. use the AI system correctly, in accordance with its purpose and the regulation.

In the same way, the “AI user” must be able to judge the boundaries of safe and lawful system behaviour. Article 4 applies not only to developers, but also to project managers, testers, lawyers, salespeople and, yes, HR departments and others.

How to build an AI literacy program in your company?
  1. The first step is to map the key roles and how they interact with AI. Not every role needs a deep technical background, but each needs a basic framework: what do the risk levels mean (prohibited, high, limited, minimal)? What does a legally usable AI system look like? When are documentation, auditing or notification to a supervisory authority mandatory?
  2. The second step is to define the minimum scope of knowledge for each role. From developers we expect an understanding of AI system design principles, the ability to embed ethical requirements into the design (e.g. according to IEEE 7000) and familiarity with standards such as ISO/IEC 42001 or the NIST AI RMF. Marketing, sales and legal departments, on the other hand, must be able to identify risky use cases, distinguish prohibited practices from lawful ones, and know when to involve the AI/legal team (a minimal illustrative sketch of such a role-to-training matrix follows this list).
  3. The third step is to set up a cycle of training and review. Keep the training short, focused and tied to a specific role. Use microlearning or online modules. Review the content once a year against the evolution of standards and the AI market.
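
To make steps 1 and 2 more tangible, here is a minimal illustrative sketch in Python of such a role-to-training matrix with a simple gap check. All role names, module names and completion data are invented for the example; the AI Act prescribes no particular format or tooling.

    # Illustrative sketch only: a hypothetical role-to-training matrix
    # for tracking Article 4 readiness. Role and module names are
    # invented; the AI Act prescribes no particular format.
    from dataclasses import dataclass, field

    @dataclass
    class Role:
        name: str
        required_modules: set[str]                # minimum scope (step 2)
        completed_modules: set[str] = field(default_factory=set)

        def gap(self) -> set[str]:
            """Modules still missing for this role (the step 1 gap analysis)."""
            return self.required_modules - self.completed_modules

    # Hypothetical minimum scope per role, following steps 1 and 2 above.
    roles = [
        Role("developer",
             {"ai-act-risk-levels", "ethical-design-ieee-7000", "iso-42001-basics"},
             {"ai-act-risk-levels"}),
        Role("marketing",
             {"ai-act-risk-levels", "prohibited-practices", "escalation-to-legal"},
             {"prohibited-practices"}),
        Role("legal",
             {"ai-act-risk-levels", "prohibited-practices", "documentation-duties"},
             set()),
    ]

    # Report which roles still have open training gaps.
    for role in roles:
        missing = role.gap()
        status = "OK" if not missing else "missing: " + ", ".join(sorted(missing))
        print(f"{role.name:<10} {status}")

Even a simple record like this makes it easier to demonstrate later – for instance during an inspection – which role was required to know what, and whether the gap has been closed.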

The Living Repository

The Living Repository is an open database of best practices maintained by the European AI Office to support learning and knowledge exchange in the field of AI literacy.

It provides concrete examples of how different organisations in the EU are approaching the requirements of Article 4 of the AI Act – i.e. awareness raising, training, internal policies and staff readiness assessments.

Although it is not a legally binding interpretation, the repository is a reference source recognised by an EU institution and, as such, can be of significant help in defending a chosen compliance approach. Its use is recommended as part of a state-of-the-art approach to implementing AI literacy. The repository is available at: https://digital-strategy.ec.europa.eu/en/library/living-repository-foster-learning-and-exchange-ai-literacy

Follow standards instead of “stamps”

There is as yet no official certification of AI literacy under Article 4 of the AI Act. Everything rests on “state-of-the-art” evidence, which is best demonstrated by aligning training with international frameworks:

  • ISO/IEC 42001 – a management system standard for AI, ideal for larger companies;
  • IEEE 7000-2021 – guidance on designing systems with ethical risks in mind;
  • NIST AI RMF 1.0 – a systematic map of risks and ways to mitigate them.

Aligning content with these standards is not just a matter of “compliance” but of defensibility in the event of an inspection.

However, this is not the only way to meet the requirements of Article 4. In many cases it is also appropriate to follow sectoral initiatives, which can set suitable conditions and ensure that training reaches not only internal staff but also external collaborators or suppliers.

Here too, however, we need to be wary of the so-called “stamp magicians” – those who only yesterday were “certifying” cyber security with non-existent tools or promising outcomes that our legal system does not even recognise. Joining professional initiatives or bringing in an external training partner only makes sense if they are respected professional bodies.

We therefore recommend turning to established professional associations, reputable companies or law firms that deal with AI regulation and the responsible deployment of systems at a professional level. The guarantee is not a rubber stamp but the expertise of partners who have actually participated in preparing training frameworks – ideally in line with the current state of the law and international standards.

What happens if you let it slide?

In the event of a breach of Article 4 of the AI Act, companies face fines of up to 7% of their total worldwide annual turnover – a ceiling higher than under the GDPR. An even more serious consequence, however, can be reputational damage: companies that fail to demonstrate the level of AI literacy in their teams risk exclusion from tenders and public contracts, loss of partners’ trust and a weakened market position.

At the same time, the risk of internal failures – misuse of the AI system, breach of a prohibition, discriminatory behaviour or a security incident – increases, which can lead to contractual disputes, penalties or loss of customers. Ultimately, ignoring this obligation can be significantly more costly than investing in responsible training and documentation.

Three steps you can take tomorrow

Start with a quick internal gap analysis: map which teams and roles in your organisation come into contact with AI systems, and assess their current level of awareness and readiness in relation to Article 4 of the AI Act. Remember that the obligation does not apply only to technical staff; AI now also affects legal, HR, sales, marketing and project management departments.

As a second step, choose a credible frame of reference to serve as the basis for training – ideally one built on internationally recognised standards. ISO/IEC 42001, the NIST AI RMF and IEEE 7000 are proven tools that allow you to design training in line with “state-of-the-art” requirements. This avoids false certainties based on dubious “certifications” or marketing promises.

The third step is to set an internal timeframe: determine by when each employee in the relevant role should have completed the initial training. If you are considering engaging an external partner, we recommend selecting established professional associations, reputable companies or law firms with professional-level expertise in AI regulation and governance. Avoid the so-called “stamp magicians” – entities that until recently promised cybersecurity “certifications” with non-existent tools and now offer “guaranteed AI compliance crash courses” with no basis in law. There is no quick fix in this area – the only guarantee is the partner’s expertise and demonstrable experience.

Conclusion (and open offer)

While the AI Act did not give us detailed turnkey guidance, it did give us something more valuable: a clearly defined framework of accountability. Article 4 does not require formal certification, but it does expect every organisation to be able to demonstrate that it understands the risks posed by AI and that its employees know how to handle AI systems legally, safely and responsibly.

To ignore this obligation is to expose the company to legal, reputational and business risks that can have long-term consequences. Conversely, organisations that invest in training today – wisely, purposefully and with an emphasis on relevant roles – build not only compliance capacity but also competitive advantage.

If you need to design training, set up an internal AI literacy verification system, or just get a second opinion on whether what you have meets market and regulatory expectations, get in touch. There is a solution – the important thing is not to start late. Because in the era of AI, it’s not just the most innovative that will survive, but the most prepared.

Published: 24 June 2025

Signum.legal

law firm
