AI literacy in companies: Why Article 4 of the AI Act matters now and how Zenjob has implemented it

Why AI literacy should be on the agenda now:

Artificial intelligence is changing not only business processes but also the legal framework. With the EU AI Act, adopted in 2024 and now taking effect in stages, the EU is creating a comprehensive set of rules for the development and use of AI systems for the first time. A central component is Article 4, which obliges companies to strengthen their employees' skills in dealing with AI, also known as AI literacy.

This applies not only to tech companies, but to organizations in every sector.
AI is used across industries, from manufacturing and logistics to healthcare and administration. Even the mere use of tools such as ChatGPT is enough to fall under the requirements of Article 4 of the AI Act.

In the future, companies will be liable for misconduct or a lack of transparency in their systems, especially in the high-risk category. To comply with the legal requirements, training must be documented and awareness must be demonstrable. At the same time, AI literacy is also a competitive factor: only those who use AI confidently and responsibly will remain competitive in the long term.

What does the AI Act say specifically?

Since February 2, 2025, AI literacy has been mandatory:

With Article 4 of the AI Act coming into force, companies must ensure that all employees who work with AI systems — whether in development, use or evaluation — are adequately trained and qualified to do so.

The AI Act is intended to enable a future in which risks are both understood and managed, and innovation is promoted.

All companies and organizations that use AI and automated systems are affected, even in areas such as customer communication, internal data analysis or automated processes.

The aim is competent, safe use of AI, including in non-technical roles.

Why this goes beyond data protection and requires more than one-off training:

In contrast to the GDPR, which governs the protection of personal data, the AI Act aims at transparency, traceability and risk minimization in algorithmic decision-making. The two frameworks complement each other but set different priorities, and both require training, each in its own area.

Target-group-specific learning paths should therefore be developed and clear usage rules for AI defined. Beyond the legal dimension, special attention should be paid to the ethical implications. The focus of training is on transparency, risk management and data protection.

However, a one-off training is not enough: what is needed is a continuous process, with regular updates to systems and internal guidelines, ongoing education and clearly assigned responsibility within the company.

Recommendation:

To position your organization securely, you should:

  • Appoint a contact person for AI.
  • Establish internal policies and governance structures.
  • Give employees guidance and confidence through clear rules, documentation standards and training offerings, particularly in the areas of data protection and transparency.

Read here how Zenjob worked with Paxa to implement AI literacy in the company:

Zenjob SE connects companies across Germany with students for flexible part-time jobs, arranges over 70,000 assignments a month and is active in more than 42 German cities. The company was founded in Berlin in 2015 by Fritz Trott, Cihan Aksakal and Frederik Fahning. Zenjob works quickly, digitally and via app, and AI-based processes play a central role, making the proper use of AI essential. The team recognized early on that competent, thoughtful use of AI not only brings efficiency but also requires clear rules, orientation and knowledge, particularly with regard to new regulatory requirements such as the EU AI Act. In the following interview, Fractional General Counsel Julian Jantze from Paxa gives insight into how he strategically tackled and implemented the topic of AI literacy with Zenjob.

This is how AI literacy was put into practice:

When did Zenjob start focusing on AI literacy, and what motivated you to invest in continuing education for employees?

Julian:

“Zenjob has been working on the topic of AI literacy among employees since last year. As a digital temporary staffing agency, Zenjob already uses a lot of artificial intelligence, and this raises specific questions. The use of AI is becoming increasingly relevant in job placement, which is why we kept a particular eye on it here: you can quickly move into high-risk AI or even into the area of prohibited AI. Zenjob handles thousands of daily contracts. When selecting our temporary workers, we use matching algorithms based on predefined, objective criteria. In the future, we want to rely more on artificial intelligence here too, but we must be careful not to fall within the scope of social scoring or emotion recognition, which could constitute prohibited AI. It is important to us to create awareness among everyone, including product developers.

The AI training for our employees was therefore meant to reach everyone and provide a common base of knowledge, but individualized and tailored to Zenjob's business model. We also wanted to give specific training to individual departments that come into particular contact with artificial intelligence, such as Product & Engineering and Marketing.

We saw a great need for action here: if AI-generated code is used in development, for example, the question of ownership rights arises.”

Note: As background, the AI Act classifies AI systems into four risk groups based on their potential impact (a short illustrative sketch of this classification follows the list):

  • Minimal risk:
    AI systems in this category pose no relevant risk to safety or fundamental rights. They can be used without any special requirements.
    Examples: email spam filters, automatic spell checking.
  • Limited risk:
    These systems carry a manageable risk of creating a lack of transparency or deceiving people. They are subject to information requirements: users must know that they are dealing with AI.
    Examples: chatbots, virtual assistants.
  • High-risk AI:
    These are applications that can have a decisive influence on human rights, safety and life decisions. They are therefore subject to strict requirements, including risk management systems, transparency and human oversight.
    Examples: AI systems for applicant selection, creditworthiness checks, medical diagnostics.
  • Prohibited AI:
    These systems are generally banned in the EU because they violate fundamental rights or manipulate people.
    Examples: social scoring, recognition of emotions in the workplace.
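
For teams that want to operationalize this classification, for example in an internal inventory of AI tools, the four tiers can be modeled as a simple data structure. The following Python sketch is purely illustrative: the RiskTier enum, the OBLIGATIONS mapping and the obligations_for helper are our own simplification of the categories above, not terminology from the AI Act.

```python
from enum import Enum

# Purely illustrative: a simplified model of the AI Act's four risk tiers.
# Tier names, example systems and obligation lists are our own shorthand
# for the categories described above, not legal text.
class RiskTier(Enum):
    MINIMAL = "minimal"        # e.g. spam filters, spell checking
    LIMITED = "limited"        # e.g. chatbots, virtual assistants
    HIGH = "high"              # e.g. applicant selection, credit checks
    PROHIBITED = "prohibited"  # e.g. social scoring, workplace emotion recognition

# Simplified duties each tier triggers, per the overview above.
OBLIGATIONS = {
    RiskTier.MINIMAL: ["no special requirements"],
    RiskTier.LIMITED: ["inform users that they are interacting with AI"],
    RiskTier.HIGH: ["risk management system", "transparency", "human oversight"],
    RiskTier.PROHIBITED: ["use is banned in the EU"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the simplified obligations for a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    # Print a quick overview, e.g. as a starting point for a tool inventory.
    for tier in RiskTier:
        print(f"{tier.value}: {', '.join(obligations_for(tier))}")
```

A mapping like this can serve as a starting point for tagging every tool in use with its tier and the follow-up duties it triggers; the legal assessment itself, of course, still belongs with counsel.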

How did you approach the topic structurally, and which implementation path did you choose?

Julian:

“Zenjob looked at external providers early on, but they were never quite able to deliver what was really important to Zenjob. It then became apparent that an internal solution was needed to precisely address the challenges facing the company.

So I set up an individual solution here.

Our big advantage was that Paxa understands Zenjob's operational processes, which tools are used and where high-risk AI is involved. The training was then built from Zenjob's day-to-day practice. It was important to us that high-risk systems were not treated as something to fear; they simply have to be used correctly. And that is where Paxa helps. The internal, individualized training solution was then developed by me in collaboration with Zenjob.”

How did you convey the content, and what format did you use?

Julian:

“We created videos with an AI video program, with me as an avatar speaker, covering topics such as AI Definition, Legal Foundations, and Risk and Compliance. Legal and ethical requirements were also integrated, all based on practical examples from Zenjob. For now, we have completed the basic training courses; the more specific ones for IT and marketing specialists are yet to come. Here, too, the support of Paxa is needed.”

How was this documented?

Julian:

“We documented the whole thing with the help of an HR tool, which shows which employees have taken part in the training. Everyone receives a certificate after successful completion, so that all documentation requirements are met.”

What feedback did you receive from the team, and what lessons did you learn?

Julian:

“We received a great deal of positive feedback; the tailored, practice-based examples in particular resonated. That was especially helpful for risk assessment, where we worked with specific examples from Zenjob showing which cases are low risk and where the risk is high. Together with the individualized presentation, this was very well received.

It was particularly important to us that implementation was as unbureaucratic as possible, without yet another round of extensive training. We managed to deliver all the important content as a video series within a manageable scope. Paxa, together with Zenjob, designed three videos and has already rolled out the complete basic training.

What was also important, and good for the future, is the clear designation of a person responsible for AI: at Zenjob, I act as AI Officer and am therefore the contact person for all issues relating to AI and compliance. We will now continue to work together on subject-specific training courses and are ready to set Zenjob up very well here too.”

Where can Paxa help?

Not every company has the resources to design and implement its own AI literacy program. This is where Paxa comes in.

Paxa provides support in five areas:

  1. Strategy consulting on AI readiness → Where does the company stand today, and where does it need to go?
  2. Development of individual learning paths → target-group- and role-specific (e.g. tech, legal, HR)
  3. Delivery of training sessions & workshops → in-house or remote, modular or as a full program
  4. Preparation of policy frameworks → guidelines for AI use, documentation & responsibilities
  5. Assistance with risk analyses & audits → support with demonstrating compliance under the AI Act

AI literacy is no longer a differentiator, but a duty.

Companies that act early not only protect themselves legally but also build trust and improve their operational excellence in dealing with AI. Article 4 of the AI Act is not just a compliance requirement, but an opportunity to implement AI broadly, responsibly and effectively.

Check now:

  • Is there already AI-related training in the company?
  • Do non-tech colleagues also understand how AI works and what its risks are?
  • Are there documented policies and clearly assigned roles for AI projects?

If you are interested, please contact us here.

Your peace of mind is our mission.