It is a milestone in the regulation of artificial intelligence: the AI Act is the European Union's first comprehensive legislation specifically aimed at AI systems. Providers are now faced with the task of making their systems not only safe but also transparent and comprehensible. Technical documentation is an indispensable tool for this. But what specific documentation requirements must be met?
We have collected and sorted the most frequently asked questions and answered them for you in the following article.
The AI Act is the European Union's first comprehensive regulation specifically for artificial intelligence. Its aim is to create harmonised rules for the development, placing on the market and use of AI systems. This is intended to ensure that AI systems are designed to be human-centred and that the democratic values of the EU – such as the protection of human dignity, equality and data protection – are respected. At the same time, the harmonised regulations are intended to promote innovation and cross-border trade in AI-supported products.
The act came into force on August 1, 2024. Most of its provisions apply from August 2, 2026, with staggered transition periods before and after that date:

- February 2, 2025: the prohibitions on unacceptable-risk practices and the AI literacy obligations apply.
- August 2, 2025: the rules for general purpose AI models and the governance provisions apply.
- August 2, 2026: most of the remaining provisions apply, including the requirements for high-risk AI systems under Annex III.
- August 2, 2027: the obligations for high-risk AI systems that are safety components of regulated products (Annex I) apply.
The AI Act is aimed at all parties involved in the development, marketing and use of AI systems in the European Union. This applies in particular to players such as providers, operators, importers and distributors of AI systems. Companies outside the EU are also subject to the regulations if they offer or use AI systems in the EU.
Exceptions: The provisions of the AI Act do not apply to AI systems that are developed or used exclusively for military purposes. Private individuals who use artificial intelligence in a non-commercial context are also exempt.
To determine what exactly constitutes an AI system within the meaning of the act, the EU relies on a definition closely aligned with the OECD's. Specifically, the act defines an AI system as follows:
" ... a machine-based system that is designed to operate with varying degrees of autonomy and that, once operational, can be adaptive and that derives from the inputs received for explicit or implicit goals how outputs such as predictions, content, recommendations or decisions are produced that can influence physical or virtual environments."
This definition is broad enough that all common AI systems, including generative AI applications such as ChatGPT, fall under the act.
The AI Act takes a risk-based approach to the regulation of AI systems. To this end, AI systems are divided into risk classes based on their potential harm to the rights and freedoms of individuals and to society. The higher the risk of an AI system, the stricter the legal requirements and conditions.
The AI Act distinguishes four main risk classes:

- Unacceptable risk: practices such as social scoring or manipulative systems are prohibited outright.
- High risk: systems used in sensitive areas such as critical infrastructure, employment or law enforcement are subject to strict requirements, including technical documentation.
- Limited risk: systems such as chatbots are subject to transparency obligations.
- Minimal risk: all other systems, which face no specific obligations under the act.
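For developers who want to make the tiering tangible, the four classes and their regulatory consequences can be expressed as a simple lookup. The following Python snippet is a purely illustrative sketch: the enum RiskClass and the CONSEQUENCES mapping are our own shorthand, not terminology from the act.

```python
from enum import Enum


class RiskClass(Enum):
    """The four risk tiers of the EU AI Act (illustrative labels)."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices (Art. 5)
    HIGH = "high"                  # strict requirements (Art. 8 ff.)
    LIMITED = "limited"            # transparency obligations (Art. 50)
    MINIMAL = "minimal"            # no specific obligations


# Hypothetical mapping from risk class to the regulatory consequence.
CONSEQUENCES = {
    RiskClass.UNACCEPTABLE: "Placing on the market and use are prohibited.",
    RiskClass.HIGH: "Conformity assessment, technical documentation, "
                    "risk management and human oversight are required.",
    RiskClass.LIMITED: "Users must be informed that they are interacting "
                       "with an AI system.",
    RiskClass.MINIMAL: "No specific obligations under the AI Act.",
}

if __name__ == "__main__":
    for risk in RiskClass:
        print(f"{risk.value}: {CONSEQUENCES[risk]}")
```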
GPAI models as a special case
So-called general purpose AI models (GPAIs) form a separate category in the EU AI Act and must meet special requirements. GPAIs are AI models that can perform a wide range of tasks and are often integrated into different applications, such as ChatGPT or Claude. The requirements include creating technical documentation, providing information to downstream providers and, for models with systemic risk, carrying out risk assessments and ensuring cybersecurity.
The concept of technical documentation runs through the AI Act like a common thread. It plays a key role, especially for high-risk AI systems and GPAIs. Essentially, the documentation serves to provide all essential information on the development, operation and safety of the AI system. Specifically, complete and precise documentation helps with:

- demonstrating conformity with the requirements of the act,
- enabling authorities to carry out regulatory control, and
- allowing operators and downstream providers to use the system safely and effectively.
There are essentially two target groups for the documentation: firstly, authorities with a focus on regulatory control and secondly, operators and downstream providers who depend on being able to use the systems safely.
Essential requirements for the technical documentation of high-risk AI are defined in Article 11 of the AI Act:

- The documentation must be drawn up before the system is placed on the market or put into service and must be kept up to date.
- It must demonstrate that the system complies with the requirements for high-risk AI systems.
- It must contain at least the elements set out in Annex IV.
- SMEs and start-ups may provide the documentation in a simplified form (see below).
The requirements in Annex IV
Annex IV of the AI Act defines the specific content that technical documentation for high-risk AI systems must contain. It can be roughly divided into three main sections: a general description of the AI system, a detailed description of its elements and development process, and information on monitoring, functioning and control, including risk management and post-market monitoring. For a detailed analysis, the individual points of the annex should be consulted. A structured overview follows as a sketch below.
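For providers building internal compliance tooling, the nine documentation items of Annex IV can be tracked as a simple checklist. The following Python sketch is illustrative only: the dictionary keys and the helper function missing_items are our own shorthand, not official terms from the regulation.

```python
# Checklist paraphrasing the nine documentation items of Annex IV.
ANNEX_IV_ITEMS = {
    "general_description": "General description of the AI system",
    "development_process": "Detailed description of elements and development process",
    "monitoring_and_control": "Information on monitoring, functioning and control",
    "performance_metrics": "Appropriateness of the performance metrics used",
    "risk_management": "Description of the risk management system (Art. 9)",
    "lifecycle_changes": "Relevant changes made over the system's lifecycle",
    "standards_applied": "Harmonised standards applied, or other solutions used",
    "declaration_of_conformity": "Copy of the EU declaration of conformity",
    "post_market_monitoring": "Description of the post-market monitoring plan (Art. 72)",
}


def missing_items(prepared: set[str]) -> list[str]:
    """Return the Annex IV items not yet covered by the documentation draft."""
    return [desc for key, desc in ANNEX_IV_ITEMS.items() if key not in prepared]


# Example: a draft that so far covers only the first two items.
print(missing_items({"general_description", "development_process"}))
```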
The documentation intended for the operator of the AI system is specified in Article 13. It must be precise, complete and clearly understandable and must also ensure that the operator can use the system safely and effectively.
Important contents of the instructions for operators are:

- the identity and contact details of the provider,
- the characteristics, capabilities and limitations of the system's performance, including its intended purpose and the declared levels of accuracy, robustness and cybersecurity,
- known and foreseeable risks and circumstances of misuse,
- the human oversight measures in place,
- the computational and hardware resources needed, the expected lifetime, and any necessary maintenance and care measures, and
- where relevant, mechanisms for collecting and interpreting logs.

A sketch of how this content could be structured internally follows the list.
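As an illustration of how a provider might model these contents, here is a minimal Python sketch. The class InstructionsForUse, its field names and the example values are hypothetical and merely paraphrase Article 13(3); they are not terms or data from the act.

```python
from dataclasses import dataclass


@dataclass
class InstructionsForUse:
    """Sketch of core Art. 13(3) contents; field names are our own shorthand."""
    provider_identity: str      # name and contact details of the provider
    intended_purpose: str       # characteristics, capabilities and limitations
    accuracy_levels: str        # declared accuracy, robustness, cybersecurity
    known_risks: list[str]      # foreseeable risks and misuse scenarios
    human_oversight: str        # oversight measures available to the operator
    maintenance: str            # resources, expected lifetime, maintenance
    logging: str = "n/a"        # log collection mechanisms, where relevant


# Hypothetical example for a fictitious high-risk system.
doc = InstructionsForUse(
    provider_identity="Example AI GmbH, compliance@example.eu",
    intended_purpose="Pre-screening of loan applications",
    accuracy_levels="Accuracy >= 95% on the declared test set",
    known_risks=["bias against underrepresented groups"],
    human_oversight="Every rejection is reviewed by a human clerk",
    maintenance="Model retraining every 12 months",
)
print(doc.intended_purpose)
```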
For general purpose AI models, the AI Act distinguishes between documentation intended for the supervisory authorities and information intended for downstream providers that integrate the model into their products.
In principle, the following applies: providers of GPAI models must draw up technical documentation for the AI Office and the national authorities and keep it up to date, and they must provide downstream providers with the information and documentation they need to understand the model's capabilities and limitations and to fulfil their own obligations under the act.
With regard to publication, Article 13 of the act states that operating instructions may generally be provided digitally. Especially in the software sector, this is probably the most sensible way of giving users flexible and easy access. However, it is not yet clear which forms of provision will be accepted beyond the classic PDF. It is therefore important to keep an eye on further developments and publications by the EU Commission.
Unfortunately, no specific requirements for the language and design of the technical documentation can be derived from the act. Whether an official EU guideline will address this topic at a later date remains to be seen. In the meantime, AI providers can fall back on proven standards such as DIN EN 82079-1. This standard defines general requirements for creating clear, structured and comprehensible user information and thus provides a good basis for documenting AI systems as well.
Simplified procedure planned for SMEs and start-ups
For SMEs and start-ups, it is permissible to provide the technical documentation of a high-risk AI system in a simplified form. This is done using a special form provided by the EU Commission and accepted by notified bodies for the purpose of conformity assessment. However, no further information is yet available on the exact design of the form or its publication.
Monitoring is carried out by the AI Office of the European Union and by national supervisory authorities in the member states. In Germany, the Federal Network Agency (BNetzA) will presumably assume the role of the national AI supervisory authority.