It is a milestone in the regulation of artificial intelligence: the AI Act is the European Union's first comprehensive legislation specifically aimed at AI systems. Providers are now faced with the task of making their systems not only safe but also transparent and comprehensible. Technical documentation is an indispensable tool for this. But what specific documentation requirements must be met?
We have collected and sorted the most frequently asked questions and answered them for you in the following article.
The AI Act at a glance
What is the AI Act?
The AI Act is the European Union's first comprehensive regulation specifically for artificial intelligence. Its aim is to create harmonised rules for the development, placing on the market and use of AI systems. This is intended to ensure that AI systems are designed to be human-centred and that the democratic values of the EU – such as the protection of human dignity, equality and data protection – are respected. At the same time, the harmonised regulations are intended to promote innovation and cross-border trade in AI-supported products.
When does the AI Act apply?
The act came into force on August 1, 2024. Most of its provisions apply from August 2, 2026, with staggered transition periods:
- From February 2, 2025: Ban on AI systems with unacceptable risk
- From August 2, 2025: Application of the rules for general-purpose AI models, e.g. the models behind ChatGPT
- From August 2, 2026: Application of all other provisions
- From August 2, 2027: Application of the rules for high-risk AI systems that are already covered by other EU harmonisation legislation
Who is affected by the act?
The AI Act is aimed at all parties involved in the development, marketing and use of AI systems in the European Union. This applies in particular to providers, operators, importers and distributors of AI systems. Companies based outside the EU are also subject to the regulations if they offer or use AI systems in the EU.
Exceptions: The provisions of the AI Act do not apply to AI systems that are developed or used exclusively for military purposes. Another exception is private individuals who use artificial intelligence in a purely personal, non-commercial context.
What is an AI system according to the AI Act?
To determine what exactly constitutes an AI system within the meaning of the act, the EU uses a definition closely aligned with that of the OECD. Specifically, Article 3 of the act defines an AI system as:
"... a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
This definition is broad enough that all common AI systems and generative AI applications such as ChatGPT fall under the act.
What risk classes are there for AI systems?
The AI Act takes a risk-based approach to the regulation of AI systems. To this end, AI systems are divided into risk classes based on their potential harm to the rights and freedoms of individuals and to society. The higher the risk of an AI system, the stricter the legal requirements and conditions.
The AI Act distinguishes four main risk categories (a short code sketch after the list illustrates the tiered logic):
- Unacceptable risk: AI systems classified as an unacceptable risk are prohibited outright. This applies, among other things, to AI systems that use mechanisms such as social scoring, subliminal behaviour manipulation or real-time remote biometric identification in publicly accessible spaces (the latter with narrow exceptions for law enforcement).
- High-risk systems: High-risk AI systems are affected by most of the provisions of the AI Act and are therefore subject to strict regulatory requirements. Typical application areas are critical infrastructure, healthcare and banking, law enforcement and the administration of justice, and migration, asylum and border control.
- Limited risk: AI systems in this category, such as chatbots or personalised recommendation systems, usually interact directly with humans. They are subject to less stringent requirements but must fulfil certain transparency and information obligations, which can affect both providers and operators of the AI system. For example, AI-based chatbots must be designed in such a way that it is clear to users at all times that they are interacting with an AI system.
- Minimal risk: There are currently no specific legal requirements for AI systems in this category (e.g. search functions, spam filters or computer games) as long as they do not pose a risk to the rights and freedoms of people. However, in order to classify their application as low risk, the AI provider must carry out a corresponding risk assessment of their system.
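To make the tiering tangible, here is a minimal, purely illustrative Python sketch. The four tiers come from the act, but the example use cases and the mapping logic are our own simplified assumptions; a real classification always requires a legal assessment of the individual case.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers of the EU AI Act (simplified labels)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict requirements incl. technical documentation"
    LIMITED = "transparency and information obligations"
    MINIMAL = "no specific obligations"


# Illustrative mapping of example use cases to tiers (our assumption,
# not an official list from the act).
EXAMPLE_USE_CASES = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}


def obligations_for(use_case: str) -> str:
    # Defaulting to HIGH is a deliberately conservative assumption.
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.HIGH)
    return f"{use_case}: {tier.name} -> {tier.value}"


if __name__ == "__main__":
    for case in EXAMPLE_USE_CASES:
        print(obligations_for(case))
```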
GPAI models as a special case
So-called general-purpose AI models (GPAIs) form a separate category in the EU AI Act and must meet specific requirements. GPAIs are AI models that can perform a wide range of tasks and are often integrated into different applications, such as ChatGPT or Claude. The requirements include creating technical documentation, carrying out risk assessments and ensuring cybersecurity.
The technical documentation of AI systems
What role does technical documentation play?
The concept of technical documentation is a common thread running through the AI Act. It plays a key role, especially for high-risk AI systems and GPAIs. Essentially, the documentation serves to provide all essential information on the development, operation and safety of the AI system. Specifically, complete and precise documentation helps with:
- Meeting regulatory requirements by providing the authorities with all the necessary information for inspection.
- Minimising security risks by identifying potential vulnerabilities and documenting solutions.
- Creating trust by making the use and purpose of the AI systems comprehensible for the operators.
Who is the technical documentation for?
There are essentially two target groups for the documentation: firstly, authorities, whose focus is regulatory control, and secondly, operators and downstream providers, who depend on being able to use the systems safely.
What requirements must the technical documentation for authorities fulfil (high-risk AI)?
Essential requirements for the technical documentation of high-risk AI are defined in Article 11 of the AI Act:
- The technical documentation of a high-risk AI system must be prepared before it is placed on the market or put into service and must be kept up to date.
- It must be clear, comprehensible and complete and contain at least the information specified in Annex IV of the act.
- If the AI system is combined with a product that falls under other EU harmonisation legislation (e.g. the Machinery Directive), a single set of joint technical documentation covering both the product and the AI system is sufficient.
The requirements in Annex IV
Annex IV of the AI Act defines the specific content that the technical documentation for high-risk AI systems must contain. It can be roughly divided into the following three main sections; a simplified skeleton after the list shows one possible way to structure this content. For a detailed analysis, the individual sections of the regulation should be consulted.
- General description of the system: Purpose, provider name, version, interaction with other systems, form of deployment (e.g. software package, API), hardware requirements and user interface
- Technical details and development: Development process, system architecture, data used (origin, cleansing, validation) and security and testing measures (accuracy, robustness, cybersecurity)
- Monitoring and risk management: Performance limits, risks (e.g. discrimination), risk management system and changes during the life cycle
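As a thinking aid, the three main sections could be modelled as a simple data structure, for instance like the following Python sketch. The field names are our own shorthand for the topics listed above, not official terms from Annex IV, and the sketch is deliberately condensed.

```python
from dataclasses import dataclass, field


@dataclass
class GeneralDescription:
    """Roughly Annex IV: general description of the system."""
    intended_purpose: str
    provider_name: str
    version: str
    deployment_form: str              # e.g. "software package" or "API"
    hardware_requirements: str
    interacting_systems: list[str] = field(default_factory=list)


@dataclass
class DevelopmentDetails:
    """Roughly Annex IV: technical details and development."""
    system_architecture: str
    data_origin: str
    data_cleansing_and_validation: str
    testing_measures: list[str] = field(default_factory=list)  # accuracy, robustness, cybersecurity


@dataclass
class RiskAndMonitoring:
    """Roughly Annex IV: monitoring and risk management."""
    performance_limits: str
    known_risks: list[str] = field(default_factory=list)       # e.g. discrimination
    risk_management_system: str = ""
    lifecycle_changes: list[str] = field(default_factory=list)


@dataclass
class AnnexIVDocumentation:
    """Container for the three condensed main sections."""
    general: GeneralDescription
    development: DevelopmentDetails
    monitoring: RiskAndMonitoring
```

Such a structure makes it easy to check drafts automatically for missing fields; the legally binding list of contents, however, remains Annex IV itself.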
What requirements must the technical documentation for the operator meet (high-risk AI)?
The documentation intended for the operator of the AI system is specified in Article 13. It must be precise, complete and clearly understandable, and it must enable the operator to use the system safely and effectively.
Important contents of the instructions for operators include (a minimal completeness check follows the list):
- System provider: Name and contact details of the provider or their authorised representative
- Technical characteristics: Purpose, accuracy, robustness, risks, usage instructions and input data requirements
- System updates: Description of possible changes and their impact on system performance
- Human oversight: Measures for interpreting and controlling system outputs
- Technology and maintenance: Hardware requirements, expected service life, maintenance and update measures
- Logging: If applicable, information on mechanisms for traceability and data storage
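A simple way to keep track of these contents is an automated completeness check. The following Python sketch is purely illustrative; the section names are our own shorthand for the Article 13 topics above, not official identifiers.

```python
# Topics that the instructions for operators should cover (our shorthand
# for the Article 13 contents listed above, not official identifiers).
REQUIRED_SECTIONS = {
    "provider_contact",
    "intended_purpose",
    "accuracy_and_robustness",
    "known_risks",
    "input_data_requirements",
    "update_policy",
    "human_oversight",
    "maintenance_and_lifetime",
    "logging",  # only where applicable
}


def missing_sections(draft_sections: set[str]) -> set[str]:
    """Return the topics not yet covered by a draft of the instructions."""
    return REQUIRED_SECTIONS - draft_sections


# Example: a draft that still lacks oversight and logging information
draft = {
    "provider_contact", "intended_purpose", "accuracy_and_robustness",
    "known_risks", "input_data_requirements", "update_policy",
    "maintenance_and_lifetime",
}
print(sorted(missing_sections(draft)))  # ['human_oversight', 'logging']
```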
What requirements must the technical documentation for GPAIs fulfil?
For general-purpose AI models, the AI Act distinguishes between requirements for public authorities and requirements for downstream providers that integrate the model into their own products.
In principle, the following applies (a schematic example follows the list):
- Documentation for public authorities: The requirements focus on information on model purpose, architecture, data use, security measures and testing strategies.
- Documentation for downstream providers: Here the focus is on information on the purpose of use, restrictions on use and technical details of integration into the higher-level system.
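In practice, both documents can be derived from one master document. The following Python sketch illustrates the idea schematically; the topic names and their assignment to audiences are our own simplified assumptions based on the two bullet points above.

```python
# Which topic from a single master document goes to which audience
# (illustrative assignment, not an official mapping from the act).
MASTER_DOC = {
    "model_purpose":       {"authorities", "downstream_providers"},
    "architecture":        {"authorities"},
    "data_use":            {"authorities"},
    "security_measures":   {"authorities"},
    "testing_strategies":  {"authorities"},
    "usage_restrictions":  {"downstream_providers"},
    "integration_details": {"downstream_providers"},
}


def view_for(audience: str) -> list[str]:
    """Collect the topics relevant for one of the two audiences."""
    return [topic for topic, readers in MASTER_DOC.items() if audience in readers]


print(view_for("authorities"))
print(view_for("downstream_providers"))
```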
How must the documentation be made available?
With regard to publication, Article 13 of the act states that providing the operating instructions digitally is generally permitted. Especially in the software sector, this is probably the most sensible solution for giving users flexible and easy access. However, it remains to be seen whether types of provision beyond the classic PDF will be accepted. It is therefore important to keep an eye on further developments and publications by the EU Commission.
Is there any further information on the concrete implementation of the documentation?
Unfortunately, no specific requirements for the linguistic and visual implementation of the technical documentation can be derived from the act. Whether an official EU guideline will specify this topic at a later date remains to be seen. In the meantime, AI providers can fall back on proven standards such as DIN EN 82079-1. This standard defines general requirements for the creation of clear, structured and comprehensible user information and thus also provides a good basis for the documentation of AI systems.
Simplified procedure planned for SMEs and start-ups
For SMEs and start-ups, it is permissible to provide the technical documentation of a high-risk AI system in a simplified form. This is done using a special form provided by the EU Commission and accepted by notified bodies for the purpose of conformity assessment. However, no further information is yet available on the exact design of the form or its publication.
How is compliance with the regulations checked?
Monitoring is carried out by the European Union's AI Office and by national supervisory authorities in the respective member states. In Germany, the Federal Network Agency (BNetzA) will presumably assume the role of the national AI supervisory authority.