AI in medical devices
The technical life cycle in the context of the EU AI Act
AI Act (EU Regulation 2024/1689)
The European Union has adopted EU Regulation 2024/1689 on Artificial Intelligence (AI Act) to ensure the safe use of AI.
In this blog post, we provide an overview of the parts of the EU AI Act that are relevant for medical device manufacturers. It shows which requirements manufacturers and the documentation they provide to users must meet, and also addresses transparency, data governance and the product life cycle.
1. Requirements for medical devices
Chapter III, Section 2 of the AI Act sets out the requirements for high-risk AI systems, which include AI systems in medical devices.
Here are the most important requirements:
1. Compliance with the requirements and proof of conformity
The Notified Body verifies the conformity of the systems through comprehensive tests with regard to safety, performance and regulatory requirements.
2. Record-keeping obligations
Manufacturers must keep comprehensive records covering the development and operating life of the AI system in order to demonstrate conformity with the requirements. These records are used for internal audits and for submission to supervisory authorities or notified bodies.
3. Data and data governance
Manufacturers must ensure high data quality and lawful data processing. This applies both to patient data and to the training data used to develop the AI algorithms. Data management must be secure and transparent in order to guarantee the accuracy and reliability of the AI, and data protection requirements must be met.
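To make this concrete, here is a minimal sketch of an automated data quality gate for a training dataset. The field names, value ranges and checks are hypothetical examples for illustration, not requirements taken from the AI Act:

```python
# Minimal sketch of a data quality gate for training data.
# Field names and thresholds are hypothetical examples, not AI Act requirements.

def check_training_records(records, required_fields=("patient_age", "image_id", "label")):
    """Return a list of human-readable quality issues found in the dataset."""
    issues = []
    for i, record in enumerate(records):
        # Completeness: every record must carry all required fields.
        missing = [f for f in required_fields if record.get(f) is None]
        if missing:
            issues.append(f"record {i}: missing fields {missing}")
        # Plausibility: a simple range check as a stand-in for richer validation.
        age = record.get("patient_age")
        if age is not None and not (0 <= age <= 120):
            issues.append(f"record {i}: implausible patient_age {age}")
    return issues

if __name__ == "__main__":
    sample = [
        {"patient_age": 54, "image_id": "img-001", "label": 1},
        {"patient_age": 250, "image_id": "img-002", "label": 0},   # implausible age
        {"patient_age": None, "image_id": "img-003", "label": 1},  # missing value
    ]
    for issue in check_training_records(sample):
        print(issue)
```

In practice such a gate would sit in the data preparation pipeline, so that records failing the checks are documented and corrected before training.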
4. Risk management
All possible risks in connection with the use of the systems must be identified, evaluated and minimized, including:
- technological risks,
- ethical risks,
- security-related risks.
The risk management system must be regularly reviewed and updated to take account of new risks.
5. Technical documentation
Manufacturers must provide detailed technical documentation describing the
- structure,
- functionality,
- and life cycle
of the system. This documentation is reviewed by notified bodies and supervisory authorities and must contain information on the
- algorithms,
- training process,
- and data used.
6. Transparency and provision of information for users
Users and those affected should be informed about the
- functionality,
- possible risks,
- and use
of the AI system. This means that users must be given clear information about how the AI works, what decisions it makes and what risks may be involved. Users should be able to understand the decisions made by the AI.
7. Human Oversight
AI systems must be monitored by humans. Humans remain responsible for ensuring that the system functions correctly and that no unforeseen risks arise. Human oversight should ensure that human intervention is possible whenever necessary; one common mechanism is sketched below.
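As an illustration of such an intervention point, here is a minimal, hypothetical sketch of a confidence-based gate that defers low-confidence outputs to a human reviewer. The threshold value is an assumption for illustration, not a regulatory figure:

```python
# Sketch of a human-in-the-loop gate: low-confidence AI outputs are not
# acted on automatically but routed to a human reviewer.
# The threshold of 0.9 is a hypothetical example, not a regulatory value.

CONFIDENCE_THRESHOLD = 0.9

def dispatch(prediction: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-accept: {prediction} (confidence {confidence:.2f})"
    # Below the threshold, the system defers: a human must confirm or override.
    return f"deferred to human review: {prediction} (confidence {confidence:.2f})"

print(dispatch("finding: lesion detected", 0.97))
print(dispatch("finding: lesion detected", 0.62))
```

Where the threshold sits, and whether deferral is the right fall-back at all, is itself a risk management decision that belongs in the technical documentation.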
8. Accuracy, robustness and cyber security
These requirements together ensure that the system performs its tasks reliably and securely. Manufacturers must demonstrate that their AI systems are robust enough to withstand disruptions, manipulation and cyber attacks; a secure and robust architecture is essential here.
2. Intended purpose and state of the art
1. Consideration of the intended use
The intended use of the AI system must be clearly defined, including how and in what context it is to be used. This is necessary in order to:
- optimize and test the AI system accordingly,
- assess the risks and define suitable protective measures.
2. Consideration of the state of the art
Development and implementation must be based on the latest state of research and technology (generally recognized state of the art). Manufacturers should continuously improve and adapt their systems by integrating the latest algorithms, data processing techniques and security measures.
3. Full compliance
The development and operation of AI systems must comply with the technical requirements of the AI Act. Monitoring and testing procedures must be implemented to ensure ongoing compliance; this is achieved through robust processes in each phase of the product life cycle.
4. Integration of test and reporting processes
Manufacturers can integrate the necessary testing and reporting processes into their existing quality management system (QMS). It is recommended that all necessary test procedures, risk assessments and reports be integrated into existing workflows.
3. Characteristics, abilities and performance limits
1. Requirements for transparency in system design
The operating instructions must contain a clear description of the features, capabilities and performance limits. This includes:
- Range of functions: Information about what the system can do and in which scenarios it is best suited.
- Limitations: Information on situations in which the performance of the system may not meet expectations or where special care is required.
- Potential risks: Information on the potential risks that may be associated with the use of the system.
2. Predetermined changes
Changes to the system that the manufacturer has predetermined must be described so that users understand how these changes might affect functionality.
3. Mechanisms for logging and data interpretation
Logs must be properly recorded, stored and interpreted. This requires the following mechanisms (a minimal sketch follows the list):
- Logging mechanisms: Detailed instructions for recording relevant data and events that occur during the operation of the AI system. This can help to detect problems at an early stage and evaluate the performance of the system.
- Data interpretation: Support in analyzing and interpreting the collected log data to identify patterns and anomalies and make informed decisions.
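As a sketch of what such a logging mechanism could look like in practice, here is a minimal example using Python's standard logging module. The event names and record fields are assumptions for illustration only:

```python
import json
import logging

# Sketch of structured event logging for an AI system, using Python's
# standard logging module. Event names and fields are hypothetical.
logger = logging.getLogger("ai_device")
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def log_inference_event(model_version: str, input_id: str,
                        prediction: str, confidence: float) -> None:
    # One JSON record per inference keeps logs machine-readable, which
    # simplifies later interpretation and anomaly detection.
    logger.info(json.dumps({
        "event": "inference",
        "model_version": model_version,
        "input_id": input_id,
        "prediction": prediction,
        "confidence": confidence,
    }))

log_inference_event("1.4.2", "case-0815", "positive", 0.93)
```

Keeping each log entry machine-readable is what makes the data interpretation step above tractable: patterns and anomalies can then be found with standard tooling rather than manual review.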
4. Accuracy, robustness and cyber security
1. Requirements for accuracy
AI systems must be designed and developed to achieve an appropriate level of accuracy. This includes:
- Accurate predictions and decisions: Systems should be able to make accurate predictions and informed decisions based on available data.
- Verification of accuracy: Throughout the life cycle of the system, accuracy should be checked regularly to ensure that performance standards are being met (see the sketch after this list).
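By way of illustration, here is a minimal sketch of such a recurring accuracy check against a labelled validation set. The metrics shown are standard, but the acceptance floor of 0.7 is a hypothetical value, not one taken from the Regulation:

```python
# Sketch of a recurring accuracy check against a labelled validation set.
# The metric names are standard; the acceptance floor is hypothetical.

def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

y_true = [1, 1, 0, 0, 1, 0, 1, 0]   # ground-truth labels (illustrative)
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]   # model outputs (illustrative)
sens, spec = sensitivity_specificity(y_true, y_pred)
# A release gate could fail if either metric drops below a defined floor.
assert sens >= 0.7 and spec >= 0.7, "accuracy below defined performance floor"
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Running such a check on every release, and archiving the results, also feeds directly into the record-keeping obligations described above.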
2. Robustness of the system
Robustness refers to a system's ability to function consistently and reliably, even when unexpected situations or faults occur:
- Consistency over the entire life cycle: The systems must be developed in such a way that they deliver consistent performance throughout their entire life cycle. This includes taking into account changes in the input data and in the operating environment.
- Error and fault tolerance: The systems should be as resilient as possible to errors, faults and inconsistencies. This means they should be able to recognize errors and react appropriately without this leading to dangerous situations (a fail-safe pattern is sketched below).
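The following is one possible fail-safe pattern, sketched under the assumption of simple range-checked image inputs; the value range and function names are illustrative only:

```python
# Sketch of fail-safe input handling: the system recognizes invalid or
# out-of-range inputs and refuses to produce a prediction rather than
# failing silently. The value range is a hypothetical example.

class InputRejected(Exception):
    """Raised when an input cannot be processed safely."""

def predict_safely(pixel_values):
    if not pixel_values:
        raise InputRejected("empty input")
    if any(not (0 <= v <= 255) for v in pixel_values):
        # Out-of-range data may indicate sensor faults or corruption;
        # rejecting it explicitly avoids a silently wrong prediction.
        raise InputRejected("pixel values out of expected range 0-255")
    return sum(pixel_values) / len(pixel_values)  # placeholder for a real model

try:
    predict_safely([12, 300, 45])
except InputRejected as exc:
    print(f"input rejected, falling back to safe state: {exc}")
```

The design choice here is that a refusal is always preferable to an unnoticed wrong output: the caller is forced to handle the rejection and fall back to a defined safe state.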
3. Cyber security
The following points must be taken into account when it comes to cyber security:
- Security measures: Appropriate security measures should be implemented to protect the integrity, confidentiality and availability of data. These include authentication procedures, encryption and access controls.
- Risk assessment: Before implementation, a risk assessment is required to identify potential cybersecurity vulnerabilities. Cybersecurity solutions are then developed and implemented on this basis (one building block is sketched below).
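As one small building block among the measures named above, here is a sketch of a constant-time token comparison for access control. The token handling is deliberately simplified and the values are placeholders; a real system would use a secrets store and a full authentication scheme:

```python
import hashlib
import hmac

# Sketch of one access-control building block: constant-time comparison of
# an API token, which avoids timing side channels. The stored token would
# normally come from a secrets store; the value here is a placeholder.
STORED_TOKEN_HASH = hashlib.sha256(b"example-token").hexdigest()

def is_authorized(presented_token: str) -> bool:
    presented_hash = hashlib.sha256(presented_token.encode()).hexdigest()
    # hmac.compare_digest runs in constant time regardless of where
    # the two strings first differ.
    return hmac.compare_digest(presented_hash, STORED_TOKEN_HASH)

print(is_authorized("example-token"))  # True
print(is_authorized("wrong-token"))    # False
```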
4. Continuous learning and bias reduction
When developing continuously learning systems, it is important to minimize or, if possible, completely eliminate the risk of bias. Important considerations here are:
- Data diversity and quality: The training data should be representative and diverse to ensure that the model is not influenced by bias in the data. Careful data preparation and validation is essential here.
- Monitoring and adaptation: Continuously learning systems should be regularly monitored and adjusted to ensure that they do not learn discriminatory patterns and that their performance remains in line with ethical and regulatory standards (a simple subgroup monitor is sketched below).
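Here is a minimal sketch of such a subgroup monitor, comparing positive prediction rates across hypothetical patient groups. The group labels and the tolerance of 0.1 are assumptions for illustration:

```python
from collections import defaultdict

# Sketch of a simple subgroup-performance monitor: compare the positive
# prediction rate across patient subgroups and flag large gaps.
# Group labels and the 0.1 tolerance are hypothetical examples.

def positive_rate_by_group(predictions):
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, pred in predictions:
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

preds = [("A", 1), ("A", 0), ("A", 1), ("B", 0), ("B", 0), ("B", 1)]
rates = positive_rate_by_group(preds)
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap={gap:.2f}")
if gap > 0.1:
    print("warning: subgroup gap exceeds tolerance, review for bias")
```

A rate gap alone does not prove bias, but exceeding the tolerance should trigger the review and, where necessary, the adaptation described above.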
5. Life cycle and relevant obligations
Manufacturers are obliged to ensure safety, reliability and conformity throughout the entire life cycle. The most important aspects are listed below:
1. Ex-ante conformity assessment
An ex-ante conformity assessment is a procedure that checks whether a product, system or service meets the applicable regulations, standards and safety requirements before it is placed on the market. Manufacturers must systematically record, document and analyze the following data for this purpose:
- Reliability: Assessment of how reliably the AI system works under different conditions.
- Performance: Evaluation of how well the system fulfills the defined requirements and achieves the expected results.
- Safety: Checking that the system is safe to use and does not pose any risks to users or third parties.
2. Continuous verification of conformity
Continuous compliance verification means that manufacturers ensure that systems meet regulatory requirements throughout their life cycle. This includes:
- Regular assessments: Manufacturers must be able to continuously review and document the compliance of their systems. These assessments should be carried out throughout the use of the system to ensure that all changes and developments are taken into account.
- Adapting to new requirements: Manufacturers should also proactively prepare for new regulatory requirements that may arise over time.
3. Post-market monitoring
Post-market surveillance (PMS) refers to the monitoring of a product after it has been placed on the market. The aim is to verify its safety, functionality and compliance with regulations. To this end, manufacturers must:
- Report serious incidents: All serious incidents and malfunctions that lead to violations of fundamental rights must be reported immediately.
- Implement incident reporting systems: Manufacturers should establish incident reporting systems to ensure that all relevant information is captured and handled appropriately (a possible record structure is sketched below).
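To illustrate, here is a minimal sketch of a structured incident record for such an internal reporting system. The schema is an assumption: the AI Act does not prescribe field names or formats:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Sketch of a structured incident record for an internal reporting system.
# The fields shown are illustrative; the AI Act does not prescribe a schema.

@dataclass
class IncidentReport:
    device_id: str
    description: str
    severity: str                      # e.g. "serious", "minor"
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    forwarded_to_authority: bool = False

report = IncidentReport(
    device_id="dev-42",
    description="misclassification led to delayed alarm",
    severity="serious",
)
# Serious incidents would additionally trigger notification to the
# competent authority within the applicable deadline.
report.forwarded_to_authority = report.severity == "serious"
print(report)
```

A fixed schema like this makes incidents comparable over time, which is what turns individual reports into usable post-market surveillance data.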
4. New conformity assessment for significant changes
In the event of significant changes to the system, manufacturers must carry out a new conformity assessment. This applies in particular to:
- Substantial modifications: If changes are made outside the "predefined framework" for continuously learning AI systems, a new conformity assessment is required.
6. Important findings on the EU AI Regulation
The key points of the EU regulation on artificial intelligence are summarized below:
1. Review of the requirements
The requirements are listed in Chapter III, Section 2 of the Regulation and contain specific obligations to ensure the safety and reliability of the systems. In concrete terms:
- Safety standard: Manufacturers must ensure that their systems meet the specified safety and performance requirements.
- Documentation and verification: Comprehensive documentation of the development, the data used and the validation of the systems is required.
2. Transitional provisions in recitals 178 and 179
Important dates and aspects of the transitional arrangements addressed in recitals 178 and 179 of the Regulation are:
- Date of application: The Regulation applies in general from August 2, 2026.
- Enforcement: The governance and enforcement provisions of the Regulation already apply from August 2, 2025. Compliance measures must therefore be taken before the general date of application.
3. Review of the conformity assessment processes
According to Article 43 of the EU AI Regulation, manufacturers are obliged to review and adapt the conformity assessment processes. These processes include:
- Internal controls: Manufacturers can carry out a conformity assessment based on internal control, which does not require the involvement of a notified body. The prerequisite is that they demonstrate that their quality management system (QMS) and the technical documentation meet the requirements.
- Quality management system (QMS) and technical documentation: The development and implementation of an effective QMS is crucial to ensure the continuous compliance of the systems. The technical documentation must cover all relevant aspects of the AI systems, including their risks, security requirements and associated testing procedures.