EU AI ACT Reqs & Obls High Risk
Teresa Jambrina
Created on October 24, 2024
EU AI Act High-Risk AI Systems: Requirements & Obligations
A comprehensive guide to the obligations and responsibilities associated with AI systems that we must comply with at NTT Data EU&LA.
REQUIREMENTS VS OBLIGATIONS
The EU AI Act makes a clear distinction between requirements and obligations.
Requirements
Focus: the technical, ethical, or operational qualities of a high-risk AI system. Requirements are directed at the system's design and deployment to ensure safe, reliable, and ethical operation.
What AI systems need to be...

Obligations
Focus: actions to be taken to ensure compliance, depending on role. Obligations are direct impositions on the entities involved in development, deployment, distribution...
What humans need to do...
EU AI ACT ROLES
Depending on NTT Data's role, there are specific obligations that must be complied with.

Provider: an entity that develops an AI system or AI model, or places it on the market or puts it into service under its own name or trademark.
Main obligation: meet the EU AI Act requirements & obligations.

Deployer: an entity using an AI system under its authority (deploys, operates, or uses it).
Main obligation: ensure the AI system is properly used and does not harm individuals' fundamental rights.
REQUIREMENTS INDEX
The EU AI Act requires high-risk AI systems to comply with the following requirements:
Mainly Technical
Mainly Organizational
Risk Management System
Technical documentation
Records Keeping
Data & Data Governance
Transparency and Info. for Deployers
Accuracy, robustness and cybersecurity
Human Oversight
OBLIGATIONS INDEX
The EU AI Act requires providers and deployers to meet the following obligations:

Provider obligations:
- ID & Contact Details
- CE Marking
- Quality Management System
- Registration Obligations
- Documentation Keeping
- Corrective actions and duty of information
- Automatically Generated Logs
- Accessibility
- Conformity Assessment
- EU Declaration of Conformity
- Cooperation with competent Authorities

Deployer obligations:
- Ensure and Monitor proper use of AI System
- Implement Human Oversight measures (training/support)
- Inform the workers' representatives and the unions
- Carry out a DPIA, if applicable
- Address Data Quality & Bias, if applicable
- Transparency obligation for automated decision-making
- Keep logs automatically generated
- Cooperation with competent Authorities
REQUIREMENTS
Risk Management System
A comprehensive risk management system must be established, implemented, documented, and maintained to ensure compliance with relevant standards:
- A continuous, iterative process planned and run throughout the entire lifecycle of a high-risk AI system
- Adapted to consider the impact on vulnerable groups / minors

Risk management should include the following steps:
1. Identification & risk analysis (safety, health, fundamental rights)
2. Evaluation of risks from foreseeable misuse
3. Other types of risk analysis (EU&LA AI Risk Matrix)
4. Adoption of mitigating measures
5. Judging of residual risks

ILPS is involved in the process, led by EU&LA Corporate Legal.
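Purely as an illustration of the iterative analyse / mitigate / judge-residual loop described above (the actual EU&LA AI Risk Matrix scoring is not defined in this deck; the threshold and scores below are hypothetical):

```python
# Hypothetical likelihood x impact scoring; NOT NTT Data's real risk matrix.
RISK_APPETITE = 4  # illustrative acceptability threshold on a 1-25 scale


def residual_risk(likelihood: int, impact: int, mitigations: list) -> int:
    """Each mitigation is assumed to reduce the raw score by its stated effect."""
    score = likelihood * impact
    for reduction in mitigations:
        score = max(score - reduction, 1)
    return score


def iterate_risk(likelihood: int, impact: int, available_mitigations: list):
    """Adopt mitigations one by one until the residual risk is acceptable,
    mirroring the iterative steps above (analyse -> mitigate -> judge residual)."""
    applied = []
    for mitigation in available_mitigations:
        if residual_risk(likelihood, impact, applied) <= RISK_APPETITE:
            break
        applied.append(mitigation)
    return applied, residual_risk(likelihood, impact, applied)


applied, residual = iterate_risk(likelihood=4, impact=4, available_mitigations=[6, 5, 4])
print(applied, residual)
```

The point of the sketch is only the loop structure: mitigations are adopted until the residual risk is judged acceptable, and the evaluation is repeated across the system's lifecycle.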
REQUIREMENTS
Data & Data Governance*
Training, validation, and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system, covering:
- Relevant design decisions;
- Data collection processes and data sources, including the original purpose for personal data;
- Data preparation operations such as annotation, labeling, cleaning, updating, enrichment, and aggregation;
- Formulation of assumptions about what the data represents;
- Assessment of the availability, quantity, and suitability of necessary datasets;
- Review of potential biases that could affect safety, fundamental rights, or lead to prohibited discrimination;
- Measures to detect, prevent, and mitigate identified biases;
- Detection of data gaps that hinder regulatory compliance and ways to address them.
*Only for high-risk AI systems that make use of techniques involving the training of AI models: NOT the case for AXET.
ILPS involved in the process led by EU&LA Corporate Legal
REQUIREMENTS
Transparency and Info. for Deployers
High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system's output and use it appropriately. Instructions for use shall include:
- Identity and contact details of the provider
- Characteristics, capabilities, and limitations of performance of the high-risk AI system
- Changes and performance relative to the initial conformity assessment
- Human oversight measures
- Hardware requirements, maintenance, and expected lifetime
- Operation of logs
ILPS involved in the process led by EU&LA Corporate Legal
REQUIREMENTS
Human Oversight
High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons.
Objective: to prevent and minimise risks to health, safety, and fundamental rights.
The provider should extend human oversight measures to the deployer so that the people in charge of oversight can:
- Understand details of capabilities and limitations of high-risk AI systems to monitor and address anomalies, malfunctions, and unexpected behaviors
- Advise on potential over-reliance on AI outputs, especially in decision-support systems ("automation bias")
- Correctly interpret outputs and decide when to disregard them
- Stop the AI system if necessary
ILPS involved in the process led by EU&LA Corporate Legal
REQUIREMENTS
Technical documentation
Technical documentation shall contain, at minimum, the following elements:
- Intended purpose, provider’s name, and system version.
- Interaction with external hardware or software
- Software/firmware versions and update requirements
- Forms in which the AI system is marketed
- Description of the hardware required to operate
- For AI components in products, images or illustrations showing features, markings, and internal layout
- Description of deployer’s user interface
- Instructions for use and user interface
- Deployer instructions
- AI techniques and tools used in development
- Design specifications (AI logic, key design decisions, optimization goals, expected outputs)
- System architecture, detailing software component interactions
- Data set origin, selection, labeling, and cleaning methods
- Human oversight measures to support deployers
- Testing procedure
- Cybersecurity measures
- AI system’s monitoring and functionality, including its performance capabilities and limitations
- Risk management system
- Updates made by the provider over the AI system’s lifecycle
- EU declaration of conformity
- Plan for evaluating and monitoring performance post-market
CoE to lead the process with support of ILPS
REQUIREMENTS
Records Keeping
High-risk AI systems shall technically allow for the automatic recording of events (logs) over the lifetime of the system:
Logs should help to:
- Identify Risk Situations: Record instances where the AI system might present safety risks or substantial modifications
- Support Post-Market Monitoring: Facilitate the required ongoing monitoring of the AI’s performance after deployment
- Monitor Ongoing Operation: Enable observation of the system’s operation to assess ongoing safety and compliance
Log requirements:
- Usage Duration: Log the start and end times of each operational period.
- Reference Database: Record the specific database against which the input data is compared.
- Matched Input Data: Document input data that matched with entries in the reference database.
- Identification of Verifiers: Record the identities of human personnel involved in verifying the AI’s output accuracy
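As a loose illustration only (the Act mandates what must be logged, not how), the four minimum log fields above could be captured in a structured record like this; all class and field names are hypothetical:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class HighRiskAIEventLog:
    """One automatically recorded event covering the four minimum fields
    listed above. Illustrative sketch, not a mandated schema."""
    session_start: str                                 # usage duration: start of operational period (ISO 8601)
    session_end: str                                   # usage duration: end of operational period
    reference_database: str                            # database the input data was compared against
    matched_input: list = field(default_factory=list)  # input data that matched reference entries
    verifier_ids: list = field(default_factory=list)   # human personnel who verified output accuracy

    def to_json(self) -> str:
        return json.dumps(asdict(self))


# Example: record one operational period (all values invented for the sketch)
entry = HighRiskAIEventLog(
    session_start=datetime(2024, 10, 24, 9, 0, tzinfo=timezone.utc).isoformat(),
    session_end=datetime(2024, 10, 24, 9, 30, tzinfo=timezone.utc).isoformat(),
    reference_database="customer-registry-v3",
    matched_input=["record-1042"],
    verifier_ids=["reviewer-07"],
)
print(entry.to_json())
```

A structured, machine-readable format of this kind is what makes the logs usable for the three purposes above: spotting risk situations, post-market monitoring, and ongoing operational oversight.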
CoE to lead the process
REQUIREMENTS
Accuracy, robustness and cybersecurity
High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity
- Documentation of Accuracy Levels: The accuracy levels and relevant metrics for high-risk AI systems must be detailed in the accompanying user instructions
- Error Resilience: These systems must be resilient to errors or inconsistencies. Technical and organizational measures should be implemented for that purpose.
- Biases: Systems that learn post-deployment should mitigate biased outputs and manage feedback loops appropriately
- Cybersecurity Resilience: Cybersecurity measures should be tailored to specific risks and include strategies to prevent and respond to attacks.
CoE to lead the process
OBLIGATIONS
ID & Contact Details
High-risk AI systems must include the name, address, and contact details of the provider, as well as of the authorised representative, where applicable.
The provider should include ID & contact details in:
- Deployer instructions
- The EU database for high-risk AI systems
Deployers should include ID & contact details in:
- The EU database for high-risk AI systems
Other relevant AI system information should also be provided by both the deployer and the provider to the EU database for high-risk AI systems.
CoE to lead the process
OBLIGATIONS
Quality Management System
Providers of high-risk AI systems shall put in place a quality management system that ensures compliance with this Regulation:
- Regulatory Compliance Strategy: Includes conformity assessment procedures and management of modifications.
- Design and Development: Techniques and procedures for design control, verification, quality control, and assurance.
- Testing and Validation: Examination, testing, and validation procedures before, during, and after development.
- Technical Specifications: Application of harmonized standards or alternative means to ensure compliance.
- Data Management: Procedures for data collection, analysis, labeling, storage, and retention.
The QMS is crucial for ensuring that high-risk AI systems meet the necessary standards for safety, security, and ethical use, thereby fostering trust in AI technologies across the EU
CoE to lead the process
OBLIGATIONS
Documentation Keeping
Providers must keep specific documentation at the disposal of national competent authorities for a period of 10 years after the high-risk AI system has been placed on the market or put into service. This includes:
- Technical Documentation
- Quality Management System Documentation
- Documentation on changes approved by Notified Bodies*
- Decisions and Documents Issued by Notified Bodies
- EU Declaration of Conformity
*Notified Bodies: designated by Competent Authorities to evaluate High Risk AI systems.
CoE to lead the process
OBLIGATIONS
Automatically Generated Logs
Providers of high-risk AI systems shall keep the logs automatically generated by their high-risk AI systems. Key points to be considered:
- Purpose: To ensure traceability and accountability of AI decisions.
- Logging Requirements: Providers of high-risk AI systems must maintain logs of system outputs.
- Duration of Logs: Logs must be kept for at least six months.
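The minimum-retention rule can be sketched as a simple check; the exact retention period beyond six months, and any interaction with data protection law, is a decision for the provider (the names and the 183-day figure below are illustrative assumptions):

```python
from datetime import datetime, timedelta, timezone

# "At least six months"; the exact period is a provider decision.
MIN_RETENTION = timedelta(days=183)


def is_within_retention(log_timestamp: datetime, now: datetime) -> bool:
    """A log entry must still be available while it is younger than the minimum period."""
    return now - log_timestamp < MIN_RETENTION


def may_delete(log_timestamp: datetime, now: datetime) -> bool:
    # Deletion is only permissible once the minimum retention period has elapsed
    # (and no other rule, e.g. data protection law, requires keeping or erasing it).
    return not is_within_retention(log_timestamp, now)


now = datetime(2025, 1, 1, tzinfo=timezone.utc)
old_log = datetime(2024, 5, 1, tzinfo=timezone.utc)    # ~8 months old
fresh_log = datetime(2024, 12, 1, tzinfo=timezone.utc)  # ~1 month old
print(may_delete(old_log, now), may_delete(fresh_log, now))
```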
CoE to lead the process
OBLIGATIONS
Conformity Assessment
Given the complexity of high-risk AI systems and the risks associated with them, providers are required to pass a conformity assessment procedure for high-risk AI systems, which may involve Notified Bodies (the so-called third-party conformity assessment).
- Conformity Assessment Procedures choice*: Providers of high-risk AI systems must choose one of two conformity assessment procedures:
- Internal Control.
- Assessment of the QMS and technical documentation involving a Notified Body.
- Timing: Conformity assessments must be conducted before placing a high-risk AI system on the market or before its first use in the EU
* The choice applies solely to AI systems that are in conformity with harmonised standards, or parts thereof, the references of which have been published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012.
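The footnoted rule can be expressed as a small decision sketch (illustrative only, not legal advice; the function and option names are hypothetical):

```python
# Encodes the footnoted choice rule above: the provider may choose between
# two routes only where the system conforms to published harmonised standards.

def assessment_options(conforms_to_harmonised_standards: bool) -> list:
    """Return the conformity assessment routes available to a provider."""
    notified_body_route = (
        "assessment of QMS and technical documentation involving a Notified Body"
    )
    if conforms_to_harmonised_standards:
        # Choice between the two procedures exists.
        return ["internal control", notified_body_route]
    # Otherwise only the notified-body route remains.
    return [notified_body_route]


print(assessment_options(True))
print(assessment_options(False))
```

Either way, the assessment must be completed before the system is placed on the market or first used in the EU.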