CHAPTER I
GENERAL PROVISIONS (Articles 1–4)
CHAPTER II
PROHIBITED AI PRACTICES (Article 5)
CHAPTER III
HIGH-RISK AI SYSTEMS (Articles 6–49)
CHAPTER IV
TRANSPARENCY OBLIGATIONS FOR PROVIDERS AND DEPLOYERS OF CERTAIN AI SYSTEMS (Article 50)
CHAPTER V
GENERAL-PURPOSE AI MODELS (Articles 51–56)
CHAPTER VI
MEASURES IN SUPPORT OF INNOVATION (Articles 57–63)
CHAPTER VII
GOVERNANCE (Articles 64–70)
CHAPTER VIII
EU DATABASE FOR HIGH-RISK AI SYSTEMS (Article 71)
CHAPTER IX
POST-MARKET MONITORING, INFORMATION SHARING AND MARKET SURVEILLANCE (Articles 72–94)
CHAPTER X
CODES OF CONDUCT AND GUIDELINES (Articles 95–96)
CHAPTER XI
DELEGATION OF POWER AND COMMITTEE PROCEDURE (Articles 97–98)
CHAPTER XII
PENALTIES (Articles 99–101)
CHAPTER XIII
FINAL PROVISIONS (Articles 102–113)
ANNEXES
With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education and training to be expected by the deployer, and to the presumable context in which the system is intended to be used.
The risk-management system should consist of a continuous, iterative process that is planned and run throughout the entire lifecycle of a high-risk AI system. That process should be aimed at identifying and mitigating the relevant risks of AI systems to health, safety and fundamental rights. The risk-management system should be regularly reviewed and updated to ensure its continuing effectiveness, as well as the justification and documentation of any significant decisions and actions taken subject to this Regulation. This process should ensure that the provider identifies risks or adverse impacts and implements mitigation measures for the known and reasonably foreseeable risks of AI systems to health, safety and fundamental rights in light of their intended purpose and reasonably foreseeable misuse, including the possible risks arising from the interaction between the AI system and the environment within which it operates. The risk-management system should adopt the most appropriate risk-management measures in light of the state of the art in AI. When identifying the most appropriate risk-management measures, the provider should document and explain the choices made and, when relevant, involve experts and external stakeholders. In identifying the reasonably foreseeable misuse of high-risk AI systems, the provider should cover uses of AI systems which, while not directly covered by the intended purpose and provided for in the instructions for use, may nevertheless be reasonably expected to result from readily predictable human behaviour in the context of the specific characteristics and use of a particular AI system. Any known or foreseeable circumstances related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to health and safety or fundamental rights, should be included in the instructions for use that are provided by the provider.
This is to ensure that the deployer is aware of those circumstances and takes them into account when using the high-risk AI system. Identifying and implementing risk-mitigation measures for foreseeable misuse under this Regulation should not require specific additional training for the high-risk AI system by the provider to address foreseeable misuse. Providers are, however, encouraged to consider such additional training measures to mitigate reasonably foreseeable misuses as necessary and appropriate.