CHAPTER I
GENERAL PROVISIONS (Articles 1–4)
CHAPTER II
PROHIBITED AI PRACTICES (Article 5)
CHAPTER III
HIGH-RISK AI SYSTEMS (Articles 6–49)
CHAPTER IV
TRANSPARENCY OBLIGATIONS FOR PROVIDERS AND DEPLOYERS OF CERTAIN AI SYSTEMS (Article 50)
CHAPTER V
GENERAL-PURPOSE AI MODELS (Articles 51–56)
CHAPTER VI
MEASURES IN SUPPORT OF INNOVATION (Articles 57–63)
CHAPTER VII
GOVERNANCE (Articles 64–70)
CHAPTER VIII
EU DATABASE FOR HIGH-RISK AI SYSTEMS (Article 71)
CHAPTER IX
POST-MARKET MONITORING, INFORMATION SHARING AND MARKET SURVEILLANCE (Articles 72–94)
CHAPTER X
CODES OF CONDUCT AND GUIDELINES (Articles 95–96)
CHAPTER XI
DELEGATION OF POWER AND COMMITTEE PROCEDURE (Articles 97–98)
CHAPTER XII
PENALTIES (Articles 99–101)
CHAPTER XIII
FINAL PROVISIONS (Articles 102–113)
ANNEXES
In order to accelerate the process of development and the placing on the market of the high-risk AI systems listed in an annex to this Regulation, it is important that providers or prospective providers of such systems may also benefit from a specific regime for testing those systems in real-world conditions without participating in an AI regulatory sandbox. However, in such cases, taking into account the possible consequences of such testing on individuals, it should be ensured that appropriate and sufficient guarantees and conditions are introduced by this Regulation for providers or prospective providers. Such guarantees should include, inter alia, requesting the informed consent of natural persons to participate in testing in real-world conditions, with the exception of law enforcement where the seeking of informed consent would prevent the AI system from being tested. Consent of subjects to participate in such testing under this Regulation is distinct from, and without prejudice to, the consent of data subjects to the processing of their personal data under the relevant data protection law.

It is also important to minimise the risks and enable oversight by competent authorities and therefore to require prospective providers to have a real-world testing plan submitted to the competent market surveillance authority, to register the testing in dedicated sections in the EU database, subject to some limited exceptions, to set limitations on the period for which the testing can be done, and to require additional safeguards for persons belonging to certain vulnerable groups, as well as a written agreement defining the roles and responsibilities of prospective providers and deployers and effective oversight by competent personnel involved in the real-world testing. Furthermore, it is appropriate to envisage additional safeguards to ensure that the predictions, recommendations or decisions of the AI system can be effectively reversed and disregarded, and that personal data is protected and is deleted when the subjects have withdrawn their consent to participate in the testing, without prejudice to their rights as data subjects under Union data protection law.

As regards the transfer of data, it is also appropriate to envisage that data collected and processed for the purpose of testing in real-world conditions should be transferred to third countries only where appropriate and applicable safeguards under Union law are implemented, in particular in accordance with bases for the transfer of personal data under Union law on data protection, while for non-personal data appropriate safeguards are put in place in accordance with Union law, such as Regulations (EU) 2022/868 and (EU) 2023/2854 of the European Parliament and of the Council.