The report outlines the key privacy risks associated with the use of large language models (LLMs), including data protection concerns relating to training data, model outputs, and deployment scenarios. It presents practical mitigation strategies focused on transparency, data minimisation, the legal basis for processing, and the rights of data subjects. It also addresses technical safeguards such as differential privacy and federated learning. The report was produced in the context of the Support Pool of Experts programme.
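For readers unfamiliar with the safeguards mentioned above, the following minimal sketch (not drawn from the report itself) illustrates the core building block of differential privacy, the Laplace mechanism: noise calibrated to a query's sensitivity and a privacy budget epsilon is added before a statistic is released. The function name and parameter values are purely illustrative assumptions.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of true_value.

    Adds Laplace noise with scale = sensitivity / epsilon, the standard
    construction for epsilon-differential privacy on a numeric query.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Illustrative use: privately release a record count of 1250
# (sensitivity 1, privacy budget epsilon = 0.5).
print(laplace_mechanism(1250, sensitivity=1.0, epsilon=0.5))
```

Smaller epsilon values add more noise and therefore give stronger privacy at the cost of accuracy; the report's discussion of such safeguards is considerably more detailed than this sketch.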
Author: EDPB SPE
Status: Adopted / Published
Adoption date: 2025-04-17
Last updated: 2025-08-08
Category: Miscellaneous
Subcategory: Report