
AI regulation through the lens of data protection law

The first legal challenges to large language models arose in data protection law, a field that may seem obscure to those captivated by the novel aspects of AI law, ethics, and governance.

The regulation of AI through data protection laws holds significant potential

In the second half of the 20th century, as computers began to process personal data at scale, data protection law emerged in response. Early frameworks such as the data protection act of the German state of Hesse (1970) and the OECD Privacy Guidelines (1980) laid the groundwork for lawful, fair, and secure data processing. The EU's Data Protection Directive (1995) and its successor, the General Data Protection Regulation (GDPR), adopted in 2016, set a regional standard for personal data protection[1].

Fast forward to the present, and AI and large language models (LLMs) have become central to the global economy. As these technologies pose new risks, such as mass-scale profiling, opaque decision-making, and the potential for bias or discrimination, regulators are responding by reinforcing existing data protection principles and introducing new, complementary obligations focused on the technology itself[1].

The GDPR, which applies whenever personal data is processed, regardless of the technology involved[1], has been instrumental in setting a foundation for AI regulation. The EU's AI Act, finalized in 2024, specifically regulates AI systems, including LLMs, addressing the technology itself and imposing obligations on both providers and deployers of high-risk AI systems[1]. This new legislation complements, but does not replace, data protection law; both require risk minimization, but through different mechanisms[1].

The relevance of data protection law today lies in its role as a safeguard against the novel risks posed by large-scale, automated data processing. By ensuring that technological progress does not come at the expense of individual rights and societal trust, data protection law serves as a crucial component in the development of AI law and governance principles[1][2].

Dr. Gabriela Zanfir-Fortuna, in a blog post published on LSE European Politics and Policy on February 10, 2025, discusses AI law and governance principles, arguing that data protection law was designed precisely to address the challenges posed by automation and future "thinking machines". The post suggests that the current wave of AI law and governance principles can be seen as the next generation of data protection law[1].

The post further notes that AI systems, including the most complex ones, are immediately relevant to data protection law, and warns that AI law could miss the mark if it fails to build coherently on the existing body of data protection legislation. On this view, AI law and governance principles are best understood as an extension of data protection law[1].

As the global regulatory landscape continues to evolve, with countries like China emphasizing state oversight and compliance, and the US relying on sectoral and state laws, there is growing convergence on principles of privacy preservation, traceability, and deletability in AI governance[1][2]. Organizations must understand when data protection law applies to AI, maintain data security (especially for sensitive data), and comply with evolving legal obligations[1][3].

In conclusion, the history of data protection law is one of adaptation to technological change. With AI and LLMs now central to the global economy, regulators are building on established data protection principles while adding technology-specific obligations, ensuring that large-scale, automated data processing does not come at the expense of individual rights and societal trust[1][2].

References:

[1] Zanfir-Fortuna, G. (2025). The AI Act: A New Era for Data Protection. LSE European Politics and Policy.
[2] European Commission. (2021). Proposal for a Regulation of the European Parliament and of the Council on Artificial Intelligence (Artificial Intelligence Act).
[3] European Parliament and Council of the European Union. (2016). General Data Protection Regulation (Regulation (EU) 2016/679).
[4] Swiss Federal Council. (2020). Federal Act on Data Protection (FADP).

  1. As AI and large language models (LLMs) have become integral to the global economy, regulators are reinforcing existing data protection principles and introducing new obligations focused on the technology itself to safeguard against risks such as mass-scale profiling and the potential for bias or discrimination.
  2. The European Union's AI Act, finalized in 2024, specifically regulates AI systems, including LLMs, addressing the technology itself and imposing obligations on both providers and deployers of high-risk AI systems.
  3. The GDPR, which applies whenever personal data is processed, regardless of the technology involved, has been instrumental in setting a foundation for AI regulation, ensuring that technological progress does not compromise individual rights and societal trust.
  4. Today, the relevance of data protection law lies in its role as a safeguard against the novel risks posed by large-scale, automated data processing in the context of AI and LLMs, preserving privacy, promoting traceability, and ensuring deletability in AI governance.
