AI Compliance by Design - Episode 5: Implementing AI Utilization
Companies that use artificial intelligence (AI) must adhere to specific guidelines to ensure compliance with the General Data Protection Regulation (GDPR), a European privacy law. These considerations span the entire AI development lifecycle and include lawful processing, data minimisation, purpose limitation, transparency, and respect for data subject rights, among others.
Lawful Basis for Processing Personal Data
To process personal data within AI systems, companies must identify a clear GDPR-compliant legal basis, such as consent, legitimate interest, or contractual necessity. Special categories of data, like health data, require stricter safeguards.
Controller Roles and Agreements
Companies and AI developers must clarify their roles under the GDPR. If they jointly determine the purposes and means of processing, they are joint controllers and need an agreement defining their respective responsibilities. In a controller-processor arrangement, the AI developer processes personal data on behalf of the company, and the parties must enter into a controller-processor agreement outlining the details of the processing.
Data Minimisation and Purpose Limitation
Only the minimum personal data necessary should be collected and used strictly for the specific purpose originally stated. Repurposing data for AI training or analysis must still comply with these principles.
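As a purely illustrative sketch (the field names and purpose registry below are hypothetical, not drawn from the GDPR itself), data minimisation can be enforced in code by whitelisting only the fields each declared purpose requires before a record ever reaches an AI pipeline:

```python
# Hypothetical sketch: enforce data minimisation by whitelisting fields per purpose.
from typing import Any

# Illustrative purpose registry: each declared purpose maps to the minimum fields it needs.
ALLOWED_FIELDS: dict[str, set[str]] = {
    "credit_scoring": {"income", "existing_debt", "employment_status"},
    "churn_prediction": {"tenure_months", "support_tickets", "plan_type"},
}

def minimise(record: dict[str, Any], purpose: str) -> dict[str, Any]:
    """Return only the fields permitted for the declared purpose."""
    if purpose not in ALLOWED_FIELDS:
        # Purpose limitation: refuse processing for undeclared purposes.
        raise ValueError(f"no declared purpose: {purpose!r}")
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

customer = {
    "name": "Jane Doe",           # not needed for scoring -> dropped
    "email": "jane@example.com",  # not needed for scoring -> dropped
    "income": 52000,
    "existing_debt": 8000,
    "employment_status": "employed",
}
print(minimise(customer, "credit_scoring"))
# {'income': 52000, 'existing_debt': 8000, 'employment_status': 'employed'}
```

Rejecting undeclared purposes outright, as in the sketch, also operationalises purpose limitation rather than leaving it to convention.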
Transparency and Meaningful Information
Data subjects must receive clear information about AI processing, including the logic behind automated decisions and their possible consequences, enabling users to understand how their data is used.
Automated Decision-Making Controls
When AI makes decisions with legal or significant effects on individuals, GDPR Article 22 applies, requiring human oversight options, rights to challenge decisions, and explanations on decision logic.
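To make these controls concrete, here is a hedged sketch (the significance flag and review queue are illustrative design choices, not mechanics prescribed by Article 22) of a gate that routes any automated decision with legal or similarly significant effects to a human reviewer instead of applying it automatically:

```python
# Hypothetical sketch: human-in-the-loop gate for decisions covered by GDPR Art. 22.
from dataclasses import dataclass, field

@dataclass
class Decision:
    subject_id: str
    outcome: str
    significant_effect: bool  # e.g. loan denial, contract termination

@dataclass
class DecisionRouter:
    review_queue: list[Decision] = field(default_factory=list)

    def route(self, decision: Decision) -> str:
        if decision.significant_effect:
            # Significant decisions are held for human review before taking effect.
            self.review_queue.append(decision)
            return "pending_human_review"
        return "auto_applied"

router = DecisionRouter()
print(router.route(Decision("c-42", "deny_loan", significant_effect=True)))    # pending_human_review
print(router.route(Decision("c-43", "show_banner", significant_effect=False))) # auto_applied
```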
Respect for Data Subject Rights
Companies must enable rights including access, rectification, deletion, restriction, data portability, and objection, which can be technically complex when AI models embed personal data deeply.
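The contrast between stored data and trained models can be seen in a minimal, hypothetical erasure handler: deleting a subject's rows from a dataset is straightforward, whereas removing that subject's influence from an already-trained model generally requires retraining or machine-unlearning techniques.

```python
# Hypothetical sketch: honour an erasure request against a stored training dataset.
# Note: this removes stored records only; a model already trained on them may
# still embed the subject's data and typically needs retraining or unlearning.
records = [
    {"subject_id": "u1", "text": "ticket about refunds"},
    {"subject_id": "u2", "text": "ticket about delivery"},
]

def erase_subject(dataset: list[dict], subject_id: str) -> list[dict]:
    """Drop every record belonging to the data subject."""
    return [row for row in dataset if row["subject_id"] != subject_id]

records = erase_subject(records, "u1")
print(records)  # [{'subject_id': 'u2', 'text': 'ticket about delivery'}]
```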
Secure and Compliant AI Development
The French data protection authority CNIL recommends security measures during AI development, including robust filters to avoid processing personal data unintentionally and compliance with GDPR during data annotation and model training phases.
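The CNIL does not prescribe a particular implementation, but a minimal version of such a filter might redact obvious personal-data patterns from text before it enters annotation or training; the regular expressions below are illustrative and deliberately simplistic:

```python
# Hypothetical sketch: redact common personal-data patterns before training.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or +33 6 12 34 56 78."
print(redact(sample))
# Contact Jane at [EMAIL] or [PHONE].
```

A production filter would combine such patterns with named-entity recognition, since identifiers in free text (note that "Jane" survives the redaction above) are not caught by regular expressions.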
Documentation and Risk Analysis
Companies should document analyses determining GDPR applicability to AI models, maintain records, and assess data protection risks continuously throughout the AI system life cycle.
These considerations require a “privacy by design” approach, embedding GDPR compliance from the earliest stages of AI development through deployment and ongoing use.
The GDPR applies not only to companies that develop AI but also to those that use it. Companies using AI are considered controllers under the GDPR, meaning entities that determine the purposes and means of processing personal data. As controllers, they must verify that any personal data generated as output, or any personal data they provide on the basis of such output, is accurate.
When using AI for automated individual decision-making, companies must give individuals relevant information, in a concise, transparent, intelligible, and easily accessible form, about the procedure and principles applied to process their personal data in order to reach a specific result. GDPR compliance matters here because data is a key pillar of AI: effective AI requires abundant, good-quality data.
Companies using AI must ensure GDPR compliance when inputting personal data into AI systems, especially because inputs may be shared with the developers of large language models. AI systems must be verified as secure before use, and companies must have controller-processor agreements in place with AI developers, confirming that the processor guarantees appropriate technical and organisational measures to ensure GDPR compliance.
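As one illustrative precaution (the token scheme and class below are hypothetical, not a stated requirement), personal identifiers can be pseudonymised before a prompt leaves the company, with the mapping retained internally so that responses can be re-identified:

```python
# Hypothetical sketch: pseudonymise identifiers before sending text to an external LLM.
import itertools

class Pseudonymiser:
    def __init__(self) -> None:
        self._counter = itertools.count(1)
        self._forward: dict[str, str] = {}  # real value -> token
        self._reverse: dict[str, str] = {}  # token -> real value

    def tokenise(self, value: str) -> str:
        """Return a stable placeholder token for a personal identifier."""
        if value not in self._forward:
            token = f"<PERSON_{next(self._counter)}>"
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

    def restore(self, text: str) -> str:
        """Re-identify tokens in text received back from the provider."""
        for token, value in self._reverse.items():
            text = text.replace(token, value)
        return text

p = Pseudonymiser()
prompt = f"Summarise the complaint filed by {p.tokenise('Jane Doe')}."
# `prompt` now contains no direct identifier and can be sent externally.
response = f"{p.tokenise('Jane Doe')} filed a complaint about billing."  # stand-in for model output
print(p.restore(response))  # Jane Doe filed a complaint about billing.
```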
Companies must be transparent about their use of AI and inform third parties, such as their customers, about how AI is used and for what purposes. Companies using AI must also prioritise internal awareness and provide comprehensive GDPR training to all relevant staff.
Data protection by design is a key requirement of the GDPR, obliging businesses to implement appropriate technical and organisational measures both when determining the means of processing and during the processing itself. Both the European Union's Artificial Intelligence Act (AI Act) and the GDPR are crucial for businesses using AI.
- To remain compliant with the GDPR, companies using AI must establish cybersecurity policies that encompass privacy protection measures, ensuring secure and compliant AI development, as recommended by the CNIL.
- In the context of using AI, companies should clarify their roles and responsibilities in relation to the technology, acknowledging that they are controllers under the GDPR and must therefore comply with GDPR guidelines throughout the AI development lifecycle.