
AI Regulations in the EU Demand Transparency from ChatGPT and Similar Entities

"AI Transparency Issue: Under EU Regulations, AI providers are now obligated to reveal their training data sources. Yet, some creators contend that this disclosure is insufficient."

ChatGPT and similar AI services face new EU regulations demanding transparency.


The European Union's AI Act, adopted in May 2024, has raised significant intellectual property (IP) protection concerns among associations representing European creators, authors, performers, publishers, producers, and other rightsholders. These concerns revolve around the Act's implementation, particularly the General-Purpose AI (GPAI) Code of Practice and accompanying guidelines.

Google, the developer of the Gemini AI model, has expressed apprehension that the AI Act could hinder innovation, adding to the growing debate.

At the heart of the issue are the perceived shortcomings in protecting copyright and related rights effectively. The associations view the final implementation outcomes as a "betrayal" and a "missed opportunity" to safeguard IP rights, as they do not deliver meaningful protections in the context of generative AI, despite extensive engagement and feedback from rightsholder communities.

Another point of contention is the inadequate response to unlicensed use of copyrighted works. The provisions, notably Article 53 of the EU AI Act, are seen as ineffective in enforcing rights against unauthorized use in AI training. The associations argue that current measures ignore the widespread wholesale unlicensed use of protected works by AI model providers.

There is also a perceived bias towards AI providers over creators. The European Commission is criticized for favouring general-purpose AI model providers who "continuously infringe copyright and related rights to build their models," effectively sidelining the interests of creators and the cultural and creative sectors that contribute substantially to the EU economy.

Transparency and data documentation concerns are another issue. While the Act requires AI providers to document technical information about their models and have policies to comply with copyright law, rightsholder groups assert that these measures lack the clarity and enforceability needed to protect stakeholders' rights fully.

A related European Parliament study highlights a fundamental legal mismatch between AI training practices and existing text and data mining exceptions, creating uncertainties around AI-generated content and threatening fair remuneration of authors. It calls for clearer rules, transparency obligations, and equitable licensing models to balance innovation with creators' rights.

Under the new guidelines, operators of AI models must disclose how their systems function and what data they were trained on. The European AI Office will enforce the rules from August 2026 for new models, and from August 2027 for models already on the market before August 2, 2025.

Google has announced its intention to sign the voluntary Code of Practice presented by the European Commission, suggesting a willingness to address the concerns raised by creators and publishers. Providers that sign the code could benefit from greater legal certainty and a lower administrative burden, according to the Commission.

Violations of the AI Act can result in fines of up to 15 million euros or three percent of total global annual turnover. According to the EU guidelines, companies should also provide a contact point for rightsholders. Developers must report which sources they used for their training data and whether they automatically scraped websites. Particularly powerful AI models that could pose a risk to the public must additionally document their safety measures, and developers must specify the steps they have taken to protect copyright.

The new EU rules for AI models, effective from August 2, 2025, impose these transparency requirements on general-purpose AI systems. However, the Initiative for Copyright believes the measures are ineffective because there is no obligation to name specific datasets, domains, or sources.

As the implementation of the EU AI Act unfolds, it remains to be seen how these concerns will be addressed and whether the Act will strike a balance between fostering innovation and protecting the rights of creators and publishers.

  1. The European Union's economic and social policy, as embodied in the AI Act, has raised concerns among creators and publishers, who perceive inadequate protection of copyright and related rights, particularly in the context of generative AI.
  2. The implementation of the EU AI Act has sparked a debate over technology and artificial intelligence, with Google expressing apprehension that the Act could hinder innovation, while creators and publishers argue for more meaningful protections and transparency measures to safeguard their IP rights.
