
AI Integration Leads to Growing Restrictions on Expressions

Algorithmic censorship poses an ethical dilemma for AI, as it may enable governments and corporations to shape the worldwide discourse. Neglecting this issue could lead to unfettered control over global discussion.

Artificial Intelligence and Algorithmic Censorship: Ethical Implications of a Growing Crisis

In an increasingly interconnected world, artificial intelligence (AI) has become an integral part of various industries, including finance, technology, and media. This rapid growth, however, raises serious ethical concerns, particularly around the issue of algorithmic censorship. By glossing over these concerns, there is a risk of enabling governments and corporations to control the global conversation, potentially compromising democracy and human rights.

The power and scope of AI censorship have increased dramatically in recent years. Since 2010, the compute used to train AI systems has grown roughly tenfold every one to two years, raising the threat of censorship and control over public discourse to unprecedented levels.

Corporations worldwide have ranked privacy and data governance as their top AI risks, while censorship has remained overlooked. AI, capable of processing millions of data points in seconds, can enforce censorship through content moderation and control over the flow of information. Large language models (LLMs) and recommendation systems can filter, suppress, or amplify information at massive scale.

AI has played a crucial role in amplifying state-led censorship, as highlighted by Freedom House in 2023. In China, for example, the Cyberspace Administration (CAC) has fused censorship strategies into generative AI tools by requiring chatbots to endorse "core socialist values" and block content deemed objectionable by the Chinese Communist Party. Chinese AI models like DeepSeek's R1 are already censoring topics like the Tiananmen Square massacre to promote state narratives.

Democratic policymakers need to collaborate with civil society experts from around the world, as Freedom House suggests, in order to establish strong human rights-based standards for both state and non-state actors employing AI tools. This will help protect the free and open internet from the threats posed by AI-driven censorship.

In 2021, researchers at UC San Diego discovered that AI algorithms trained on censored datasets, such as China's Baidu Baike, associate the keyword 'democracy' with 'chaos.' On the other hand, models trained on uncensored sources associate 'democracy' with 'stability.' Freedom House's 2023 'Freedom on the Net' report pointed out that global internet freedom witnessed a decline for the 13th consecutive year, largely due to AI's significant impact.
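The kind of association the UC San Diego researchers measured can be illustrated with cosine similarity between word vectors. The vectors below are invented for illustration only, not taken from the study's models; they simply encode the reported effect:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

# Hypothetical 3-d embeddings: in the censored-corpus model 'democracy'
# sits near 'chaos'; in the uncensored-corpus model it sits near 'stability'.
censored = {
    "democracy": [0.9, 0.1, 0.2],
    "chaos":     [0.8, 0.2, 0.1],
    "stability": [0.1, 0.9, 0.3],
}
uncensored = {
    "democracy": [0.1, 0.9, 0.2],
    "chaos":     [0.9, 0.1, 0.3],
    "stability": [0.2, 0.8, 0.1],
}

for name, emb in [("censored", censored), ("uncensored", uncensored)]:
    d = emb["democracy"]
    print(name,
          "chaos:", round(cosine(d, emb["chaos"]), 2),
          "stability:", round(cosine(d, emb["stability"]), 2))
```

In a real replication, the same comparison would be run against embeddings trained on each corpus (e.g. with a word2vec implementation); the point is that bias in the training data surfaces directly in these nearest-neighbor relationships.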

Twenty-two countries have laws that compel social media companies to use automated systems for content moderation, which can be exploited to stifle debate and protest. The military junta in Myanmar and the governments of Iran, Belarus, and Nicaragua have all capitalized on this to arrest dissidents, hand down severe sentences for online speech, or even carry out executions.

Freedom House found that at least 47 governments used AI to sway online conversations towards their desired narratives. Advanced technology was employed in at least 16 countries to sow doubt, defame opponents, or manipulate public debate. At least 21 countries require digital platforms to use machine learning to delete political, social, and religious speech.

A 2023 Reuters report foresaw that AI-generated deepfakes and misinformation could cause profound damage to democratic processes, empowering regimes seeking to tighten their control over information. In the 2024 US presidential election, AI-generated images falsely depicting Taylor Swift supporting Donald Trump demonstrated that AI-generated content is already being used to mislead voters.

The most alarming example of AI-driven censorship comes from China. A leaked dataset analyzed by TechCrunch in 2025 revealed a sophisticated AI system engineered to censor topics such as pollution scandals, labor disputes, and Taiwanese political issues, employing LLMs to evaluate context and flag even political satire. Unlike traditional keyword-based filtering, this system makes state-led information control more efficient and more granular.
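The difference between keyword filtering and context-aware flagging can be sketched in a few lines. This is not the leaked system; the keyword list, euphemisms, and the crude heuristic standing in for an LLM classifier are all invented for illustration:

```python
import re

# Toy blocklist, purely illustrative.
BLOCKED_KEYWORDS = {"strike", "protest"}

def keyword_filter(text: str) -> bool:
    """Classic filtering: blocks only on exact keyword matches."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return bool(tokens & BLOCKED_KEYWORDS)

def llm_style_flag(text: str) -> bool:
    """Stand-in for an LLM classifier that scores context rather than
    keywords. A crude heuristic plays the model's role here: it also
    catches indirect references (euphemism, satire) that a keyword
    list misses."""
    euphemisms = ("taking a collective walk", "mass sick day")
    return keyword_filter(text) or any(e in text.lower() for e in euphemisms)

post = "Factory workers are taking a collective walk tomorrow."
print(keyword_filter(post))   # False: no blocked keyword appears
print(llm_style_flag(post))   # True: the contextual check catches the euphemism
```

A keyword filter is transparent and easy to audit; a model that judges context is far harder to circumvent, which is precisely what makes LLM-based moderation more powerful as a censorship tool.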

In 2024, a House Judiciary Committee report accused the National Science Foundation (NSF) of funding AI tools to combat 'misinformation' regarding Covid-19 and the 2020 election. The report found that the NSF supported AI-based censorship and propaganda tools aimed at shaping public opinion by suppressing certain viewpoints and promoting others.

A 2025 WIRED report discovered that DeepSeek's R1 model incorporates censorship filters both at the application and training levels, leading to limitations on sensitive topics. In 2025, a Pew Research Center survey found that 83% of US adults expressed concerns about AI-driven misinformation, with significant apprehension about its implications for free speech. AI experts interviewed by Pew stated that AI training data could unintentionally reinforce established power structures.

To address AI-driven censorship, a 2025 HKS Misinformation Review article recommended increased transparency to reduce fears of censorship. The review found that 38.8% of Americans were somewhat concerned, and 44.6% highly concerned, about AI's role in spreading misinformation during the 2024 US presidential election, while 9.5% reported no concern and 7.1% were unaware of the issue altogether.

Making AI accessible to everyone while ensuring transparency and accountability has become crucial. Open-source AI ecosystems can help achieve this by encouraging companies to disclose their training dataset sources and biases. Governments need to develop AI regulatory frameworks emphasizing free expression and tackling censorship head-on. If the AI industry and consumers can rise to these challenges, a human-centric future may prevail instead of an AI-managed technocratic dystopia.

Manouk Termaaten, an entrepreneur and AI expert, has founded Vertical Studio AI with the mission of making AI accessible to everyone. With a background in engineering and finance, Termaaten aims to innovate in the AI sector by developing customization tools and affordable computers for widespread consumer adoption. To keep up to date with the latest news in finance, AI, blockchain, and more, follow The Daily Hodl on Twitter, Facebook, and Telegram.

The cryptocurrency and blockchain industry is not immune to AI-driven censorship, as AI can influence how information is disseminated. Machine learning models can filter, suppress, or amplify information about digital assets, as in North Korea's censorship of Bitcoin-related discussions.

Furthermore, rapid advances in AI can improve the efficiency of blockchain transactions, but they also complicate efforts to maintain the integrity of digital assets and peer-to-peer networks, particularly when it comes to countering AI-generated deepfakes and misinformation targeting altcoin communities.
