HodlX Guest Post
AI (artificial intelligence) creates an ethical crisis of algorithmic censorship. By glossing over this problem, we risk allowing governments and corporations to control the global conversation.
Both AI technology and the AI industry have gone parabolic, and the technology’s censorship potential grows by the day.
Since 2010, the computational power used to train AI systems has increased tenfold every one to two years, making the threat of censorship and control of public discourse more real than ever.
Corporations worldwide ranked privacy and data governance as their top AI risks, while censorship didn’t register on their radar.
AI – which can process millions of data points in seconds – can censor through various means, including content moderation and control of information.
LLMs (large language models) and content recommendation systems can filter, suppress or amplify information at scale.
In 2023, Freedom House highlighted that AI is enhancing state-led censorship.
In China, the CAC (Cyberspace Administration of China) has incorporated censorship strategy into generative AI tools, requiring chatbots to support “core socialist values” and block content the Communist Party wants to censor.
Chinese AI models, such as DeepSeek’s R1, already censor topics like the Tiananmen Square massacre in order to spread state narratives.
“To protect the free and open internet, democratic policymakers – working side by side with civil society experts from around the world – should establish strong human rights–based standards for both state and non-state actors that develop or deploy AI tools,” concludes Freedom House.
In 2021, UC San Diego researchers found that AI algorithms trained on censored datasets, such as China’s Baidu Baike, associate the keyword ‘democracy’ with ‘chaos.’
Models trained on uncensored sources associated ‘democracy’ with ‘stability.’
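To make the finding concrete, here is a minimal sketch of how such associations are typically measured, a cosine similarity comparison between word embeddings from two corpora. The toy vectors below are invented for illustration only and are not the study’s data or code.

```python
# A minimal sketch (not the study's code or data) of comparing word-embedding
# associations across two training corpora. The toy vectors are invented.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity - closer to 1.0 means more strongly associated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings. In practice these would come from models such as
# word2vec trained separately on Baidu Baike and on an uncensored corpus.
censored = {
    "democracy": np.array([0.9, 0.1, 0.2]),
    "chaos":     np.array([0.8, 0.2, 0.3]),
    "stability": np.array([0.1, 0.9, 0.1]),
}
uncensored = {
    "democracy": np.array([0.2, 0.9, 0.1]),
    "chaos":     np.array([0.9, 0.1, 0.2]),
    "stability": np.array([0.1, 0.8, 0.2]),
}

for name, emb in (("censored", censored), ("uncensored", uncensored)):
    print(
        f"{name}: democracy~chaos = {cosine(emb['democracy'], emb['chaos']):.2f}, "
        f"democracy~stability = {cosine(emb['democracy'], emb['stability']):.2f}"
    )
```

Run on these toy vectors, the censored corpus shows a high ‘democracy’–‘chaos’ similarity and the uncensored corpus a high ‘democracy’–‘stability’ similarity, mirroring the pattern the study reported.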
In 2023, Freedom House’s ‘Freedom on the Net’ report found that global internet freedom fell for the 13th consecutive year. It attributed a large part of the decline to AI.
Twenty-two countries have laws in place requiring social media companies to employ automated systems for content moderation, which could be used to suppress debate and demonstrations.
Myanmar’s military junta, for instance, used AI to monitor Telegram groups, detaining dissidents and carrying out death sentences based on their posts. The same happened in Iran.
Additionally, in Belarus and Nicaragua, governments sentenced individuals to draconian prison terms for their online speech.
Freedom House found that no fewer than 47 governments deployed commentators to sway online conversations toward their preferred narratives.
It also found that over the past year, new AI tools were used in at least 16 countries to sow doubt, smear opponents or influence public debate.
At least 21 countries require digital platforms to use machine learning to delete political, social and religious speech.
A 2023 Reuters report warned that AI-generated deepfakes and misinformation could “undermine public trust in democratic processes,” empowering regimes that seek to tighten control over information.
In the 2024 US presidential election, AI-generated images falsely implying that Taylor Swift had endorsed Donald Trump demonstrated that AI is already being used to manipulate public opinion.
China offers the most prominent example of AI-driven censorship.
A leaked dataset analyzed by TechCrunch in 2025 revealed a sophisticated AI system designed to censor topics like pollution scandals, labor disputes and Taiwan political issues.
Unlike traditional keyword-based filtering, this system uses LLMs to evaluate context and flag political satire.
Researcher Xiao Qiang noted that such systems “significantly improve the efficiency and granularity of state-led information control.”
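To illustrate the shift the report describes, here is a minimal sketch contrasting keyword filtering with context-aware moderation. Everything in it is hypothetical: `BLOCKLIST`, `llm_classify` and `simulated_model_answer` are illustrative stand-ins, not the leaked system’s code or any real API.

```python
# A minimal sketch contrasting keyword filtering with context-aware moderation.
# BLOCKLIST, llm_classify and simulated_model_answer are hypothetical stand-ins,
# not the leaked system's code or any real API.
import re

BLOCKLIST = {"protest", "strike"}  # illustrative keywords only

def keyword_filter(text: str) -> bool:
    """Traditional approach - block only if a listed keyword appears."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return bool(words & BLOCKLIST)

def simulated_model_answer(prompt: str) -> str:
    """Placeholder for a real LLM call; always answers 'yes' for the demo."""
    return "yes"

def llm_classify(text: str) -> bool:
    """Context-aware approach - ask a model whether the post criticizes
    state policy, even indirectly or satirically."""
    prompt = (
        "Answer yes or no: does the following post criticize government "
        f"policy, even indirectly or through satire? {text!r}"
    )
    return simulated_model_answer(prompt) == "yes"

post = "Lovely weather today, just like the official air quality numbers say."
print(keyword_filter(post))  # False - no blocked keyword present
print(llm_classify(post))    # True - the (simulated) model reads the subtext
```

The keyword filter misses a post containing no blocked term, while a context-aware classifier can still flag it, which is precisely why LLM-based systems make censorship harder to evade.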
A 2024 House Judiciary Committee report accused the NSF (National Science Foundation) of funding AI tools to combat ‘misinformation’ about Covid-19 and the 2020 election, concluding that the agency had bankrolled AI-based censorship and propaganda tools.
“In the name of combating alleged misinformation regarding Covid-19 and the 2020 election, NSF has been issuing multi-million-dollar grants to university and non-profit research teams,” reads the report.
“The purpose of these taxpayer-funded projects is to develop AI-powered censorship and propaganda tools that can be used by governments and Big Tech to shape public opinion by restricting certain viewpoints or promoting others.”
A 2025 WIRED report discovered that DeepSeek’s R1 model includes censorship filters at both the application and training levels, resulting in blocks on sensitive topics.
In 2025, a Pew Research Center survey found that 83% of US adults were concerned about AI-driven misinformation, with many also worried about its free speech implications.
Pew interviewed AI experts, who said that AI training data can unintentionally reinforce existing power structures.
Addressing AI-driven censorship
A 2025 study in the HKS (Harvard Kennedy School) Misinformation Review called for better reporting to reduce fear-driven calls for censorship.
Its survey found that 38.8% of Americans were somewhat concerned and 44.6% were highly concerned about AI’s role in spreading misinformation during the 2024 US presidential election, while 9.5% held no concerns and 7.1% were unaware of the issue altogether.
Creating an open-source AI ecosystem is of the utmost importance. This means companies disclosing their training dataset sources and known biases.
Governments should create AI regulatory frameworks prioritizing free expression.
If we want a human future instead of an AI-managed technocratic dystopia, the AI industry and consumers alike need to build up the courage to tackle censorship.
Manouk Termaaten is an entrepreneur, an AI expert and the founder and CEO of Vertical Studio AI. He’s aiming to make AI accessible to everyone. With a background in engineering and finance, he seeks to disrupt the AI sector with accessible customization tools and affordable computers.
Generated Image: DALL-E 3