
China’s Great Firewall Comes for AI Chatbots, and Experts Are Worried

The country's top censor says chatbots like Baidu's Ernie should not "undermine national unity" in what experts call a threat to free speech and human rights.

Photo: China Photos (Getty Images)

China’s top digital regulator proposed bold new guidelines this week that prohibit ChatGPT-style large language models from spitting out content believed to subvert state power or advocate for the overthrow of the country’s communist political system. Experts speaking with Gizmodo said the new guidelines mark the clearest sign yet of Chinese authorities’ eagerness to extend their hardline online censorship apparatus to the emerging world of generative artificial intelligence. China’s Great Firewall now encircles AI.

“We should be under no illusions. The Party will wield the new Generative AI Guidelines to carry out the same function of censorship, surveillance, and information manipulation it has sought to justify under other laws and regulations,” Michael Caster, Asia Digital Programme Manager for Article 19, a human rights organization focused on online free expression, told Gizmodo.

The draft guidelines, published by the Cyberspace Administration of China, come hot on the heels of new generative AI products from Baidu, Alibaba, and other Chinese tech giants. AI developers looking to operate in China moving forward will be required to submit their products to a government security review before they are released to the public and to ensure all AI-generated content is clearly labeled. Chatbots will have to verify users’ identities, and their makers will be obligated to ensure content served by AI is factual (so far a big problem for their American counterparts) and does not discriminate against users on the basis of race, ethnicity, belief, country, region, or gender.

While most of those safeguards appear in line with calls from AI safety experts in other countries, the guidelines sharply diverge on the issue of potentially subversive political content. On that question, China wants to impose stringent measures largely in line with its current policies moderating speech on social media. Here’s a translated portion of the Cyberspace Administration’s proposed guidelines:

“The content generated by generative artificial intelligence should reflect the core values of socialism, and must not contain subversion of state power, overthrow of the socialist system, incitement to split the country, undermine national unity, promote terrorism, extremism, and promote ethnic hatred and ethnic discrimination, violence, obscene and pornographic information, false information, and content that may disrupt economic and social order.”

Caster fears Beijing’s new guidelines on generative AI could lead to a clampdown on foreign articles translated by chatbots or suggestions on how internet users could use VPNs or other tools to sidestep the country’s so-called Great Firewall content filter. Caster specifically highlighted the recent case of a blogger named Ruan Xiaohuan, who was sentenced to seven years in prison for inciting subversion of state power. Under these guidelines, makers of AI models that republished any of his writing could face retaliation.

“These are the types of independent information deemed subversive in China and what would run afoul of the new guidelines should a dataset inadvertently pull from his [Ruan’s] website in delivering generative content,” Caster said.

Human Rights Watch Senior China Researcher Yaqiu Wang told Gizmodo those strict rules, while new, were “totally expected.” Even without these guidelines, Wang said she believed Chinese government officials could still effectively punish companies for spreading content deemed critical of the political system. Having written rules in place that specifically mention generative AI makes it administratively simpler for officials to cite precise statutes when targeting tech firms over potential violations.

“Even without the guidelines, they can do the same thing,” Wang said. “The guidelines are just a convenient tool they can point to.” She agreed with Caster’s assessment, saying it seemed possible Chinese authorities could use the text of the new AI draft rules to strike down peaceful and “totally legitimate speech.”

Large tech firms like Baidu have likely already built and trained their models knowing something akin to these restrictions would pass. Previous reporting from the Wall Street Journal showed how earlier versions of Chinese chatbots choked up when asked critical questions about Chinese President Xi Jinping or when prompted to discuss Chinese politics. Some frustrated users reportedly call the censored ChatGPT wannabes “ChatCCP.”

“If you are operating in the Chinese system, you know there are things you cannot talk about,” Wang said. “The guidelines are just another warning.”

What is China’s Cyberspace Administration and what does it want with AI?

The Cyberspace Administration of China (CAC), formed in 2013, has rapidly evolved in recent years and taken on a role as the country’s foremost internet censor and a server of regulatory nightmares for rapidly growing Chinese tech firms. The CAC was responsible for suddenly knocking Chinese ride-hailing giant Didi out of app stores in 2021 just days after its massive $4.4 billion IPO, and has played a leading role in crafting two of China’s most significant and severe data privacy laws. Critics of the CAC, like Caster of Article 19, say the agency’s close ties to President Xi Jinping mean it’s directly involved in censorship demands handed down from the highest levels of power.

Caster warned the CAC’s suggested bans on content that promotes terrorism or extremism, though laudable in the abstract, could similarly be weaponized to crack down on political dissidents or marginalized groups like the country’s Uyghur Muslim minority. In the latter case, Chinese government authorities have categorized Uyghurs as extremists to justify actions multiple human rights groups have described as state-sanctioned persecution. AI-generated content that simply acknowledges Uyghur history or culture, under the new guidelines, could be seen as promoting extremism, Caster said.

US and China chatbots could exist in alternate realities

Chinese regulators haven’t been shy about their concerns over potential political interference attributed to US-made AI chatbots. In February, Tencent and Ant Group reportedly clamped down on users trying to access OpenAI’s ChatGPT, which regulators reportedly warned could be used to “spread false information.” Even though ChatGPT is blocked in China, users on WeChat and other apps were reportedly sharing exchanges with the model after accessing it through VPNs. Some of the answers served up by ChatGPT, according to the Guardian, were perceived by Chinese authorities to be “consistent with the political propaganda of the US government.”

On the other side of the Pacific, US lawmakers are voicing similar concerns about Chinese-made AI models. Speaking at an Axios event last month, House China select committee Chair Republican Rep. Mike Gallagher described Chinese AI models as weapons that government officials could use to perfect an “Orwellian techno-totalitarian surveillance state.” That might sound dumb, and it is, but other more respected China hawks, like former Google CEO Eric Schmidt, have likewise said the US must do “whatever it takes” to win an AI race against China.

“I think our challenge,” Gallagher said, “is to ensure that AI is used as an instrument for human flourishing and freedom.”

The restrictions on foreign chatbots come just as homegrown players like Baidu and Alibaba race to release alternatives of their own. Baidu showed off its entry, dubbed Ernie Bot, during a pre-recorded demo last month. It shares some similarities with OpenAI’s models, but critics note it appeared to struggle with basic logic. Internally, the Wall Street Journal notes, Baidu worked around the clock, scrambling to ensure Ernie was capable of completing basic functions. Alibaba’s more recently released Tongyi Qianwen model, which the company has opened up to corporate clients, reportedly excels at writing poems in multiple languages and solving basic math problems, but similarly struggles with basic logic.

US tech firms’ apparent lead in large language models, at least for now, is thanks in part to relatively stricter AI regulations in China and an influx of investment by American companies. Last year, according to a recent analysis conducted by Stanford researchers, US companies invested $47.4 billion in AI projects, a figure 3.5 times higher than China’s. Those figures stand in contrast to claims by some US AI enthusiasts who have suggested US firms could lose their edge to an anything-goes, unregulated environment in China.

“It’s clear that China is moving in step with global momentum on regulating AI,” AI Now Institute Executive Director Amba Kak told Gizmodo. “This and other regulatory moves from the Chinese government, like on competition enforcement, directly contradict claims that Chinese tech companies have an edge in the ‘US v China AI race’ because they are left unregulated.”

“These loosely backed claims are dangerous because they’re used to push back against regulation of Big Tech firms in the US, promoting a race to the bottom when it comes to standards for privacy, competition and consumer protection,” Kak added.