China’s AI Chatbots Clam Up When Asked About Xi Jinping’s Leadership, Wall Street Journal, Mar. 15, 2023.

By Shen Lu

For companies trying to ride the ChatGPT wave, there is an added layer of headaches if their chatbots are in China: how to weed out any talk of politics.

Since its release in November, the artificial-intelligence chatbot developed by San Francisco-based OpenAI has generated great excitement but has also raised concern about misinformation or disturbing AI-produced responses.

This week, the Chinese search-engine company Baidu Inc. is set to release Ernie Bot, its answer to ChatGPT, driving attention to China’s homegrown chatbots, as well as their capability to ensure politically kosher dialogues.

In a country that has built rigid digital borders, censors have learned to adapt to new forms of content and evolving censorship demands. Controlling AI-generated responses from a chatbot presents a new challenge, one that might prove more complex than policing search and social media but could strengthen ideological control and further separate China digitally from the rest of the world, Chinese tech executives, engineers and AI experts say.

Every tech company moderates content deemed harmful in the countries of its users. OpenAI and other US-based developers filter their data to screen out content considered sensitive in the US, such as hate speech regarding race and ethnicity. In China, the ramifications of letting through politically tricky content can go well beyond the deletion of posts.

Chinese chatbots face limits similar to those for searches on China’s web. Users hit a wall each time a prompt contains politically touchy keywords, leading some internet users to collectively refer to anticipated Chinese ChatGPT-like services as “ChatCCP,” a play on the abbreviation for the Chinese Communist Party.

(To be continued…)