How China, Russia hope AI will help manipulate information
FORUM Staff
Malign actors in the People’s Republic of China (PRC), Russia and elsewhere have used artificial intelligence (AI) — although with negligible effect — to enhance manipulated information schemes, according to researchers.
OpenAI, the United States-based developer of the AI chatbot ChatGPT, reported in mid-2024 that it had detected and disrupted at least five covert influence operations that used its AI models. Among them were the PRC-linked network Spamouflage, known for its pro-Beijing messages and harassment of overseas dissidents, and the Russian operation Doppelganger, which typically attempts to undermine support for Ukraine as Moscow wages war on its democratically governed neighbor.
Agents spreading false and misleading information used OpenAI tools to generate large amounts of text, including for online comments, in multiple languages, according to the developer. The goal, analysts say, is to create the false impression that manipulated messages are coming from a large, organic audience rather than the state-sponsored groups responsible.
In another case, Spamouflage turned to ChatGPT for help with computer code for a website that hosts articles attacking members of the Chinese diaspora who criticize PRC abuses, according to OpenAI.
Creators of false and misleading information also asked ChatGPT to generate text with fewer grammatical and other language errors. If AI can make propaganda read as though it were written by a native speaker of the target audience's language, readers are more likely to assume it is legitimate.
So far, however, the information manipulators that OpenAI detected have not achieved such sophistication.
“Influence campaigns are running up against the limits of generative AI, which doesn’t reliably produce good copy or code,” reported Wired magazine. “It struggles with idioms — which make language sound more reliably human and personal — and also sometimes with basic grammar (so much so that OpenAI named one network ‘Bad Grammar.’)”
Russia’s Bad Grammar operation, relying on AI rather than English skills, accidentally posted multiple comments to Telegram that began “As an AI language model, I am here to assist and provide …”
None of the information manipulation campaigns that OpenAI detected on its platform saw meaningful increases in audience engagement or reach because of AI.
“Generative AI reduces the cost of running propaganda campaigns, making it significantly cheaper to produce content and run interactive automated accounts,” wrote researchers Josh Goldstein, of Georgetown University’s Center for Security and Emerging Technology, and Renee DiResta, of the Stanford Internet Observatory.
“But it is not a magic bullet, and in the case of the operations that OpenAI disclosed, what was generated sometimes seemed to be rather spammy. Audiences didn’t bite.”
Online audiences can combat deceptive content by considering the veracity of sources, approaching digital information with skepticism, and helping friends and family understand the prevalence of AI content.
Researchers are sharing information and exposing AI abuse with tools such as the AI Incident Database and the Political Deepfakes Incident Database.
Meanwhile, AI also is being harnessed to analyze content to better detect inaccurate and potentially harmful manipulated information, Goldstein and DiResta wrote in MIT Technology Review.
In the report on malign uses of its technology, OpenAI called for cooperation to secure the digital information environment. “We will not be able to stop every instance [of information manipulation],” it said. “But by continuing to innovate, investigate, collaborate, and share, we make it harder for threat actors to remain undetected across the digital ecosystem.”