OpenAI announced that it has disrupted five covert influence operations over the past three months. These operations, originating from Russia, China, Iran, and Israel, used the company’s AI products in attempts to manipulate public opinion or influence political outcomes while concealing the identities of those behind them.
The report comes amid growing concern about AI’s role in the many elections taking place around the world this year. OpenAI detailed how these influence networks used its tools to deceive people more efficiently, generating text and images at higher volume and with fewer language errors than human operators could manage alone. Even so, OpenAI concluded that the campaigns ultimately failed to meaningfully extend their reach through the company’s services.
Ben Nimmo, the principal investigator on OpenAI’s Intelligence and Investigations team, said in a press briefing that the report is meant to answer questions about the impact of generative AI on influence operations. OpenAI defines covert influence operations as deceptive attempts to manipulate public opinion or influence political outcomes without disclosing the true identity or intentions of the actors behind them. Unlike pure disinformation networks, these groups may promote factually accurate information, but do so in a misleading way.
While propaganda networks have long exploited social media platforms, their use of generative AI tools is relatively new. OpenAI found that AI-generated material was deployed alongside traditional formats, such as manually written texts and memes, on social media. Some networks also used OpenAI’s products to boost productivity, for example by summarizing articles or debugging code for bots.
The identified networks included groups like the pro-Russian “Doppelganger,” the pro-Chinese “Spamouflage,” and an Iranian operation known as the International Union of Virtual Media (IUVM). OpenAI also discovered previously unknown networks from Russia and Israel.
One previously unreported Russian group, dubbed “Bad Grammar” by OpenAI, combined AI models with Telegram to build a content-spamming pipeline. The group used OpenAI’s models to debug code for automated Telegram posting and to generate comments in Russian and English across multiple accounts. OpenAI spotted some of the AI-generated content through telltale boilerplate phrases left in the output, such as “As an AI language model, I am here to assist.”
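OpenAI has not published the details of its detection pipeline, but the basic idea of screening posts for telltale boilerplate can be sketched in a few lines of Python. The phrase list and sample comments below are illustrative assumptions, not details drawn from the report.

```python
# Illustrative sketch: flagging posted comments that contain boilerplate
# phrases typical of unedited AI output. The phrase list and sample data
# are hypothetical; OpenAI's actual detection methods are not public.

TELLTALE_PHRASES = [
    "as an ai language model",
    "i am here to assist",
    "i cannot fulfill this request",
]

def flag_suspect_comments(comments: list[str]) -> list[str]:
    """Return the comments whose text contains a known AI boilerplate phrase."""
    return [
        comment for comment in comments
        if any(phrase in comment.lower() for phrase in TELLTALE_PHRASES)
    ]

if __name__ == "__main__":
    sample = [
        "Great point, totally agree!",
        "As an AI language model, I am here to assist with your request.",
    ]
    for comment in flag_suspect_comments(sample):
        print("Flagged:", comment)
```

In practice, a simple keyword filter like this would catch only the sloppiest output; real investigations presumably combine many signals, such as account behavior and posting patterns.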
Despite the limited reach of these networks, Nimmo warned against complacency, noting that influence operations that fail to gain traction for years can suddenly break through if left unchecked. He also acknowledged that other groups using AI tools may be operating undetected.
Other companies, such as Meta Platforms Inc., have made similar disclosures about influence operations. OpenAI is sharing threat indicators with industry peers to aid detection efforts and plans to release further reports.
