OpenAI, the maker of ChatGPT, reported that it acted within 24 hours to disrupt deceptive uses of AI in covert operations targeting the Indian elections, and said the campaign achieved no significant increase in audience engagement.
In a report on its website, OpenAI detailed that STOIC, a political campaign management firm from Israel, created content related to the Indian elections and the Gaza conflict.
“In May, the network began generating comments focused on India, criticizing the ruling BJP party and praising the opposition Congress party,” the report stated. “We disrupted some activity focused on the Indian elections less than 24 hours after it began.”
OpenAI said it banned a cluster of accounts operated from Israel, which were used to generate and edit content for an influence operation spanning X, Facebook, Instagram, websites, and YouTube. “This operation targeted audiences in Canada, the United States, and Israel with content in English and Hebrew. In early May, it began targeting audiences in India with English-language content.”
The report did not provide further details.
Minister of State for Electronics & Technology, Rajeev Chandrasekhar, commented on the report, saying, “It is absolutely clear and obvious that @BJP4India was and is the target of influence operations, misinformation, and foreign interference, conducted by and/or on behalf of some Indian political parties.
“This is a very dangerous threat to our democracy. It is clear vested interests in India and outside are driving this and need to be deeply scrutinized, investigated, and exposed. My view at this point is that these platforms could have released this much earlier, and not so late when elections are ending,” he added.
OpenAI emphasized its commitment to developing safe and broadly beneficial AI. “Our investigations into suspected covert influence operations (IO) are part of a broader strategy to meet our goal of safe AI deployment.”
OpenAI also stated its commitment to enforcing policies that prevent abuse and improve transparency around AI-generated content, particularly in detecting and disrupting covert influence operations intended to manipulate public opinion or political outcomes without disclosing the true identity or intentions of the actors involved.
“In the last three months, we have disrupted five covert IOs that sought to use our models for deceptive activities across the internet. As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services,” the report said.
OpenAI noted that while it disrupted the activity carried out by STOIC, a commercial company in Israel, its action targeted only that activity, not the company itself.
“We nicknamed this operation Zero Zeno, after the founder of the Stoic school of philosophy. The individuals behind Zero Zeno used our models to generate articles and comments, which were then posted across multiple platforms, including Instagram, Facebook, X, and websites associated with this operation,” the report concluded.