Washington — When the U.S. announced the seizure of 32 internet domains tied to Russian efforts to ply American voters with disinformation ahead of November’s presidential election, prosecutors were quick to note the use of artificial intelligence, or AI.
The Russian operation, known as Doppelganger, drove internet and social media users to fake news content using a variety of methods, the charging documents said, including advertisements that were “in some cases created using artificial intelligence.”
AI tools were also used to “generate content, including images and videos, for use in negative advertisements about U.S. politicians,” the indictment added.
And Russia is far from alone in turning to AI in the hopes of swaying U.S. voters.
“The primary actors we’ve seen for election use of this are Iran and Russia, although as various private companies have noticed, China also has used artificial intelligence for spreading divisive narratives in the United States,” according to a senior intelligence official, who spoke on the condition of anonymity in order to discuss sensitive information.
“What we’ve seen is artificial intelligence is used by foreign actors to make their content more quickly and convincingly tailor their synthetic content in both audio and video forms,” the official added.
But other U.S. officials say the use of AI to spread misinformation and disinformation in the lead-up to the U.S. election has so far failed to live up to some of the more dire warnings about how deepfakes and other AI-generated material could shake up the American political landscape.
“Generative AI is not going to fundamentally introduce new threats to this election cycle,” according to Cait Conley, senior adviser to the director of the Cybersecurity and Infrastructure Security Agency, the U.S. agency charged with overseeing election security.
“What we’re seeing is consistent with what we expected to see,” Conley told VOA.
AI “is exacerbating existing threats, in both the cyber domain and the foreign malign influence operation-disinformation campaigns,” she said. But little of what has been put out to this point has shocked officials at CISA or the myriad state and local governments who run elections across the country.
“This threat vector is not new to them,” Conley said. “And they have taken the measures to ensure they’re prepared to respond effectively.”
As an example, Conley pointed to the rash of robocalls that targeted New Hampshire citizens ahead of the state’s first-in-the-nation primary in January, using fake audio of U.S. President Joe Biden to tell people to stay home and “save your vote.”
Source: VOA News