In a remarkably detailed joint advisory, US, Dutch and Canadian agencies identified a variety of software programs that were used to administer the network, including a program called Meliorator that created fictitious users, known as “souls,” in various countries. The FBI obtained a court order allowing it to seize two web domains that were used to register the email addresses behind the accounts.
“Today’s action marks the first time we have disrupted a Russian-backed generative AI-powered social media bot farm,” FBI Director Christopher A. Wray said in a statement. “Russia intended to use this bot farm to spread AI-generated foreign disinformation, scaling their work with AI assistance to undermine our Ukrainian partners and influence geopolitical narratives in favor of the Russian government.”
Automated accounts with more detailed bios posted original content, and a supporting cast of more generic accounts liked and reshared those posts. Authorities did not respond to questions about how many real users saw the posts or whether anyone spread the message further, so it’s unclear how effective the campaign was.
The system circumvented one of X’s safeguards for verifying a user’s authenticity by automatically copying a one-time passcode sent to a registered email address. References to Facebook and Instagram in the program code suggest the operation was intended to expand to those platforms, authorities said.
Authorities have recommended that social media companies improve the ways they capture covert, automated behavior.
X complied with a court order requiring it to provide information about the accounts to the FBI and subsequently deleted them. The company did not respond to questions from The Washington Post.
The Justice Department expressed gratitude for X’s cooperation during the investigation, a sign of improved communication between governments and major social media companies after the Supreme Court upheld officials’ right to point out foreign influence operations.
John Scott-Railton, a researcher at the Canadian nonprofit Citizen Lab, said the countries provided detailed information about the inner workings of botnets so other investigators and companies know what to look for.
“They don’t think this problem is going away, so they’re sharing information far and wide,” Scott-Railton said.
He said the documents show that AI’s large language models help scale and translate Russian propaganda efforts, as well as evade detection software that looks for repeated use of the same Internet Protocol addresses and other identifiers.
But many other systems are already in operation and will continue to improve as they adapt to what they detect and what they allow through, Scott-Railton said. “This is not even the tip of the iceberg,” he said. “This is just a drop in the iceberg.”