Meta is changing the label it applies to social media posts suspected of being generated with artificial intelligence tools. The parent company of Facebook, Instagram, Threads, and WhatsApp said the new label will display “AI info” next to a post, where it previously said “Made with AI.”
The change comes in part because Meta’s detection systems were labeling images with only minor edits as “Made with AI,” leading some artists to criticize the approach.
In one high-profile example, former White House photographer Pete Souza told TechCrunch that cropping tools appear to embed information in images, and that this information then triggers Meta’s AI detectors.
Meta, for its part, said it is trying to balance rapidly changing technology with its responsibility to help people understand what they are seeing in their feeds.
“As we work with companies across the industry to improve the process so that our labeling approach better matches our intent, we are updating the ‘Made with AI’ label to ‘AI info’ across all of our apps, which users can click on for more information,” the company said in a statement Monday.
Meta’s evolving approach underscores how quickly AI technologies are spreading across the web, making it increasingly difficult for ordinary people to tell what is actually real.
This is particularly concerning ahead of the November 2024 US presidential election, when bad actors are expected to step up their efforts to spread misinformation and ultimately confuse voters. Google researchers highlighted this point in a report published last month, with the Financial Times reporting that AI-generated depictions of politicians and celebrities are by far the most common ways malicious actors use the technology.
Tech companies have tried to publicly address the threat. OpenAI said earlier this year that it had thwarted social media disinformation campaigns linked to Russia, China, Iran and Israel, all powered by its AI tools. Apple, meanwhile, announced last month that it would add metadata to images indicating whether they had been altered, edited or generated by AI.
Yet the technology seems to be evolving far faster than companies’ ability to identify it. A new term, “slop,” has become increasingly popular to describe the growing flood of AI-generated posts.
Tech companies including Google have contributed to the problem with new features like AI Overview summaries in search, which have been caught spreading racist conspiracy theories and dangerous health advice, including a suggestion to add glue to pizza to keep the cheese from sliding off. Google, for its part, has since said it will slow the rollout of AI Overview summaries, though some publications have still found it recommending glue on pizza weeks later.