Interestingly, despite the report's insistence on calling these cases "misuse," it acknowledges that cases of AI being used to flood the internet with machine-generated content, whether fake or not, are "often neither overtly malicious nor explicitly violating the tools' content policies or terms of use." In simple terms, this means such uses are intentional and the AI tools are working as intended, a fact that, let's be honest, is pretty obvious even without any scientific papers.
What's even more interesting is that both the report and YouTube's content policy update mentioned above single out deepfakes as the most harmful application of generative AI. It's hard to argue with that (just imagine AI-generated videos of politicians declaring war on each other), yet Google's focus on warning the public about deepfakes in particular is somewhat puzzling, given the billions the tech giant itself has already invested in AI research that, among other things, facilitates the creation of those very deepfakes.
Do they know something we don’t? Should we brace ourselves for a wave of nearly indistinguishable deepfakes wreaking havoc in real life? Is there any truth to the theory that current-generation AI was developed years before the AI boom of 2022 and was only released to the public as an experiment, now entering a new phase? Let us know what you think in the comments!
You can read the full 29-page report by clicking this link. Also, don't forget to join our 80 Level Talent platform and our Telegram channel, follow us on Instagram, Twitter, LinkedIn, TikTok, and Reddit, where we share breakdowns, breaking news, awesome artwork, and more.