Privacy Policy
The potential for generative AI to allow malicious actors to effortlessly impersonate you is a nightmare. To combat this, YouTube, the world’s largest video platform, is now giving users the ability to request removal of AI-generated content that mimics their appearance or voice, strengthening the technology’s currently few safeguards.
The change was quietly added in an update to YouTube’s privacy guidelines last month, but went unreported until TechCrunch noticed it this week. Under the updated rules, YouTube treats instances where AI is used “to edit or create synthetic content that looks or sounds like you” as a potential privacy violation, rather than a misinformation or copyright issue.
Filing a request does not guarantee removal of the content, however, and YouTube’s criteria leave room for considerable ambiguity. YouTube says it will consider factors including whether the content is presented as “edited or synthetic,” whether the person “can be uniquely identified,” and whether the content is “realistic.”
But here lies a huge and familiar gap: YouTube will also weigh whether the content can be considered parody or satire, or, even more vaguely, whether it carries some value for the “public interest.” These nebulous qualifications show that YouTube is taking a fairly soft stance here, one that is by no means anti-AI.
Letter of the law
In line with its standards regarding any form of privacy violation, YouTube says it will only listen to first-party complaints. Third-party complaints will only be considered in exceptional cases, such as when the person being impersonated does not have access to the internet, is a minor, or is deceased.
If the complaint is upheld, YouTube will give the uploader 48 hours to act on it, which may mean cutting or blurring the video to remove the problematic content, or deleting it entirely. If the uploader does not respond in time, the video will be subject to further review by YouTube staff.
“If we remove your video for violating privacy, do not upload another version featuring the same people,” YouTube’s guidelines read. “We take the protection of our users very seriously and suspend accounts that violate people’s privacy.”
These guidelines are all well and good, but the real question is how YouTube applies them in practice. As TechCrunch notes, the Google-owned platform has its own stakes in AI, including the release of a music-generating tool and a bot that summarizes comments into short videos, not to mention Google’s much larger role in the AI race overall.
Perhaps that’s why this new ability to request removal of AI-generated content has debuted quietly, as a tentative continuation of the “responsible” AI initiative YouTube launched last year, which went into effect when the platform officially began requiring disclosure of realistic AI-generated content in March.
That said, we suspect that YouTube won’t be as quick to remove problematic AI-generated content as it is to enforce copyright infringement penalties. But it’s at least a mildly encouraging gesture and a step in the right direction.
Learn more about AI: Facebook freaks go viral with AI-generated photos of police officers carrying huge Bibles through floodwaters