The following MBW Views op-ed is by Ed Newton-Rex (pictured inset), CEO of ethical generative AI nonprofit Fairly Trained.
A veteran expert in the world of gen-AI, Newton-Rex is the former VP of Audio at Stability AI and founder of JukeDeck (acquired by TikTok/ByteDance in 2019).
In this article, Newton-Rex argues: “Music made with AI products that do not license their training data should be banned [from DSPs] or be disregarded in any royalty calculation or recommendation…”
Handing the baton to Ed…
When I wrote an article in April highlighting the striking similarities between Suno’s output and copyrighted music (and then did the same for Udio), I gave Suno the benefit of the doubt: it was possible that they had signed a deal allowing them to train on major-label music; it was even theoretically possible (though unlikely) that they had not trained on copyrighted music at all, and that the many similarities were down to an uncanny level of coincidence.
But there is now no doubt: the RIAA’s lawsuits against both companies make it clear that no such agreements were made for the training, and in their responses to the lawsuits, both companies acknowledge, in identical language, that the recordings they used in training “appear to have contained recordings owned by copyright owners” [the major record labels].
Suno’s response goes further, saying that its training data “essentially includes all music files of decent quality that are accessible on the open internet, abiding by paywalls, password protection, and the like.”
We knew the time would come when streaming services would have to decide what they would allow on their platforms when it comes to generative AI, and that time is now.
Until now, Spotify has not had a policy that explicitly bans AI-generated music. In 2023, Daniel Ek said that tools that imitate artists are unacceptable and may be banned under the company’s deceptive content policy (the wording is not entirely clear). However, in the same interview, Ek made clear that AI music that does not directly impersonate artists is not currently prohibited.
And, perhaps as a result, there are signs that AI music is becoming increasingly prevalent on these platforms. Chris Stokel-Walker recently wrote an article for Fast Company about allegedly AI-generated bands with hundreds of thousands of monthly listeners, and users of AI music platforms have said that they upload their AI-generated tracks to DSPs.
There have been reports of apparently AI-generated music being recommended on Spotify’s Discover Weekly playlist, and earlier this month an AI-generated song reached number 48 on the German pop charts and has been played more than 4 million times on Spotify to date.
For DSPs to continue to allow this would be to actively permit the use of musicians’ copyrighted works without a license.
To quote the more than 200 artists who signed an open letter on AI music earlier this year: “Some of the largest and most powerful companies are using our work to train AI models, without our permission. These efforts are directly aimed at replacing the work of human artists with mass amounts of AI-created ‘sounds’ […] that will significantly dilute the pool of royalties paid to artists. This will be a devastating blow to the many working musicians, artists and songwriters struggling to make a living.”
Previously, there was some doubt about whether Udio and Suno had actually trained on the music these artists were worried about; that doubt has now been put to rest.
When DSPs distribute music created using AI models trained on musicians’ work without a license, it leads to the dilution of royalties paid to human musicians that these artists have been warning about.
Musicians’ royalties are being eroded by products made using their work against their will, and DSPs are encouraging this.
What can be done?
First of all, I want to say that I don’t think DSPs should ban AI music altogether. There are clearly good use cases for AI in music production, and where the training data is licensed, these are worth supporting, at least in my opinion. (That said, I expect music streaming services will emerge that explicitly reject all AI music, as Cara did in the image space, and they would probably do well; but there are good reasons why most DSPs won’t take such a blanket approach.)
At the very least, DSPs should follow the example of other media platforms like Instagram and TikTok and label AI-generated content.
That way, music fans can at least choose what they listen to, and therefore what they support. Require uploaders to label the AI music they upload, and run a post-upload moderation process for tracks that slip through the cracks. This is entirely feasible: most uploaders will hopefully be honest, and for those who aren’t, there are plenty of third-party systems that can detect AI music with a high degree of accuracy.
Of course, the question arises as to what level of AI involvement would trigger the application of the label.
Entering a text prompt and uploading the output to Spotify is obviously very different from using a MIDI generator for inspiration.
However, these challenges are not insurmountable, and they are not a reason to avoid labeling altogether. DSPs just need to make their policies clear and apply them equally to everyone. A simple starting point: if any generative AI was used in the creation of a track, label it.
But I believe DSPs should do more than label: music created with AI products that don’t license their training data should be banned or not weighted in royalty calculations or recommendations.
Otherwise, the AI would be going head-to-head against the music it was trained on, which just isn’t fair. (At this point, if you’re tempted to say, “But humans are allowed to learn from and compete against existing music,” don’t. Training an AI model is very different from human learning, and the market impact is very different.)
“DSPs should do more than label. Music created with AI products that haven’t licensed their training data should be banned or downplayed in royalty calculations and recommendations. Otherwise, it’s in direct competition with the music they used for training, which isn’t fair.”
The problem is that AI companies are currently under no obligation to disclose what they train on, so there is no definitive list of which AI products fall into this category (there should be, but there isn’t).
Udio and Suno have acknowledged it in court documents, and there are likely other companies taking the same approach. But again, this is no excuse for inaction: DSPs need to do their own due diligence. If an AI model was likely trained on unlicensed music, I think it’s fair to apply different rules to music created using that model.
Some will say that DSPs should wait until these cases are decided in court before deciding how to act.
But royalties are being diluted now. And there is plenty of precedent for DSPs implementing content policies based on principles rather than specific legal rulings. For example, Spotify says it is “investing heavily to detect, prevent, and remove the royalty impact of artificial streaming” (think of someone playing a track on repeat all night, on mute, to boost play counts), and it has taken steps to mitigate the royalty impact of “bad actors” who game the system with white-noise recordings.
The company believes these changes could “deliver approximately $1 billion in additional revenue to emerging and professional artists over the next five years.”
If that’s the goal, shouldn’t they also take action against music created using AI models trained on artists’ work without a license? Like white noise, it is being used to game the system and divert royalties. Unlike white noise, it was created using the work of the very artists it competes with.
I agree with Daniel Ek that there is a middle ground worth debating when it comes to policing AI music: I would never want to ban AI music altogether, and there are certainly use cases that could ultimately benefit musicians, provided they are based on licensed training data.
But if DSPs’ mission is to “give millions of creative artists the opportunity to earn a living from their art,” it seems clear that they should avoid promoting music made with products that use other musicians’ work without a license, diluting the royalty pool in the process.
DSPs will be tempted to put off deciding how to handle this new threat to musicians until they are forced to, but if they don’t act soon, it won’t be long before we see the first artists pull their songs from these platforms in protest.