Why I just resigned from my job in generative AI.
MBW Views is a series of exclusive op/eds from eminent music industry people… with something to say. The following article is a little different from the usual first-person pieces we run: it’s something of a public resignation letter.
Ed Newton-Rex is one of the most prominent figures in the evolution of generative AI in music.
The California-based entrepreneur founded the pioneering music-making AI platform Jukedeck over a decade ago, before selling it to TikTok/ByteDance in 2019. He subsequently became Product Director of TikTok’s in-house AI Lab, before becoming Chief Product Officer at music app Voisey (sold to Snap in late 2020).
Since last year, Newton-Rex has worked at Stability AI, home of the generative AI image-maker Stable Diffusion. The company raised USD $101 million at a $1 billion valuation in 2022.
Newton-Rex has made a big impact at Stability AI in a relatively short time.
As VP of Audio at the company, he’s led the development of Stable Audio, a generative AI music-making platform trained on licensed music in partnership with rights-holders. Last month, Stable Audio was named one of Time’s ‘Best Inventions Of 2023’.
Despite this success, Newton-Rex has just quit his role at Stability on a point of principle.
A published classical composer himself, Newton-Rex has, throughout his career, been consistent in his belief in the importance of copyright for artists, songwriters, and rights-holders.
As he explains below, Newton-Rex’s personal respect for copyright has somewhat clashed with that of his employer in recent weeks, after Stability AI argued in favor of the ‘fair use’ of copyrighted material to fuel generative AI within a submission to the US Copyright Office. (As Newton-Rex points out, several other large generative AI companies share Stability’s position on this.)
Some additional recent context: Newton-Rex’s decision to resign from Stability AI arrives as the debate over the ‘harvesting’ of copyrighted music by generative AI platforms gets even louder.
Just last week, superstar Bad Bunny expressed his fury over an AI-generated track that artificially replicates the sound of his vocals, as well as those of Justin Bieber and Daddy Yankee.
The purported maker of that track, which has over 22 million plays on TikTok, calls themselves FlowGPT.
In a message responding to Bad Bunny published on TikTok, FlowGPT offered to let the artist re-record the AI-generated track “for free with all rights… but don’t forget to credit FlowGPT”.
It gets worse: If Bad Bunny’s team managed to get the track removed from digital platforms, FlowGPT threatened, “I’ll have to upload a new version.”
Over to Ed…
I’ve resigned from my role leading the Audio team at Stability AI, because I don’t agree with the company’s opinion that training generative AI models on copyrighted works is ‘fair use’.
First off, I want to say that there are lots of people at Stability who are deeply thoughtful about these issues. I’m proud that we were able to launch a state-of-the-art AI music generation product trained on licensed training data, sharing the revenue from the model with rights-holders. I’m grateful to my many colleagues who worked on this with me and who supported our team, and particularly to Emad for giving us the opportunity to build and ship it. I’m thankful for my time at Stability, and in many ways I think they take a more nuanced view on this topic than some of their competitors.
But, despite this, I wasn’t able to change the prevailing opinion on fair use at the company.
“I don’t see how using copyrighted works to train generative AI models of this nature can be considered fair use.”
This was made clear when the US Copyright Office recently invited public comments on generative AI and copyright, and Stability was one of many AI companies to respond. Stability’s 23-page submission included this on its opening page:
“We believe that AI development is an acceptable, transformative, and socially-beneficial use of existing content that is protected by fair use”.
For those unfamiliar with ‘fair use’, the submission claims that training an AI model on copyrighted works doesn’t infringe the copyright in those works, so it can be done without permission, and without payment. This position is fairly standard across many of the large generative AI companies, and other big tech companies building these models — it’s far from a view that is unique to Stability. But it’s a position I disagree with.
I disagree because one of the factors affecting whether the act of copying is fair use, according to Congress, is “the effect of the use upon the potential market for or value of the copyrighted work”. Today’s generative AI models can clearly be used to create works that compete with the copyrighted works they are trained on. So I don’t see how using copyrighted works to train generative AI models of this nature can be considered fair use.
“Companies worth billions of dollars are, without permission, training generative AI models on creators’ works, which are then being used to create new content that in many cases can compete with the original works. I don’t see how this can be acceptable.”
But setting aside the fair use argument for a moment — since ‘fair use’ wasn’t designed with generative AI in mind — training generative AI models in this way is, to me, wrong. Companies worth billions of dollars are, without permission, training generative AI models on creators’ works, which are then being used to create new content that in many cases can compete with the original works. I don’t see how this can be acceptable in a society that has set up the economics of the creative arts such that creators rely on copyright.
To be clear, I’m a supporter of generative AI. It will have many benefits — that’s why I’ve worked on it for 13 years. But I can only support generative AI that doesn’t exploit creators by training models — which may replace them — on their work without permission.
I’m sure I’m not the only person inside these generative AI companies who doesn’t think the claim of ‘fair use’ is fair to creators. I hope others will speak up, either internally or in public, so that companies realise that exploiting creators can’t be the long-term solution in generative AI.
Ed Newton-Rex

Music Business Worldwide