Earlier this week, the Copyright Office convened a listening session on the topic of copyright issues in AI-generated music and sound recordings, the fourth in its listening session series on copyright issues in different types of AI-generated creative works. Authors Alliance participated in the first listening session on AI-generated textual works, and we wrote about the second listening session on AI-generated images here. The AI-generated music listening session participants included music industry trade organizations like the Recording Industry Association of America, Songwriters of North America, and the National Music Publishers’ Association; generative AI music companies like Boomy, Tuney, and Infinite Album; music labels like the Universal Music Group and Wixen; and individual musicians, artists, and songwriters. Streaming service Spotify and collective-licensing group SoundExchange also participated.
Generative AI Tools in the Music Industry
Many listening session participants noted that some musical artists, such as Radiohead and Brian Eno, have used generative AI tools in their work for decades. For those creators, generative AI music is nothing new, but rather an expansion of existing tools and techniques. What is new is the ease with which ordinary internet users without musical training can assemble songs using AI tools—programs like Boomy enable users to generate melodies and musical compositions, with options to overlay voices or add other sounds. Some participants sought to distinguish generative tools from so-called “assistive tools,” with the latter being more established among professional and amateur musicians alike.
While some established artists have long relied on assistive AI tools to create their works, AI-generated music has significantly lowered barriers to entry for music creation. Some take the view that this is a good thing, enabling creation by more people who could not otherwise produce music. Others protest that those with musical talent and training are harmed by the influx of new participants, as these songs flood the market. In my view, it’s important to remember that the purpose of copyright, promoting the progress of science and the useful arts, is served when more people can generate creative works, including music. Yet AI-generated music may already be at or past the point where, at least to some listeners, it is indistinguishable from works created by human artists without the use of these tools. It may be the case that, as at least one participant suggested, AI-generated audio works differ from AI-generated textual works in ways that call for different forms of regulation.
Right of Publicity and Name, Image, and Likeness
Although the topic of the listening session was federal copyright law, several participants discussed artists’ rights in both their identities and voices—aspects of the “right of publicity” or the related name, image, and likeness (“NIL”) doctrine. These rights are creatures of state law rather than federal law, and allow individuals, particularly celebrities, to control the uses to which aspects of their identities may be put. In one well-known right of publicity case, Midler v. Ford Motor Co., Ford used a Bette Midler “sound-alike” in a car commercial, which was found to violate her right of publicity. That case and others like it have popularized the idea that the right of publicity can cover voice. This is a particularly salient issue in the context of AI-generated music due to the rise of “soundalike” or “voice cloning” songs that have garnered substantial popularity and controversy, such as the recent Drake soundalike, “Heart on My Sleeve.” Some worry that listeners could believe they are listening to the named musical artist when in fact they are listening to an imitation, potentially harming the market for that artist’s work.
The representative from the Music Artists Coalition argued that the hodgepodge of state laws governing the right of publicity could be one reason why soundalikes have proliferated: different states offer different levels of protection, and the lack of unified guidance creates uncertainty as to how these songs will be regulated. And the representative from Controlla argued that copyright protection should be expanded to cover voice or identity rights. In my view, expanding the scope of copyright in this way is neither reasonable nor necessary as a matter of policy (and furthermore, it would be a matter for Congress, not the Copyright Office, to address), but the proposal does show the breadth of the soundalike problem for the music industry.
Copyrightability of AI-Generated Songs
Several listening session participants argued for intellectual property rights in AI-generated songs, while others argued that the law should continue to center human creators. The Copyright Office’s recent guidance regarding copyright in AI-generated works suggests that the Office does not believe there is any copyright in AI-generated materials due to the lack of human authorship, though human selection, editing, and compilation can be protected. The representatives from companies offering generative AI tools expressed a need for some form of copyright protection for the songs these programs produce, explaining that the songs cannot be effectively commercialized if they are not protected. In my view, this can be accomplished through protection for the songs as compilations of uncopyrightable materials or as original works, owing to human input and editing. Yet, as many participants across these listening sessions have argued, the Copyright Office’s registration guidance does not make clear precisely how much human input or editing is needed to render an AI-generated work a protectable original work of authorship.
Licensing or Fair Use of AI Training Data
In contrast to the view taken by many during the AI-generated text listening session, none of the participants in this listening session argued outright that training generative AI programs on in-copyright musical works was fair use. Instead, much of the discussion focused on the need for a licensing scheme for audio materials used to train generative AI audio programs. Unlike the situation with many text- and image-based generative AI programs, the representatives from generative AI music programs expressed an interest in and willingness to enter into licensing agreements with music labels or artists. In fact, there is some evidence that licensing conversations are already taking place.
The lack of fair use arguments during this listening session may be due to the particular participants, industry norms, or the “safety” of expressing this view in the context of the music industry. Regardless, it provides an interesting contrast to views around training data for text-generating programs like ChatGPT, which many (including Authors Alliance) have argued constitutes fair use. This is particularly remarkable since at least some of these programs, in our view, use the audio data they are trained on for a highly transformative purpose. Infinite Album, for example, allows users to generate “infinite music” to accompany video games. The music reacts to events in the game—becoming more joyful and upbeat for victories, or sad for defeats—and can even work interactively for game streamers, whose viewers can temporarily influence the music. This seems like precisely the sort of “new and different purpose” that fair use contemplates, and likewise a service that is unlikely to compete directly with individual songs and records.
Generative AI and Warhol Foundation v. Goldsmith
Many listening session participants discussed how the regulation of AI-generated music under copyright law interacts with the recent Supreme Court fair use decision in Warhol Foundation v. Goldsmith (you can read our coverage of that decision here), which also considered whether a particular use that could have been licensed was fair use. Some participants argued that the Goldsmith decision makes clear that training generative AI models (i.e., the input stage) is not a fair use under the law. It is not yet clear precisely how the decision will shape the fair use doctrine going forward, particularly as applied to generative AI, and I think it is a stretch to call it a death knell for the argument that training generative AI models is fair use. However, the Court did place striking emphasis on the commerciality of the use in that case, somewhat deemphasizing the transformativeness inquiry. This could affect the fair use analysis in the context of generative AI, as these programs are overwhelmingly commercial, and the outputs they create can be and are being used for commercial purposes.