Earlier this week, the Copyright Office convened a second listening session on copyright issues in AI-generated expressive works, part of its initiative to study and understand the issue. It follows the Office’s listening session on copyright issues in AI-generated textual works a few weeks back (in which Authors Alliance participated). Tuesday’s sessions covered copyright issues in images created by generative AI programs, a topic that has garnered substantial public attention and controversy in recent months.
Participants in the listening sessions included a variety of professional artist organizations, like the National Press Photographers Association, the Graphic Artists Guild, and Professional Photographers of America; companies that have created the generative AI tools under discussion, like Stability AI, Jasper AI, and Adobe; several individual artists; and a variety of law school professors, attorneys, and think tanks representing diverse views on copyright issues in AI-generated images.
Generative AI as a Powerful Artistic Tool
Most, if not all, of the listening sessions’ participants agreed that generative AI programs have the potential to be incredible tools for artists. Like earlier technological developments such as manual cameras and, much more recently, image editing software like Photoshop, generative AI programs can minimize or eliminate some of the “mechanical” aspects of creation, making creation less time-consuming. But participants disagreed on the impact these tools are having on artists and whether the tools themselves or copyright law ought to be reformed to address these effects.
Visual artists, and those representing them, tended to caution that these tools should be developed in a way that does not hurt the livelihoods of the artists who created the images the programs are trained on. While a more streamlined creative process makes things easier for artists relying on generative AI in their creation, it could also mean fewer opportunities for other artists. When a single designer can easily create background art with Midjourney, for example, they might not need to hire another designer for that task. This helps the first designer to the detriment of the second. Those representing the companies that create and market generative AI programs, including Jasper AI and Stability AI, focused on the ways that their tools are already helping artists: these tools can generate inspiration images as “jumping off points” for visual artists and lower barriers to entry for aspiring visual artists who may not have the technical skills to create visual art without support from these kinds of tools, for example.
On the other hand, some participants voiced concerns about ethical issues in AI-generated works. A representative from the National Press Photographers Association mentioned concerns that AI-generated images could be put to “bad uses,” and that creators of the training data could be associated with those uses. Deepfakes and “images used to promote social unrest” are among the uses that photojournalists and other creators are concerned about.
Copyright Registration in AI-Generated Visual Art
Several participants expressed approval of the Copyright Office’s recent guidance regarding registration in AI-generated works, but others called for greater clarity in the registration guidance. The guidance reiterates that there is no copyright protection in works created by generative AI programs, because of copyright’s human authorship requirement. It instructs creators that they can only obtain copyright registration for the portions of the work they actually created, and must disclose the role of generative AI tools in creating their works if it is more than de minimis. An author can also obtain copyright protection for a selection and arrangement of AI-generated works as a compilation, but not in the AI-generated images themselves. Yet open questions, particularly in the context of AI-generated visual art, remain: how much does an artist need to add to an image to render it their own creation, rather than the product of a generative AI tool? In other words, how much human creativity is needed to transform an AI-generated image into the product of original human creation for the purposes of copyright? How are we to address situations where a human and AI program “collaborate” on the creation of a work? The fact that the Office’s guidance requires applicants to disclose if they used AI programs in the creation of their work also leaves open questions. If an artist uses a generative AI program to create just one element of a larger work, or as a tool for inspiration, must that be disclosed in copyright registration applications?
The attorney for Kristina Kashtanova, the artist who applied for a copyright registration for her graphic novel, Zarya of the Dawn, also spoke. If you haven’t been tracking it, Zarya of the Dawn included many AI-generated images and sparked many of the conversations around copyright in AI-generated visual works (you can read our previous coverage of the Office’s decision letter on Zarya of the Dawn here). Kashtanova’s attorney raised more questions about the registration guidance. She pointed out that the amount of creativity required to create a copyrighted work is very low—there must be more than a “modicum” of creativity, meaning that vast quantities of works (like each of the photographs we take with our smartphones) are eligible for copyright protection. Why, then, is the bar higher when it comes to AI-generated works? Kashtanova certainly had to be quite creative to put together her graphic novel, and the act of writing a prompt for the image generator, refining that prompt, and re-prompting the tool until the creator gets an image they are satisfied with requires a fair amount of creative human input. More, one might argue, than is required to take a quick digital photograph. The registration guidance attempts to solve the problem of copyright protection in works not created by a human, but in so doing, it creates different copyrightability standards for different types of creative processes.
These questions will become all the more relevant as artists increasingly rely on AI programs to create their works. The representative from Getty Images stated that more than half of their consumers now use generative AI programs to create images as part of their workflows, and several of the professional artist organizations noted that many of their members were similarly taking up generative AI tools in their creation.
Calls For Greater Transparency
Many participants expressed a desire for the companies designing and making available generative AI programs to be more transparent about the contents of these tools’ training data. This appealed both to artists who were concerned that their works were used to train the models and felt this was fundamentally unfair, and to those with ethical concerns around scraping or potential copyright infringement. In response to these critiques, Adobe explained that it sought to develop its new AI image generator, Firefly (currently in beta testing), with these kinds of concerns in mind. Adobe explained that it planned to train its tool on openly licensed images, seeking to “drive transparency standards” and “deploy [the] technology responsibly in a way that respects creators and our communities at large.” The representative from Getty Images also called for greater transparency in training data. Getty stated that transparency could help mitigate the legal and economic risks associated with the use of generative AI programs—potential copyright claims as well as the possibility of harming the visual artists who created the underlying works the tools are trained on.
Opt-Outs and Licensing
Related to calls for transparency, much of the discussion centered around attempts to permit artists to opt out of having their works included in the training data used for generative AI programs. Like robots.txt, the file that allows websites to indicate to web crawlers and other web robots that they don’t wish to allow these robots to visit their sites, several participants discussed a “do not train” tag as a way for creators to opt out of being included in the training data. Adobe said it intended to train its new generative AI tool, Firefly, on openly licensed images and make it easy for artists to opt out with a “do not train” tag, apparently in response to these types of concerns. Yet some rightsholder groups pointed out that compliance with this tag may be uneven—indeed, robots.txt itself is a voluntary standard, and so-called bad robots like spam bots often ignore it.
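For readers unfamiliar with the mechanism, a robots.txt file is just a short plain-text file served at a site’s root. A minimal sketch (the crawler name “ExampleAITrainingBot” is hypothetical, used only for illustration; there is no single settled “do not train” standard):

```
# robots.txt — placed at https://example.com/robots.txt
# Ask all crawlers not to visit the /portfolio/ directory
User-agent: *
Disallow: /portfolio/

# Ask a (hypothetical) AI training crawler not to visit the site at all
User-agent: ExampleAITrainingBot
Disallow: /
```

As the participants noted, nothing enforces these directives: a crawler must choose to read the file and honor it, which is why a “do not train” tag built on the same model would remain a voluntary standard.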
Works available under permissive licenses, like Creative Commons’ various licenses, have been suggested as good candidates for training data to avoid potential rights issues, though several participants pointed out that there may be compliance issues when it comes to commercial uses of these tools, as well as attribution requirements. The participant representing the American Society for Collective Rights Licensing voiced support for proposals to implement a collective licensing scheme to compensate artists whose works are used to train generative AI programs, echoing earlier suggestions by groups such as the Authors Guild.
One visual artist argued fervently that an opt-out standard was not enough: in her view, visual artists should have to opt in to having their works included in training data, because an opt-out system harms artists without much of an online presence or the digital literacy to affirmatively opt out. In general, the artist participants voiced strong opposition to having their works included without compensation, a position many creators with concerns about generative AI have taken. But Jasper AI expressed its view that training generative AI programs with visual works found across the Internet was a transformative use of that data, all but implying that this kind of training was fair use (a position Authors Alliance has taken). It was notable that so few participants suggested that the ingestion of visual works of art for the purposes of training generative AI programs was a fair use, particularly compared to the arguments in the listening session on text-based works. This may well be due to ongoing lawsuits, inherent differences between image-based and text-based outputs, or the general tenor of conversations around AI-generated visual art. Many of the participants spoke of anecdotal evidence that graphic artists are already facing job loss and economic hardship as a result of the emergence of AI-generated visual art.