Authors are using generative AI to support their creative labors, to conduct research across a number of disciplines, and to handle other mundane but important tasks. But there is still legal uncertainty about how authors and creators should interact with generative AI. We will update this page and publish new resources as legislation and case law develop in this arena.
- Check out our Frequently Asked Questions below.
- Read a more in-depth statement of our current views on Generative AI. We exist to support authors who want to leverage the tools available in the digital age to see their creations reach broad audiences and create innovative new works, and we see generative AI systems as one such tool that can support authors and authorship.
- Check out Authors Alliance’s first zine “Putting the AI in Fair Use” to explore what fair use is and how we can reduce bias in AI.
- Learn about the right of publicity in the context of AI.
- Contact us at info@authorsalliance.org with questions.
Frequently Asked Questions
Is it infringing to use copyrighted works for AI training?
This is the single most hotly contested issue. Rightsholders have brought over two dozen copyright lawsuits in the United States that raise this question. While courts have dismissed many claims related to AI outputs outright, they have generally allowed artists’ claims for unauthorized copying of their works for AI training to go to trial. Several new cases filed this year (such as Zhang v. Google) have limited their infringement claims to AI companies’ unauthorized copying of the plaintiffs’ works, rather than claiming that the outputs are infringing copies of those works. So far, though, no court has ruled directly on the question of whether using copyrighted works without permission to train AI models is permissible.
We believe AI training can qualify as transformative fair use, but it is also possible for courts to adopt a very restrictive reading of fair use (especially in the wake of the Supreme Court’s recent Warhol decision) and decide that AI training is not a fair use for a variety of reasons.
The broader impact of such a decision would depend on how each court frames the issues. A very broad decision (for example, one holding that non-expressive uses in general are not transformative fair uses) could have devastating effects for non-commercial uses of AI, where the prohibitive cost of ingesting new copyrighted works would restrict training data to outdated public domain materials. Big AI companies, on the other hand, are already buying texts and images directly from publishers to avoid litigation risk and gain more access to training materials, and it is unclear whether individual authors will get a share of that pie.
Are AI-created works copyrightable?
The US Copyright Office states that it has registered, and will continue to register, works that include content created by generative AI. However, works created entirely by AI, and works partially created by AI whose AI-generated content is not disclosed in the application, cannot obtain valid copyright registrations. The Office’s position is that AI-generated material is not protected by copyright and that any work registered with the Office must contain some human-created content.
An ongoing case, Thaler v. Perlmutter, addresses this exact issue. Thaler is currently appealing a district court decision that sided with the Copyright Office, which had refused to register Thaler’s AI-generated work. We think it unlikely that the appellate court will rule in Thaler’s favor, given the long-established rule that only human-authored works can obtain copyright protection.
Can AI-created works be subject to copyright infringement claims?
Copyright law’s “substantial similarity” doctrine is equipped to handle the question of whether a given output is similar enough to an input to be infringing. If similarity is minimal or exists only with regard to non-copyrightable elements, then a work cannot be said to be infringing even when there is actual copying. In most of the ongoing lawsuits, courts have dismissed claims that AI outputs in a general sense are substantially similar and thus infringing. But a particular fact pattern could lead to a different result. For example, in Concord Music Group v. Anthropic, the plaintiffs alleged that the generative AI used lyrics from Don McLean’s “American Pie” to produce a strikingly similar output.
Of course, AI systems do not produce outputs automatically but require some human intervention to prompt them to do so. This has raised questions about who will be liable (the AI system creator or the user who inputs the prompt) and what safeguards AI system creators should put in place to discourage uses that result in infringing outputs.