
New White Paper on Federal Public Access Policies

Posted March 20, 2025
Eisenhower Executive Office Building, home to the Office of Science and Technology Policy (Official White House photo, by Carlos Fyfe; Public Domain; Source: Wikimedia Commons)

Authors Alliance and SPARC have released the second of four planned white papers addressing legal issues surrounding open access to scholarly publications under the 2022 OSTP memo (the “Nelson Memo”). The white papers are part of a larger project (described here) to support legal pathways to open access.

The first paper discussed the “Federal Purpose License” and how it supports federal public access policies under the Nelson Memo. This second paper discusses the legal landscape surrounding the Federal Purpose License and the public access policies in light of concerns that the policies are not permissible government actions. The white paper explains why they are.

The White Paper is available here. Supporting materials, previous papers, and other formats are available here.

In the last couple of months there has been a lot of change in the federal grants space, but so far the public access policies, including the latest ones announced in the Nelson Memo, remain in place. Several agencies have already implemented their responses to the Nelson Memo through regulation; the rest are due to finish the task later this year.

For Federal agencies to act permissibly, their actions must be grounded in a valid Congressional delegation of authority. Congress can’t escape its own limitations by delegating beyond its authority, so for a delegation to be valid, the delegated actions must also be ones Congress itself could permissibly take. The first part of the paper examines Congress’s constitutional power to provide grants for research and development, finding support under both the Spending Clause and the Progress Clause.

The Federal Purpose License places a condition on acceptance of grant funds. Congress doesn’t have unlimited power to place conditions on grants, as the Supreme Court established in South Dakota v. Dole. However, the Federal Purpose License falls safely within the limits set by Dole. In particular, the Federal Purpose License as a condition violates neither the First Amendment’s Speech Clause nor the Fifth Amendment’s Takings Clause.

The second part of the paper looks at Congress’s delegation of authority and the agencies’ development of the policies. The paper explains how Congress expressly—and permissibly—delegated to the National Institutes of Health both the power and the obligation to create the prototype of the public access policy, and how the subsequent extension of the policy to the other grant-making agencies is strongly supported by principles of implicit delegation and was established through appropriate rulemaking. Though the recent case of Loper Bright Enterprises v. Raimondo may require agencies to satisfy a somewhat higher burden when defending their actions, the Supreme Court’s abandonment of the “Chevron doctrine” does nothing to change the permissibility of the public access policies or the use of the Federal Purpose License.

The next paper will examine the interaction between institutional intellectual property policies and federal public access policies, and the final paper will discuss issues surrounding article versioning. Watch this space for more!

Thaler v. Perlmutter: D.C. Circuit confirms that a non-human machine cannot be an author under the U.S. Copyright Act

Posted March 19, 2025
A Recent Entrance to Paradise, an image generated by Stephen Thaler’s “Creativity Machine.”

Yesterday, the U.S. Court of Appeals for the District of Columbia Circuit issued its ruling in Thaler v. Perlmutter, a case centered on the question of whether a non-human machine, without any intervention from a human, could be an author and hold copyright under the U.S. Copyright Act. The court found that a non-human machine could not be an author under the Act.

In 2018, Stephen Thaler filed an application to register a copyright claim in A Recent Entrance to Paradise with the U.S. Copyright Office. In that application, the author of the image was identified as the “Creativity Machine,” with Thaler listed as the claimant with a transfer statement: “ownership of the machine.” In his application, Thaler stated that the work “was autonomously created by a computer algorithm running on a machine” and he was “seeking to register this computer-generated work as a work-for-hire to the owner of the Creativity Machine.” The Copyright Office refused to register the work and later affirmed the denial of registration. Thaler subsequently sued the Copyright Office and lost. He appealed and has now lost again.

In virtually every way, this decision should not be surprising. While it is absolutely conceivable that the product of AI and human collaboration may result in copyrightable works, it is well settled law that non-human authorship is not recognized under the U.S. Copyright Act. This opinion is mostly a repetition of the positions taken by the U.S. Copyright Office in its denial of registration.

That acknowledged, there are some points worth highlighting from the opinion:  

  • First, the court centers much of its analysis on the text of the Copyright Act and the myriad ways in which the statutory language is dependent on humans as authors. Taken together, these provisions show that the Act is unarguably built upon the premise of human authorship. The court says: “All of these statutory provisions collectively identify an ‘author’ as a human being. Machines do not have property, traditional human lifespans, family members, domiciles, nationalities, mentes reae, or signatures.”
  • Part of the court’s analysis focuses on whether the public would benefit from granting copyright to machine-authored works; the court ultimately concludes that it would not. The court says: “But the Supreme Court has long held that copyright law is intended to benefit the public, not authors. Copyright law “makes reward to the owner a secondary consideration. ‘[T]he primary object in conferring the monopoly lie[s] in the general benefits derived by the public from the labors of authors.’”
  • It is important to remember that this opinion is only about the narrow question of whether a machine, working in isolation and with no human intervention, can be considered the author of a work. We should be careful not to try to extend this opinion beyond that. “Those line-drawing disagreements over how much artificial intelligence contributed to a particular human author’s work are neither here nor there in this case. That is because Dr. Thaler listed the Creativity Machine as the sole author of the work before us, and it is undeniably a machine, not a human being.” 
  • Finally, the district court found that Dr. Thaler had waived the argument that, as creator of the Creativity Machine, he was the work’s author.  The Court of Appeals found that Dr. Thaler had not challenged that waiver and that it therefore could not address the question of whether works generated by Artificial Intelligence might be authored by the creator of the AI. (“Dr. Thaler argues that he is the work’s author because he made and used the Creativity Machine. We cannot reach that argument.”) This leaves some ambiguity as to whether a future creator of an AI might successfully claim copyright in a work themselves. It also leaves open questions where the human user of AI claims to be the author of an AI-generated work or portions of a work. This is the question the court will have to address head-on in Allen v. Perlmutter, a case currently pending in Colorado. We will continue to watch this space, and share with you any new developments.  

Ultimately, the Thaler v. Perlmutter decision is limited to the holding that a machine cannot be an author under copyright law. This is a sensible result and consistent with sound public policy.

Authors Alliance Comment on US AI Action Plan

Posted March 14, 2025

Today, we submitted a response to a Request for Information from the Office of Science and Technology Policy (OSTP). The OSTP is seeking to develop an “AI Action Plan” to sustain and accelerate the development of AI in the United States. As an organization dedicated to advancing the interests of authors who wish to share their works broadly for the public good, we felt it imperative to weigh in on critical copyright and policy issues impacting AI innovation and access to knowledge.

In our response, we reaffirmed our belief that the use of copyrighted works specifically for AI training (distinct from other AI uses) is a quintessential fair use. We noted that Section 1202(b) of the Copyright Act has little utility and serves as an unnecessary stumbling block to the development of AI. We also highlighted the importance of high quality training data and pointed towards the work that is already being done to develop AI training corpora.  

A Few Key Points from Our Submission

Our response to the OSTP highlights several key areas where federal policy can support both authors and a thriving AI research environment:

1. The Role of Fair Use in AI Model Training

We emphasize that fair use has long been a cornerstone of innovation in the U.S.—enabling everything from web search engines to digitization projects. US copyright law has played a major role both in developing the incredible creative industries based in the US and in driving leading scientific research and commercial innovation. The key to this innovation policy has been a thoughtful balance: providing copyright holders a degree of control over their works while allowing flexibility for technological innovation and new transformative uses. AI development relies on the ability to analyze large datasets, many of which include copyrighted materials. The uncertainty surrounding the legal status of AI training data due to ongoing litigation threatens to slow innovation. We urge the federal government to explicitly support the application of fair use to AI training and provide much-needed clarity.

2. Addressing the Contractual Override of Fair Use

Many AI developers face contractual barriers that limit their ability to make fair use of content, particularly in text and data mining applications. We recommend legislative measures to prevent contracts from overriding fair use rights, ensuring that AI researchers and developers can continue innovating without undue restrictions.

3. Access to High Quality Datasets

Access to high-quality datasets is a foundational pillar for AI development, enabling models to learn, refine, and iteratively improve. However, the availability of such datasets is often hindered by restrictive licensing agreements, proprietary controls, and inconsistent data standards. To maximize the potential of AI while ensuring ethical and legally sound development, collaborations between academic institutions, libraries, public archives, and technology developers are essential. Government policies should facilitate public-private partnerships that allow for robust and thoughtfully curated datasets, ensuring that AI systems are trained on a rich range of representative materials.

We invite our community of authors, researchers, and policymakers to review our submission. Your engagement is crucial in shaping a responsible and forward-thinking AI policy in the U.S. You can always reach us at info@authorsalliance.org.

Updates on AI Copyright Law and Policy: Section 1202 of the DMCA, Doe v. GitHub, and the UK Copyright and AI Consultation

Posted March 7, 2025
Some district courts have applied DMCA 1202(b) to physical copies, including textiles, which means that if you cut off parts of a fabric that contain copyright information, you could be liable for up to $25,000 in damages.

The US Copyright Act has never been praised for its clarity or its intuitive simplicity—at a whopping 460 pages long, it is filled with hotly debated ambiguities and overly complex provisions. The copyright laws of most other jurisdictions aren’t much better.

Because of this complexity, the implications of changes to copyright law and policy are not always clear to most authors. As we’ve said in the past, many of these issues seem arcane and largely escape public attention. Yet entities with a vested interest in maximalist copyright—often at odds with the public interest—are certainly paying attention, and they often claim to speak for all authors when in fact they represent only a small subset. As part of our efforts to advocate for a future where copyright law offers ample clarity, certainty, and a real focus on values such as the advancement of knowledge and free expression, we would like to share two recent projects we have undertaken:

The 1202 Issue Brief and Amicus Brief in Doe v. GitHub

Authors Alliance has been closely monitoring the impact of Digital Millennium Copyright Act (DMCA) Section 1202. As we have explained in a previous post, Section 1202(b) creates liability for those who remove or alter copyright management information (CMI) or distribute works with removed CMI. This provision, originally intended to prevent widespread piracy, has been increasingly invoked in AI copyright lawsuits, raising significant concerns for lawful uses of copyrighted materials beyond AI training. While penalties for removing CMI might seem reasonable on their face, the broad scope of CMI (which includes a wide variety of information such as website terms of service and affiliate links), combined with the challenge of including it with every downstream distribution of incomplete copies (imagine having to replicate and distribute something like the Amazon Kindle terms of service every time you quoted text from an ebook), could be very disruptive for many users.

To address the confusion among courts in the 9th Circuit regarding the (somewhat inaptly named) “identicality requirement,” we have released an issue brief and undertaken to file an amicus brief in the Doe v. GitHub case now pending in the 9th Circuit.

Here are the key reasons why we care—and why you should care—about this seemingly obscure issue:

  • The Precedential Nature of Doe v. Github: The upcoming 9th Circuit case, Doe v. GitHub, will address whether Section 1202(b) should only apply when copies made or distributed are identical (or nearly identical) to the original. Lower courts have upheld this identicality requirement to prevent overbroad applications of the law, and the appellate ruling may set a crucial precedent for AI and fair use.
  • Potential Impact on Otherwise Legal Uses: It is not entirely certain if fair use is a defense to 1202(b) claims. If the identicality requirement is removed, Section 1202(b) could create liability for transformative fair uses, snippet reuse, text and data mining, and other lawful applications. This would introduce uncertainty for authors, researchers, and educators who rely on copyrighted materials in limited, legal ways. We advocate for maintaining the identicality requirement and clarifying that fair use applies as a defense to Section 1202 claims. 
  • Possibility of Frivolous Litigation: Section 1202(b) claims have surged in recent years, particularly in AI-related lawsuits. The statute’s vague language and broad applicability have raised fears that opportunistic litigants could use it to chill innovation, scholarship, and creative expression.

To find out more about what’s at stake, please take a look at our 1202(b) Issue Brief. You are also invited to share your stories with us about how you have navigated this strange statute.

Reply to the UK Open Consultation on Copyright and AI

We have members in the UK, and many of our US-based members publish in the UK. We have been watching developments in UK copyright law closely, and we recently filed a comment in the UK Open Consultation on Copyright and AI. In our comment, we emphasized the importance of ensuring that copyright policy serves the public interest. Our response’s key points include:

  • Competition Concerns: We alerted policymakers that their top objectives must include preventing monopolies from forming in the AI space. If licensing for AI training becomes the norm, we foresee power consolidating in a handful of tech companies and their unbridled monopoly permeating all aspects of our lives within a few decades—if not sooner.
  • Fair Use as a Guiding Principle: We strongly believe that the use of works in the training and development of AI models constitutes fair use under US law. While this issue is currently being tested in courts, case law suggests that fair use will prevail, ensuring that AI training on copyrighted works remains permissible. The UK does not have an identical fair use statute, but has recognized that some of its functions—such as flexibility to permit new technological uses—are valuable. We argue that the wise approach is for the UK to update its laws to ensure its creative and tech sectors can meaningfully participate in the global arena. Our comment called for a broad AI and TDM exception allowing temporary copies of copyrighted works for AI training. We emphasized that when AI models extract uncopyrightable elements, such as facts and ideas, this should remain lawful and protected. 
  • Noncommercial Research Should Be Protected: We strongly advocated for the protection of noncommercial AI research, arguing that academic institutions and their researchers should not face legal barriers when using copyrighted works to train AI models for research purposes. Imposing additional licensing requirements would place undue burdens on academic institutions, which already pay significant fees to access research materials.

Book Talk: Copyright, AI, and Great Power Competition

Register Here

How is artificial intelligence reshaping intellectual property law? And what role does copyright play in the global AI race? Join us for a thought-provoking discussion on Copyright, AI, and Great Power Competition, a new paper by Joshua Levine and Tim Hwang that explores how different nations approach AI policy and copyright regulation—and what’s at stake in the battle for technological dominance.

This event will bring together experts to examine key legal, economic, and geopolitical questions, including:

  • How do copyright laws affect AI innovation?
  • What are the competing regulatory approaches of the U.S., China, and the EU?
  • How should policymakers balance creators’ rights with AI development?

Whether you’re a legal scholar, technologist, policymaker, or just curious about the intersection of AI and copyright, this conversation is not to be missed!

DOWNLOAD

Download Copyright, AI, and Great Power Competition.

ABOUT OUR SPEAKERS

JOSHUA LEVINE is a Research Fellow at the Foundation for American Innovation. His work focuses on policies that foster digital competition and interoperability in digital markets, online expression, and emerging technologies. Before joining FAI, Josh was a Technology and Innovation Policy Analyst at the American Action Forum, where he focused on competition in digital markets, data privacy, and artificial intelligence. He holds a BA in Political Economy from Tulane University and lives in Washington, D.C.

TIM HWANG is General Counsel and a Senior Fellow at the Foundation for American Innovation focused on the intersection of artificial intelligence and intellectual property. He is also a Senior Technology Fellow at the Institute for Progress, where he runs Macroscience. Previously, Hwang served as the General Counsel and VP Operations at Substack, as well as the global public policy lead for Google on artificial intelligence and machine learning. He is the author of Subprime Attention Crisis, a book about the structural vulnerabilities in the market for programmatic advertising.

Dubbed “The Busiest Man on the Internet” by Forbes Magazine, his current research focuses on global competition in artificial intelligence and the political economy of metascience. He holds a J.D. from Berkeley Law School and a B.A. from Harvard College.

REGISTER HERE

Fair Use, Censorship, and the Struggle for Control of Facts

Posted February 27, 2025
Caption: 451 is the HTTP error code returned when a webpage is unavailable for legal reasons; it is also the temperature (in degrees Fahrenheit) at which books catch fire and burn. This public domain image was taken inside the Internet Archive.

Imagine this: a high-profile aerospace and media billionaire threatens to sue you for writing an unauthorized and unflattering biography. In the course of writing, you rely on several news articles, including a series of in-depth pieces about the billionaire’s life written over a decade earlier. Given their closeness in time to real events, you quote, sometimes extensively, from those articles in several places. 

On the eve of publication, your manuscript is leaked. Through one of his associated companies, the billionaire buys up the copyrights to the articles from which you quote. The next day the company files an infringement lawsuit against you. 

Copyright Censorship: a Time-Honored Tradition

It’s easy to imagine such a suit brought by a modern billionaire—perhaps Elon Musk or Jeff Bezos. But using copyright as a tool for censorship is a time-honored tradition. In this case, Howard Hughes tried it out in 1966, using his company Rosemont Enterprises to file suit against Random House for a biography it would eventually publish.

As we’ve seen many times before and since, the courts turned to copyright’s “fair use” right to rescue the biography from censorship. Fair use, the court explained, exists so that “courts in passing upon particular claims of infringement must occasionally subordinate the copyright holder’s interest in a maximum financial return to the greater public interest in the development of art, science and industry.” 

Singling out the biographical nature of the work and its importance in surfacing underlying facts, the court explained: 

Biographies, of course, are fundamentally personal histories and it is both reasonable and customary for biographers to refer to and utilize earlier works dealing with the subject of the work and occasionally to quote directly from such works. . . . This practice is permitted because of the public benefit in encouraging the development of historical and biographical works and their public distribution, e.g., so “that the world may not be deprived of improvements, or the progress of the arts be retarded.”

Fair use playing this role is no accident. As the Supreme Court has explained, the relationship between copyright and free expression is complicated. On the one hand, the Court has explained,  “[T]he Framers intended copyright itself to be the engine of free expression. By establishing a marketable right to the use of one’s expression, copyright supplies the economic incentive to create and disseminate ideas.” But, recognizing that such exclusive control over expression could chill the very speech copyright seeks to enable, the law contains what the Court has described as two “traditional First Amendment safeguards” to ensure that facts and ideas remain available for free reuse: 1) protections against control over facts and ideas, and 2) fair use. 

But rescuing a biography that merely quotes, even extensively, from earlier articles seems like an easy call, especially when the plaintiff has so clearly engineered the copyright suit not to protect legitimate economic interests but to suppress an unpopular narrative.

The world is a little more complicated now. Can fair use continue to protect free expression from excessive enforcement of copyright? I think so, but two key areas are at risk: 

Fair Use and the Archives

It may have escaped your notice that large chunks of online content disappear each year. 

For years, archivists have recognized and worked to address the problem. Websites going dark is an annoyance for most of us, but in some cases it can have real implications for understanding recent history, even as officially documented. For example, back in 2013, a report revealed that well over half of the websites linked to in Supreme Court opinions no longer work, jeopardizing our understanding of just how and why the Court decided an issue.

While most websites disappear from benign neglect, others are intentionally taken down to remove records from public scrutiny.  Exhibit A may be the 8,000+ government web pages recently removed by the new presidential administration, but there are many other examples (even whole “reputation management” firms devoted to scrubbing the web of information that may cast one in an unfavorable light). 

The most well-known bulwark against disappearing internet content is the Internet Archive, which has, at this point, archived over 900 billion web pages. Over and over again, we’ve seen its Wayback Machine used to shine a light on history that powerful people would rather have hidden. It’s also why the Wayback Machine has been blocked or threatened at various times in China, Russia, India, and other jurisdictions where free expression protections are weak.

It’s not just the open web that is disappearing. A recent report on the problem of “Vanishing Culture” highlights how this challenge pervades modern cultural works. Everything from ’90s shareware video games to the entirety of the MTV News Archive is at risk. As Jordan Mechner, a contributor to the report, explains, “historical oblivion is the default, not the exception” for the human record. As the report explains, it’s not just disappearing content that poses a problem: libraries and consumers must also grapple with electronic content that can be remotely changed by publishers or others. As just one example among many, in just the last few years we’ve seen surreptitious modifications to ebooks on readers’ devices—some changing important aspects of the plot—for works by authors such as R.L. Stine, Roald Dahl, and Agatha Christie.

The case for preservation as a foundational necessity to combat censorship is straightforward. “There is no political power without power over the archive,” Jacques Derrida reminds us. Without access to a stable, high-fidelity copy of the historical record, there can be no meaningful reflection on what went right or wrong, and no holding to account those in power who may oppose an accurate representation of their past.

What sometimes goes unnoticed is that, without fair use, a large portion of these preservation efforts would be illegal. 

In a world where century-long copyright protection applies automatically to any human expression with even a “modicum of creativity,” virtually everything created in the last century is subject to copyright. This is a problem for digital works because practically any preservation effort involves making copies—often lots of them—to ensure the integrity of the content. Making those copies means that archivists must rely on fair use to preserve these works and make them available in meaningful ways to researchers and others. 

The upshot is that every time the Internet Archive archives a website, it’s an act of faith in fair use. Is that faith well-founded? 

I think so. But the answer is complicated. 

For preservation efforts like those of the Internet Archive, fair use is a foundation, but not an unshakable one. Two recent cases highlight the risk, one against its book lending program and the other objecting to its “Great 78” record project. Both take issue with how the Archive provides access to preserved digital copies in its collections. While not directly attacking the preservation of those materials, the suits effectively jeopardize their effective use. As archivists have long lamented, “preservation without access is pointless.” 

Beyond direct challenges to fair use, archives are threatened by spurious takedown demands, content removal requests, and legal challenges. Organizations like the Internet Archive have fought back, but many institutions simply cannot afford to, leading to a chilling effect where preservation efforts are scaled back or abandoned altogether.

Compounding this uncertainty is the growing use of technological protection measures (TPMs) and digital rights management (DRM) systems that restrict access to digital works. Under the Digital Millennium Copyright Act (DMCA), circumventing these restrictions is illegal—even for lawful purposes like preservation or research. This creates a paradox where a researcher or archivist may have a clear fair use justification for accessing and copying a work, but breaking an encryption lock to do so could expose them to legal liability.

Additionally, the rise of contractual overrides—such as restrictive licensing agreements on digital platforms—threatens to sideline fair use entirely. Many modern works, including e-books, streaming media, and even scholarly databases, are governed by terms of service that explicitly prohibit copying or analysis, even for noncommercial research. These contracts often supersede fair use rights, leaving archivists and researchers with no legal recourse.

Still, there are reasons for optimism. Courts have generally ruled favorably when fair use is invoked for transformative purposes, such as digitization for research, searchability, and access for disabled users. Landmark decisions, like those in Authors Guild v. Google and Authors Guild v. HathiTrust, upheld fair use in the context of large-scale digital libraries and text-mining projects. These cases suggest that courts recognize the essential role fair use plays in making knowledge accessible, particularly in an era of vast digital information.

Fair Use and the Freedom to Extract 

One of copyright’s other traditional First Amendment protections is that the copyright monopoly does not extend to facts or ideas. Fair use is critical in giving life to this protection by ensuring that facts and ideas remain accessible, providing a “freedom to extract” (a term I borrow from law professor Molly Van Houweling’s recent scholarship) even when they are embedded within copyrighted works. 

Copyright does not and cannot grant exclusive control over facts, but in practice, extracting those facts often requires using the work in ways that implicate the rightsholder’s copyright. Whether journalists referencing past reporting, historians identifying truths in archival materials, or researchers analyzing a vast corpus of written works, fair use provides the necessary legal space to operate without running afoul of copyright protections for rightsholders. 

The need is more urgent than ever given the sheer scale of the modern historical record. In many cases, relying on individual researchers to sift through the record and extract important facts is impractical, if not impossible. Automated tools and processes, including AI and text and data mining tools, are now indispensable for processing, retrieving, and analyzing facts from massive amounts of text, images, and audio. From uncovering patterns in historical archives to verifying political statements against prior records, these tools serve as extensions of human analysis, making the extraction of factual information possible at an unprecedented scale. However, these technologies depend on fair use. If every instance of text or data mining required explicit permission from rights holders—who may have economic or political incentives to deny access—the ability to conduct meaningful research and discovery would be crippled.

For example, consider a researcher studying the roots of the opioid crisis, trying to mine the 4 million documents in the Opioid Industry Documents Archive—many of them legal materials, internal company communications, and regulatory filings. These documents, made public through litigation, provide critical insights into how pharmaceutical companies marketed opioids, downplayed their risks, and shaped public policy. But making sense of such a massive trove of records is impossible without computational tools that can analyze trends, track key players, and surface hidden patterns. 
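For readers curious what such a computational pass might look like in practice, here is a minimal, purely hypothetical sketch in Python; the folder name and search terms are invented for illustration, and real text and data mining pipelines are far more sophisticated:

```python
import re
from collections import Counter
from pathlib import Path

# Hypothetical folder of plain-text documents exported from an archive.
# The directory name and the terms of interest are illustrative assumptions.
CORPUS_DIR = Path("opioid_documents_txt")
TERMS = ["marketing", "addiction", "risk", "prescriber", "sales rep"]

term_counts = Counter()
docs_scanned = 0

for doc in CORPUS_DIR.glob("*.txt"):
    text = doc.read_text(errors="ignore").lower()
    docs_scanned += 1
    for term in TERMS:
        # Tally every occurrence of the term in this document.
        term_counts[term] += len(re.findall(re.escape(term), text))

print(f"Scanned {docs_scanned} documents")
for term, count in term_counts.most_common():
    print(f"{term}: {count}")
```

Even a toy pass like this makes complete in-memory copies of every document it reads and searches, which is exactly why, as noted above, these techniques depend on fair use.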

Without fair use, researchers could face legal roadblocks to applying text and data mining techniques to extract the facts buried within these documents. If copyright law were used to restrict or complicate access to these records, it would not only hamper academic research but also shield corporate and governmental actors from exposure and accountability.

Conclusion

As information continues to proliferate across digital media, fair use remains one of the few safeguards ensuring that historical records and cultural artifacts do not become permanently locked away behind copyright barriers. It allows the past to be examined, challenged, and understood. If we allow excessive copyright restrictions to limit the ability to extract and analyze our shared past and culture, we risk not only stifling innovation but also eroding our collective ability to engage with history and truth.

Fair Use Week

This is my contribution to Fair Use Week. To read the other excellent posts from this week, check out Kyle Courtney’s Harvard Library Fair Use Week blog here.

Independent Publisher’s Lawsuit Against Audible Fails, Highlighting the Challenge of Receiving Fair Streaming Compensation

Posted February 21, 2025
Adobe Stock Image

Last November, we covered a case where a group of authors complained about McGraw Hill’s interpretation of publishing agreements related to compensation for ebooks. As subscription-based models become increasingly dominant in the publishing industry, authors must be vigilant about how their contracts define compensation. Platforms like Kindle Unlimited, Audible, and academic ebook services are reshaping traditional royalty structures. This is not just a concern for trade books; academic publishing is also shifting towards subscription-based access, as evidenced by ProQuest’s recent announcement that it is ending print sales and moving toward a “Netflix for books” model. 

Here we see yet another case where ambiguous contractual terms resulted in financial loss for an author— 

On Feb. 19th, the Second Circuit affirmed the lower court’s dismissal of Teri Woods Publishing’s copyright infringement and breach-of-contract claims against Audible and other audiobook distributors in Teri Woods Publ’g, LLC v. Amazon.com, Inc. The Plaintiff initially granted the rights that are the subject of this dispute to Urban Audio in a licensing agreement. Thereafter, Urban Audio granted the rights under that agreement to Blackstone, which then sublicensed its rights to Amazon and Audible.

The Plaintiff in this case, Teri Woods Publishing, is an independent publisher founded by urban fiction author Teri Woods. The Plaintiff argued—and the courts ultimately disagreed—that the licensing agreement did not unambiguously permit Defendants to distribute Teri Woods’ audiobooks through the Defendants’ online audiobook streaming subscription services. More specifically, on the question of compensation for online streaming, Plaintiff and Defendants disagreed on whether (1) online streaming counted as “internet downloads” or alternatively “other contrivances, appliances, mediums and means,” and (2) the licensing terms dealing with royalties prohibited subscription streaming.

The licensing terms in question are contained in the licensing agreement Plaintiff entered into in 2018, granting Urban Audio the

“exclusive unabridged audio publishing rights, to manufacture, market, sell and distribute copies throughout the World, and in all markets, copies of unabridged readings of the [Licensed Works] on cassette, CD, MP3-CD, pre-loaded devices, as Internet downloads and on, and in, other contrivances, appliances, mediums and means (now known and hereafter developed) which are capable of emitting sounds derived for the recording of audiobooks.”

In exchange for this assignment of rights, Urban Audio—as the Licensee—must pay Plaintiff:

“(a) Ten percent (10%) of Licensee’s net receipts from catalog, wholesale and other retail sales and rentals of the audio recordings of said literary work; 

(b) Twenty Five percent (25%) of net receipts on all internet downloads of said literary work. 

(c) Twenty Five percent (25%) of net receipts on Playaway format [under certain conditions].”

In case you are not familiar with the services Amazon’s Audible provides: members of Audible generally pay a monthly fee to stream or download audiobooks digitally, rather than paying separately for each audiobook they stream or download. This method of distribution, the Plaintiff argued, led to drastically lower compensation than expected, as the audiobooks were made available to subscribers at a fraction of their retail price.
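To see why this matters in dollar terms, here is a small, purely hypothetical calculation; the retail price and the per-stream revenue allocation below are invented for illustration and do not come from the case or from Audible’s actual accounting:

```python
# Hypothetical figures illustrating why a fixed percentage of "net receipts"
# can yield far less under a subscription pool than under per-unit retail sales.
royalty_rate = 0.25        # 25% of net receipts, per the license terms quoted above

retail_price = 20.00       # assumed retail price of one audiobook download
retail_royalty = royalty_rate * retail_price        # $5.00 per copy sold

pooled_per_stream = 1.50   # assumed slice of subscription revenue allocated to one stream
stream_royalty = royalty_rate * pooled_per_stream   # about $0.38 per stream

print(f"Royalty per retail download:     ${retail_royalty:.2f}")
print(f"Royalty per subscription stream: ${stream_royalty:.2f}")
```

Under numbers like these, the same 25% rate pays the publisher more than ten times as much for a retail download as for a subscription stream, which illustrates the kind of gap the Plaintiff complained of.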

Audible has a history of relying on ambiguous contractual terms to reduce author payouts. The “Audiblegate” controversy, for instance, exposed how Audible’s return policy allowed listeners to return audiobooks after extensive use, deducting royalties from authors without transparency. That practice came under legal scrutiny in Golden Unicorn Enters. v. Audible Inc., where authors alleged that Audible deliberately structured its payment model to significantly reduce their earnings (unfortunately, the court in that case also largely sided with Audible).

Despite Audible’s track record, the courts were unsympathetic to Plaintiff’s grievance in the Teri Woods case, and held that the plain meaning of the phrase “other contrivances, appliances, mediums and means (now known and hereafter developed)” in the licensing agreement included digital streams and other future technological developments in distribution services. The courts also observed that the underlying licensing agreement did not provide for the payment of royalties on a per-unit basis; Plaintiff was only entitled to a percentage of “net receipts” received by Urban Audio for sales, rentals, and internet downloads. 

The ambiguities over what constitutes an “internet download” and whether payment was due on a per-unit basis were ultimately resolved in Audible’s favor. This case serves to remind us again of the importance of adopting clear contractual language.

Licensing agreements should be drafted with clear and precise language regarding revenue models and payment structures. Subscription-based compensation models, like those employed by Audible, fundamentally differ from traditional sales models, often leading to lower per-unit earnings for authors. By failing to anticipate and address these nuances, authors risk losing control over how their works are monetized. Ensuring that rights, distribution methods, and payment structures are clearly defined can prevent disputes and financial losses down the line.

Many authors assume that digital rights are similar to traditional print rights, but as this case demonstrates, vague phrasing can allow distributors to exploit gaps in understanding. If authors do not explicitly outline limitations on emerging distribution technologies, they may find themselves receiving significantly less compensation than they anticipated when signing the agreement. For example, authors should ensure their contracts specify whether subscription-based revenue falls under traditional royalty calculations, and whether distribution via new technological formats requires renegotiation.

Beyond the issues with ambiguous contractual terms, this case also highlights the broader issue of how digital platforms can negatively impact readers and authors alike. Readers no longer own the books they purchase; instead, they receive licensed access that can be revoked or restricted at any time. This shift undermines the traditional relationship between books and their readers. Authors are equally threatened by these digital intermediaries, who have the power to dictate distribution methods and unilaterally alter revenue models; an author’s right to fair compensation is too often sacrificed along the way. The situation is especially dire with audiobooks, where Audible dominates the market.

Copyrightability and Artificial Intelligence: A new report from the U.S. Copyright Office

Posted February 20, 2025
Uncopyrightable image generated using Google Gemini, illustrating a group of photographers excited to learn that their nearly identical photos of the public domain Washington Monument are all copyrightable. (“The Office receives ten applications, one from each member of a local photography club. All of the photographs depict the Washington Monument and all of them were taken on the same afternoon. Although some of the photographs are remarkably similar in perspective, the registration specialist will register all of the claims.”) (Compendium of Copyright Office Practices, Section 909.1)

Recently, the United States Copyright Office published its Report on Copyright and Artificial Intelligence, Part 2: Copyrightability,  the second report in a three-part series. The Office’s reports and additional related resources can be found on the USCO’s Copyright and Artificial Intelligence webpage.

This latest report was the product of longstanding Copyright Office practices, the USCO’s evolving work and registration guidance in this area, rapid technological developments related to Artificial Intelligence, and over 10,000 reply comments to the Office’s August 2023 Notice of Inquiry. Among those commenters, the Authors Alliance submitted both an initial comment and a reply comment in late 2023.  

In our comments, we urged the Copyright Office to not pursue revisions to the Copyright Act at this time and instead work towards providing greater clarity for authors of AI-generated and AI-assisted works (“Instead of proposing revisions to the Copyright Act to enshrine the human authorship requirement in law or clarify the human authorship requirement in the context of AI-generated works, the Office should continue to promulgate guidance for would-be registrants.”) We also noted that, as technology evolves in the coming years, our ideas about the copyrightability of AI-generated and AI-assisted works will likely shift as well.    

We are happy to see that the USCO heard our voice, and that of many others, in concluding that no legislative change is needed at this time (“The vast majority of commenters agreed that existing law is adequate in this area…”) (Report, page ii). We likewise continue to be aligned with the USCO’s view that works wholly generated by Artificial Intelligence are not copyrightable. In reading through the entirety of the report, it is clear that the Office appreciates that some elements of AI-assisted works will be copyrightable, but believes that the level of human control over the AI output will be central to the copyrightability inquiry (“Whether human contributions to AI-generated outputs are sufficient to constitute authorship must be analyzed on a case-by-case basis.”) (“Based on the functioning of current generally available technology, prompts do not alone provide sufficient control.”) (Report, page iii).

The Office’s report does provide some useful clarity. At the same time, it takes some positions that fail to adequately address the complexity of AI-generated works. Below, we will unpack a number of elements of the report that are noteworthy.  

Modifying or arranging AI-generated content

The report makes it clear that the USCO views selection and arrangement of AI-generated work as a viable path towards copyrightability of works where AI was an element in the creation of the work. In 2023, when reviewing the graphic novel Zarya of the Dawn, “the Office concluded that a graphic novel comprised of human-authored text combined with images generated by the AI service Midjourney constituted a copyrightable work, but that the individual images themselves could not be protected by copyright.” (Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, page 2) Thus, authors who incorporate AI-generated work into a larger work will often be successful in registering the whole work, but will typically need to disclaim any AI-generated elements.  

Alternatively, an author who modifies an AI-generated work outside of the AI environment (e.g., an artist who uses Photoshop to make substantial modifications to an AI-generated image), will usually have a path to copyright registration with the USCO. 

The USCO takes the position that most AI-assisted works are not copyrightable

Unlike an AI-generated image later modified manually by a human (which may be copyrightable), a work modified solely through prompts within the AI environment is one the USCO is clearly reluctant to view as copyrightable.

Here, the Office’s position regarding Jason Allen’s attempts to register copyright in the two-dimensional artwork Théâtre D’opéra Spatial is illuminating. In developing the image using Midjourney, Allen claimed to have used over 600 text prompts to both generate and alter the image, and further used Photoshop to “beautify and adjust various cosmetic details/flaws/artifacts, etc.,” a process which he viewed as copyrightable authorship. In denying his claim, the Office responded that “when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the ‘traditional elements of authorship’ are determined and executed by the technology—not the human user.” (88 FR 16190 – Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, page 16192).

The USCO dismisses the idea that the process of revising prompts to modify AI output is sufficient to claim copyright in the resulting work. (“Inputting a revised prompt does not appear to be materially different in operation from inputting a single prompt. By revising and submitting prompts multiple times, the user is “re-rolling” the dice, causing the system to generate more outputs from which to select, but not altering the degree of control over the process. No matter how many times a prompt is revised and resubmitted, the final output reflects the user’s acceptance of the AI system’s interpretation, rather than authorship of the expression it contains.”) (Report, page 20) (emphasis added).

Within the report, there is no direct examination of the Théâtre D’opéra Spatial copyright claim and lessons to be learned from it. This is likely due to ongoing litigation between Allen and the USCO. While the USCO has significant practical influence on what materials are protectable under copyright, ultimately the decision falls to the courts. So, this suit and others like it will be important to watch.  Still, the lack of a deeper dive into such a real-world example is unfortunate—such examples offer fertile territory for exploring the boundary lines between copyrightable AI-assisted works and those that will remain uncopyrightable.  

The report offers a sense of possibility with regard to copyrightable AI-assisted works

Towards the end of its report, the USCO briefly explores AI platforms that allow for greater control of the final work. Interestingly, it points to specific features of Midjourney that allow users to select and modify specific regions of an image. The Office views this as meaningfully different from modifying an AI-generated work through prompts alone, but takes no position as to whether that level of control will result in copyrightable works (“Whether such modifications rise to the minimum standard of originality required under Feist will depend on a case-by-case determination. In those cases where they do, the output should be copyrightable.”) (Report, page 27).

Unanswered Questions

Despite the complexity of these issues, the Office has been able to draw some bright lines (e.g., see this webinar on Registration Guidance for Works Containing AI-generated Content). 

Yet, the Office also acknowledges that there are remaining unanswered questions (“So I know that everyone in their particular area of creativity is looking for, you know, more examples and brighter lines. And I think at this point in time, we’re going to be learning as everyone else is learning…we will be providing more guidance as we learn more.”) (Webinar Transcript, Robert Kasunic, page 10). This recognition that the USCO, like everyone else, is still learning is refreshing and welcome, given that it’s fairly easy to see that there are murky waters all around. AI-generated works are already frequently a complex hybrid of AI expression and human expression.

What are some of these questions? 

  1. The technology is still developing and it seems likely that the legal complexity will become even more pronounced as sophisticated generative AI evolves to respond to fine-grained feedback from users, while also offering expression and suggestions that many users will ultimately adopt. Navigating this complexity will be challenging and will require answering a fundamental question: what is the threshold level of human control over AI-generated expression that is necessary as a prerequisite for copyright protection?
  2. Similarly, what standards might the Copyright Office or the courts develop to prove sufficient human authorship when it is intermingled with AI-generated content? The copyright registration process currently requires very little information and no documentation related to this question. For now, creators don’t have clear guidance on what types of documentation will be most effective if a future dispute arises.
  3. To the extent that protection does exist in human-guided but AI-produced content, how will or should the courts determine which elements are uncopyrightable, AI-generated elements in what will appear to users as a single unified work? Separating human expression that is enmeshed and embedded within uncopyrightable AI expression will require some framework for distinguishing the two in cases of infringement. Although the courts have already developed methods that may shape this (abstraction, filtration, and comparison, for example), it remains far from clear whether such tests will perform adequately for AI-produced content.

We will be watching developments in this space closely and will continue to advocate for reasonable and flexible approaches to copyrightability that align with the practical realities of authorship in an emerging technological landscape.  

Thomson Reuters v. Ross: The First AI Fair Use Ruling Fails to Persuade

Posted February 13, 2025
A confused judge, generated by Gemini AI

Facts of the Case

On February 11, Third Circuit Judge Stephanos Bibas (sitting by designation in the U.S. District Court for the District of Delaware) issued a new summary judgment ruling in Thomson Reuters v. ROSS Intelligence. He reversed course from his own 2023 decision, which had held that a jury must decide the fair use question. The decision was one of the first to address fair use in the context of AI, though the facts of this case differ significantly from those of the many other pending AI copyright suits.

This ruling focuses on copyright infringement claims brought by Thomson Reuters (TR), the owner of Westlaw, a major legal research platform, against ROSS Intelligence. TR alleged that ROSS improperly used Westlaw’s headnotes and the Key Number System to train its AI system to better match legal questions with relevant case law. 

Westlaw’s headnotes summarize legal principles extracted from judicial opinions. (Note: Judicial opinions are not copyrightable in the US.) The Key Number System is a numerical taxonomy categorizing legal topics and cases. Clicking on a headnote takes users to the corresponding passage in the judicial text. Clicking on the key number associated with a headnote takes users to a list of cases that make the same legal point. 

Importantly, ROSS did not directly ingest the headnotes and the Key Number System to train its model. Instead, ROSS hired LegalEase, a company that provides legal research and writing services, to create training data based on the headnotes and the Key Number System. LegalEase created Bulk Memos—a collection of legal questions paired with four to six possible answers. LegalEase instructed lawyers to use Westlaw headnotes as a reference to formulate the questions in Bulk Memos. LegalEase instructed the lawyers not to copy the headnotes directly. 

ROSS attempted to license the necessary content directly from TR, but TR refused to grant a license because it thought the AI tool contemplated by ROSS would compete with Westlaw.

The financial burden of defending this lawsuit has caused ROSS to shut down its operations. ROSS countered TR’s copyright infringement claims with antitrust counterclaims, but those claims were dismissed by the same judge.

The New Ruling

The court found that ROSS copied 2,243 headnotes from Westlaw. The court ruled that these headnotes and the Key Number System met the low legal threshold for originality and were copyrightable. The court rejected ROSS’s merger and scenes à faire defenses because, according to the court, the headnotes and the Key Number System were not dictated by necessity. The court also rejected ROSS’s fair use defense on the grounds that the 1st and 4th factors weighed in favor of TR. At this point, the only remaining issue for trial is whether some headnotes’ copyrights had expired or were untimely registered.

The new ruling has drawn mixed reactions—some saying it undermines potential fair use defenses in other AI cases, while others dismiss its significance since its facts are unique. In our view, the opinion is poorly reasoned and disregards well-established case law. Future AI cases must demonstrate why the ROSS Court’s approach is unpersuasive. Here are three key flaws we see in the ruling.   

Problems with the Opinion

  1. Near-Verbatim Summaries are “Original”?

“A block of raw marble, like a judicial opinion, is not copyrightable. Yet a sculptor creates a sculpture by choosing what to cut away and what to leave in place. … A headnote is a short, key point of law chiseled out of a lengthy judicial opinion.” 

— the ROSS court

(Image: an example of a headnote alongside the uncopyrightable judicial text it was based on.)

The court claims that the Westlaw headnotes are original both individually and as a compilation, and the Key Number System is original and protected as a compilation. 

“Original” has a special meaning in US copyright law: it means that a work has a modicum of human creativity that our society would want to protect and encourage. Based on the evidence that survived redaction, it is nearly impossible to find creativity in the individual headnotes. The headnotes consist of verbatim copying of uncopyrightable judicial texts, along with some basic paraphrasing of facts.

As we know, facts are not copyrightable, but expressions of facts often are. One important safeguard for protecting our freedom to reference facts is the merger doctrine. US law has long recognized that when there are only limited ways to express a fact or an idea, those expressions are not considered “original.” The expressions “merge” with the underlying unprotectable fact, and become unprotectable themselves. 

Judge Bibas gets merger wrong—he claims merger does not apply here because “there are many ways to express points of law from judicial opinions.” This view misunderstands the merger doctrine. It is the nature of human language to be capable of conveying the same thing in many different ways, as long as you are willing to do some verbal acrobatics. But when there are only a limited number of reasonable, natural ways to express a fact or idea—especially when textual precision and terms of art are used to convey complex ideas—merger applies. 

There are many good reasons for this to be the law. For one, this is how we avoid giving copyright protection to concise expression of ideas. Fundamentally, we do not need to use copyright to incentivize the simple restatement of facts. As the Constitution intended, copyright law is designed to encourage creativity, not to grant exclusive rights to basic expressions of facts. We want people to state facts accurately and concisely. If we allowed the first person to describe a judicial text in a natural, succinct way to claim exclusive rights over that expression, it would hinder, rather than facilitate, meaningful discussion of said text, and stifle blog posts like this one. 

As to the selection and arrangement of the Key Number System, the court claims that originality exists here, too, because “there are many possible, logical ways to organize legal topics by level of granularity,” and TR exercised some judgment in choosing the particular “level” for its Key Number System. However, cases are tagged with Key Numbers by an automated computer system, and the topics closely mirror what law schools teach their first-year students.

The court does not say much about why the compilation of the headnotes should receive separate copyright protection, other than that it qualifies as an original “factual compilation.” This claim is dubious because the compilation is of uncopyrightable materials, as discussed, and the selection is driven by the necessity to represent facts and law, not by creativity. Even if the compilation of headnotes is indeed copyrightable, using only those portions of it that are uncopyrightable is decidedly not an infringement, because the US does not recognize sui generis database rights.

  2. Can’t Claim Fair Use When Nobody Saw a Copy?

 “[The intermediate-copying cases] are all about copying computer code. This case is not.” 

— the ROSS court conveniently ignoring Bellsouth Advertising & Publishing Corp. v. Donnelley Information Publishing, Inc., 933 F.2d 952 (11th Cir. 1991), and Sundeman v. Seajay Society, Inc., 142 F.3d 194 (4th Cir. 1998).

In deciding whether ROSS’s use of Westlaw’s headnotes and the Key Number System is transformative under the 1st factor, the court took a moment to consider whether the available intermediate copying case law is in favor of ROSS, and quickly decided against it. 

Even though no consumer ever saw the headnotes or the Key Number System in the AI products offered by ROSS, the court claims that copying them constitutes copyright infringement because there existed an intermediate copy containing copyright-restricted materials authored by Westlaw. And, according to the court, intermediate copying can only weigh in favor of fair use when the copied material is computer code.

Before turning to the case law the court overlooks here, we wonder whether Judge Bibas is in fact unpersuaded by his own argument: under the 3rd fair use factor, he admits that only the content made accessible to the public should be considered when deciding how much was taken from a copyrighted work relative to the work as a whole. That concession is contrary to what he argues under the 1st factor, namely that we must examine non-public intermediate copies.

Intermediate copying is the process of producing a preliminary, non-public work as an interim step in the creation of a new public-facing work. It is well established in US jurisprudence that any type of copying, whether private or public, satisfies a prima facie copyright infringement claim, but the fact that a work was never shared publicly, nor intended to be shared publicly, strongly favors fair use. For example, in Bellsouth Advertising & Publishing Corp. v. Donnelley Information Publishing, Inc., the 11th Circuit decided that directly copying a competitor’s yellow pages business directory in order to produce a competing yellow pages was fair use when the resulting publicly accessible yellow pages the defendant created did not directly incorporate the plaintiff’s work. Similarly, in Sundeman v. Seajay Society, Inc., the 4th Circuit concluded that it was fair use when the Seajay Society made an intermediate copy of the plaintiff’s entire unpublished manuscript for a scholar to study and write about. The scholar wrote several articles about the manuscript, mostly summarizing important facts and ideas while also using short quotations.

There are many good reasons for allowing intermediate copying. Clearly, we do not want ALL unlicensed copies to be subject to copyright infringement lawsuits, particularly when intermediate copies are made in order to extract unprotectable facts or ideas. More generally, intermediate copying is important to protect because it helps authors and artists create new copyrighted works (e.g., sketching a famous painting to learn a new style, translating a passage to practice your language skills, copying the photo of a politician to create a parody print t-shirt). 

  3. Suddenly, We Have an AI Training Market?

“[I]t does not matter whether Thomson Reuters has used [the headnotes and the Key Number System] to train its own legal search tools; the effect on a potential market for AI training data is enough.”

 — the ROSS court

The 4th fair use factor is very much susceptible to circular reasoning: if a user is making a derivative use of my work, surely that proves a market already exists or will likely develop for that derivative use; and if a market exists for such a derivative use, then, as the copyright holder, I should have absolute control over that market.

The ROSS court runs full tilt into this circular trap. In the eyes of the court, ROSS, by virtue of using Westlaw’s data in the context of AI training, has created a legitimate AI training data market that should be rightfully controlled by TR.

But our case law suggests that the 4th factor’s “market substitution” analysis considers only markets that are traditional, reasonable, or likely to be developed. As we have already pointed out in a previous blog post, copyright holders must offer concrete evidence of the existence, or the likelihood of development, of a licensing market before they can argue that a secondary use serves as a “market substitute.” If we allowed a copyright holder’s protected market to include everything they are willing to accept licensing fees for, it would all but wipe out fair use in the service of stifling competition.

Conclusion

The impact of this case is currently limited, both because it is a district court ruling and because it concerns non-generative AI. However, it is important to remain vigilant, as the reasoning put forth by the ROSS court could influence other judges, policymakers, and even the broader public, if left unchallenged.

This ruling combines several problematic arguments that, if accepted more widely, could have significant consequences. First, it blurs the line between fact and expression, suggesting that factual information can become copyrightable simply by being written down by someone in a minimally creative way. Second, it expands copyright enforcement to intermediate copies, meaning that even temporary, non-public use of copyrighted material could be subject to infringement claims. Third, it conjures up a new market for AI training data, regardless of whether such a licensing market is legitimate or even likely to exist.

If these arguments gain traction, they could further entrench the dominance of a few large AI companies. Only major players like Microsoft and Meta will be able to afford AI training licenses, consolidating control over the industry. AI training licensing terms will be determined solely between big AI companies and big content aggregators, without representation of individual authors or the public interest. The large content aggregators will get to dictate the terms under which creators must surrender rights to their works for AI training, and the AI companies will dictate how their AI models can be used by the general public.

Without meaningful pushback and policy intervention, smaller organizations and individual creators cannot participate fairly. Let’s not rewrite our copyright laws to entrench this power imbalance even further.

Artificial Intelligence, Authorship, and the Public Interest

Posted January 9, 2025
Photo by Robert Anasch on Unsplash

Today, we’re pleased to announce a new project generously supported by the John S. and James L. Knight Foundation. The project, “Artificial Intelligence, Authorship, and the Public Interest,” aims to identify, clarify, and offer answers to some of the most challenging copyright questions posed by artificial intelligence (AI) and explain how this new technology can best advance knowledge and serve the public interest.

Artificial intelligence has dominated public conversation about the future of authorship and creativity for several years. Questions abound about how this technology will affect creators’ incentives and influence readership, and about what it might mean for future research and learning.

At the heart of these questions is copyright law. More than two dozen class-action copyright lawsuits have been filed since November 2022 against companies such as Microsoft, Google, OpenAI, and Meta. Additionally, congressional leadership, state legislatures, and regulatory agencies have held dozens of hearings to reconcile existing intellectual property law with artificial intelligence. As one of the primary legal mechanisms for promoting the “progress of science and the useful arts,” copyright law plays a critical role in the creation, production, and dissemination of information.

We are convinced that how policymakers shape copyright law in response to AI will have a lasting impact on whether and how the law supports democratic values and serves the common good. That is why Authors Alliance has already devoted considerable effort to these issues, and this project will allow us to expand those efforts at this critical moment. 

AI Legal Fellow
As part of the project, we’re pleased to add an AI Legal Fellow to our team. The position requires a law degree and demonstrated interest and experience with artificial intelligence, intellectual property, and legal technology issues. We’re particularly interested in someone with a demonstrated interest in how copyright law can serve the public interest. This role will require significant research and writing. Pay is $90,000/yr, and it is a two-year term position. Read more about the position here. We’ll begin reviewing applications immediately and conduct interviews on a rolling basis until the position is filled.

As we get going, we’ll have much more to say about this project. We will have some funds available to support research subgrants, organize several workshops and symposia, and offer numerous opportunities for public engagement. 

About the John S. and James L. Knight Foundation
We are social investors who support democracy by funding free expression and journalism, arts and culture in community, research in areas of media and democracy, and in the success of American cities and towns where the Knight brothers once had newspapers. Learn more at kf.org and follow @knightfdn on social media.