Category Archives: Law and Policy

Fair Use Week 2023: Looking Back at Google Books Eight Years Later

Posted February 24, 2023
Photo by Patrick Tomasso on Unsplash

This post is authored by Authors Alliance Senior Staff Attorney Rachel Brooke.

More recent members and readers may not be aware that Authors Alliance was founded in the wake of Authors Guild v. Google, a class action fair use case in the Second Circuit that was litigated for nearly a decade and finally resolved in favor of Google in 2015. The case concerned the Google Books project—an initiative in which Google partnered with university libraries to scan books in their collections. These scans would ultimately be made available to the public as a full-text searchable database, with short “snippets” displayed alongside the search results. Users could not, however, view or read the scanned books in their entirety. The Authors Guild, along with several individual authors, filed a lawsuit against Google alleging that scanning the books and displaying these snippets constituted copyright infringement.

In addition to Authors Guild representing its members in the litigation, its associated plaintiffs brought the case as a class action, claiming to bring the case on behalf of a broad group of authors:  “[a]ll persons residing in the United States who hold a United States copyright interest in one or more Books reproduced by Google as part of its Library Project” who were either authors or the authors’ heirs.

But many of these authors did not agree with the Authors Guild’s stance in the case, and felt that the Google Books project served their interests in sharing knowledge, seeing their creations preserved, and reaching readers interested in their work. A group of authors and scholars came together to share their views with the district court, many of whom would soon become founding members of Authors Alliance. Many of those same authors signed on to amicus briefs before both the district court and the Second Circuit explaining why they opposed the litigation and supported Google’s fair use defense. Then, in 2014, Authors Alliance submitted its first amicus brief to the Second Circuit, supporting Google’s ultimately successful fair use defense. The plaintiffs appealed the Second Circuit’s ruling, asking the Supreme Court to weigh in, but the Court declined to hear the case, leaving the Second Circuit’s ruling intact.

Nearly a decade later, the effects of Google Books can still be seen in fair use decisions and copyright policy developments involving the challenges of adapting copyright to the digital world. In today’s post, I’ll reflect on how Google Books can be contextualized within today’s fair use landscape and share my thoughts on what the case can tell us about copyright in the digital world. 

Google Books and Transformativeness

A major question in Authors Guild v. Google was whether Google’s use of the copyrighted works was “transformative,” a key component of the fair use inquiry. When a use is found to be transformative, this in practice weighs heavily in favor of a finding of fair use. In the case, the court found that Google’s scanning, as well as the search and snippet display functions, were transformative because the service “augments public knowledge by making available information about [the] books without providing the public with a substantial substitute for . . . the original works.” This was because Google Books provided information about the books—such as the author and publisher information—without creating substitutes of the original works. In other words, readers could learn about the books they searched through, but could not read the books in full—to do this, those readers would have to purchase or borrow copies through the normal channels. 

Since the doctrine of transformativeness was established in Campbell v. Acuff-Rose Music, the landmark 1994 Supreme Court case, there have been myriad questions about the precise contours of what it means for a use to be transformative. Campbell established that a use is transformative when it endows the secondary work with a “new meaning or message,” but it can be difficult to apply this test in practice, particularly in the context of new or nascent technologies. Google Books tells us that scanning works in order to create a full-text searchable database with limited snippet displays is a transformative use, because its purpose is new and different from that of the works themselves. Furthermore, it reinforces the notion that a use is particularly likely to be considered transformative when it serves the underlying purpose of copyright law: incentivizing new creation for the benefit of the public and “enriching public knowledge.” By highlighting that Google contributed to public knowledge about books through its scanning activities and the Google Books search function, the court helped bring fair use for scholarship and research—two key prototypical uses recognized in the 1976 Copyright Act—into the digital age, setting an important precedent for later cases.

Google Books and Derivative Works

One of the plaintiffs’ arguments in Google Books was that Google’s full-text searchable database constituted a derivative work. One of a copyright holder’s exclusive rights is the right to prepare derivative works—such as adaptations, abridgements, or translations of the original work—and the plaintiffs alleged that this right had been infringed. The court disagreed, finding that Google’s use had a transformative purpose, whereas derivative works tend to involve a transformation in form, such as the adaptation of a novel into a movie or an audiobook. Furthermore, the court explained that derivative works are “those that re-present the protected aspects of the original work, i.e., its expressive content, converted into an altered form[.]” In contrast, the Google Books project provided information about the books and offered a limited “snippet” view, but did not re-present the expressive content: the full text of the books themselves.

The distinction the court drew between transformative fair uses and derivative works in Google Books is an important one, as it can often be a close question whether a work involves a transformative purpose or merely represents the same work in a new form, without enough added to tip the scales towards fair use. And it is a question that continues to arise in fair use cases today: just last year, the Supreme Court agreed to hear Warhol Foundation v. Goldsmith, a case about whether Andy Warhol’s creation of a series of screenprints of the late musical artist Prince, which drew from a photograph taken by photographer Lynn Goldsmith, qualified as a fair use. We’ve covered this case extensively on our blog over the past few years, and submitted an amicus brief in the case. Our brief argues (among other things) that Warhol’s screenprints involve much more than a transformation in form: they are stylistically and visually distinct from Goldsmith’s photograph, and endow the photograph with a new meaning or message, making the use highly transformative.

As in Google Books, the parties and amici in Goldsmith grapple with the line between transformative uses and the creation of derivative works, an often complicated and fact-sensitive determination. In this context, Google Books serves as a reminder that fair use is not a one-size-fits-all determination. Yet it also provides support for arguments advanced by Authors Alliance and others that simply because a transformation in form exists—in the Google Books case, the transformation from a print book to a scanned copy, and in Goldsmith, the transformation of a black and white photo to a series of colorful screenprints—does not mean that a secondary use cannot be a fair one. Warhol’s use did not merely “re-present the protected aspects of the original work[‘s] . . . expressive content,” but was transformative in the different “purpose, character, expression, meaning, and message” it conveyed.

Google Books and Controlled Digital Lending

The practice of controlled digital lending (“CDL”)—and the arguments in favor of it constituting a fair use—can be traced back in part to the fair use principles established and reinforced in Google Books. As I argue in our amicus brief in Hachette Books v. Internet Archive, a case about—among other things—whether CDL constitutes a fair use, Google Books shows that copying the entirety of a work in the process of making a transformative use of it can be fully consistent with fair use. 

Another important suggestion in the Google Books case, made at the district court level, was that the Google Books search function could actually drive book sales: the search results were accompanied by links to purchase the book, and research suggested that this could enhance sales of those books. This is analogous to the effects of library lending: library readers often purchase books by authors they first discovered at the library, an effect which can apply with equal force when the library patron borrows a CDL scan. Indeed, several other amici in Hachette Books argue that the finding that the Google Books search was a fair use lends substantial support to the argument that CDL is a fair use, based on both the factual similarities between the two initiatives and their shared objective of “enriching public knowledge.”

As in Google Books, CDL also helps authors reach readers who could not otherwise access their books, and achieves this through scanning books on library shelves. And also like Google Books, CDL helps solve the problem of 20th century works “disappearing”: the commercial life of a book tends to be much shorter than the term of copyright, so when books under copyright go out of print, they can disappear into obscurity. Scanning these books to preserve them ensures that the knowledge they advance will not be lost. 

Google Books and Text Data Mining

Text data mining—the process of using automated techniques to quantitatively analyze text and other data—is also widely considered to be a fair use, a determination built in part on the foundation established in Google Books. As was the case in Google Books, the results of text data mining research provide information about the works being studied, and cannot in any way serve as substitutes for the content of the works. In fact, one important aspect of the new exemption to DMCA liability for text data mining, which Authors Alliance successfully petitioned for in 2021, is that researchers are not able to use the works in the text data mining corpus for consumptive purposes. And also like Google Books, researchers are able to view the content in a limited manner to verify their findings, analogous to Google Books’s snippet view. The new TDM exemption was a huge win for Authors Alliance members, and something to celebrate for all scholars engaged in this important research. Importantly, the precedent established by Google Books strongly supported the exemption’s adoption and the Register of Copyrights’ suggestion that text data mining was likely to be a fair use.

Looking Forward: Google Books and Artificial Intelligence

In recent years, scholars and researchers have grappled with the implications of copyright protection on AI-generated content and AI models more generally. The holding in Google Books provides some support for companies’ and researchers’ ability to engage in these activities: one important factor in the case was that Google Books did not harm the market for the books at issue in the case, since the books in the database could not serve as substitutes for the books themselves. Similarly, when copyrighted works are used to train AI, the output cannot serve as a substitute for the copyrighted works, and the market for those works is not harmed, even if—like the plaintiffs in Google Books—the copyright holders might prefer that their works not be used in this way. Google Books establishes that simply because copyrighted works are used as “input” in a given model, this does not mean that the outputs constitute infringement. It is also worth noting that the court found Google’s use to be fair despite the fact that it was a use by a commercial, profit-seeking entity. While a commercial use can sometimes tip the scales against a finding of fair use, this can be overcome by a socially beneficial, transformative purpose. This could arguably apply with equal force to AI models trained on copyrighted works which contribute to our understanding of the world, despite the fact that commercial entities are often the ones deploying these technologies.

Eight years after it was decided, the legacy of Google Books endures in policy debates and copyright lawsuits that capture the public’s attention. Policymakers and judges would be wise to heed the lessons it teaches about the value of advancing public knowledge through digitization and the use of copyrighted works for new and socially beneficial purposes. As we await policy developments regarding text data mining and decisions in Goldsmith and Hachette Books, it is my hope that this legacy will live on, reminding us all of the vast capabilities of information technology to enrich our understanding of the world and advance the progress of knowledge, which, after all, is what copyright law is all about.

Fair Use Week 2023: Resource Roundup

Posted February 21, 2023
Photo by Adi Goldstein on Unsplash

Authors who want to incorporate source materials into their writings with confidence may find themselves faced with more questions than answers. What exactly does fair use mean? What factors do courts consider when evaluating claims of fair use? How does fair use support authors’ research, writing, and publishing goals? Fortunately, help is at hand! This Fair Use/Fair Dealing Week, we’re featuring a selection of resources, briefs, and blog posts to help authors understand and apply fair use.

Fair Use 101

Cover of the Fair Use Guide for Nonfiction Authors

Authors Alliance Guide to Fair Use for Nonfiction Authors: Our guidebook, Fair Use for Nonfiction Authors, covers the basics of fair use, addresses common situations faced by nonfiction authors where fair use may apply, and debunks some common misconceptions about fair use. Download a PDF today.

Authors Alliance Fair Use FAQs: Our Fair Use FAQs cover questions such as:

  • Can I still claim fair use if I am using copyrighted material that is highly creative?
  • What if I want to use copyrighted material for commercial purposes?
  • Does fair use apply to copyrighted material that is unpublished?

Codes of Best Practices in Fair Use: The Center for Media and Social Impact at American University has compiled this collection of Codes of Best Practices in Fair Use for various creative communities, from journalists to librarians to filmmakers.

Fair Use Evaluator Tool: This tool, created by the American Library Association, helps users support and document their assertions of fair use.

Dig Deeper

U.S. Copyright Office Fair Use Index: The U.S. Copyright Office maintains this searchable database of legal opinions and fair use test cases.

Fair Use Amicus Briefs: Authors Alliance has submitted several friend-of-the-court briefs on issues related to fair use over the past year. Check out our brief in Hachette Books v. Internet Archive, where we expand on our longtime defense of Controlled Digital Lending as a fair use; our brief in Warhol Foundation v. Goldsmith, where we advocate for a broad yet sensible conception of “transformativeness”; and our brief in Sicre de Fontbrune v. Wofsy, where we explain why fair use is a crucial aspect of U.S. policy and why it should shield authors from the enforcement of foreign copyright judgments where fair use would have protected the use had it occurred in the U.S.

Fair Use and Text Data Mining: Learn about Authors Alliance’s new project, “Text and Data Mining: Defending Fair Use,” generously supported by the Mellon Foundation, which is intended to support researchers engaging in text and data mining under the recent DMCA exemption.

Fair Use and Public Policy: Learn about why we voiced opposition to the SMART Copyright Act of 2022 and the Journalism Competition and Preservation Act—proposed legislation that, if passed, could erode our fair use rights.

Public Domain Day 2023: Welcoming Works from 1927 to the Public Domain

Posted January 5, 2023
Montage courtesy of the Center for the Public Domain

Literary aficionados and copyright buffs alike have something to celebrate as we welcome 2023: A new batch of literary works published in 1927 entered the public domain on January 1st, when the copyrights in those works expired. The public domain refers to the commons of creative expression that is not protected by copyright. When a work enters the public domain, anyone may do anything they want with that work, including activities that were formerly the “exclusive right” of the copyright holder like copying, sharing, translating, or adapting the work. 

Some of the more recognizable books entering the public domain this year include: 

  • Virginia Woolf’s To the Lighthouse
  • William Faulkner’s Mosquitoes
  • Agatha Christie’s The Big Four
  • Edith Wharton’s Twilight Sleep
  • Herbert Asbury’s The Gangs of New York (the original 1927 publication)
  • Franklin W. Dixon’s (a pseudonym) The Tower Treasure (the first Hardy Boys book)

Literary works can be a part of the public domain for reasons other than the expiration of copyright—such as when a work is created by the government—but copyright expiration is the major way that literary works become a part of the public domain. Copyright owners of works first published in the United States in 1927 needed to renew those works’ copyrights in order to extend the original 28-year copyright term. Initially, the renewal term also lasted for 28 years, but over time the renewal term was extended to give the copyright holder an additional 67 years of copyright protection, for a total term of 95 years. This means that works that were first published in the United States in 1927—provided they were published with a copyright notice, were properly registered, and had their copyright renewed—were protected through the end of 2022.
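The term arithmetic above can be sketched as a short, illustrative calculation (a simplification, not legal advice; the function name, and the assumption that the work was first published in the U.S. with notice, properly registered, and renewed, are mine):

```python
ORIGINAL_TERM = 28   # original term, in years
RENEWAL_TERM = 67    # renewal term as extended over time (originally 28 years)

def public_domain_entry_year(publication_year: int) -> int:
    """Year on whose January 1 a renewed U.S. work of this era enters the public domain."""
    total_term = ORIGINAL_TERM + RENEWAL_TERM  # 28 + 67 = 95 years
    # Copyright terms run through the end of the calendar year, so protection
    # lasts through December 31 of publication_year + 95, and the work enters
    # the public domain on January 1 of the following year.
    return publication_year + total_term + 1

print(public_domain_entry_year(1927))  # -> 2023
```

On this arithmetic, works from 1927 were protected through the end of 2022 (1927 + 95) and entered the public domain on January 1, 2023, as described above.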

Once in the public domain, works can be made freely available online. Organizations that have digitized the text of these books, like the Internet Archive, Google Books, and HathiTrust, can now open up unrestricted access to the full text of these works. HathiTrust alone has opened up full access to more than 40,000 titles originally published in 1927. This increased access provides richer historical context for scholarly research and opportunities for students to supplement and deepen their understanding of assigned texts. And authors who care about the long-term availability of their works may also have reason to look forward to their works eventually entering the public domain: A 2013 study found that in most cases, public domain works are actually more available to readers than all but the most recently published works.

What’s more, public domain works can be adapted into new works of authorship, or “derivative works,” including by adapting printed books into audio books or by adapting classic books into interactive forms like video games. And the public domain provides opportunities to freely translate works to enrich our understanding of those works and help fill the gap in works available to readers in their native language.

Updates on the JCPA

Posted December 14, 2022
Photo by Elijah Mears on Unsplash

Last week saw a flurry of news about the Journalism Competition and Preservation Act (“JCPA”), proposed legislation that would create an exemption to antitrust law allowing certain news publishers to join together to collectively negotiate with digital platforms over payments for carrying their content. Authors Alliance has consistently opposed the JCPA, as we believe it would harm small publishers and creators while further entrenching major players in the news media industry.

Last Monday, December 5th, it came to light that the revised JCPA had been included in a “must pass” defense spending bill (the National Defense Authorization Act, or NDAA), leading the legislation’s opponents to promptly decry the move and caution against it. Then, the next day, news broke that Congress had removed the JCPA from the legislation—something to celebrate for those, like Authors Alliance, who believed this was ill-advised legislation that would not have served the interests of the creators who contribute to the news media.

Background

The JCPA was first proposed as separate bills in the Senate and House of Representatives in March 2021. The JCPA has laudable goals: to preserve a strong, diverse, and independent press, responding to ongoing crises in local and national journalism. But the actual text of the JCPA doesn’t meet those goals, and it creates other problems. One major problem is that the JCPA implicitly expands the scope of copyright, and would potentially require payment for activities like linking or using brief snippets of content that are not only fair uses, but are crucial for digital scholarship. In June 2021, Authors Alliance joined a group of like-minded civil society organizations on a letter urging Congress to clarify that the bill would not expand copyright protection to article links, and that authors and other internet users would not have to pay to link to articles or for the use of headlines and other snippets that fall within fair use.

Then, this September, a new version of the bill was released in the Senate. While the revised language made some improvements—like clarifying that the bill would not modify, expand, or alter the rights guaranteed under copyright—it still failed to clarify that the bill would not cover activities like linking that are fundamental for authors creating digital scholarship. And some changes to the legislation posed serious First Amendment concerns. For example, new language in the bill would have forced platforms to carry content of digital journalism organizations that participated in the collective bargaining, regardless of extreme views or misinformation. The revised bill could also have hurt authors of news articles financially, because it failed to include a provision that would require authors of the press articles to be compensated as part of the collective bargaining it envisioned. 

Inclusion in the NDAA

Last week, the news that the JCPA had been included in the NDAA was met with outcry. Its opponents argued that the bill was far too complex to be included in must-pass legislation, and merited further discussion and revision before becoming law. The JCPA was never marked up in the House of Representatives, nor did it receive a hearing there. Authors Alliance once again joined 26 other civil society organizations on a letter protesting the move and urging Congress not to include the JCPA in military spending or other must-pass legislation.

A wide variety of other stakeholders also objected to the inclusion of the JCPA in the NDAA. Small publications, lobbyists for platforms, and even journalism trade groups reiterated their opposition. Meta, the company that owns Facebook, even threatened to remove news from its platform were the legislation to pass (in response to similar legislation in Australia, Meta did in fact temporarily remove news from its platform in that country). Then, late on Tuesday, December 6th, the latest version of the bill’s text was released, with the JCPA omitted. The NDAA was approved by the House a few days later.

A Victory for Now

Because the JCPA was removed from the NDAA before its passage, it is no longer on the brink of becoming law. What happens next with the JCPA is less certain. There have already been multiple iterations of the bill, and it could be reintroduced, with or without modifications, in the next legislative session. While it’s unclear how the new makeup of Congress following the midterm elections might affect the JCPA’s chances of becoming law, this is certainly a factor in the bill’s future. This was also not the first time that the government has attempted to support journalism and local news through proposals that could affect users’ and authors’ ability to rely on fair use. Just last year, the Copyright Office conducted a study on establishing a new press publishers’ right in the United States, which would have required news aggregators to pay licensing fees for their aggregation of headlines, ledes, and short phrases of news articles (you can read about Authors Alliance’s reply comment in that study here). While the Office ultimately decided not to recommend the adoption of a new press publishers’ right, its study shows that the government may continue to pursue these policies on other fronts.

Analysis: Opinion Released in U.S. v. Bertelsmann

Posted November 18, 2022
Photo by Scott Graham on Unsplash

Last week, the district court released its opinion in United States v. Bertelsmann, an antitrust case concerning a proposed merger between Penguin Random House (“PRH”) and Simon & Schuster (“S&S”), which the court blocked (an “amended opinion” was released earlier this week, but the two documents only differ in their concluding language). Authors Alliance has been covering this case on our blog for the past year, and we were eager to read Judge Pan’s full opinion now that redactions had been made and the opinion made public. This post gives an overview of the opinion; shares our thoughts about what Judge Pan got right, got wrong, and left out; and discusses what the case could mean for the vast majority of authors who are not represented in the discussion.

Background

The Department of Justice initiated this antitrust proceeding after PRH and S&S announced that they intended to merge, with Bertelsmann, PRH’s parent company, purchasing S&S from its parent company, Paramount Global. The trade publishing industry has long been dominated by a few large publishing houses which have merged and consolidated over time. Today, the trade industry is dominated by the “Big Five” publishers: PRH, S&S, HarperCollins, Hachette Book Group, and Macmillan. And a sub-section of the trade publishing industry, “anticipated top sellers,” is the focus of the government’s argument and Judge Pan’s opinion. This market segment is defined as books for which authors receive an advance of $250,000 or higher (a book advance is an up-front payment made to authors when they publish a book, and often the only money these authors receive for their works). 

The main thrust of Judge Pan’s opinion is simple: the proposed merger would have led to lower advances for authors of anticipated top sellers, and the market harm that would flow from the decreased competition in the industry is substantial enough that the merger cannot go forward under U.S. antitrust law. To arrive at this conclusion, the court considered testimony from a variety of publishing industry insiders, experts in economics, and authors.

Defining the Market

Trade publishing houses are those that distribute books on a national scale and sell them in non-specialized channels, like at general interest bookstores or on Amazon. Trade publishing stands in contrast to self-publishing, academic publishing, and publishing with specialized boutique presses. But changes in how we read and how books are distributed have complicated these distinctions. For example, university presses are sometimes considered to be non-trade publishers, despite the fact that many also publish trade books. University presses are particularly well poised to publish books that bridge the gap between the scholarly and the popular—Harvard University Press’s publication of Thomas Piketty’s Capital in the Twenty-First Century is one example, and it was an unexpected bestseller. Similarly, Amazon sells trade books alongside other types of books. The Authors Alliance Guide to Understanding Open Access is available as a print book on Amazon, but it is one we released under an open access license, and is far from a trade book. Consumers increasingly buy books online as brick and mortar bookstores across the country close or downsize, and the Amazon marketplace obscures the distinction between trade publishing and other types of publishing.

Within trade publishing, there is a small segment of books which are seen as “hot,” which the DOJ calls anticipated top sellers. While PRH argued that this distinction was pulled out of whole cloth, the popular “Publishers Marketplace,” a subscription-based service for those in the industry, uses certain terms (essentially code words) to indicate the size of the advance in a book deal when deals are announced: “Deals under $50,000 are ‘nice,’ those up to $100,000 are ‘very nice,’ those up to $250,000 are ‘good,’ those up to $500,000 are ‘significant,’ and larger deals are ‘major.’”
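For illustration only, the code words quoted above can be read as a simple threshold lookup. The labels and dollar figures come from the opinion, but how deals at exactly each boundary are labeled is not specified, so the boundary handling here is my assumption:

```python
def deal_label(advance: int) -> str:
    """Map a book advance (in dollars) to the quoted industry code word."""
    if advance < 50_000:
        return "nice"
    if advance <= 100_000:   # "up to $100,000" — exact-boundary treatment assumed
        return "very nice"
    if advance <= 250_000:
        return "good"
    if advance <= 500_000:
        return "significant"
    return "major"

print(deal_label(300_000))  # -> significant
```

Note that the $250,000-and-up threshold the DOJ used to define “anticipated top sellers” begins at the top of the “good” tier in this labeling.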

For the market for anticipated top sellers (trade books with advances of $250,000 or higher), the Big Five collectively control 91% of the market share. In contrast, for books where an author receives an advance under $250,000, the Big Five control just 55% of the market, with non-Big Five trade publishers publishing a significant portion of trade books in this category. Post-merger, the combined PRH and S&S were expected to have a 49% share of the market for anticipated top sellers, according to expert testimony—more than the rest of the Big Five put together. For these reasons, the court determined that the merger could not proceed under antitrust law.

Beyond Anticipated Top Sellers

While Judge Pan’s opinion is measured, thoughtful, and reaches (from our perspective) the correct result, the broader context of the publishing industry shows how narrow the subset of authors in this market is, and how some authors were left out. The market the court considered in this case is “a submarket of the broader publishing market for all trade books.” In its pre-trial brief, PRH asserted that “[s]ome 57,000 to 64,000 books are published in the [U.S.] each year by one of more than 500 different publishing houses” and “another 10,000-20,000 are self-published.” It is unclear whether the first number includes academic books and other non-trade titles. “[A]nticipated top-selling books” account for just 2% of “all books published by commercial publishers” (again, it is unclear what “commercial publishers” means in this context), and an even smaller share of all books published in the U.S. in a given year (a difficult statistic to pin down, but somewhere between 300,000 and 1,000,000 books per calendar year, depending on who you ask).

It is not just that the authors that are the topic of this discussion are unique in the high advances they receive for their books, it is that the business of publishing a book is fundamentally different for these authors than less commercially successful authors. And this is what is missing from Judge Pan’s opinion: the economic system of Big Five book acquisitions for anticipated top sellers is totally unlike many authors’ experiences getting their work published. While many authors struggle to find a publisher willing to publish their book, and more still struggle to convince their publisher to do so on terms that are acceptable to them, anticipated top sellers are generally the subject of book auctions, whereby editors bid on the rights to a manuscript in an auction held by an author’s literary agent. It is important to keep in mind that for a vast majority of working authors, these auctions do not take place. 

The language in the opinion shows how it generalizes the experiences of commercially successful trade authors to authors more broadly, doing a disservice to the multitude of authors whose book deals do not look like the transactions it describes. Judge Pan states that “[a]uthors are generally represented by literary agents, who use their judgment and experience to find the best home for publishing a book.” Literary agents play an important role in the publishing ecosystem, and serve as intermediaries for some authors to help them develop their manuscripts and get the best deal possible. But the author-agent relationship is also a financial one: agents receive a “commission” of around 15% of all monies paid to the author. It stands to reason that an author who cares more about their work reaching a broad audience than receiving a large advance, or even an advance at all, is much less likely to be represented by an agent. And these authors too care about finding the right home for their work, getting a book deal with favorable terms, and feeling confident that their publisher is invested in their work. Making the publishing industry less diverse, with fewer houses overall, is just as detrimental to these authors as it is to top-selling ones. 

What is troubling about the decision is not that it focuses on a certain type of author and a certain type of book—the question of what the relevant “market” is in antitrust cases is a complicated one—but that the vision of authorship and publication it presents as typical does not reflect the lived experiences of most authors. The dominant narrative that “authors” are famous people who make a living from their writing, primarily through the high advances they receive from trade publishers, simply is not borne out in today’s information economy. 

Overall, the decision in this case is in many ways a boon for authors who care about a vibrant and diverse publishing ecosystem—whether they are authors of anticipated top sellers or authors who forgo compensation and publish open access. When publishing houses consolidate, fewer books are published, and fewer authors can publish with these publishers. This could lead less commercially successful trade authors to turn to other publishers (whether small trade publishers, university presses, or boutique publishers), who would then be forced to take on fewer books by less commercially successful authors. Self-publishing is an option for authors no longer able to find publishers willing to take their work on, but self-published authors earned 58% less than traditionally published authors as of 2017, and this decrease could lead some authors to abandon their writing projects altogether. These downstream effects may be speculative, but they deserve attention: this is almost certainly not the last we will hear about anticompetitive behavior in the publishing industry, and the effects of this behavior on non-top selling authors also matter. We hope that future judges considering these thorny questions will remember that authors are not a monolith, yet all are affected by drastic changes to the publishing ecosystem. 

Judge Blocks Penguin Random House/Simon & Schuster Merger

Posted November 2, 2022
Photo by Sasun Bughdaryan on Unsplash

On Monday, Judge Florence Pan issued an order enjoining (or blocking) the proposed merger of Penguin Random House and Simon & Schuster following a weeks-long trial in the U.S. District Court for the District of Columbia. Authors Alliance has been monitoring the case and covering it on this blog for the past year. Judge Pan’s full opinion is not yet public—it is currently sealed while each party determines what information it would like redacted as confidential—but her decision to block the proposed merger strikes a blow against efforts to consolidate major trade publishers and signals judicial concern about too little competition in the publishing industry. 

Judge Pan (who was appointed to the D.C. Circuit Court of Appeals in September to replace then-Judge Ketanji Brown Jackson, but has continued to preside over this district court case) issued a short order announcing the decision. Judge Pan found that the Department of Justice had “shown that ‘the effect of [the proposed merger] may be substantially to lessen competition’ in the market for the U.S. publishing rights to anticipated top-selling books.” She concluded that the merger could not move forward under U.S. antitrust law, which seeks to protect market competition and ensure that no one firm wields too much power. 

The DOJ applauded the decision, with Assistant Attorney General Jonathan Kanter stating that the decision “protects vital competition for books and is a victory for authors, readers, and the free exchange of ideas,” and that the merger would have “reduced competition [and] decreased author income.” Penguin Random House, on the other hand, has already signaled that it is considering appealing the decision, and initially indicated it would be filing an “expedited appeal” before walking back this position in later comments. Jonathan Karp, president and CEO of Simon & Schuster, released a statement to the firm’s employees indicating Penguin Random House’s plans to appeal and stating that Simon & Schuster would be reviewing the decision and conferring with Penguin Random House to determine “next steps.” 

The parties have until November 4th to propose redactions to the opinion, which Judge Pan will then rule on before the court releases it to the public. There is no set timeline for releasing the full decision, but the short window for the parties to request redactions could signal that the process will not take long. 

One interesting aspect of the case is that the government focused its filings and argument on the market for “anticipated bestsellers,” and on the effect that lessened competition would have on authors rather than the general public. Judge Pan adopted this position in her order, apparently accepting the argument as valid. Antitrust law typically focuses on harm to consumers, and indeed, Penguin Random House argued staunchly that the lack of a tangible harm to consumers meant the proposed merger did not pose an antitrust problem.

But were the merger to go forward, with fewer firms bidding on books expected to be commercially successful, the authors of those books could receive lower advances or less favorable contract terms due to the lessened competition. While the government chose to focus on a narrow segment of the book market (a move that drew criticism from some), the point that publishing house consolidation can hurt authors’ interests by giving them fewer choices is an important one.

Authors Alliance cares deeply about ensuring that our publishing ecosystem is diverse and vibrant, and the merger could have had deleterious effects on this diversity. At the same time, the focus on the anticipated bestseller market demonstrates one pitfall of the publishing industry: authors of commercial bestsellers tend to be centered as the authors whose interests are most important or worthy of attention, yet they represent a vanishingly small percentage of working authors. Many of Authors Alliance’s members do not publish with trade publishers like Penguin Random House and S&S, and these authors have different motivations and priorities than authors of anticipated bestsellers. It is important that the government consider a variety of different types of authors as it works to shape author-friendly laws and policies, and we look forward to engaging with policymakers to help raise awareness of this important issue. 

Privacy for Public-Minded Authors Part I

Posted October 25, 2022
Photo by Dayne Topkin on Unsplash

Privacy may not seem like an important issue for authors who write to be read: after all, our members are often motivated by a desire to share their creations broadly. But writing in the digital world increasingly presents authors with new considerations in the realm of personal and digital privacy. This post, the first in a new series on privacy considerations for public-minded authors, surveys an important privacy-related issue: the right to speak or write anonymously. 

Why Publish Anonymously?

The tradition of authors publishing anonymously, or under a pseudonym, stretches back centuries. Authors choose to publish works of authorship under pseudonyms or anonymously for a variety of reasons. Authors affiliated with academic institutions may prefer to publish their work under a different name in order to keep their academic and authorial identities separate. Some authors publish under both their real names and pseudonyms to explore new styles or genres of writing. Authors writing about controversial topics may also choose to use pseudonyms or publish anonymously in order to maintain personal privacy while contributing to our understanding of such topics. Pseudonyms also allow authors to write works jointly under a single name.

Historical Pseudonyms 

Notable pseudonyms from the past show the breadth of motivations authors might have for pursuing this path, as well as the advantages of using pseudonyms. For example, in the mid-1800s, the Brontë sisters (Anne, Charlotte, and Emily) began publishing under the names Acton, Currer, and Ellis Bell to conceal the fact that they were women. After they were rejected by multiple literary publications due to the gender biases of the time, male pen names gave the Brontës a path forward to share their creations with the world. Jane Eyre and Wuthering Heights have had a deep and profound impact on our literary tradition, and the Brontës’ ability to publish pseudonymously is what made this possible.

The 19th-century writer Mary Ann Evans (better known by her pen name, George Eliot), on the other hand, elected to use a male pen name in order to disassociate herself from preconceived notions about female authors. She was also motivated by a desire to keep her previous work as a translator, critic, and editor—which did bear her real name—separate from her identity as a fiction author. Other authors, such as C.S. Lewis and the famed contemporary Italian novelist Elena Ferrante, have used pen names in order to maintain their personal privacy and avoid public scrutiny while still engaging with the literary world. 

Pseudonyms can also allow multiple authors to work together under a single author name, to better appeal to readers or to accommodate publishing conventions. The idea for the Nancy Drew books, for example, originated with Edward Stratemeyer, who also created the Hardy Boys series. Stratemeyer generated ideas for children’s novels and then hired ghostwriters to execute his vision. The series’ “author,” Carolyn Keene, never existed. Instead, her name stands in for the slew of ghostwriters who wrote the novels over time. By using a single author name for these books, Stratemeyer (and later, his estate) was able to create a sense of continuity and build reputational capital for the fictional Keene, captivating multiple generations of young readers.

Anonymous Works

Rather than use a pseudonym, some authors choose to publish their works anonymously, so that no author at all is listed on or associated with the work. Like using a pseudonym, publishing anonymously divorces the author’s real identity from the work. But unlike using a pseudonym, an anonymous work conspicuously lacks an author at all. Throughout history, politically or socially controversial works have been published anonymously. Mary Shelley’s Frankenstein was originally published anonymously, and some have speculated that this was due in part to her fear of losing custody of her children were she to be associated with the monstrous tale. More recently, Go Ask Alice, a book about a 15-year-old girl’s descent into drug addiction, was published anonymously. The work presented itself as a diary, and the author’s anonymity led to debate about the work’s veracity. While the book is now considered to be fictional, the anonymous authorship created an implication that the work was somehow a “real” diary, which colored the reading experience for contemporaneous audiences.

Pseudonyms, Anonymity and Copyright

The U.S. copyright system accommodates authors’ rights to publish pseudonymously in a number of different ways, underscoring the importance of this right as a matter of public policy. First, and most importantly, the Copyright Act provides an avenue for an author to register their work under a pseudonym (longtime readers may recall that copyright registration is not necessary in order for a work to be protected by copyright, but can provide significant benefits in many circumstances). An author can use both their pseudonym and their legal name on their copyright registration, or their pseudonym only. If an author does use their legal name on the registration for a pseudonymously published work, it is important to understand from a privacy perspective that the author’s legal name will become a part of the public record, as copyright registrations are publicly available. If an author registers their copyright under a pseudonym and does not use their legal name in the registration, their legal name will not be made public.

However, there is a trade-off for authors who register copyrights under a pseudonym. The duration of copyright protection is different for pseudonymously authored works: rather than lasting for the life of the author plus 70 years, the copyright term for pseudonymous works is 95 years from the year of first publication or 120 years from the year of creation, whichever expires first. This is because, without a real person’s identity to associate with the copyright, it is impossible to measure the life of the author. Additionally, a pseudonym itself cannot be copyrighted, as names and short phrases are outside the scope of copyright protection (though in some instances, words or marks identifying the author could be protected in other ways, such as by trademark law). This means that it is possible for multiple authors to use the same pseudonym, which can create confusion for readers and be a detriment to those authors’ ability to reach them. 

Authors are also empowered by the copyright system to register copyrights anonymously. As with pseudonymous works, anonymous works are protected for 95 years from first publication or 120 years from creation, whichever expires first, since the life of the author cannot be ascertained without an author being named. Even if a work is registered anonymously, its author can sue to enforce the copyright (though they will likely lose their anonymity in the process). Anonymous works pose challenges for secondary users, however, because they leave creators who may want to use them in their own works without a point of contact.

Authors Alliance Signs on to Amicus Brief in Hunley v. Instagram

Posted October 12, 2022
Photo by Timothy L Brock on Unsplash

Authors Alliance is pleased to announce that we have joined with several civil society and library organizations (EFF, CCIA, ALA, ARL, OTW, and ACRL) on an amicus brief submitted to the Ninth Circuit Court of Appeals in Hunley v. Instagram, a case about whether individuals and organizations that merely link to content can be liable for secondary copyright infringement. Secondary infringement is a judicial doctrine that places liability on a party that knowingly contributes to or facilitates copyright infringement, but does not itself directly infringe a copyright. More specifically, the case asks whether Instagram can be secondarily liable for copyright infringement when users use its “embedding” feature, whereby websites and platforms can employ code to display an Instagram post within their own content. 

The Case

The case arose when a group of photographers, who had captured images related to the death of George Floyd and the 2016 election, objected to their images being used by a variety of media outlets without permission. The outlets had used Instagram’s embedding tool to link to and display the images rather than copying them directly. The photographers then sued Instagram in the Northern District of California on the theory that, by offering the “embedding” feature, Instagram was facilitating others’ copyright infringement and was therefore liable.

The district court dismissed the photographers’ claim because it did not pass muster under an inquiry known as the “server test,” established in the seminal Ninth Circuit case, Perfect 10 v. Amazon. The server test is premised on the idea that a website or platform does not violate the copyright holder’s exclusive right to display their work when a copy of the work is not stored on that website or platform’s servers. In short, the inquiry asks whether the work was stored on a website’s server, in which case the website could be liable for infringement, or whether that website merely links elsewhere without storing a copy of the work, in which case it cannot. When media outlets in this case embedded Instagram posts into their content, they did so using code that embeds and displays the post, but the posts were not actually stored on the outlets’ servers. Therefore, the district court found that Instagram was not liable for secondary infringement under the server test.  
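The storage-based line the server test draws can be sketched in a few lines of Python. This is a toy illustration using hypothetical domain names (`news-site.example`, `instagram.example`), not a statement of how courts actually apply the test:

```python
from urllib.parse import urlparse

def hosts_copy(page_domain: str, resource_url: str) -> bool:
    """Toy version of the 'server test': a site 'displays' a work
    only if the file is actually stored on (served from) its own servers."""
    return urlparse(resource_url).netloc == page_domain

# An embedded post: the image bytes stay on the platform's servers,
# so the embedding site does not store a copy of the work.
print(hosts_copy("news-site.example", "https://instagram.example/p/abc/media.jpg"))  # False

# A locally hosted copy: the site itself stores and serves the file.
print(hosts_copy("news-site.example", "https://news-site.example/images/copy.jpg"))  # True
```

On this logic, an embed is essentially a pointer: the outlet’s page stores only a reference to the post, not the photograph itself, which is why the district court found no infringement.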

The photographers have appealed to the Ninth Circuit Court of Appeals to challenge the validity of the server test for embedded content. It is worth noting that district courts in other circuits have disapproved of the Ninth Circuit’s Perfect 10 server test and found that embedding can constitute secondary infringement in some cases; the test is not applied in all jurisdictions. For this reason, some have speculated that the Ninth Circuit’s server test is ripe for revisiting.

The Brief

Our amicus brief asks the Ninth Circuit to affirm the district court’s dismissal of the photographers’ complaint and the continued viability of the server test. It argues that the Perfect 10 server test should not be discarded, as it has paved the way for an internet that depends on the use of hyperlinks to connect information and present it efficiently. The brief explains how linking works, why linking is important, and the negative consequences for a wide variety of internet users—including authors—that could occur if the court narrows or rejects the server test. 

First, our brief explains how embedding and linking work: when an article or website links to other content, it accomplishes this by using computer code to incorporate another’s content into the work and/or direct users to it. Our brief argues that linking, and particularly the act of inline linking (providing links within the text of a website or blog post itself, as we have done throughout this post) is fundamental to the internet as we know it. Online advertising, message boards, and social media platforms depend on the ability to connect sources of information and direct users elsewhere online. 

Second, our brief argues that abandoning or narrowing the Perfect 10 server test could introduce liability for inline linking. Like embedding social media posts, inline linking incorporates materials created by others without the secondary user actually hosting that content on a server. Inline linking allows internet users to efficiently verify information or learn more about a given topic in just a click. Because inline linking is analogous to embedding, a decision in favor of secondary liability in this case could threaten the legality of inline linking, potentially disrupting the very fabric of the internet as we know it.

Like the general internet-using public, authors depend on the ability to link to other sources in their online writing in order to cite and engage with other works of authorship. In fact, this is why we decided to weigh in on this case: a decision that introduces liability for inline linking could drastically alter how authors cite other sources of information and how they conduct research in a digital environment. Authors rely on inline linking to save space and preserve the readability of their works while citing sources and engaging with other works. Authors also depend on online linking to perform research, as it can quickly and easily direct them to new sources of information. As more and more works are born digital, an author’s ability to link to other content to enrich their scholarship has become more important than ever. It is crucial that this ability be protected in order for the internet to continue to be an engine of learning and the advancement of knowledge. 

So far, the parties have submitted their opening briefs in this case, and several other amicus briefs have been filed. Oral argument has not yet been scheduled. Authors Alliance will keep our readers informed about updates in this case as it moves forward. 


Wiley Removes Over 1,300 Ebooks from Academic Library Collections 

Posted October 7, 2022
“chair and empty bookshelf” by Mark Z. is licensed under CC BY-NC 2.0.

Last month, publisher John Wiley & Sons made headlines when it made the controversial decision to abruptly remove over 1,300 ebooks from academic library collections. It did so by removing these titles from ProQuest Academic Complete, a large collection of ebooks that many libraries subscribe to. Earlier this week, Wiley made headlines again when it announced it was temporarily restoring access in the face of public pressure. 

As has unfortunately been typical of other changes in how publishers work (or refuse to work) with libraries, the authors of these books were left in the dark, with little say in the decision. We have heard from many authors who believe, as we do, that libraries play an extraordinarily important role in preserving and providing access to the materials we write, in all formats, and that libraries should be able to purchase access on reasonable terms so they can fulfill their missions. We’re writing this post to highlight the Wiley situation and outline some ways that authors can make their voices heard. 

The Wiley Ebook Situation

Wiley’s move was particularly shocking because the removal of access coincided with the beginning of the academic term. Countless students who were relying on library access to textbooks for their courses lost that access exactly when the texts were most needed. It is increasingly common for publishers of academic texts like these to refuse to sell electronic copies to libraries at all, which appears to be the tactic Wiley was pursuing here, leaving students with no low-cost alternative (in fact, Wiley reportedly refuses to sell textbooks to libraries at all, in either digital or print format). This left instructors scrambling to find new texts to assign, redesign syllabi, and otherwise adapt their courses to the loss of access to the Wiley texts, at a moment when their attention should have been focused on teaching and welcoming students. 

Unsurprisingly, the decision was widely condemned by librarians, civil society organizations, and university libraries. #ebookSOS, which has been working to highlight these kinds of challenges for several years, organized several efforts in protest. 

Authors Alliance began working with #ebookSOS to raise awareness of this issue among authors whose works were removed from library collections in an effort to encourage Wiley to reverse its decision and provide assurances that it will not take measures like these in the future. The authors we’ve heard from want their books to be read, to serve learning, and to be used to share knowledge with the world. Some of these authors view Wiley’s decision as a betrayal, and indeed, it is hard to square with Wiley’s asserted commitment to “[e]nabling discovery by supporting access to knowledge and fueling the engines of research.”

Earlier this week, Wiley relented. It announced that it had decided to temporarily reverse course, restoring access to the removed texts until June 2023, when they will once again be removed. Wiley apologized for the “disruption” the move caused to students, libraries, and instructors, whom it acknowledged were caught off guard. But while this may ease the burden on instructors and students for now, librarians have resoundingly found the measure to be insufficient. Temporarily restoring access to these particular texts does not solve the fundamental problem that large publishers like Wiley can—and do—unilaterally and without warning remove books from digital library shelves, even when the motive is purely to increase their own profits. We’ve continued to hear from authors who have strong reservations about both Wiley’s decision and how it went about making it. If you are an author published by Wiley and have thoughts about its decision to remove access to these ebooks, we would love to hear from you: please write to us at info@authorsalliance.org.

What Authors Can Do

Unfortunately, this is not the first time a publisher has acted unilaterally to make its ebooks less accessible, to the detriment of readers and authors. Trade publisher Macmillan at one point announced it would impose a two-week embargo before libraries could acquire newly published titles as ebooks. Similarly, rising ebook prices during the pandemic hurt many students who relied on these works to learn. Yet, as Wiley’s response to the outcry following its announcement shows, a concerted campaign to pressure these publishers can have results. For example, Macmillan ultimately abandoned its plans for an ebook embargo following a boycott of all its ebook titles by various library systems. 

Most authors have little say in how publishers distribute their works. Publishing contracts typically give publishers broad discretion to determine when, how, and on what terms authors’ books are sold. While it is understandable that authors will not be involved in every decision about distribution, authors have placed trust in publishers that they will make reasonable decisions. So, what can authors do? 

First, as the Wiley ebook situation has shown, authors’ voices do matter. If you have concerns about your book being available to libraries, speak out! One reason that Authors Alliance exists is to help amplify your voice and help publishers understand your views on how their legal and policy decisions affect your interests in making your books available to the world. If you’re wondering how we might help, please get in touch—we’d love to hear from you. Second, authors do have an opportunity to influence how their books are shared when negotiating their publishing contracts. There are no guarantees that a publisher will accept your proposed contract language (and we suspect most will be resistant). But given that several publishers have recently demonstrated their unwillingness to make reasonable distribution decisions, it seems to us equally reasonable to ask for contractual assurances that they will continue to sell your book to libraries on reasonable terms, in all formats. For some language to serve as a starting point, #ebookSOS has shared its model contract language for authors, here.

Biden’s Open Access to Research Policy and How it Affects Authors

Several weeks ago, the White House Office of Science and Technology Policy (OSTP) issued a memo titled Ensuring Free, Immediate, and Equitable Access to Federally Funded Research. The memo, which builds upon earlier policies including this 2013 Obama administration open access memo and this 2008 National Institutes of Health policy, directs all federal agencies with research and development spending to take steps to ensure that federally sponsored research in the form of scholarship and research data will be available free of charge from the day of publication. 

The initial release of the Biden OSTP memo generated a rush of news speculating about its impact on scholarly publishing—how major publishers would react, how academic institutions would respond (and specifically whether it would result in a shift toward more “Gold Open Access” publishing, in which authors pay publishers an article processing charge to publish their article openly), and how many articles this change would affect. SPARC, a nonprofit dedicated to supporting open research, has a great summary of the policy and related news.

We recently caught up with Peter Suber, Senior Advisor on Open Access at Harvard University Library, to talk about the implications of the OSTP policy for authors. Peter is a founding member of Authors Alliance who has been deeply involved in advocating for open access for several decades. 

Q: Give us a brief overview of the new OSTP policy – what is this and why is it important? 

The key background is that back in 2013, the Obama White House OSTP issued a memo asking the 20 largest federal funding agencies to adopt OA policies. The memo applied to agencies spending over $100 million per year on extramural research funding. The new memo from the Biden White House extends and strengthens the Obama memo in three important ways: 

  • It covers all federal funders, not just the largest ones. I’ve seen estimates that the new memo covers more than 400 agencies, but the OSTP has not yet released a precise number. Among other agencies, the new memo covers the National Endowment for the Humanities. So for the first time federal OA policies will cover the humanities and not just the sciences.
  • The Obama memo permitted embargoes of up to 12 months, and publishers routinely demanded maximum embargoes. The Biden memo eliminates embargoes and requires federally funded research to be open on the date of publication. Like the Gates Foundation—which I believe was the first funder to require unembargoed OA to the research it funded—the White House announced its no-embargo policy several years before it will take effect, giving publishers plenty of time to adapt. 
  • The Biden memo covers data, not just articles. This is an important step to cover more research outputs and more of the practices that make up open science and open scholarship.

How will publishers react to this new policy? Of course, they have the right to refuse to publish federally funded research. When the NIH policy was new in 2008, we didn’t know whether any publishers would refuse; many publishers lobbied bitterly against it, and we thought some might do just that. But it turns out that none refused. It’s hard to prove a negative, but the Open Access Directory keeps a crowd-sourced list of publisher policies on NIH-funded authors, and it has so far turned up no publishers who refuse to publish NIH-funded authors just because they are covered by a mandatory OA policy. 

Of course one reason is that the NIH is so large. It’s by far the world’s largest funder of non-classified research. Any publishers who refuse to publish NIH-funded authors would abandon a huge vein of high-quality research to their rivals. But when federal OA policy covers smaller agencies as well, some publishers might well refuse to publish, say, NEH-funded research, because they don’t receive many submissions from NEH-funded authors. This is something to watch. 

Q: The Biden memo does not address ownership of rights or licensing for either scholarship or data. How do you think agencies will address rights issues in their implementation? 

Good point. Neither the Obama nor the Biden memo explicitly requires open licenses. But both require that agency policies permit “reuse,” which will require open licenses in practice. Unfortunately, the Obama White House approved agency policies that did not live up to this requirement. We can hope the Biden White House will do better on that point. Of course, Plan S requires a CC-BY license, and the Biden memo conspicuously stops short of that. As a result, we can expect lots of lobbying, either at the agency level or the OSTP level, for and against explicit open licensing requirements, and for and against specific licenses like CC-BY.

Q: Some people have written about how open access policies, and the Plan S Rights Retention Strategy in particular, undermine authors’ rights. E.g., this post on Scholarly Kitchen. Our point of view is that those policies address a negotiating imbalance that has traditionally favored publishers, and allow academic authors—who on the whole prefer broad reach and access to their work—to switch the “default” to open for their articles even when their publishers wish otherwise. Do you have a response to the argument that OA policies for funded research undermine authors’ rights? 

I’ve never seen a good argument that rights retention policies harm authors or limit their rights. On the contrary, these policies help authors and enlarge their rights. I’ve made this case in response to criticisms of the rights-retention OA policies at Harvard, and I’ve enumerated the benefits of rights-retention policies for authors. (For background on the Harvard rights-retention policies, I can recommend a handout I wrote for a talk last year.)  

One criticism is that rights-retention OA policies will reduce author choice by causing some publishers to refuse to publish covered authors. But in practice there is no evidence that this actually happens. I’m not aware of a single instance of this happening in the 14 years that Harvard has had its rights-retention policies. The same goes for the more than 80 Harvard-style policies now in effect in North America, Europe, Africa, Asia, and Australasia. 

In fairness, Harvard-style policies give authors the right to waive the open license. By default the university has non-exclusive rights, but authors can waive that license if they wish, and publishers can demand that authors get the waiver. But that too is rare. In our case, very few publishers – just two or three – systematically make that demand, and I haven’t heard that it’s common anywhere else. Our waiver rate is below 5%. Even with waiver options, these policies definitely shift the default to open.

Under the Plan S rights retention strategy, authors add a paragraph to their submission letter saying that any accepted manuscript arising from the submission is already under a CC-BY license. Publishers have the right to desk-reject those articles upon submission. But we don’t know whether any will actually do so. Plan S has a tool to track journal compliance with the Plan S terms, and it will alert authors to steer clear of those publishers. 

Q: There has been speculation that the Biden memo will accelerate the rate at which publishers adopt an "article processing charge" Gold OA model that would require all authors (or their funders or universities) to pay for their articles to be published. What do you think?

First we should note that the White House guidelines are 100% repository-based or “green”. They require deposit in OA repositories, not publication in OA journals. As far as I can tell, publishing in an OA journal would not even count toward compliance, since those authors would still have to deposit their texts and data in suitable repositories. 

Publishers could say to federally-funded authors, “You can publish with us only if you pay an APC [article processing charge] for our gold option.” Authors could take them up on that or they could withdraw their submissions and look elsewhere. The new OSTP memo lets authors use some of their grant funds to pay “reasonable publication costs”. Some authors may be fooled and think that paying the fee is the only way to comply with the funder policy. But that would be untrue. As more and more authors realize that they can comply with the funder policy by depositing in a repository, at no charge, I predict that they will divide. Some will take the costless path to compliance and refuse to pay what I’ve called a prestige tax just to publish in a certain journal. Others will pay the prestige tax for a journal’s brand and reputation, if only because journal prestige still carries a lot of weight with academic authors. This obstacle to frictionless sharing is a cultural obstacle that new policies cannot directly dismantle. But we should remember that when publishers demand a publication fee and authors pay it, the authors are paying for the journal brand. They are not paying to comply with the funder policy, which they could do at no charge.

The Biden memo is equivocal about this possibility. On the one hand, it lets federal grantees use grant money to pay reasonable publication costs. On the other hand, it requires that agency policies “ensure equitable delivery” of federally funded research. The memo uses “equity” language in similar contexts half a dozen times. On one natural interpretation, this language rules out APC barriers to compliance, because APCs exclude some authors on economic grounds. This is another front on which there will be lots of lobbying as the agencies put their policies into writing. In fact, the lobbying has already begun.

Some publishers will undoubtedly demand fees or try to demand fees to publish federally-funded authors. But we already know that some will not. Science, for example, has already said that it will publish federally-funded authors without requiring them to buy its “gold” OA option. AAAS said that “it is already our policy that authors who publish with one of our journals can make the accepted version of their manuscript publicly available in institutional repositories immediately upon publication, without delay.” In a related editorial, Science explained that its authors may already deposit in the OA repositories of their choice “without delay or incurring additional fees.” It opposes a full shift toward “author pays” gold OA because it discriminates against many kinds of researchers, such as early-career researchers, researchers from smaller schools, and those in underfunded disciplines. It agrees that the APC model “can be inequitable for many scientists and institutions.” Some journals will follow Science, because it’s Science. Some will do so to avoid the equity barrier. And some will do so to signal that they will only evaluate submissions on their merits.

Q: As agencies go about developing their own plans for implementing this policy, will authors or others have an opportunity to give input, or will this be a closed-door process? 

We don't know yet. The White House didn't solicit public comments for the 2022 memo, which angered some publishers. The Obama White House did solicit public comments, twice, and both times the comments overwhelmingly favored the policy.

It seems that agencies could still call for public comments before they finalize their policies. The actual development of the policies will be coordinated by three agencies: the OSTP, the Office of Management and Budget, and the National Science and Technology Council Subcommittee on Open Science. We don’t know what guidelines, if any, they will lay down for that coordination. 

The background on coordination goes back to the Obama White House. When it told the large agencies that they must adopt OA policies, it allowed the policies to differ but asked agencies to work together to ensure that the policies aligned. In the end, I believe the policies differed too much. Universities really feel this because they receive grants from many agencies and therefore have to comply with all of the policies. Like the Obama memo, the Biden memo allows the policies to differ and calls for coordination. We can hope for less divergence than in the past.