Some Initial Thoughts on the US Copyright Office Report on AI and Digital Replicas

Posted August 1, 2024

On July 31, 2024, the U.S. Copyright Office published Part 1 of its report summarizing the Office’s ongoing initiative on artificial intelligence. This first part of the report addresses digital replicas, in other words, how AI is used to realistically but falsely portray people in digital media. In the report, the Office recommends new federal legislation that would create a new right to control “digital replicas,” which it defines as “a video, image, or audio recording that has been digitally created or manipulated to realistically but falsely depict an individual.”

We remain somewhat skeptical that such a right would do much to address the most troubling abuses such as deepfakes, revenge porn, and financial fraud. But, as the report points out, a growing number of varied state legislative efforts are already in the works, making a stronger case for unifying such rules at the federal level, with an opportunity to ensure adequate protections are in place for creators.  

The backdrop for the inquiry and report is a fast-developing space of state-led legislation, including legislation aimed at deepfakes. Earlier this year, Tennessee became the first state to enact such a law, the ELVIS Act (TN HB 2091), while other states mostly focused on addressing deepfakes in the context of sexual acts and political campaigns. New state laws continue to be introduced, making the space harder and harder to navigate for creators, AI companies, and consumers alike. A federal right of publicity in the context of AI has already been discussed in Congress, and just yesterday a new bill, the “NO FAKES Act,” was formally introduced.

Authors Alliance has watched the development of this US Copyright Office initiative closely. In August 2023, the Office issued a notice of inquiry, asking stakeholders to weigh in on a series of questions about copyright policy and generative AI. Our comment in response to the inquiry was devoted in large part to sharing the ways that authors are using generative AI, explaining how fair use should apply to training AI, and urging the USCO to be cautious in recommending new legislation to Congress.

This report and recommendation from the Copyright Office could have a meaningful impact on authors and other creators, including both those whose personality and images are subject to use with AI systems, and those who are actively using AI in their writing and research. Below are our preliminary thoughts on what the Copyright Office recommends, which it summarizes in the report as follows: 

“We recommend that Congress establish a federal right that protects all individuals during their lifetimes from the knowing distribution of unauthorized digital replicas. The right should be licensable, subject to guardrails, but not assignable, with effective remedies including monetary damages and injunctive relief. Traditional rules of secondary liability should apply, but with an appropriately conditioned safe harbor for OSPs. The law should contain explicit First Amendment accommodations. Finally, in recognition of well-developed state rights of publicity, we recommend against full preemption of state laws.”

Initial Impressions

Overall, this seems like a well-researched and thoughtful report, given that the Office had to navigate a huge number of comments and opinions (over 10,000 comments were submitted). The report also incorporates many more recent developments, including numerous new state laws and federal legislative proposals.

Things we like: 

  • In the context of an increasing number of state legislative efforts—some overbroad and more likely to harm creators than to help them—we appreciate the Office’s recognition that a patchwork of laws can pose a real problem for users and creators who are trying to understand their legal obligations when using AI that references and implicates real people.
  • The report also recognizes that the collection of concerns motivating digital replica laws—things like control of personality, privacy, fraud, and deception—are not at their core copyright concerns. “Copyright and digital replica rights serve different policy goals; they should not be conflated.” This matters a lot for what the scope of protection and other details for a digital replica right looks like. Copy-pasting copyright’s life+70 term of protection, for example, makes little sense (and the Office recognizes this, for example, by rejecting the idea of posthumous digital replica rights). 
  • The Office also suggests limiting the transferability of rights. We think this is a good idea to protect individuals from unanticipated downstream uses by companies that may persuade individuals to sign agreements that would lock them into unfavorable long-term deals. “Unlike publicity rights, privacy rights, almost without exception, are waivable or licensable, but cannot be assigned outright. Accordingly, we recommend a ban on outright assignments, and the inclusion of appropriate guardrails for licensing, such as limitations in duration and protection for minors.” 
  • The Office explicitly rejects the idea of a new digital replica right covering “artistic style.” We agree that protection of artistic style is a bad idea. Creators of all types have always used existing styles and methods as a baseline to build upon, and it’s resulted in a rich body of new works. Allowing for control over “style,” however well-defined, would impinge on these new creations. Strong federal protection over “style” would also contradict traditional limitations on rights, such as Section 102(b)’s limits on copyrightable subject matter and the idea/expression dichotomy, which are rooted in the Constitution. 

Some concerns: 

  • The Office’s proposal would apply to the distribution of digital replicas, which are defined as “a video, image, or audio recording that has been digitally created or manipulated to realistically but falsely depict an individual.” This definition is quite broad and could potentially include a large number of relatively common and mostly innocuous uses—e.g., taking a photo with your phone of a person and applying a standard filter on your camera app could conceivably fall within the definition. 
  • First Amendment rights to free expression are critical for protecting uses for news reporting, artistic uses, parody, and so on. Expressive uses of digital replicas—e.g., a documentary that uses AI to replicate a recent event involving recognizable people, or reproduction in a comedy show to poke fun at politicians—could be significantly hindered by an expansive digital replica right unless it has robust free expression protections. Of course, the First Amendment applies regardless of the passing of a new law, but it will be important for any proposed legislation to find ways to allow people to exercise those rights effectively. As the report explains, comments were split. Some, like the Motion Picture Association, proposed enumerated exceptions for expressive use, while others, such as the Recording Industry Association of America, took the position that “categorical exclusions for certain speech-oriented uses are not constitutionally required and, in fact, risk overprotection of speech interests at the expense of important publicity interests.” 

We tend to think that most laws should skew toward “overprotection of speech interests,” but the devil is in the details on how to do so. The report leaves much to be desired on how to do this effectively in the context of digital replicas. For its part, “[t]he Office stresses the importance of explicitly addressing First Amendment concerns. While acknowledging the benefits of predictability, we believe that in light of the unique and evolving nature of the threat to an individual’s identity and reputation, a balancing framework is preferable.” One thing to watch in future proposals is what such a balancing framework actually includes, and how easy or difficult it is to assert protection of First Amendment rights under this balancing framework. 

  • The Office rejects the idea that Section 230 should provide protection for online service providers if they host content that runs afoul of the proposed new digital replica rights. Instead, the Office suggests something like a modified version of the Copyright Act’s DMCA section 512 notice and takedown process. This isn’t entirely outlandish—the DMCA process mostly works, and if this new proposed digital replica right is to be effective in practice, asking large service providers that are benefiting from hosting content to be responsive in cases of alleged infringing content may make sense. But, the Office says that it doesn’t believe the existing DMCA process should be the model, and points to its own Section 512 report for how a revised version for digital replicas might work. If the Office’s 512 study is a guide to what a notice-and-takedown system could look like for digital replicas, there is reason to be concerned.  While the study rejected some of the worst ideas for changing the existing system (e.g., a notice-and-staydown regime), it also repeatedly diminished the importance of ideas that would help protect creators with real First Amendment and fair use interests. 
  • The motivations for the proposed digital replica right are quite varied. For some commenters, it’s an objection to the commercial exploitation of public figures’ images or voices. For others, the need is to protect against invasions of privacy. For yet others, it is to prevent consumer confusion and fraud. The Office acknowledges these different motivating factors in its report and in its recommendations attempts to balance competing interests among them. But there are still real areas of discontinuity. For example, the basic structure of the right the Office proposes is intellectual-property-like, yet it doesn’t make much sense to address some of the most pernicious fraudulent uses, such as deepfakes to manipulate public opinion, revenge porn, or scam phone calls, with a privately enforced property right oriented toward commercialization. Discovering and stopping those uses requires a very different approach, one that this particular proposal seems ill-equipped to provide. 

Just a few months ago, we were extremely skeptical that new federal legislation on digital replicas was a good idea. We’re still not entirely convinced, but the rash of new and proposed state laws does give us some pause. While the federal legislative process is fraught, it is also far from ideal for authors and creators to operate under a patchwork of varying state laws, especially those that provide little protection for expressive uses. Overall, we hope certain aspects of this report can positively influence the debate about existing federal proposals in Congress, but we remain concerned about the lack of detail about protections for First Amendment rights. 

In the meantime, you can check out our two new resource pages on Generative AI and Personality Rights to get a better understanding of the issues.

Writing About Real People Update: Right of Publicity, Voice Protection, and Artificial Intelligence

Posted March 7, 2024

Some of you may recall that Authors Alliance published our long-awaited guide, Writing About Real People, earlier this year. One of the major topics in the guide is the right of publicity—a right to control use of one’s own identity, particularly in the context of commercial advertising. These issues have been in the news a lot lately as generative AI poses new questions about the scope and application of the right of publicity. 

Sound-alikes and the Right of Publicity

One important right of publicity question in the genAI era concerns the increasing prevalence of “sound-alikes” created using generative AI systems. The issue of AI-generated voices that mimic real people came to the public’s attention with the apparently convincing “Heart on My Sleeve” song, imitating Drake and the Weeknd, and tools that facilitate creating songs imitating popular singers have since increased in number and availability.

AI-generated soundalikes are a particularly interesting use of this technology when it comes to the right of publicity because one of the seminal right of publicity cases, taught in law schools and mentioned in primers on the topic, concerns a sound-alike from the analog world. In 1986, the Ford Motor Company hired an advertising agency to create a TV commercial. The agency obtained permission to use “Do You Wanna Dance,” a song Bette Midler had famously covered, in its commercial. But when the ad agency approached Midler about actually singing the song for the commercial, she refused. The agency then hired a former backup singer of Midler’s to record the song, apparently asking the singer to imitate Midler’s voice in the recording. A federal court found that this violated Midler’s right of publicity under California law, even though her voice was not actually used. Extending this holding to AI-generated voices seems logical and straightforward—it is not about the precise technology used to create or record the voice, but about the end result the technology is used to achieve. 

Right of Publicity Legislation

The right of publicity is a matter of state law. In some states, like California and New York, the right of publicity is established via statute, and in others, it’s a matter of common law (or judge-made law). In recent months, state legislatures have proposed new laws that would codify or expand the right of publicity. Similarly, many have called for the establishment of a federal right of publicity, specifically in the context of harms caused by the rise of generative AI. One driving force behind calls for the establishment of a federal right of publicity is the patchwork nature of state right of publicity laws: in some states, the right of publicity extends only to someone’s name, image, likeness, voice, and signature, but in others, it’s much broader. While AI-generated content and the ways in which it is being used certainly pose new challenges for courts considering right of publicity violations, we are skeptical that new legislation is the best solution. 

In late January, the No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act of 2024 (or “No AI FRAUD Act”) was introduced in the House of Representatives. The No AI FRAUD Act would create a property-like right in one’s voice and likeness, which is transferable to other parties. It targets voice “cloning services” and mentions the “Heart on My Sleeve” controversy specifically. But civil society groups and advocates for free expression have raised alarm about the ways in which the bill would make it easier for creators to actually lose control over their own personality rights while also impinging on others’ First Amendment rights due to its overbreadth and the property-like nature of the right it creates. While the No AI FRAUD Act contains language stating that the First Amendment is a defense to liability, it’s unclear how effective this would be in practice (and as we explain in the Writing About Real People Guide, the First Amendment is always a limitation on laws affecting freedom of expression). 

The Right of Publicity and AI-Generated Content

In the past, the right of publicity has been described as “name, image, and likeness” rights. What is interesting about AI-generated content and the right of publicity is that a person’s likeness can be used in a more complete way than ever before. In some cases, both their appearance and voice are imitated, associated with their name, and combined in a way that makes the imitation more convincing. 

What is different about this iteration of right of publicity questions is the actors behind the production of the sound-alikes and imitations, and, to a lesser extent, the harms that might flow from these uses. A recent use of another celebrity’s likeness in connection with an advertisement is instructive on this point. Earlier this year, advertisements emerged on various platforms featuring an AI-generated Taylor Swift participating in a Le Creuset cookware giveaway. These ads contained two separate layers of deception: most obviously, Swift was AI-generated and did not personally appear in the ad; less obviously, the ads were not from Le Creuset at all. They were part of a scam whereby users might pay for cookware they would never receive, or enter credit card details that could then be stolen or otherwise used for improper purposes. Swift’s likeness and voice were appropriated by scammers to trick the public into thinking they were interacting with Le Creuset advertising, and compared to more traditional conceptions of advertising, the unfair advantages and harms caused by that use are much more difficult to trace. 

It may be that the right of publicity as we know it (and as we discuss it in the Writing About Real People Guide) is not well-equipped to deal with these kinds of situations. But it seems to us that codifying the right of publicity in federal law is not the best approach. Just as Bette Midler had a viable right of publicity claim under California law back in 1988, Taylor Swift would likely have a viable claim against Le Creuset if her likeness had been used by that company in connection with commercial advertising. The problem is not the “patchwork of state laws,” but that this kind of doubly-deceptive advertising is not commercial advertising at all. On a practical level, it’s unclear what party could even be sued over this kind of use. Certainly not Le Creuset. And it seems to us unfair to say that the creator of the AI technology used should be left holding the bag just because someone else deployed it for fraudulent purposes. The real fraudsters—anonymous but likely not impossible to track down—are the ones who can and should be pursued under existing fraud laws. 

Authors Alliance has said elsewhere that reforms to copyright law cannot be the solution to any and all harms caused by generative AI. The same goes for the intellectual property-like right of publicity. Sensible regulation of platforms, stronger consumer protection laws, and better means of detecting and exposing AI-generated content are possible solutions to the problems that the use of AI-generated celebrity likenesses have brought about. To instead expand intellectual property rights under a federal right of publicity statute risks infringing on our First Amendment freedoms of speech and expression.

Federal Right of Publicity Takes Center Stage in Senate Hearing on AI

Posted July 28, 2023

The Authors Alliance found this write-up by Professor Jennifer Rothman at the University of Pennsylvania useful and wanted to share it with our readers. You can find Professor Rothman’s original post on her website, Rothman’s Roadmap to the Right of Publicity, here.

By Jennifer Rothman

On July 12th, the Senate Judiciary Committee’s Subcommittee on Intellectual Property held its second hearing about artificial intelligence (AI) and intellectual property, this one focused expressly on “copyright” law. Although copyright was mentioned many times during the almost two-hour session, and written testimony considered whether the use of unlicensed training data was copyright infringement, a surprising amount of the hearing focused not on copyright law but instead on the right of publicity.

Both senators and witnesses spent significant time advocating for new legislation—a federal right of publicity or a federal anti-impersonation right (what one witness dubbed the FAIR Act). Discussion of such a federal law occupied more of the hearing than predicted and significantly more time than was spent parsing either existing copyright law or suggesting changes to copyright law.

In his opening remarks, Senator Christopher Coons suggested that a federal right of publicity should be considered to address the threat of AI to performers. At the start of his comments, Coons played an AI-generated song about AI, set to the tune of “New York, New York,” in the vocal style of Frank Sinatra. Notably, Coons highlighted that he had sought and received permission to use both the underlying copyrighted material and Frank Sinatra’s voice.

In addition to Senator Coons, Senators Marsha Blackburn and Amy Klobuchar expressly called for adding a federal right of publicity. Blackburn, a senator from Tennessee, highlighted the importance of name and likeness rights for the recording artists, songwriters, and actors in her state and pointed to the concerns raised by the viral AI-generated song “Heart on My Sleeve.” This song was created from a prompt asking for a track simulating one written and sung by Drake and The Weeknd. Ultimately, Universal Music Group got platforms, such as Apple Music and Spotify, to take the song down on the basis of copyright infringement claims. Universal alleged that the use infringed Drake and The Weeknd’s copyrighted music and sound recordings. The creation (and popularity!) of the song sent shivers through the music industry.

It therefore is no surprise that Jeffrey Harleston, General Counsel for Universal Music Group, advocated both in his oral and written testimony for a federal right of publicity to protect against “confusion, unfair competition[,] market dilution, and damage” to the reputation and career of recording artists if their voices or vocal styles are imitated in generative AI outputs. Karla Ortiz, a concept artist and illustrator known for her work on Marvel films, also called for a federal right of publicity in her testimony. Her concerns were tied to the use of her name as a prompt to produce outputs, trained on her art, that imitate her style and could substitute for hiring her to create new works. Law Professor Matthew Sag supported adoption of a federal right of publicity to address the “hodgepodge” of state laws in the area.

Dana Rao, the Executive Vice President and General Counsel of Adobe, expressed support for a federal anti-impersonation right, which he noted had a catchy acronym—the FAIR Act. His written testimony on behalf of Adobe highlighted its support for such a law and gave the most details of what such a right might look like. Adobe suggested that such an anti-impersonation law would “offer[] artists protection against” direct economic competition of an AI-generated replication of their style and suggested that this law “would provide a right of action to an artist against those that are intentionally and commercially impersonating their work through AI tools. This type of protection would provide a new mechanism for artists to protect their livelihood from people misusing this new technology, without having to rely solely on copyright, and should include statutory damages to alleviate the burden on artists to prove actual damages, directly addressing the unfairness of an artist’s work being used to train an AI model that then generates outputs that displace the original artist.” Adobe was also open to adoption of “a federal right of publicity . . . to help address concerns about AI being used without permission to copy likenesses for commercial benefit.”

Although some of the testimony supporting a federal right of publicity suggested that many states already extend such protection, there was a consensus that a preemptive federal right could provide greater predictability, consistency, and protection. Senator Klobuchar and Universal Music’s Harleston emphasized the value of elevating the right of publicity to a federal “intellectual property” right. Notably, this would have the bonus of clarifying an open question of whether right of publicity claims are exempted from the Communications Decency Act’s § 230 immunity provision for third-party speech conveyed over internet platforms. (See, e.g. Hepp v. Facebook.)

Importantly, Klobuchar noted the overlap between concerns over commercial impersonation and concerns over deepfakes that are used to impersonate politicians and create false and misleading videos and images that pose a grave danger to democracy.

Of course, the proof is in the pudding. No specific legislation has been floated to my knowledge and so I cannot evaluate its effectiveness or pitfalls. Although the senators and witnesses who spoke about the right of publicity were generally supportive, the details of what such a law might look like were vague.

From the right-holders’ (or identity-holders’) perspective, the scope of such a right is crucial. Many open questions exist. If preemptive in nature, how would such a statute affect longstanding state law precedents and the appropriation branch of the common law privacy tort that in many states is the primary vehicle for enforcing the right of publicity? When confronted with similar concerns over adopting a new “right of publicity” to replace New York’s longstanding right of privacy statute that protected against the misappropriation of a person’s name, likeness, and voice, New York legislators astutely recognized the danger of unsettling more than 100 years of precedents that had provided (mostly) predictable protection for individuals in the state. 

Another key concern is whether these rights will be transferable away from the underlying identity-holders. If they are, then a federal right of publicity will have a limited and potentially negative impact on the very people who are supposedly the central concern driving the proposed law. This very concern is central to the demands of SAG-AFTRA as part of its current strike. The actors’ union wants to limit the ability of studios and producers to record a person’s performance in one context and then use AI and visual effects to create new performances in different contexts. As I have written at length elsewhere, a right of publicity law (whether federal or otherwise) that does not limit transferability will make identity-holders more vulnerable to exploitation rather than protect them. (See, e.g., Jennifer E. Rothman, The Inalienable Right of Publicity, 100 Georgetown L.J. 185 (2012); Jennifer E. Rothman, What Happened to Brooke Shields was Awful. It Could Have Been Worse, Slate, April 2023.)

Professor Matthew Sag rightly noted the importance of allowing ordinary people—not just the famous or commercially successful—to bring claims for publicity violations. This is a position with which I wholeheartedly agree, but Sag, when pressed on remedies, suggested that there should not be statutory damages. Yet, such damages are usually the best and sometimes only way for ordinary individuals to be able to recover damages and to get legal assistance to bring such claims. In fact, what is often billed as California’s statutory right of publicity for the living (Cal. Civ. Code § 3344) was originally passed under the moniker “right of privacy” and was specifically adopted to extend statutory damages to plaintiffs who did not have external commercial value making damage recovery challenging. (See Jennifer E. Rothman, The Right of Publicity: Privacy Reimagined for a Public World (Harvard Univ. Press 2018)). Notably, Dana Rao of Adobe, recognizing this concern, specifically advocated for the adoption of statutory damages.

The free speech and First Amendment concerns raised by the creation of a federal right of publicity will turn on the specific scope of, and likely exceptions to, such a law. Depending on the particulars, it may be that potential defendants have more to gain from a preemptive federal law than potential plaintiffs do. If there are clear and preemptive exemptions from liability, this will be a win for many repeat defendants in right of publicity cases who now have to navigate a wide variety of differing state laws. And if liability is limited to instances in which there is likely confusion as to participation or sponsorship, the right of publicity will be narrowed from its current scope in most states. (See Robert C. Post and Jennifer E. Rothman, The First Amendment and the Right(s) of Publicity, 130 Yale L.J. 86 (2020).)

In short, the focus in this hearing on “AI and Copyright” on the right of publicity instead supports my earlier take that the right of publicity may pose a significant legal roadblock for developers of AI. Separate from legal liability, AI developers should take seriously the ethical concerns of producing outputs that imitate real people in ways that confuse as to their participation in vocal or audiovisual performances, or in photographs.