Is authorship inherently anthropocentric? Thaler, human authorship and the contracts quietly filling the gap
While the internet has been busy reducing the decision to the soundbite that the Supreme Court’s denial excludes AI art from copyright protection, others have refreshingly pointed out why it is necessary to return to the D.C. Circuit Court judgment and read it carefully. As reported by the IPKat (here), the D.C. Circuit Court's decision was hardly surprising. It is a textbook exercise in statutory interpretation, which is exactly what we should expect a court to do when deciding whether autonomously generated content merits copyright protection.
But what happens when two constructions are possible? (1) The Copyright Act requires human authorship as a matter of ordinary meaning and legislative history; or (2) ‘authorship’ is simply the legal interface through which copyright channels value to those best placed to exploit it, including those directing nonhuman creativity.
Petitioning against an anthropocentric view of authorship
Thaler’s petition disagreed with the anthropocentric construction of the Act, and accused both the District Court and the Circuit Court of ‘import[ing] absent words into the statute’. The petition argued that the human authorship requirement rests on shaky foundations (e.g. the U.S. Copyright Office, Sixty-Eighth Annual Report of the Register of Copyrights), and that any presumption that the 1976 Act's statutory scheme makes humanity a necessary condition for authorship is misguided. According to Thaler, the US Copyright Office had ‘vastly overstep[ped] its authority by engaging in extra-statutory policy making’. In Thaler’s view, not only does the Act fail to explicitly prohibit nonhuman authorship (see § 102(b)), but it encourages it through other nonhuman-related provisions (corporations, governments, and the work-for-hire doctrine). Authorship, in the US specifically, seems to be more of a doctrinal vehicle for allocating rights and incentives (perhaps in a capitalism-forward copyright order?) than an anthropocentric account of creativity. Indeed, some commentators suggest that the traditional concept of authorship no longer exists in the US.
Even if the requirement were doctrinally well-founded, Thaler argued, it lacks clear criteria, a problem compounded by the US Copyright Office's inconsistent enforcement. The petition alleged that the US Copyright Office’s human authorship requirement rests on two arguments: (1) that human beings are not responsible for creative choices when AI is used; and (2) that the use of AI involves randomness. Such an approach, according to Thaler, would have disastrous effects on copyright registration for photography.
The private law of AI authorship
Despite Thaler’s plea failing, the petition rightly reflects that the creative industries are already in the midst of pivotal change given AI’s increasing economic use, and reminds us that: ‘Soon, if it is not already the case, the vast majority of commercial works registered for copyright will rely at least in part on generative AI tools’. Lacking legislative intervention, it is contract law, not copyright, that increasingly shapes the market value of genAI output. While some may remark that copyright can flexibly subsume AI-assisted human creativity (e.g. US Copyright Office Part 2 AI Report), recent decisions suggest a higher threshold for user prompting, with China’s Li v Liu being a notable outlier (compare Théâtre D’opéra Spatial, Zarya of the Dawn, the Czech DALL-E case, and the Munich Logo case). This has not deterred platforms from allocating rights over genAI output.
Terms of use for OpenAI, Anthropic, and Midjourney outline that users own genAI content following assignment with a separate requirement that input does not infringe other copyright works. These terms also carve out a licence for these companies to use genAI content to provide, maintain and improve their services including model training. This is generally subject to an opt-out and a prohibition against users using the output to train competing models. Further downstream, platforms like Adobe Stock allow contributors to submit genAI images, vectors or videos.
In contrast, Getty Images and Shutterstock do not allow genAI content to be submitted, given the difficulty of assigning IP to an individual when a model’s training data comprises multiple works, which complicates artist compensation. Interestingly, both Getty Images and Shutterstock offer in-house genAI licensing tools, creating new AI-related revenues in the face of legal uncertainty over AI authorship. These concerns seem to be addressed through internal compensation frameworks (e.g. the Shutterstock Contributor Fund) alongside external certification models (Fairly Trained).
Perhaps these contractual frameworks, despite the absence of helpful guidance on the status of genAI content and originality, construct their own market for AI-related authorship.
Photo by Burcu on Unsplash
This Kat reclines, unbothered by authorship disputes,
content to let the humans sort it out
A perfect balance of legal ambiguity and commercial certainty
The human authorship requirement outlined by the D.C. Circuit Court of Appeals conveniently preserves the commercial status quo. By excluding autonomous genAI content from copyright protection, authorship continues to function as a doctrinal vehicle for allocating rights to creative value, including AI-generated value. So it turns out that the two constructions of the Act are not mutually exclusive.
It simply requires all genAI output to meet an ambiguous human creativity threshold that the Court fails to address, one that most AI platform users are likely unable to demonstrate (given the cases above), rendering platform ownership terms ineffective. Regardless of whether such users should normatively own genAI content, this twist of fate is particularly telling, as the same terms also allocate to users the copyright liability risk regarding input.


An important function of a Supreme Court is to take into account how policy issues impact individual cases. At this time humanity has not decided to give AI a legal status as a person, and the Supreme Court is mindful of that. It is not really about what US copyright law says. When we come up with a new legal status for AI, then each area of law will need to decide how AI fits into it. We cannot make this huge leap based on a copyright case; instead, some sort of law changing the status of AI will need to come in first, based on all the issues it seeks to resolve. Only then can IP laws be reinterpreted or even rewritten. If US copyright law was not written with AI in mind, it cannot now be used to decide how AI fits into the IP system, and so we need to keep fudging/deferring thinking about the issue as much as we can for now.
Just to be clear, the US Supreme Court did not decide anything.
Thank you anonymous. The US Supreme Court decided it didn't want to hear about this dispute. That is possibly for the reasons I go through, but yes, I am only guessing.