[Guest post] Litigation commenced against the developers of AI image generation software

Katfriends Adrian Aronsson-Storrier and Oliver Fairhurst from Lewis Silkin report on recent litigation in the UK against the developers of AI image generation software. This litigation has arisen amid a flurry of recent interest in AI generated works. In just the past year the IPKat has hosted a number of contributions on IP and AI generated creations/inventions, including book reviews, a report on the US Copyright Office decision to reject protection for an artwork created by a machine, analysis of the UK IPO consultation on AI and IP, and updates on Dr Thaler’s attempt to name the DABUS algorithm as an inventor on a patent application (with the next thrilling instalment in the DABUS saga due before the UK Supreme Court in March).

Here's what Adrian and Oliver write:

Litigation commenced against the developers of AI image generation software

by Adrian Aronsson-Storrier and Oliver Fairhurst

On 15 January 2023, the founder and CEO of Stability AI, a key company behind the AI image generation tool Stable Diffusion, tweeted that he believed the images used to train his AI tool were “ethically, morally and legally sourced and used” but that “some folks disagree”. Within days, stock image supplier Getty Images announced that it was one of the “folks” who disagreed – and disagreed so strongly that it had commenced legal proceedings in the High Court in London alleging copyright infringement.

While at this stage the particulars of Getty’s claim are unavailable, its press release states that Stability AI “unlawfully copied and processed millions of images protected by copyright and the associated metadata owned or represented by Getty Images”. It is unclear whether the claim relates solely to the training of the Stability AI system, or whether it also covers the outputs of that system.

This blog post speculates on the issues that may arise in that litigation and other similar cases. To do so, it first discusses how AI image generation tools are trained and how copyright protected images are involved in that process.

What is AI image generation software?

The current generation of AI image generation tools such as Stable Diffusion, Midjourney and DALL·E 2 are designed to take a text description or prompt from a user and generate an image that matches the prompt. For example, if we ask the Stable Diffusion generator for a “cat wearing a suit” it generates images of dapper cats at the press of a button:

As an aside, as has been discussed previously on the IPKat, there is some uncertainty as to whether these image outputs are protected under copyright law and, if so, who owns the relevant copyright. The issue is often discussed in connection with section 9(3) of the Copyright, Designs and Patents Act (UK) (CDPA), which provides that, in the case of an artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken. There has been limited case law citing section 9(3), and there remains some ambiguity and academic debate on the ownership of computer-generated works under English law.

In this instance, however, the user generating the images was in Ireland and the online software model generating the images was hosted in the US. While the Irish Copyright and Related Rights Act includes a similar provision concerning the authorship of computer-generated works, Irish academics have noted that this provision may be inconsistent with the EU acquis. Last year the IPKat reported that the US Copyright Office refuses to register AI-generated works and requires human authorship as a prerequisite to copyright protection. To add further complexity, under the Stable Diffusion license, Stability AI claims no rights in the outputs of the model as long as the user has complied with use restrictions designed to prevent the dissemination of harmful materials. In summary, the availability of copyright protection for these images of besuited felines would make an ideal exam question for IP students.

How do you train AI tools?

In order to understand the merits of the Getty lawsuit it is useful to understand how AI image generation tools are created and the ways in which they may be trained using copyright protected images. Last month the IPKat hosted a detailed technical analysis of the operation of such tools. By way of recap, as a first step in learning how to create images in this way, these tools draw upon huge datasets of pairs of text and images. This process allows the AI to learn connections between the structure, composition and data within an image and the accompanying text. The Stable Diffusion AI has utilised a dataset called LAION-5B, developed by a German non-profit. LAION-5B comprises 5.85 billion links to images stored on websites, each paired with a short description of the image. By analysing images from the dataset which are described as ‘cat’, the AI learns to identify when an image does or does not contain a cat. It is possible to search for entries within this dataset, and it appears that some of the images within the dataset have watermarks:
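As a toy illustration of this first step, the text–image pairs in such a dataset can be modelled as caption-to-URL records filtered by keyword. This is a minimal sketch only; the URLs and captions below are hypothetical stand-ins, not real LAION-5B entries:

```python
# A toy stand-in for a text-image dataset: each entry pairs a link to an
# image hosted on some website with a short caption, as in LAION-5B.
# All entries below are hypothetical.
dataset = [
    ("https://example.com/1.jpg", "a cat wearing a suit"),
    ("https://example.com/2.jpg", "a dog in the park"),
    ("https://example.com/3.jpg", "portrait of a tabby cat"),
    ("https://example.com/4.jpg", "city skyline at night"),
]

# Training examples for the concept "cat" are simply the pairs whose
# caption mentions it; note the dataset stores links and text, not
# the image files themselves.
cat_pairs = [(url, caption) for url, caption in dataset if "cat" in caption]
print(len(cat_pairs))  # 2
```

Searching the real dataset works on the same principle, at the scale of billions of caption–link pairs rather than four.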

In the second step of training, the AI model is taught to generate images from random noise or static, through a process called diffusion. To achieve this, the AI is given an image to which small amounts of noise are gradually added, corrupting the image and destroying its structure until it appears to be entirely random static. As this process occurs, the AI learns how the addition of noise changes the image. An example of an image undergoing this training process is shown below:
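This gradual corruption can be sketched numerically. The snippet below is an illustrative toy, not Stable Diffusion’s actual code: it treats a short 1-D signal as a stand-in for an image and applies a standard diffusion-style noising update under an assumed linear noise schedule:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a 1-D signal standing in for pixel values (hypothetical data).
image = np.linspace(-1.0, 1.0, 64)

# An assumed linear noise schedule; real models use carefully tuned schedules.
betas = np.linspace(1e-4, 0.2, 100)

x = image.copy()
for beta in betas:
    # One forward-diffusion step: blend the signal with fresh Gaussian
    # noise so that, over many steps, the original structure is destroyed.
    noise = rng.standard_normal(x.shape)
    x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise

# After many steps the corrupted signal barely correlates with the
# original image: it is effectively random static.
corr = float(np.corrcoef(image, x)[0, 1])
```

The point of the exercise during training is that the model observes how each small addition of noise changes the image, so that it can later learn to undo those changes.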

The AI model is then trained on a reverse diffusion or denoising process, which is designed to restore or create structure and recognisable images from random static, as shown below:
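A toy sketch helps show why this reverse process is learnable at all: if the noise added at each forward step were known exactly, every step could be inverted algebraically. The snippet below (illustrative only, with hypothetical data) records the noise during corruption and then undoes it; a real diffusion model instead trains a neural network to *predict* that noise from the corrupted image alone:

```python
import numpy as np

rng = np.random.default_rng(1)
image = np.linspace(-1.0, 1.0, 64)   # toy 1-D "image" (hypothetical data)
betas = np.linspace(1e-4, 0.2, 50)   # assumed linear noise schedule

# Forward (corruption) pass: add noise step by step, recording each sample.
noises = []
x = image.copy()
for beta in betas:
    eps = rng.standard_normal(x.shape)
    noises.append(eps)
    x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * eps

# Reverse (denoising) pass: invert each step using the recorded noise.
# A trained model has no such record and must predict eps instead.
for beta, eps in zip(betas[::-1], noises[::-1]):
    x = (x - np.sqrt(beta) * eps) / np.sqrt(1.0 - beta)

# Only floating-point residue separates the result from the original.
err = float(np.max(np.abs(x - image)))
```

Once the network predicts the noise well enough, the same reverse procedure can start from pure static and still converge on a coherent image.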

Finally, once it has undergone multiple rounds of this training the AI model can combine these techniques to generate entirely new images in response to a user text prompt. To generate an image the AI model takes random static (or a ‘seed’) along with the user’s text description of the image they are seeking. The model then undertakes a process which is analogous to the step of reversing image corruption shown above to create an image matching the user’s prompt.

It is important to note that, while the Stability AI model was trained on images in the LAION-5B dataset, those images are not stored or reproduced within the final released version of the Stable Diffusion model. Instead, the model has analysed the similarities between the countless cat images in LAION-5B and has stored information about the patterns or structural similarities in these images, which permits it to identify or generate an entirely new image which matches its criteria for a cat. This is analogous to human memory: we may not have perfect photographic recall of every cat we have ever seen, but after having seen a few, we are able to describe the general features of a cat and distinguish a cat image from other images.

Does training a generative AI infringe copyright in the training materials?

As discussed above, images with a Getty watermark are included in the LAION-5B dataset. This suggests that it is likely that Stability AI drew upon those images as part of the training process for the Stable Diffusion model. Given the expansive interpretation of the reproduction right, it is likely that the training use of any images owned by Getty will constitute a reproduction and infringe, unless a defence applies. This is because a reproduction occurs even where the copy of a work is transient or temporary, so a temporary copy of a file in the memory of a computer, or accessing an internet stream of a film, may constitute infringement.

A key question for the forthcoming litigation will be whether Stability AI is able to rely on any copyright exceptions to evade liability. If the training of the AI model occurred within an EU jurisdiction, it may have been permitted under the new exception for text and data mining in Article 4 of the Copyright in the Digital Single Market Directive. In the UK, the current text and data mining exception in section 29A of the CDPA is restricted to research for a non-commercial purpose. While versions of the Stable Diffusion model have been released under a permissive MIT or “Open and Responsible AI license” permitting royalty-free access, Stability AI monetises its models through services such as DreamStudio and may be considered to have a commercial purpose. Stability AI’s website suggests, however, that the training of the Stable Diffusion model occurred with the assistance of a German university research group, and it may therefore be necessary to consider the applicability of the German text and data mining exception in section 44b of the German Copyright Act.

Stability AI may also argue that the temporary copy exception in section 28A of the CDPA permits the reproduction of Getty’s images to train the model. This exception provides that copyright will not be infringed by the making of a copy which is:

1. temporary;

2. transient or incidental;

3. an integral and essential part of a technological process;

4. made for the sole purpose of enabling… a lawful use of the work; and

5. of no independent economic significance.

The CJEU has held that this exception “is intended to ensure the development and operation of new technologies” and the UK Supreme Court has held that the conditions for the exception “are overlapping and repetitive, and each of them colours the meaning of the others. They have to be read together so as to achieve the combined purpose of all of them”. Stability AI might argue that, as discussed above, its AI model only created temporary and transient copies of its training images, and that any use of the images to train the model was analogous to a human browsing the internet or using a search engine, which the UK Supreme Court has indicated does not, in the normal course of affairs, infringe copyright. The UK courts have not previously been asked to apply this copyright exception to the AI training process, and any decision on this point has the potential to provide useful clarity on the legality of developing and training generative AI.

There is the added issue of whether Stability AI could be bound by the terms of use of the Getty Images website, which expressly forbid “any data mining, robots or similar data gathering or extraction methods”.

Would the output of an AI tool infringe the copyright in the training materials?

There is also the issue of whether the output of an AI tool infringes the copyright of works on which that tool was trained. If, for instance, an AI tool creates an image that looks very similar to a famous image or illustration, has the tool (or more accurately its owner) infringed the copyright in that work?

To take an example, one can ask an AI image generation tool to create an image of Marilyn Monroe in the style of Andy Warhol and be presented with a fair representation of the famous Marilyn prints. The fact that the tool even knows what the ‘style of Andy Warhol’ means suggests that it has been trained on Mr Warhol’s works, and the output appears to confirm that.

The tool may of course have come up with the output entirely independently, with the similarity arising purely or largely by chance or from the intervening user input. However, given that Mr Warhol’s work is widely available online and the output is very similar, the alleged infringer may bear the challenging burden of showing that the earlier work was not copied. That carries very significant evidential problems, and an alleged infringer may struggle to convince a judge that there was no copying. The creators of AI tools may therefore need to take care to document the works on which their tools were trained and to consider measures to prevent infringements. The UK and USA will be attractive destinations for such litigation by virtue of their procedural rules on disclosure/discovery.

As with all new technologies, analogy can be made to earlier technologies and the litigation that surrounded them. In the 1980s, a dispute arose between certain musical rights holders and the manufacturers and retailers of dual tape decks that were, it was claimed, designed to facilitate the copying of music. That claim ultimately failed, and the manufacturers/retailers were not liable for the (mis)use of their products. The analogy is obvious, but where an AI tool actively encourages people to create images ‘in the style’ of particular artists, the line may have been crossed.

What next?

This recent litigation takes place against a backdrop of increased public debate around the law and ethics of AI generated art. Recent articles in the press have suggested that AI generated art represents a ‘grotesque mockery of what it is to be human’ and a threat to the livelihoods of artists. The House of Lords Communications and Digital Committee has also expressed concerns about the impact of AI art and the “potential harm to the creative industries”. The legality of training AI image generators is also being debated and challenged in the United States, where a class action complaint was recently filed by artists against Stability AI in the US District Court for the Northern District of California.

This public attention and these recent cases may act as a catalyst for further legislative attention to this space, including in relation to any new copyright exception to facilitate text and data mining in the UK. In June 2022 the UK government announced a plan to introduce a new copyright and database exception permitting text and data mining for any purpose, and in July 2022 released a policy paper outlining the government’s “pro-innovation approach to regulating AI”. The UK government’s text and data mining proposals were, however, criticised by representatives of the creative industries, and in November 2022 the Minister for Media, Data and Digital Infrastructure suggested that the exception “is not going to proceed” and that the UK IPO would extend its consultation on the issue. The proposals were also criticised in January 2023 by the Communications and Digital Committee, which stated that the proposed changes were misguided and that “developing AI is important, but it should not be pursued at all costs”. It will therefore be interesting to see how the government responds to these developments and attempts to balance the interests of the creative and technology sectors as part of its 10-year plan to make Britain “a global AI superpower”.
Reviewed by Nedim Malovic on Friday, February 03, 2023
