This Kat was very happy to participate in the AI & Creativity: Protecting Creators in the Age of AI Panel, which took place as part of the AI Fringe event on Friday, 3 November 2023 at the Knowledge Centre of the British Library, London. The event was organised by AI Fringe (AI for everyone) and co-convened by DACS (The Design and Artists Copyright Society). Readers can watch the panel recording on the DACS YouTube channel here.
Silvia Baumgart, Associate at Saunders Law, shares her views on the panel below:
As stated on AI Fringe’s website, ‘The AI Fringe is a series of events hosted across London and the UK to complement the UK Government’s AI Safety Summit by bringing a broad and diverse range of voices into the conversation’. I attended one of the last events, a panel discussion about ‘AI & Creativity’.
The panel consisted of artists, the CEO of DACS, an entrepreneur and investor engaged in the commercialisation of products utilising AI systems, and an IP expert. From right to left in the photograph, they are:
Image: DACS
- IPKat's Dr Hayleigh Bosher, Associate Dean / Reader in Intellectual Property Law, Brunel University London
- Matthew Blakemore, Chief AI Strategist at AI Caramba!
- Christian Zimmermann, CEO of DACS
- Keiken, Artist Collective
- Dr Pogus Caesar, Artist
The panel was chaired by Lara Carmona, Director of Policy and Engagement, Creative UK.
I expected to learn more about creators’ perspectives on the way AI machine learning software is trained to create new works, and how this may impact creators’ working practices and their ability to generate an income from the exploitation of their IP rights in the future. To be fair, the discussion touched on most of these issues, but without any explanation of why they have arisen in the context of AI.
Thousands of copyright works have been used to train machine learning software which, according to Elon Musk, has the potential to make humans (including creatives) obsolete in the future. Whether this will be the case is questionable, but in the present, creators have not seen any financial compensation for their important contribution to the development of AI systems. Furthermore, it is not at all clear to what extent creators who use the technology have copyright in the output and, so far, it is not even clear whether the works they create using AI infringe other people’s copyright works. Yet answers to these questions are vital to ensure that creatives working in all areas of the creative industry can earn a living in the future, since most of their income is based on the commercialisation of copyright in their work.
While both visual artists on the panel, Dr Caesar and Keiken, emphasised the usefulness of AI for their work (it makes it possible to create works faster, to illustrate new ideas faster, and to use AI as a creative collaborator), Keiken in particular pointed out that humans are still the driving force behind the creation and that there is a lot of innovation, especially in their area of creative practice, which focuses on ‘imagining the future’. Because of this, humans’ knowledge and innovation should be protected. Keiken emphasised that, in order to create, for example, a VR game, they would want to own their intellectual property and control its distribution rather than assign the IP rights to a commissioner. For Keiken, financial compensation for the creative work they do is important, as otherwise their creative practice would not be sustainable.
Dr Caesar’s view on the usefulness of copyright ranged from questioning its enforceability (which, it is true, has become much more challenging in a digital world) to outright dismissiveness, citing the musician Grimes (notably the ex-partner of Elon Musk), who made her voice freely available for the creation of derivative works using AI technology, without the need to ask for her permission or to offer financial reward.
Dr Bosher added at this point that Grimes, when asked whether she would be happy for her voice to be used to create graphic, racist or violent content, responded that in that case she would object. Rather than ‘killing copyright’ (apparently Grimes’ aim), it appears that she still relies on copyright to control the use of her creative output, but does not seem to realise it.
Mr Zimmermann acknowledged that in the context of creators being asked for their permission, ‘the cat is out of the bag’ since AI systems have already scraped huge amounts of data (including visual images) to learn to create new works without creators even knowing whether or not their works have been used.
He referred to a survey sent to all DACS members to find out more about their concerns about AI. More than 1,000 members responded, a number which shows the importance of a meaningful debate on the issues surrounding AI. While the vast majority are by no means unwilling to embrace the technology, the results also show that 95% of members would like at least to be informed if their work is used to train an AI system, and most would also like to be remunerated [Note that the survey results are not yet published].
In this context, Mr Blakemore of AI Caramba! confirmed that the problem is that many of the big tech companies that have developed AI systems are not transparent about the data they have used, despite having this information and being able to disclose it. In Christian Zimmermann’s view, knowing which creative content has been used matters, since it is not possible to enforce rights in images if it cannot even be shown from which works the AI-generated images are derived.
Mr Blakemore added that, in his view, the tech industry needs to be incentivised to act fairly. The current UK government’s approach of leaving it to tech companies to regulate themselves, via a code of conduct they would voluntarily sign up to, is not enough.
Mr Blakemore also raised the issue of copyright protection for AI-created works. Questions that need to be answered include: ‘Are such works deemed derivative works?’ and ‘Is the human input (in the case of AI software, some key words) enough to make the human an “author” and consequently vest copyright in the “human” author?’
Asked what creators could do to get their voices heard and influence the shaping of policies in this area, Dr Bosher recommended that everyone concerned get in touch with their MP to educate them on the issue.
Dr Bosher added that society shapes not only our attitude towards new technologies but also our position in relation to intellectual property (IP). At the moment, copyright law is ambiguous both in relation to the use of existing copyright works for the purpose of training machine learning software and in relation to the works created by AI. We need clarity, but to achieve clarity the voices of creators need to be heard.
Image: DACS
Dr Caesar also admitted that he would rather somebody asked for his permission if they wanted to use his work for commercial purposes. He also emphasised that transparency is key. The problem with enforcement is that AI takes only small elements of existing works, which makes it difficult to trace the resulting work back to its original. Furthermore, existing works are used internationally, making it almost impossible to enforce rights.
There is currently no solution in relation to works already used by AI companies, and any solution will have to be at an international level. Dr Bosher noted that, in this context, it is unhelpful to polarise sectors, e.g. by pitting the creative sector against big tech companies; rather, we need collaboration.
Christian Zimmermann agreed that a conversation is urgently needed; so far, however, content creators have been left out when they should have had a place around the table. He again confirmed that AI companies know what they have used but will not share this knowledge with creators. A future solution could be to give creators the choice to opt out, although, quite frankly, they should have been asked for their permission in the first place.
I did have some conversations afterwards with friends who are successful illustrators and art directors, presumably the target audience of the event. While they are not experts in copyright law, they were interested laypersons. However, they were quite disappointed by the level of the debate, which seemed to touch on the key issues only in the most basic way. Instead, they felt the debate showed little understanding of commercial artists’ need to sustain their practice by relying on copyright. Concerns of content creators were at times dismissed as driven by ‘fear’ of technology and an inability to adapt. Uncritical optimism about the benefits of AI was, fortunately, sometimes reined in by Dr Bosher, who was able to explain clearly the purpose of copyright and to set out the key issues in this debate. Christian Zimmermann followed by making it clear that, according to the DACS survey, creators do not ‘fear’ the technology; rather, their concerns are driven by commercial considerations, namely being able to make a living from their creative work in the future. Rather surprisingly, Matthew Blakemore was also critical of AI companies’ exploitation of legal uncertainties to profit from content creators’ works without fair remuneration.
The debate would have been vastly improved if the organisers had not assumed that their audience could not understand complex legal issues, and had instead taken the time to educate them ‘in a simple, not time-consuming way’, as Keiken requested right at the end of the debate (a very important point, though somewhat lost coming so late). This could have been achieved by first demonstrating how AI works, as this explains why current copyright law is so ill-suited to protecting content creators’ rights. It also shows why the output of AI may not be protected by copyright under the wording of current legislation. The IPKat has published a very insightful article which explains in detail how AI software learns from existing visual images and how it then generates new works using what it has learned. It would have been easy to start the debate with a similarly useful introduction to the legal issues, linked closely to the technology.
And last but not least, I echo Dr Bosher’s note that in this debate we should not lose sight of a fundamental objective of IP law, namely to balance the need to encourage innovation with (fair) remuneration of creators. It seems that, so far, the focus has been mainly on encouraging innovation, while fair remuneration of creators has taken a back seat in the debate.