This post is brought to you by Oprah Nwobike, who is a lawyer and doctoral researcher at Brunel University London focusing on copyright and artificial intelligence in the music industry, under the supervision of IPKat's Dr Hayleigh Bosher.
Impact of AI in the creative industries
How would you feel if you found out this blogpost was written by AI? This has become a lingering question in the minds of consumers of online content. With the sudden popularity and accessibility of AI systems such as OpenAI's ChatGPT, much attention has turned to the regulation and governance of the industry.
Hayleigh Bosher and Coran Darling giving evidence at the Inquiry
Artificial intelligence is currently a highly debated topic in the intellectual property field. The UK government's approach to AI regulation appears to be focused on increasing innovation by giving businesses incentives to invest in the AI industry, with the ambition of strengthening the UK's position as a global leader in AI. Meanwhile, the Science, Innovation and Technology Select Committee is conducting an inquiry into the impact of AI on several sectors, including the creative industries.
On 10 May 2023, the Committee held an oral evidence session in the UK Parliament on the impact of AI on the creative industries in relation to copyright law, featuring one of the IPKat team, Dr Hayleigh Bosher, alongside Jamie Njoku-Goodwin of UK Music; Paul Fleming, General Secretary of Equity; and Coran Darling, Associate, Intellectual Property and Technology, at DLA Piper. You can watch the full session here.
Does AI create?
The technology behind AI is complicated; at its core, however, AI engines ingest massive data sets which are used to train software that can generate code, images, sounds, or text. AI systems adapt through progressive learning algorithms, continuously learning from the tasks they perform and the data they are fed. When an AI system runs a sequence of data processing, it measures its own performance and develops additional skills. It can run through a large number of tasks extremely quickly, which allows it to become more capable at the tasks it has been trained to perform.
How does AI threaten the creative industries?
The use of AI technology to generate images, music and other creative works has legal implications for the copyright and related rights of creators and rightsholders. As the creative industries are responsible for 6% of the UK's GDP, this dialogue is particularly important with regard to the protection of creators' rights.
There seems to be resistance to treating what generative AI does as creativity, since AI output is inevitably compared to human creative works. This position rests on the fact that AI learns from works created by humans, so it is merely generating output rather than performing a creative act, a view Jamie Njoku-Goodwin shares. For example, Google's MusicLM turns text descriptions into music, but the AI itself is trained on a dataset of 280,000 hours of music in order to generate coherent songs from a user's prompt. AI companies take music created by humans and generate ‘new’ AI music based on what the system has been trained on. Human creativity, on the other hand, involves a much more complex process that cannot be replicated by AI, as it entails several processes that are inherently human. Because human creativity involves expressing ideas based on past experiences, imagination, cultural influences, and other exclusively human experiences, the resistance to deeming AI ‘creative’ is understandable.
Performance synthetization is a huge part of the creative industries, especially gaming. However, due to the lack of clear regulation in this area, there is uncertainty around the use of a person's likeness and voice, and there is no legal framework to protect against the misuse of this technology to create deepfakes. Paul Fleming identified several issues with the use of AI to create deepfakes, a form of performance synthetization, which are created without the consent of the rightsholder of the images. He referred to research suggesting that over 96% of deepfakes online are pornographic in nature and depict women, affecting their name, brand, personality and what they represent, thereby highlighting the necessity of adequate regulation.
Is AI just a tool? Image: Riana Harvey
Kat Dr Hayleigh Bosher identified the lack of clarity in the technical application of the law in these instances. She emphasized the urgency of the matter: AI poses an immediate threat to creators because of the rapid development of AI activity, such as the advances in AI voice technology behind the song ‘Heart on My Sleeve’, which was generated in the style of and using the voices of Drake and The Weeknd. Unlike older AI systems that chop up and rearrange pre-existing recordings, these AI systems create new sounds that resemble a target voice. In this instance, where AI is being used to create music that sounds like an artist, the artist has few legal options to reliably protect themselves.
Text and data mining exception – a suitable option?
The current UK legislation provides a copyright exception for text and data mining (TDM) under section 29A of the Copyright, Designs and Patents Act 1988, limited to non-commercial research and to copyright works to which a person already has lawful access. However, from 29 October 2021 to 7 January 2022, the UK Intellectual Property Office (IPO) ran a consultation on AI and intellectual property covering text and data mining using copyright material. The Response to Consultation Report notes that the UK IPO planned to introduce a new copyright and database exception allowing TDM for any purpose, including commercial use.
This decision was met with backlash, particularly from the creative industries. One major concern is that the exception would result in no economic reward for creatives who hold copyright in the works used to train the AI. In addition, there was a concern that the threat generative AI built by tech companies already poses to creatives in the industry would only be worsened by the introduction of such an exception. The House of Lords Communications and Digital Committee said the proposal was “misguided” and should be scrapped. The government then appeared to backtrack in February, with science minister George Freeman saying it would not take the exception forward. However, the subsequent publication of the A Pro-Innovation Approach to AI Regulation White Paper alluded to the Government reconsidering this exception.
Njoku-Goodwin said that there has been a ‘lack of interaction with rights holders’ and that there is no evidence that AI companies have sought permission to use their works; they are taking people's work without recognizing the input behind it. He added that this process reduces creative output to mere data and numbers rather than treating it as a human creation, which in turn undermines the benefit of having copyright in the first place.
The lack of interaction with rightsholders might be due to the uncertainty around whether a licence is required for an AI system to ingest a music catalogue in order to generate a work. Then, of course, there is the additional question of whether the AI-generated output can infringe copyright under the current legal test.
Recommendations by the Witnesses
The ratification of the Beijing Treaty on Audiovisual Performances is a step most of the witnesses deemed a positive starting point. This international instrument will confer protection on performing artists for fixations of their work in an audiovisual medium and ensure compensation for the use of their creative contributions. It grants performers economic as well as moral rights, giving them the ability to protect their image and to benefit financially, thereby providing a legal framework that directly addresses deepfakes and performance synthetization.
The witnesses suggested that the current IP legal framework is not fit for purpose, as it was written without the complexities AI now poses in mind. Even the computer-generated works provisions in the CDPA 1988 appear only to address AI as a tool that assists in creativity, not one that generates. They agreed that the current framework is an analogue one in a digital age, no longer appropriate for dealing with AI-generated works.
Coran Darling suggested that AI has to be seen as a tool. However, this position does not take into account the automation of generative AI, where there is barely any input from the programmer or user. Therefore, with the advancements of recent years, I would argue that AI has surpassed such a classification.
On top of updated legal measures, the witnesses suggested that the implementation of a code of practice could enable the management of the major risks AI poses. This could serve as a way to return control to rightsholders and perhaps provide an opportunity for all stakeholders to create a more balanced approach in which human creativity is valued while still enabling the development of AI.
Concluding Thoughts
The creative industries have benefited from some of the technological advancements provided by AI, which has redefined what creators can accomplish thanks to its ability to perform complex tasks. For example, in the gaming industry it enhances player experience through adaptive gameplay; it helps musicians generate music in a range of styles; and it can automate image editing tasks.
However, if the issues that stem from a copyright framework ill-suited to the use of these advanced technologies are not appropriately addressed, there will be a dangerous presumption that AI creativity and development are valued more than human creativity. Moreover, failing to adequately protect creative workers will see them displaced by AI that uses their own work at their expense.
The issues highlighted in the session included the non-regulation of performance synthetization, the lack of dialogue between AI companies and rights holders, and the lack of clarity on the technical application of the law. An overhaul of the current legal framework and of the approach to regulating AI technology is therefore necessary to ensure adequate protection for human creators while allowing AI technology to advance. To achieve this, the UK government will have to consider the positions of both creatives and AI tech companies, and copyright will need to be updated to balance these interests fairly, protecting creators and rightsholders while ensuring the advancement of the AI industry.
Watch the full session here and keep up to date with the Inquiry and future sessions here.
Thank you for a useful and thoughtful post. I don't know how easy or valid it is to distinguish "just generating output" from "performing a creative action". If a human did either they might be perceived as being "creative" but there's a significant bias (in business, society, culture, law etc) against acknowledging AI creativity, probably because it's not something we've ever previously had to consider. It's true to say that AI can't (currently) directly replicate human creativity, but I'm not sure it's valid to say that the latter is a "much more complex process", nor that creativity should be seen as "inherently human" (unless you subscribe to eg the Lovelace Objection). To take a steer from Turing, asking whether AI can be creative is the wrong question - it's much more productive to ask whether AI can indistinguishably imitate humans (who are, undoubtedly, capable of creativity).