AI: Decoding IP conference - day 1 highlights

Last week London hosted a two-day conference on ‘AI: Decoding IP’ at the London Stadium in the Olympic Park. The event was intended to facilitate dialogue and to help the IPO and WIPO identify the key questions to be addressed regarding the impact of the IP law framework on AI development, and vice versa. Guest contributor Sarah Blair (Bristows LLP) reports on the highlights of day 1:

The conference was opened by Ros Lynch (Director, Copyright & IP Enforcement, IPO) who invited the opening speakers to introduce the sorts of IP issues that arise from questions concerning artificial intelligence.

Chris Skidmore MP (Minister of State for Universities, Science, Research and Innovation, UK) began by reassuring the audience that we are not yet at a stage of being controlled by robot overlords [by way of a video recorded message; his personhood was not confirmed]. He referred to two AI and IP reports that the IPO had published that morning (‘A worldwide overview of AI patents’ and ‘The Music Data Dilemma’) which present some of the IPO’s thinking on the topic of AI.

Next up was Lord David Kitchin (Justice of the Supreme Court, UK), who referred to the disruption of AI as potentially being the fourth industrial revolution. Quoting Professor Stephen Hawking, he observed that AI could prove to be “either the best, or the worst thing, ever to happen to humanity”. Lord Kitchin suggested that the promise of AI is profound, powered by astonishing improvements to jobs, economic growth and an increased quality of life. However, this shouldn’t prevent difficult questions being asked about its implications: does big data create bias? Can such bias be avoided? Are AI processes verifiable? Could AI systems collude, e.g. in pricing systems? Are fundamental rights engaged? There are also important questions for IP law: can AI inventions be patented? Could AI be an ‘inventor’, or does inventiveness require human involvement? The patent system is founded on the principles of incentive and disclosure – but AI machines themselves do not respond to incentives. Would the system be swamped with AI-created inventions? Questions are also raised in relation to copyright – does copyright subsist in works created by AI? Can AI be an author? UK law provides protection for ‘computer-generated works’ where there is no human author – is this fit for purpose? Other laws are also engaged: competition, trade secrets, GDPR… Lord Kitchin suggested that all aspects need to be considered and addressed to identify what our objectives should be for AI and IP.

Francis Gurry (Director General, WIPO, Geneva) took a similar view: discussion is needed to identify what policy settings will best favour innovation in the field of AI. By looking for gaps in the system, we can identify whether new rights are needed or whether the existing system is sufficient. It will also require a geopolitical approach: data is central to the development of AI, but whose data is it? How is data protected? How do we ensure the integrity of the data?

With these many questions swimming in the attendees’ heads, the first panellists dived into the topic of Applications of AI and New Business Models, continuing to discuss the topic of data and AI. Lord Tim Clement-Jones (Chair of House of Lords AI Select Committee, UK) explained that AI systems require huge datasets to develop and be trained, which may give rise to copyright or database right infringement. Whilst exceptions exist (e.g. temporary copy, non-commercial research), they are often not available to developers of AI who, more often than not, have a commercial purpose. Lord Clement-Jones questioned whether there should be a public benefit exception, to take into account the benefits of AI for society. Alternatively, is it in the public interest to grant access to AI solutions and to prevent IP monopolies developing in this area? Andrew Burgess (Strategic Advisor on AI, UK) provided some examples of the public benefit of AI, such as the development of technologies to identify the sale of infringing products online. Dr Anat Elhalal (Head of Technology, Digital Catapult) shared her industry perspective on some of these issues. Often small tech companies need access to ‘big data’ owned by the big companies, yet coming to data sharing agreements can be arduous, often taking more than a year, which is too great a length of time for a start-up. This needs speeding up to facilitate innovation. Dr Zoë Webster (Director of AI and Data Economy, UK Research & Innovation) provided the audience with some interesting examples of the AI solutions being developed, for example Winnow’s food waste initiative, which takes photos of food being scraped into restaurant bins to identify how to reduce food waste.

The next panel discussed AI and IP – Disrupting the Established. Belinda Gascoyne (Senior IP Counsel for IBM Europe, Middle East and Africa) turned the audience’s attention to whether other areas of law might assist in the protection of AI. Trade secrets, for example, need not be registered and offer some level of protection. However, trade secrets protection is arguably unsuitable for the fast-paced nature of AI technologies; it doesn’t prevent independent creation and is subject to one’s ability to keep the technology confidential. Professor Ryan Abbott (Professor of Law and Health Sciences, University of Surrey School of Law) considered how AI disrupts the traditional notions of inventions and inventors. Take the Google DeepMind challenge match, for example: the system developed by playing against itself and now dramatically outperforms people – what is the invention, and is it derived from a human or the AI itself? Numerous people are often involved in the creation of AI works: what extent of contribution is necessary to establish authorship? Daniel Berman (CEO Maccabi Enterprise Development & Chairman, eHealthVentures, Israel) turned the audience’s attention to how ‘AI and Big Data’ is disrupting the medical sector in Israel. Maccabi has compiled medical data to create a user-friendly personal health assistant app, ‘K Health’, to help diagnose conditions from self-reported symptoms. Dr Ben Mitra-Kahn (General Manager & Chief Economist, IP Australia) focused on the disruption of AI to public services. His observation was that people often dislike change; they simply want things to get better. Australia’s trade mark office has implemented an AI tool to guide applicants through the application process. Applications are now 80% more likely to be accepted first time, reducing the time and cost of an appeal. He jested that the only people who oppose this disruption are trade mark attorneys.

The panellists in the third session dug deeper into some of these issues in a session titled Ownership, Entitlement and Liability.

Dr Noam Shemtov (Reader in IP & Technology Law and Deputy Head, Queen Mary University) kicked off the discussion by delving into patent law. With reference to research carried out in several jurisdictions, he considered three questions: 1) Can AI be designated an inventor? The research indicated that this was unlikely – inventiveness requires the deployment of human mental faculties. 2) Should AI be designated as an inventor? Potentially not. The attribution right under patent law serves two objectives, both related to personhood: i) informing the public about the inventor’s involvement to give him/her kudos; and ii) serving as an expressive incentive rather than a pecuniary one. Neither is relevant to AI, and the research did not substantiate an alternative worthy objective specifically for AI. 3) Who is the inventor of an invention involving AI activity? In most cases, this should be a human. Where no human manifests intellectual domination, inventorship must be attributed to the owner or user of the system.

IPKat’s Eleonora Rosati (Associate Professor in IP Law, University of Southampton) turned the discussion to copyright, more specifically to text and data mining (TDM). She explained that there are two aspects of TDM relevant to copyright: 1) extracting information from large amounts of text and data (e.g. mining Instagram photos to determine the fashionable colours in specific countries); and 2) the creation of new copyright works (e.g. the AI-created portrait of Edmond de Belamy, which sold for $432,000 last year). Eleonora pointed out that not all TDM involves copying: the purpose is sometimes to extract information – which copyright doesn’t protect as such – to create a specific output. The new EU DSM Directive provides two mandatory exceptions, one for research organisations and the other for businesses (although rightholders can opt out of the latter). A party conducting TDM now needs either to obtain a licence or to rely on an exception.

The human-centred view of copyright law was reflected upon by Professor Tanya Aplin (Professor of IP Law, King’s College London). A key requirement for copyright protection is originality: the work must be the author’s own intellectual creation, requiring a personal touch. Authorship, she argued, is a distinctly human notion – protection is tied to the author’s death and authors have moral rights to control attribution and the integrity of the work. This was contrasted against related rights, such as sound recordings and films, which can be created by companies and whose term of protection is linked to e.g. the date of creation. Copyright exists to protect creative labour, a uniquely human trait. Professor Aplin suggested that deeper philosophical debates are therefore also necessary when considering AI and IP. Professor Lionel Bently (Herschel Smith Professor of IP, University of Cambridge) expanded on the issues surrounding originality and human authorship: when implementing copyright provisions for computer-generated works in 1988, the UK government didn’t amend the requirements for originality. Yet how can an AI machine make free and creative choices and stamp its own personal touch on a work? Professor Bently doubted whether computer-generated works are compatible with EU law.

The third panel session concluded with Belinda Isaac (Principal, Isaac & Co Solicitors, UK) asking the question: should AI be regulated? Who is liable when things go wrong? We frequently implement regulation to set parameters and to prevent harm to humans, e.g. in medicine. Should it be any different with AI? What of an autonomous vehicle killing a human, or facial recognition systems creating ethnic bias – who is responsible and how do we correct these problems? To provide certainty to the industry, the UK government had to implement legislation confirming that it is the insurers who are responsible for accidents caused by autonomous vehicles, and if the vehicle is not insured it falls to the human driver. Is a similar approach needed for other AI technologies? Belinda suggested that this requires a dialogue both top-down from government and bottom-up from data scientists in respect of responsibility, transparency and accountability.

The fourth and final panel session of Day 1 considered Ethics and Public Perception. Dr Jon Machtynger (Microsoft Solutions Architect – AI and Analytics) suggested that all companies developing AI should develop an ethical framework. Microsoft’s principles include: fairness (everyone should be treated similarly, bias must be identified); reliability; privacy (to obtain sufficient data, people must trust the system); inclusiveness (to start with data from left-out groups before considering mainstream needs, to create a system which meets all needs); transparency (if problems arise, we need to be able to see why, how and when they occurred); and accountability. A sensitive-use framework might also be helpful: will it impact consequential services that a participant would receive, e.g. a loan or medical care? Could there be risk of harm, e.g. military use? Might it infringe someone’s rights, e.g. speech, assembly or association? Whilst science asks CAN we do this, the humanities ask SHOULD we do this. Such frameworks might change, but having something on ‘paper’, Dr Machtynger suggested, is an important starting point. Dr Christopher Markou (Leverhulme Early Career Fellow and Affiliated Lecturer, University of Cambridge) went even further to say that there should be ‘redlines’ – zones of experience where we shouldn’t allow AI to have an impact without regulation first being put in place. For example, machines should not be given the power to decide people’s lives, such as by making decisions in the Family Court. He warned against a laissez-faire approach to AI which might diminish the aspects of human experience that make life meaningful. Dr Markou concluded the day with an interesting thought: society is quick to think that everything is unprecedented, for example when the Kodak camera was invented, but we often later realise that our systems of law are robust and can absorb technological shocks.

Thanks were given to Mishal Husain (broadcast journalist, UK) who moderated all of the panel sessions and the attendees headed off to continue the discussion over drinks and dinner.

Yes, it transpires that machine learning has been applied to create new images of cats, thereby ending the need for the Internet to exist.

Reviewed by Alex Woolgar on Friday, June 28, 2019
