Thanks to Katharine Stephens (Bird & Bird) for this guest post:
On 27 September 2019, WIPO held a Conversation
on IP and AI; a fascinating day of presentations and discussion on the
impact of AI on IP systems, IP policies, IP rights management and international
cooperation on IP matters. The list
of presentations and the presenters'
biographies give you a view of the breadth and depth of the discussion. However, this report must start with an
apology: it covers only the legal discussions, and only in the briefest terms,
and omits all mention of the presentations on the use of AI by the EPO and
other patent offices.
Francis
Gurry, Director General of WIPO, opened the
day by stressing the need to engage now with the issues raised by AI, noting that
governments are starting to engage on strategy, on opening up data for commercial
use, and on the need for regulation. The
day's "conversation", together with WIPO's recently
published landscape of patent data in AI and its development of
tools using AI applications for administration of IP (which were described
and in some instances demonstrated during the course of the day), was all part
of WIPO's engagement with those issues.
The impact of AI on the IP system and IP policy
There was a very upbeat note sounded by Andrei Iancu, Under
Secretary of Commerce for IP and Director of the USPTO, at the start of the first
panel discussion. AI has the potential to
be the most disruptive technology we will see in our lifetime, but that does not
mean that it has to be seen in a negative way, and we were asked to consider some
of the many positive examples of AI, such as personalised medicine. As to policy, AI raises hugely important
questions on IP law. For example, if a
machine invents, is it an inventor? If
it is, who owns the IP? If an AI
innovation is a black box, how can the requirements for
disclosure be satisfied? Is a new right needed to
protect the datasets used to train AI systems so that companies are encouraged
to share them? These are just some of
the questions
on which the USPTO has requested comments (deadline for responses extended
to November 8, 2019).
In the same session, Nuria Oliver, Chief Data Scientist in
Data-Pop Alliance and Chief Scientific Advisor to the Vodafone Institute, noted
that AI-generated content was not only dependent on the data input (the
same algorithm might generate a work of art or a cartoon depending on the input),
but was also something of a black box. This
was echoed by Zeng Zhihua, Director General in Automation Department and Patent
Examination Cooperation Guangdong Center, CNIPA. In particular, Zeng noted that if AI was to
be used by patent offices, its decision-making processes had to be transparent.
AI and patents
Belinda
Gascoyne, Senior IP Law Counsel, IBM, noted that IP laws were written when
there was only a human author in contemplation.
These are now being put to the test; for example, the University of
Surrey has announced
that its creation machine, DABUS, is named as the inventor of a new food
container, the subject of patent application GB1816909.4. Belinda also discussed the issue of the
patentability criteria for AI-related inventions and the lack of coherence as
between countries and even within some countries as between patent office and
courts. She noted that the EPO had amended
its Guidelines for Examination in November 2018 to include a section
dealing with AI, but the referral pending before the EPO's Enlarged Board
of Appeal in G 1/19
could pose a threat to the stability of the test of what is patentable in
Europe. Belinda agreed with AIPPI's 2017
resolution on the patentability of computer implemented inventions ("CIIs")
which stated that "patents should be available, and patent rights
enjoyable, without discrimination for inventions in all fields of technology,
including CIIs". This was echoed in
the "refreshing" approach taken by the JPO in their recently updated examination
guidance and the case
examples pertinent to AI-related technology.
Beat Weibel, Chief IP Counsel, Siemens, contrasted new
inventions that incorporate AI technology with inventions made by AI. The former were CIIs and should not be
treated any differently from other CIIs. AI-generated inventions should be patentable,
but the question was who should be the inventor? One could "pretend" that a natural
person is the inventor (not a good solution in the long run); or nominate a
machine as an inventor (also not a good solution as machines do not have rights
or duties); or, and this was his preferred solution, expand the definition of inventor
to include the legal person who controls the AI system.
Zhixiang Liang, Vice President and General Counsel, Baidu,
underlined the need to ensure respect for data privacy and safety. He referred to the book 'AI Superpowers' and noted that AI, for a big company such as
Baidu, meant super-responsibility.
Socio-economic and
ethical impacts of AI
The IPO's Chief Executive and Comptroller General, Tim Moss,
moderated the third panel. Nuria Oliver
again spoke and introduced us to the acronym FATEN to describe 5 dimensions of
the ethical principles raised by AI, namely:
F: fairness
A: (human) autonomy and accountability
T: trust and transparency
E: (b)eneficence, education and equality
N: non-maleficence.
Tom Ogada, Executive Director, African Centre for Technology
Studies, rightly warned the audience about the widening technology gap between
countries and was hopeful that, not only could AI improve productivity, but
that it could also increase numbers of jobs.
AI and copyright
In the afternoon we were treated to a discussion on whether
AI will change human creativity and how AI-generated works should be protected,
chaired by Karyn Temple, Register of Copyrights and Director, US Copyright
Office.
Pierre Sirinelli, Professor of Private Law and Criminal
Science at the University of Paris took us through the arguments on whether copyright
exists in AI created works. His starting
point was that the form and identity of a work produced by an AI system was the
same as one created by a human. However,
if we maintain that originality is an imprint of personality then such a work
will not be protected. Although it might
be possible to sidestep that test by saying that an author makes arbitrary
choices and a machine does the same thing, this would ultimately not work
because people make subjective decisions whereas machines are cold and
objective. Looking to countries where a work
receives copyright when effort and investment have been made does not help either,
because that effort and investment go into creating the AI system, not the
work, which is generated at the press of a button. Another problem in the copyright system is that
a work has to come from a physical person.
It might, again, be possible to get around this requirement by looking
at those copyright systems which immediately transfer copyright such as, in an
employment scenario, from the employee to the employer. However, this is still not a solution since there
is no person creating the work and so no one who can make the transfer. He concluded that if we want protection for
AI-generated works, we must either find a right outside copyright – a new sui
generis right – or abolish the link to a physical person within our copyright
rules.
Pravin Anand, Managing Partner, Anand and Anand, started by
telling us that the Indian courts have considered that idols and animals are
legal entities for the purpose of owning copyright. If an animal or idol can be a person for the
purposes of copyright then a machine should also qualify. He backed this up by considering a number of
deeming provisions under Indian law: the
author of a film is the person who took the initiative and took responsibility
for its creation; in relation to computer-generated works, the author is the
person who has caused the work to be created; and an employer owns a work
created by an employee in the course of his/her employment. He thought that a similar deeming provision
could be used for AI.
Andres Guadamuz, Senior Lecturer in Intellectual Property
Law, the University of Sussex and author of "Do androids dream of electric
copyright?", asked the question:
since machines are getting so good at creating works, if those works
remain in the public domain, what will happen to works generated by human
authors? Will the humans be able to compete in the marketplace? He plays a game
with his students which he calls "bot or not" where he shows them
photos and paintings and plays music and reads poems to them. Over the last 5 years, the students had found
it increasingly difficult to identify those works created by a human and those
created by AI. He concluded that the UK
has the best system to deal with AI-generated works under section 9(3) CDPA
1988 which provides that copyright in a computer-generated work goes to the
person by whom the arrangements necessary for the creation of the work are
undertaken. Similar provisions also
exist in Ireland, NZ and in India.
Kats, of course, dream of regular sheep (and mice)
Tobias McKenney, Associate Copyright Counsel, Google, noted
that the text and data mining ("TDM") exception was a very important
touchpoint when considering whether the underlying data should be open to all. Questions that have to be borne in mind are:
is AI safe and is it discriminatory? How
do you answer those questions? Looking
at the process does not answer the question of whether the output is safe. The big debate at the moment is how to make
sure that AI is explainable. He gave the
example of a complaint that had been made about Quick, Draw!, a game where the
computer tries to guess what you are drawing.
The more it is used, the better the machine learning gets. At one point, and to Google's surprise, a
user complained that it was biased because, although it was good at recognising
drawings of sneakers, it was bad at recognising high-heeled shoes and ballerina
shoes. But bias can be much more serious
than that and Tobias gave us three points to consider:
- If data in the public domain, such as literary works written before 1870, are used in the input to an AI system, think about what those works may say about gender and race.
- If you need to demonstrate safety and reliability of an AI system, you cannot have a TDM exception which requires you to destroy data immediately after it has been used.
- Datasets have to be observable and if copyright laws make that an infringement, then, again, how can you show that an AI system is not biased?
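The sneaker/high-heel anecdote illustrates why datasets and model outputs need to be observable: the simplest bias signals come from breaking performance down by category. A minimal, purely illustrative sketch (the category names are hypothetical, not taken from any Google system):

```python
from collections import defaultdict

def per_class_accuracy(predictions, labels, classes):
    """Accuracy broken down by class: a large gap between classes
    (e.g. 'sneaker' vs 'heel') is one of the simplest bias signals."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, true in zip(predictions, labels):
        total[true] += 1
        if pred == true:
            correct[true] += 1
    # None marks classes with no examples at all -- itself a bias signal.
    return {c: correct[c] / total[c] if total[c] else None for c in classes}
```

A recogniser scoring 1.0 on "sneaker" but 0.25 on "heel" is exactly the kind of imbalance the complaint described, and spotting it requires access to the labelled data in the first place.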
AI and data
Erich Andersen, Corporate VP and Chief IP Counsel,
Microsoft, started the debate on data protection and free flow of data by
pointing out that we needed to think about privacy law as a 'new' protection
for data. Microsoft had drafted three data sharing agreements
which had been annotated with lots of helpful legal points in an effort to
remove barriers and empower people and organisations to share and use data more
effectively.
Jonathan Osha, Reporter General of AIPPI, had asked AIPPI's
members whether a new right was needed to protect data used in AI systems; and
to his surprise they were split 50:50. To
get to a solution, therefore, he realised that the question had to be more
specific i.e. are we talking about vast pools of unstructured data or training
data? Each scenario raises different IP
questions and the balancing of different interests such as increasing innovation,
societal benefits and privacy issues. The
question of whether to create a new right is going to be the subject of an
AIPPI study question for next year. AIPPI's
resolution on copyright in AI-generated works is about to be published (spoiler
alert - see the IPKat's
report of the debate at the AIPPI Congress on this topic).
Andreas Wiebe, Chair for Civil Law, Intellectual Property
Law, Media Law and Information and Communications Technology Law, University of
Göttingen, pointed out that there was
no copyright in raw data, although it could be protected by confidentiality. Data is produced whether or not there is
a right in it; therefore there was no need to incentivise its creation. The question was whether we need a new right to
protect data on disclosure. The other
big issue was allowing access to data. He
was of the view that a compulsory licence was going too far and that competition
law would probably not help as it was difficult to say whether data gives
market power/dominance. He therefore queried
what could be done now to support markets in their developments of AI.
Virginie Fossoul, Legal and Policy Officer, European
Commission Directorate-General for the Internal Market, Industry,
Entrepreneurship and SMEs noted the importance of open data. She warned that creating a new right was very
complicated as it required a balancing exercise between competing interests and
warned against rushing into legislation.
It was an extreme option to say there should be compulsory licensing as
in the payment
services directive. She also
referred to the directive
on reuse of public sector data, but noted that it was very difficult to
find a horizontal solution across all sectors.
It was better to foster sharing without legislation until we were better
able to understand the markets.
Erich then picked up on the TDM exception in the directive on copyright and
related rights in the digital single market. Data is the fuel for AI and therefore it is
important that society has access to data, particularly in the medical field where
a number of breakthroughs have been made by looking at big datasets. Ursula Von der Leyen had made the fuelling of
modern innovations one of the planks of her agenda
for her presidency of the European Commission, whilst also noting that a balance
needed to be struck between the free flow of data and privacy issues. To this end, an enormous amount of research is
going into anonymising data and Microsoft had just announced
that they were working on an open data differential privacy platform. (Differential
privacy does not strip out personal data; rather, it adds carefully calibrated noise,
akin to white noise, to the data so that individuals cannot be re-identified in the results.)
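As a hedged sketch of that noise-insertion idea (this is the generic Laplace mechanism from the differential privacy literature, not a description of Microsoft's platform): noise calibrated to the query's sensitivity is added to an aggregate result, so the statistic stays useful while any individual's presence is masked.

```python
import math
import random

def laplace_sample(scale: float) -> float:
    # Inverse-CDF sampling from a zero-mean Laplace distribution.
    u = 0.0
    while u == 0.0:  # avoid log(0) in the edge case random() == 0.0
        u = random.random()
    u -= 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing one person
    # changes the true count by at most 1, so Laplace noise with scale
    # 1/epsilon gives epsilon-differential privacy for the count.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon)
```

Averaged over many queries the noisy counts cluster around the true value, which is why aggregate medical statistics can remain useful even though no single noisy answer reveals whether a given patient is in the dataset.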
He finished by noting that researchers need to work with Governments to find
safe-harbours so that they can be confident that they are working within the
law.
The final word went to Andreas who turned to a very apt quote
from Stephen Hawking: "Whereas the short-term impact of AI depends on
who controls it, the long-term impact depends on whether it can be controlled
at all."
In closing the day's conversation, Francis Gurry thanked the
presenters for their invaluable input to the debate and stated that there would
be questions on these issues published for consultation in either October or November
of this year.
[Guest Post] IP and AI - the debate continues, this time at WIPO
Reviewed by Alex Woolgar on Monday, October 14, 2019