Your AI overlords demand to be recognized...
Cue Dr Matt Fisher - a medical doctor and Gray's Inn Scholar who is currently studying for the Bar - who has this to say on the topic:
"Last week, the European Patent Office stated that an inventor must be a ‘natural person’. This provides the basis for the EPO’s decision in December to reject two patent applications which had named the artificially intelligent system Dabus as the sole inventor. The patent applications were made by Dr Thaler’s team at Surrey University and described a flashing beacon light and a plastic food container.
Dr Thaler’s failed ‘AI inventor’ applications have raised an important question - can AI be inventive? However, in this article, I shall not dwell on the specific merits of Dr Thaler’s applications but instead focus on Google DeepMind’s artificially intelligent system AlphaGo.
In 2016, AlphaGo was capable of reading and understanding the prior art of the game Go, finding novel and inventive solutions to defeat its opponent Lee Sedol (the world champion) 4 games to 1, and effectively communicating those solutions to Aja Huang, the DeepMind team member who placed stones on AlphaGo’s behalf. However, the technology behind AlphaGo is not limited to board games; it also extends to sectors with patentable subject matter, such as telecommunications and pharmaceuticals.
This article argues that AI can be inventive, that AI differs from other platform technologies, and that, for the sake of transparency, AI inventorship should be recognised.
AlphaGo
In 1997, Garry Kasparov became the first chess world champion to lose a match to a computer which had been programmed by expert chess players. In 2016, Lee Sedol became the first Go world champion to lose to a computer that had taught itself - AlphaGo.
Prior to Sedol’s defeat it was thought that humans would retain the upper hand in Go because in any one position there are roughly 200 possible moves, whilst in chess there are roughly 20. To put this into perspective, the number of possible configurations of a Go board is greater than the number of atoms in the observable universe. In fact, even if you took all the computers in the world and ran them for a million years, this would still not be enough computing power to calculate all the possible variations. Therefore, AlphaGo cannot simply compute every possible move; it must rely on something else – intuition.
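To put rough numbers on that claim, the back-of-the-envelope comparison below contrasts the two game trees. The branching factors and game lengths used here are the commonly quoted approximations (about 20 moves per position over about 80 plies for chess, about 200 moves per position over about 150 moves for Go) and are assumptions for illustration only, but they are enough to show why exhaustive search is hopeless.

```python
# Back-of-the-envelope comparison of chess and Go game-tree sizes.
# Branching factors and game lengths are rough, commonly quoted figures,
# used here only to illustrate the scale of the search space.

chess_tree = 20 ** 80                    # ~20 moves per position, ~80 plies
go_tree = 200 ** 150                     # ~200 moves per position, ~150 moves

atoms_in_observable_universe = 10 ** 80  # commonly cited estimate

def order_of_magnitude(n: int) -> int:
    """Return the power of ten of n (number of digits minus one)."""
    return len(str(n)) - 1

print(f"chess game tree  ~ 10^{order_of_magnitude(chess_tree)}")
print(f"Go game tree     ~ 10^{order_of_magnitude(go_tree)}")
print(f"Go tree vs atoms ~ 10^{order_of_magnitude(go_tree // atoms_in_observable_universe)} times larger")
```

Numbers of that order rule out brute force on any conceivable hardware, which is why AlphaGo has to rely on learned evaluation rather than exhaustive calculation.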
In the second game, AlphaGo made Move 37 and gave us a glimpse of a future shaped by computer intuition and inventiveness. At first, Move 37 appeared to be a mistake, because no competent human player would have made it. In fact, AlphaGo knew that there was only a 1 in 10,000 chance a human would have made Move 37, yet despite this it ignored the game’s received wisdom, or prior art, and made an inventive move.
Post-match analysis concluded that Move 37 was the pivot on which the game had turned in AlphaGo’s favour. Lee Sedol later described it as a “really creative and beautiful” move. The lessons that AlphaGo has taught, and continues to teach, human Go players amount to an improvement in the game’s prior art, and if Move 37 were patentable it would involve an inventive step.
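How can a system both estimate that a move is a 1-in-10,000 choice for a human and still play it? The sketch below is a heavily simplified illustration of the PUCT-style selection rule used in AlphaGo’s tree search, in which a value estimate built up during search can outweigh a tiny prior probability from the policy network. The two candidate moves and all of the numbers are hypothetical, invented only to show the mechanism.

```python
import math

def puct_score(q_value, prior, parent_visits, child_visits, c_puct=1.5):
    """Simplified PUCT rule: a move's value estimate plus an exploration
    bonus weighted by the policy network's prior probability for it."""
    return q_value + c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)

# Hypothetical figures after some amount of search: an 'orthodox' move the
# policy network expects a human to play, and a Move-37-like move with a
# 1-in-10,000 human prior but a better value estimate.
orthodox = puct_score(q_value=0.48, prior=0.35,   parent_visits=10_000, child_visits=4_000)
surprise = puct_score(q_value=0.56, prior=0.0001, parent_visits=10_000, child_visits=3_000)

print(f"orthodox move score:   {orthodox:.3f}")   # ~0.493
print(f"surprising move score: {surprise:.3f}")   # ~0.560
# The low-prior move wins once enough simulations back up its higher value.
```

In the real system the move finally played is the one most visited during search rather than the one with the highest instantaneous score, but the principle is the same: a sufficiently strong evaluation can overcome a prior that says no human would play the move.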
How AI differs from other platform technologies, e.g. mouse models
Aja Huang was the DeepMind team member who placed stones on the Go board for AlphaGo in its victory over Lee Sedol. However, it is unreasonable to suggest that Huang was the creative mind behind Move 37, because at no point did he exercise choice. Move 37 would have occurred irrespective of whether it was Huang or this author placing the stones.
Mouse models, on the other hand, still require scientists to exercise choice, either when selecting the antigenic challenge or when determining which antibody therapeutics may be suitable for use in humans. In making these choices the scientist must rely on their experience and intelligence, which is why the scientist, rather than the mouse, is the named inventor. Unlike Move 37, a useful antibody therapeutic would not emerge irrespective of who was conducting the experiment; it matters whether that person is a scientist skilled in the art or this author.
I have used AlphaGo as an example because its human operator is an automaton, but mouse models do pose an interesting problem: namely, making a choice from a limited menu of options does not appear particularly inventive. Is there a difference between a mouse model and an artificially intelligent system that still requires the human operator to exercise choice?
The difference is that the immune system of a mouse is static: it does not get better over time at identifying antibody therapeutics for use in humans. Artificially intelligent systems, on the other hand, are dynamic: over time they do get better at their given task. For example, DeepMind went on to develop AlphaGo Zero in 2017, which, with no prior knowledge of the game of Go and only the basic rules as an input, had within three days surpassed the abilities of AlphaGo Lee, the version discussed above. It achieved this entirely through self-play, without human intervention or historical data. In just a few weeks AlphaGo Zero had accumulated thousands of years of human knowledge, but it had also discovered new knowledge: it developed unconventional strategies and creative new moves that echoed and surpassed Move 37. Mouse models will always require an intelligent human scientist, but artificially intelligent systems will over time make their human operators increasingly redundant. How will we know when humans have become redundant in the inventive process if an AI cannot be named as an inventor?
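The ‘dynamic’ quality described here is the self-play loop: the system generates its own training data and improves from it, with no human examples involved. The skeleton below is a minimal sketch of that loop under heavy simplification; every function in it is a stand-in (random play and a toy parameter update) rather than DeepMind’s actual neural networks and tree search.

```python
import random

def self_play_game(policy):
    """Stand-in for a game of self-play: returns (states, outcome).
    The real system plays full games of Go guided by a neural network
    and Monte Carlo tree search; here a 'game' is just random numbers."""
    states = [random.random() for _ in range(10)]
    outcome = random.choice([+1, -1])          # win or loss against itself
    return states, outcome

def update(policy, states, outcome):
    """Stand-in for a training step: nudge the policy towards whatever
    preceded a win and away from whatever preceded a loss."""
    return policy + 0.01 * outcome * sum(states) / len(states)

policy = 0.0                    # placeholder for the network's parameters
for _ in range(1_000):          # AlphaGo Zero played millions of such games
    states, outcome = self_play_game(policy)
    policy = update(policy, states, outcome)

print(f"policy parameter after self-play: {policy:.3f}")
```

The point is structural rather than numerical: no human games and no human labels enter the loop, only the system’s own play feeding back into its own parameters, which is precisely what a mouse’s immune system cannot do.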
Furthermore, Google recently announced that its Sycamore quantum processor had performed a specific task in 200 seconds that would have taken the world’s best supercomputer 10,000 years to complete. One of the major applications of quantum computing is in artificial intelligence and, if successfully applied, it will make AlphaGo Zero appear prehistoric in comparison.
The dual purpose of patents - ownership and transparency
In return for the grant of a twenty-year monopoly, society demands transparency from the owners of patents. But in denying the existence of ‘AI inventors’, society risks losing this transparency, because it encourages human inventors either to downplay the extent of AI assistance in a patent application or to rely on trade secrets instead. In either scenario transparency is lost, but does this matter?
The creative outputs of both Aja Huang and AlphaGo are owned by Google, but that ownership should not make it irrelevant which of the two’s creativity was responsible for defeating Lee Sedol. Without the legal fiction of AI inventorship we are headed for a society in which Aja Huang became the Go world champion in 2016."
GuestPost: Natural persons have a monopoly on inventiveness - fact or legal fiction?