The question of whether it should be possible to name artificial intelligence (AI) code as an inventor on a patent application continues to dog patent offices and courts around the world. However, despite the global attention on the so-called "AI inventor" patent applications, we are no nearer to understanding how the AI ("DABUS") actually goes about the process of inventing, or even if it can be said to really invent at all. Meanwhile, the main commercial players in the AI field, such as Google DeepMind, continue to navigate the patent system without apparent concern about the issue of AI inventorship.
DABUS and the Emperor's New Clothes
The team behind the fight for an algorithm to be named as an inventor on a patent application have had some recent success. The South African patent office accepted DABUS as an inventor of a South African patent (IPKat), although it must be noted that South Africa takes a very light touch with respect to patent examination. The Australian Federal Court also found that DABUS could be named as an inventor (Thaler v Commissioner of Patents [2021] FCA 879) (currently under appeal). The US District Court, by contrast, recently found against naming an algorithm as an inventor (IPWatchDog). In the UK, the decision from the Court of Appeal is expected imminently (in September), whilst the oral proceedings in the case before the EPO are scheduled for December.
It therefore seems that the issue of AI inventorship is not going away anytime soon. However, as this Kat has argued a number of times (IPKat: here, here and here), there is currently little or no evidence that DABUS is, in fact, capable of inventing according to the normally accepted standards for inventorship. The inventor of DABUS, Dr Thaler, claims that his AI invention has extraordinary powers (for a machine). As his website explains:
So that Thaler's artificial inventors could appreciate their creations, he equipped them with learning rules to bind memories, contained within a series of nets, together to produce not only complex concepts, but also the consequences of said concepts, what psychologists would call affective responses...In other words, feelings or sentience was the result.
So DABUS can not only invent, but also has feelings and sentience? [Merpel was also particularly intrigued by Dr Thaler's work in the 'Journal of Near Death Studies' (abstract), in which Dr Thaler provides an apparently "credible model of both salvation and damnation"]. If true, DABUS's abilities would be an extraordinary achievement. However, DABUS and Dr Thaler have been widely ignored by the mainstream AI community, and Dr Thaler publishes on his AI creations only in niche journals. As Dr Laurence Aitchison, Senior Lecturer in Machine Learning and Computational Neuroscience at the University of Bristol, comments: "It would be good to see the code behind DABUS, but unfortunately this has not been made available, contrary to what would be normally expected in the field, even for commercial products". The patent and legal community nonetheless appear willing to accept the claim of AI inventorship without it.
The Emperor's New Clothes
For this Kat, the persistent preoccupation of the IP commentariat with DABUS has parallels with the parable of The Emperor's New Clothes. For many of us, AI is a magical black box. Very few in the IP profession or academia have the expertise to determine whether AI is yet capable of inventorship. However, in this GuestKat's opinion, many of us seem surprisingly willing to accept the remarkable claims of AI inventorship on the basis of little or no evidence (IPKat: here, here and here).
Patent offices do not assess inventorship
So how has the DABUS case managed to get so far without the expected evidence that DABUS can actually invent? The reason is quite simple. Patent offices do not generally assess claims of inventorship. As noted by the Board of Appeal in their preliminary opinion on whether DABUS can be named as the inventor on an EP patent application: "the issue of how the invention was made...is outside the competence of the EPO". The legal proceedings before the patent offices and courts are, at the moment, purely on the formal aspect of whether a non-human can be named as an inventor.
The question of whether the AI was actually behind the invention, and how this was achieved, also does not come under sufficiency. The patent office is only concerned with whether the claimed method or product itself is sufficiently disclosed.
Are we ignoring the real AI inventors (and does it matter)?
The DABUS team argue that AI should not be denied inventorship because doing so would stifle innovation. However, as the AI inventorship saga trundles on, AI companies continue to make truly extraordinary advances. A notable example of this is DeepMind's AlphaFold (Jumper et al., Nature, 2021), which uses deep neural networks to accurately predict protein structure. Elucidation of protein structure has many practical uses, not least in drug design. If any AI was capable of inventing, one might think it would be AlphaFold.
AlphaFold is itself offered open-source under a Creative Commons Licence, and the code can be downloaded from DeepMind's website. Interestingly, AlphaFold also appears to be the subject of at least one patent application (WO2021110730A1). In contrast to Dr Thaler, DeepMind is a commercial company that is clearly comfortable with simultaneous commercialisation and publication of the code behind their AI products. Furthermore, for a company with a growing patent portfolio (IPKat here and here), DeepMind appears unconcerned with the question of AI inventorship. What generally matters is who has the rights to the invention. Even the DABUS team is not arguing that AI should own IP.
In contrast to the abilities of AlphaFold, the DABUS inventions are rather more subdued. DABUS has apparently invented a food container with a fractal surface, and a light that turns on and off in a fractal pattern (EP 18275163 and EP 18275174). It remains open to question whether either of these inventions is novel or inventive. Notably, the European search opinion in both cases (here and here) found the claims as originally drafted to lack novelty, and in the case of the food container, this was in view of a document from 1998.
By contrast, the achievement of AlphaFold reminds us that AI will undoubtedly be a useful tool in innovation over the coming years. However, AlphaFold also serves to highlight the apparent irrelevance of the AI inventor debate for many innovators. Once again, this Kat asks whether we could not move on from DABUS please? She suspects the Emperor has no clothes.
Further reading
The first AI inventor - IPKat searches for the facts behind the hype
EPO refuses "AI inventor" applications in short order - AI Inventor team intend to appeal
The mirage of AI invention - nothing more than advanced trial and error?
In my view, the root of legal misconceptions - or, at least, one of the main contributing factors - is the tendency to anthropomorphise AI systems. S. Thaler is not the only one who portrays an ML system in an anthropomorphic way. Meanwhile, the anthropomorphisation of AI has been criticised within the computer science community itself for its ‘wishful’ language (see e.g. Drew McDermott, ‘Artificial Intelligence Meets Natural Stupidity’ 57 ACM SIGART Bulletin 4 (1976)). While for an AI researcher, such anthropomorphic language might be ‘helpful when explaining complex models to audiences with minimal background in statistics and computer science’ (David Watson, ‘The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence’ 29 Minds & Machines 417-440, 434 (2019)), the lay audience, including legal professionals, might take such representations at face value. On the proposition that ‘If any AI was capable of inventing, one might think it would be AlphaFold’: a closer and more critical look would cast some doubt on (1) whether there was a breakthrough in solving the protein folding problem (see e.g. Philip Ball, ‘Behind the screens of AlphaFold’ https://www.chemistryworld.com/opinion/behind-the-screens-of-alphafold/4012867.article); (2) whether AlphaFold should be credited with the claimed achievement. As a group of ML researchers explains, the output of ML - including in the case of AlphaFold - is a function of how the computational process was configured and set up by humans (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3910332) - ML is not a 'magic black box' yielding inventions 'by itself'.
Is your title a reference to the 1989 600-page brick "The Emperor's New Mind" by Roger Penrose, or is it just a coincidence?
The book seeks to provide an answer to the question "can computers have a mind?", by taking the reader on a breathtaking tour of modern mathematics and physics. The author's view, if I can summarize it, is that there is still something fundamental missing in understanding the nature of "strong AI".
I find this topic rather irksome, and I'm at a loss to understand what the petitioners are really after, or what harm they are actually suffering. In my view, at best they are some pedants tediously trying to make their point, and at worst they are hiding some sinister unavowable motives. The proper forum to settle these questions would be to petition democratically elected governments, rather than pushing their agenda, whatever it is, through courts, who would in the end make new law rather than "interpreting" existing statutes.
This debate also ignores the historical origins of the right of the inventor to be named, which was a fight that lasted roughly six decades centered around 1900. In turn this raises questions about theories for justifying patents, which are rarely explicitly invoked. The classical bargain theory is already unsatisfactory, and would fail if the source of "innovation" is a non-sentient bucket of chips. (Advocates often hopscotch between theories without even realising it.)
In any case, one should be careful about what one wishes for: what is good for the goose should also apply to the gander. If the skilled person of Art. 56 EPC could be under some circumstances a multidisciplinary team, then why couldn't it be an AI system?
Back in the USSR, one Genrich Altshuller looked at countless patents to try to understand how inventors think, and came up with a systematic set of rules for solving technical problems. This came to be codified in a method called "TRIZ", which has a minor cult-like following. It is actually rather seductive, and is in my view a kind of generalised problem-solution approach. Very roughly, the analysis involves the identification of constraints and contradictions in the problem statement, and lists rules suggesting which approach could be tried, such as "Do it inversely", "Change the state of the physical property", or "Do it in advance" (from "Suddenly the Inventor Appeared", by Altshuller, 1994; other books, such as "Grundlagen der klassischen TRIZ", provide more elaborate tabulations).
What would happen if these rules were coded into IBM's trivia champion "Watson"?
Appropriate rules could be added for fields such as small molecule drug discovery: "try using the metabolite", "try isolating one of the enantiomers", or "try replacing functional groups around the core". Big Pharma will surely LOVE this new standard.
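To make the idea concrete, here is a minimal, purely illustrative sketch of how such TRIZ-style rules might be encoded as a lookup from a stated contradiction to candidate principles. The parameter names and rule mappings below are invented for the example; they are not Altshuller's actual contradiction matrix.

```python
# Toy TRIZ-style rule engine: map a technical contradiction (the parameter
# you want to improve vs. the parameter that worsens) to a list of
# candidate inventive principles. All entries here are illustrative.

CONTRADICTION_RULES = {
    ("strength", "weight"): ["Do it inversely", "Use composite materials"],
    ("speed", "accuracy"): ["Do it in advance", "Partial or excessive action"],
    ("potency", "toxicity"): ["Try using the metabolite",
                              "Try isolating one of the enantiomers"],
}

def suggest_principles(improve: str, worsens: str) -> list:
    """Return candidate principles for an improving/worsening parameter pair."""
    return CONTRADICTION_RULES.get((improve, worsens), ["No rule found"])

if __name__ == "__main__":
    # Ask for suggestions on the drug-discovery contradiction above.
    for principle in suggest_principles("potency", "toxicity"):
        print(principle)
```

Of course, the hard part - which this sketch entirely dodges - is recognising which contradiction a real problem statement embodies, and judging whether any suggested principle actually yields a workable solution.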
In any case, many patent office managers have wet dreams of replacing the worthless scum known as "examiners" (I used to be one) with "cheap", "efficient", and "predictable" electronics... I saw an item recently about the EPO pre-classifying incoming applications using "AI". Manual classification was always an approximate business, but I'm not sure that automating it would actually improve the probability of getting a new application into the hands of the person most qualified to handle it. A lot of informal negotiation and reallocation went on in the form of "stock management" at the unit level, which management essentially denied existed.
I can only repeat what I think about AI: nothing intelligent and truly artificial.
By its conception, AI only does what it has been told to do, mainly data crunching on the basis of training data.
That it can do things more precisely than a human being (provided it is correctly trained) and does not get tired does not render it intelligent whatsoever.
AI poses nevertheless a big problem: without knowledge of the correlation algorithm and of the training data, can we trust AI? I think not.
Rather than discussing whether AI can be an inventor, which in my opinion it cannot, safeguards should be created so that we do not have to accept conclusions reached by AI. This is a job for governments and the judiciary.
To complete the bibliography of Roufousse T. Fairfly, I would add a further book by R.D. Precht: AI and the Meaning of Life. It is in German, but I do not know whether it has been published in English. Worth reading. It deals with posthumanism and transhumanism, and the fact that computers will never have feelings, although a lot of our decisions are governed by feelings, whether we like it or not.
I have also heard about the dream of some EPO managers wanting to replace examiners by AI systems. And even some representatives are convinced that dealing with applications can be simplified by using AI. It is difficult to beat such stupidity.
The problems experienced with pre-classifying incoming applications by AI most probably stem from a problem inherent to AI: it is a black box, so why should we trust the result?
Thank you for the article, highlighting the dubious nature of this endeavor.
As I wrote before on Kluwer, there are a lot of questions...
Why should an "AI" be able to be an inventor, but not an animal (often far more intelligent than any known "AI")?
Where is the line between a "normal" CAD program optimizing a solution and an "AI" inventing a solution? Should all CAD programs now be inventors? Would any given number of parallel copies of DABUS be able to invent the same solution concurrently? Could Mr Thaler then apply for patents for the same solution from DABUS1, DABUS2, ..., DABUSn?
For me, the most telling point is that Thaler & Co want their "AI" to be an inventor, but do not consider it necessary for the "AI" to assign the rights to the invention to them. And what happens if the "AI" terminates its employment contract and signs on with a new employer?
To me, all this doesn’t make sense. I hope PETA, the Humane Society or similar will sue Mr. Thaler when he switches off DABUS. And I assume that HAL and Eddie the Shipboard Computer see it similarly - the thoughts of Marvin on this might be too depressing... ;-)
US law links inventorship to an act of conception of an invention. I'm not an AI expert but I'm sceptical that an AI is capable of "conceiving" anything. I prefer the notion that the inventor is the one that reviews the output of an AI and, stimulated by that output, then performs an act of conception of an "invention".
And then there's a further act of conception, namely that which occurs when the drafter of a patent claim "conceives" that abstract "concept" which finds expression in the draft claim.
Has all this been said before? Is it trite? Or is it a reasonable way to visualise and then manage the contributions to the art made by an AI of ordinary skill in that art?
I don't think it is important to know how DABUS or any other AI makes inventions; I don't know how I make inventions either. At least for the purposes of patent law, I am happy to accept that it does create inventions, in the sense that its outputs would not have been obvious to a skilled person without the use of such a machine.
In that case, you have two choices for identifying the inventor. Either you can treat the AI as the inventor, in which case patent law needs to be changed to determine who owns the rights to the invention, just as it does in the case of an invention made by an employee. Or you can treat the AI as just another tool that the skilled person would use, in which case, that person will be identified as the inventor. In terms of ownership of the invention, you probably end up in the same place. As for who is named as the inventor on the patent, that may be important to the creators of the AI but is perhaps not very important to anyone else.
What seems more interesting to me is the question of inventive step as AI becomes better than humans at generating unforeseen results -- at least in certain areas, of which protein folding may already be one. If we tie inventive step to the test of what would be obvious to a skilled human without the use of an AI, then AIs may easily exceed that bar and churn out thousands of patentable inventions in a routine manner. That is not what patent law is designed to encourage. Therefore we will need to adapt the level of inventive step to match what would be obvious for an AI (or for a skilled person with the assistance of AI) and it may become very much harder for a mere human to make a patentable invention. We will also face some interesting discussions with examiners about how to establish obviousness!
"In terms of ownership of the invention, you probably end up in the same place. As for who is named as the inventor on the patent, that may be important to the creators of the AI but is perhaps not very important to anyone else."
I just realized that maybe this solves the questions of "Why?" and "cui bono?".
If the company's "AI" is the inventor, then the ownership will, in the end, be the same, but there is no longer a need to pay the "inventor" anything (whatever the compensation actually is, depending on national laws).
In my humble opinion you start from a wrong premise. You consider that AI “does create inventions, in the sense that its outputs would not have been obvious to a skilled person without the use of such a machine”.
The problem is that any AI system cannot by essence do more than it has been told to do. And all the ambiguity results from this wrong premise.
In that respect, I cannot imagine that “AI becomes better than humans at generating unforeseen results”.
There is therefore no need to adapt the level of inventive step to match what would be obvious for an AI (or for a skilled person with the assistance of AI).
I cannot follow you when you express the fear that it “may become very much harder for a mere human to make a patentable invention”.
To me the discussion on AI is highly academic and should remain in this field.
The UK appeal decision is now available:
https://www.bailii.org/ew/cases/EWCA/Civ/2021/1374.html
With due respect to his function, it is worth noting that Birss LJ has a clear tendency to regularly attempt to distinguish himself from his peers.
It is difficult to understand what he wants to achieve.
His views are often contentious and do not really bring matters forward.
In the DABUS case, his point of view is neither compelling nor convincing, in contrast to that of his colleagues.