IPKat reported on the AI inventor patent applications earlier this year (IPKat: The first AI inventor - IPKat searches for the facts behind the hype). Dismissed as a publicity stunt by some, the team behind the European, UK and US applications have defended them as an attempt to address the moral issues arising from inventions to which no human inventor can be ascribed. However, it is clear from the Minutes of Oral Proceedings (and preliminary opinion) that the EPO was not convinced that the human applicant (Dr Thaler, inventor of the AI) had satisfactorily demonstrated how he had derived the right to the applications from the AI. Important questions also remain as to whether the AI can truly be said to have invented, and whether the whole debate is premature by decades.
Ownership is what matters in European patent law
One crucial issue of AI inventorship is how the applicant can be said to derive the right to a patent from the AI. How can an AI assign such a right? The Receiving Section stressed that the EPO does not investigate whether the designation of an inventor is correct (Rule 19(2) EPC). However, an applicant for a patent who is not the inventor is required to state how they derived the right to the invention from the inventor (Article 81 EPC). The team behind the AI inventor applications have always maintained that the applications are owned by the inventor of the AI, Dr Thaler, and not the AI itself. But how does Dr Thaler derive this right?
Dr Thaler initially indicated that he derived the rights to the invention from the AI inventor as the AI's employer (a claim that raises certain philosophical questions in and of itself). The EPO did not accept this as a valid derivation of right, and Dr Thaler tried again by claiming that he was the successor in title of the AI as the AI's owner. However, as the Receiving Section noted in the preliminary opinion: "machines do not have legal personality and cannot own property...a machine cannot own rights to an invention and cannot transfer them within a employment relationship (as proposed by the applicant on 24.07.2019) or by succession (as suggested by the applicant on 02.08.2019)".
Chappie: Artificial General Intelligence employee (and most underrated movie of 2015)
Has the AI actually invented?
As already mentioned, the EPO does not investigate whether the inventors named on a patent application really are the inventors (Rule 19(2) EPC). The question of whether the AI really did invent the fractal food container and fractal light signalling inventions that are the subject of the applications is therefore not one that the EPO will attempt to answer. However, it is worth remembering that the team's claims about the AI inventor are extraordinary.
An AI that is truly capable of inventing will be capable of reading and understanding the prior art, finding a problem to be solved, finding a novel and inventive solution to that problem, and then communicating the solution in a way that is understandable to a skilled person. An AI capable of such a complex collection of tasks would possess close to human-level intellect, i.e. it would be an artificial general intelligence (AGI). AGI is a 20-year goal of companies like Google DeepMind and Google Brain. Dr Thaler has, in fact, argued that his algorithm "paves the way for sentient AI since it teaches how machines may generate the equivalent of subjective feelings...It is expected to be the successor to deep learning and the key to achieving human level machine intelligence. It will be used to build highly transparent and self-explanatory synthetic brains to achieve so-called “Artificial General Intelligence” (AGI)" [Merpel: someone needs to tell Demis Hassabis...]. A summary of the incredible abilities of Dr Thaler's AI can be read on his website.
So far, however, the precise details of how the AI performs the inventive act have not been published, and the descriptions provided remain distinctly vague. It seems likely (at least to this Kat) that Dr Thaler's AI may have more limited abilities than those enthusiastically propounded by the team. This raises the question of whether the AI is, in fact, no more than a tool whose abilities can be equated to other types of technology.
At the recent Life Science Patent Network (LSPN) London conference, this Kat had the pleasure of participating in a panel discussion with Professor Ryan Abbott on the topic of AI inventorship. Professor Abbott made a plea for the importance of the moral and social issues surrounding AI inventorship. However, the Professor did not seem to address the question of how the AI could be said to be different to other types of platform technology, e.g. mouse models.
The central claim of the AI inventor team is that there was no human input in the generation of the invention: "the machine only received training in general knowledge in the field and proceeded to independently conceive of the invention and to identify it as novel and salient". However, the same could be said of a mouse model used to identify novel, inventive and useful antibody therapeutics. The human experimenter provides the antigenic challenge, and the mouse's immune system produces structurally unique, non-obvious therapeutics that are potentially suitable for use in humans. The "invention" of the antibody structure could not have been derived by the human experimenter themselves. However, no one is arguing that mice should be inventors.
Are the AI's inventions even novel?
As a side issue, a perusal of the file reveals that the AI inventor may also not be as good at inventing as had first been thought. The two applications claimed a fractal food container (EP 18275163) and fractal light signals (EP 18275174). The team behind the AI have previously been eager to point out that a search by the UK IPO found the claims of the UK application to be novel and inventive. The EPO did not agree with the UK IPO. Both the food container and light signals inventions, as originally claimed, were found to lack both novelty and inventive step (European search opinion). The Examiner cited, for example, US 5803301 as disclosing a beverage container having all of the features of the one allegedly invented by the AI.
Prior to the EPO's decision to refuse the applications because of the inventorship issues, the applicant had submitted amendments and arguments in response to the substantive objections. However, the issues of novelty and inventive step have become moot in view of the formalities issues with the applications.
Final thoughts
If Dr Thaler's appeal of the Receiving Section decision does indeed go ahead, IPKat will await the Statement of Grounds with interest (the deadline for filing the appeal will be early 2020). Based on their submissions to the Receiving Section, it seems that the AI inventor team are lacking the legal arguments to overturn the decision. Moral and social arguments are unlikely to convince the Boards of Appeal. A request for a referral to the Enlarged Board can probably also be expected, but is similarly unlikely to be granted. Nonetheless, the team have undoubtedly been successful in one goal: bringing attention to Dr Thaler and his AI. Notably, we are only able to see the file history because early publication of the applications was requested. However, in this Kat's humble view, the whole argument surrounding AI inventorship is premature until the existence of an AI truly capable of an inventive act has been proved.
Independently of the discussion relating to inventorship, it does not seem that the machine was so "intelligent", as the search in both cases revealed very relevant documents.
In the case of the can, the mere connection of cans through their external profile is known. The only difference is that in the case of the application, the surface is a fractal surface. Whether this is inventive remains to be seen. As the application has been refused by the Receiving Section, we might never know.
As far as the light beacon is concerned, the whole invention seems entirely based on studies of the applicant himself. I would say that only once the theory on which the applicant bases the application is proven could one start believing what is going on. It would be interesting if the applicant provided more than a "paper" invention and showed a real device working according to the claimed invention. To me, this invention comes close to a substantial lack of sufficiency. As the application has been refused by the Receiving Section, we might never know.
What is striking as well is that in both cases the notion of fractals comes up. I do not think this is a coincidence.
When reading the explanations given about the way the invention was allegedly created, it is difficult to follow that "the machine was not trained on any special data relevant to present invention", when a few lines higher the contrary appears to be stated. It must be one or the other, not both at the same time. A quick look at the references allegedly explaining the working of DABUS shows that at least US 5659666 has never crossed the Atlantic, and US 7454388 has not led to a European patent due to problems with Art 123(2). For the EP application corresponding to US 2015/0379394, a summons to oral proceedings has been issued. Art 84 (if not Art 83) seems to be a major problem, so we might also end up with problems under Art 123(2).
On the other hand, artificial intelligence appears to be no more than hype, which will most probably end up like a deflated balloon. There is nothing intelligent in those machines, whatever the applicant of both applications may say.
They only do what they are told, and if some self-perturbation of connection weights between neurons, as alleged in DABUS, should bring about the desired result, this needs a bit more explanation.
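For readers wondering what "self-perturbation of connection weights" could mean in practice, here is a minimal, purely hypothetical Python sketch. It is not DABUS (whose workings have not been published); it simply jiggles the weights of a toy neural network at random and keeps a perturbation only if it improves a hand-written score. Every name and parameter in it is an assumption for illustration.

```python
import numpy as np

# Hypothetical illustration only: random perturbation of connection weights
# in a tiny two-layer network, keeping changes that improve a fixed score.
rng = np.random.default_rng(0)

def forward(weights, x):
    """One hidden layer with tanh activation."""
    w1, w2 = weights
    return np.tanh(x @ w1) @ w2

def score(output):
    """Stand-in for whatever criterion judges an output 'desirable'.
    Defining this criterion is arguably the real (human) inventive work."""
    return -np.sum((output - 1.0) ** 2)

x = rng.normal(size=(1, 4))                                  # fixed input
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 2))]  # initial weights
best = score(forward(weights, x))

for _ in range(1000):
    # "self-perturbation": add small random noise to the connection weights
    candidate = [w + 0.05 * rng.normal(size=w.shape) for w in weights]
    s = score(forward(candidate, x))
    if s > best:                                              # keep helpful perturbations
        weights, best = candidate, s

print(f"best score after perturbation search: {best:.3f}")
```

In this toy version, the hand-written score function does all the work of deciding what counts as a "desired result", which is precisely the commenter's point: the perturbation mechanism alone does not explain where the judgment of novelty or usefulness comes from.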