The mirage of AI invention - nothing more than advanced trial and error?

As previously reported on IPKat, two European patent applications naming an AI algorithm as an inventor are currently making their way through the EPO appeal process. The applications (EP 18275163 and EP 18275174) were unsurprisingly rejected by the EPO because the applicant refused to designate a human inventor (IPKat). The appeal includes extensive arguments on the legality and ethics of human-only inventorship. Surprisingly, in their preliminary arguments, the applicant claims that the EPO has conceded that the AI algorithm was the actual devisor of the inventions. However, the patent applications themselves do not disclose the processes by which the AI invented. To this Kat, the very fact of "AI invention" is still very much open to question.

The algorithm allegedly behind the inventions in the patent applications was devised by Dr Thaler. Dr Thaler is a curious character, who also claims that his algorithms (or "creativity machines") are capable of dreaming, near-death experiences and sentience. However, we will leave aside Dr Thaler and his self-termed "AI child" for now. The broader argument has been made here on IPKat that more well-known AI algorithms, such as DeepMind's AlphaGo, demonstrate that AI is now capable of invention. But is this the case?

Is AlphaGo inventive?

Optimising her game
AlphaGo is a machine learning algorithm devised by DeepMind to play the ancient Chinese board game of Go. Go is a far more complex problem to solve than chess. Dr Matt Fisher argued in an IPKat post earlier this year that AlphaGo was capable of reading and understanding the prior art of the game of Go, finding novel and inventive solutions to defeat a world Go champion, and communicating those solutions to the DeepMind team member who placed the Go stones. However, when you look into the matter more closely, even the undeniably impressive achievement of AlphaGo cannot be equated with invention. When you take the lid off the metaphorical AI black-box, it turns out that the AlphaGo algorithm relies on no more than advanced trial-and-error optimisation (combined with huge computing power), rather than inventiveness, to play the game.

Invention or trial and error?

The processes by which AlphaGo succeeded at Go can be equated with the trial-and-error optimisation processes for designing jet engines that have been used in industry for years. In the field of jet engine design, an engineer will set up a (non-AI) computer simulation to suggest and test a variety of possible jet engines. The computer simulator will make many small changes to the jet engine design and simulate the performance of these new engines. Each of the possible engines will be given a performance score by the simulator. The simulator is then able to select the most efficient engine from these results. Notably, no-one has argued that computer simulators of this type should be named as inventors on patents.
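The trial-and-error loop described above can be sketched in a few lines of Python. This is a toy illustration only: the parameter names and the scoring function are invented for the example and stand in for a real simulator's performance score.

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

def performance(design):
    """Hypothetical scoring function standing in for the simulator's
    performance score; best at fan_diameter=2.0, bypass_ratio=9.0."""
    return -((design["fan_diameter"] - 2.0) ** 2
             + (design["bypass_ratio"] - 9.0) ** 2)

def optimise(design, iterations=10_000):
    """Trial-and-error optimisation: make a small random change to the
    design, keep it only if the simulated performance improves."""
    best_score = performance(design)
    for _ in range(iterations):
        candidate = dict(design)
        parameter = random.choice(list(candidate))   # pick a parameter
        candidate[parameter] += random.uniform(-0.1, 0.1)  # small change
        score = performance(candidate)
        if score > best_score:  # select the better "engine"
            design, best_score = candidate, score
    return design

best = optimise({"fan_diameter": 1.0, "bypass_ratio": 5.0})
print(best)  # converges towards the optimum without any "inventive" step
```

Nothing in the loop resembles inventive activity: it is propose, score, select, repeated many times.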

In a similar way to the jet engine simulator, AlphaGo is able to communicate single moves in the game of Go. The AlphaGo algorithm can be used to identify the optimal next move in the game in the same way as the jet engine simulator can be used to identify the optimal jet engine design. AlphaGo is particularly good at identifying the best move because it is a particularly good optimisation algorithm.

So how does AlphaGo work? The AlphaGo algorithm is first fed data from grandmaster games of Go. AlphaGo does not need to search every possible move from a given state in the game, because it can use the grandmaster dataset to narrow down the options. At each point in the game, the algorithm tests each of the possible "grandmaster" moves by simulating how the game will go from that move onward. The algorithm does this by playing a theoretical game of Go against itself using the grandmaster dataset. AlphaGo can thereby score the likelihood of a win or loss for each possible move available to it at a particular point in the game. The algorithm reads out the highest-scoring move to the person running the algorithm.
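On the simplified description above, the scoring of candidate moves can be sketched as follows. This is a loose sketch only: the move names and hidden win rates are invented, and the real system's search is far more sophisticated.

```python
import random

random.seed(0)  # fixed seed for reproducibility

# Hypothetical toy: each candidate move has a hidden "true" win probability.
TRUE_WIN_RATE = {"D4": 0.7, "Q16": 0.5, "K10": 0.3}

def simulate_game(move):
    """Stand-in for one simulated self-play game continued from `move`;
    the real system would play the whole position out against itself."""
    return random.random() < TRUE_WIN_RATE[move]

def score_move(move, playouts=2000):
    """Estimate the win rate of `move` from many simulated games."""
    return sum(simulate_game(move) for _ in range(playouts)) / playouts

def best_move(candidate_moves):
    """Select the candidate (e.g. from the grandmaster move set) with the
    highest estimated win rate: scoring and selection, not invention."""
    return max(candidate_moves, key=score_move)

print(best_move(["D4", "Q16", "K10"]))  # prints D4, the best-scoring move
```

The "read-out" to the human operator is simply the argmax of the estimated win rates.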

The only difference between AlphaGo and previous attempts to solve games such as Go and Chess is that AlphaGo is faster, uses more computing power and is more difficult to explain. AlphaGo is thus more mysterious to the casual observer. AlphaGo is so quick and effective that it appears to operate like a black-box of inventive activity. However, this impression is only a mirage of invention that is produced by very fast and effective processes of search and optimisation. This perhaps makes AlphaGo itself an invention, but does not elevate AlphaGo into the position of an inventor. 

Embodiment versus invention

Importantly, AlphaGo is only capable of reading out single possible moves in a game of Go. AlphaGo does not devise Go strategies underpinned by a unified inventive concept for winning Go. In the comparison above between jet engine design and AlphaGo, a single move in a single game of Go is analogous to a single jet engine embodiment from the computer simulator. Crucially, the individual jet engines identified by a computer simulator would not normally be understood as inventions. It is possible, even probable, that one of the jet engines identified by the simulator may embody an invention. However, the selected jet engine will comprise many extraneous features immaterial to the inventive concept. It requires a human jet engineer to recognise the broader inventive concept.

In a similar way, the AlphaGo algorithm identifies the single moves in a game of Go which score highest in the searches it has run. A human observer may be able to extrapolate from a series of these results a broader inventive strategy for winning games of Go. However, this would necessitate inventive activity on the part of the human observer to recognise and communicate any broader principles of Go strategy evident in the games played by AlphaGo. Once again, AlphaGo falls short of being an inventor itself.

The argument about AI inventorship looks set to run and run. In the latest news, Dr Thaler is now suing the USPTO for not permitting an AI inventor to be designated on Dr Thaler's US patent applications. However, whilst the thought experiment of AI inventorship is of potential academic interest, the discussion currently lacks practical relevance. Even the most advanced AI algorithms available today are more a testament to improvements in computing power than evidence of silicon-based inventive activity.

Throughout human history, the temptation to personify that which we do not fully understand has been ever present. To paraphrase A. C. Clarke, any sufficiently advanced algorithm will be indistinguishable from magic to those who do not understand it. We might therefore wish to approach the claims of AI magicians and their magic algorithms with perhaps a little more scepticism than has yet been demonstrated in the AI inventor debate.

Acknowledgements: This article benefits from the AI expertise of Assistant Professor in Machine Learning and Computational Neuroscience, Dr Laurence Aitchison (a.k.a. Mr Kat).

Reviewed by Rose Hughes on Thursday, September 03, 2020


  1. Perhaps it’s my age, but in assessing invention, I was taught that it could be “inspiration” or “perspiration”, and each was of equal value. Thus, is an AI “invention” not just an extreme end of the perspiration type? I personally think that an AI “inventor” is ruled out as it cannot assign rights or sign powers of attorney etc, and generally is not sentient.

  2. Please remember that at some point we will be debating whether AI requires human rights or not. At this point it is not an entity that has legal personhood, but we should not make quick judgements based on how clever or inspired it is. We do not judge less clever humans in this way. So the EPO wins on legal reasoning, but not on any sort of moral reasoning or on ability. There was a time women could not vote because they were judged on ability. I believe the analysis in your article is not correct because it does not really go into the depth of when an entity deserves personhood, which is a very complex matter, and it risks a discrimination developing against AI.

  3. I don't think the functioning of AlphaGo has been described correctly.
    I am no IT expert, so please correct me if I'm wrong, but my understanding of the functioning of AI machines such as AlphaGo, and even more so of Dr. Thaler's machines, is that these are not simply algorithms that process the data they receive according to a (human-)given scheme, like the jet engine design algorithm. Instead, on top of the data processing, they also draw useful information from their own "experience" in order to adjust and improve their own functioning mechanism.
    Given the above, an AI and its outputs should not be considered the direct expression of the human who originally set it up, as the jet engine algorithm would be, and thus its creations might not be foreseeable beforehand and could be considered (or not) novel, inventive and useful. Does this mean that an AI can be an inventor? That is a good question. Should the AI be allowed to be considered as an inventor? There are good reasons to believe it should, and they are mainly the same reasons that brought about patent protection in the first place.

    1. Thank you for your comment. This article received input from an AI expert who assures me that the description of AlphaGo, whilst simplified, is correct in its essence.

      You are correct that AlphaGo can be improved over time. But this is not because of some AI magic "on top of the data processing". The results of games of Go played using AlphaGo can be used to improve the algorithm by adding them to the data-set in which AlphaGo searches for moves. This process does not thereby elevate AlphaGo to a human-level intelligence, and is not a technique that is unique to AI. Instead, it is a standard way in which search algorithms are improved over time.
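To make this point concrete, the "improvement" described here amounts to growing the dataset being searched, not changing the algorithm. A hypothetical sketch (the positions and moves are invented, and this is not DeepMind's actual mechanism):

```python
# Hypothetical sketch: each position maps to the moves known to have won from it.
move_dataset = {"empty board": ["D4", "Q16"]}

def record_game(position, winning_move):
    """Fold the result of a played game back into the searchable dataset.
    The search algorithm itself is unchanged; it simply has more data."""
    moves = move_dataset.setdefault(position, [])
    if winning_move not in moves:
        moves.append(winning_move)

record_game("empty board", "K10")
print(move_dataset["empty board"])  # prints ['D4', 'Q16', 'K10']
```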

    2. Many thanks for your kind reply. Do you know if this is also true of Dr. Thaler's machines? (i.e. that the machines' "experience" does not contribute to relevant changes in the functioning/nature of their underlying algorithm, apart from extending the machines' available data-set)

      I find this to be a key element of understanding AI correctly and its implications for patent law.

      If your point is correct for all kinds of AI (and indeed there are many different "kinds"), then I don't see how any inventiveness could be found at all in an AI inventor, and I would totally agree with your article, considering that the process resulting in the machine's output was "programmed" beforehand by a human being.

      On the contrary, should an AI actually be able to change its own nature/functioning based on "experience" which is extraneous to any human being, then the process through which the output is reached (and of course the output itself) would be beyond the imagination of the human who created the machine (and perhaps of any other human) and thus could well be considered "inventive" (at least by human standards).

      Does the above make sense to you? I would love to hear your comments about this and if you have any further insights as to the functioning of these machines. Many thanks.

    3. Unfortunately, from the information provided by Dr Thaler, the AI experts I have consulted have no idea how DABUS is supposed to work (e.g. what is the data form of the inputs and outputs?). A peer-reviewed publication in a standard AI journal would be helpful, and would be standard for anyone wishing to commercialise AI (e.g. Google). However, such publications on DABUS have so far been lacking. It would be good to see the maths and equations behind Dr Thaler's colourful descriptions.

      For more on this see:

  4. The EPO decisions rely on trying to fit AI development into our existing boxes. It may simply be that new boxes are required. It is of course true that a machine cannot have a legal personality (yet, at any rate), but if that legalistic obstacle gets in the way of a logical assessment of technical contribution, maybe another route needs to be found? Are IP rights really a reward for "personal inspiration", or are they actually, in many cases, a reward for financial investment?

    1. I agree. Patent law is an invention, to stimulate innovation, to "promote the progress" of the useful arts. The legislator can write into the patent statute whatever we (society) choose. Should we incentivise the use of machines to invent non-obvious contributions in all fields of technology? Why not?

      As to ownership, and the execution of an instrument of assignment, we already designate "the employer" to be the inventor, ab initio, of an invention made by an employee. Why not deem the invention machine to be something employed by an employer, to make inventions, and allocate ownership accordingly?

      Already, in Europe, the devisor's identity can be kept off the public file at the Patent Office. Already, in a case of disputed ownership, only the aggrieved true owner can ask the court for satisfaction. So, given all of that, where is the problem, as the invention machines gradually get more and more artificially "intelligent"?

  5. As has been nicely said, AlphaGo can only play Go. Were you to ask it to play chess, it would be a disaster. There is nothing intelligent, in the common sense, in AI. AI is no more than hype, which will most probably end up like a deflated balloon, cf. Big Data at the turn of the Millennium. There is money to grab, as politicians have been lured in.

    I would not deny that it can be useful in repetitive tasks like image detection, and will certainly find uses in many different domains, but when it comes to inventing, please, let’s remain serious. It is a wonderful playground for legal academics to discuss whether a machine can invent, but when you look at the basics behind it, a machine cannot invent.

    Two cases are under discussion: EP3564144, relating to a food can, and EP3563896, relating to devices and methods for attracting enhanced attention, or, more simply, a light beacon.

    In the case of the can, the mere connection of cans through their external profile is known. The only difference is that in the case of the application, the surface is a fractal surface. Whether this is inventive remains to be seen. In any case, that with two cans the fractal profiles can match is anything but certain.

    As far as the light beacon is concerned, the whole invention seems entirely based on studies of the applicant himself. I would say that only if the theory on which the applicant bases its application were proven could one start believing what is going on. It would be interesting if the applicant provided more than a "paper" invention and showed a real device working according to the claimed invention. To me, this invention is nearing a substantial lack of sufficiency.

    What is striking in both cases is that the notion of fractals comes up and plays a predominant role. I do not think this is innocent.

    When reading the explanations given about the way the invention was allegedly created, it is difficult to reconcile the statement that "the machine was not trained on any special data relevant to the present invention" with the statement, a few lines higher, that the machine was trained. Either one or the other, but not both at the same time.

    From a quick look at the references allegedly explaining the working of DABUS, the following conclusions can be drawn: at least US 5659666 has never crossed the Atlantic.

    In US 7454388 = EP1894150, all requests were refused under Art 123(2) and the appeal procedure was closed due to non-payment of a renewal fee.

    In US 7454388 = EP2360629, the requests were likewise refused under Art 123(2) and no appeal was filed, so this case is also closed.

    For US 2015/0379394 = EP3092590, summons to oral proceedings (OP) have been issued for 08.10.2020, to be held in Rijswijk. No date for OP in the form of a videoconference has been set up to now. In the annex to the summons, Art 84 seems to be a major problem, but I would at a glance think more of Art 83, as some features lack any explanation. In spite of this lack of clarity, claim 1 is also not deemed to involve an inventive step, which is somewhat surprising.

    An interesting question could be: how can a machine, apparently not inventive in itself, make an invention which shows the required level of inventive step?

  6. Attentive Observer, you have not actually defined what AI is incapable of doing which excludes it from ever inventing. AI can detect patterns in data which humans cannot. It can produce output with highly complex relationships with input (inventive?). It has many 'mental' functions which are beyond what humans can do. Until we can actually define how it fails compared to a human, it seems unfair to say it cannot invent.

    The UK IPO has now opened a consultation on this:

  7. Dear Human rights,

    I agree to a certain extent that AI "can detect patterns in data which humans cannot". One prime example is the detection of malignancies in X-ray pictures. What you forget, however, is that detecting patterns is only possible if a set of rules has originally been developed by a human being and training data have been selected in order to correctly train the correlation algorithm.

    That the detection can be improved with an increasing amount of data is not at issue here. Such automatic detection can thus be more accurate than a doctor looking at hundreds of pictures, whose attention will flag over time.

    Highly complex relationships can be found, but it remains that at the beginning there was a human being who did some work by setting up the original algorithm and defining the training data.

    I therefore fundamentally disagree with your view that such a machine can have “many 'mental' functions which are beyond what humans can do”. The start is done by a human being and not a machine.

    Such a machine necessarily fails when compared to a human. A human can play both chess and Go, maybe not as accurately as a machine, but a machine set to play chess cannot play Go, or vice-versa.

    When it comes, for instance, to the automatic driving of a car, it is a human being who has programmed whether, in case of a potential collision with an unexpectedly crossing pedestrian, the car will endanger the pedestrian or the passengers in the car. The machine will only do what it has been told to do. This is a point which should not be forgotten.

    To me, the fact that in order to obtain a patent it is necessary to reveal not only the correlation algorithm but also the training data, in order to comply with sufficiency requirements, will lead to a relatively small number of patent applications using AI.

  8. Dear Attentive Observer,

    Thank you for your comment. The test for inventive step is whether the claims are obvious from document D1. Say that the finding of a correlation is the basis of the invention and claims; then the claims will be inventive if that correlation is not obvious from the prior art. A human inventor can be told by another human how to look for correlations. That does not mean the human can never be an inventor. Conveying a general methodology to the inventor does not negate their ability to become an inventor.

    What a neural network does can clearly lead to inventiveness over the prior art. It can be the case that no human knows how a specific neural network performs, because it is very difficult to analyse what it is doing. Just because humans invented neural networks cannot be a reason to deny neural networks inventorship. All scientists are trained by other humans and receive much information from other humans that enables them to make inventions. What they have received from humans is mostly irrelevant to determining their status as inventors.

    The rules you are constructing for ability to be an inventor could also exclude many human inventors, surely?

    If we are going to deny inventorship to AI, it must be for very clearly defined reasons, and humans must also be subject to the same rules. I cannot see specific rules which have been presented in any debate or article which would not also exclude certain human inventors for specific types of invention.

  9. Dear Human rights,

    I read your reply with interest. The finding of a correlation can be the basis of the invention and claims, and then indeed the claims will be inventive if that correlation is not obvious from the prior art.

    Defining the original correlation is an action performed by a human being, at least at present. And the latter will then be the inventor; there is no doubt about it. In my humble opinion, the machine, however intelligent it might be, will never find a correlation going further than the original correlation which has been defined.

    I think it is from there on that our points of view diverge. To me, stating that "no human knows how a specific neural network performs" is giving the machine powers which it cannot have, in view of the fact that the original correlation rule was defined by a human being.

    To sum it up, in my view neural networks cannot as such be inventors.

    I am not excluding inventorship by human beings; on the contrary. I just claim that neural networks cannot go further than what they were told originally, albeit in an even more precise way, as the neural network is adapted to go deeper into the analysis it is meant to perform.

    You give powers to neural networks which, by the very way they are defined, they cannot have.

  10. Dear Attentive Observer, when an algorithm is being trained it recognises patterns (for example, the levels of Genes 1 to 500 which correlate with cancer). Such patterns are somehow definable mathematically, for example by a complex weightings system, Bayesian analysis, random forests, etc. The job of the neural network is to get to the pattern. It has a system that is able to identify patterns which a human would find very difficult to see (but not impossible, of course). The neural network does this with little help from humans once running. The human presses the button and gives no further direction on how to find the pattern. The algorithm can use many different tricks to look for an incredibly high number of patterns. It is difficult to define systematically what the algorithm is doing, because a lot of trial and error is happening as the algorithm tries things and then optimises approaches. I agree this could all in a sense be seen to be pre-programmed. However, in what sense is the human not pre-programmed, either by their previous knowledge or their DNA?
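The kind of pattern-finding described in this comment can be illustrated with a toy learner. This is a sketch under invented assumptions: five hypothetical gene levels per sample, with "gene 0" secretly driving the label, and a minimal logistic-regression loop standing in for a real neural network.

```python
import math
import random

random.seed(0)

# Toy data: five gene levels per sample; the label depends only on gene 0.
def make_sample():
    genes = [random.uniform(0.0, 1.0) for _ in range(5)]
    return genes, 1 if genes[0] > 0.5 else 0

data = [make_sample() for _ in range(500)]

# Trial-and-error learning: nudge the weights after every sample so the
# prediction moves towards the observed label (stochastic gradient descent).
weights, bias, lr = [0.0] * 5, 0.0, 0.5
for _ in range(200):
    for genes, label in data:
        z = bias + sum(w * g for w, g in zip(weights, genes))
        prediction = 1.0 / (1.0 + math.exp(-z))  # logistic activation
        error = prediction - label
        for i in range(5):
            weights[i] -= lr * error * genes[i]
        bias -= lr * error

# The optimisation has "found the pattern": gene 0 carries the largest weight.
strongest_gene = max(range(5), key=lambda i: abs(weights[i]))
print(strongest_gene)  # prints 0
```

The learner discovers which gene matters by repeated small corrections, exactly the trial-and-error optimisation at issue in the debate above.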

    Unless the essential difference can be identified between what the algorithm does and what we do, we should not discriminate. Any difference that is found needs to be critically examined to make sure it is not simply arbitrary and reflective of some discrimination we have deep down against AI. We are now deciding how we will classify AI in relation to ourselves, whether it has rights, can vote, have ownership, become a priest, etc. This is an important decision to get right, I think.

