The cover article of the May 2024 edition of the CIPA Journal proposed a new test for inventive step using AI. The article was inspired by the EPO's AI-assisted search tool, AI-PreSearch, and proposes to use an AI-derived measurement of semantic similarity between the claims and the prior art as a new test for inventive step. However, in this Kat's view, using the amount of "similarity" between the claims and the prior art as a test for inventive step would constitute a vast oversimplification of patent law, lacking any correspondence with the established legal concepts of novelty and inventive step. The proposal presented in the CIPA Journal fails to recognise that, whilst AI search tools such as AI-PreSearch may be excellent at searching the prior art, they possess no functionality for applying complex legal tests.
EPO AI assisted search: Language models and vector search
Last year the EPO announced the introduction of a new tool to assist examiners in patent search. According to an article by the EPO's Head of Data Science, Alexander Klenner-Bajaja, the AI-assisted pre-search has a relatively simple architecture: machine-learning language-model-assisted vector search. Details of an earlier version of the model are described in Vowinckel et al. 2023.
Multi-dimensional vector space...with cats
Vector search is a standard machine learning method whereby inputs (e.g. features, images, text) are represented as vectors and compared. In language-model-assisted search, the language model produces a vectorial representation of the input text which captures its contextual semantic information. The vectors can then be compared to each other to find semantically similar texts in an embedding space, which may have many thousands of dimensions. Language-model-assisted vector search is a widely used technique for finding and recommending personalised images, music, podcasts and even AirBnBs to users.
AI-PreSearch uses a language model (EP-RoBERTa) that has been trained on patent documents. In AI-PreSearch, EP-RoBERTa produces a vector representation of the claims to be searched. The vector representation is then mapped to the 250,000-dimension patent subject area classification (CPC) space. The application can then be searched against all the prior art stored as embeddings in a vector database. The closer the application vector lies to a prior art vector, the more textually and semantically similar the prior art is to the application. The model can be used to search the whole application or parts of it, such as the claims.
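To make the mechanics concrete, the core of a vector search can be sketched in a few lines. This is a toy illustration, not the AI-PreSearch implementation: the four-dimensional vectors below stand in for the thousands-of-dimension embeddings a model like EP-RoBERTa would produce, and the document labels D1-D3 are invented.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 means identical direction in embedding space."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings" of a claim and three prior art documents
claim_vec = np.array([0.9, 0.1, 0.3, 0.0])
prior_art = {
    "D1": np.array([0.8, 0.2, 0.4, 0.1]),
    "D2": np.array([0.1, 0.9, 0.0, 0.2]),
    "D3": np.array([0.5, 0.5, 0.5, 0.5]),
}

# Rank prior art by semantic similarity to the claim vector
ranking = sorted(prior_art,
                 key=lambda d: cosine_similarity(claim_vec, prior_art[d]),
                 reverse=True)
print(ranking)  # → ['D1', 'D3', 'D2']: D1 points closest to the claim vector
```

Note that the output is purely a ranking by geometric proximity; nothing in the procedure knows anything about novelty or obviousness.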
"Similarity" is not a test for inventive step
The CIPA Journal article proposes that AI-PreSearch could be used in a new inventive step test. The article proposes:
"Suppose a new patent application is received and converted into an embedding space using a large language model. The idea for a new test for inventive step is 'the new application is inventive if the embedding space around the embedding vector of the new application within a radius of x, is empty and there is a technical effect [...]'. Values of x could be found from historical data about granted patents and the state of the art. The historical values could then be used to determine a value for x to use now. "
However, in this Kat's view, the similarity between a claimed invention and the prior art, as determined by their relative positions in the embedding space, has nothing whatsoever to do with the current legal tests for novelty and inventive step. AI-PreSearch is simply a search tool for identifying documents semantically similar to the claims. The degree of "semantic similarity" between the claims and prior art does not overlap with any of the pre-existing tests for inventiveness, whether the Windsurfer/Pozzoli test of the UKIPO, the problem-solution approach of the EPO or the non-obviousness test of the USPTO.
In the problem-solution approach, for example, the first step is to identify the closest prior art. Superficially, it may seem that semantic "similarity" may help identify the closest prior art in the problem-solution approach. However, the closest prior art is "that which in one single reference discloses the combination of features which constitutes the most promising starting point for a development leading to the invention [...] In practice, the closest prior art is generally that which corresponds to a similar use and requires the minimum of structural and functional modifications to arrive at the claimed invention" (EPO Guidelines for Examination, G-VII-5.1).
AI-PreSearch assists in identifying contextually and semantically similar documents to the claimed invention. However, the simple vector search of AI-PreSearch does not and cannot identify a) which disclosure constitutes the most promising starting point for a development leading to the invention, b) which disclosure corresponds to a similar use to the claimed invention or c) which disclosure requires the minimum of structural and functional modifications to arrive at the claimed invention. None of these tests correspond to "similarity" in vector space. Similarly, there is no overlap of a test of similarity in vector space with any of the steps in the Windsurfer/Pozzoli test.
The CIPA Journal article admits that there is currently no legal basis for replacing the current tests for inventive step with a "similarity" test. This legal basis is not only absent from the case law, it is also absent from the legal texts themselves. The European Patent Convention (EPC) states that "an invention shall be considered as involving an inventive step if, having regard to the state of the art, it is not obvious to a person skilled in the art" (Article 56 EPC). In this Kat's view, the amount of semantic similarity between a disclosure and the claimed invention cannot be equated, on any stretched definition of the term, with "non-obviousness" to a skilled person.
Final thoughts
For this Kat, the use of a simple measure of "semantic similarity" between the claims and prior art as a test for inventive step would constitute an absurd reduction of the complex legal notion of inventiveness. Readers may recall the infamous exchange (infamous at least to patent attorneys) in Episode 16, Series 6 of the US legal drama Suits:
Donna: Benjamin applied for a patent and it turns out our technology overlaps with someone else’s
Louis: How much overlap?
Donna: 32.5%
Louis: That’s over the threshold. Unless Benjamin can get you below 30...
"Only 30% overlap? That's inventive!" |
For this Kat, the proposal presented in the CIPA Journal ultimately fails to recognise the limited functionality of AI-PreSearch. AI-PreSearch, according to the EPO, is very good at searching. However, it has no ability to learn or apply legal tests. Importantly, AI-PreSearch's language model EP-RoBERTa is not in the Generative Pre-trained Transformer (GPT) family of large language models made famous by OpenAI. EP-RoBERTa is based on BERT, an earlier type of language model from Google, and among the first to use transformers to represent contextual information in language. As such, unlike ChatGPT, EP-RoBERTa has no ability to answer questions, generate text or learn to apply tests grounded in verbal reasoning. AI-PreSearch simply uses vector search to identify and rank prior art documents by their similarity to the claims of a patent application. Whilst AI-PreSearch may be great at searching, it has no hope of providing an alternative to inventive step assessment.
GPT large language models (LLMs) such as ChatGPT, by contrast, have far greater functionality than simple AI-assisted search tools. LLMs trained on patent prosecution data and legal texts can generate legal reasoning regarding the inventiveness or otherwise of a claimed invention. LLMs may also be combined with a vector search for prior art, to perform the full functionality of search and examination. Implementation of such a process would not constitute a new test for inventive step. Instead, it would automate the legal tests currently applied by patent examiners. However, we are not yet at the point where AI can replace a patent examiner. Specifically, the verbal reasoning produced by LLMs is currently fairly generic and superficial (IPKat). Nonetheless, as the functionality of these tools continues to grow, a future place for AI in patent examination seems likely. However, in this Kat's view, it is probably safe to assume that the role of AI in patents will not be as a new similarity test for inventive step.
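The search-then-examine pipeline described above can be sketched as follows. Everything here is a stand-in: `embed()` is a toy bag-of-words embedding, the D1/D2 corpus is invented, and `draft_reasoning()` merely formats a prompt where a real system would call a generative model; none of this reflects any actual EPO or OpenAI API.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count, standing in for a real model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(claims: str, corpus: dict[str, str], k: int = 1) -> list[str]:
    """Vector-search step: rank the corpus by similarity to the claims."""
    cv = embed(claims)
    return sorted(corpus, key=lambda d: cosine(cv, embed(corpus[d])),
                  reverse=True)[:k]

def draft_reasoning(claims: str, docs: list[str]) -> str:
    # Examination step: a real system would send this prompt to an LLM.
    return f"Assess inventive step of '{claims}' starting from {docs[0]}."

corpus = {
    "D1": "a capacitor for high voltage equipment",
    "D2": "a method of brewing coffee",
}
closest = retrieve("capacitor for automotive high voltage use", corpus)
print(draft_reasoning("claim 1", closest))  # → prompt starting from D1
```

The point of the sketch is the division of labour: the retrieval step is the part AI-PreSearch already does well, while the reasoning step is the part that would have to apply the existing legal tests rather than replace them.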
Further reading
- Patent Oscars: The good, the bad and the ugly (March 2019)
- Use of large language models in the patent industry: A risk to patent quality? (Oct 2023)
Acknowledgements: Thanks as always to Mr PatKat (a.k.a Dr Laurence Aitchison) for his ML insights and expertise.
I hope to see a stimulating comments thread on this topic.

Rose, you have no confidence in this AI proposal, dismissing it as:
ReplyDelete"..an absurd reduction of the complex legal notion of inventiveness.."
I agree with you. But mind you, some would assert that so too is the EPO's problem-solution approach.
Not me, however. EPO-PSA takes a real world look at the obviousness issue, gives full faith and credit to the inventor and nurtures the good patent drafting standards which we need if the patent system is to deliver its full potential, to promote the progress of technological innovation.
As we see with the unfolding of the UPC, zealous lawyers are quick to make things more complicated than necessary, as soon as you let them. EPO-PSA (not an invention of a court or litigation lawyer) does the opposite. As Einstein advocated, it keeps things as simple as possible, but scrupulously refrains from any injudicious and unfair over-simplification. AI is great for searching, but is a non-starter when it comes to adjudicating the obviousness issue.
I agree that a similarity test is not a suitable replacement for the current legal tests, in particular for inventive step. But I wonder how long it will be before we have LLMs which are finetuned or otherwise trained to make assessments under those tests to a professional standard.
This examiner agrees with the IPKat. In this examiner's view, the AI results are nice and can be useful sometimes, but in all cases this examiner still needs to understand the case and do a search themselves.
A similarity test in a vector space is not an inventive step assessment. #facepalm
AI would fail from the get-go. EPO guidelines: "According to T 176/84 the state of the art to be considered when examining for inventive step includes, as well as that in the specific field of the application, the state of any relevant art in neighbouring fields and/or a broader general field of which the specific field is part, that is to say any field in which the same problem or one similar to it arises and of which the person skilled in the art of the specific field must be expected to be aware. "
So, AI would first have to decide on the skilled person (not always easy). Then AI would have to decide on the range of the state of the art: specific for sure, neighbouring also or broader general field also. Would depend on the similarity of problem. Hard to catch for AI, as the wordings might be very different. Huge roadblocks for AI already.
It has always been the hope of the EPO's management that ever new computer-implemented searches would make searching much easier and more efficient. For the upper management, fewer examiners would be needed and even more patents could be granted. What a wonderful perspective.
AI could be considered as a quantum leap in this respect. The problem is that AI cannot do better than what it has been told to do. Every AI has a kind of bias, which could be detrimental to the quality of a search. Experience has shown that such a bias can even be difficult to detect.
As the quality of the “products” delivered by the EPO is anything but increasing, this could become fatal to the EPO.
That AI can be a help for searching will happen, but we are quite far from automatically assessing IS with the help of AI. Last but not least, assessing IS has nothing to do with similarity! A human brain will always be needed to assess IS.
While not really suitable for assessing inventive step in the form proposed, I do think AI will have a role in the assessment of inventive step in the future. There are quite a lot of parallels between the skilled person and trained AI, if you think about it. In essence, I think the skilled person could be emulated by AI. There are difficulties implementing this of course, given the different technical fields and different relevant dates for the assessment, but it certainly does not seem beyond the realm of possibility.
Yes, I agree that this is how it is likely to go. For example, at the EPO, once you have determined the objective technical problem, simply ask the AI: "Starting from D1, how would you solve [objective technical problem]". If the AI comes up with the claimed solution, it is obvious. No comebacks from the applicant, unless they can convince the examiner that the objective technical problem is wrong.
You could even make the case that this should already be the test. AI may not be equivalent to the skilled person, but it hardly requires inventive skill from the skilled person to plug the objective technical problem into ChatGPT.
I fully agree that the skilled person is to be defined first. Referring to Art 56, this requires a definition of the "art". This definition is dependent on the field of the invention as stated in the claims.
If for example an independent claim is directed to a broad category of device without specifying a specific field of use, the "art" can be defined by reference to the category of device. Let's take the example of a capacitor. If a claim specifies a specific field of use in addition to the category of device, such as high voltage equipment or automotive applications (or the description makes it clear that this is the field of special interest to the applicant), the relevant "art" to which the skilled person pertains is the claimed field of use, and has to be, since the technical problems and desirable technical effects for a capacitor are so different between high voltage and automotive applications. And this is key for the assessment of inventive step.

Is AI capable of providing any help in defining the "art" of the skilled person? Taking the example above, if an independent claim relates to a category of device and another claim adds a field of use limitation (or that field of use is clearly of specific interest to the applicant), then two different "arts" should be considered for the search. And the cited prior art will likely be quite different.
It is also of note that the prior art cited in the search (including the prior art cited in non-EPO PCT search reports) is sometimes paid no attention whatsoever by the ED. This is not infrequent, judging from audits recently carried out within the SACEPO/SQAP 2023 audits by mixed panels including EPO experienced examiners and external assessors.
Another key issue in the assessment of inventive step may be the definition of the CGK of the skilled person. Is there any help AI can bring in this area? This would require close, field-specific investigation. Only a very small segment of the prior art qualifies as CGK.
Another issue of interest: whether the commercial availability of AI systems specifically designed for a field of technology can be claimed by an opponent or the ED to be part of the CGK, and used as an argument to show the obviousness of the claimed subject matter.
I think the author of this blog has missed the point of the CIPA Journal's cover article. The author of the CIPA Journal article was not advancing the use of machine learning language model assisted vector search to apply complex legal tests (e.g. the current test for inventive step), but rather the replacement of those complex legal tests with a machine learning language model assisted vector search. There would be no need to consider whether the AI can decide what is obvious and what is not under existing criteria, because the test for inventive step would be changed from assessing what is obvious to assessing what is sufficiently different.
There will certainly be cases where the AI-powered model will deliver a different answer on inventive step than the existing judicially applied tests, but that is not a bar to the AI-powered model being used. Whether this is a good idea is for the legislator to decide, taking into account both the substantive considerations (e.g. is 'sufficiently different', as judged by machine learning language model assisted vector search, an appropriate test for rewarding an inventor with a patent; and is it better or worse than a person deciding whether something is obvious or not) and administrative considerations (e.g. cost, certainty, access).
I think there is no good reason why the obviousness criterion in patents could not be replaced with a 'sufficient difference test as assessed by a machine learning language model assisted vector search', though I can think of many reasons why it should not.
Isn't it important that the law should be capable of being understood and applied by inventors, potential infringers and their legal advisers? While we all know that it can be difficult to decide what is "inventive" or "obvious", with experience and training we can form a reasonable judgement about it and our clients can understand the principle involved.
I don't look forward to explaining to a future client that the test for whether their invention can be protected is "whether the embedding space around the embedding vector of the new application within a radius of x is empty". Presumably there will be no way of assessing this without preparing the specification, or at least a set of claims, following which you will as often as not be presenting the client with a large bill for the work and the unsatisfactory answer, "Computer says no."