When is the inventor of an AI model also an inventor of the model's output? A closer look at the USPTO Guidance for AI-Assisted Inventions
According to the USPTO guidance for AI-assisted inventions, AI has the potential to solve some of society's most difficult challenges. However, in the patent realm, the USPTO also believes that "inventorship analysis should focus on human contributions, as patents function to incentivize and reward human ingenuity". How then are AI-generated inventions to be protected? As previously reported (IPKat), the USPTO guidance seeks to clarify how the USPTO will analyse inventorship issues for inventions made with AI input. In this Kat's view, under the guiding principles provided by the USPTO, it is likely that any AI-assisted invention of value will be considered to have involved a "significant contribution" from a natural person. The natural person can then be named an inventor on the patent application. Absent the advent of Artificial General Intelligence, patent inventorship thus remains within the human realm. Of more practical consequence, the legal test provided in the Guidance for determining whether the inventors of a particular AI system should also be considered inventors of its output remains open to interpretation. Given the potential value of the arising IP, the Guidance thus raises the possibility of ownership disputes over commercially valuable outputs from AI systems.
The USPTO's guiding principles for AI-assisted inventions
The Federal Circuit in Thaler v. Vidal (43 F.4th 1207, 1213 (Fed. Cir. 2022)) found "that only a natural person can be an inventor, so AI cannot be". However, this does not mean that AI-assisted inventions are unpatentable. The USPTO Guidance confirms that, whilst AI systems cannot be listed as inventors, a natural person making use of an AI system may be listed as the inventor if that person significantly contributed to the claimed invention.
The USPTO goes on to provide some guiding principles for ascertaining whether a natural person's contribution to an AI-assisted invention is "significant". According to the Guidance, merely prompting the AI system or reducing its output to practice should not per se be considered a significant contribution that rises to the level of inventorship by the natural person. However, prompt engineering or devising a new conception using the output may be considered a significant contribution. Additionally, "a natural person who develops an essential building block from which the claimed invention is derived may be considered to have provided a significant contribution to the conception of the claimed invention". As such, a natural person "who designs, builds, or trains an AI system in view of a specific problem to elicit a particular solution could be an inventor, where the designing, building, or training of the AI system is a significant contribution to the invention created with the AI system". By contrast, "a person simply owning or overseeing an AI system that is used in the creation of an invention, without providing a significant contribution to the conception of the invention, does not make that person an inventor".
Kat-assisted drug design
The Guidance is accompanied by two examples demonstrating the application of the Guiding Principles:
Example 1: Transaxle for Remote-Control Car
The first example provides five different scenarios relating to the use of an AI system in an engineering problem: the design of a transaxle for a remote-control car. The owners of a remote-control car business, Ruth and Morgan, decide to use a free online AI system to create a preliminary design for the transaxle. The AI system receives natural language prompts as inputs, and outputs text and images. Ruth and Morgan prompt the system to "[c]reate an original design for a transaxle for a model car, including a schematic and description of the transaxle". The AI system outputs a preliminary design that Ruth and Morgan agree should work in the car.
In Scenario 1, a patent application is prepared for the transaxle as outputted by the AI system. According to the Guidance, Ruth and Morgan are not considered inventors of the transaxle because they did not make a significant contribution to the invention. Ruth and Morgan merely recognised a general problem, inputted this to the AI system, and received the output.
In Scenario 2, Morgan builds the transaxle, following the schematic provided by the AI system exactly and using well-known materials. According to the Guidance, Morgan is still not recognised as an inventor, given that reducing an invention to practice alone is not considered a significant contribution that rises to the level of inventorship.
In Scenario 3, Ruth and Morgan further prompt the AI system to provide alternative transaxle designs. After selecting the resulting output, Ruth and Morgan begin building the transaxle but find that modifications are needed to make it operable. They conduct some experiments to find an optimized design that will work. In this scenario, the additional input from Ruth and Morgan would be considered to rise to the level of significant contribution worthy of the term inventor. The fact that they used the AI system to provide the initial embodiment does not negate their subsequent contributions.
In Scenario 4, Ruth and Morgan use the AI system again to suggest a manufacturing material for the transaxle they invented in Scenario 3. A conventional manufacturing material is suggested by the AI system and selected by Ruth and Morgan. According to the Guidance, Ruth and Morgan are still considered inventors of the invention defined by a dependent claim to the transaxle made from that material. Ruth and Morgan invented the transaxle itself, and are therefore also the inventors of the subject matter of the dependent claim. The additional element suggested by the AI system does not negate the significance of their earlier contributions.
In Scenario 5, the character of Maverick, the lead AI engineer, is introduced. Maverick oversaw the creation and training of the AI system. The system was trained on diverse collections of documents from various fields, via standard supervised learning techniques. When designing and training the system, Maverick was unaware of any specific problems related to transaxles in remote-control cars. Maverick is thus not considered an inventor, because they merely oversaw the AI system and made no contribution to solving the specific problem addressed by the invention.
Example 2: Developing a Therapeutic Compound for Treating Cancer
The second example relates to AI-assisted drug discovery. A university professor, Marisa, is researching the development of a drug targeting a particular protein for use in the treatment of prostate cancer. The professor consults the university's AI expert, Raghu, and explains that she wants to try in silico drug-target interaction prediction methods to speed up the process of drug discovery and minimize expensive and time-consuming wet lab work. The university hosts a deep neural network-based prediction model for predicting the strength of binding between drug compounds and target proteins. The model accepts drug-target pairs as inputs, and outputs a numerical value of binding affinity. The drugs are inputted in SMILES format (a well-known text-string format for representing chemical compounds). The proteins are represented by their amino acid sequences. The model was trained by a data scientist, Lauren, on diverse sets of compounds and targets from previous drug-target experiments conducted at the university. Lauren also oversees the maintenance and regular updating of the AI model.
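For readers less familiar with this kind of setup, a minimal Python sketch of the interface described above may help: a drug-target pair (a SMILES string plus an amino acid sequence) goes in, and a single affinity score between 0 and 1 comes out. The class names, the example sequence and the placeholder scoring function are all hypothetical and are not taken from the Guidance.

```python
import hashlib
from dataclasses import dataclass


@dataclass(frozen=True)
class DrugTargetPair:
    """One input to a hypothetical drug-target interaction (DTI) model."""
    smiles: str      # drug compound in SMILES notation
    target_seq: str  # target protein as an amino acid sequence


class BindingAffinityModel:
    """Interface-only stand-in for the university's pre-trained neural network.

    A real model would embed the SMILES string and the protein sequence and
    pass them through a trained network; the score returned here is a
    meaningless deterministic placeholder so the example runs end to end.
    """

    def predict(self, pair: DrugTargetPair) -> float:
        """Return a placeholder 'binding affinity' between 0 and 1."""
        digest = hashlib.sha256(f"{pair.smiles}|{pair.target_seq}".encode()).hexdigest()
        return (int(digest, 16) % 1000) / 999  # NOT a real prediction


model = BindingAffinityModel()
pair = DrugTargetPair(smiles="CC(=O)OC1=CC=CC=C1C(=O)O",  # aspirin, purely as an example compound
                      target_seq="MKTAYIAKQR")            # truncated illustrative sequence
print(f"Predicted binding affinity: {model.predict(pair):.3f}")
```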
The AI expert, Raghu, uses the model to predict the drug compounds that have high binding affinity to the target protein. The model gives each compound a value between 0 and 1 for binding affinity to the target. Raghu then sorts the compounds in descending order to identify the compounds with the highest binding affinity. Based on these results, Marisa hypothesizes that the top six compounds are likely to have therapeutic potential in prostate cancer and selects them for further wet-lab experiments and characterization with her postdoc. In these experiments, the top candidate is found to have undesirable off-target effects. Marisa and her postdoc work to optimize the structure of the compound to avoid these effects.
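The screening step Raghu performs is essentially a ranking exercise. The sketch below, using invented placeholder affinity scores rather than real model output, illustrates the workflow described in the example: sort the compounds in descending order of predicted affinity and keep the top six for wet-lab characterisation. The compounds listed are common small molecules used only as placeholders.

```python
# Invented placeholder scores (SMILES -> predicted binding affinity between 0 and 1),
# standing in for the output of the drug-target interaction model.
predicted_affinity = {
    "CC(=O)OC1=CC=CC=C1C(=O)O": 0.91,        # aspirin
    "CN1C=NC2=C1C(=O)N(C)C(=O)N2C": 0.87,    # caffeine
    "CC(C)CC1=CC=C(C=C1)C(C)C(=O)O": 0.83,   # ibuprofen
    "CC(=O)NC1=CC=C(C=C1)O": 0.78,           # paracetamol
    "CN1CCC[C@H]1C1=CC=CN=C1": 0.74,         # nicotine
    "C1=CC=C2C(=C1)C=CC=C2": 0.69,           # naphthalene
    "CCO": 0.22,                             # ethanol
    "C1=CC=CC=C1": 0.15,                     # benzene
}

# Rank compounds by predicted affinity, highest first, and keep the top six
# candidates for wet-lab experiments and further characterisation.
top_candidates = sorted(predicted_affinity.items(), key=lambda item: item[1], reverse=True)[:6]

for smiles, score in top_candidates:
    print(f"{score:.2f}  {smiles}")
```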
In this example, Marisa and her postdoc are considered inventors of the resulting optimized compound. Further work and significant contribution were required to arrive at the eventual lead compound having both high binding affinity and low off-target effects. Marisa and her postdoc therefore provided a significant contribution to the invention. According to the Guidance, the AI expert Raghu and data scientist Lauren, who developed and trained the model, are not considered to have made a significant contribution to the invention and are therefore not considered inventors.
Interestingly, the Guidance also considers the inventorship of a claim directed to "a method of identifying and synthesizing a lead drug compound to treat prostate cancer, using a pre-trained deep neural network to identify binding affinity to the specific target protein, and then synthesizing a lead compound by introducing structural modifications". The Guidance concludes that, in this scenario, Marisa and her postdoc are still considered the inventors, because Marisa came up with the idea of targeting the protein in prostate cancer, and both Marisa and her postdoc selected the compound, devised the necessary modifications and developed the method of synthesis for the lead drug compound.
In an alternative scenario, Marisa and the AI expert devise a new generative AI model to produce compounds optimized for the absorption, distribution, metabolism, excretion and toxicity (ADMET) related properties required for clinical success. The new model receives compounds as input, and outputs a novel optimized drug compound. Both Marisa and the AI expert make a significant contribution to the development of the model and are considered in the Guidance to be inventors of the eventually selected lead compound. This scenario provides an example of the inventive contribution that may occur when a natural person designs, builds, or trains an AI system in view of a specific problem to elicit a particular solution.
Final thoughts
The legal situation outlined by the USPTO Guidance is reassuring. The position of the USPTO should be considered in the context of the fundamental purpose of the patent system, which is to reward human endeavour. In the first and second scenarios of Example 1, Ruth and Morgan merely followed the output of the AI with little if any intellectual input. While AI evangelists may argue that the USPTO approach could deny patentability to potentially valuable, purely AI-derived inventions, this Kat is sceptical (IPKat). Currently, the machine learning systems most capable of an imaginative approach to commercially significant problems are designed with a specific problem in mind, require creative prompting and/or data processing, and rely on human intelligence for implementation of their output. This situation is clearly exemplified in Example 2, which takes place within the field of AI-assisted drug development. Furthermore, if purely AI-derived inventions become the norm within a field, the value of those individual "inventions" is unlikely to justify the monopoly of patent protection. In other words, the bar for inventorship will be raised, as is usual in patent law, to account for increased automation and a lessening of the burden of development, as has already occurred, for example, in the antibody field.
The AI-assisted drug design described in Example 2 provides guidance for perhaps the most legally complex inventorship scenarios involving AI: cases in which the preceding development of a new AI model could be considered an inventive contribution to the output of the model. There is a potentially fine line between developing an AI system for a "specific problem to elicit a particular solution" and designing a system for a more general problem and a generalised solution. However, this is not a new problem. The pharmaceutical field has been dealing with the complexity of IP ownership and licensing of platform technologies for developing new clinical candidates for decades. Given the growing use of freely available and licensed AI systems in product development, and the many parties potentially involved, it is at this nexus of inventorship and ownership that we are likely to see the most contention.
Further reading
- Artificial intelligence is not breaking patent law: EPO publishes DABUS decision (J 8/20) (July 2022)
- Bad cases make bad law: Has DABUS "the AI inventor" actually invented anything? (Aug 2023)
- The relevance of G 2/21 to machine learning inventions (T 2803/18) (Aug 2023)
- Use of large language models in the patent industry: A risk to patent quality? (Oct 2023)
- USPTO call for comments: Impact of AI on patentability (May 2024)
- "Using AI tools to help assess inventive step": A response to the CIPA journal article (June 2024)