The IPKat has received and is pleased to host the following guest contribution by Katfriend Anna Pokrovskaya (RUDN University, Intellectual Property Center “Skolkovo”) reviewing current debates in the US and the EU on the role that Artificial Intelligence (AI) may play in patentability considerations. Here’s what Anna writes:
Can AI be considered a PHOSITA? Policy debates in the US and the EU
by Anna Pokrovskaya
"Reviewing" the prior art
The question of whether AI can be considered a person having ordinary skill in the art (PHOSITA) in a given field is a topic of significant debate in the policy realms of both the US and the EU. As AI continues to advance and permeate various aspects of our lives, including healthcare, finance, and technology, understanding its role and level of expertise in these fields becomes crucial for developing appropriate policies and legal frameworks.
US
In the US, the debate centres on the legal implications of AI's capabilities and its impact on intellectual property law, including in relation to patentability. The U.S. Patent and Trademark Office (USPTO) and courts traditionally assess patentability based on the expertise of the PHOSITA (35 U.S. Code § 103).
One of the key considerations in evaluating AI's expertise is the ability of AI systems to replicate and exceed human knowledge and understanding in a specific field. AI systems can analyze vast amounts of data, learn patterns, and generate valuable insights, often at a level beyond human capacity. However, debates continue regarding AI's ability to possess the same level of experience, judgement, and intuition as human specialists. The multidimensionality of expertise in a given field adds complexity to the debate, requiring a nuanced understanding of AI's capabilities and limitations.
EU
Turning to the EU, the policy debates surrounding AI's specialist status are influenced by factors such as ethical considerations, data protection, and liability (Ethics Guidelines for Trustworthy AI).
The European Patent Convention (EPC) treats the PHOSITA as a hypothetical person who has the average level of knowledge and skill in the technical field at the relevant date of the patent application (Article 56 EPC). Some argue that AI's ability to process vast amounts of data, learn from it, and generate novel solutions gives it a level of expertise that should be recognized in the determination of inventive step (G1/19).
The European Commission has been actively working on an AI regulatory framework that promotes the responsible and trustworthy use of AI while upholding fundamental rights and values (see IPKat). Discussions revolve around categorizing AI systems based on their level of risk, distinguishing between high-risk AI systems, such as those used in critical sectors or with potential for significant impact, and low-risk AI systems. The EU emphasizes the importance of human oversight and accountability in the deployment of AI systems (see IPKat). Under the EU's proposed regulations, certain high-risk AI systems may be subject to strict requirements, including conformity assessments, explicit consent, and transparency obligations (EU AI Act). This approach aims to strike a balance between fostering innovation and protecting individuals and society from the potential risks associated with AI technologies.
Some reflections
Overall, the question of whether AI can be considered a PHOSITA remains the subject of a complex and ever-evolving policy debate. It requires careful consideration of AI's capabilities, limitations, and ethical implications, while also addressing legal and regulatory challenges. As AI continues to advance, policymakers in the US, EU, and around the world will need to collaborate and adapt their policies to ensure that AI is integrated responsibly and effectively in various domains, benefiting society as a whole.
Policy recommendations in this area can be made to ensure a balanced approach that considers the capabilities and limitations of AI systems while also safeguarding the rights and interests of inventors and society at large. Some potential recommendations may include:
- Establishing a framework for evaluating AI expertise: Policymakers could consider developing guidelines or criteria that assess the capabilities, reliability, and robustness of AI systems in a given field. This framework could consider factors such as the accuracy, interpretability, and generalizability of AI models, as well as the availability of relevant training data and the transparency of algorithms.
- Encouraging collaboration and interdisciplinary research: To bridge the gap between AI capabilities and human expertise, policymakers could promote collaboration between AI researchers and domain experts in various fields. Interdisciplinary research efforts can help ensure that AI systems understand the nuances and complexities of a particular domain, leading to more accurate assessments of their expertise.
- Continuous monitoring and evaluation: As the field of AI continues to evolve rapidly, policymakers should establish mechanisms for ongoing monitoring and evaluation of AI systems' capabilities. Regular assessments can help determine the extent to which AI systems can be considered specialists and inform any necessary updates to policies and regulations.
- Ethical considerations: Policymakers should also prioritize ethical considerations when addressing the question of AI expertise. This includes transparency in AI decision-making, avoiding biases, promoting fairness, and ensuring accountability.
- International collaboration and harmonization: Given the global nature of AI development and patentability, policymakers in the US and the EU should collaborate and harmonize their policies to the extent possible. This would ensure consistency in assessing AI expertise and avoid potential discrepancies in patentability criteria across jurisdictions.
Those above are just a few recommendations that can inform the ongoing debates surrounding AI expertise in the US and the EU. Overall, it is essential for policymakers to consider a multidisciplinary approach, consulting experts from AI research, intellectual property law, ethics, and other relevant fields to strike the right balance between encouraging innovation and protecting inventors' rights.
I fail to grasp why the enquiry should be "Can AI be considered a *person* having ordinary skill in the art?". How is it more relevant than asking whether the (notional, but human) skilled person can use an AI? Does it even make any difference to the end result, i.e. might the bar for inventive step or obviousness become higher because using an AI expands the field of what is obvious? If it doesn't, then framing the issue in terms of "can AI be considered a person for the purposes of patent law" is unhelpful in my opinion. If anything, I believe it muddles the discussion, much as the DABUS cases did for the question of inventorship in situations where AI was used at some point in the process of conceiving the invention.
What I find interesting is the question of whether or not a commonly available AI may take on the role of the person skilled in the art who figures in the EPO problem-solution approach. Let me explain:
Patents are, when you look at it, a market-disturbing legal oddity. The justification for the existence of patents is a deal that society wants to make with inventors: full disclosure of inventions that would otherwise remain hidden, in exchange for a limited-time monopoly on said invention.
The whole problem-solution approach (PSA) is designed to test whether an invention qualifies for this bargain. The PSA delivers an objective technical problem (OTP). The key question is then: is the claimed answer to the OTP out of reach of the general (non-inventive) population? If yes, a patent may be granted. If no, there should be no patent, since society already has access to this invention.
What if the general population has access to common AI that can solve the OTP? Society then has no interest in granting a patent, since the invention is already within reach, thanks to AI. For now this is still largely an academic question, but with AI's capabilities increasing, AI may become more of a skilled "person" than an average human over time. If these super-smart AIs are commonly available to members of the public, is it still justified to grant patents for inventions that are not within reach of average humans, but which are within reach of commonly available AIs? As I see it, there would be no justification for the bargain.
Examination would then boil down to the question of what would be a fair, hindsight-free OTP. Once that is defined, it is just a matter of serving that OTP to an AI with the training and technology of the patent's priority date, and seeing whether the answer rolls out. If it does, no patent. If it does not, it is an invention.
Effectively, this means that once AIs become as smart at combining information as humans, the threshold for inventive step should rise with the abilities of those AIs, with the result that fewer and fewer inventions will be "smart" enough to qualify for society's bargain.
None of this has anything to do with the philosophical question of whether AIs are persons. It has everything to do with why we have patents in the first place: a monopoly for inventions that are otherwise out of reach of the general public. AI can vastly increase the reach of that general public.
@Harm van der Heyden
The reference to PHOSITA in respect of AI systems conveys the false anthropomorphic perception of AI systems as "intelligent" in a human sense. "Artificial intelligence" is a metaphor, but a misleading one. It is wrong to isolate AI systems from human actions. AI systems cannot act autonomously; they are just tools driven, trained, configured, tested and used by human beings.
For a very well documented analysis, see "Clarifying Assumptions About Artificial Intelligence Before Revolutionizing Patent Law", Kim et al., GRUR International, 14 Feb 2022, https://doi.org/10.1093/grurint/ikab174. See also AIPPI Q272 resolution §4(a)-(e), which lists the human contributions to be considered for an AI system to yield outcomes of interest.
Patent law thus needs no revolution. AI-aided inventions, however, raise challenging issues, especially in respect of the sufficiency of disclosure requirement, the definition of the skilled person for inventive step and sufficiency assessment, and the "plausibility" issue.
As to sufficiency, I am quite pleased to give you credit for your insightful article "AI inventions and sufficiency of disclosure – when enough is enough", published in IAM Yearbook 2020. I had cited it in my account of EPO decision T 0161/18 (epi-information 4-2022), in which the Board remarkably raised an Art 83 objection ex officio for lack of disclosure of the training data.
It is of note that the revision of the EPO Guidelines recently reported on this blog includes an addition concerning the disclosure of training data in AI systems.
As to the definition of the skilled person, it is a fact that AI applications require interdisciplinary teams, including data scientists suitably competent in the field or type of data to be used for training, and specialists competent in the field of the invention. An interesting example of in-depth discussion of the skills required for AI applications can be found in the file of EP2449150, which relates to Agrosciences.