Artificial intelligence, insufficiency and inventive step: detailed disclosure needed at the EPO

19/05/2022

Two recent decisions of the European Patent Office’s Boards of Appeal illustrate a risk faced by patent applications for artificial intelligence inventions: if the application does not describe the AI in enough detail then it can be refused for insufficient disclosure and lack of inventive step.

The two decisions are T161/18 from May 2020 and T1191/19 from May 2022. Both relate to inventions that apply AI to solve a problem in a medical setting, but they are instructive for AI inventions more generally. In both cases, a lack of detail in the description of the AI led to a finding of insufficient disclosure and lack of inventive step.

T161/18 (Equivalent Aortic Pressure)

In T161/18, the invention related to determining an aortic blood pressure in a patient based on blood pressure measurements taken at the periphery. A trained AI is used to transform the peripheral blood pressure measurements into an aortic blood pressure.

For sufficiency, a trained AI was needed to put the invention into effect, but the application did not disclose how to obtain such a trained AI. The application did mention certain requirements for a data set upon which the AI should be trained – that it should be based on a wide range of patients of different ages, genders, constitutional types, health conditions and the like so that the AI does not become too specialised – but ultimately the application left too much for the skilled person to determine. From the limited disclosure in the application, developing the training data set and training the AI was considered to be an undue burden for the skilled person.

For inventive step, the claims were distinguished over the closest prior art by the non-specific use of a trained AI to transform the peripheral blood pressure measurements and obtain an aortic blood pressure. The appellant argued that this difference provided certain advantages. But, since the claim did not detail how the trained AI is particularly adapted to this technical field, such as requirements for the training of the AI to bring about the technical effect, the Board was not convinced that these advantages were achieved over the whole scope of the claim. The Board discounted any technical effect in its assessment of inventive step and found that it would be obvious to use AI as an alternative to the techniques of the closest prior art, based on general trends in technology and the increasing use of AI in many fields.

T1191/19 (Neuroplasticity)

In T1191/19, the invention related to selecting between different neuroplasticity interventions for patients based on a database of patients and the outcomes of the different interventions for those patients. The selection is performed using an AI and the patients are classified based on a ‘meta learning scheme’. The closest prior art was a description of an AI using the meta-learning scheme.

For sufficiency, the Board found that the application did not disclose how the meta-learning scheme was applied to the problem in a manner sufficiently clear and complete for it to be carried out by the person skilled in the art. The application did not disclose any example set of training data and validation data, which the meta-learning scheme required as input. The Board commented that, “at the level of abstraction of the application, the available disclosure is more like an invitation to a research programme”.

For inventive step, the appellant faced the issue that the meta-learning scheme was itself disclosed in the closest prior art. Beyond a reiteration, at an abstract level, of the scheme disclosed in the closest prior art, the Board could not see in the claimed invention any non-obvious detail of how that meta-learning scheme was applied to the problem of selecting a neuroplasticity intervention.

Sufficiency and inventive step: a common deficiency

While both cases were found to be unpatentable for lack of sufficient disclosure in the application as a whole and lack of inventive step of the claims, both objections stemmed from the same underlying problem: the application did not contain enough detail as to how to carry out the invention. The AI needed to carry out the invention was described in so little detail that the skilled person could not carry out the invention without embarking on a research programme.

The failures on inventive step had the same root cause. The same disclosure that was missing for sufficiency could also have been used in the claims to describe how the AI is adapted to the problem at hand, such that the purported advantages are achieved across the whole scope of the claims.

Conclusion

Both of these decisions serve as a reminder to applicants for AI inventions in Europe to provide sufficient detail in their applications for the skilled person to be able to produce a trained AI that puts the invention into effect.

The Board indicated that a full set of training data provided in the application would meet this requirement, although we expect that it could also be met with a more limited disclosure if greater reliance is placed on the common general knowledge of the skilled person. For example, a description of how to assemble a set of training data and how to train the AI on that data might suffice. Since new material cannot be added to an application after it has been filed, it may be safest to disclose as much as possible in the application as filed, erring on the side of over-disclosure rather than under-disclosure.
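
Purely by way of illustration, and not taken from either decision, the sketch below shows the level of concreteness such a description might aim for: it states what the training examples represent, how the data set is assembled to cover a range of patient groups, how the model is trained, and how the purported effect is checked on held-out data. The model type, features and data are hypothetical placeholders.

```python
# Hypothetical sketch only - not the method of T161/18 or T1191/19.
# It illustrates the kind of training description an application might give:
# what the inputs and target are, how the data set is assembled, and how the
# model is trained and validated.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Placeholder data set: each row is one patient. The features stand in for
# values derived from peripheral blood pressure measurements plus patient
# characteristics (age, sex, constitution, health condition), sampled to
# cover a wide range of patient groups rather than a single narrow cohort.
n_patients = 1000
X = rng.normal(size=(n_patients, 6))
# Synthetic target standing in for the measured aortic blood pressure.
y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=n_patients)

# Hold out a validation split so the claimed advantage can be demonstrated
# on unseen patients rather than merely asserted.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Train a small neural network regressor (an arbitrary choice here).
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

print("Validation mean absolute error:",
      mean_absolute_error(y_val, model.predict(X_val)))
```

An applicant would of course describe real measurements and a real training protocol rather than placeholders, but even this level of detail (what the data are, how they are gathered, and how training and validation proceed) goes well beyond the abstract statements criticised in the two decisions.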

This requirement extends not just to the description but to the claims as well. An extremely detailed description of how to obtain one example of a trained AI that can perform the invention may readily meet the sufficiency requirements, but might not provide a good basis for broad claims in Europe. It may therefore be safest to include dependent claims in the application as filed that specify, in general or abstract terms, the training features needed to bring about the advantages. Ideally these would be at varying levels of specificity, for maximum flexibility in later amendment.

Finally, it is worth noting that the Board in T161/18 was not convinced that the purported advantages of the trained AI were achieved over the whole scope of the claim. This touches on the issue of ‘plausibility’ (discussed in our previous articles), which is a requirement that a technical effect relied on to demonstrate inventive step must be made plausible by the specification, and that it should be plausible over the whole scope of the claim. The purpose of the plausibility requirement is to exclude speculative applications from patentability, i.e. where the purported inventors have not actually invented something at the time of filing, but instead seek to gain a monopoly based on a prediction or assertion.

There is no suggestion that the inventors in T161/18 or T1191/19 were mere ‘armchair inventors’ who had not actually contributed anything to the art. But, given the difficulties these applications faced before the Boards of Appeal, any applicants looking to protect AI inventions in Europe might consider providing experimental data in their applications to help persuade examiners that the advantages are actually achieved, with such experimental data preferably linked to the scope of the claims.

This article is for general information only. Its content is not a statement of the law on any subject and does not constitute advice. Please contact Reddie & Grose LLP for advice before taking any action in reliance on it.