
*This article was originally published in Law360 on September 25, 2024.

How can artificial intelligence inventions be protected? An analysis of guidance issued by the U.S. Patent and Trademark Office on July 16, 2024, suggests that there are at least two paths for establishing that an AI invention is eligible for protection, and that the choice of strategy may turn on how broadly the invention is applied.

1. Focus on the Implementation Details

One strategy that can be derived from the new guidance’s examples is to selectively include implementation details, particularly details describing operations that could not practically be performed in the human mind, while avoiding explicit recitation of the underlying math. Including such details helps a claim avoid being characterized as abstract and differentiates it from a task that could be carried out mentally.

This strategy, and how it can be successfully applied, is demonstrated by comparing two neural network training examples: training steps from an anomaly detection method in the new examples provided with the July 2024 guidance,[1] and a method of training a neural network for facial detection from examples issued previously in 2019.[2]

In both cases, the claims recited steps of obtaining training data,[3] manipulating it to generate input data for a neural network,[4] and training the neural network with the input data.[5] However, the facial detection training from the 2019 examples was found to be patent-eligible, while the anomaly detection training from the new examples was not, and the difference came down to the details included in the claims.

For the anomaly detection training, the claim stated that the data manipulation included discretization, and that training the neural network included backpropagation and gradient descent. This led the USPTO to find that claim directed to an ineligible abstract idea, because discretization was simple enough that it could be practically performed in a human mind,[6] and backpropagation and gradient descent were both interpreted as mathematical calculations.[7]
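As a rough illustration of why these steps drew that treatment, the following Python sketch is hypothetical and not drawn from the example's claim language; the data values, bin edges, and toy loss function are invented for illustration. It shows discretization by binning, a rounding-style operation, alongside a single gradient descent update, a purely mathematical calculation.

```python
import numpy as np

# Hypothetical illustration: discretizing continuous training data by binning.
# Each continuous value is replaced by the index of the bin it falls into,
# the kind of rounding/binning the guidance treats as practically performable
# in the human mind.
continuous_data = np.array([0.12, 0.47, 0.51, 0.93])
bin_edges = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
discretized = np.digitize(continuous_data, bin_edges)  # -> [1, 2, 3, 4]

# Hypothetical illustration: one gradient descent update on a single weight
# for the toy loss L(w) = w**2. The update rule is a mathematical calculation,
# which is how the guidance characterizes gradient descent and backpropagation.
weight, learning_rate = 0.5, 0.1
gradient = 2 * weight                     # derivative of w**2 with respect to w
weight = weight - learning_rate * gradient
```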

By contrast, the facial detection training claim recited that the data manipulation included acts such as “rotation” and “contrast reduction,” and no detail at all was provided for the neural network training beyond the data that it used. This led the USPTO to find the claim eligible, since the recited manipulations could not practically be performed in the human mind,[8] and since the mathematical concepts underlying the training were not actually recited in the claim.[9]
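The following Python sketch is a hypothetical illustration of the kinds of image transformations recited in the facial detection example, rotation and contrast reduction, as they might be applied to expand a training set using the Pillow library; the placeholder image and parameter values are invented and are not part of the example.

```python
from PIL import Image, ImageEnhance

# A blank grayscale image stands in for a collected digital facial image.
original = Image.new("L", (64, 64), color=128)

rotated = original.rotate(15)                                   # rotate by 15 degrees
lower_contrast = ImageEnhance.Contrast(original).enhance(0.5)   # reduce contrast by half

# The transformed copies join the originals to form the modified training set.
training_images = [original, rotated, lower_contrast]
```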

2. Clarify the Invention’s Downstream Benefits

The new guidance’s examples indicate that another strategy for establishing eligibility is to explicitly recite an AI invention’s downstream benefits in a claim. This is demonstrated by the USPTO’s treatment of the claims in an example that used machine learning to separate speech sources in a recording.[10]

In analyzing this example, the USPTO indicated that machine learning steps of using a neural network to generate embedding vectors and partitioning those embedding vectors into clusters corresponding to different speech sources were both abstract ideas.[11]
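To make those two steps concrete, the following Python sketch is a hypothetical illustration of generating embedding vectors and partitioning them into per-source clusters; the random embeddings, dimensions, and use of k-means are stand-ins for whatever network output and clustering method an actual claim might describe.

```python
import numpy as np
from sklearn.cluster import KMeans

# Random vectors stand in for the neural network's embedding output for each
# time-frequency bin of the mixed speech recording.
rng = np.random.default_rng(0)
embedding_vectors = rng.normal(size=(1000, 20))   # 1000 bins, 20-dim embeddings

# Partition the embedding vectors into clusters, one per speech source.
num_sources = 2
clusters = KMeans(n_clusters=num_sources, n_init=10, random_state=0).fit_predict(
    embedding_vectors
)
# clusters[i] labels which speech source time-frequency bin i is assigned to.
```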

However, while it found that the broadest artificial intelligence claim was not eligible, it concluded that two narrower claims focusing on specific applications — i.e., stitching together separated speech sources to create a new signal with extraneous speech removed, and generating a transcript of speech from a target speech source — were both eligible for protection.

The reason was that, in both cases, the hypothetical disclosure accompanying the example had described the downstream applications as improving on existing technology, and the claims explicitly recited the steps for implementing those applications.[12]
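As a hypothetical illustration of the first downstream application, the Python sketch below stitches together only the wanted separated sources to form a new signal with the extraneous speech removed; the source names and placeholder waveforms are invented for illustration and do not reproduce the example's claim steps.

```python
import numpy as np

# Placeholder waveforms stand in for the per-source signals recovered by the
# speech separation steps (one second of audio at a 16 kHz sampling rate).
separated_sources = {
    "speaker_a": np.zeros(16000),
    "speaker_b": np.zeros(16000),
    "background_chatter": np.zeros(16000),   # the extraneous, unwanted source
}
wanted = ["speaker_a", "speaker_b"]

# Sum only the wanted sources; the extraneous source is simply dropped,
# yielding a new speech signal without the unwanted speech.
clean_signal = sum(separated_sources[name] for name in wanted)
```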

Additional Guidance Takeaways

While combinations are clearly possible, the fact that these different strategies focus on different parts of a claim implies that different approaches are likely to be more appropriate for different types of inventions.

More particularly, for AI inventions that are broadly applicable to a wide range of applications — e.g., new foundation models — the first strategy of selectively including details of the invention’s implementation may be more appropriate, since enumerating and describing the benefits of the commercially relevant applications of this type of invention may not be feasible.

Conversely, for AI inventions with a limited class of beneficial uses — e.g., particular AI applications — the second strategy of describing and reciting downstream benefits may make more sense, since doing so can short-circuit the question of how many, and what kinds of, details to include about the AI.

Of course, what details a claim should include, and what downstream benefits it may make sense to recite, are decisions that should be made on a case-by-case basis and will be influenced not only by patent eligibility but also by factors such as the similarity of the prior art and the patent applicant’s business objectives.

However, while other considerations exist, through its detailed examples, the USPTO’s latest guidance has provided a road map for what appear to be at least two viable paths for establishing the eligibility of AI inventions.


[1] July 2024 Subject Matter Eligibility Examples, Example 47 (hereinafter “Example 47”), available at https://www.uspto.gov/sites/default/files/documents/2024-AI-SMEUpdateExamples47-49.pdf.

[2] Subject Matter Eligibility Examples: Abstract Ideas, Example 39 (hereinafter “Example 39”), available at https://www.uspto.gov/sites/default/files/documents/101_examples_37to42_20190107.pdf.

[3] Example 47 (“receiving, at a computer, continuous training data”); Example 39 (“collecting a set of digital facial images from a database”).

[4] Example 47 (“discretizing, by the computer, the continuous training data to generate input data”); Example 39 (“applying one or more transformations to each digital facial image including mirroring, rotating, smoothing, or contrast reduction to create a modified set of digital facial images; and creating a first training set comprising the collected set of digital facial images, the modified set of digital facial images, and a set of digital non-facial images”).

[5] Example 47 (“training, by the computer, the ANN based on the input data and a selected training algorithm to generate a trained ANN, wherein the selected training algorithm includes a backpropagation algorithm and a gradient descent algorithm”); Example 39 (“training the neural network in a first stage using the first training set”).

[6] July 2024 Subject Matter Eligibility Examples at 7 (“step (b) recites discretizing continuous training data to generate input data by processes including rounding, binning, or clustering continuous data, which may be practically performed in the human mind using observation, evaluation, judgment, and opinion.”).

[7] July 2024 Subject Matter Eligibility Examples at 6 (“The training algorithm is a backpropagation algorithm and a gradient descent algorithm. When given their broadest reasonable interpretation in light of the background, the backpropagation algorithm and gradient descent algorithm are mathematical calculations.”).

[8] Subject Matter Eligibility Examples: Abstract Ideas at 9 (“the claim does not recite a mental process because the steps are not practically performed in the human mind.”).

[9] Id. (“While some of the limitations may be based on mathematical concepts, the mathematical concepts are not recited in the claims.”).

[10] July 2024 Subject Matter Eligibility Examples, Example 48 (hereinafter “Example 48”), available at https://www.uspto.gov/sites/default/files/documents/2024-AI-SMEUpdateExamples47-49.pdf.

[11] July 2024 Subject Matter Eligibility Examples at 19:

The claim also recites a step (c) that determines “embedding vectors V using the formula V = fθ(X), where fθ(X) is a global function of the input signal.” The recited formula is clearly a mathematical formula or equation, and the determination is a mathematical calculation. Thus, the claim recites a mathematical formula or equation as well as a mathematical calculation, both of which fall within the mathematical concepts grouping of abstract ideas.

Id. at 22:

Step (d) recites “partitioning the embedding vectors V into clusters corresponding to the different sources sn.” The claim places no limits on how this partitioning is performed. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, “partitioning . . . into clusters” encompasses a human arbitrarily selecting groups of vectors and mentally assigning them to clusters.

[12] For the generation of a new signal with extraneous speech removed, see July 2024 Subject Matter Eligibility Examples at 24:

While on their own steps (b)-(e) recite judicial exceptions, steps (f) and (g) are directed to creating a new speech signal that no longer contains extraneous speech signals from unwanted sources. The claimed invention reflects this technical improvement by including these features. Further, … these steps reflect the improvement described in the disclosure. Accordingly, the claim is directed to an improvement to existing computer technology or to the technology of speech separation, and the claim integrates the abstract idea into a practical application.

For creation of a transcript for a target speech source, see July 2024 Subject Matter Eligibility Examples at 28:

The ordered combination of the steps of receiving a mixed speech signal, processing the speech signal to produce masked clusters, converting the masked clusters into separate signals in time domain, extracting spectral features from one such converted signal, and generating a sequence of words from the extracted spectral features to produce a transcript reflects the technical improvement discussed in the disclosure. Accordingly, the claim is directed to an improvement to existing speech-to-text technology, and the claim integrates the abstract idea recited in steps (b), (c), and (d) into a practical application of speech-to-text conversion of a speech signal corresponding to one source of the mixed speech signal.