ARTICLE
24 February 2026

Accidental AI Forfeiture: How Inputting Data Into AI Can Destroy Patent Rights

Losey

Contributor


By Eric Poland

The integration of Generative AI into R&D is accelerating innovation, but it has introduced a silent potential killer of intellectual property: the inadvertent disclosure of proprietary inventions through inputting information into AI. Under 35 U.S.C. §102, these prompts may constitute a Public Disclosure, creating Prior Art that could invalidate claims for novelty. To preserve global patent eligibility, companies should adhere to the "File First, Disclose Later" doctrine.

For businesses seeking market exclusivity, managing this risk is a critical, yet often overlooked, component of a modern patent strategy.

1. The Bedrock of Patent Law: Priority, Disclosure, and Prior Art

The foundation of the patent system is an exchange: a government-granted monopoly in return for publicly teaching the invention. This trade is contingent on the invention meeting several requirements, including the novelty standard (the invention was not previously known or available to the public) and the nonobviousness standard (the invention would not have been obvious to a person of ordinary skill in the art based on publicly available information). In the U.S., these requirements are codified in 35 U.S.C. §§102 and 103.

Why is the Priority Date Important?

The Priority Date is the date to which a patent application claims priority; it fixes the point in time as of which novelty and nonobviousness are assessed. This date is typically the filing date of the first patent application (i.e., the priority application) to which the application at issue claims priority.

  • Why it Matters: The Priority Date acts as a legal fence. Generally, information that became available to the public before this date is considered Prior Art against your invention. Your patent claims must be novel and non-obvious over this body of Prior Art. The earlier you file, the less Prior Art exists, and the stronger your patent position becomes.

Public Disclosure vs. Prior Art

  • Prior Art: This is the collective body of knowledge that is publicly available before your Priority Date. Prior Art can include patents, patent applications, publications, products in public use or on sale, or information otherwise available to the public.
  • Public Disclosure: This is the act by which a piece of information (such as your novel invention) becomes Prior Art. A public disclosure occurs when the invention is made available to the public without a binding obligation of confidentiality. Examples of public disclosure include publishing an article, offering a product for public sale, or simply disclosing the information to a third party without protection in place to keep the information confidential.

2. When Does an AI Prompt Become a Public Disclosure?

A public disclosure is triggered when novel, enabling information is made available to the public, which may occur when the information is input into an AI tool. For an input into an AI tool to potentially affect your patent rights, the input must satisfy these two standards:

Enabling Detail: The 'Undue Experimentation' Standard: The information provided to the AI must contain sufficient technical detail (the secret sauce) to allow a "Person Having Ordinary Skill in the Art" (POSITA) to replicate or practice the invention without undue experimentation (35 U.S.C. §112(a)). The enablement standard, affirmed by the Supreme Court in cases like Amgen Inc. v. Sanofi, 598 U.S. 594 (2023), means you cannot claim a monopoly over something you have not taught the public how to use. Enablement can also be used to rebut a reference's status as prior art: if a public disclosure is not enabling, it cannot serve as anticipating prior art for that teaching. For example, a prompt containing a novel, functional algorithm may be deemed enabling, while a prompt describing only a high-level idea without a functional algorithm may not.

Publicly Available: The 'Reasonable Diligence' Test: The information must be made accessible to the public. Courts have long held that information is "publicly available" if an interested person exercising reasonable diligence could have accessed it (see, e.g., In re Hall, 781 F.2d 897 (Fed. Cir. 1986)). This is the key legal battleground emerging with Generative AI, as the risk hinges on whether the AI provider is legally treated as a private confidant or a public utility.

| Legal Standard | Definition | Law/Caselaw |
| --- | --- | --- |
| Enabling Detail (The "What") | The disclosure must contain sufficient technical detail (the secret sauce) to allow a "Person Having Ordinary Skill in the Art" (POSITA) to replicate or practice the invention without undue experimentation. | 35 U.S.C. §112(a) (Enablement); Amgen Inc. v. Sanofi, 598 U.S. 594 (2023) |
| Publicly Available (The "Where") | The information must be accessible to the public without a binding duty of confidentiality. It is available if an interested person exercising reasonable diligence could have accessed it. | 35 U.S.C. §102(a) (Prior Art); In re Hall, 781 F.2d 897 (Fed. Cir. 1986) |

How Does the Obviousness Trap (§103) Apply?

A disclosure does not need to reveal the entire invention in one document to affect patentability. Patent examiners are permitted to combine multiple public references to demonstrate that an invention would have been obvious to a Person Having Ordinary Skill in the Art (POSITA) (35 U.S.C. §103). This principle, reaffirmed by the Supreme Court in KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398 (2007), means that sharing pieces of your invention across different public AI platforms can supply the components an examiner, or a patent challenger, needs to find your final claim unpatentable.

If your AI prompt includes technical details or specific implementations that would be obvious to combine with other publicly known information, then it could be seen as a public disclosure and may affect your ability to obtain a patent even if the AI prompt does not include all the information necessary to recreate your invention.

3. Legal Risk Management: Open vs. Closed AI Systems

In the AI context, the risk of creating prior art when inputting information may hinge on whether the AI provider (and their system's architecture) is viewed as a "confidential repository" (like an attorney) or a "public message board."

Since no court has yet explicitly ruled on whether an input to a public generative AI platform constitutes an invalidating public disclosure under patent law, the analysis would likely look to the contractual language and data architecture of the service provider. This is a developing area of law where the following distinctions may be critical:

| AI System Type | Data Handling & Contractual Terms | Current Disclosure Risk |
| --- | --- | --- |
| Open/Public AI (e.g., free consumer versions of ChatGPT, Gemini, Claude) | The Terms of Service (ToS) often reserve the right to store your prompts and use them to train and refine the model. | High Risk. By agreeing to the ToS, the user may be contractually forfeiting confidentiality. If the novel, enabling prompt is ingested for training, it could be disseminated to an unauthorized third party (the provider) and potentially made accessible across the AI's future knowledge base, potentially satisfying the "publicly available" disclosure test. |
| Closed/Enterprise AI (e.g., paid business accounts, private LLM instances) | The provider typically offers contractual assurances (often subject to separate agreements) that your input data is not stored and not used for model training. | Low Risk. This environment is legally analogous to sharing information under a strong Non-Disclosure Agreement (NDA). This may be an acceptable environment for sharing proprietary technical details. |

The takeaway is clear: your input remains proprietary information only in closed systems. Putting it into an open system can mean surrendering control, and potentially your global patent rights.

4. Why is "File First, Disclose Later" the Best Strategy?

Many countries maintain a strict "Absolute Novelty" standard (such as those under the European Patent Convention and in China), in which any public disclosure (whether from the applicant, inventor, or other parties) at any time (even one day) before the priority date will be considered prior art.

| Jurisdiction Type | Standard | Risk & Deadline |
| --- | --- | --- |
| United States | Relative Novelty | Offers a one-year grace period from the inventor's first public disclosure to file a U.S. patent application. This window is a valuable safeguard. |
| Most Foreign Countries | Absolute Novelty | Requires the patent application to be filed before any public disclosure occurs. |

Some foreign laws, including the EPC (Article 55) and Japanese Patent Law (§ 30), offer limited exceptions:

  • Unauthorized Disclosure (Theft): If a third party steals and publishes your inventive concept, many jurisdictions offer a six-month window (or similar period) to file, provided the disclosure was the result of an "evident abuse" of the inventor's rights.
  • Official Presentations: Some countries, like Japan, provide a limited grace period (up to one year) for certain public disclosures made by the inventor, such as presentations at officially recognized scientific conferences.

When there is a public disclosure, a comprehensive patent filing strategy must first establish the date of the first public disclosure, as this action triggers potential bar dates (deadlines by which the patent application must be filed or face serious prior-art obstacles).

The Bottom Line: Public disclosures before the filing date complicate patent filing strategy and can raise questions about a patent's validity. The exceptions are narrow and difficult to prove. For any company seeking global protection, the safest, most cost-effective rule remains: File first, disclose later. Any unauthorized or inadvertent disclosure, such as through an AI prompt, requires immediate, costly, and complex legal triage to determine if patent rights have been irrevocably lost or compromised.
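As a rough illustration of the deadline arithmetic described above (a one-year U.S. grace period from first disclosure, and absolute novelty elsewhere), the triage can be sketched as a small helper. The function name and the 365-day approximation are my own assumptions for illustration, not legal advice:

```python
from datetime import date, timedelta

def bar_dates(first_public_disclosure: date) -> dict:
    """Sketch of filing deadlines triggered by a first public disclosure.

    Simplified: the U.S. grace period is approximated as 365 days from the
    inventor's first public disclosure; absolute-novelty jurisdictions treat
    any pre-filing disclosure as prior art. Real deadlines turn on the facts
    and require counsel's analysis.
    """
    return {
        # Last day to file a U.S. application relying on the grace period.
        "us_grace_period_ends": first_public_disclosure + timedelta(days=365),
        # In absolute-novelty jurisdictions the disclosure is already prior
        # art; only an application filed BEFORE this date avoids it.
        "absolute_novelty_bar": first_public_disclosure,
    }

# Example: a prompt submitted on 24 February 2026 starts the clock that day.
deadlines = bar_dates(date(2026, 2, 24))
```

The key asymmetry the sketch captures: in absolute-novelty jurisdictions there is no window at all, which is why the first-disclosure date must be pinned down before any filing strategy can be set.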

5. Advanced Disclosure Risks: Extrapolation and Learning

Beyond the risk of the AI simply repeating your confidential information, its core functionality introduces subtle mechanisms that can compromise novelty or support an obviousness challenge. Modern AI models introduce sophisticated, indirect disclosure risks that are driven by the technical function of the AI and the contractual terms that govern its use.

A. The Extrapolation Risk (The "Black Box" Problem)

Definition: This occurs when the AI system takes a non-enabling fragment of your confidential input and combines it with its vast internal store of public prior art to generate an output that is fully enabling and may be considered a public disclosure. The AI acts as the accidental combiner of your input, prior art, and its own inferential leaps, creating a complete, published reference that defeats novelty or supports an obviousness rejection.

For example, you may input only a fraction of your invention (a unique component or a new material use), assuming it is not enabling on its own. However, the AI possesses a vast knowledge base of prior art. If the model combines your fragmentary input with readily available public information to generate an output that is fully enabling, that output could be visible to the next user, effectively completing the public disclosure of your entire invention. In this scenario, the AI acts as the accidental combiner of prior art, potentially creating an invalidating reference.

B. The Learning Risk (Competitive Advantage Leak)

Definition: This risk occurs when the specific technical logic behind your confidential or sensitive problem statement is used to train the public model. The AI becomes optimized to solve problems in your niche, providing a subtle competitive advantage to the next user who queries the model in the same domain.

Even if your prompt contains only non-confidential, technical problems or requirements, using a public AI model carries the risk that your input is used for future model training. This means that:

  • Your Strategy is Shared: The underlying logic of your query helps the AI become better at solving similar problems.
  • Competitors Gain an Edge: Another client who queries the AI for a solution in your niche may benefit from the model having been trained on your specific requirements or problem statements, subtly accelerating their R&D path and giving them a competitive advantage gained from your inputted data.

C. The Self-Sabotage Risk: Challenging Your Own Enablement

While not yet established in legal precedent, there is a risk that an AI record can be weaponized against your own patent later on, particularly during litigation when opponents conduct deep discovery into your R&D process.

The Threat to Enablement (§112): If the records show that your engineers struggled to get the AI to provide a functional implementation—continually asking questions on basic mechanisms of implementation without achieving a functional result—this evidence could be used to argue that you never fully conceived or enabled the invention before your patent was filed. If your final patent application relies only on the non-functional implementations suggested by the AI, it could be invalidated for failing the enablement requirement because a POSITA would still require undue experimentation to practice the claimed invention.

These threats and others may not be fully understood until they appear in litigation down the road, much too late for the affected parties.

6. Case Study: The Fatal Prompt

Imagine a mid-level engineer, Jane, working on a new compound used in industrial batteries, Novel Compound X. The compound uses a specific, proprietary 12-step molecular synthesis method that reduces toxicity. No patent has been filed yet.

To quickly check if a specific reactant in Step 7 is available from a new supplier, Jane pastes the following prompt into a public AI chat window:

"Please provide the current supplier list for reactant Z or suitable substitutions. We use it in our unique synthesis process for Novel Compound X, which involves a 12-step molecular bond sequence, specifically the novel cross-linking reaction at step 7 (using catalyst Y) to achieve toxicity reduction for large-format storage batteries."

| Risk Element | Analysis of the Prompt | Disclosure Outcome |
| --- | --- | --- |
| Enabling Detail? | YES. Jane disclosed the compound's novel function (toxicity reduction), the mechanism (12-step molecular bond sequence), the novel component (cross-linking reaction at step 7), and the specific catalyst (Y). A POSITA in battery chemistry could now use that description as a starting point to reverse-engineer the process. | The disclosure may be technically sufficient to function as Prior Art. |
| Publicly Available? | MAYBE. Jane used a public AI model whose ToS permits using input data for training. By clicking "Accept," Jane agreed to surrender confidentiality. The AI provider is now an unauthorized third party holding the information. | The inputted information may result in a public disclosure and may trigger the loss of Absolute Novelty protection globally. The disclosure may prevent the company from obtaining broad patent protection in major foreign markets, and in the US if a priority application isn't filed within one year. |

The act of submitting this single prompt may have created an immediate, irreversible public disclosure. If Jane's company attempts to file a patent application, the date of the AI prompt or resulting disclosure may become the new, earlier public disclosure date. For most of the world, including critical markets in Europe and China, this invention may be unpatentable after the public disclosure. Even in the U.S., the company has only one year from the prompt date to file, creating a major time constraint and complicating the prosecution strategy.

7. The Evolving Legal Analogies: Why the Risk is Real

While the legal standard for AI-based patent disclosure remains unsettled, recent, high-stakes litigation in related IP fields supports the need for extreme caution regarding confidential input:

  • Trade Secret Misappropriation via Input: Cases like OpenEvidence v. Pathway Medical, No. 1:25-cv-10471 (D. Mass. 2025) illustrate that courts are examining the vulnerability of AI prompts and system data. If a company fails to take reasonable measures (like using a closed, enterprise account) to protect its proprietary information, a court may rule that the right to secrecy has been forfeited. This forfeiture of secrecy directly supports the argument for public disclosure in patent law.
  • Copyright Training Data and Public Dissemination: The Copyright Wars (e.g., N.Y. Times Co. v. Microsoft Corp., 757 F. Supp. 3d 594) and various lawsuits against generative AI platforms confirm the massive volume and scope of data ingestion, undermining any claim that the input remained private. These cases establish that AI providers operate at a massive scale of data utilization, reinforcing the point that any input made without confidentiality protections is subject to wide-scale, unauthorized use and dissemination. If an AI is trained on your material, the internal dissemination of your data is vast, severely compromising confidentiality.

8. The Official Warning: USPTO Guidance on Confidentiality

The U.S. Patent and Trademark Office (USPTO) has formally acknowledged and warned of the risk of using AI in the patent process, making its position clear on the ethical and procedural obligations of patent practitioners and explaining how it views AI and confidentiality.

The USPTO guidance specifically warns that practitioners must maintain the Duty of Client Confidentiality (as required by 37 C.F.R. 11.106):

  1. Practitioner's Duty (37 C.F.R. 11.106): The patent attorney is bound to maintain the confidentiality of all client information. This professional duty is a crucial reasonable measure of secrecy that keeps the invention protected until the patent application is filed.
  2. The AI Breach: When a practitioner (or someone under their supervision) inputs novel, enabling data into a public AI, the model's Terms of Service (ToS) usually grant the AI vendor the right to store that prompt and use it for model training. This action constitutes a breach of the duty of confidentiality because the information is shared with an unauthorized, non-confidential third party (the AI provider).

Implication for Public Disclosure: The USPTO's caution shows that it views the act of inputting novel, confidential information into an unauthorized, public AI system as a failure to maintain the necessary procedural safeguards and as a breach of confidentiality. That breach in turn creates the risk that the inputted information will be treated as a public disclosure. When a practitioner breaches the Duty of Confidentiality by sharing novel information with a public AI provider (one contractually permitted to use and store that data), the USPTO is signaling that the shared information is no longer confidential. This may satisfy the "publicly available" requirement under §102 because the invention has been placed outside a binding, protected sphere of secrecy.

By emphasizing the practitioner's existing Duty of Candor and Good Faith (37 C.F.R. 1.56), the USPTO mandates that attorneys must proactively manage AI risk to prevent an inadvertent public disclosure that could be fatal to the patentability of the claimed invention. By extension, applicants and inventors should use the same level of caution to avoid an inadvertent public disclosure.

9. Best Practices for Patent-Conscious Innovation

Protecting your patent rights requires a strict protocol for integrating AI into your workflow:

  • Do Not Share Novel, Enabling Information: Assume all information entered into any public-facing, consumer-grade AI model is an immediate, irreversible public disclosure. If you are using public AI to brainstorm or refine the technical details of a new idea, STOP.
  • Prioritize Background Research: Use AI initially only as a useful tool for summarizing existing, already-public background information or "State of the Art." This carries lower disclosure risk.
  • Engage Enterprise-Grade Solutions: If a new invention must be processed by an AI, you should use a dedicated, private enterprise solution that contractually guarantees:
    1. Restricted data retention of your prompts.
    2. No use of your prompts for model training.
    3. A duty of confidentiality regarding your input.

Take Control of Your Innovation Timeline

The power of AI is immense, but the legal consequences of its misuse (or misunderstanding its use) are swift and potentially devastating. By treating your proprietary inputs as confidential and adhering to a file-first, disclose-later philosophy, you can leverage AI innovation while fully protecting your firm's global IP portfolio. Included is an AI Confidentiality and Patent Disclosure Checklist to immediately assess and improve your internal processes.

10. AI Confidentiality and Patent Disclosure Checklist

This checklist is designed for R&D teams, inventors, and legal departments to assess the procedural steps required before using any Artificial Intelligence (AI) tool during the innovation cycle. Use this before inputting any technical details related to a novel invention into an AI tool.

Part 1: Preliminary Vetting (Is the Input Confidential?)

Answer this question for the information you intend to input into the AI tool:

| Question | Assessment |
| --- | --- |
| 1. Does the input contain non-public or confidential data? For example, is the information a Trade Secret, internal R&D data, or proprietary information that has NOT been previously disclosed outside of the company? | __Yes / __No |
| 2. Does the input relate to a potentially patentable invention? For example, is the subject matter a novel improvement or variation over existing technology, for which patent protection is desired? | __Yes / __No |

Action: If you answer "Yes" to either question, proceed immediately to Part 2 for validation against the AI criteria. Do not proceed unless all criteria in Part 2 are verified. If you answer "No" to both questions, you should still consult with an attorney if you have any doubts about the content's sensitivity.

Part 2: AI Platform Vetting and Confidentiality Guarantee

If the information contains confidential data or relates to a potentially patentable invention (from Part 1), the AI platform must meet the following criteria. DO NOT PROCEED unless all boxes below are checked and verified via contract/ToS.

| Action / Requirement (Requires Verification) | Status (Must be "Verified") |
| --- | --- |
| 1. Data Training Opt-Out/Exclusion: Is there a contractual guarantee that your input prompts and outputs will NOT be used by the AI provider to train the underlying model? | __ Verified |
| 2. Data Retention Policy: Does the provider limit retention of your data, so that your prompt history is not stored on its servers after the session is complete? | __ Verified |
| 3. Enterprise/Private Account: Are you using a dedicated, paid Enterprise or Private Instance of the AI tool (i.e., not the free, public version)? | __ Verified |
| 4. Confidentiality Clause: Does the service agreement explicitly include a Duty of Confidentiality regarding your input data, similar to an NDA? | __ Verified |
| 5. Data Geography: Is the data processing environment within a jurisdiction that meets your company's compliance and data sovereignty requirements (e.g., EU-GDPR, US-only)? | __ Verified |
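For teams that want to automate this gate in an internal tool, the two-part screen above can be sketched in code. This is an illustration only; the field and function names are my own and simply mirror the checklist questions:

```python
from dataclasses import dataclass

@dataclass
class PlatformVetting:
    """Part 2 criteria from the checklist above (names are illustrative)."""
    no_training_on_inputs: bool      # 1. contractual training opt-out
    limited_data_retention: bool     # 2. prompts not retained after session
    enterprise_account: bool         # 3. paid enterprise/private instance
    confidentiality_clause: bool     # 4. NDA-like duty of confidentiality
    compliant_data_geography: bool   # 5. acceptable processing jurisdiction

def may_input(contains_confidential: bool,
              relates_to_patentable_invention: bool,
              vetting: PlatformVetting) -> bool:
    """Apply the two-part gate: Part 1 screens the information;
    Part 2 requires EVERY platform criterion to be verified."""
    if not (contains_confidential or relates_to_patentable_invention):
        # Part 1: no sensitive input; proceed (but still use judgment).
        return True
    # Part 2: all five criteria must be verified before inputting.
    return all(vars(vetting).values())
```

The design choice worth noting: the gate is conjunctive. A single unverified criterion (say, a missing training opt-out) blocks the input, which matches the checklist's "DO NOT PROCEED unless all boxes are checked" instruction.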

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
