I. Introduction
Much ink has been spilt recently regarding preserving the attorney-client privilege and work-product protection when parties or counsel use open-source artificial intelligence in litigation. The natural extension, however, is how courts should treat litigants’ use of open-source AI with respect to documents in litigation, whether proprietary, confidential, or merely exchanged between parties.
Many state and federal courts, at least in California, have created Model Protective Orders to govern parties’ exchange and use of documents in litigation and to avoid bickering and law-and-motion practice over attempts to draft bespoke protective orders. Often, however, such Model Protective Orders fail to deal adequately with litigants’ use of open-source AI to assist with documents exchanged in litigation or with drafting or responding to pleadings that mention confidential or proprietary information.
Until now. In Morgan v. V2X, Inc., Civil Action No. 25–cv–01991–SKC–MDB (D. Colo. March 30, 2026), Magistrate Judge Maritza Dominguez Braswell addressed the propriety of a protective order designed to prohibit an in pro per Plaintiff from uploading litigation documents to open-source AI to assist him in litigation. In a lengthy opinion, Magistrate Judge Dominguez Braswell highlighted the deficiencies in the protective order that she previously had entered. While the Court held that an in pro per Plaintiff was entitled to some measure of work-product protection, the Court explained that “the Court cannot ignore the real risks associated with mainstream [open-source AI] tools that persistently collect and store data and could compromise confidentiality.”
Open‑source Large Language Models (LLMs) are AI models whose model weights, architecture, and often training code are publicly available, allowing organizations to inspect, modify, fine‑tune, and deploy the models on their own infrastructure (cloud, on‑premises, or hybrid). Closed or proprietary LLMs are AI models whose architecture, training data, and model weights are not publicly available and are owned, hosted, and managed by a vendor. In 2023, the California State Bar’s Standing Committee on Professional Responsibility and Conduct issued Practice Guidance for the Use of Generative Artificial Intelligence in the Practice of Law, focusing on “non-secure” AI versus “secure” AI rather than “open” or “closed.” The Standing Committee specifically warned against inputting confidential client information into non-secure AI tools and required lawyers to review all AI-generated work product and to understand how a particular generative open-source AI tool works: “A lawyer should review the Terms of Use or other information to determine how the product utilizes inputs. A lawyer who intends to use confidential information in a generative AI product should ensure that the provider does not share inputted information with third parties or utilize the information for its own use in any manner, including to train or improve its product.”
Counsel and, frankly, courts need to be familiar with these obligations, as well as with how they should be addressed to protect client confidences and property exchanged and used in litigation.
II. Backdrop of Decisions on Privilege and Work-Product Concerns in Using Open-Source AI
a. Two Open-source AI Privilege Decisions Lay the Groundwork
Despite the California State Bar (and other states) foreshadowing these issues in ethics guidance opinions, the recent decisions in Warner v. Gilbarco, Inc., No. 2:24-cv-12333 (E.D. Mich. 2026) and United States v. Heppner, No. 25 Cr. 48 (S.D.N.Y. 2026) fired the first shots across the bow, setting the table on how attorney-client privilege and the work-product doctrine might apply when litigants use open-source AI. Although seemingly reaching divergent outcomes, the cases can be reconciled in their approach to the risks and protections associated with using open-source AI in litigation.
United States v. Heppner addressed somewhat unusual facts. A criminal defendant, on his own initiative, used the free, consumer version of Anthropic's Claude AI to generate documents outlining potential defense strategies and legal arguments. Judge Jed S. Rakoff found no attorney-client privilege because no attorney participated in the communications: “the discussion of legal issues between two non-attorneys is not protected by attorney-client privilege.”
By contrast, Judge Anthony P. Patti held in Warner v. Gilbarco, Inc. that a pro se plaintiff's use of ChatGPT to help draft pleadings did not waive work-product protection because open-source AI tools constituted “tools, not persons.” Judge Patti explained that while attorney-client privilege can be waived by voluntary disclosure to any third party, work-product protection is waived only when materials are disclosed to an adversary or in a manner likely to reach one; the use of open-source AI did not constitute such conduct.
Commentators suggest that the variable across both decisions is not the AI tool itself but the architecture around the tool: whether counsel directed its use, whether the platform maintained confidentiality, and whether the user’s procedural posture created the equivalent of attorney involvement. But whether Warner and Heppner reach opposite conclusions or can be reconciled matters little for the measures that litigants can take to protect privileged and proprietary litigation documents via protective orders.
b. Morgan v. V2X, Inc.: a Protective Order against Uploading Discovery to Open-Source AI
While the courts in Heppner and Warner focused primarily on privilege, the U.S. District Court for the District of Colorado in Morgan v. V2X, Inc., Civil Action No. 25–cv–01991–SKC–MDB (D. Colo. March 30, 2026) addressed the privilege issue in a different procedural posture: whether a protective order against using open-source AI was necessary in the context of the litigation and consistent with the Federal Rules of Civil Procedure. In Morgan, the Plaintiff used open-source AI tools to assist in the litigation, prompting the defendant to seek modification of the parties’ protective order over concerns that confidential discovery may have entered the public domain via open-source AI platforms.
The Court believed that the protective order already in place “arguably” addressed the situation, but ultimately concluded that “‘disclosure’ in the form of transmission to an AI system with a provider that stores the Confidential Information in their own databases, and/or for their own purposes, may violate the current Protective Order.” The Court expressed concern about limiting the Plaintiff’s ability to research and litigate his case, but nevertheless ordered the Plaintiff to disclose the name of any AI tool he used in connection with Confidential Information and prohibited the Parties from inputting confidential discovery into an open-source AI tool:
No party or authorized recipient may input, upload, or submit CONFIDENTIAL Information into any modern artificial intelligence platform, including any generative, analytical, or large language model based tool (“AI”), unless the AI provider is contractually prohibited from: (1) storing or using inputs to train or improve its model; and (2) disclosing inputs to any third party except where such disclosure is essential to facilitating delivery of the service. Where disclosure to a third party is essential to service delivery, any such third party shall be bound by obligations no less protective than those required by this Order. In addition, the AI provider must contractually afford the party or authorized recipient the ability to remove or delete all CONFIDENTIAL information upon request. A party intending to use AI that it contends meets these requirements must retain written documentation of these contractual protections.
The Court did not squarely address derivative materials, such as pleadings or written discovery responses, being input into open-source AI. The Court acknowledged that the Order “will (at least for now) bar the parties from using most, if not all, mainstream low‑to‑no‑cost AI to process Confidential Information,” but concluded that it “cannot ignore the real risks associated with mainstream tools that persistently collect and store data and could compromise confidentiality.”
c. Open-source AI’s Terms of Use
Consistent with the Morgan decision (and the California State Bar’s Guidance), the Terms of Use of most open-source AI products do not align with permitting their use to analyze confidential or proprietary information subject to a typical protective order. For example, OpenAI’s terms permit the provider to use submitted content to provide, maintain, develop, and improve its services, subject to platform‑specific opt‑out mechanisms. The governing terms do not categorically prohibit internal use or retention of uploaded materials, nor do they provide court‑enforceable guarantees regarding downstream handling or deletion.
Anthropic’s terms reflect a similar tension. While users are responsible for their input and retain rights as between themselves and the provider, the terms expressly allow Anthropic to use submitted materials to provide, maintain, improve, and develop its services, including training its models, subject to opt‑out settings. Even where users opt out of training, the terms reserve rights to use materials for safety review, policy enforcement, and related internal purposes.
By contrast, protective orders – whether Model or bespoke – presume that confidential discovery will be disclosed only to defined recipients, that those recipients are bound by the order’s terms, and that the court can enforce compliance through sanctions or injunctive relief. Open-source AI providers arguably do not fit within that framework. They are not parties to the litigation, do not execute acknowledgments agreeing to be bound by protective orders, and operate under standardized, non‑negotiable terms that reserve discretion over data handling in ways the court cannot meaningfully supervise.
Hence, as did the California Bar, the Court in Morgan focused on contractual prohibitions rather than assurances of good faith or technical security. Absent express limits on training, retention, and internal reuse—and enforceable deletion rights—open-source AI terms of use generally undermine the argument that uploading confidential or proprietary discovery is compatible with Model Protective Orders. The issue is not the sophistication of the AI tools, but the mismatch between consumer‑oriented contracts and legacy discovery confidentiality regimes.
d. State and Federal Civil Procedure Rules and Model Protective Orders: How They Fall Short
The Morgan decision exposes an AI blind spot in the Model Protective Orders used in both state and federal courts. The model protective orders in Los Angeles County Superior Court and the U.S. District Court for the Central District of California, for example, define “permissible disclosure” in terms of categories of people, not the tool or the architecture around it.
Section 7 of the LA County Model Protective Order limits access to attorneys, in-house counsel, officers, directors, employees, court reporters, deposition witnesses, mock jury participants, and outside experts or consultants, each of whom must sign a certification under the order's Exhibit A before receiving confidential materials.
The Central District's model order takes a similar approach in Section 9.2, authorizing disclosure to Outside Counsel of Record, officers, directors, employees, Experts, court personnel, "professional jury or trial consultants, mock jurors, and Professional Vendors," and mediators, with Experts, Professional Vendors, and certain witnesses required to sign an "Acknowledgment and Agreement to Be Bound" under Exhibit A. The Central District order defines "Professional Vendors" in Section 4.13 as "persons or entities that provide litigation support services (e.g., photocopying, videotaping, translating, preparing exhibits or demonstrations, and organizing, storing, or retrieving data in any form or medium) and their employees and subcontractors". As in Morgan, this definition arguably "could" encompass an AI platform, but it is inadequate where non-human-operated service providers are involved.
Neither the Los Angeles nor Central District Model Order addresses the distinct risk profile of AI platforms, which may retain, reuse, or train on user inputs under their terms of service. Of course, an AI tool cannot itself sign the Los Angeles County certification or the Central District's "Acknowledgment and Agreement to Be Bound," cannot be advised of its obligations in the manner both orders contemplate, and cannot be subjected to traditional audit mechanisms in the way a human recipient can. Neither the California Code of Civil Procedure nor the Federal Rules of Civil Procedure were drafted with generative AI in mind and, as a result, the default rules treat "disclosure" as sharing information with a human third party or publishing it in a publicly accessible manner, and not as feeding it into a system whose internal data practices may be fundamentally incompatible with confidentiality obligations.
III. Conclusion: Model and Bespoke Protective Orders Should Address (and Limit) Use of Open-source AI
Morgan suggests that courts and litigants can address open-source AI-related confidentiality risks without attempting to regulate strategy or tool choice. Protective orders can be updated in a narrow, practical way to account for how confidential discovery is processed, on the assumption that AI systems will be involved. Most existing protective orders regulate confidentiality by identifying who may receive protected material; they should be revised to address use of open-source AI not only for confidential or proprietary documents but also for derivative materials filed under seal, such as pleadings, motions, declarations, and deposition transcripts. Morgan supplies language that can be included in bespoke or Model Protective Orders, and any judicial hesitance to revise a Model Protective Order should be met with a citation to the Morgan decision.
