Article by: Wallace Lightsey, Josh Lonon, Matthew Richardson, and Austin Coward
A highly respected federal judge in New York has issued a decision holding that a criminal defendant’s written exchanges about his case with Anthropic’s AI platform “Claude” were not protected by the attorney-client privilege and therefore could be obtained by the government for use against him. The judge noted that the ruling appeared to be the first known decision addressing the interplay of AI and privilege.
Here’s what happened: The defendant (not a lawyer) used Claude to prepare written analyses of his case. He later claimed those communications were privileged because he had input confidential information learned from counsel and because he had Claude prepare the materials for use in seeking advice from his attorney. The court said no. Why? Claude isn’t a lawyer. There is no attorney-client relationship. And the platform’s privacy terms allowed collection and potential disclosure of inputs and outputs, which defeated any reasonable expectation of confidentiality.
The decision arose in a criminal prosecution, but there is no reason its reasoning would not apply equally in civil litigation or a regulatory proceeding. If you input confidential information, deal strategy, or litigation analysis into a public AI platform, you may be creating discoverable material that the other side can force into the open during litigation, investigations, or administrative actions. The other side may then see not only sensitive confidential information but also your legal strategy and, if it was shared with the AI platform, the advice of your counsel.
“Preparing to talk to my lawyer” via AI does not create privilege, and forwarding the output to counsel later does not fix it. In a chilling passage in a footnote, the court noted that “if certain information that [the defendant] input into Claude was privileged, he waived the privilege by sharing that information with Claude and Anthropic, just as if he had shared it with any other third party.”
None of this means don’t use AI. It means use it intelligently. Do not input private information, confidential attorney advice or legal analysis, or other client information relating to a pending legal matter into an AI platform without first checking with your counsel.
Similar risks apply to social media use during active legal proceedings. You should always be extremely careful about posting information relating to a pending legal matter on social media. There are horror stories of personal-injury plaintiffs claiming catastrophic injuries who posted videos of themselves water-skiing just weeks before they were deposed or their case went to trial. Even in complex business cases, information posted on social media can be used by the other side to attack your case. It is better not to post anything about a pending legal matter.
The New York decision discussed here is United States v. Heppner, Case No. 25 Cr. 503 (JSR) (S.D.N.Y. Feb. 17, 2026).