AI Is Not Your Lawyer: Lessons from United States v. Heppner
By R. Zebulon Law and Matthew G. Stein
Brian Walshe made headlines recently after being convicted of murdering his wife, Ana, in part because of the search terms he had entered on Google. The police discovered that his searches included “dismemberment” and “best ways to dispose of a body.” We assume he never Googled “how to erase my search history.”
Instead of Google, many people are turning to generative AI for information. However, users should be forewarned: conversations on public AI platforms, like Google searches, are not confidential. Consider Bradley Heppner, who recently used AI to seek legal advice in a case in which he was a party. In United States v. Heppner, Judge Rakoff of the Southern District of New York held that Heppner’s chats with a publicly available AI tool (Claude), and the outputs the tool generated, are not protected by attorney-client privilege or the work product doctrine.

In Heppner, the defendant asked Claude to assess his legal exposure, develop defense strategies, and prepare materials that he later shared with his lawyer. When the opposing side requested that he produce all documents related to the case, Heppner, likely concerned about his AI conversations coming to light, asserted attorney-client privilege over them. The court rejected that argument.
The court found that the materials were not created at counsel’s direction, were not confidential, and were not communications with an attorney. The court emphasized that when a client independently uses a consumer-grade public AI platform to analyze legal issues or input sensitive information, those communications are treated as disclosures to a third party (like Google searches) and are not protected by privilege. As Judge Rakoff wrote, “Because Claude is not an attorney, that alone disposes of Heppner’s claim of privilege.”
Notably, the decision does not address law firms’ use of enterprise AI tools. Company-approved, closed-system enterprise AI tools potentially offer different safeguards, backed by contractual confidentiality provisions, security measures, and user training. Whatever the scope of those safeguards, Heppner makes one thing clear: clients need guidance on how to use AI.
Key Takeaways for Clients:
- Avoid public AI for sensitive information. Do not enter privileged or confidential material into public AI tools. This includes legal strategy, client communications, and case facts. Assume anything shared could be discoverable or used against you.
- Don’t use AI for legal advice without counsel. Non-lawyers should not rely on AI, even private tools, for legal guidance unless acting under the direct guidance of counsel. Those interactions are unlikely to be protected.
- Assume AI outputs are discoverable. Saved chats, exports, and notes from AI interactions could be subpoenaed and used in litigation.
- Know the difference between consumer and enterprise AI. Consumer AI tools lack confidentiality protections, while closed-system enterprise AI tools may offer stronger safeguards. Even so, privilege and work product protection depend on the specific circumstances, and you should consult with counsel before relying on any assumed protections for AI-generated materials.
The Bottom Line
United States v. Heppner is unlikely to be the final word on AI and privilege. Courts will continue to address how generative AI affects legal matters. For now, the ruling serves as an important reminder that confidentiality is the foundation of privilege. Publicly available AI platforms do not meet that threshold.
Think of using public AI tools as discussing your case at a cocktail party. Once you share private matters, you may not be able to control where they go or how they could be used. Convenience can be tempting, but not at the cost of confidentiality.
