THINKING OF USING AI FOR LEGAL ADVICE? THINK AGAIN. AI CHATS CAN BECOME EVIDENCE AGAINST YOU
Has a legal claim been asserted against you? Do you think you have a claim against someone else and want to know whether to pursue it? Think twice before “chatting” with an AI chatbot about your legal matters.
A recent ruling, United States v. Bradley Heppner, from the U.S. District Court for the Southern District of New York, has sent a shockwave through the legal world by holding that communications with publicly available generative AI platforms are not protected by the attorney-client privilege. If you share sensitive details with a chatbot, that information may no longer be shielded from disclosure.
The risk became a reality for one defendant who turned to an AI chatbot to help navigate a government investigation. Acting without his lawyer’s guidance, he used the AI tool to analyze his legal exposure and map out a potential defense strategy. He likely thought he was working in private, but when the government seized those digital communications during his arrest, the legal safety net he expected simply wasn’t there.
His legal team fought to keep the communications out of the government’s hands, claiming they were protected under the attorney-client privilege and the work product doctrine. However, because the defendant acted independently of his counsel and shared his strategy with a third-party platform, the court found those protections did not apply.
This serves as a stark reminder that in the eyes of the law, a chatbot is not a confidential advisor.
In the case of United States v. Heppner, the court rejected the defense’s arguments by focusing on three long-standing legal pillars. The reasoning serves as a cautionary tale for anyone assuming a “private” digital interface is the same as private legal consultation.
Here is the breakdown of the court’s logic:
- A Chatbot Is Not a Lawyer
The court was very clear: attorney-client privilege requires, well, an attorney. Because an AI platform is not a licensed professional, it cannot enter into a privileged relationship or owe a duty of loyalty to a user. The judge compared using an AI for legal research to chatting with a friend or running a basic internet search, neither of which creates a legal “cone of silence.”
- The “Privacy” Is an Illusion
To claim privilege, a communication must be kept strictly confidential. The court looked at the AI platform’s own fine print, which stated that user data could be used to train its models or shared with third parties. By clicking “agree” to those terms, the defendant effectively waived any expectation of confidentiality. In the eyes of the court, he wasn’t speaking in a private office; he was speaking in a room where the walls were taking notes.
- The Lack of Professional Direction
The work-product doctrine protects materials prepared by or at the direction of counsel in anticipation of litigation. Because the defendant analyzed his exposure and mapped out his strategy on his own, independent of his lawyers, the court found that protection unavailable as well.

Although this case arose in a criminal context, its implications extend to civil matters and the workplace. Whether an HR manager is summarizing a harassment complaint or an executive is using a chatbot to weigh the risks of a termination, those digital conversations could easily become “Exhibit A” in a future lawsuit.
To protect your organization, consider these proactive measures:
- Limit the use of public AI tools for sensitive legal or HR matters.
- Train your management teams on these risks so they understand that a password-protected AI account is not the same thing as a confidential conversation.
- Involve counsel early when legal claims arise. Following a lawyer’s specific direction helps ensure your strategy remains protected under the attorney work-product doctrine.
- If AI is necessary for your workflow, look into secure, enterprise-grade platforms that offer strict confidentiality protections, rather than public versions that may expose your internal data.