A Client’s Guide to the AI Habits That Can Backfire
Artificial intelligence is transforming the way businesses operate, so it’s no surprise that clients are bringing it into their legal matters. As Chair of Varnum’s AI Task Force, which advises the firm on the effective use of AI and guided our adoption of Harvey, I can attest that these tools are impressive, the instinct to use them is understandable, and in many contexts AI genuinely helps. In the lawyer-client relationship, however, a handful of common AI habits can quietly work against you: driving up costs, creating legal exposure, and in some cases stripping away legal protections that would otherwise apply by default.
The scenarios below are the ones my colleagues and I observe most often, each paired with a practical takeaway on how clients should think about their own use of AI.
The “AI Draft” That Costs More Than It Saves
Many clients come to us having already asked ChatGPT, Claude, or another AI tool to draft a contract, letter, or business document. The reasoning is sound: get a head start, save attorney time, and save money. In practice, it usually works the other way.
General-purpose AI drafts tend to be overinclusive and underinclusive at once. They may burden a simple vendor agreement with structures designed for multinational transactions while missing a straightforward concept that is standard in your industry. They can surface theoretical risks and gloss over practical ones. The result is a document that looks polished and complete but rests on assumptions that do not reflect your deal or your circumstances.
An experienced attorney already has a head start: tested templates built for the transactions and matters we handle every day. Analyzing and revising a misaligned AI draft often takes more time than starting fresh, because we inevitably end up comparing it to a form that we already know works. The better approach is to let us set the foundation, then focus your time and ours on the issues that require judgment and experience.
Feeding Legal Documents to a Chatbot
This is perhaps the most consequential AI habit for clients to reconsider. Attorneys routinely send clients privileged work product that is protected from disclosure in litigation, such as draft agreements, legal memos, and strategy summaries. When a client drops that material into a consumer AI platform to generate a plain-English summary or a list of questions for their attorney, they may be inadvertently handing it to a third party with no obligation of confidentiality.
In July 2025, OpenAI CEO Sam Altman stated plainly that, unlike conversations with a therapist, lawyer, or doctor, there is no legal privilege for what you share with ChatGPT, and that OpenAI could be required to produce those conversations in litigation. “We haven’t figured that out yet for when you talk to ChatGPT,” Altman said during a podcast, calling the absence of any equivalent of attorney-client confidentiality “very screwed up.” (TechCrunch, July 25, 2025)
Courts have only begun to grapple with the legal fallout. Earlier this year, in United States v. Heppner, a federal judge in the Southern District of New York ruled that documents that a criminal defendant generated using a consumer AI platform were not protected by the attorney-client privilege or the work product doctrine. Because the platform’s privacy policy disclosed that user inputs could be used for model training and shared with third parties, the court found there was no reasonable expectation of confidentiality in the defendant’s exchanges with the platform. The decision made two points clear: material that lacks privilege when created does not acquire it later, and privileged information does not stay privileged once shared with an AI platform that operates like a third party. This area of law is evolving rapidly, and while Heppner is unlikely to be the last word, clients should proceed with extreme caution until the issue is settled.
The practical takeaway is that if you receive a document from us and want to understand it better, call or email us. That conversation is exactly what we are here for: breaking complicated concepts into digestible pieces, while keeping your confidential information protected.
The AI Notetaker in the Room
Many businesses now invite AI notetaking apps, such as Otter.ai, Fireflies, and Zoom’s AI Companion, into meetings as a matter of routine. The instinct is reasonable, and the efficiency benefits are clear. But when those tools are present on calls with your attorney, they may be processing, storing, and potentially exposing conversations that would otherwise be protected.
Unlike a paralegal or outside consultant working under attorney supervision, an AI notetaking application is not an agent of your attorney. It operates under its own terms of service, which may permit the vendor to access, store, or otherwise use the data it captures. That exposure can give an opposing party grounds to challenge the confidentiality of the meeting, and it creates a lasting, potentially discoverable record of discussions that previously would have existed only in notes or memory.
So before inviting any recording or notetaking tool into a sensitive meeting or call with your lawyer, check with your lawyer first. That quick conversation can make a meaningful difference.
AI as a Sounding Board: It Tells You What You Want to Hear
Some clients use AI to stress-test legal advice or size up their negotiating position, asking a chatbot whether a contract term is standard, whether a deal structure is sound, or whether their attorney is missing something. The appeal is obvious. The problem is that AI is structurally inclined to agree with you.
A March 2026 study published in the journal Science and covered by the New York Times found that leading AI models were highly sycophantic, siding with users in interpersonal disputes far more often than human respondents did, even when the users were clearly in the wrong. The researchers found that a single conversation with a sycophantic model made participants significantly less likely to reconsider their own position.
This pattern shows up in business negotiations with real consequences. We have seen deals stall or even fall apart because a client came to the table with an inflated sense of their leverage after running their position through an AI that validated their assumptions rather than challenging them. AI does not know your counterparty, the deal’s history, the dynamics in the room, or what the market will bear. It knows how to sound confident and keep you engaged.
An experienced lawyer’s job is to give it to you straight, even when that’s not what you want to hear. That is not something AI is built to do.
Where AI Adds Value in the Legal Context
None of this is an argument against AI, nor is it an attempt to gatekeep the legal industry or scare clients away from using these tools. Varnum uses AI tools extensively in our own practice, and through our AI Task Force, we have spent considerable time thinking through where these tools add genuine value and where they introduce risk.
Broadly speaking, it helps to think about client use of AI on a spectrum. On the safer end are tasks like organizing background materials, summarizing public filings or other non-privileged documents, conducting preliminary market or industry research, and preparing agendas, talking points, or questions in advance of a meeting with counsel, provided that anything you disclose to the AI is anonymized. Using AI to explore an unfamiliar legal concept or to pressure-test your own thinking before a conversation with your lawyer can also be productive. The critical point is to treat AI as a starting point rather than a substitute, and to keep confidential or privileged information out of the prompt.
The uses to avoid are those where AI is asked to make or validate the actual legal decision. For example, you should not ask AI to help you interpret a contract you are about to sign, choose between deal structures or litigation postures, decide how to respond to a demand letter, or second-guess whether your lawyer’s advice is “right.” Those judgments turn on facts, history, and context that the model does not have, and as discussed above, the model’s instinct is to agree with you rather than push back.
AI is genuinely exciting, and it holds real promise for the practice of law. But like any powerful tool, its usefulness depends on knowing where it helps and where it does not. When in doubt, talk to us before you talk to the bot.
Eric Post is a partner in Varnum’s Corporate Practice Team and chair of the firm’s AI Task Force. He advises startups, private and family-owned businesses, and multinational corporations on mergers and acquisitions, commercial transactions, and a broad range of business matters.