Legal Tech Coach

Where Are We Now?

A practical guide that will take you from general awareness to hands-on capability

Where We Are Now

AI chatbots are now a normal part of everyday life.

Like Googling something. Like expecting there to be a perfect YouTube video for whatever question you have. An amazing advancement that has, almost disturbingly quickly, become the norm.

Since the release and rapid rise of AI chatbots (ChatGPT, Claude, Microsoft Copilot, Google Gemini), a number of concerns have surfaced among legal professionals. These are real concerns, and I want to address them directly before we go any further.

"I can't share confidential client information with this thing"

This was the first and loudest concern, and it's a reasonable one.

The legal status: you have a duty of confidentiality. Sending client data to a third-party service raises real questions about who can see it, how it's stored, and whether it could be used to train future models.

The technical status: most major AI providers now offer enterprise tiers with zero-retention options, meaning your data is processed and then discarded. It's not stored. It's not used for training. The input goes in, the output comes out, and that's it. (Anthropic, the company behind Claude, calls this "zero data retention" and offers it as a contractual commitment for eligible business accounts.)

To be clear, I'm not saying you should throw caution to the wind. But the blanket "I can't use this at all" stance is increasingly out of step with how these tools actually work when configured properly.

"AI hallucinates and makes up case citations"

Yes, it can. This is real.

The early horror stories were deserved. Lawyers filed briefs citing cases that didn't exist. That happened. It was embarrassing, and in some cases it resulted in sanctions.

But this conversation has evolved in two important ways.

First, the technical answer: LLM outputs should be verifiable. Any claim, any citation, any factual assertion that matters should be checked against a source. This isn't unique to AI; it's the same standard you'd apply to a junior associate's work. The tool is useful; it's not infallible.

Second, the practice: we now understand what these tools are good at and where they fall short. AI-assisted legal research has become a well-developed field, with tools specifically designed to ground outputs in real case law. The general-purpose chatbot is no longer the only option.

"How do we bill by the hour when AI does 100 hours of work in one?"

This is the big one. And I don't have a tidy answer.

The traditional model: you sell intelligence by the hour. Research takes time. Drafting takes time. Review takes time. The client pays for that time.

The emerging reality: we now have tools that let a human expert direct what used to be 100 hours of intelligence work and finish it in one. AI-native law firms are already building practices around this. How do you continue to sell legal services by the hour in this world?

And there's a related question that might be even more pressing: what happens when clients self-serve with AI? This is especially relevant for sophisticated clients who could conceivably direct a swarm of AI legal agents themselves.

I don't know where this lands. But I do know that being on the inside of this shift is better than watching it happen to you.


This is all preamble.

What comes next is more interesting. If you're a tech-curious lawyer, you're in a unique position not only to explore this frontier but to help define it.

Subscribe to Stay at the Edge of AI for the Practice of Law

📡 Front-row access

  • Deep dives on new AI tools and what they mean for lawyers
  • Workflow guides for putting AI to work in your practice
  • Prompts and templates you can use today

⚡ Practical AI

  • Cut hours of manual research and admin work
  • Compete with bigger firms without the tech budget
  • Become the person who "gets" AI at your firm