Where Are We Now?
The Status Quo of AI Legal Tech at the Start of 2026
A practical guide that will take you from general awareness to hands-on capability
AI chatbots are now a normal part of everyday life.
Like Googling something. Like expecting there to be a perfect YouTube video for whatever question you have. An amazing advancement that has, almost disturbingly quickly, become the norm.
For legal professionals, since the release and popular rise of AI chatbots (ChatGPT, Claude, Microsoft Copilot, Google Gemini), a number of concerns have surfaced. These are real concerns, and I want to address them directly before we go any further.
“I can’t share confidential client information with this thing”
This was the first and loudest concern, and it’s a reasonable one.
The legal status: you have a duty of confidentiality. Sending client data to a third-party service raises real questions about who can see it, how it’s stored, and whether it could be used to train future models.
The technical status: most major AI providers now offer enterprise tiers with zero-retention policies, meaning your data is processed and then discarded. It’s not stored. It’s not used for training. The input goes in, the output comes out, and that’s it. (Anthropic, the company behind Claude, calls this “zero data retention” and makes it contractually binding for business accounts.)
To be clear, I’m not saying you should throw caution to the wind. But the blanket “I can’t use this at all” stance is increasingly out of step with how these tools actually work when configured properly.
“AI hallucinates and makes up case citations”
Yes, it can. This is real.
The early horror stories were deserved. Lawyers filed briefs citing cases that didn’t exist. That happened. It was embarrassing, and in some cases it resulted in sanctions.
But this conversation has evolved in two important ways.
First, the technical answer: LLM outputs should be verifiable. Any claim, any citation, any factual assertion that matters should be checked against a source. This isn’t unique to AI. It’s what you’d expect from a junior associate. The tool is useful; it’s not infallible.
Second, the practice: we now understand what these tools are best for and what they’re not as good for. AI-assisted legal research has become a well-developed field with tools specifically designed to ground outputs in real case law. The general-purpose chatbot is not the only option anymore.
“How do we bill by the hour when AI does 100 hours of work in one?”
This is the big one. And I don’t have a tidy answer.
The traditional model: you sell intelligence by the hour. Research takes time. Drafting takes time. Review takes time. The client pays for that time.
The emerging reality: we now have tools that allow for 100 hours of intelligence work to be done, under the guidance of a human expert, in one hour. AI-native law firms are already building practices around this. How do you continue to sell legal services per hour in this world?
And there’s a related question that might be even more pressing: what happens when clients self-serve with AI? This is especially relevant for sophisticated clients who could conceivably direct a swarm of AI agent lawyers themselves.
I don’t know where this lands. But I do know that being on the inside of this shift is better than watching it happen to you.
That’s the preamble.
Necessary to establish, so that we can all start on the same page. Now let’s talk about what this guide is actually going to do for you.
What This Guide Will Teach You
The goal here is to get you from the bird’s eye view we just established to actually sitting in the driver’s seat.
You’re a tech-curious lawyer. That puts you in a unique position: you can not only explore this new frontier, but also help define it. The people who engage now, who build intuition for what these tools can and can’t do, are the ones who will shape how the profession adapts.
This guide will give you a practical, hands-on, anyone-can-do-it walkthrough. The steps are simple. But they’re designed to teach you the fundamental mental models for thinking about how, if, and where AI fits into your professional life.
Those mental models are:
LLM vs. Agent
An LLM is the chatbot. You type, it responds. An agent is an LLM in a loop with tools: it can read files, search the web, execute code, and keep working until the task is done. This distinction changes everything about what’s possible.
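To make the loop concrete, here is a minimal sketch in Python. Everything in it is illustrative: `fake_model` stands in for a real LLM call, and the single `word_count` tool stands in for real tools like file reading or web search. No vendor's actual API is shown.

```python
def word_count(text: str) -> int:
    """A toy tool the agent can call (real agents get file access, search, etc.)."""
    return len(text.split())

TOOLS = {"word_count": word_count}

def fake_model(messages):
    """Stand-in for an LLM call: reads the transcript, decides the next step."""
    if messages[-1]["role"] == "user":
        # The "model" decides it needs a tool before it can answer.
        return {"type": "tool_call", "tool": "word_count",
                "args": {"text": messages[-1]["content"]}}
    # A tool result has come back, so it produces a final answer.
    return {"type": "answer",
            "content": f"That passage has {messages[-1]['content']} words."}

def run_agent(task: str) -> str:
    messages = [{"role": "user", "content": task}]
    while True:  # this loop is the difference between a chatbot and an agent
        step = fake_model(messages)
        if step["type"] == "answer":
            return step["content"]
        result = TOOLS[step["tool"]](**step["args"])
        messages.append({"role": "tool", "content": str(result)})

print(run_agent("Review this clause for length"))
```

A chatbot is one call to `fake_model`; the agent is the `while` loop around it, feeding tool results back in until the task is done.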
Context is King
Running an agent on a collection of documents and data is where the real power emerges. You’re not just asking a chatbot a question. You’re giving it access to your working materials and letting it operate on them. The more relevant context you provide, the more useful the output.
Tasks vs. Jobs
You don’t hand over your job. You delegate tasks. Start with small, specific pieces of work, then gradually hand more of the detailed work to an agent you supervise. You bring the judgment. The agent brings the throughput.
AI as Interface to Everything
Stop clicking through dashboards. Stop sifting through spreadsheets. Talk to an AI that can do that for you. The interface to your data, your documents, your workflows doesn’t have to be a screen full of buttons. It can be a conversation.
Next Lesson
Getting Started with Claude Code
Install Claude Code, configure it for legal work, and run your first agent task.