If you're happy with your career and you're not curious about AI, don't read this essay.
There is a LOT of hype around AI and FOMO is real, but those aren't reasons enough to engage with this subject. Especially if you don't feel like anything is missing from your professional life.
But, if you are curious about AI, and if work feels more like a drag than a deeply satisfying flow state, then welcome! You're in the right place.
This isn't a crash course. By the end of this essay you will not be AI fluent.
This is the conversation we'd be having if you told me that you've played around with ChatGPT but you don't see what the big deal is.
There are four concepts you need to know.
By the end of this essay, you'll be ready to start playing with AI like a professional.
Think in Terms of Tasks, Not Jobs
AI won't take your job.
Yes, it can do many of the tasks involved in your job, but that doesn't mean the job itself goes away.
For example, as a lawyer, your job might involve the following tasks:
- Marketing your services
- Qualifying leads
- Intaking new clients
- Drafting documents
- Research
- Communicating with your client
- Communicating with opposing counsel
- Exercising judgment about strategy
AI is becoming very good at some of these tasks, and this will absolutely have an effect on your job, but let's be clear: the job of being a lawyer is more than just a collection of tasks. It's being responsible for an outcome on behalf of a client.
Clients aren't hiring you to perform tasks. They're hiring you to solve problems.
AI can market your services, qualify & intake leads, draft documents, perform research, and on and on. But someone (i.e. you) still needs to decide what the objective is, what risks are acceptable, and what advice should ultimately be given. And ultimately, that same someone still needs to be accountable.
That's the job.
What AI changes is how the tasks inside the job get done.
Instead of personally executing every task, lawyers will increasingly direct systems that execute them—reviewing the output, refining it, and integrating it into a strategy that serves the client.
In other words, your job as a lawyer shifts from doing every task by hand to orchestrating the work.
To learn more about this concept, check out this podcast: Marc Andreessen: The real AI boom hasn't even started yet.
Context is King
"Garbage in, garbage out."
Meaning, unless you give the AI everything it needs to perform a task, don't expect good results.
Imagine you're working on a litigation matter and your associate brings you a draft of your opening brief. AI can flag weak arguments that deserve your attention: factual claims that aren't well supported by the underlying depositions.
But, AI can only do this if it has access to the deposition transcripts.
Without sufficient context, you're asking the AI to hallucinate. It's like asking someone to analyze a contract but refusing to let them see the deal file.
This raises the question: do you have access to the context you need?
Because in practice, most of the work of using AI professionally is not writing clever prompts. It’s assembling the right context.
The AI needs to see the materials that a competent lawyer would review before doing the work.
That might include:
- the complaint and key pleadings
- deposition transcripts
- contracts or deal documents
- relevant statutes or regulations
- prior briefs filed in the matter
- emails or communications between the parties
Once the AI has access to that information, it can begin to operate like a very fast junior associate.
It can:
- surface inconsistencies in testimony
- summarize large bodies of evidence
- identify arguments that are thinly supported
- suggest alternative ways to frame a claim
But without that context, the AI is forced to guess.
And when AI guesses, it sounds confident while being wrong, which is exactly what lawyers cannot afford.
This is why effective AI use in legal practice often looks less like chatting with a chatbot and more like building a workspace around the relevant materials in a matter.
The better the context you provide, the more useful the AI becomes.
In other words, when using AI for legal work, the question is rarely: “What should I ask the AI?”
The better question is: “What information does the AI need to see before it can help me think about this problem?”
Lawyers who learn to gather and provide the right context will get dramatically better results from AI than those who treat it like a search engine.
Don't Trust, Verify
"But AI hallucinates."
Yes. AI models sometimes generate confident-sounding nonsense. That’s a real limitation, and it’s the reason many lawyers cite for avoiding AI entirely.
But this misses an important point: hallucination is only dangerous when you can’t verify the output.
If you ask an AI to analyze documents in your matter folder and require it to cite its sources, verification becomes straightforward. The AI might say: “The indemnification cap is $5 million under Section 8.3 of the Purchase Agreement.” You click the citation. Either it’s there or it isn’t.
When the AI is working from the documents in your case, you have the same ability to check its work that you would have with a junior associate. The model may make mistakes, but those mistakes are visible.
The real risk is not hallucination. The real risk is using AI in situations where verification is difficult—for example, when it is generating information from its training data rather than analyzing the materials in front of you.
So the question isn’t whether to trust AI.
The question is how to structure your workflow so the AI’s work can be checked.
When the model is grounded in your documents and citing its sources, verification becomes part of the process.
Supervise
Using AI well turns out to require many of the same skills lawyers already use when supervising junior lawyers.
You give clear instructions. You provide the relevant materials. You review the work product. And you apply your judgment to decide what holds up and what doesn’t.
This isn’t a coincidence.
In practice, AI functions a lot like a very capable junior associate. It can draft, summarize, analyze, and surface issues. But it still needs direction, and its work still needs review.
There are two ways to approach this.
One is to ask the AI to do your thinking for you and accept the output at face value. That’s lazy—and in a professional setting, dangerous.
The other is to treat AI as a force multiplier for your own judgment. The model helps with the first draft, the research synthesis, or the document analysis. You evaluate the output, refine it, and decide how it fits into the larger strategy of the matter.
The responsibility doesn’t move.
You’re still the lawyer.
The AI is simply a resource you supervise.
That’s the orientation. Four concepts:
- Think in terms of tasks, not jobs.
- Give the AI the context it needs to do the work.
- Verify the output instead of hoping it’s correct.
- Treat AI like a capable junior resource that requires supervision.
These concepts apply to whatever AI tool you're using.
Ignore them, and the tools will feel unreliable, confusing, or risky.
The difference is not the model.
It’s how you use it.