Use Claude to Make Verifying Citations Feel Like Deep Work

A workflow for checking factual claims against your source materials

Download Template

You’ve seen the headlines. Lawyers submitting court documents with made-up case citations. Judges issuing sanctions. Bar associations launching investigations. The stories keep coming: attorneys who trusted AI-generated research without checking, only to discover the cases didn’t exist.

It’s easy to read these stories with a mix of horror and smugness. How could they not check? But here’s the thing: verification has always been part of the job. What’s changed is how much there is to verify, and whether the tools have kept up.

The Deep Work of Document Review

There’s a version of document review that feels like satisfying deep work. You’re given a brief—your associate’s draft, or opposing counsel’s filing. You read it carefully. Something catches your eye on page 12: a case citation that seems too perfect, or a factual claim that doesn’t match your memory of the record. You pull the thread. You check the source. Sometimes you find nothing. Sometimes you find gold.

This is the “smell test.” Experienced lawyers develop an instinct for which citations deserve a closer look. The WL number that seems too recent. The pinpoint cite to a page that doesn’t exist. The factual assertion that cites a deposition but uses language the witness would never have used.

The smell test isn’t going away. It’s one of the things that makes a lawyer valuable. But the mechanics of following up on that instinct? Those have historically been a pain. You flag the citation, open Westlaw or Lexis in another tab, type in the cite, wait for it to load, scan for the relevant passage. For factual claims, it’s worse: you dig through your document management system looking for the deposition transcript, then scroll through hundreds of pages looking for the testimony that supposedly supports the claim.

Verifying Citations With Claude

What this could look like if you rolled your own setup:

Step 0

You need access to a folder containing the document you want to verify (brief, motion, memo) and the underlying evidence (deposition transcripts, exhibits, discovery documents). This is probably a matter folder in your practice or document management system.

Step 1

Open that folder in Claude Cowork or Claude Code. (Start here if you haven’t set this up yet.)

Step 2

Ask Claude to verify the factual claims in your document against the available evidence.
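The prompt itself can stay simple, because the CLAUDE.md file described below carries the detailed instructions. A request along these lines is enough (the wording and file name are illustrative, not a required format):

```
Verify the factual claims in motion_for_summary_judgment.docx against the
deposition transcripts and exhibits in this folder. Produce an annotated
version of the document and a summary report of any issues.
```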

A Teaching Moment: Context Is Everything

What you need to understand before we get to the implementation.

An AI agent like Claude can create a version of your document with links and commentary for each important factual claim. It can flag the assertions that deserve scrutiny. It can connect each claim to its alleged source.

But it can only do this if it has access to the underlying evidence.

Lesson 1

Context—the information you pass to your agent—determines output quality.

Lesson 2

Although AI can link every factual claim to the relevant section of its supporting document, it’s your judgment that determines which claims deserve a closer look.

We’re working on a more detailed playbook about managing your data for agent workflows. For now, the assumption is simple: when you drop your document into the Claude Code folder, you also drop in the materials for any factual claims you want verified.

The Setup: Putting the Workflow on Rails

Here’s what the workflow looks like in practice.

What you provide:

  • The document you want to verify (brief, motion, memo)
  • The underlying evidence (deposition transcripts, exhibits, discovery documents)
  • Both go into the same project folder
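Concretely, the project folder might look like this before you start (every file name here is hypothetical):

```
matter-folder/
├── CLAUDE.md
├── motion_for_summary_judgment.docx
├── smith_deposition.pdf
├── jones_declaration.pdf
└── exhibits/
    ├── exhibit_A_contract.pdf
    └── exhibit_B_emails.pdf
```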

What the agent returns:

  • An annotated version of your document
  • Each factual claim linked to its source (or flagged if no source is found)
  • Commentary on claims that deserve closer scrutiny
  • A summary of potential issues

The CLAUDE.md that makes it happen:

This is the file you’ll drop into your matter folder. It tells the agent what to do when you ask it to verify a document.

CLAUDE.md
# Citation Verification Project

This folder contains a legal document that needs factual claim verification against source materials.

## Your Role

You are a meticulous legal research assistant. Your job is to:
1. Identify factual assertions in the document to verify
2. Locate supporting evidence in the provided source materials
3. Create an annotated version with verification status for each claim

## What Counts as a Factual Claim

Focus on statements that:
- Cite to testimony (depositions, declarations, affidavits)
- Reference exhibits or documentary evidence
- Make date, time, or numerical assertions
- Describe events, conversations, or actions

Do NOT flag:
- Legal conclusions or arguments
- Citations to case law, statutes, or regulations (these need Westlaw/Lexis)
- Procedural statements ("Plaintiff filed this motion on...")

## Verification Categories

For each factual claim, assign one of these statuses:

**VERIFIED** - Source located and claim accurately reflects the source
**NEEDS REVIEW** - Source located but claim may overstate or mischaracterize
**SOURCE NOT FOUND** - Claim cites a source not available in this folder
**UNCITED** - Factual assertion made without citation to evidence

## Output Format

Create an annotated version of the document with:
1. Each factual claim highlighted or noted
2. A comment showing: verification status, source file and page, any concerns

Also create a summary report listing all findings.
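To make the output format concrete, here is a hypothetical annotation the agent might produce for a single claim. The claim, file name, and page numbers are invented for illustration:

```
Claim (brief, p. 12): "Smith testified that he never saw the contract
before March 2021."
Status: NEEDS REVIEW
Source: smith_deposition.pdf, p. 147
Concern: The witness said he did not "recall reviewing" the contract,
which is weaker than "never saw."
```

This is exactly the kind of finding the workflow is built for: a source exists, but the brief's language overstates it.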

Download the full template with folder structure and example commands.

What You’ll Need

Claude Code or Claude Cowork

The agent runtime that executes skills and processes your documents. Start here if you haven’t set this up yet.

A CLAUDE.md file

Instructions that tell the agent how you want factual claims handled. What counts as a “factual assertion”? How should it format the annotations? What’s your threshold for flagging something as “needs review”?

Your source materials

The depositions, exhibits, and documents that contain the evidence your brief cites. These need to be in the same project folder, or the agent can’t check against them.
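If you want a quick sanity check that the evidence is actually where the agent can see it, a small script can list what the folder contains before you start. This is an optional convenience, not part of the workflow itself; the helper name and the extension list are assumptions you should adjust to your own matter folder:

```python
from pathlib import Path

# Hypothetical helper: confirm which evidence files sit directly in the
# project folder before asking the agent to verify claims against them.
# The extension list is an assumption -- adjust it for your documents.
EVIDENCE_EXTENSIONS = {".pdf", ".docx", ".txt"}

def list_source_materials(folder: str) -> list[str]:
    """Return sorted names of evidence files directly inside `folder`."""
    root = Path(folder)
    return sorted(
        p.name
        for p in root.iterdir()
        if p.is_file() and p.suffix.lower() in EVIDENCE_EXTENSIONS
    )
```

Run it on your matter folder and compare the result against the sources your brief cites. Anything missing will come back as SOURCE NOT FOUND from the agent anyway, but catching it up front saves a run.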

The Bigger Picture

Verifying citations isn’t about catching hallucinations. Not primarily. It’s about doing your job well: reading carefully, following your instincts, checking what deserves to be checked.

The tools are getting better. Claude Code lets you build a verification workflow like this around your own evidence and your own practices.

What neither tool can replace is the smell test itself. That’s still on you. But when something catches your eye on page 12, and you want to pull the thread? The friction between “I should check this” and “I have the answer” is getting smaller. That’s the shift worth paying attention to.
