Top 5 OpenClaw Security Mistakes

Don't make these obvious errors when setting up your AI agent

I've been building AI products since OpenAI first made it easy to build with LLMs. I've done corporate governance due diligence on securities offerings. When I installed OpenClaw, I didn't want to configure it like an idiot. So I read the post-mortems, recruited an agent to do some research, and used the following checklist for myself.

OpenClaw deserves the hype, and you should feel free to play with it; just don't make these obvious security mistakes.

1. Don't Run It On Your Main Machine

OpenClaw runs on your system. That means it can see your files, your browser sessions, your SSH keys, your password manager. Everything your user account can access. Run it on your daily driver and one bad prompt could expose years of accumulated data.

The fix: Dedicated hardware or Docker. A $600 Mac mini works. So does a VM or container with explicit folder mounts. Either way, create a non-admin account. No sudo access. The agent should never be able to install software or modify system settings. If something goes wrong, the blast radius stays contained.

2. Don't Hand Over All Your Keys

OpenClaw ships with a default skill set that can do a lot out of the gate. Email, calendar, file system, web browsing, terminal commands. Most people enable everything on day one because it's easier.

The fix: Roll out skills one at a time. Start with file reading and nothing else. Add capabilities as you actually need them. When a skill requires an API key (like email or a third-party service), the agent should be able to use the skill without seeing the key. Store credentials in environment variables or a secrets manager; don't paste them into the chat. One exception: keys for the LLM itself (like OPENAI_API_KEY) make sense to expose, since that's how the agent runs. Everything else stays walled off.
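The pattern above can be sketched in a few lines: the skill reads its credential from the environment at call time, and only non-sensitive results flow back to the agent. The skill and variable names here are hypothetical, not OpenClaw's actual API:

```python
import os


def get_secret(name: str) -> str:
    """Fetch a credential from the environment at call time.

    The key never appears in the prompt or the chat transcript;
    the agent invokes the skill, and the skill reads the secret itself.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; add it to the agent's environment.")
    return value


def send_email(to: str, body: str) -> dict:
    """Hypothetical email skill: uses the key internally, returns no secrets."""
    api_key = get_secret("EMAIL_API_KEY")  # hypothetical variable name
    # ... call the email provider's API with api_key here ...
    return {"to": to, "status": "queued"}  # nothing sensitive flows back
```

The agent sees `{"to": ..., "status": "queued"}` and nothing else; if the key leaks, you rotate one environment variable instead of scrubbing a chat history.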

3. Don't Invite the Agent Into Your Accounts

Some skills need access to services you already use: email, cloud storage, project management tools. The lazy approach is to give the agent your credentials and let it log in as you.

The fix: Create separate accounts with scoped permissions. If the agent needs to send emails, give it its own email address with limited send privileges. If it needs access to a folder, share that specific folder. Don't hand over your entire drive. The same logic applies to any SaaS tool: a dedicated bot account with minimal permissions beats an all-access pass to your personal account. When the project ends, revoke the bot's access. Clean break.

4. Don't Let Strangers Prompt Your Agent

Prompt injection is the oldest trick in the AI playbook. Someone embeds instructions in a document, an email, or a webpage, and your agent follows them. If your agent processes untrusted input (messages from strangers, files from unknown sources), it's only a matter of time before someone exploits it.

The fix: Don't let others prompt your agent directly. If you want the agent active on social media or email, have it observe incoming content and surface summaries for your review. But don't let it act on strangers' instructions without a human checkpoint. Treat every external input as potentially hostile. The agent can still be useful; it just shouldn't be autonomous when processing content you didn't create or explicitly trust.
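That human checkpoint can be as simple as a review queue: anything triggered by external content gets queued as a proposal, and nothing runs until you flip the approval flag. A minimal sketch (all names are illustrative, not part of OpenClaw):

```python
from dataclasses import dataclass


@dataclass
class ReviewItem:
    source: str        # where the content came from (email, DM, webpage)
    summary: str       # what the agent proposes to do about it
    approved: bool = False


REVIEW_QUEUE: list[ReviewItem] = []


def handle_untrusted(source: str, proposed_action: str) -> ReviewItem:
    """Never act on external content directly: queue it for a human."""
    item = ReviewItem(source=source, summary=proposed_action)
    REVIEW_QUEUE.append(item)
    return item


def execute(item: ReviewItem) -> str:
    """Only items a human has approved ever run."""
    if not item.approved:
        raise PermissionError("Blocked: human approval required.")
    return f"executed: {item.summary}"
```

An injected instruction buried in an email can still reach the queue, but it lands as a summary on your desk rather than as an action on your system.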

5. Keep an Audit Trail

Without logs, you have no idea what the agent actually did. Maybe it read 200 files looking for a keyword. Maybe it made 15 API calls to a service you forgot you connected. Maybe it accessed a folder it shouldn't have. If you can't review the history, you can't catch problems. And you can't defend your workflow if someone asks questions.

The fix: Export action logs after every session that touches sensitive work. Store them with your project files. Every prompt, tool call, file access, and API request should be reviewable. Set a calendar reminder to audit permissions monthly. Check what skills are enabled, what accounts are connected, and whether any of it has drifted from your original setup. Logs can contain sensitive data themselves, so apply the same retention policies you'd use for any confidential document.
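A monthly audit doesn't have to be manual line-reading. If your export is one JSON object per line (the exact schema here is hypothetical; adapt the field names to whatever format OpenClaw actually emits), a short script can surface what the agent did at a glance:

```python
import json
from collections import Counter


def summarize_log(lines: list[str]) -> Counter:
    """Count actions by type from a JSONL action log.

    Assumes one JSON object per line with an "action" field, e.g.
    {"action": "file_read", "path": "/workspace/notes.md"}.
    """
    counts: Counter = Counter()
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines in the export
        event = json.loads(line)
        counts[event.get("action", "unknown")] += 1
    return counts
```

Run it over a session's log and a spike you don't recognize (200 file reads, 15 calls to a forgotten API) jumps out immediately, which is exactly the drift the monthly audit is meant to catch.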


Quick Setup Checklist

  1. Dedicated hardware, VM, or Docker (non-admin account, no sudo)
  2. Skills enabled one at a time, not all at once
  3. Credentials in environment variables, not in chat
  4. Separate bot accounts for external services
  5. No direct processing of untrusted prompts
  6. Action logs exported and reviewed periodically

Treat the agent like a contractor with a security clearance. Give it a workspace, scoped access, and oversight. Don't hand over the keys to everything on day one.
