If you’re anything like me, you probably have a Raspberry Pi sitting in a drawer somewhere, gathering dust. Or maybe it’s running Pi-hole while 95% of its CPU sits idle.
We tend to think of modern AI as this heavy, expensive beast that requires NVIDIA H100 GPUs and a data center cooling system to function. And sure, if you’re training a model, you need that horsepower. But running an agent—the logic that actually does things for you—is surprisingly lightweight.
I recently set up OpenClaw on a Raspberry Pi 5, and honestly? It feels like the closest thing to a real-life "Jarvis" I’ve ever had. It runs 24/7, costs pennies in electricity, and lives entirely on my local network.
Here is how you can build a $50 always-on assistant that actually does work.
Why the Pi?
You might be asking, "Why not just run this on my MacBook?"
You can. But the magic of an AI agent is availability. You want something that wakes up at 3 AM to check if your server went down. You want something that organizes your downloads folder while you're at work. You want a system that doesn't sleep when you close your laptop lid.
A Raspberry Pi (or any low-power mini PC like an Orange Pi or an old Intel NUC) is perfect for this. It sips power—usually less than 5 watts at idle—and it’s always there, waiting for instructions.
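To put "sips power" in numbers, here's a quick back-of-the-envelope calculation in Python. The 5 W figure is the worst-case draw mentioned above, and the $0.15/kWh rate is an assumption; plug in your own numbers:

```python
# Rough annual electricity cost of an always-on Pi.
# Assumed values: 5 W average draw, $0.15 per kWh -- adjust for your setup.
watts = 5
rate_per_kwh = 0.15

kwh_per_year = watts / 1000 * 24 * 365        # 43.8 kWh
cost_per_year = kwh_per_year * rate_per_kwh   # about $6.57

print(f"{kwh_per_year:.1f} kWh/year, about ${cost_per_year:.2f}/year")
```

Even at the full 5 W that's under $7 a year; at a more realistic 2 W idle draw and cheaper power, it lands in the couple-of-dollars-a-year range.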
The Hardware You Need
You don’t need a supercomputer. Here is the realistic minimum spec:
- Raspberry Pi 4 or 5: I recommend the 8GB version if you plan to run small local models (like TinyLlama) directly on the device. If you are just using OpenClaw to connect to cloud APIs (like OpenAI or Anthropic) or a networked GPU server, the 4GB model is plenty.
- SD Card: A decent 32GB+ card. Speed matters here, so get a Class 10 or A2 rated card.
- Power Supply: Don't skimp on this. An undervolted Pi throttles and can corrupt its SD card, which shows up as weird, hard-to-trace errors.
Setting It Up
The beauty of OpenClaw is that it's designed to be efficient. It’s written in Go (mostly), so it compiles down to a single binary that runs beautifully on ARM architectures.
1. The Environment
Start with a fresh install of Raspberry Pi OS Lite (64-bit). You don't need a desktop environment eating up your RAM.
Once you're SSH'd in, the cleanest way to run this is via Docker. If you don't have Docker installed, just run the standard convenience script:
```bash
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
```
2. The Container
Instead of messing with dependencies, we can just pull the image. (I’m assuming you have your API keys ready—either for a cloud provider or your local Ollama instance).
Create a docker-compose.yml file:
```yaml
version: "3.8"
services:
  openclaw:
    image: openclaw/core:latest
    container_name: jarvis
    restart: always
    network_mode: host
    volumes:
      - ./data:/app/data
      - /home/pi/Downloads:/mnt/downloads # give it access to folders you want it to manage
    environment:
      - OPENCLAW_API_KEY=your_key_here
      - LLM_PROVIDER=ollama # or openai, anthropic
      - LLM_ENDPOINT=http://localhost:11434 # if using local Ollama
```
Then, just fire it up:
```bash
docker compose up -d
```
3. Connecting the "Brains"
Here is where you have a choice.
Option A: The Cloud Route.
You set LLM_PROVIDER=openai. The Pi is just the "body." It sends your request to the cloud, gets the logic back, and then executes the command on your local network. This is fast and smart, but costs money per token.
Option B: The Local Route.
You run a small model like Phi-3-mini or Gemma-2b via Ollama directly on the Pi.
- Pros: 100% free, private, works offline.
- Cons: It will be slower. On a Pi 5, a 3B parameter model runs decently, but don't expect instant responses.
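If you go the local route, talking to Ollama from a script is a one-function job. Here's a minimal sketch against Ollama's `/api/generate` endpoint on the default port from the compose file; the `phi3:mini` model tag is one option, swap in whatever you pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default Ollama port, matching the compose file

def build_payload(prompt, model="phi3:mini"):
    # Non-streaming request body for Ollama's /api/generate endpoint.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(prompt, model="phi3:mini"):
    """Send one generation request to a local Ollama server and return its text."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `stream` set to `False`, Ollama returns one JSON object whose `response` field holds the full completion, which is easier to handle than the default streamed chunks.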
Option C: The Hybrid (My Favorite).
I run a powerful model (like Llama 3) on my gaming PC. My Pi runs OpenClaw 24/7. OpenClaw sends heavy thinking tasks to my gaming PC when it's on, and falls back to a lighter cloud model when it's off.
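The "is the gaming PC awake?" check behind that hybrid setup is just a TCP probe with a short timeout. A minimal sketch, assuming a hypothetical LAN address for the GPU box and a cloud URL as the fallback:

```python
import socket

GAMING_PC = ("192.168.1.50", 11434)   # hypothetical LAN address of the GPU box
FALLBACK = "https://api.openai.com"   # lighter cloud model for when the PC is off

def pick_endpoint(host, port, timeout=0.5):
    """Return the local GPU endpoint if it answers, otherwise the cloud fallback."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return f"http://{host}:{port}"
    except OSError:
        return FALLBACK

endpoint = pick_endpoint(*GAMING_PC)
```

The short timeout matters: you want the agent to fall back in under a second when the big machine is asleep, not hang waiting for it.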
What Can It Actually Do?
Once it's running, you treat it like a remote worker who has shell access.
- "Check my internet speed every hour and log it to a CSV." Since it's on the Pi, it can monitor your home network performance reliably.
- "Watch my 'Downloads' folder. If a PDF arrives, move it to the 'Documents' folder." It becomes a smart file sorter that never sleeps.
- "Ping me on Telegram if my website returns a 500 error." It’s a free uptime monitor.
- Home Automation Bridge. Since the Pi is on your local network, OpenClaw can send requests to your Philips Hue bridge or Home Assistant instance locally, without needing to open ports to the outside world.
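The downloads-folder watcher is a good example of how small these tasks really are. The agent just needs to run something like this on a schedule (the folder paths are placeholders, not anything OpenClaw mandates):

```python
from pathlib import Path
import shutil

def sort_pdfs(downloads: Path, documents: Path) -> int:
    """Move every PDF from the downloads folder into documents; return how many moved."""
    documents.mkdir(parents=True, exist_ok=True)
    moved = 0
    for pdf in downloads.glob("*.pdf"):
        shutil.move(str(pdf), str(documents / pdf.name))
        moved += 1
    return moved

# e.g. run every few minutes via cron or the agent's own scheduler:
# sort_pdfs(Path("/mnt/downloads"), Path.home() / "Documents")
```

Nothing here needs a GPU or even much RAM, which is exactly why a Pi handles it fine.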
The "Good Enough" Assistant
We get obsessed with having the smartest possible AI. But for 90% of home automation and admin tasks, you don't need a genius. You need a reliable intern.
A Raspberry Pi running OpenClaw isn't going to solve the theory of relativity. But it will reliably rename your files, check your backups, and send you reminders. For a $50 board and $2 a year in electricity, that’s a pretty incredible return on investment.
Official Links
- Website: openclaw.ai
- GitHub Repository: github.com/openclaw/core
- Documentation: docs.openclaw.ai
Conclusion
You don't need a rack of servers to start experimenting with AI agents. In fact, constraints often make for better projects. Setting this up on a Pi forces you to think about efficiency and real-world utility rather than just raw compute power.
Give it a shot this weekend. Worst case, you can always go back to running Pi-hole.