A free, open-source AI assistant that runs entirely in Docker. No extra hardware. Fully ephemeral. Full desktop environment in a container.
One container is all you need. No complex orchestration, no GPU requirements, no fuss. Pull, run, chat.
Conversations vanish when the container stops. Your data stays yours — nothing persists unless you want it to.
Runs on the machine you already have. No dedicated GPU, no cloud spend, no special infrastructure required.
MIT licensed. Fork it, modify it, self-host it. Transparency isn’t a feature — it’s the foundation.
From zero to AI assistant in seconds. Docker handles dependencies so you can focus on what matters.
Not just a chatbot — OtterBot ships with a complete desktop environment inside the container. Browser, file manager, terminal, and more.
What does the installer do? It checks for Docker (and offers to install it), generates a docker-compose.yml and .env with a secure random DB key, pulls the image, and starts the container. All data lives in ~/otterbot.
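The installer steps above can be sketched in a few lines of shell. This is a hypothetical reconstruction, not the actual installer: the image name (`otterbot/otterbot:latest`), the port, the volume mapping, and the `DB_KEY` variable name are all illustrative assumptions; only the overall flow (create `~/otterbot`, generate a random key into `.env`, write a compose file, then pull and start) comes from the description above.

```shell
# Hypothetical sketch of the installer's key steps; names and values are
# illustrative assumptions, not OtterBot's documented configuration.
mkdir -p ~/otterbot && cd ~/otterbot

# Generate a secure random DB key: 32 random bytes rendered as 64 hex chars.
# (openssl rand -hex 32 would work equally well where openssl is available.)
DB_KEY=$(od -An -N32 -tx1 /dev/urandom | tr -d ' \n')
printf 'DB_KEY=%s\n' "$DB_KEY" > .env

# Write a minimal compose file (single container, data kept under ~/otterbot).
cat > docker-compose.yml <<'EOF'
services:
  otterbot:
    image: otterbot/otterbot:latest   # illustrative image name
    env_file: .env
    ports:
      - "8080:8080"                   # illustrative port
    volumes:
      - ./data:/data                  # persists data inside ~/otterbot
EOF

# Finally the installer would pull the image and start the container:
#   docker compose pull && docker compose up -d
```

The actual installer also checks whether Docker is present first and offers to install it; that platform-specific step is omitted here.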
See the docs for manual Docker setup and all runtime options.
After using OpenClaw for a few days, I was frustrated. I could never tell what was actually happening. Everything had to be built from scratch with zero guidance. Cost-efficient models delivered a terrible experience, and the higher-tier models had response times that made conversations painful.
When I talk to an assistant, I want an immediate response. An assistant doesn't need to be brilliant; it just needs to coordinate the right agents. OtterBot runs beautifully on GPT-OSS-120B and Qwen3-Coder-Next, giving you a great experience without burning through expensive API credits. OpenCode, Claude Code, Codex, and Gemini CLI come pre-installed: just add your API key and go.
Combine that with the 3D view, and you always know exactly what your agents are doing.