OpenClaw

Computer use · MIT
The open answer to Claude Computer Use — powered by your local model.

OpenClaw gives any local LLM real hands on your machine: mouse, keyboard, screenshot analysis, browser automation, and shell access, all behind a safety layer that asks you before anything irreversible. It pairs vision-capable models (LLaVA, Qwen-VL, Pixtral, Llama-3.2-Vision) with a robust action runtime so agents can actually do work — book a flight, debug a repo, rearrange a spreadsheet — without ever leaving your desktop. The architecture is modular: swap the planner, the executor, or the model with no cloud round-trip.

  • Vision + action loop: see the screen, decide, click
  • Sandboxed action runtime with user confirmation
  • Pluggable models — Ollama, llama.cpp, vLLM
  • Browser, shell, filesystem & app-level tools
  • Record, replay & share agent trajectories
  • Cross-platform (macOS, Linux, Windows)
Best for: Desktop automation & assistant-style workflows
Models: Any vision-capable local LLM
License: MIT — fully open
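The vision + action loop described above can be sketched in plain Python. Everything here is illustrative: `capture_screen`, `query_model`, and the action format are hypothetical stand-ins, not OpenClaw's actual API.

```python
def capture_screen() -> bytes:
    """Hypothetical stand-in: return the current screen as image bytes."""
    return b"\x89PNG..."  # placeholder image data

def query_model(screenshot: bytes, goal: str) -> dict:
    """Hypothetical stand-in: ask a vision LLM for the next UI action."""
    return {"action": "click", "x": 120, "y": 340, "irreversible": False}

def confirm(action: dict) -> bool:
    """Safety layer: irreversible actions would require explicit approval."""
    return not action.get("irreversible", False)

def run(goal: str, max_steps: int = 10) -> list[dict]:
    """See the screen, decide, act — until done or the user declines."""
    trace = []
    for _ in range(max_steps):
        action = query_model(capture_screen(), goal)
        if not confirm(action):
            break  # user declined; stop before anything irreversible
        trace.append(action)
        if action["action"] == "done":
            break
    return trace
```

The real runtime would dispatch each confirmed action to the mouse/keyboard layer; the point here is only the loop shape and the confirmation gate.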

Hermes Agent

Multi-agent · Tool-calling native
A messenger framework for long-horizon, multi-model reasoning.

Hermes Agent is built on top of the renowned Hermes family of instruction-tuned open models, which are exceptionally strong at structured function calling and JSON-mode output. The framework turns those abilities into a full orchestration layer: multiple specialist agents (Planner, Researcher, Coder, Critic) exchange typed messages, call local tools, and converge on a solution. Conversations, tool calls and internal thoughts are all traceable — a dream for developers building serious assistant products.

  • Reliable, schema-validated tool calling
  • Role-based multi-agent graphs
  • Built-in memory, retrieval and scratchpad
  • Streamed traces for full transparency
  • Works with any OpenAI-compatible local runtime
  • Python + TypeScript SDKs
Best for: Developers building reliable agent products
Models: Hermes, Llama-3, Qwen, Mistral — any tool-capable LLM
License: Apache 2.0
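Schema-validated tool calling means a model's JSON output is checked against the tool's declared parameter schema before anything runs. A minimal sketch of that check in plain Python — the schema shape and `validate_call` helper are illustrative, not Hermes Agent's real interface:

```python
import json

# Illustrative tool schema, loosely in the style of function definitions.
WEATHER_TOOL = {
    "name": "get_weather",
    "parameters": {
        "city": {"type": str, "required": True},
        "unit": {"type": str, "required": False},
    },
}

def validate_call(raw: str, tool: dict) -> dict:
    """Parse a model's JSON tool call and check it against the schema."""
    call = json.loads(raw)
    if call.get("name") != tool["name"]:
        raise ValueError(f"unknown tool: {call.get('name')}")
    args = call.get("arguments", {})
    for pname, spec in tool["parameters"].items():
        if spec["required"] and pname not in args:
            raise ValueError(f"missing required argument: {pname}")
        if pname in args and not isinstance(args[pname], spec["type"]):
            raise ValueError(f"bad type for argument: {pname}")
    return args  # safe to dispatch to the real tool
```

Validation failures can be fed back to the model as an error message, which is what makes tool calling "reliable" rather than best-effort.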

ZeroClaw

Zero-config · Autonomous
One binary. One goal. It figures out the rest.

ZeroClaw is the "just-run-it" autonomous agent. Download a single executable, type a natural-language objective — "summarize every PDF in ~/Research and generate a literature review" — and ZeroClaw handles model selection, tool registration, planning and self-review. It ships with sensible defaults (Ollama as the brain, a safe file & web toolset, a verifier pass), so you can move from idea to working agent in minutes.

  • Single cross-platform binary, no dependencies
  • Automatic model detection (Ollama / llama.cpp)
  • Plan → act → verify → refine loop
  • Safe-by-default tools with undo journal
  • Optional headless / CLI daemon mode
  • Opinionated, but fully scriptable
Best for: Users who want an autonomous agent right now
Models: Any Ollama / llama.cpp model
License: MIT
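The plan → act → verify → refine loop is a generic agent pattern. A minimal sketch, with stub `plan`, `act`, and `verify` functions that stand in for ZeroClaw's internals (they are not its actual code):

```python
def plan(goal: str) -> list[str]:
    """Hypothetical planner: break the goal into concrete steps."""
    return [f"step {i}: {goal}" for i in range(3)]

def act(step: str) -> str:
    """Hypothetical executor: carry out one step, return its result."""
    return f"result of {step}"

def verify(result: str) -> bool:
    """Hypothetical verifier pass; here, accept well-formed results."""
    return result.startswith("result")

def run(goal: str, max_refinements: int = 2) -> list[str]:
    results = []
    for step in plan(goal):
        for _ in range(1 + max_refinements):
            result = act(step)
            if verify(result):
                break  # verified; move on to the next step
            step = f"refined {step}"  # refine the step and retry
        results.append(result)
    return results
```

The verifier pass is what separates this from a plain plan-and-execute loop: failed steps get refined and retried instead of silently accepted.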

Open Interpreter

Code execution
A natural-language interface to your entire computer.

Open Interpreter turns any LLM into a local developer assistant that can write and execute Python, shell and JavaScript on your machine. Ask it to rename a folder of files, crunch a CSV, build a site, or automate a workflow — and it will actually do it, step by step, asking for confirmation along the way.

  • Runs code locally in Python/Bash/JS
  • Streams a chain-of-action you can interrupt
  • Works with any OpenAI-compatible local backend
  • "OS mode" for vision + computer control
  • Well-documented Python API
Best for: Developers automating local workflows
Models: Any local or cloud LLM
License: AGPLv3

CrewAI

Role-based
Role-playing agent teams, orchestrated like a company.

CrewAI lets you compose "crews" of specialized agents — each with a role, goal, backstory and tools — and have them collaborate on complex tasks. It's a clean mental model that scales from hobby automations to enterprise-style workflows, and it runs happily on top of local models via Ollama or any llama.cpp server.

  • Role / goal / backstory abstractions
  • Sequential and hierarchical task flows
  • Rich tool ecosystem & easy custom tools
  • Local or cloud LLMs with a line of config
  • Python-first, production deployments supported
Best for: Teams modeling business processes as agents
Models: Any LLM via LiteLLM
License: MIT
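The role / goal / backstory abstraction with a sequential task flow can be sketched in plain dataclasses. This mirrors the shape of CrewAI's `Agent`, `Task`, and `Crew` objects but is a toy stand-in, not the library's actual API:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    role: str
    goal: str
    backstory: str

    def perform(self, task: "Task", context: list[str]) -> str:
        # A real agent would prompt an LLM with role/goal/backstory
        # plus the context; this stub just echoes its role.
        return f"[{self.role}] {task.description}"

@dataclass
class Task:
    description: str
    agent: Agent

@dataclass
class Crew:
    agents: list[Agent]
    tasks: list[Task]

    def kickoff(self) -> list[str]:
        """Sequential flow: each task sees the outputs of earlier ones."""
        outputs: list[str] = []
        for task in self.tasks:
            outputs.append(task.agent.perform(task, outputs))
        return outputs
```

A hierarchical flow would add a manager agent that routes tasks instead of the fixed sequence, but the crew-of-roles mental model is the same.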

AutoGen

Microsoft Research
Conversation as a programming model for agents.

AutoGen, from Microsoft Research, pioneered the idea that multi-agent systems can be expressed as structured conversations between LLMs. Its latest versions are async-first, work with any OpenAI-compatible endpoint (including Ollama and llama-server), and ship a visual designer called AutoGen Studio for rapid prototyping.

  • Multi-agent conversations with typed messages
  • Async, event-driven runtime
  • AutoGen Studio no-code designer
  • Deep tool use and code execution support
  • Strong research backing & community
Best for: Researchers & advanced developers
Models: OpenAI-compatible endpoints
License: MIT
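"Conversation as a programming model" means agents exchange typed messages in turns until a termination condition. A minimal async sketch of that pattern — these stub classes are illustrative and not AutoGen's real classes:

```python
import asyncio
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    content: str

class EchoAgent:
    """Stub agent: a real one would call an LLM to compose its reply."""

    def __init__(self, name: str):
        self.name = name

    async def reply(self, msg: Message) -> Message:
        return Message(self.name, f"{msg.content} -> {self.name}")

async def converse(a, b, opening: str, turns: int = 4) -> list[Message]:
    """Two agents alternate replies to each other's last message."""
    history = [Message("user", opening)]
    speakers = [a, b]
    for i in range(turns):
        history.append(await speakers[i % 2].reply(history[-1]))
    return history
```

The async, event-driven runtime is the important part: because each turn is an awaitable, agents can be interleaved with tool calls, timeouts, and human input without blocking.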

LangGraph

Graph runtime
Stateful agent workflows as explicit graphs.

LangGraph trades free-form chat-of-agents for explicit, inspectable state machines. You describe nodes (LLM calls, tools, routers) and edges (conditions), and the runtime gives you checkpointing, human-in-the-loop, streaming and production-grade reliability. It plugs into local models with the same one-liner as it does into cloud ones.

  • Explicit graph-based agent definitions
  • Durable state & checkpointing
  • Human-in-the-loop interrupts
  • First-class streaming & tracing
  • Pairs with LangSmith for observability
Best for: Production-grade agent deployments
Models: Any OpenAI-compatible endpoint
License: MIT
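An explicit graph of nodes and conditional edges can be sketched as a tiny state-machine runner. The `StateGraph` class below is a toy illustration of the idea, not LangGraph's actual API:

```python
class StateGraph:
    """Toy graph runtime: nodes transform state, routers pick the edge."""

    def __init__(self):
        self.nodes = {}
        self.routers = {}

    def add_node(self, name, fn):
        self.nodes[name] = fn

    def add_conditional_edge(self, name, router):
        self.routers[name] = router  # router(state) -> next node or "END"

    def run(self, start, state):
        node = start
        while node != "END":
            state = self.nodes[node](state)
            state.setdefault("trace", []).append(node)  # checkpoint-style log
            node = self.routers[node](state)
        return state

# Wire up a generate -> review cycle that retries until quality passes.
graph = StateGraph()
graph.add_node("generate", lambda s: {**s, "draft": s.get("draft", "") + "x"})
graph.add_node("review", lambda s: {**s, "ok": len(s["draft"]) >= 3})
graph.add_conditional_edge("generate", lambda s: "review")
graph.add_conditional_edge("review", lambda s: "END" if s["ok"] else "generate")
```

Because every transition is an explicit edge and every step lands in the state dict, the whole run is inspectable after the fact — the property that makes this style suit checkpointing and human-in-the-loop interrupts.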

Continue

IDE agent
A local coding agent that lives inside your editor.

Continue is a VS Code and JetBrains extension that turns any local model into a Copilot-class coding agent: autocomplete, chat, edit, refactor and agentic multi-file changes, all with zero code leaving your machine when paired with Ollama or llama.cpp. It's the friendliest entry point to local dev-agents today.

  • Chat, inline edit, agent & autocomplete modes
  • Works with any local runtime
  • Custom slash commands & context providers
  • VS Code & JetBrains support
  • Privacy-first defaults, all data stays local
Best for: Developers wanting a local Copilot
Models: Ollama, llama.cpp, vLLM
License: Apache 2.0

Aider

Terminal coder
Pair-programming with an LLM, straight from your terminal.

Aider is a terminal-native coding agent that edits your git repo for you. It builds a repo map, commits every change with a descriptive message, and works with local models through Ollama or any OpenAI-compatible endpoint — a favorite of engineers who prefer the CLI over an IDE plugin.

  • Understands whole-repo context via repo maps
  • Automatic git commits per change
  • Voice coding & screenshots-as-input
  • Benchmarks many models for code quality
  • Runs entirely offline with local LLMs
Best for: Terminal-first engineers
Models: Any chat model, code-tuned preferred
License: Apache 2.0

One model. A dozen agents. Infinite possibilities.

See how each platform compares on autonomy, computer-use, developer support and more.

Open comparison table →

Join the global local-AI community

Live posts on X, 470K+ builders in r/LocalLLaMA, active Discord & Matrix rooms, and trending GitHub repos — all gathered in one hub.