Two tables. Every major open-source tool. Pick the right stack for your hardware, your privacy needs, and your use case — no proprietary apps, no cloud-only services.
Which engine should actually run your model? Check platform support, interface style, and who each tool is really built for. In both tables, more dots mean stronger capability; ● marks full support, ◐ partial, ○ none.
| Tool | Interface | Platforms | Model format | Local server / API | Beginner friendly | Advanced control | License | Best for |
|---|---|---|---|---|---|---|---|---|
| Ollama (daemon + CLI) | CLI + API | macOS · Linux · Win | GGUF | ● OpenAI-compatible | ●●●● | ●●● | MIT | Backend for everything |
| Open WebUI (self-hosted web UI) | Web UI + API | Any (Docker / Node) | via Ollama / OpenAI API | ● built-in proxy | ●●●●● | ●●●● | BSD-3 | Private ChatGPT for a team |
| Jan (open ChatGPT alternative) | GUI | macOS · Linux · Win | GGUF | ◐ via extensions | ●●●● | ●●● | AGPLv3 | Fully open-source chat |
| GPT4All (Nomic AI) | GUI + SDK | macOS · Linux · Win | GGUF | ◐ local API | ●●●●● | ●● | MIT | Doc-grounded chat for everyone |
| llama.cpp (the engine) | CLI + lib | macOS · Linux · Win · iOS · Android | GGUF | ● llama-server | ○ | ●●●●● | MIT | Maximum performance |
| vLLM (UC Berkeley) | HTTP server | Linux (CUDA / ROCm) | HF Transformers | ● OpenAI-compatible | ○ | ●●●●● | Apache 2.0 | Production at scale |
| Apple MLX (framework) | Python / Swift lib | macOS (Apple Silicon) | MLX / safetensors | ◐ via mlx-server | ○ | ●●●●● | MIT | Peak speed on Macs |
| LocalAI (OpenAI drop-in) | HTTP server | Any (Docker) | Many backends | ● full OpenAI surface | ●● | ●●●● | MIT | Replace OpenAI silently |
| Text Generation WebUI (oobabooga) | Web UI | macOS · Linux · Win | GGUF · GPTQ · EXL2 | ● OpenAI-compatible | ●● | ●●●●● | AGPLv3 | Power users & research |
| KoboldCpp (writer's tool) | Web UI + API | macOS · Linux · Win | GGUF | ● OpenAI + Kobold | ●●● | ●●●● | AGPLv3 | Story & roleplay writing |
| Cherry Studio (open desktop client) | GUI | macOS · Linux · Win | GGUF (via Ollama) | ◐ via Ollama | ●●●●● | ●● | Apache 2.0 | Polished open-source desktop app |
| AnythingLLM (team workspace) | Web + Desktop | macOS · Linux · Win · Docker | Any via backends | ● built-in | ●●● | ●●●● | MIT | Private knowledge base |
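A practical consequence of the "Local server / API" column: Ollama, vLLM, LocalAI, and llama.cpp's llama-server all speak the OpenAI chat-completions wire format, so one small client covers the whole stack. A minimal sketch using only the Python standard library (the base URL and model name are placeholders for your setup):

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str) -> dict:
    """OpenAI-style chat-completions payload, accepted by Ollama,
    vLLM, LocalAI, and llama-server alike."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(base_url: str, model: str, prompt: str) -> str:
    """POST one chat turn to any OpenAI-compatible local endpoint."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_chat_request(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Ollama listens on port 11434 by default; for vLLM or LocalAI
# only base_url changes:
# print(chat("http://localhost:11434", "llama3.1:8b", "Say hi."))
```

Swapping backends means changing `base_url`, nothing else — which is exactly why Ollama works as the "backend for everything" row above.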
Chat is just the start. These are the frameworks and apps that actually let agents do things on your behalf.
| Agent | Focus | Autonomy | Computer use | Multi-agent | Tool-calling | Local-first | Interface | License |
|---|---|---|---|---|---|---|---|---|
| OpenClaw (desktop agent) | Computer control | ●●●● | ● screen + mouse + shell | ◐ | ● | ● | Desktop app | MIT |
| Hermes Agent (orchestrator) | Reliable tool use | ●●●● | ◐ via tools | ● role graphs | ● JSON-native | ● | SDK (Py / TS) | Apache 2.0 |
| ZeroClaw (one-shot autonomy) | Zero-config autonomy | ●●●●● | ● file + web | ◐ internal | ● | ● | Single binary / CLI | MIT |
| Open Interpreter (code executor) | Local scripting | ●●● | ● OS mode | ○ | ● | ● | CLI + lib | AGPLv3 |
| CrewAI (role-based crews) | Business workflows | ●●●● | ◐ via tools | ● crews | ● | ● | Python SDK | MIT |
| AutoGen (Microsoft) | Conversational agents | ●●●● | ◐ via tools | ● chat graphs | ● | ● | Python + Studio | MIT |
| LangGraph (LangChain) | Production graphs | ●●●● | ◐ via tools | ● nodes | ● | ● | Python + TS SDK | MIT |
| Continue (IDE copilot) | Coding | ●●● | ◐ inside IDE | ○ | ● | ● | VS Code / JetBrains | Apache 2.0 |
| Aider (terminal coder) | Coding | ●●● | ◐ repo edits | ○ | ● | ● | CLI | Apache 2.0 |
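The "Tool-calling" column mostly means the OpenAI function/tool schema: the framework describes each tool as JSON, the model replies with a structured call, and the framework executes it. A sketch of one such tool definition plus the parsing step (the `read_file` tool and its parameters are invented for illustration):

```python
import json

# A hypothetical "read_file" tool in the OpenAI tools schema — the shape
# frameworks like CrewAI, AutoGen, and LangGraph ultimately hand to the model.
READ_FILE_TOOL = {
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read a UTF-8 text file and return its contents.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "Path to the file."},
            },
            "required": ["path"],
        },
    },
}

def parse_tool_call(message: dict) -> tuple[str, dict]:
    """Extract (tool name, arguments) from a model message that
    contains an OpenAI-style tool call."""
    call = message["tool_calls"][0]["function"]
    return call["name"], json.loads(call["arguments"])
```

Because local servers like Ollama and vLLM expose this same schema, the agent frameworks in the table run against local models without modification.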
**Just want a private chatbot:** Install Jan or Open WebUI (with Ollama) and pull Llama 3.1 8B. You'll have a ChatGPT-quality assistant on your desk in 5 minutes — 100% open source.
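To confirm the pull worked, a running Ollama daemon lists installed models at `GET /api/tags`. A small stdlib sketch (11434 is Ollama's default port; adjust if you changed it):

```python
import json
import urllib.request

def parse_model_names(tags_response: dict) -> list[str]:
    """Pull model names out of Ollama's /api/tags response body."""
    return [m["name"] for m in tags_response.get("models", [])]

def list_local_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Ask a running Ollama daemon which models are already pulled."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return parse_model_names(json.load(resp))

# print(list_local_models())  # e.g. ['llama3.1:8b'] after the pull above
```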
**Building agents:** Run Ollama as the backend and pick Hermes Agent or LangGraph for orchestration. Add Continue in your IDE.
**Hands-off autonomy:** Try ZeroClaw for zero-config autonomy, or OpenClaw if you need real computer control.
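Whichever framework you pick, the core agent loop is the same: send the conversation to the model, execute any tool call it returns, append the result, and repeat until it answers in plain text. A deliberately tiny sketch with a stubbed model (every name here is hypothetical; real frameworks add retries, sandboxing, and stop conditions):

```python
def run_agent(model, tools: dict, prompt: str, max_steps: int = 5) -> str:
    """Loop: model -> tool call -> tool result -> model, until a plain answer."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        reply = model(messages)              # your local LLM call goes here
        if "tool_call" not in reply:
            return reply["content"]          # plain answer: we're done
        name, args = reply["tool_call"]
        result = tools[name](**args)         # actually do the thing
        messages.append({"role": "tool", "name": name, "content": str(result)})
    raise RuntimeError("agent did not finish within max_steps")

# Stubbed model: first requests a tool, then answers with its result.
def fake_model(messages):
    if messages[-1]["role"] == "tool":
        return {"content": f"The answer is {messages[-1]['content']}"}
    return {"tool_call": ("add", {"a": 2, "b": 3})}

# run_agent(fake_model, {"add": lambda a, b: a + b}, "What is 2 + 3?")
# -> "The answer is 5"
```

The autonomy dots in the table above roughly track how many of these loop iterations a tool will run without asking a human first.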
**Team or enterprise:** Pair vLLM or LocalAI on the server with AnythingLLM as the user workspace. Private. Scalable.