Short, opinionated walkthroughs for the most common local-AI journeys. No subscriptions, no affiliate links, no filler.
Install Ollama, pull Llama 3.1, and chat — from zero to a working prompt in minutes.
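The whole journey is three commands. A sketch for Linux/macOS, assuming the default `llama3.1` tag (the 8B model at the default quantization; the download size is approximate):

```shell
# Install Ollama via the official install script (Linux/macOS).
curl -fsSL https://ollama.com/install.sh | sh

# Pull the default Llama 3.1 tag (8B, roughly a 4-5 GB download).
ollama pull llama3.1

# Start an interactive chat REPL; type a prompt, Ctrl+D to exit.
ollama run llama3.1
```

On Windows, use the installer from ollama.com instead of the script.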
A practical matrix of RAM, VRAM and quantization so you stop guessing which model fits.
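The matrix boils down to simple arithmetic: parameters times bits-per-weight, plus overhead for the runtime and KV cache. A rough sketch (the bits-per-weight figures are typical GGUF values and the 10% overhead is an assumption, not a vendor-published number):

```python
# Approximate bits per weight for common GGUF quantization levels.
BITS_PER_WEIGHT = {
    "f16": 16.0,
    "q8_0": 8.5,
    "q5_k_m": 5.5,
    "q4_k_m": 4.5,
}

def estimated_gb(params_billions: float, quant: str, overhead: float = 1.10) -> float:
    """Rough resident size in GB for a model at a given quantization."""
    bytes_total = params_billions * 1e9 * BITS_PER_WEIGHT[quant] / 8
    return bytes_total * overhead / 1e9

# An 8B model at q4_k_m lands near 5 GB, so it fits an 8 GB GPU with room
# for the KV cache; the same model at f16 needs roughly 17-18 GB.
print(round(estimated_gb(8, "q4_k_m"), 1))
print(round(estimated_gb(8, "f16"), 1))
```

If the estimate exceeds your VRAM, the runtime spills layers to system RAM, which is why total RAM matters too.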
Use AnythingLLM or Jan to ground a local model in your own PDFs, Markdown files and docs.
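Under the hood, "grounding" is retrieval-augmented generation: chunk your documents, find the chunks most relevant to the question, and paste them into the prompt. A toy sketch of the idea — word-overlap scoring stands in for the embedding search these tools actually use, and all names here are made up:

```python
def chunk(text: str, size: int = 40) -> list[str]:
    """Split a document into fixed-size word windows."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query: str, passage: str) -> int:
    """Toy relevance score: shared lowercase words."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def build_prompt(query: str, docs: list[str], k: int = 2) -> str:
    """Pick the top-k chunks and build a grounded prompt."""
    chunks = [c for d in docs for c in chunk(d)]
    top = sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]
    context = "\n---\n".join(top)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Ollama serves local models over an HTTP API on port 11434.",
    "Quantization trades a little quality for a big drop in memory use.",
]
print(build_prompt("What port does Ollama use?", docs))
```

Real tools replace `score` with vector embeddings and a proper index, but the prompt-assembly shape is the same.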
Set up a safe sandbox, connect a vision model, and let OpenClaw automate real tasks.
Compose a Planner → Researcher → Critic graph over any local model, with reliable JSON tool calls.
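The shape of such a graph is simple: each role gets a payload, must answer in JSON, and malformed output triggers a retry. A minimal sketch with stubbed model calls — in a real setup `call_model` would hit your local model's API, and every name here is hypothetical:

```python
import json

def call_model(role: str, payload: dict) -> str:
    # Stub: a real implementation would send the payload to a local model.
    # Here each role returns canned JSON so the graph logic is testable.
    canned = {
        "planner": {"steps": ["gather sources", "summarize"]},
        "researcher": {"findings": ["local models run offline"]},
        "critic": {"approved": True, "notes": []},
    }
    return json.dumps(canned[role])

def call_json(role: str, payload: dict, retries: int = 2) -> dict:
    # "Reliable JSON": parse the reply, retry on malformed output.
    for _ in range(retries + 1):
        try:
            return json.loads(call_model(role, payload))
        except json.JSONDecodeError:
            continue
    raise ValueError(f"{role} never produced valid JSON")

def run_graph(task: str) -> dict:
    plan = call_json("planner", {"task": task})
    research = call_json("researcher", {"task": task, "plan": plan})
    verdict = call_json("critic", {"plan": plan, "research": research})
    return {"plan": plan, "research": research, "verdict": verdict}

result = run_graph("write a digest of today's notes")
print(result["verdict"]["approved"])
```

The retry-on-parse-failure loop is what makes small local models usable as tool callers: you reject bad output instead of trusting it.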
From daily digest automation to overnight code reviews — ZeroClaw recipes for real work.
Wire up Continue + Ollama for fast, private autocomplete and repo-aware refactors.
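The wiring lives in Continue's config file: point a chat model and a smaller autocomplete model at your local Ollama server. A sketch of the JSON form — model tags are examples, and the schema has evolved, so check Continue's current docs:

```json
{
  "models": [
    {
      "title": "Llama 3.1 8B (local)",
      "provider": "ollama",
      "model": "llama3.1"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Autocomplete (local)",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b"
  }
}
```

A small, fast model for autocomplete and a larger one for chat is the usual split: completion latency matters far more than completion depth.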
Permissions, sandboxes, and network rules that keep agents powerful but contained.
Tokens/sec vs. memory vs. quality — how to evaluate local models for your real workload.
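The core measurement is worth getting right: time-to-first-token (prompt processing) and decode tokens/sec are different numbers, and averaging them hides both. A sketch of the split — the fake stream below is a stand-in for a real streaming client:

```python
import time

def measure_throughput(stream) -> dict:
    """Consume a token stream; report time-to-first-token and decode tok/s."""
    start = time.perf_counter()
    first = None
    count = 0
    for _tok in stream:
        now = time.perf_counter()
        if first is None:
            first = now - start  # prompt-processing latency
        count += 1
    total = time.perf_counter() - start
    decode = total - (first or 0.0)  # time spent generating after token 1
    return {
        "tokens": count,
        "ttft_s": first,
        "tok_per_s": (count - 1) / decode if decode > 0 else 0.0,
    }

def fake_stream(n=20, delay=0.001):
    # Stand-in for a real streaming client (e.g. an HTTP token stream).
    for _ in range(n):
        time.sleep(delay)
        yield "tok"

stats = measure_throughput(fake_stream())
print(stats["tokens"])
```

Run it against your real workload's prompt lengths, not a one-line "hello" — prompt processing dominates at long contexts and will reorder your model rankings.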
These guides are written by the community. Want to contribute one? Propose a topic →