Private AI-Powered Web Research on Your Mac with OpenClaw — A Complete Docker Walkthrough

The era of needing to send every query to a cloud API is ending. Modern Apple Silicon ships with enough unified memory and neural engine throughput to run 12-billion-parameter language models at conversational speed — and the results are genuinely good, not a compromise you tolerate for the sake of privacy. Tools like Local Deep Research pair these local models with self-hosted search engines like SearXNG to create a fully private research pipeline that lives entirely on your machine: no API keys, no usage fees, no data leaving your network.

The practical upside goes beyond privacy, though. When your research stack runs locally, you can iterate without rate limits, run long exploratory sessions without watching a billing dashboard, and keep sensitive or proprietary queries completely off third-party servers.

And this is just the starting point. With open-source frameworks like OpenClaw pushing agentic AI capabilities into local environments, we're heading toward a world where autonomous research agents — ones that plan, search, read, and synthesize across multiple rounds — run as background processes on your laptop the same way a compiler or linter does today. A hybrid setup that combines local models for reasoning with self-hosted search for information retrieval positions you at the front of that wave, rather than waiting for a cloud vendor to package it for you.

Once everything is installed, you can use my OpenClaw skill to automate your research: https://clawhub.ai/eplt/local-deep-research


A step-by-step guide for beginners. All commands can be copied and pasted into Terminal.


Prerequisites

What you need before starting:

  • A Mac running macOS Monterey (12) or later — Apple Silicon (M1/M2/M3/M4) or Intel both work
  • At least 16 GB of RAM (8 GB minimum, but things will be slow)
  • At least 25 GB of free disk space (the default Ollama model alone is ~8 GB)
  • An internet connection for the initial download

Step 1 — Install Docker Desktop

Docker Desktop is the engine that runs all three services (LDR, Ollama, SearXNG) in isolated containers.

  1. Go to https://docs.docker.com/desktop/setup/install/mac-install/
  2. Download the .dmg file for your chip type — Apple Silicon or Intel. If you're not sure, click the Apple menu → About This Mac; if it says "Apple M1/M2/M3/M4" you want Apple Silicon.
  3. Open the .dmg and drag Docker.app into your Applications folder.
  4. Launch Docker Desktop from Applications. It will ask for your password to install a helper — this is normal.
  5. Wait until the Docker icon in the menu bar shows a steady state (no animation). You can verify it's running by opening Terminal and typing:
docker --version

You should see something like Docker version 27.x.x. If you get "command not found", restart Docker Desktop and try again.


Step 2 — Free Up Port 5000 (macOS AirPlay Conflict)

macOS Monterey and later use port 5000 for AirPlay Receiver. LDR also defaults to port 5000. You have two choices:

Option A — Disable AirPlay Receiver (simplest):

Go to System Settings → General → AirDrop & Handoff and turn off AirPlay Receiver. (On older macOS versions: System Preferences → Sharing → uncheck AirPlay Receiver.)

Option B — Use a different port for LDR (no system changes):

You'll edit one line in the Docker Compose file in Step 3. I'll mark it clearly below. This maps LDR to port 5050 on your Mac while the container still runs internally on 5000.


Step 3 — Download the Docker Compose File and Start Everything

Open Terminal (press ⌘+Space, type "Terminal", press Enter) and run these commands one at a time:

# Create a folder to keep things organized
mkdir -p ~/local-deep-research && cd ~/local-deep-research

# Download the official Docker Compose file
curl -O https://raw.githubusercontent.com/LearningCircuit/local-deep-research/main/docker-compose.yml

If you chose Option B above (different port), edit the file now:

# Open the file in the built-in text editor
nano docker-compose.yml

Find the line under the local-deep-research service that says:

    ports:
      - "5000:5000"

Change it to:

    ports:
      - "5050:5000"

Then press Ctrl+O (save), Enter (confirm), Ctrl+X (exit nano).
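If you'd rather not use an interactive editor, the same one-line change can be made with sed. The `-i.bak` form works with both the BSD sed that ships with macOS and GNU sed, and keeps a backup copy of the original file:

```shell
# Non-interactive alternative to nano: remap the host port with sed
# (-i.bak edits in place and saves the original as docker-compose.yml.bak)
sed -i.bak 's/"5000:5000"/"5050:5000"/' docker-compose.yml

# Confirm the change took effect
grep -n "5050:5000" docker-compose.yml
```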

Now start everything:

docker compose up -d

This single command will:

  • Pull the Local Deep Research image (~1 GB)
  • Pull the Ollama image (~1.5 GB)
  • Pull the SearXNG image (~200 MB)
  • Create a private Docker network so the three containers can talk to each other
  • Download the default LLM model (gemma3:12b, ~8 GB) — this is the slowest part on the first run

Expect the first launch to take 10–30 minutes depending on your internet speed. The Ollama container's health check won't pass until the model is fully downloaded, and LDR waits for Ollama to be healthy before starting.


Step 4 — Watch the Progress

While everything downloads, you can monitor logs:

# Watch all three services at once (Ctrl+C to stop watching)
docker compose logs -f

# Or watch just one service
docker compose logs -f ollama
docker compose logs -f searxng
docker compose logs -f local-deep-research

When you see a line like Uvicorn running on http://0.0.0.0:5000 in the LDR logs, everything is ready.

You can also check the status of all containers:

docker compose ps

All three services should show Up (or Up (healthy) for Ollama).
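As a final readiness check, you can poll the web UI itself from Terminal (this assumes the default port 5000; substitute 5050 if you remapped it in Step 2):

```shell
# Prints the HTTP status code: "200" means the LDR UI is up,
# "000" means nothing is answering on that port yet.
# "|| true" keeps the line from aborting scripts while the stack is still starting.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5000 || true
```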


Step 5 — Verify Each Service

SearXNG — open your browser and go to:

http://localhost:8080

You should see the SearXNG search page. Try a quick search to confirm it works.
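You can also query SearXNG from the command line. The JSON endpoint below is what LDR itself talks to; it only works if the `json` result format is enabled in SearXNG's settings, which the bundled configuration normally takes care of (if you get a 403 instead, that's the likely reason, and the format can be enabled in SearXNG's settings.yml):

```shell
# Hit SearXNG's JSON API directly and show the start of the response.
# Valid output begins with "{"; a 403 means the json format is disabled.
curl -s "http://localhost:8080/search?q=green+tea&format=json" | head -c 300
echo
```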

Ollama — confirm the model is loaded:

docker compose exec ollama ollama list

You should see gemma3:12b in the list.

Local Deep Research — open your browser and go to:

http://localhost:5000

(or http://localhost:5050 if you changed the port in Step 3)

You should see the LDR web interface. It will prompt you to create an account on first visit.


Step 6 — Run Your First Research

  1. In the LDR web interface, type a research question — something like: "What are the health benefits of green tea?"
  2. The default search strategy is Source-Based, which is a good all-rounder for general web research.
  3. The default LLM is gemma3:12b running locally via Ollama — no API keys needed.
  4. Click Start Research and watch the progress bar. LDR will search via SearXNG, read the results, and generate a cited report.

Step 7 — Make Everything Start Automatically on Reboot

The Docker Compose file already includes restart: unless-stopped on all three containers, which means Docker will restart them automatically. You just need Docker Desktop itself to launch at login:

  1. Open Docker Desktop
  2. Click the gear icon (Settings) in the top-right
  3. Under General, check Start Docker Desktop when you sign in to your computer
  4. Click Apply & Restart

That's it. After a reboot, Docker Desktop will start automatically, and all three containers (LDR, Ollama, SearXNG) will come back up on their own. You can verify after a reboot:

docker compose -f ~/local-deep-research/docker-compose.yml ps
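To double-check that the restart policy is actually set on each container, you can ask Docker directly (this assumes the default compose project directory from Step 3):

```shell
# Print each container's name and restart policy
# (should show "unless-stopped" for all three services)
cd ~/local-deep-research
docker compose ps -q | xargs docker inspect --format '{{.Name}}: {{.HostConfig.RestartPolicy.Name}}'
```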

Step 8 — Basic Configuration via the Web UI

Most settings can be changed directly in the LDR web interface at:

http://localhost:5000/settings

Key settings to review:

LLM Settings — The default provider is Ollama with gemma3:12b. If you want to use a cloud provider (OpenAI, Anthropic, etc.), change the provider and enter your API key here. If you want a different local model:

# Pull a different model (example: llama3:8b, which is faster but less capable)
docker compose exec ollama ollama pull llama3:8b

Then select it in the Settings → LLM section.

Search Settings — The default search engine is SearXNG. The default strategy is Source-Based. The six strategies are:

  • Source-Based — best for general research; pulls authoritative sources with citations
  • Focused Iteration — fast factual Q&A; narrow queries refined over multiple iterations
  • Focused Iteration Standard — same as above but produces longer, fully-cited reports
  • Iterative Refinement — runs source-based first, then identifies gaps and researches them in follow-up rounds
  • Topic Organization — clusters results into thematic groups; good for broad exploratory topics
  • MCP (Agentic ReAct) — fully autonomous; the LLM decides which tools to use; most flexible but least predictable

For general web searching, Source-Based is the safest default. Switch to Focused Iteration when you need quick factual answers.


Common Pitfalls and How to Fix Them

"Port 5000 is already in use" — You didn't disable AirPlay Receiver and didn't remap the port. Go back to Step 2 and choose Option A or B.

Ollama container keeps restarting / health check failing — The model is still downloading. Run docker compose logs -f ollama and wait. The default gemma3:12b is ~8 GB. If it's truly stuck, try:

docker compose down
docker compose up -d

SearXNG returns no results — SearXNG aggregates results from public search engines, which sometimes rate-limit or block requests. Wait a minute and try again. If it persists, open http://localhost:8080 directly and test a search there. You can also check SearXNG's enabled engines at http://localhost:8080/preferences.

LDR shows "connection refused" errors for Ollama or SearXNG — The containers may not be on the same Docker network. Verify with:

docker network inspect local-deep-research_ldr-network

All three containers should be listed. If not, bring everything down and up again:

cd ~/local-deep-research
docker compose down
docker compose up -d

Results are in the wrong language — There is a known issue where the global "Search Language" dropdown in LDR settings does not affect SearXNG. To change the SearXNG search language, go to Settings and look for the engine-specific setting search.engine.web.searxng.default_params.language and set it to an ISO 639-1 code (e.g., fr for French, de for German, es for Spanish). The global dropdown only affects Brave, SerpAPI, and a few other engines.

"max_workers must be > 0" — Usually means Ollama isn't running or the model name is wrong. Check that the model name in LDR settings exactly matches what Ollama has:

docker compose exec ollama ollama list

Low memory / Mac becomes unresponsive — gemma3:12b uses ~8–10 GB of RAM. If your Mac only has 8 GB total, switch to a smaller model:

docker compose exec ollama ollama pull gemma3:4b

Then change the model in LDR Settings → LLM → Model.
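To see how much memory each container is actually using before you decide to switch models, Docker's built-in stats snapshot helps:

```shell
# One-time snapshot of memory and CPU usage per container (no live refresh)
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.CPUPerc}}"
```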

SQLCipher warning in the settings page — LDR uses encrypted databases by default. In a local-only Docker setup this is usually not critical. If you see the warning and want to suppress it, add this environment variable to the local-deep-research service in docker-compose.yml:

      - LDR_BOOTSTRAP_ALLOW_UNENCRYPTED=true
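For orientation, the relevant part of docker-compose.yml would then look roughly like this (other keys abbreviated; your file will have more entries under the service):

```yaml
services:
  local-deep-research:
    # ...existing image, ports, and other settings stay as they are...
    environment:
      - LDR_BOOTSTRAP_ALLOW_UNENCRYPTED=true
```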

Then restart:

docker compose down && docker compose up -d

Useful Maintenance Commands

# Stop everything (data is preserved in Docker volumes)
cd ~/local-deep-research
docker compose down

# Start everything again
docker compose up -d

# Update all images to the latest version
docker compose pull
docker compose up -d

# Pull an additional Ollama model
docker compose exec ollama ollama pull mistral:7b

# Delete a model you no longer need
docker compose exec ollama ollama rm gemma3:12b

# View disk usage by Docker
docker system df

# Nuclear option: remove everything including data (⚠️ destroys research history)
docker compose down -v

Useful Links

  • LDR GitHub Repository: https://github.com/LearningCircuit/local-deep-research
  • LDR Configuration Reference: https://github.com/LearningCircuit/local-deep-research/blob/main/docs/CONFIGURATION.md
  • LDR Troubleshooting Guide: https://github.com/LearningCircuit/local-deep-research/wiki/Troubleshooting
  • SearXNG Documentation: https://docs.searxng.org/
  • LDR Discord Community: https://discord.gg/ttcqQeFcJ3
  • LDR GitHub Issues: https://github.com/LearningCircuit/local-deep-research/issues
  • Docker Desktop for Mac: https://docs.docker.com/desktop/setup/install/mac-install/

That covers the full installation from zero to a working LDR + SearXNG + Ollama stack on macOS. The whole thing runs locally — no API keys required for the default configuration, no data leaves your machine (SearXNG proxies your searches so the search engines never see your IP directly).