# Services

## Where these services run (first deployment)

For the first deployment target, Ollama and AnythingLLM run on the remote SSH server that hosts the AI stack and repositories, not necessarily on the user's Linux laptop. Access from the client may use SSH local forwarding or internal hostnames. See deployment-target.md.
## Overview

| Service | Delivery | Default URL / port | Config / persistence |
|---|---|---|---|
| Ollama | systemd (`ollama.service`) | `http://127.0.0.1:11434` (API) | Models under the Ollama data dir; listen address via systemd override |
| AnythingLLM | Docker (`mintplexlabs/anythingllm`) | `http://localhost:3001` | `$HOME/anythingllm` + `.env` bind-mounted; one workspace per project (see anythingllm-workspaces.md) |
| AnythingLLM Desktop | AppImage (optional) | local Electron app | User profile under `~/.config/anythingllm-desktop` (installer) |
## Ollama

- Install: official script `https://ollama.com/install.sh` (used on target Ubuntu hosts).
- Service: `systemctl enable --now ollama` (handled by the installer).
- Default bind: loopback only (`127.0.0.1:11434`), which blocks Docker containers on the same host from calling Ollama.
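The steps above can be sketched as a short shell session (commands are taken from this document; the final `curl` simply confirms the API answers on the default loopback bind):

```shell
# Install Ollama via the official script (target Ubuntu hosts).
curl -fsSL https://ollama.com/install.sh | sh

# The installer normally enables the service; this is the explicit form.
sudo systemctl enable --now ollama

# Confirm the API answers on the default loopback bind.
curl -sS http://127.0.0.1:11434/api/tags
```

Note that at this point only processes on the host itself can reach port 11434; the next section widens the bind for Docker.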
### Expose Ollama to Docker on the same host

Run `configure-ollama-for-docker.sh` as root (or equivalent). It:

- Creates the drop-in `/etc/systemd/system/ollama.service.d/override.conf`:

  ```
  [Service]
  Environment="OLLAMA_HOST=0.0.0.0:11434"
  ```

- Reloads and restarts the service: `systemctl daemon-reload && systemctl restart ollama`.

Verify: `ss -tlnp | grep 11434` now shows `*:11434` instead of `127.0.0.1:11434`.
### Models (reference)

- Embeddings for AnythingLLM + Ollama: `ollama pull nomic-embed-text`.
- Custom name `qwen3-code-webdev`: not in the public Ollama library as-is; this repo includes `Modelfile-qwen3-code-webdev` defining an alias (default base: `qwen3-coder:480b-cloud`). Rebuild with `ollama create qwen3-code-webdev -f Modelfile-qwen3-code-webdev` after editing `FROM`.
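A minimal sketch of what an alias Modelfile like `Modelfile-qwen3-code-webdev` can look like (the base model is the default named above; nothing beyond the `FROM` line is required for a plain alias):

```
# Alias a library model under a project-specific name.
# Edit this line to point at a different base model, then rebuild.
FROM qwen3-coder:480b-cloud
```

After changing the base: `ollama create qwen3-code-webdev -f Modelfile-qwen3-code-webdev`.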
## AnythingLLM (Docker)

### Workspaces and projects

AnythingLLM is used with dedicated workspaces per project so that RAG memory, documents, and threads stay isolated. A sync job ("moulinette") keeps selected repository files aligned with each workspace. Operational rules: anythingllm-workspaces.md.
### Script: install-anythingllm-docker.sh

- Image: `mintplexlabs/anythingllm` (override with `ANYTHINGLLM_IMAGE`).
- Container name: `anythingllm` (override with `ANYTHINGLLM_CONTAINER_NAME`).
- Ports: `HOST_PORT:3001` (default `3001:3001`).
- Capabilities: `--cap-add SYS_ADMIN` (Chromium / document features in the container).
- Networking: `--add-host=host.docker.internal:host-gateway` so the app can reach Ollama on the host at `http://host.docker.internal:11434` once `OLLAMA_HOST` is set as above.
- Volumes: `${STORAGE_LOCATION}:/app/server/storage` and `${STORAGE_LOCATION}/.env:/app/server/.env`.
Re-running the script removes the existing container by name and starts a new one; data persists in `STORAGE_LOCATION` as long as the bind path is unchanged.
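Under the defaults listed above, the container launch the script performs can be sketched roughly as follows. This is a non-authoritative reconstruction from the bullet list, not the script's actual contents; the real `install-anythingllm-docker.sh` may differ in details:

```shell
STORAGE_LOCATION="${STORAGE_LOCATION:-$HOME/anythingllm}"
HOST_PORT="${HOST_PORT:-3001}"

# The .env file must exist up front: Docker bind-mounts it as a single file.
mkdir -p "$STORAGE_LOCATION"
touch "$STORAGE_LOCATION/.env"

# Replace any previous container of the same name, then start fresh.
docker rm -f anythingllm 2>/dev/null || true
docker run -d \
  --name anythingllm \
  -p "${HOST_PORT}:3001" \
  --cap-add SYS_ADMIN \
  --add-host=host.docker.internal:host-gateway \
  -v "${STORAGE_LOCATION}:/app/server/storage" \
  -v "${STORAGE_LOCATION}/.env:/app/server/.env" \
  mintplexlabs/anythingllm
```

Because all state lives under `STORAGE_LOCATION` on the host, the `docker rm -f` + `docker run` cycle is safe to repeat.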
### Configure LLM provider (Ollama)

In `$STORAGE_LOCATION/.env` (mounted into the container), set at minimum:

```
LLM_PROVIDER='ollama'
OLLAMA_BASE_PATH='http://host.docker.internal:11434'
OLLAMA_MODEL_PREF='<model name>'               # e.g. qwen3-code-webdev
EMBEDDING_ENGINE='ollama'
EMBEDDING_BASE_PATH='http://host.docker.internal:11434'
EMBEDDING_MODEL_PREF='nomic-embed-text:latest'
VECTOR_DB='lancedb'                            # default stack
```
See the upstream `.env.example` for the full set of options:
https://raw.githubusercontent.com/Mintplex-Labs/anything-llm/master/docker/.env.example

After editing `.env`, restart the container: `docker restart anythingllm`.
## AnythingLLM Desktop (AppImage)

Script: `installer.sh`. It downloads the official AppImage and optionally installs an AppArmor profile and a `.desktop` entry. It runs interactive prompts and is not a headless service.

- Documentation: https://docs.anythingllm.com
- Run either the Docker container or the Desktop app on a given machine, not both, to avoid conflicting ports and duplicate workspaces.
Operational checks
systemctl is-active ollama
curl -sS http://127.0.0.1:11434/api/tags | head
docker ps --filter name=anythingllm
docker exec anythingllm sh -c 'curl -sS http://host.docker.internal:11434/api/tags | head'
The last command must succeed after OLLAMA_HOST=0.0.0.0:11434 and host.docker.internal are configured.