**Motivations:**
- Ollama and AnythingLLM moved from 192.168.1.164 to the ia LAN host.

**Root causes:**
- Upstreams still targeted 192.168.1.164.

**Fixes:**
- Set upstream servers to 192.168.1.173:11434 and :3001.

**Evolutions:**
- Docs aligned with the ia role IP; note to edit the site conf if the IP changes.

**Affected pages:**
- deploy/nginx/sites/ia.enso.4nkweb.com.conf
- deploy/nginx/README-ia-enso.md
- docs/features/ia-enso-nginx-proxy-ollama-anythingllm.md
- docs/infrastructure.md
- docs/services.md
Feature: Reverse proxy ia.enso.4nkweb.com for Ollama and AnythingLLM
Author: 4NK team
Objective
Expose Ollama and AnythingLLM on the public proxy hostname with HTTPS, path prefixes /ollama and /anythingllm, and gate Ollama with a Bearer token checked at the proxy (compatible with Cursor’s OpenAI base URL + API key).
Impacts
- Proxy (nginx): new `server_name`, TLS, locations, HTTP `map` for Bearer validation; maps deployed under `/etc/nginx/conf.d/` when using the provided script.
- Backend (192.168.1.173, role ia): must accept connections from the proxy on 11434 and 3001; Ollama must not rely on the client `Authorization` header (nginx clears it after validation).
- Clients: Cursor uses `https://ia.enso.4nkweb.com/ollama/v1` with the shared secret as API key; this avoids Cursor's private-IP SSRF blocks when the hostname resolves publicly from the client infrastructure.
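The proxy-side Bearer gate described above can be sketched as an nginx `map` plus a guarded location. This is a minimal illustration, not the repository's actual files: the variable name `$ia_ollama_token_ok` and the `REPLACE_WITH_SECRET` placeholder are assumptions; see `deploy/nginx/http-maps/*.example` and `deploy/nginx/sites/ia.enso.4nkweb.com.conf` for the real configuration.

```nginx
# http {} context — e.g. a file under /etc/nginx/conf.d/.
# Maps the incoming Authorization header to 1 (valid) or 0 (anything else).
map $http_authorization $ia_ollama_token_ok {
    default                        0;
    "Bearer REPLACE_WITH_SECRET"   1;  # placeholder secret
}

server {
    listen 443 ssl;
    server_name ia.enso.4nkweb.com;
    # ssl_certificate / ssl_certificate_key issued by certbot.

    location /ollama/ {
        # Reject requests whose token did not match the map entry.
        if ($ia_ollama_token_ok = 0) { return 401; }
        # Clear the client header so the backend never sees the proxy secret.
        proxy_set_header Authorization "";
        proxy_pass http://192.168.1.173:11434/;
    }

    location /anythingllm/ {
        # Application-level auth only; no Bearer gate on this prefix.
        proxy_pass http://192.168.1.173:3001/;
    }
}
```

Validating the token in a `map` keeps the check in one place and lets the `/ollama/` location stay a plain `proxy_pass`; clearing `Authorization` after validation is what allows Ollama to ignore the client header entirely.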
Repository layout
| Path | Purpose |
|---|---|
| deploy/nginx/sites/ia.enso.4nkweb.com.conf | server blocks, upstreams to 192.168.1.173 (edit if IA host IP changes) |
| deploy/nginx/http-maps/ia-enso-ollama-bearer.map.conf.example | Example Bearer map (manual install) |
| deploy/nginx/http-maps/websocket-connection.map.conf.example | Example WebSocket map (manual install) |
| deploy/nginx/deploy-ia-enso-to-proxy.sh | SSH deploy: maps + site, nginx -t, reload; Bearer-only retry if websocket map already exists |
| deploy/nginx/README-ia-enso.md | Operator reference: automated + manual steps, env vars, checks, troubleshooting |
Deployment modalities
Preferred: run ./deploy/nginx/deploy-ia-enso-to-proxy.sh from smart_ide on a host with SSH access (see README-ia-enso.md for prerequisites and environment variables).
Manual: DNS → TLS (certbot) → install map directives inside http { } (via conf.d or http-maps includes) → install site under sites-available / sites-enabled → nginx -t → reload. Details: deploy/nginx/README-ia-enso.md.
Where a host firewall is in use, restrict the backend ports on the IA host (192.168.1.173 in the repo config) so they accept connections only from the proxy's source IP.
Analysis modalities
- `curl` to `/ollama/v1/models` with and without `Authorization: Bearer <secret>` (expect 200 and 401 respectively).
- Browser access to `/anythingllm/` and application login.
- Cursor connectivity after configuration (no `ssrf_blocked` if the hostname does not resolve to a blocked private IP from Cursor's perspective).
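The first check above can be scripted as a small shell sketch. The secret value is a placeholder, and the expected status codes (401 without the token, 200 with it) assume the Bearer gate is deployed as documented:

```shell
#!/bin/sh
# Verify the proxy-side Bearer gate (placeholder secret; substitute the real one).
BASE_URL="https://ia.enso.4nkweb.com"
SECRET="change-me"

# Without the token: nginx should answer 401 before reaching Ollama.
NO_AUTH=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 \
  "$BASE_URL/ollama/v1/models" || true)

# With the token: nginx should validate it, clear the header, and proxy (expect 200).
WITH_AUTH=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 \
  -H "Authorization: Bearer $SECRET" "$BASE_URL/ollama/v1/models" || true)

echo "no-auth=$NO_AUTH with-auth=$WITH_AUTH"
```

Running this after a deploy gives a quick pass/fail signal without opening a browser; any pair other than `401`/`200` points at the map file, the site conf, or the backend.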
Security notes
- The Bearer secret is equivalent to an API key; rotate it by updating the `map` file and the client configs together.
- AnythingLLM remains protected by its own application authentication; the `/anythingllm` location does not add the Ollama Bearer gate.
- A public URL for `/ollama` exposes the inference endpoint to anyone who knows the secret; combine it with network controls if required.
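Because an nginx `map` accepts multiple keys, one way to rotate the secret without a hard cutover is to list the old and new tokens side by side while clients migrate, then drop the old entry and reload. This is a sketch under the same assumptions as above (the variable name and token values are placeholders, not the repository's actual identifiers):

```nginx
# Transitional map during rotation: both tokens are accepted.
map $http_authorization $ia_ollama_token_ok {
    default               0;
    "Bearer OLD_SECRET"   1;  # remove once all clients are updated
    "Bearer NEW_SECRET"   1;
}
```

Each step is a `nginx -t` plus reload, so the rotation never interrupts clients that have already switched to the new key.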