smart_ide/docs/features/ia-enso-nginx-proxy-ollama-anythingllm.md
Nicolas Cantu a6bd0ea14c Document ia.enso nginx proxy (operator guide, cross-links)
**Motivations:**
- Single operational reference for deploy script vs manual steps and troubleshooting.

**Root causes:**
- The README mixed the manual http-maps installation path with the script's conf.d path, without giving the operator the full context.

**Fixes:**
- Align documentation with deploy script paths and prerequisites.

**Evolutions:**
- Expanded README-ia-enso.md (tables, SSRF context, env vars, rotation, troubleshooting).
- Feature doc table and deployment pointers; links from docs/README, infrastructure, services.

**Affected pages:**
- deploy/nginx/README-ia-enso.md
- docs/features/ia-enso-nginx-proxy-ollama-anythingllm.md
- docs/README.md
- docs/infrastructure.md
- docs/services.md
2026-03-23 01:04:04 +01:00


Feature: Reverse proxy ia.enso.4nkweb.com for Ollama and AnythingLLM

Author: 4NK team

Objective

Expose Ollama and AnythingLLM on the public proxy hostname over HTTPS, under the path prefixes /ollama and /anythingllm, and gate Ollama with a Bearer token checked at the proxy (compatible with Cursor's OpenAI base URL + API key).

Impacts

  • Proxy (nginx): new server_name, TLS, locations, HTTP map for Bearer validation; maps deployed under /etc/nginx/conf.d/ when using the provided script.
  • Backend (192.168.1.164): must accept connections from the proxy on 11434 and 3001; Ollama must not rely on the client Authorization header (nginx clears it after validation).
  • Clients: Cursor uses https://ia.enso.4nkweb.com/ollama/v1 and the shared secret as the API key; this avoids private-IP SSRF blocks in Cursor as long as the hostname resolves publicly from the client's infrastructure.
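The Bearer-validation pattern above can be sketched as an nginx fragment. This is a minimal illustration only: the map variable name, the placeholder secret, and the omitted TLS directives are assumptions, not the repository's actual configuration.

```nginx
# Sketch only: the real directives live under deploy/nginx/.
# The map must sit inside http {}; conf.d is included there by default.
map $http_authorization $ia_enso_ollama_authorized {
    default 0;
    # Replace with the real shared secret before deploying.
    "Bearer CHANGE_ME_SECRET" 1;
}

server {
    server_name ia.enso.4nkweb.com;
    # ... TLS directives omitted ...

    location /ollama/ {
        if ($ia_enso_ollama_authorized = 0) { return 401; }
        # Clear the client header so the backend never sees the proxy secret.
        proxy_set_header Authorization "";
        proxy_pass http://192.168.1.164:11434/;
    }

    location /anythingllm/ {
        # No Bearer gate here; AnythingLLM enforces its own login.
        proxy_pass http://192.168.1.164:3001/;
    }
}
```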

Repository layout

| Path | Purpose |
| --- | --- |
| deploy/nginx/sites/ia.enso.4nkweb.com.conf | Server blocks, upstreams to 192.168.1.164 |
| deploy/nginx/http-maps/ia-enso-ollama-bearer.map.conf.example | Example Bearer map (manual install) |
| deploy/nginx/http-maps/websocket-connection.map.conf.example | Example WebSocket map (manual install) |
| deploy/nginx/deploy-ia-enso-to-proxy.sh | SSH deploy: maps + site, `nginx -t`, reload; Bearer-only retry if the websocket map already exists |
| deploy/nginx/README-ia-enso.md | Operator reference: automated + manual steps, env vars, checks, troubleshooting |

Deployment modalities

Preferred: run ./deploy/nginx/deploy-ia-enso-to-proxy.sh from smart_ide on a host with SSH access (see README-ia-enso.md for prerequisites and environment variables).

Manual: DNS → TLS (certbot) → install the map directives inside http { } (via conf.d or http-maps includes) → install the site under sites-available / sites-enabled → nginx -t → reload. Details: deploy/nginx/README-ia-enso.md.
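The manual sequence might look like the following. This is a sketch under assumptions: a Debian-style nginx layout (conf.d, sites-available/sites-enabled) and a standard certbot invocation; the authoritative steps are in deploy/nginx/README-ia-enso.md.

```shell
#!/usr/bin/env sh
set -eu

# 1. TLS certificate for the public hostname (DNS must already resolve).
certbot --nginx -d ia.enso.4nkweb.com

# 2. Map directives must live inside http {}; conf.d is included there by default.
cp deploy/nginx/http-maps/ia-enso-ollama-bearer.map.conf.example \
   /etc/nginx/conf.d/ia-enso-ollama-bearer.map.conf
cp deploy/nginx/http-maps/websocket-connection.map.conf.example \
   /etc/nginx/conf.d/websocket-connection.map.conf

# 3. Install and enable the site.
cp deploy/nginx/sites/ia.enso.4nkweb.com.conf /etc/nginx/sites-available/
ln -sf /etc/nginx/sites-available/ia.enso.4nkweb.com.conf /etc/nginx/sites-enabled/

# 4. Validate before reloading, so a bad config never takes the proxy down.
nginx -t
systemctl reload nginx
```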

Restrict backend ports on 192.168.1.164 to the proxy source where a host firewall is used.
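With ufw as the host firewall on 192.168.1.164, the restriction could look like this. The proxy source address 203.0.113.10 is a placeholder assumption; substitute the real proxy IP.

```shell
# Allow only the proxy host to reach the backend ports.
ufw allow from 203.0.113.10 to any port 11434 proto tcp
ufw allow from 203.0.113.10 to any port 3001 proto tcp
# Deny the same ports from everywhere else.
ufw deny 11434/tcp
ufw deny 3001/tcp
```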

Analysis modalities

  • curl to /ollama/v1/models with and without Authorization: Bearer <secret> (expect 200 / 401).
  • Browser access to /anythingllm/ and application login.
  • Cursor connectivity after configuration (no ssrf_blocked error if the hostname does not resolve to a blocked private IP from Cursor's perspective).
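The first check above can be scripted. This is a sketch; IA_ENSO_TOKEN is an assumed environment variable name holding the shared secret, not one defined by the deploy script.

```shell
#!/usr/bin/env sh
set -eu
BASE="https://ia.enso.4nkweb.com"

# Without the token: the proxy should answer 401.
curl -s -o /dev/null -w 'no token  -> %{http_code}\n' "$BASE/ollama/v1/models"

# With the token: expect 200 and a JSON model list.
curl -s -o /dev/null -w 'with token -> %{http_code}\n' \
  -H "Authorization: Bearer ${IA_ENSO_TOKEN:?set IA_ENSO_TOKEN}" \
  "$BASE/ollama/v1/models"
```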

Security notes

  • The Bearer secret is equivalent to an API key; rotate by updating the map file and client configs together.
  • AnythingLLM remains protected by its own application authentication; the /anythingllm location does not add the Ollama Bearer gate.
  • A public URL for /ollama exposes the inference endpoint to anyone who knows the secret; combine with network controls if required.
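The rotate-together rule can be kept safe on the proxy side with something like the following sketch. The map filename matches the conf.d install path used by the deploy script, but the sed pattern assumes a `"Bearer <secret>" 1;` line layout in the map file, which is an assumption.

```shell
#!/usr/bin/env sh
set -eu
NEW_SECRET="$1"
MAP=/etc/nginx/conf.d/ia-enso-ollama-bearer.map.conf

# Swap the Bearer value in the map, then validate before reloading,
# so a bad edit never takes the proxy down.
sed -i "s|\"Bearer [^\"]*\"|\"Bearer ${NEW_SECRET}\"|" "$MAP"
nginx -t && systemctl reload nginx
# Update client configs (e.g. Cursor's API key) with the same value immediately.
```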