From 24077e749e59b4fbef0d45b0ad9f890e18f23d87 Mon Sep 17 00:00:00 2001
From: Nicolas Cantu
Date: Mon, 23 Mar 2026 00:56:43 +0100
Subject: [PATCH] Add ia.enso.4nkweb.com nginx proxy for Ollama and AnythingLLM
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

**Motivations:**

- Expose Ollama and AnythingLLM via HTTPS paths on the LAN proxy, with Bearer auth for Ollama.

**Root causes:**

- Cursor blocks direct requests to private IPs (SSRF policy).

**Fixes:**

- N/A (new configuration artifacts).

**Changes:**

- Nginx site template, HTTP map for Bearer validation, websocket map example, deployment README, services doc link, feature documentation.

**Affected pages:**

- deploy/nginx/http-maps/ia-enso-ollama-bearer.map.conf.example
- deploy/nginx/http-maps/websocket-connection.map.conf.example
- deploy/nginx/sites/ia.enso.4nkweb.com.conf
- deploy/nginx/README-ia-enso.md
- docs/features/ia-enso-nginx-proxy-ollama-anythingllm.md
- docs/services.md
---
 deploy/nginx/README-ia-enso.md                | 66 ++++++++++++++
 .../ia-enso-ollama-bearer.map.conf.example    | 10 ++
 .../websocket-connection.map.conf.example     |  7 ++
 deploy/nginx/sites/ia.enso.4nkweb.com.conf    | 91 +++++++++++++++++++
 .../ia-enso-nginx-proxy-ollama-anythingllm.md | 39 ++++++++
 docs/services.md                              | 14 +++
 6 files changed, 227 insertions(+)
 create mode 100644 deploy/nginx/README-ia-enso.md
 create mode 100644 deploy/nginx/http-maps/ia-enso-ollama-bearer.map.conf.example
 create mode 100644 deploy/nginx/http-maps/websocket-connection.map.conf.example
 create mode 100644 deploy/nginx/sites/ia.enso.4nkweb.com.conf
 create mode 100644 docs/features/ia-enso-nginx-proxy-ollama-anythingllm.md

diff --git a/deploy/nginx/README-ia-enso.md b/deploy/nginx/README-ia-enso.md
new file mode 100644
index 0000000..9a54d45
--- /dev/null
+++ b/deploy/nginx/README-ia-enso.md
@@ -0,0 +1,66 @@
+# ia.enso.4nkweb.com — Nginx on the proxy (192.168.1.100)
+
+Reverse proxy to `192.168.1.164`:
+
+- `https://ia.enso.4nkweb.com/ollama/` → Ollama `11434` (Bearer gate, then `Authorization` cleared before proxying upstream).
+- `https://ia.enso.4nkweb.com/anythingllm/` → AnythingLLM `3001`.
+
+## 1. DNS and TLS
+
+DNS must resolve `ia.enso.4nkweb.com` to the public entry that reaches this proxy. Issue a certificate, for example:
+
+```bash
+sudo certbot certonly --webroot -w /var/www/certbot -d ia.enso.4nkweb.com
+```
+
+Adjust the `ssl_certificate` paths in `sites/ia.enso.4nkweb.com.conf` if the live directory name differs.
+
+## 2. HTTP-level maps (required)
+
+Copy the example files to the proxy and include them **inside** `http { }`, **before** any `server` block that uses the variables.
+
+From a checkout of this repository on the admin machine (paths relative to the repository root):
+
+```bash
+sudo mkdir -p /etc/nginx/http-maps
+sudo cp deploy/nginx/http-maps/ia-enso-ollama-bearer.map.conf.example /etc/nginx/http-maps/ia-enso-ollama-bearer.map.conf
+sudo cp deploy/nginx/http-maps/websocket-connection.map.conf.example /etc/nginx/http-maps/websocket-connection.map.conf
+sudo nano /etc/nginx/http-maps/ia-enso-ollama-bearer.map.conf  # set the Bearer secret (single-line value)
+```
+
+In `/etc/nginx/nginx.conf` (or a file already included from `http { }`), add the includes below. Include the websocket map **only if** `$connection_upgrade` is not already defined elsewhere (duplicate `map` definitions fail `nginx -t`):
+
+```nginx
+include /etc/nginx/http-maps/websocket-connection.map.conf;
+include /etc/nginx/http-maps/ia-enso-ollama-bearer.map.conf;
+```
+
+Do not commit the non-example `ia-enso-ollama-bearer.map.conf` with a real secret.
+
+## 3. Site file
+
+```bash
+sudo cp deploy/nginx/sites/ia.enso.4nkweb.com.conf /etc/nginx/sites-available/ia.enso.4nkweb.com.conf
+sudo ln -sf /etc/nginx/sites-available/ia.enso.4nkweb.com.conf /etc/nginx/sites-enabled/
+sudo nginx -t && sudo systemctl reload nginx
+```
+
+## 4. Checks
+
+```bash
+curl -sS -o /dev/null -w "%{http_code}\n" -H "Authorization: Bearer CHANGE_ME_TO_LONG_RANDOM_SECRET" \
+  https://ia.enso.4nkweb.com/ollama/v1/models
+```
+
+Expect `200`. Without the header, or with a wrong token, expect `401`.
+
+AnythingLLM: open `https://ia.enso.4nkweb.com/anythingllm/` and use the **application** login. If static assets fail to load, verify the upstream base-path settings for AnythingLLM or adjust proxy headers per the upstream docs.
+
+## 5. Cursor (OpenAI-compatible)
+
+- Override base URL: `https://ia.enso.4nkweb.com/ollama/v1`
+- API key: **exactly** the same string as in the map after `Bearer ` (Cursor sends `Authorization: Bearer <key>`; nginx compares the full `Authorization` value to `Bearer <secret>`).
+
+## 6. Backend firewall
+
+If a host firewall is enabled on `192.168.1.164`, allow TCP `11434` and `3001` from the proxy host only.
diff --git a/deploy/nginx/http-maps/ia-enso-ollama-bearer.map.conf.example b/deploy/nginx/http-maps/ia-enso-ollama-bearer.map.conf.example
new file mode 100644
index 0000000..ab3f034
--- /dev/null
+++ b/deploy/nginx/http-maps/ia-enso-ollama-bearer.map.conf.example
@@ -0,0 +1,10 @@
+# Install on the proxy inside `http { ... }` (before any server that uses $ia_enso_ollama_authorized):
+#   include /etc/nginx/http-maps/ia-enso-ollama-bearer.map.conf;
+#
+# Copy this file without the .example suffix, then set a long random Bearer secret (ASCII, no double quotes).
+# Cursor / OpenAI-compatible clients: Base URL .../ollama/v1 and API Key = same secret (no "Bearer " prefix).
+
+map $http_authorization $ia_enso_ollama_authorized {
+    default 0;
+    "Bearer CHANGE_ME_TO_LONG_RANDOM_SECRET" 1;
+}
diff --git a/deploy/nginx/http-maps/websocket-connection.map.conf.example b/deploy/nginx/http-maps/websocket-connection.map.conf.example
new file mode 100644
index 0000000..f4399e4
--- /dev/null
+++ b/deploy/nginx/http-maps/websocket-connection.map.conf.example
@@ -0,0 +1,7 @@
+# Place inside `http { ... }` on the proxy (once per nginx instance), e.g.:
+#   include /etc/nginx/http-maps/websocket-connection.map.conf;
+
+map $http_upgrade $connection_upgrade {
+    default upgrade;
+    ''      close;
+}
diff --git a/deploy/nginx/sites/ia.enso.4nkweb.com.conf b/deploy/nginx/sites/ia.enso.4nkweb.com.conf
new file mode 100644
index 0000000..ba33b79
--- /dev/null
+++ b/deploy/nginx/sites/ia.enso.4nkweb.com.conf
@@ -0,0 +1,91 @@
+# ia.enso.4nkweb.com — reverse proxy to LAN host (Ollama + AnythingLLM).
+#
+# Prerequisites on the proxy host:
+# - TLS certificate for ia.enso.4nkweb.com (e.g. certbot).
+# - In the main nginx `http { }` block, include the Bearer map (see http-maps/ia-enso-ollama-bearer.map.conf.example).
+#
+# Upstream: adjust the backend IP (192.168.1.164) in the upstream blocks below if the AI host IP changes.
+
+upstream ia_enso_ollama {
+    server 192.168.1.164:11434;
+    keepalive 8;
+}
+
+upstream ia_enso_anythingllm {
+    server 192.168.1.164:3001;
+    keepalive 8;
+}
+
+server {
+    listen 80;
+    server_name ia.enso.4nkweb.com;
+
+    location /.well-known/acme-challenge/ {
+        root /var/www/certbot;
+    }
+
+    location / {
+        return 301 https://$host$request_uri;
+    }
+}
+
+server {
+    listen 443 ssl;
+    http2 on;
+    server_name ia.enso.4nkweb.com;
+
+    ssl_certificate /etc/letsencrypt/live/ia.enso.4nkweb.com/fullchain.pem;
+    ssl_certificate_key /etc/letsencrypt/live/ia.enso.4nkweb.com/privkey.pem;
+
+    ssl_protocols TLSv1.2 TLSv1.3;
+    ssl_prefer_server_ciphers on;
+    ssl_session_cache shared:SSL:10m;
+    ssl_session_timeout 10m;
+
+    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
+    add_header X-Frame-Options "SAMEORIGIN" always;
+    add_header X-Content-Type-Options "nosniff" always;
+
+    client_max_body_size 100M;
+
+    # Ollama OpenAI-compatible API: require `Authorization: Bearer <secret>` (see map file).
+    location /ollama/ {
+        if ($ia_enso_ollama_authorized = 0) {
+            return 401;
+        }
+
+        proxy_pass http://ia_enso_ollama/;
+        proxy_http_version 1.1;
+        proxy_set_header Host $host;
+        proxy_set_header X-Real-IP $remote_addr;
+        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+        proxy_set_header X-Forwarded-Proto $scheme;
+        proxy_set_header Connection "";
+
+        # Ollama does not need the client Bearer; clearing it avoids forwarding the gate secret to the backend.
+        proxy_set_header Authorization "";
+
+        proxy_buffering off;
+        proxy_read_timeout 3600s;
+        proxy_send_timeout 3600s;
+    }
+
+    # AnythingLLM UI + API (application login). Subpath stripped when forwarding.
+    location /anythingllm/ {
+        proxy_pass http://ia_enso_anythingllm/;
+        proxy_http_version 1.1;
+        proxy_set_header Upgrade $http_upgrade;
+        proxy_set_header Connection $connection_upgrade;
+        proxy_set_header Host $host;
+        proxy_set_header X-Real-IP $remote_addr;
+        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+        proxy_set_header X-Forwarded-Proto $scheme;
+        proxy_set_header X-Forwarded-Prefix /anythingllm;
+        proxy_read_timeout 3600s;
+        proxy_send_timeout 3600s;
+    }
+
+    location = /anythingllm {
+        return 301 https://$host/anythingllm/;
+    }
+}
diff --git a/docs/features/ia-enso-nginx-proxy-ollama-anythingllm.md b/docs/features/ia-enso-nginx-proxy-ollama-anythingllm.md
new file mode 100644
index 0000000..3410510
--- /dev/null
+++ b/docs/features/ia-enso-nginx-proxy-ollama-anythingllm.md
@@ -0,0 +1,39 @@
+# Feature: Reverse proxy ia.enso.4nkweb.com for Ollama and AnythingLLM
+
+**Author:** 4NK team
+
+## Objective
+
+Expose Ollama and AnythingLLM on the public proxy hostname with HTTPS, path prefixes `/ollama` and `/anythingllm`, and **gate Ollama** with a **Bearer token** checked at the proxy (compatible with Cursor’s OpenAI base URL + API key).
+
+## Impacts
+
+- **Proxy (nginx):** new `server_name`, TLS, locations, HTTP `map` for Bearer validation; optional new includes under `/etc/nginx/http-maps/`.
+- **Backend (192.168.1.164):** must accept connections from the proxy on `11434` and `3001`; Ollama must not rely on the client `Authorization` header (nginx clears it after validation).
+- **Clients:** Cursor uses `https://ia.enso.4nkweb.com/ollama/v1` and the shared secret as the API key; this avoids private-IP SSRF blocks in Cursor when the hostname resolves publicly.
+
+## Modifications (repository)
+
+- `deploy/nginx/http-maps/ia-enso-ollama-bearer.map.conf.example` — `map` for `$ia_enso_ollama_authorized`.
+- `deploy/nginx/http-maps/websocket-connection.map.conf.example` — `map` for `$connection_upgrade` (AnythingLLM WebSocket).
+- `deploy/nginx/sites/ia.enso.4nkweb.com.conf` — `server` blocks and upstreams.
+- `deploy/nginx/README-ia-enso.md` — installation and verification on the proxy.
+
+## Deployment procedure
+
+1. DNS for `ia.enso.4nkweb.com` points to the proxy entry used for HTTPS.
+2. Obtain TLS certificates (e.g. certbot) for that name.
+3. Install the map files under `/etc/nginx/http-maps/`, set the Bearer secret, and include the maps inside `http { }`.
+4. Install the site file under `sites-available` / `sites-enabled`, run `nginx -t`, then reload nginx.
+5. Restrict backend ports at the firewall to the proxy source where applicable.
+
+## Verification procedure
+
+- `curl` to `/ollama/v1/models` with and without `Authorization: Bearer <secret>` (expect 200 / 401).
+- Browser access to `/anythingllm/` and application login.
+- Cursor connectivity after the configuration change (no `ssrf_blocked` if the hostname resolves to a public IP from Cursor’s perspective).
+
+## Security notes
+
+- The Bearer secret is equivalent to an API key; rotate it by updating the map file and client configs together.
+- AnythingLLM remains protected by **its own** application authentication; the `/anythingllm` location does not add the Ollama Bearer gate.
diff --git a/docs/services.md b/docs/services.md index 7edb381..5bac4af 100644 --- a/docs/services.md +++ b/docs/services.md @@ -1,5 +1,15 @@ # Services +## Systemd (local host) + +- **Ollama:** `ollama.service` (official installer). Optional drop-in `OLLAMA_HOST=0.0.0.0:11434` for Docker — see `configure-ollama-for-docker.sh` and [systemd/README.md](../systemd/README.md). +- **AnythingLLM:** `anythingllm.service` — Docker container managed by systemd. Install: `sudo ./scripts/install-systemd-services.sh`. Config: `/etc/default/anythingllm` (template `systemd/anythingllm.default`). + +```bash +sudo systemctl restart ollama anythingllm +sudo systemctl status ollama anythingllm +``` + ## Where these services run (first deployment) For the **first deployment target**, Ollama and AnythingLLM run on the **remote SSH server** that hosts the AI stack and repositories, not necessarily on the user’s Linux laptop. Access from the client may use **SSH local forwarding** or internal hostnames. See [deployment-target.md](./deployment-target.md). @@ -86,3 +96,7 @@ docker exec anythingllm sh -c 'curl -sS http://host.docker.internal:11434/api/ta ``` The last command must succeed after `OLLAMA_HOST=0.0.0.0:11434` and `host.docker.internal` are configured. + +## Public reverse proxy (ia.enso.4nkweb.com) + +When Ollama runs on a LAN host (e.g. `192.168.1.164`) and must be reached via the **proxy** with HTTPS and a **Bearer** gate (for clients such as Cursor that block private IPs), use the nginx snippets in `deploy/nginx/` and the steps in `deploy/nginx/README-ia-enso.md`. Cursor base URL: `https://ia.enso.4nkweb.com/ollama/v1`; API key must match the configured Bearer secret. Feature note: [ia-enso-nginx-proxy-ollama-anythingllm.md](./features/ia-enso-nginx-proxy-ollama-anythingllm.md).
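---

The map-file workflow in the README (copy the `.example`, set a long random secret) can be sketched end to end. The snippet below is a local illustration, not part of the patch: it recreates the example map content under `/tmp` (hypothetical paths; on the proxy the target would be `/etc/nginx/http-maps/ia-enso-ollama-bearer.map.conf`) and renders it with a generated secret:

```shell
#!/bin/sh
# Sketch only: render the Bearer map with a generated secret.
set -eu

# Same map body as ia-enso-ollama-bearer.map.conf.example in the patch.
cat > /tmp/ia-enso-bearer.map.example <<'EOF'
map $http_authorization $ia_enso_ollama_authorized {
    default 0;
    "Bearer CHANGE_ME_TO_LONG_RANDOM_SECRET" 1;
}
EOF

# 64 hex chars: ASCII, no double quotes, so it is safe inside the quoted map key.
SECRET=$(openssl rand -hex 32)

sed "s/CHANGE_ME_TO_LONG_RANDOM_SECRET/$SECRET/" \
    /tmp/ia-enso-bearer.map.example > /tmp/ia-enso-bearer.map.conf

# The rendered map must contain exactly one "Bearer <secret>" entry.
grep -c "\"Bearer $SECRET\" 1;" /tmp/ia-enso-bearer.map.conf  # → 1
```

Cursor's API key field then receives the value of `SECRET` verbatim (no `Bearer ` prefix), matching the comparison nginx performs against the full `Authorization` header value.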