Add ia.enso.4nkweb.com nginx proxy for Ollama and AnythingLLM

**Motivations:**
- Expose Ollama and AnythingLLM via HTTPS paths on the LAN proxy with Bearer auth for Ollama.

**Root causes:**
- Cursor blocks direct requests to private IPs (SSRF policy).

**Fixes:**
- N/A (new configuration artifacts).

**Enhancements:**
- Nginx site template, HTTP map for Bearer validation, websocket map example, deployment README, services doc link, feature documentation.

**Affected files:**
- deploy/nginx/http-maps/ia-enso-ollama-bearer.map.conf.example
- deploy/nginx/http-maps/websocket-connection.map.conf.example
- deploy/nginx/sites/ia.enso.4nkweb.com.conf
- deploy/nginx/README-ia-enso.md
- docs/features/ia-enso-nginx-proxy-ollama-anythingllm.md
- docs/services.md
Author: Nicolas Cantu, 2026-03-23 00:56:43 +01:00
Parent: 259fc62cc3 · Commit: 24077e749e
6 changed files with 227 additions and 0 deletions

`deploy/nginx/README-ia-enso.md`
@@ -0,0 +1,66 @@
# ia.enso.4nkweb.com — Nginx on the proxy (192.168.1.100)
Reverse proxy to `192.168.1.164`:
- `https://ia.enso.4nkweb.com/ollama/` → Ollama `11434` (Bearer gate, then `Authorization` cleared upstream).
- `https://ia.enso.4nkweb.com/anythingllm/` → AnythingLLM `3001`.
## 1. DNS and TLS
DNS must resolve `ia.enso.4nkweb.com` to the public entry that reaches this proxy. Issue a certificate, for example:
```bash
sudo certbot certonly --webroot -w /var/www/certbot -d ia.enso.4nkweb.com
```
Adjust `ssl_certificate` paths in `sites/ia.enso.4nkweb.com.conf` if the live directory name differs.
## 2. HTTP-level maps (required)
Copy the examples onto the proxy and include them **inside** `http { }`, **before** any `server` block that uses the variables.
From a checkout of this repository on the admin machine, run (paths are relative to the repository root):
```bash
sudo mkdir -p /etc/nginx/http-maps
sudo cp deploy/nginx/http-maps/ia-enso-ollama-bearer.map.conf.example /etc/nginx/http-maps/ia-enso-ollama-bearer.map.conf
sudo cp deploy/nginx/http-maps/websocket-connection.map.conf.example /etc/nginx/http-maps/websocket-connection.map.conf
sudo nano /etc/nginx/http-maps/ia-enso-ollama-bearer.map.conf # set the Bearer secret (single line value)
```
In `/etc/nginx/nginx.conf` (or a file already included from `http { }`), add the two includes. Include the websocket map **only if** `$connection_upgrade` is not already defined elsewhere (a second `map` defining the same variable fails `nginx -t`):
```nginx
include /etc/nginx/http-maps/websocket-connection.map.conf;
include /etc/nginx/http-maps/ia-enso-ollama-bearer.map.conf;
```
Do not commit the non-example `ia-enso-ollama-bearer.map.conf` with a real secret.
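A quick way to produce a suitable secret and a populated map file (a sketch; assumes `openssl` is installed, and writes to `/tmp` for review before you move the file into `/etc/nginx/http-maps/`):

```shell
# Generate a long random ASCII secret (64 hex chars, no quotes or spaces)
SECRET=$(openssl rand -hex 32)

# Write a populated map file to /tmp for review; move it to
# /etc/nginx/http-maps/ia-enso-ollama-bearer.map.conf afterwards
cat > /tmp/ia-enso-ollama-bearer.map.conf <<EOF
map \$http_authorization \$ia_enso_ollama_authorized {
    default 0;
    "Bearer ${SECRET}" 1;
}
EOF

# The same value is the API key to paste into Cursor (without "Bearer ")
echo "API key: ${SECRET}"
```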
## 3. Site file
```bash
sudo cp deploy/nginx/sites/ia.enso.4nkweb.com.conf /etc/nginx/sites-available/ia.enso.4nkweb.com.conf
sudo ln -sf /etc/nginx/sites-available/ia.enso.4nkweb.com.conf /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
```
## 4. Checks
```bash
curl -sS -o /dev/null -w "%{http_code}\n" -H "Authorization: Bearer CHANGE_ME_TO_LONG_RANDOM_SECRET" \
https://ia.enso.4nkweb.com/ollama/v1/models
```
Expect `200`. Without the header or with a wrong token, expect `401`.
AnythingLLM: open `https://ia.enso.4nkweb.com/anythingllm/` and use the **application** login. If static assets fail to load, verify upstream base-path settings for AnythingLLM or adjust proxy headers per upstream docs.
## 5. Cursor (OpenAI-compatible)
- Override base URL: `https://ia.enso.4nkweb.com/ollama/v1`
- API key: **exactly** the same string as in the map after `Bearer ` (Cursor sends `Authorization: Bearer <key>`; nginx compares the full `Authorization` value to `Bearer <secret>`).
## 6. Backend firewall
Allow from the proxy host only: TCP `11434` and `3001` on `192.168.1.164` if a host firewall is enabled.
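With `ufw` on the backend, for example (a sketch; adapt to whichever firewall is actually in use, and note it assumes the proxy host is `192.168.1.100` — earlier allow rules take precedence over the later deny rules):

```shell
# On 192.168.1.164: allow only the proxy to reach the service ports
sudo ufw allow from 192.168.1.100 to any port 11434 proto tcp
sudo ufw allow from 192.168.1.100 to any port 3001 proto tcp
# Block those ports for every other source
sudo ufw deny 11434/tcp
sudo ufw deny 3001/tcp
```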

`deploy/nginx/http-maps/ia-enso-ollama-bearer.map.conf.example`
@@ -0,0 +1,10 @@
# Install on the proxy inside `http { ... }` (before any server that uses $ia_enso_ollama_authorized):
# include /etc/nginx/http-maps/ia-enso-ollama-bearer.map.conf;
#
# Copy this file without the .example suffix, set a long random Bearer secret (ASCII, no double quotes).
# Cursor / OpenAI-compatible clients: Base URL .../ollama/v1 and API Key = same secret (no "Bearer " prefix).
map $http_authorization $ia_enso_ollama_authorized {
default 0;
"Bearer CHANGE_ME_TO_LONG_RANDOM_SECRET" 1;
}

`deploy/nginx/http-maps/websocket-connection.map.conf.example`
@@ -0,0 +1,7 @@
# Place inside `http { ... }` on the proxy (once per nginx instance), e.g.:
# include /etc/nginx/http-maps/websocket-connection.map.conf;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
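The map reads as: any non-empty `Upgrade` request header yields `Connection: upgrade`, while its absence (the empty string) yields `Connection: close`. A shell sketch of the same selection logic:

```shell
# Mirrors the nginx map: non-empty $http_upgrade -> "upgrade", empty -> "close"
connection_for() {
    if [ -n "$1" ]; then echo "upgrade"; else echo "close"; fi
}

connection_for "websocket"   # prints: upgrade
connection_for ""            # prints: close
```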

`deploy/nginx/sites/ia.enso.4nkweb.com.conf`
@@ -0,0 +1,91 @@
# ia.enso.4nkweb.com — reverse proxy to LAN host (Ollama + AnythingLLM).
#
# Prerequisites on the proxy host:
# - TLS certificate for ia.enso.4nkweb.com (e.g. certbot).
# - In the main nginx `http { }` block, include the Bearer map (see http-maps/ia-enso-ollama-bearer.map.conf.example).
#
# Upstream: adjust IA_ENSO_BACKEND_IP if the AI host IP changes.
upstream ia_enso_ollama {
server 192.168.1.164:11434;
keepalive 8;
}
upstream ia_enso_anythingllm {
server 192.168.1.164:3001;
keepalive 8;
}
server {
listen 80;
server_name ia.enso.4nkweb.com;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
http2 on;
server_name ia.enso.4nkweb.com;
ssl_certificate /etc/letsencrypt/live/ia.enso.4nkweb.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/ia.enso.4nkweb.com/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
client_max_body_size 100M;
# Ollama OpenAI-compatible API: require Authorization: Bearer <shared secret> (see map file).
location /ollama/ {
if ($ia_enso_ollama_authorized = 0) {
return 401;
}
proxy_pass http://ia_enso_ollama/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Connection "";
# Ollama does not need the client Bearer; avoids passing the gate secret downstream.
proxy_set_header Authorization "";
proxy_buffering off;
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
}
# AnythingLLM UI + API (application login). Subpath stripped when forwarding.
location /anythingllm/ {
proxy_pass http://ia_enso_anythingllm/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Prefix /anythingllm;
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
}
location = /anythingllm {
return 301 https://$host/anythingllm/;
}
}
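The trailing slash on both the `location` prefix and the `proxy_pass` URL is what strips the public prefix before forwarding. As an illustration of the rewriting nginx performs here:

```nginx
# With: location /ollama/ { proxy_pass http://ia_enso_ollama/; }
#   a request for   /ollama/v1/models
#   is forwarded as /v1/models
# (the matched /ollama/ prefix is replaced by the upstream URI "/").
# Dropping either trailing slash changes this mapping, so keep both.
```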

`docs/features/ia-enso-nginx-proxy-ollama-anythingllm.md`
@@ -0,0 +1,39 @@
# Feature: Reverse proxy ia.enso.4nkweb.com for Ollama and AnythingLLM
**Author:** 4NK team
## Objective
Expose Ollama and AnythingLLM on the public proxy hostname with HTTPS, path prefixes `/ollama` and `/anythingllm`, and **gate Ollama** with a **Bearer token** checked at the proxy (compatible with Cursor's OpenAI-compatible base URL + API key).
## Impacts
- **Proxy (nginx):** new `server_name`, TLS, locations, HTTP `map` for Bearer validation; optional new includes under `/etc/nginx/http-maps/`.
- **Backend (192.168.1.164):** must accept connections from the proxy on `11434` and `3001`; Ollama must not rely on the client `Authorization` header (nginx clears it after validation).
- **Clients:** Cursor uses `https://ia.enso.4nkweb.com/ollama/v1` and the shared secret as API key; avoids private-IP SSRF blocks in Cursor when the hostname resolves publicly.
## Modifications (repository)
- `deploy/nginx/http-maps/ia-enso-ollama-bearer.map.conf.example` — `map` defining `$ia_enso_ollama_authorized`.
- `deploy/nginx/http-maps/websocket-connection.map.conf.example` — `map` defining `$connection_upgrade` (AnythingLLM WebSocket).
- `deploy/nginx/sites/ia.enso.4nkweb.com.conf` — `server` blocks and upstreams.
- `deploy/nginx/README-ia-enso.md` — installation and verification on the proxy.
## Deployment steps
1. DNS for `ia.enso.4nkweb.com` points to the proxy entry used for HTTPS.
2. Obtain TLS certificates (e.g. certbot) for that name.
3. Install map files under `/etc/nginx/http-maps/`, set the Bearer secret, include maps inside `http { }`.
4. Install the site file under `sites-available` / `sites-enabled`, `nginx -t`, reload nginx.
5. Restrict backend ports at the firewall to the proxy source where applicable.
## Verification steps
- `curl` to `/ollama/v1/models` with and without `Authorization: Bearer <secret>` (expect 200 / 401).
- Browser access to `/anythingllm/` and application login.
- Cursor connectivity after the configuration change (no `ssrf_blocked` if the hostname resolves to a public IP from Cursor's perspective).
## Security notes
- The Bearer secret is equivalent to an API key; rotate by updating the map file and client configs together.
- AnythingLLM remains protected by **its own** application authentication; the `/anythingllm` location does not add the Ollama Bearer gate.
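Rotation can be scripted; a minimal sketch (shown against a copy in `/tmp` — on the proxy the real file is `/etc/nginx/http-maps/ia-enso-ollama-bearer.map.conf`, and the rotation must be followed by `sudo nginx -t && sudo systemctl reload nginx` plus updating client API keys):

```shell
MAP=/tmp/ia-enso-ollama-bearer.map.conf
# Stand-in for the deployed map file (hypothetical old secret)
printf 'map $http_authorization $ia_enso_ollama_authorized {\n    default 0;\n    "Bearer OLD_SECRET" 1;\n}\n' > "$MAP"

# Swap in a fresh secret in place (GNU sed)
NEW_SECRET=$(openssl rand -hex 32)
sed -i "s|\"Bearer [^\"]*\"|\"Bearer ${NEW_SECRET}\"|" "$MAP"

echo "New API key for clients: ${NEW_SECRET}"
```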

`docs/services.md`
@@ -1,5 +1,15 @@
# Services
## Systemd (local host)
- **Ollama:** `ollama.service` (official installer). Optional drop-in `OLLAMA_HOST=0.0.0.0:11434` for Docker — see `configure-ollama-for-docker.sh` and [systemd/README.md](../systemd/README.md).
- **AnythingLLM:** `anythingllm.service` — Docker container managed by systemd. Install: `sudo ./scripts/install-systemd-services.sh`. Config: `/etc/default/anythingllm` (template `systemd/anythingllm.default`).
```bash
sudo systemctl restart ollama anythingllm
sudo systemctl status ollama anythingllm
```
## Where these services run (first deployment)
For the **first deployment target**, Ollama and AnythingLLM run on the **remote SSH server** that hosts the AI stack and repositories, not necessarily on the user's Linux laptop. Access from the client may use **SSH local forwarding** or internal hostnames. See [deployment-target.md](./deployment-target.md).
@@ -86,3 +96,7 @@ docker exec anythingllm sh -c 'curl -sS http://host.docker.internal:11434/api/tags'
```
The last command must succeed after `OLLAMA_HOST=0.0.0.0:11434` and `host.docker.internal` are configured.
## Public reverse proxy (ia.enso.4nkweb.com)
When Ollama runs on a LAN host (e.g. `192.168.1.164`) and must be reached via the **proxy** with HTTPS and a **Bearer** gate (for clients such as Cursor that block private IPs), use the nginx snippets in `deploy/nginx/` and the steps in `deploy/nginx/README-ia-enso.md`. Cursor base URL: `https://ia.enso.4nkweb.com/ollama/v1`; API key must match the configured Bearer secret. Feature note: [ia-enso-nginx-proxy-ollama-anythingllm.md](./features/ia-enso-nginx-proxy-ollama-anythingllm.md).