Remove nginx Bearer auth from ia.enso /ollama by default

**Motivations:**
- Simplify Cursor and custom clients; the nginx Bearer secret was confused with the Cursor user API key.

**Root causes:**
- N/A.

**Fixes:**
- Drop the `if` map check and `Authorization` stripping on `/ollama/`; the deploy script no longer emits the Bearer map.

**Evolutions:**
- Optional Bearer documented in http-maps example; README/services/feature/infrastructure updated; proxy redeployed.

**Affected files:**
- deploy/nginx/sites/ia.enso.4nkweb.com.conf
- deploy/nginx/deploy-ia-enso-to-proxy.sh
- deploy/nginx/README-ia-enso.md
- deploy/nginx/http-maps/ia-enso-ollama-bearer.map.conf.example
- docs/features/ia-enso-nginx-proxy-ollama-anythingllm.md
- docs/services.md
- docs/infrastructure.md
This commit is contained in:
Nicolas Cantu 2026-03-23 07:45:35 +01:00
parent dfc978deef
commit c13ce79696
7 changed files with 51 additions and 115 deletions

View File

@ -10,11 +10,11 @@ Reverse TLS to the LAN host **`192.168.1.164`** (Ollama + AnythingLLM; IP su
| **Ollama** native API (e.g. model list) | `https://ia.enso.4nkweb.com/ollama/api/tags` |
| **Ollama** OpenAI-compatible API (Cursor, etc.) | base URL `https://ia.enso.4nkweb.com/ollama/v1` — e.g. `https://ia.enso.4nkweb.com/ollama/v1/models` |
**nginx Bearer**: everything under `/ollama/` requires `Authorization: Bearer <secret>` (unless you change the `map`). The secret is **not** forwarded downstream to Ollama. AnythingLLM under `/anythingllm/` uses **application** auth, not this Bearer.
**Security:** by default `/ollama/` has **no** nginx Bearer gate: anyone who can reach the URL can use the Ollama API. Restrict via **firewall** (proxy IP only toward `.164`) or re-add a Bearer `map` (see `http-maps/ia-enso-ollama-bearer.map.conf.example`). AnythingLLM under `/anythingllm/` stays behind its **application login**.
| Path (relative) | Backend | LAN port | Protection |
|-----------------|---------|----------|------------|
| `/ollama/` | Ollama | `11434` | nginx **Bearer**, then `Authorization` cleared toward Ollama |
| `/ollama/` | Ollama | `11434` | No nginx auth (Ollama has no key by default) |
| `/anythingllm/` | AnythingLLM | `3001` | AnythingLLM login |
**Cursor context:** a private-IP URL (e.g. `http://192.168.1.164:11434`) may be rejected by Cursor (`ssrf_blocked`). A public HTTPS **hostname** pointing at the proxy avoids this block, as long as the DNS name does not resolve to an RFC1918 IP from the Internet.
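That RFC1918 condition can be checked before configuring Cursor. A minimal sketch; `is_rfc1918` is a hypothetical helper, not part of the deploy tooling:

```shell
#!/usr/bin/env bash
# is_rfc1918 <ipv4>: returns 0 if the address is in a private range
# (10/8, 172.16/12, 192.168/16), 1 otherwise.
# Hypothetical helper, not part of deploy-ia-enso-to-proxy.sh.
is_rfc1918() {
  local ip="$1"
  case "$ip" in
    10.*) return 0 ;;
    192.168.*) return 0 ;;
    172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) return 0 ;;
    *) return 1 ;;
  esac
}

# Example: check what the public hostname resolves to before pointing Cursor at it.
# resolved="$(dig +short ia.enso.4nkweb.com | head -n1)"
# is_rfc1918 "$resolved" && echo "private IP: Cursor may answer ssrf_blocked"
```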
@ -28,14 +28,11 @@ Reverse TLS to the LAN host **`192.168.1.164`** (Ollama + AnythingLLM; IP su
From the root of the **`smart_ide`** repository, on a machine with SSH access to the bastion and then the proxy:
```bash
export IA_ENSO_OLLAMA_BEARER_TOKEN='secret-long-ascii-sans-guillemets-ni-backslash'
# direct LAN access to the proxy (.100), without the bastion (empty variable = no ProxyJump):
# export DEPLOY_SSH_PROXY_HOST=
./deploy/nginx/deploy-ia-enso-to-proxy.sh
```
If `IA_ENSO_OLLAMA_BEARER_TOKEN` is unset, the script generates a hex token (displayed once) to keep for Cursor.
### Prerequisites on the proxy
- `http { include /etc/nginx/conf.d/*.conf; ... }` in `/etc/nginx/nginx.conf` (otherwise the script fails with an explicit message).
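That include prerequisite can be checked up front with a small grep. A sketch only; `check_confd_include` is a hypothetical helper and the script's own check is authoritative:

```shell
# Check that nginx.conf includes conf.d/*.conf (where the script drops its maps file).
# Hypothetical helper; the deploy script performs its own, more explicit check.
check_confd_include() {
  local conf="$1"
  grep -Eq '^[[:space:]]*include[[:space:]]+/etc/nginx/conf\.d/\*\.conf;' "$conf"
}

# Usage (on the proxy):
# check_confd_include /etc/nginx/nginx.conf || echo "missing conf.d include"
```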
@ -61,17 +58,16 @@ sudo certbot certonly --webroot -w /var/www/certbot -d ia.enso.4nkweb.com --non-
| Path on the proxy | Role |
|-------------------|------|
| `/etc/nginx/conf.d/ia-enso-http-maps.conf` | Bearer `map` (`$ia_enso_ollama_authorized`) and, if needed, WebSocket `map` (`$connection_upgrade`) |
| `/etc/nginx/conf.d/ia-enso-http-maps.conf` | WebSocket `map` `$connection_upgrade` (or a stub file if a duplicate exists elsewhere) |
| `/etc/nginx/sites-available/ia.enso.4nkweb.com.conf` | HTTP→HTTPS + HTTPS `server` blocks |
| `sites-enabled/ia.enso.4nkweb.com.conf` symlink | Enables the vhost |
If `nginx -t` fails because of a **duplicate** `map $http_upgrade $connection_upgrade` already present elsewhere, the script retries with the **Bearer map only** in `ia-enso-http-maps.conf`.
If `nginx -t` fails because of a **duplicate** `map $http_upgrade $connection_upgrade`, the script retries with a commented **stub** in place of the `map`.
### Script environment variables
| Variable | Default | Role |
|----------|---------|------|
| `IA_ENSO_OLLAMA_BEARER_TOKEN` | generated | Secret for `Authorization: Bearer …` |
| `IA_ENSO_SSH_KEY` | `~/.ssh/id_ed25519` | SSH private key |
| `IA_ENSO_PROXY_USER` | `ncantu` | SSH user on the proxy |
| `IA_ENSO_PROXY_HOST` | `192.168.1.100` | SSH target (LAN IP or hostname) |
@ -95,23 +91,11 @@ sudo certbot certonly --webroot -w /var/www/certbot -d ia.enso.4nkweb.com
In `sites/ia.enso.4nkweb.com.conf`, adjust the `ssl_certificate` / `ssl_certificate_key` directives if the `live/` directory differs.
### 2. HTTP maps (`$ia_enso_ollama_authorized`, WebSocket)
### 2. HTTP maps (WebSocket; optional Bearer)
**Option A — single file under `conf.d` (equivalent to the script)**
Create `/etc/nginx/conf.d/ia-enso-http-maps.conf` from the content generated by the script, or by combining:
**WebSocket (AnythingLLM)**: if `$connection_upgrade` does not already exist in the instance, include `http-maps/websocket-connection.map.conf.example` in `http { }`, or use the `ia-enso-http-maps.conf` file deployed by the script.
- `http-maps/websocket-connection.map.conf.example` (only if `$connection_upgrade` does not already exist in the instance),
- and a `map $http_authorization $ia_enso_ollama_authorized { ... "Bearer <secret>" 1; }`.
**Option B — separate files under `/etc/nginx/http-maps/`**
Copy the `.example` files without the suffix, edit the Bearer secret, then in `http { }`:
```nginx
include /etc/nginx/http-maps/websocket-connection.map.conf;
include /etc/nginx/http-maps/ia-enso-ollama-bearer.map.conf;
```
Do not commit a file containing the real secret.
**Bearer on `/ollama/` (optional)**: to re-enable an nginx gate, add the `map` from `http-maps/ia-enso-ollama-bearer.map.conf.example` and, in `location /ollama/`, an `if ($ia_enso_ollama_authorized = 0) { return 401; }` (plus `map_hash_bucket_size 256` for long secrets). Do not commit the secret.
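Combined, re-enabling that gate looks roughly like the following. A sketch under the variable and upstream names used in this repo's files; `<secret>` is a placeholder and the surrounding `location` directives are abbreviated:

```nginx
# In http { }: map the Authorization header to an allow flag.
map_hash_bucket_size 256;                # needed for long secrets (e.g. hex tokens)
map $http_authorization $ia_enso_ollama_authorized {
    default 0;
    "Bearer <secret>" 1;                 # replace <secret>; no double quotes or backslashes
}

# In the server block:
location /ollama/ {
    if ($ia_enso_ollama_authorized = 0) { return 401; }
    proxy_pass http://ia_enso_ollama/;
    proxy_set_header Authorization "";   # do not forward the gate secret to Ollama
    # ... other proxy_* directives as in sites/ia.enso.4nkweb.com.conf
}
```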
### 3. `server` file
@ -135,18 +119,11 @@ sudo nginx -t && sudo systemctl reload nginx
### API Ollama via le proxy
```bash
curl -sS -o /dev/null -w "%{http_code}\n" \
-H "Authorization: Bearer <secret>" \
https://ia.enso.4nkweb.com/ollama/v1/models
curl -sS -o /dev/null -w "%{http_code}\n" https://ia.enso.4nkweb.com/ollama/v1/models
curl -sS -o /dev/null -w "%{http_code}\n" https://ia.enso.4nkweb.com/ollama/api/tags
```
Expected: **200** with the right secret; **401** with a missing header or wrong secret.
Native Ollama API (same Bearer):
```bash
curl -sS -H "Authorization: Bearer <secret>" https://ia.enso.4nkweb.com/ollama/api/tags
```
Expected: **200** with no `Authorization` header (default config without nginx Bearer).
### AnythingLLM
@ -156,9 +133,9 @@ If static assets fail, check the upstream doc (sub-path, hea
### Cursor
- OpenAI base URL: `https://ia.enso.4nkweb.com/ollama/v1`
- API key: **identical** to the Bearer secret from the nginx `map` (no `Bearer ` prefix in the field; Cursor sends `Authorization: Bearer <key>`).
- API key: leave it empty, or use a dummy value if Cursor requires one (no nginx Bearer on `/ollama/` by default anymore).
If **`curl`** to `/ollama/v1/models` or `/ollama/api/tags` with this Bearer returns **200** but Cursor shows **`ERROR_BAD_USER_API_KEY` / `Unauthorized User API key`**, the failure comes **from the Cursor client** (validation or routing through Cursor's infrastructure), not from the proxy. Cases reported on the Cursor forum: [Unauthorized User API key with custom openai api key/url](https://forum.cursor.com/t/unauthorized-user-api-key-with-custom-openai-api-key-url/132572). Check the Cursor version, privacy mode / account type, and threads about the OpenAI URL override.
If **`curl`** to `/ollama/v1/models` returns **200** but Cursor shows **`ERROR_BAD_USER_API_KEY`**, the failure comes **from the Cursor client** (validation / Cursor infrastructure), not from the proxy: [forum](https://forum.cursor.com/t/unauthorized-user-api-key-with-custom-openai-api-key-url/132572).
---
@ -168,21 +145,13 @@ On **`192.168.1.164`**, allow **11434** and **3001** TCP only from **19
---
## Bearer secret rotation
1. Update the `"Bearer …"` line in `/etc/nginx/conf.d/ia-enso-http-maps.conf` (or the equivalent manual `map` file).
2. `sudo nginx -t && sudo systemctl reload nginx`.
3. Update the API key in Cursor (and any other client).
---
## Troubleshooting
| Symptom | Lead |
|---------|------|
| `nginx -t` error on `connection_upgrade` | Duplicate `map $http_upgrade $connection_upgrade`: remove one of the blocks, or install only the Bearer `map`. |
| `could not build map_hash` / `map_hash_bucket_size` | Bearer secret too long for the default value; the script's `ia-enso-http-maps.conf` includes `map_hash_bucket_size 256;`: update the deployment or add this directive in `http { }`. |
| `401` on `/ollama/` | Secret differs between client and `map`; `Authorization` header missing or malformed (`Bearer ` + exact secret). |
| `nginx -t` error on `connection_upgrade` | Duplicate `map $http_upgrade $connection_upgrade`: remove one of the blocks, or keep the script's stub. |
| `could not build map_hash` / `map_hash_bucket_size` | Only if you re-enable a Bearer `map` with a very long secret; add `map_hash_bucket_size 256;` in `http { }`. |
| `401` on `/ollama/` | nginx Bearer re-enabled manually: align client and `map`, or disable the gate. |
| `502` / timeout | Ollama or AnythingLLM stopped on the backend; firewall; wrong IP in `upstream` (check `grep server /etc/nginx/sites-available/ia.enso.4nkweb.com.conf` on the proxy; redeploy with `IA_ENSO_BACKEND_IP=192.168.1.164`). |
| SSL error / `cannot load certificate` | Certificate missing: run certbot on the proxy for `ia.enso.4nkweb.com`, or adjust the `ssl_certificate` paths in the site file. |
| Cursor `ssrf_blocked` | The host still resolves to a private IP on Cursor's infrastructure side; check public DNS / NAT. |
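The firewall restriction this README keeps recommending can be sketched as below. Assumptions: `ufw` on the backend host `192.168.1.164`, and a hypothetical helper name; commands are printed for review, not executed:

```shell
# Print ufw rules restricting the backend ports to the proxy only.
# Sketch: assumes ufw on 192.168.1.164; review the output before running it with sudo.
# ufw evaluates rules in order, so the allow must be added before the deny.
print_backend_rules() {
  local proxy_ip="$1"; shift
  local port
  for port in "$@"; do
    echo "ufw allow from ${proxy_ip} to any port ${port} proto tcp"
    echo "ufw deny ${port}/tcp"
  done
}

# Usage:
# print_backend_rules 192.168.1.100 11434 3001   # then pipe to 'sudo sh' if correct
```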

View File

@ -4,7 +4,6 @@
# Requires passwordless sudo for nginx on the proxy host.
#
# Environment:
# IA_ENSO_OLLAMA_BEARER_TOKEN Bearer secret for /ollama (if unset, openssl rand -hex 32).
# IA_ENSO_SSH_KEY SSH private key (default: ~/.ssh/id_ed25519).
# IA_ENSO_PROXY_USER SSH user on proxy (default: ncantu).
# IA_ENSO_PROXY_HOST Proxy IP or hostname (default: 192.168.1.100).
@ -31,7 +30,6 @@ IA_ENSO_PROXY_USER="${IA_ENSO_PROXY_USER:-ncantu}"
IA_ENSO_PROXY_HOST="${IA_ENSO_PROXY_HOST:-192.168.1.100}"
IA_ENSO_BACKEND_IP="${IA_ENSO_BACKEND_IP:-192.168.1.164}"
DEPLOY_SSH_PROXY_USER="${DEPLOY_SSH_PROXY_USER:-$IA_ENSO_PROXY_USER}"
# ${VAR:-default} treats empty VAR as unset, so DEPLOY_SSH_PROXY_HOST= would wrongly become the bastion.
if [[ ! -v DEPLOY_SSH_PROXY_HOST ]]; then
export DEPLOY_SSH_PROXY_HOST='4nk.myftp.biz'
elif [[ -z "$DEPLOY_SSH_PROXY_HOST" ]]; then
@ -39,19 +37,6 @@ elif [[ -z "$DEPLOY_SSH_PROXY_HOST" ]]; then
fi
export DEPLOY_SSH_PROXY_USER
TOKEN="${IA_ENSO_OLLAMA_BEARER_TOKEN:-}"
if [[ -z "$TOKEN" ]]; then
TOKEN="$(openssl rand -hex 32)"
echo "IA_ENSO_OLLAMA_BEARER_TOKEN was unset; generated token (store for Cursor API key):"
echo "$TOKEN"
echo "---"
fi
if [[ "$TOKEN" == *'"'* ]] || [[ "$TOKEN" == *'\'* ]]; then
echo "Token must not contain double quotes or backslashes." >&2
exit 1
fi
if [[ ! "$IA_ENSO_BACKEND_IP" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "IA_ENSO_BACKEND_IP must be an IPv4 address (got: ${IA_ENSO_BACKEND_IP})" >&2
exit 1
@ -60,26 +45,18 @@ fi
write_maps_file() {
local path="$1"
local with_websocket="$2"
{
cat <<'HASHOF'
# Long Bearer keys (e.g. openssl rand -hex 32) exceed default map_hash buckets.
map_hash_bucket_size 256;
HASHOF
if [[ "$with_websocket" == "1" ]]; then
cat <<'MAPEOF'
cat <<'MAPEOF' >"$path"
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
MAPEOF
else
cat <<'STUB' >"$path"
# ia-enso: $connection_upgrade is defined in another conf.d file; no duplicate map here.
STUB
fi
cat <<MAPEOF
map \$http_authorization \$ia_enso_ollama_authorized {
default 0;
"Bearer ${TOKEN}" 1;
}
MAPEOF
} >"$path"
}
TMP_DIR="$(mktemp -d)"
@ -115,15 +92,15 @@ REMOTE
echo "Deploying ia.enso upstreams to ${IA_ENSO_BACKEND_IP} (Ollama :11434, AnythingLLM :3001)."
if ! try_install 1; then
echo "Retrying with Bearer map only (websocket map likely already defined on proxy)..."
echo "Retrying with stub maps file (websocket map likely already defined on proxy)..."
if ! try_install 0; then
echo "Deploy failed (SSH, sudo, nginx -t, or missing include /etc/nginx/conf.d/*.conf)." >&2
echo "Re-run from a host with SSH access to the proxy (LAN direct: DEPLOY_SSH_PROXY_HOST=); reuse token with IA_ENSO_OLLAMA_BEARER_TOKEN if needed." >&2
echo "Re-run from a host with SSH access to the proxy (LAN direct: DEPLOY_SSH_PROXY_HOST=)." >&2
exit 1
fi
fi
echo "Done. Public URLs:"
echo "Done. Public URLs (no nginx Bearer on /ollama/):"
echo " AnythingLLM: https://ia.enso.4nkweb.com/anythingllm/"
echo " Ollama API: https://ia.enso.4nkweb.com/ollama/api/tags (native) — Bearer required"
echo " Cursor/OpenAI base: https://ia.enso.4nkweb.com/ollama/v1 — API key = Bearer secret (see token above if generated)."
echo " Ollama native: https://ia.enso.4nkweb.com/ollama/api/tags"
echo " OpenAI-compat: https://ia.enso.4nkweb.com/ollama/v1"

View File

@ -1,10 +1,8 @@
# Install on the proxy inside `http { ... }` (before any server that uses $ia_enso_ollama_authorized):
# include /etc/nginx/http-maps/ia-enso-ollama-bearer.map.conf;
# OPTIONAL: Bearer gate on /ollama/ (default repo site has no nginx auth on Ollama).
# Install inside `http { ... }` before server blocks that use $ia_enso_ollama_authorized, and add to
# location /ollama/ { if ($ia_enso_ollama_authorized = 0) { return 401; } ... }
#
# Copy this file without the .example suffix, set a long random Bearer secret (ASCII, no double quotes).
# Cursor / OpenAI-compatible clients: Base URL .../ollama/v1 and API Key = same secret (no "Bearer " prefix).
#
# Required for long Bearer strings (e.g. hex tokens); omit only if nginx already sets this in http {}.
# Copy without the .example suffix, set secret (ASCII, no double quotes in value).
map_hash_bucket_size 256;
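Filling in the secret after copying the `.example` file can be scripted. A sketch: `set_bearer_secret` is a hypothetical helper, and it assumes the map line has the form `"Bearer …" 1;`; `openssl rand -hex 32` matches the token shape the deploy script used to generate:

```shell
# Generate a secret and substitute it into the copied map file.
# Hypothetical helper; <map-file> is the copy of the .example without the suffix.
set_bearer_secret() {
  local map_file="$1"
  local token
  token="$(openssl rand -hex 32)"
  # Hex tokens contain no quotes or backslashes, so a plain sed substitution is safe.
  sed -i "s|Bearer [^\"]*|Bearer ${token}|" "$map_file"
  echo "$token"   # store this value as the Cursor API key
}
```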

View File

@ -4,11 +4,12 @@
# AnythingLLM UI: https://ia.enso.4nkweb.com/anythingllm/
# Ollama OpenAI API: https://ia.enso.4nkweb.com/ollama/v1/ (e.g. .../v1/models, .../v1/chat/completions)
# Ollama native API: https://ia.enso.4nkweb.com/ollama/api/tags (and other /api/* paths)
# /ollama/* requires Authorization: Bearer <secret> at nginx (see map); Cursor base URL: .../ollama/v1
# /ollama/* has NO nginx Bearer gate (public inference if DNS is reachable); restrict at firewall or re-add map.
# Cursor base URL: https://ia.enso.4nkweb.com/ollama/v1
#
# Prerequisites on the proxy host:
# - TLS certificate for ia.enso.4nkweb.com (e.g. certbot).
# - In the main nginx `http { }` block, include the Bearer map (see http-maps/ia-enso-ollama-bearer.map.conf.example).
# - Optional: include http-maps/websocket-connection.map.conf.example in http { } if not using deploy script maps file.
#
# Upstream backend: replaced at deploy time (default 192.168.1.164). Manual install: replace __IA_ENSO_BACKEND_IP__.
@ -53,12 +54,8 @@ server {
client_max_body_size 100M;
# Ollama OpenAI-compatible API: require Authorization: Bearer <shared secret> (see map file).
# Ollama: no nginx auth (Ollama itself does not enforce API keys by default).
location /ollama/ {
if ($ia_enso_ollama_authorized = 0) {
return 401;
}
proxy_pass http://ia_enso_ollama/;
proxy_http_version 1.1;
proxy_set_header Host $host;
@ -67,9 +64,6 @@ server {
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Connection "";
# Ollama does not need the client Bearer; avoids passing the gate secret downstream.
proxy_set_header Authorization "";
proxy_buffering off;
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;

View File

@ -4,47 +4,46 @@
## Objective
Expose Ollama and AnythingLLM on the public proxy hostname with HTTPS, path prefixes `/ollama` and `/anythingllm`, and **gate Ollama** with a **Bearer token** checked at the proxy (compatible with Cursor's OpenAI base URL + API key).
Expose Ollama and AnythingLLM on the public proxy hostname with HTTPS, path prefixes `/ollama` and `/anythingllm`. **Default:** no nginx Bearer on `/ollama/` (optional `map` in `http-maps/ia-enso-ollama-bearer.map.conf.example` to re-enable).
## Public URLs (HTTPS)
- AnythingLLM UI: `https://ia.enso.4nkweb.com/anythingllm/`
- Ollama native API (example): `https://ia.enso.4nkweb.com/ollama/api/tags` → `Authorization: Bearer <secret>` at nginx
- Ollama native API (example): `https://ia.enso.4nkweb.com/ollama/api/tags`
- OpenAI-compatible base (Cursor): `https://ia.enso.4nkweb.com/ollama/v1`
## Impacts
- **Proxy (nginx):** new `server_name`, TLS, locations, HTTP `map` for Bearer validation; maps deployed under `/etc/nginx/conf.d/` when using the provided script.
- **Backend (192.168.1.164):** must accept connections from the proxy on `11434` and `3001`; Ollama must not rely on the client `Authorization` header (nginx clears it after validation).
- **Clients:** Cursor uses `https://ia.enso.4nkweb.com/ollama/v1` and the shared secret as API key; avoids private-IP SSRF blocks in Cursor when the hostname resolves publicly from the client infrastructure.
- **Proxy (nginx):** `server_name`, TLS, locations; `conf.d/ia-enso-http-maps.conf` holds WebSocket `map` when deployed by script (or stub if duplicate elsewhere).
- **Backend (192.168.1.164):** must accept connections from the proxy on `11434` and `3001`.
- **Clients:** Cursor can use `https://ia.enso.4nkweb.com/ollama/v1` without a matching nginx secret if Bearer is disabled; hostname may avoid private-IP SSRF blocks when DNS resolves publicly.
## Repository layout
| Path | Purpose |
|------|---------|
| `deploy/nginx/sites/ia.enso.4nkweb.com.conf` | `server` blocks; upstreams use `__IA_ENSO_BACKEND_IP__` (default `192.168.1.164`, substituted by `deploy-ia-enso-to-proxy.sh` or a manual `sed`) |
| `deploy/nginx/http-maps/ia-enso-ollama-bearer.map.conf.example` | Example Bearer `map` (manual install) |
| `deploy/nginx/http-maps/ia-enso-ollama-bearer.map.conf.example` | **Optional** Bearer `map` + `location /ollama/` `if` to re-enable auth |
| `deploy/nginx/http-maps/websocket-connection.map.conf.example` | Example WebSocket `map` (manual install) |
| `deploy/nginx/deploy-ia-enso-to-proxy.sh` | SSH deploy: maps + site, `nginx -t`, reload; Bearer-only retry if websocket `map` already exists |
| `deploy/nginx/deploy-ia-enso-to-proxy.sh` | SSH deploy: maps + site, `nginx -t`, reload; stub retry if websocket `map` already exists |
| `deploy/nginx/sites/ia.enso.4nkweb.com.http-only.conf` | Temporary HTTP-only vhost for the first Let's Encrypt `webroot` issuance when `live/ia.enso…` is missing |
| `deploy/nginx/README-ia-enso.md` | **Operator reference:** automated + manual steps, env vars, checks, troubleshooting, TLS bootstrap |
## Deployment modalities
**Preferred:** run `./deploy/nginx/deploy-ia-enso-to-proxy.sh` from `smart_ide` on a host with SSH access (see `README-ia-enso.md` for prerequisites and environment variables).
**Preferred:** run `./deploy/nginx/deploy-ia-enso-to-proxy.sh` from `smart_ide` on a host with SSH access (see `README-ia-enso.md`).
**Manual:** DNS → TLS (certbot) → install `map` directives inside `http { }` (via `conf.d` or `http-maps` includes) → install the site under `sites-available` / `sites-enabled` → `nginx -t` → reload. Details: `deploy/nginx/README-ia-enso.md`.
**Manual:** DNS → TLS (certbot) → WebSocket `map` if needed → install the site → `nginx -t` → reload. Details: `deploy/nginx/README-ia-enso.md`.
Restrict backend ports on `192.168.1.164` to the proxy source where a host firewall is used.
## Analysis modalities
- `curl` to `/ollama/v1/models` with and without `Authorization: Bearer <secret>` (expect 200 / 401).
- `curl` to `/ollama/v1/models` and `/ollama/api/tags` without `Authorization` (expect **200** when Bearer is off).
- Browser access to `/anythingllm/` and application login.
- Cursor connectivity after configuration (no `ssrf_blocked` if the hostname does not resolve to a blocked private IP from Cursor's perspective).
- Cursor connectivity; `ERROR_BAD_USER_API_KEY` may still be a Cursor client issue (see README forum link).
## Security notes
- The Bearer secret is equivalent to an API key; rotate by updating the `map` file and client configs together.
- AnythingLLM remains protected by **its own** application authentication; the `/anythingllm` location does not add the Ollama Bearer gate.
- A public URL for `/ollama` exposes the inference endpoint to anyone who knows the secret; combine with network controls if required.
- **Default `/ollama/` is unauthenticated at nginx:** anyone who can reach the URL can call Ollama unless restricted by firewall or Ollama-level controls. Re-add Bearer using the example `map` if needed.
- AnythingLLM remains protected by **its own** application authentication.

View File

@ -27,7 +27,7 @@ Internet access to backends uses **SSH ProxyJump** via `ncantu@4nk.myftp.biz` (s
## Reverse proxy `ia.enso.4nkweb.com` (Ollama / AnythingLLM)
TLS hostname on the **proxy** `192.168.1.100`: prefixes `/ollama` and `/anythingllm` to the LAN host `192.168.1.164` (ports `11434` and `3001`, see `deploy/nginx/sites/ia.enso.4nkweb.com.conf`). Ollama gated by an nginx **Bearer**; AnythingLLM stays behind its application auth.
TLS hostname on the **proxy** `192.168.1.100`: prefixes `/ollama` and `/anythingllm` to the LAN host `192.168.1.164` (ports `11434` and `3001`, see `deploy/nginx/sites/ia.enso.4nkweb.com.conf`). **`/ollama/`** has no nginx Bearer gate by default (option documented in `deploy/nginx/http-maps/`); AnythingLLM stays behind its application auth.
Operational documentation: [deploy/nginx/README-ia-enso.md](../deploy/nginx/README-ia-enso.md). Feature note: [features/ia-enso-nginx-proxy-ollama-anythingllm.md](./features/ia-enso-nginx-proxy-ollama-anythingllm.md).

View File

@ -99,13 +99,12 @@ The last command must succeed after `OLLAMA_HOST=0.0.0.0:11434` and `host.docker
## Public reverse proxy (ia.enso.4nkweb.com)
When Ollama runs on a LAN host (e.g. `192.168.1.164` via `IA_ENSO_BACKEND_IP` / `deploy/nginx/sites/ia.enso.4nkweb.com.conf`) and must be reached via the **proxy** with HTTPS and a **Bearer** gate (for clients such as Cursor that block private IPs), use `deploy/nginx/` and **[deploy/nginx/README-ia-enso.md](../deploy/nginx/README-ia-enso.md)** (script `deploy-ia-enso-to-proxy.sh`, checks, troubleshooting).
When Ollama runs on a LAN host (e.g. `192.168.1.164` via `IA_ENSO_BACKEND_IP` / `deploy/nginx/sites/ia.enso.4nkweb.com.conf`) and must be reached via the **proxy** with HTTPS, use `deploy/nginx/` and **[deploy/nginx/README-ia-enso.md](../deploy/nginx/README-ia-enso.md)** (script `deploy-ia-enso-to-proxy.sh`, checks, troubleshooting). **Default:** no nginx Bearer on `/ollama/` (optional `http-maps/ia-enso-ollama-bearer.map.conf.example`).
**Full URLs**
- AnythingLLM UI: `https://ia.enso.4nkweb.com/anythingllm/`
- Ollama native API example: `https://ia.enso.4nkweb.com/ollama/api/tags` (header `Authorization: Bearer <secret>`)
- Ollama native API example: `https://ia.enso.4nkweb.com/ollama/api/tags`
- Cursor / OpenAI-compatible base URL: `https://ia.enso.4nkweb.com/ollama/v1`
- Cursor API key: same value as the Bearer secret configured on the proxy
Feature note: [ia-enso-nginx-proxy-ollama-anythingllm.md](./features/ia-enso-nginx-proxy-ollama-anythingllm.md).