Nicolas Cantu ac96434351 docs: centralize README content under docs/repo/
**Motivations:**
- Single canonical documentation tree under docs/; reduce drift between README copies.

**Evolutions:**
- Add docs/repo/ with operational guides (cron, systemd, projects, logs, docv, ia_dev, services, scripts, extension).
- Replace scattered README.md files with pointers to docs/repo/*.md.
- Refresh docs/README.md index and cross-links across docs/, .cursor rules/agents.
- Bump ia_dev submodule to matching doc pointer commits.
2026-04-03 18:20:31 +02:00


langextract-api (services/langextract-api/)

Local HTTP API bound to 127.0.0.1, wrapping google/langextract: structured extraction from text, with optional character-level anchoring.

Variables

| Variable | Required | Description |
| --- | --- | --- |
| LANGEXTRACT_SERVICE_TOKEN | no | If set, an `Authorization: Bearer` header is required. |
| LANGEXTRACT_API_HOST | no | Default: 127.0.0.1 |
| LANGEXTRACT_API_PORT | no | Default: 37141 |
| LANGEXTRACT_API_KEY | no | API key for cloud models (e.g. Gemini), held server-side. |

Endpoints

  • GET /health
  • POST /extract — JSON body mirroring the upstream extract() parameters (text, prompt_description, examples, model_id, model_url for Ollama, etc.)
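As a minimal sketch of calling POST /extract from Python — the payload field names come from this doc, while the example text, prompt, and model id are hypothetical placeholders to adapt to your setup:

```python
import json
import os
import urllib.request

# Hypothetical payload; field names mirror the upstream extract() parameters
# listed above (text, prompt_description, model_id, ...).
payload = {
    "text": "Alice visited Paris in 2021.",
    "prompt_description": "Extract people and places.",
    "model_id": "gemini-2.5-flash",  # placeholder, adjust to your deployment
}

def build_request(base="http://127.0.0.1:37141", token=None):
    """Build the POST /extract request, adding Bearer auth only if a token is set."""
    headers = {"Content-Type": "application/json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return urllib.request.Request(
        f"{base}/extract",
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
    )

req = build_request(token=os.environ.get("LANGEXTRACT_SERVICE_TOKEN"))
# urllib.request.urlopen(req)  # uncomment once the service is running
```

The auth header is only attached when LANGEXTRACT_SERVICE_TOKEN is set, matching the table above where the token is optional.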

Run

```shell
cd services/langextract-api
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
export LANGEXTRACT_SERVICE_TOKEN='…'
uvicorn app.main:app --host "${LANGEXTRACT_API_HOST:-127.0.0.1}" --port "${LANGEXTRACT_API_PORT:-37141}"
```

For Ollama: set model_id (e.g. gemma2:2b) and model_url to http://127.0.0.1:11434; fence_output: false and use_schema_constraints: false are often needed, depending on the upstream version.
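A sketch of a POST /extract body for a local Ollama backend, under the settings just described — the sample text and prompt are illustrative only:

```python
import json

# Example request body for a local Ollama backend; field names follow the
# upstream extract() parameters referenced in this doc.
ollama_body = {
    "text": "Le chat dort sur le canapé.",        # illustrative input
    "prompt_description": "Extract animals and locations.",
    "model_id": "gemma2:2b",
    "model_url": "http://127.0.0.1:11434",
    "fence_output": False,            # often required with Ollama
    "use_schema_constraints": False,  # idem, depending on the upstream version
}

print(json.dumps(ollama_body, indent=2))
```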

Specification

API/langextract-api.md, features/langextract-api.md.