- Add toggle-able debug logging (AI_DEBUG env var) that logs prompts, request metadata, raw responses, parsed output, and full error chains
- Replace Node.js native fetch() with the https module for Docker Alpine compatibility (fixes "fetch failed" error with large payloads)
- Reduce max_tokens from 16384 to 4096 (qwen3.5 doesn't need a thinking token budget)
- Strip <think> blocks from model responses
- Add AI_DEBUG to docker-compose.yml and .env.example

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
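The <think>-stripping mentioned above can be sketched as a small helper that removes reasoning blocks before the response is parsed. This is a minimal illustration, not the project's actual code; the function name is hypothetical.

```javascript
// Hypothetical helper: strip <think>...</think> reasoning spans (including
// multi-line ones) from a model response, then trim leftover whitespace.
function stripThinkBlocks(text) {
  return text.replace(/<think>[\s\S]*?<\/think>/g, "").trim();
}

// Example: the visible answer survives, the reasoning span is dropped.
const raw = "<think>\nworking it out...\n</think>The answer is 42.";
const clean = stripThinkBlocks(raw);
console.log(clean); // "The answer is 42."
```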
15 lines · 477 B · Plaintext
POSTGRES_USER=hoafinance
POSTGRES_PASSWORD=change_me
POSTGRES_DB=hoafinance
DATABASE_URL=postgresql://hoafinance:change_me@postgres:5432/hoafinance
REDIS_URL=redis://redis:6379
JWT_SECRET=change_me_to_random_string
NODE_ENV=development

# AI Investment Advisor (OpenAI-compatible API)
AI_API_URL=https://integrate.api.nvidia.com/v1
AI_API_KEY=your_nvidia_api_key_here
AI_MODEL=qwen/qwen3.5-397b-a17b

# Set to 'true' to enable detailed AI prompt/response logging
AI_DEBUG=false
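The AI_DEBUG flag above can gate logging with a small helper that checks the environment variable once at startup. A minimal sketch, assuming the service reads `process.env.AI_DEBUG`; the helper name and log format are illustrative, not the project's actual code.

```javascript
// Hypothetical debug gate: logging is a no-op unless AI_DEBUG is the
// literal string 'true', matching the .env.example comment above.
const AI_DEBUG = process.env.AI_DEBUG === "true";

// Log a labeled payload to stderr only when debugging is enabled.
function debugLog(label, payload) {
  if (!AI_DEBUG) return;
  console.error(`[ai-debug] ${label}:`, JSON.stringify(payload, null, 2));
}

// Usage: safe to call unconditionally in the request path.
debugLog("prompt", { model: "qwen/qwen3.5-397b-a17b", max_tokens: 4096 });
```

Checking against the exact string `'true'` keeps the default (`false` or unset) silent, so the flag can ship enabled only where needed.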