# HOA LedgerIQ — AI Investment Advisor Configuration

## Overview
The Benefit Calculator uses an AI model to generate a personalized investment recommendation. It supports any OpenAI-compatible API — including OpenAI, NVIDIA NIM, Together AI, Groq, Ollama, and others — configured via environment variables.
## Architecture

```
Browser (app.js)
  └─► POST /api/calculate (server.js)
        └─► OpenAI-compatible API (AI_API_URL)
              └─► Returns AI-generated recommendation text
        └─► JSON response back to browser
            (falls back to client-side math text if unavailable)
```
## Configuration (.env)

Add these variables to your `.env` file (or systemd `EnvironmentFile`):

```ini
# AI Investment Advisor (OpenAI-compatible API)
AI_API_URL=https://integrate.api.nvidia.com/v1
AI_API_KEY=your_api_key_here
AI_MODEL=qwen/qwen3.5-397b-a17b

# Set to 'true' to enable detailed AI prompt/response logging
AI_DEBUG=false
```
## Provider Examples

| Provider | `AI_API_URL` | Example Model |
|---|---|---|
| OpenAI | `https://api.openai.com/v1` | `gpt-4o-mini` |
| NVIDIA NIM | `https://integrate.api.nvidia.com/v1` | `qwen/qwen3.5-397b-a17b` |
| Together AI | `https://api.together.xyz/v1` | `meta-llama/Llama-3-70b-chat-hf` |
| Groq | `https://api.groq.com/openai/v1` | `llama3-70b-8192` |
| Ollama (local) | `http://localhost:11434/v1` | `llama3` |
If `AI_API_KEY` is not set, the `/api/calculate` endpoint returns 503 and the calculator falls back to client-side generated text automatically.
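The 503 behavior described above can be sketched as a small guard at the top of the route handler. This is an illustrative sketch, not the exact code in server.js; the error-body shape and message text are assumptions.

```js
// Sketch of the guard at the top of the /api/calculate handler.
// aiClient is null when AI_API_KEY is unset (see "How It Works" below).
function makeCalculateGuard(aiClient) {
  return (req, res, next) => {
    if (!aiClient) {
      // No key configured: report 503 so the client can fall back.
      return res.status(503).json({ error: 'AI advisor not configured' });
    }
    next(); // key present: continue to the AI call
  };
}
```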
## How It Works

`server.js` initializes the OpenAI client with your configured base URL and key:

```js
const aiClient = AI_API_KEY
  ? new OpenAI({ apiKey: AI_API_KEY, baseURL: AI_API_URL })
  : null;
```
The `POST /api/calculate` endpoint builds a prompt from the form inputs and calls:

```js
const completion = await aiClient.chat.completions.create({
  model: AI_MODEL,
  max_tokens: 300,
  messages: [{ role: 'user', content: prompt }],
});
```
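The prompt assembly can be sketched as a small pure helper. The helper name `buildPrompt` and the exact wording are assumptions; the field names follow the `/api/calculate` request body.

```js
// Hypothetical buildPrompt helper; the actual wording in server.js may differ.
function buildPrompt(inputs) {
  return [
    `An HOA with ${inputs.homesites} homesites (type: ${inputs.propertyType})`,
    `collects $${inputs.annualIncome} per year in ${inputs.paymentFreq} payments,`,
    `holds $${inputs.reserveFunds} in reserve funds, and earned`,
    `$${inputs.interest2025} in interest in 2025.`,
    'Write a brief, conservative investment recommendation for the reserves.',
  ].join(' ');
}
```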
`app.js` calls this endpoint on form submit and falls back to the client-side text if the server returns an error or is unreachable.
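That fallback logic can be sketched as follows. The names `getRecommendation` and `generateFallbackText`, and the `recommendation` response field, are assumptions; `doFetch` is injectable so the fallback path is easy to exercise in tests.

```js
// Sketch of the submit-handler logic in app.js. doFetch defaults to the
// browser's fetch but can be injected for testing.
async function getRecommendation(inputs, generateFallbackText, doFetch = fetch) {
  try {
    const res = await doFetch('/api/calculate', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(inputs),
    });
    if (!res.ok) throw new Error(`HTTP ${res.status}`); // e.g. 503 when AI_API_KEY is unset
    const data = await res.json();
    return data.recommendation; // response field name is an assumption
  } catch {
    // Server error or unreachable: fall back to client-side generated text.
    return generateFallbackText(inputs);
  }
}
```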
## Restart & Verify

```bash
sudo systemctl restart hoaledgeriqweb

# Test the endpoint
curl -X POST http://localhost:3000/api/calculate \
  -H "Content-Type: application/json" \
  -d '{"homesites":150,"propertyType":"sfh","annualIncome":300000,"paymentFreq":"monthly","reserveFunds":500000,"interest2025":4200}'
```
## Prompt Tuning

Edit the prompt in `server.js` (inside the `/api/calculate` route) to adjust tone or output:

| Goal | Change |
|---|---|
| More optimistic estimates | Change "conservative" to "moderate" |
| Shorter output | Reduce `max_tokens` to 150 |
| Specific products | Add "mention Vanguard Federal Money Market or 6-month T-bills" |
| Add disclaimer | Append "End with one sentence reminding them this is not financial advice." |
## Debug Logging

Set `AI_DEBUG=true` in `.env` to log the full prompt and response to the server console. Useful for testing new models or prompt changes.
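One way the flag might gate logging inside server.js is a small helper like the following; the helper name and log format are assumptions, not the actual implementation.

```js
// Hypothetical debug-logging helper gated by AI_DEBUG.
function makeDebugLogger(enabled) {
  return (label, payload) => {
    if (enabled) console.log(`[AI ${label}]`, JSON.stringify(payload, null, 2));
  };
}

// In server.js this would be wired from the environment:
const debugLog = makeDebugLogger(process.env.AI_DEBUG === 'true');
```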
## Security

- Never put `AI_API_KEY` in `app.js` — all AI calls go through `server.js`.
- Rate-limit the endpoint to prevent abuse:

```bash
npm install express-rate-limit --ignore-scripts
```

```js
const rateLimit = require('express-rate-limit');
app.use('/api/calculate', rateLimit({ windowMs: 60_000, max: 10 }));
```