Initial release v1.1.0

- Complete MVP for tracking Fidelity brokerage account performance
- Transaction import from CSV with deduplication
- Automatic FIFO position tracking with options support
- Real-time P&L calculations with market data caching
- Dashboard with timeframe filtering (30/90/180 days, 1 year, YTD, all time)
- Docker-based deployment with PostgreSQL backend
- React/TypeScript frontend with TailwindCSS
- FastAPI backend with SQLAlchemy ORM

Features:
- Multi-account support
- Import via CSV upload or filesystem
- Open and closed position tracking
- Balance history charting
- Performance analytics and metrics
- Top trades analysis
- Responsive UI design

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Commit eea4469095 by Chris, 2026-01-22 14:27:43 -05:00 (90 changed files with 14513 additions and 0 deletions)

LINUX_DEPLOYMENT.md (new file, 540 lines):
# Linux Server Deployment Guide
Complete guide for deploying myFidelityTracker on a Linux server.
## Prerequisites
### Linux Server Requirements
- **OS**: Ubuntu 20.04+, Debian 11+, CentOS 8+, or similar
- **RAM**: 4GB minimum (8GB recommended)
- **Disk**: 20GB free space
- **Network**: Open ports 3000, 8000 (or configure firewall)
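A quick way to confirm the target server meets these requirements, using standard utilities:
```bash
# OS, memory, and free disk space at a glance
head -n 2 /etc/os-release
free -h
df -h /
```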
### Required Software
- Docker Engine 20.10+
- Docker Compose 1.29+ (or Docker Compose V2)
- Git (optional, for cloning)
## Step 1: Install Docker on Linux
### Ubuntu/Debian
```bash
# Update package index
sudo apt-get update
# Install dependencies
sudo apt-get install -y \
ca-certificates \
curl \
gnupg \
lsb-release
# Add Docker's official GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Set up repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Add your user to docker group (optional, to run without sudo)
sudo usermod -aG docker $USER
newgrp docker
# Verify installation
docker --version
docker compose version
```
### CentOS/RHEL
```bash
# Install Docker
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Start Docker
sudo systemctl start docker
sudo systemctl enable docker
# Add user to docker group (optional)
sudo usermod -aG docker $USER
newgrp docker
# Verify
docker --version
docker compose version
```
## Step 2: Transfer Files to Linux Server
### Option A: Direct Transfer (from your Mac)
```bash
# From your Mac, transfer the entire project directory
# Replace USER and SERVER_IP with your values
cd /Users/chris/Desktop
scp -r fidelity USER@SERVER_IP:~/
# Example:
# scp -r fidelity ubuntu@192.168.1.100:~/
```
### Option B: Using rsync (faster for updates)
```bash
# From your Mac
rsync -avz --progress /Users/chris/Desktop/fidelity/ USER@SERVER_IP:~/fidelity/
# Exclude node_modules and other large dirs
rsync -avz --progress \
--exclude 'node_modules' \
--exclude '__pycache__' \
--exclude '*.pyc' \
/Users/chris/Desktop/fidelity/ USER@SERVER_IP:~/fidelity/
```
### Option C: Git (if using version control)
```bash
# On your Linux server
cd ~
git clone YOUR_REPO_URL fidelity
cd fidelity
```
### Option D: Manual ZIP Transfer
```bash
# On your Mac - create zip
cd /Users/chris/Desktop
zip -r fidelity.zip fidelity/ -x "*/node_modules/*" "*/__pycache__/*" "*.pyc"
# Transfer the zip
scp fidelity.zip USER@SERVER_IP:~/
# On Linux server - extract
cd ~
unzip fidelity.zip
```
## Step 3: Configure for Linux Environment
SSH into your Linux server:
```bash
ssh USER@SERVER_IP
cd ~/fidelity
```
### Make scripts executable
```bash
chmod +x start-linux.sh
chmod +x stop.sh
```
### Configure environment variables
```bash
# Create .env file
cp .env.example .env
# Edit .env file to add your server IP for CORS
nano .env # or use vim, vi, etc.
```
Update the CORS_ORIGINS line:
```env
CORS_ORIGINS=http://localhost:3000,http://YOUR_SERVER_IP:3000
```
Replace `YOUR_SERVER_IP` with your actual server IP address.
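If you are unsure of the server's address, look it up on the server itself:
```bash
# Primary IPv4 address (first one reported)
hostname -I | awk '{print $1}'
# Or list all IPv4 interfaces
ip -4 addr show
```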
### Create imports directory
```bash
mkdir -p imports
```
## Step 4: Start the Application
```bash
# Start all services
./start-linux.sh
# Or manually:
docker-compose up -d
```
Note: Step 1 installs the Compose V2 plugin, which is invoked as `docker compose` (with a space); if the standalone `docker-compose` binary is not installed, substitute `docker compose` in this and later commands.
The script will (a rough sketch of this behavior follows the list):
- Check that Docker is running
- Create necessary directories
- Start all containers (postgres, backend, frontend)
- Display access URLs
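The sketch below illustrates those steps; it is an assumption, not the shipped `start-linux.sh`, and the real script is the one to use.
```bash
#!/bin/bash
# Illustrative sketch only; use the start-linux.sh that ships with the project
set -e

# Check that Docker is running
docker info > /dev/null 2>&1 || { echo "Docker is not running"; exit 1; }

# Create necessary directories
mkdir -p imports

# Start all containers defined in docker-compose.yml
docker-compose up -d

# Display access URLs
SERVER_IP=$(hostname -I | awk '{print $1}')
echo "Frontend:    http://${SERVER_IP}:3000"
echo "Backend API: http://${SERVER_IP}:8000"
```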
## Step 5: Access the Application
### From the Server Itself
- Frontend: http://localhost:3000
- Backend API: http://localhost:8000
- API Docs: http://localhost:8000/docs
### From Other Computers on the Network
- Frontend: http://YOUR_SERVER_IP:3000
- Backend API: http://YOUR_SERVER_IP:8000
- API Docs: http://YOUR_SERVER_IP:8000/docs
### From the Internet (if server has public IP)
First configure the firewall (see Step 6 and the Security Best Practices section below), then:
- Frontend: http://YOUR_PUBLIC_IP:3000
- Backend API: http://YOUR_PUBLIC_IP:8000
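A quick reachability check from another machine, assuming the default ports and the FastAPI docs route at `/docs`:
```bash
# Run from another computer; replace YOUR_SERVER_IP with the server's address
curl -I http://YOUR_SERVER_IP:3000
curl -I http://YOUR_SERVER_IP:8000/docs
```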
## Step 6: Configure Firewall (Ubuntu/Debian)
```bash
# Allow SSH (important - don't lock yourself out!)
sudo ufw allow 22/tcp
# Allow application ports
sudo ufw allow 3000/tcp # Frontend
sudo ufw allow 8000/tcp # Backend API
# Enable firewall
sudo ufw enable
# Check status
sudo ufw status
```
### For CentOS/RHEL (firewalld)
```bash
# Allow ports
sudo firewall-cmd --permanent --add-port=3000/tcp
sudo firewall-cmd --permanent --add-port=8000/tcp
sudo firewall-cmd --reload
# Check status
sudo firewall-cmd --list-all
```
## Step 7: Load Demo Data (Optional)
```bash
# Copy your CSV to imports directory
cp History_for_Account_X38661988.csv imports/
# Run seeder
docker-compose exec backend python seed_demo_data.py
```
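To sanity-check the import, you can list the tables the backend created and poke at them with one-off queries (same credentials as in the Access Database section below):
```bash
# List tables in the application database
docker-compose exec postgres psql -U fidelity -d fidelitytracker -c "\dt"
```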
## Common Linux-Specific Commands
### View Logs
```bash
# All services
docker-compose logs -f
# Specific service
docker-compose logs -f backend
docker-compose logs -f frontend
docker-compose logs -f postgres
# Last 100 lines
docker-compose logs --tail=100
```
### Check Container Status
```bash
docker-compose ps
docker ps
```
### Restart Services
```bash
docker-compose restart
docker-compose restart backend
```
### Stop Application
```bash
./stop.sh
# or
docker-compose down
```
### Update Application
```bash
# Stop containers
docker-compose down
# Pull latest code (if using git)
git pull
# Rebuild and restart
docker-compose up -d --build
```
### Access Database
```bash
docker-compose exec postgres psql -U fidelity -d fidelitytracker
```
### Shell Access to Containers
```bash
# Backend shell
docker-compose exec backend bash
# Frontend shell
docker-compose exec frontend sh
# Database shell
docker-compose exec postgres bash
```
## Troubleshooting
### Port Already in Use
```bash
# Check what's using the port
sudo lsof -i :3000
sudo lsof -i :8000
sudo lsof -i :5432
# Or use netstat
sudo netstat -tlnp | grep 3000
# Kill the process
sudo kill <PID>
```
### Permission Denied Errors
```bash
# If you get permission errors with Docker
sudo usermod -aG docker $USER
newgrp docker
# If import directory has permission issues
sudo chown -R $USER:$USER imports/
chmod 755 imports/
```
### Docker Out of Space
```bash
# Clean up unused containers, images, volumes
docker system prune -a
# Remove only dangling images
docker image prune
```
### Services Won't Start
```bash
# Check Docker is running
sudo systemctl status docker
sudo systemctl start docker
# Check logs for errors
docker-compose logs
# Rebuild from scratch
docker-compose down -v
docker-compose up -d --build
```
### Cannot Access from Other Computers
```bash
# Check firewall
sudo ufw status
sudo firewall-cmd --list-all
# Check if services are listening on all interfaces
sudo netstat -tlnp | grep 3000
# Should show 0.0.0.0:3000, not 127.0.0.1:3000
# Update CORS in .env
nano .env
# Add your server IP to CORS_ORIGINS
```
## Production Deployment (Optional)
### Use Docker Compose in Production Mode
Create `docker-compose.prod.yml`:
```yaml
version: '3.8'
services:
  postgres:
    restart: always
  backend:
    restart: always
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}  # Use a strong password
  frontend:
    restart: always
```
Start with:
```bash
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```
### Set Up as System Service (Systemd)
Create `/etc/systemd/system/fidelity-tracker.service`:
```ini
[Unit]
Description=myFidelityTracker
Requires=docker.service
After=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/home/YOUR_USER/fidelity
ExecStart=/usr/bin/docker-compose up -d
ExecStop=/usr/bin/docker-compose down
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target
```
Enable and start:
```bash
sudo systemctl daemon-reload
sudo systemctl enable fidelity-tracker
sudo systemctl start fidelity-tracker
sudo systemctl status fidelity-tracker
```
### Enable HTTPS with Nginx Reverse Proxy
Install Nginx:
```bash
sudo apt-get install nginx certbot python3-certbot-nginx
```
Configure `/etc/nginx/sites-available/fidelity`:
```nginx
server {
    listen 80;
    server_name your-domain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /api {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
Enable and get SSL:
```bash
sudo ln -s /etc/nginx/sites-available/fidelity /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl restart nginx
sudo certbot --nginx -d your-domain.com
```
### Backup Database
```bash
# Create backup script
cat > backup-db.sh << 'EOF'
#!/bin/bash
# Run from the project directory so docker-compose finds the compose file (important when invoked from cron)
cd "$(dirname "$0")"
DATE=$(date +%Y%m%d_%H%M%S)
docker-compose exec -T postgres pg_dump -U fidelity fidelitytracker > backup_$DATE.sql
gzip backup_$DATE.sql
echo "Backup created: backup_$DATE.sql.gz"
EOF
chmod +x backup-db.sh
# Run backup
./backup-db.sh
# Schedule with cron (daily at 2 AM)
crontab -e
# Add: 0 2 * * * /home/YOUR_USER/fidelity/backup-db.sh
```
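Restoring from one of these backups is the reverse pipeline; a minimal sketch, assuming you are restoring into a fresh or disposable database (the filename is illustrative):
```bash
# Stream a compressed backup into the running postgres container
gunzip -c backup_20250101_020000.sql.gz | \
  docker-compose exec -T postgres psql -U fidelity -d fidelitytracker
```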
## Security Best Practices
1. **Change default passwords** in `.env`
2. **Use firewall** to restrict access
3. **Enable HTTPS** for production
4. **Regular backups** of database
5. **Keep Docker updated**: `sudo apt-get update && sudo apt-get upgrade`
6. **Monitor logs** for suspicious activity
7. **Use strong passwords** for PostgreSQL
8. **Don't expose ports** to the internet unless necessary (see the ufw example after this list)
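As a concrete example of points 2 and 8, ufw can limit the app ports to your local network instead of the whole internet; the 192.168.1.0/24 subnet below is an assumption, so substitute your own:
```bash
# Replace the broad allow rules from Step 6 with LAN-only rules
sudo ufw delete allow 3000/tcp
sudo ufw delete allow 8000/tcp
sudo ufw allow from 192.168.1.0/24 to any port 3000 proto tcp
sudo ufw allow from 192.168.1.0/24 to any port 8000 proto tcp
sudo ufw status numbered
```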
## Performance Optimization
### Increase Docker Resources
Edit `/etc/docker/daemon.json`:
```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```
Restart Docker:
```bash
sudo systemctl restart docker
```
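To confirm the new log settings took effect (daemon-level options only apply to containers created after the restart, so recreate them with `docker-compose up -d` if needed):
```bash
# Daemon-wide logging driver
docker info --format '{{.LoggingDriver}}'
# Log configuration of one running container
docker inspect --format '{{json .HostConfig.LogConfig}}' $(docker ps -q | head -n 1)
```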
### Monitor Resources
```bash
# Container resource usage
docker stats
# System resources
htop
free -h
df -h
```
## Summary
Your app is now running on Linux! The main differences from macOS:
- Use `start-linux.sh` instead of `start.sh`
- Configure firewall for remote access
- CORS needs your server IP
- Use `systemctl` for Docker management
The application itself runs identically; Docker handles all the platform differences.
---
**Questions?** Check the main README.md or run `docker-compose logs` to diagnose issues.