# Local Development Quick Start
This guide provides clear, step-by-step instructions for setting up the Federated Learning Platform for local development.
## Quick Start (5 Minutes)

### Prerequisites
- Docker (v20.10+) and Docker Compose (v2.0+)
- Git
- 8GB+ RAM and 20GB+ free disk space
### 1. Clone and Setup

```shell
# Clone the repository
git clone <repository-url>
cd flip

# Run the automated setup script
./setup-local-training.sh
```
### 2. Access the Application
- Frontend: http://localhost:4000
- Backend API: http://localhost:8000/docs
- Grafana: http://localhost:3001 (admin/admin)
That's it! The script automatically:

- Detects your local IP address
- Creates the Docker Compose configuration
- Builds all Docker images
- Starts all services
- Sets up the federated learning components
## What Gets Started

The `setup-local-training.sh` script starts:
### Core Services
- Frontend (Next.js) - Port 4000
- Backend (FastAPI) - Port 8000
- MongoDB - Port 27017
- Grafana - Port 3001
### Federated Learning Components
- Superlink (Flower orchestrator) - Port 9091/9093
- ServerApp (FL aggregator) - Port varies
- 2 Client Apps (FL participants) - Ports 8082+
- 2 Supernodes (client coordinators) - Ports 9094+
### Observability Stack
- OpenTelemetry Collector - Port 4317
- Tempo (tracing) - Port 3200
- Grafana (monitoring) - Port 3001
## Manual Setup (Alternative)
If you prefer manual control or the script fails:
### 1. Prerequisites Check

```shell
# Verify required software
docker --version          # Should be 20.10+
docker-compose --version  # Should be 2.0+
git --version
```
### 2. Project Setup

```shell
# Clone and enter directory
git clone <repository-url>
cd flip

# Verify project structure
ls -la
# Should see: backend/, frontend/, fl-core/, setup-local-training.sh
```
### 3. ML Project Setup

```shell
# Copy ML project template (required for federated learning)
cd fl-core
unzip ../templates/your-ml-project.zip -d mlproject/
# OR copy your own ML project to fl-core/mlproject/

# Verify ML project exists
ls mlproject/
# Should contain: pyproject.toml, client_app.py, server_app.py, etc.
```
### 4. Build Docker Images

```shell
# Build backend
docker build --target final_without_secrets -t flip/backend-fastapi -f backend/Dockerfile .

# Build frontend
cd frontend
docker build -t flip/frontend-nextjs --target development -f Dockerfile .
cd ..

# Build FL components
cd fl-core
docker build -t flip/serverapp -f Dockerfile.serverapp .
docker build -t flip/clientapp -f Dockerfile.clientapp .
docker build -t flip/inference -f Dockerfile.inference .
cd ..
```
### 5. Generate Docker Compose

```shell
# Auto-detect your IP and create docker-compose.yml
# (hostname -I is Linux-only; on macOS use: ipconfig getifaddr en0)
python3 create-docker-compose.py --orchestrator_ip $(hostname -I | awk '{print $1}') --environment development
```
### 6. Start Services
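With the compose file generated, the stack can be brought up profile by profile. A minimal sketch, assuming the three profile names used later in this guide (orchestrator, aggregator, client):

```shell
# Profile names assumed from the generated docker-compose.yml
stack_profiles="orchestrator aggregator client"

# Bring up the full stack in detached mode, one profile at a time
for p in $stack_profiles; do
  docker-compose --profile "$p" up -d
done
```

The Development Workflow section below shows how to start, stop, and rebuild individual profiles.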
## Development Workflow
### Starting Development

```shell
# Start all services
./setup-local-training.sh

# Or start specific profiles
docker-compose --profile orchestrator up -d  # Frontend + Backend + DB
docker-compose --profile aggregator up -d    # FL Aggregator
docker-compose --profile client up -d        # FL Clients
```
### Viewing Logs

```shell
# All services
docker-compose logs -f

# Specific service
docker-compose logs -f backend-fastapi
docker-compose logs -f frontend-nextjs
docker-compose logs -f serverapp
```
### Stopping Services

```shell
# Stop all
docker-compose --profile "*" down

# Stop specific profile
docker-compose --profile orchestrator down
```
### Rebuilding After Changes

```shell
# Rebuild specific service
docker-compose build backend-fastapi
docker-compose up -d backend-fastapi

# Rebuild all
docker-compose build
docker-compose --profile "*" up -d
```
## Verification Steps

### 1. Check All Services Running
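A quick status check might look like the following (the service names are assumptions based on the components listed earlier in this guide):

```shell
# Every compose-managed container should report a state of "Up"
docker-compose ps

# Core services expected in the output (names assumed from this guide)
expected="backend-fastapi frontend-nextjs mongodb grafana serverapp"
echo "expect to see: $expected"
```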
### 2. Test Frontend
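For a scripted check of the frontend, a plain curl probe is enough (any HTTP client works):

```shell
frontend_url="http://localhost:4000"

# The UI should respond once the frontend container is up
if curl -fsS -o /dev/null "$frontend_url"; then
  echo "frontend OK"
else
  echo "frontend not reachable - is the orchestrator profile running?"
fi
```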
### 3. Test Backend API
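The backend can be checked the same way. FastAPI serves its interactive docs at /docs and the raw OpenAPI schema at /openapi.json, so either path works as a simple probe:

```shell
backend_url="http://localhost:8000"

# Fetch the OpenAPI schema; a JSON response means the API is up
if curl -fsS "$backend_url/openapi.json" >/dev/null; then
  echo "backend API OK"
else
  echo "backend API not reachable - check 'docker-compose logs backend-fastapi'"
fi
```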
### 4. Test Federated Learning
- Go to http://localhost:4000
- Navigate to "Training" section
- Start a federated learning job
- Monitor progress in real-time
### 5. Test Monitoring
- Go to http://localhost:3001
- Login with admin/admin
- View federated learning metrics
## Troubleshooting

### Common Issues
"Port already in use"¶
"Docker build failed"¶
# Clean Docker cache
docker system prune -a
docker volume prune
# Rebuild from scratch
./setup-local-training.sh
"ML project not found"¶
# Copy template project
cd fl-core
unzip ../templates/mnist-example.zip -d mlproject/
# Verify pyproject.toml exists
ls mlproject/pyproject.toml
"Services not starting"¶
# Check Docker daemon
sudo systemctl status docker
# Check logs for errors
docker-compose logs backend-fastapi
docker-compose logs frontend-nextjs
"IP address detection failed"¶
# Manually specify IP
export ORCHESTRATOR_IP="192.168.1.100" # Your actual IP
./setup-local-training.sh
### Getting Help

```shell
# View setup script help
./setup-local-training.sh --help

# Check Docker Compose configuration
docker-compose config

# View all running containers
docker ps -a
```
## Next Steps
After successful setup:
- Explore the UI at http://localhost:4000
- Review API docs at http://localhost:8000/docs
- Check monitoring at http://localhost:3001
- Run your first federated learning job
- Read the Architecture Overview
## Development Tips
- Hot reload: Frontend and backend support hot reload in development
- Database: MongoDB data persists in Docker volumes
- Logs: Use `docker-compose logs -f` for real-time debugging
- Ports: All ports are configurable in docker-compose.yml
- Performance: Allocate 8GB+ RAM to Docker for best performance
## What the Setup Script Does

The `setup-local-training.sh` script:
- Detects your local IP address (macOS/Linux compatible)
- Sets up environment variables (MongoDB credentials, etc.)
- Checks for the ML project in `fl-core/mlproject/`
- Generates docker-compose.yml with proper networking
- Builds all Docker images:
    - Backend FastAPI
    - Frontend Next.js
    - Flower ServerApp (aggregator)
    - Flower ClientApp (participants)
    - Inference server
- Starts all services with proper profiles
- Configures networking for federated learning communication
This automated approach ensures all components work together seamlessly for local federated learning development.