This documentation covers:
- Architecture overview with VM specs
- API documentation for webhooks and Ollama
- Complete deployment guide
- Current system state and metrics
- AI processor integration
Deployment Guide
Prerequisites
- Proxmox host with 128 GB+ RAM
- Four VMs provisioned
- NFS storage configured
- Network connectivity between all VMs (a quick pre-flight check follows this list)
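Before starting, it is worth confirming the basics. The snippet below is a minimal pre-flight sketch run from the Proxmox host; it uses the three VM addresses named in the steps that follow (the fourth VM's address is not listed in this guide, so append it once known).

```bash
# Pre-flight sanity check, run from the Proxmox host.
# IPs come from the deployment steps below; the fourth VM is not
# listed in this guide, so add its address once known.
free -g | awk '/^Mem:/ {print "Host RAM: " $2 " GiB (want 128+)"}'
for ip in 192.168.100.85 192.168.100.86 192.168.100.87; do
    if ping -c 1 -W 2 "$ip" > /dev/null 2>&1; then
        echo "$ip reachable"
    else
        echo "$ip UNREACHABLE"
    fi
done
```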
Step-by-Step Deployment
1. Elasticsearch Setup (192.168.100.85)
- Install Elasticsearch 7.x
- Configure NetFlow ingestion
- Set a 30-day retention policy (see the sketch below)
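A minimal sketch of this step on an Ubuntu/Debian host. The ILM policy name netflow-30d is illustrative, and the NetFlow ingestion path (collector, pipeline, index naming) is not specified in this guide, so only the install and retention pieces are shown:

```bash
# Install Elasticsearch 7.x from the official Elastic APT repository
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" \
    | sudo tee /etc/apt/sources.list.d/elastic-7.x.list
sudo apt-get update && sudo apt-get install -y elasticsearch
sudo systemctl enable --now elasticsearch

# 30-day retention via an ILM policy (policy name is illustrative);
# attach it to the NetFlow index template once the collector is chosen.
curl -X PUT "http://192.168.100.85:9200/_ilm/policy/netflow-30d" \
    -H 'Content-Type: application/json' -d '{
  "policy": {
    "phases": {
      "delete": { "min_age": "30d", "actions": { "delete": {} } }
    }
  }
}'
```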
2. AI Processor Setup (192.168.100.86)
- Install Ollama
- Pull the llama2:13b model
- Install Python 3.10+
- Deploy ai_processor.py
- Enable ai-processor.service (see the sketch below)
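A sketch of this step, assuming the install target /opt/netops and a prepared ai-processor.service unit file (neither path is specified in this guide):

```bash
# Install Ollama (official install script) and pull the model
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama2:13b

# Deploy the processor; /opt/netops is an assumed install path
sudo apt-get install -y python3 python3-venv
sudo mkdir -p /opt/netops
sudo cp ai_processor.py /opt/netops/

# Enable the service (unit file assumed to ship alongside the script)
sudo cp ai-processor.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now ai-processor.service
```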
3. Orchestrator Setup (192.168.100.87)
- Install Python 3.10+
- Create virtual environment
- Install dependencies from requirements.txt
- Configure config.yaml
- Deploy systemd services
- Start orchestrator.service
- Start gitea-webhook.service (setup commands sketched below)
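The commands below sketch the orchestrator setup under the same assumed /opt/netops prefix; the config.yaml keys shown in comments are illustrative, since the actual schema is not documented here:

```bash
# Virtual environment and dependencies (/opt/netops/orchestrator is assumed)
cd /opt/netops/orchestrator
python3 -m venv venv
./venv/bin/pip install -r requirements.txt

# Edit config.yaml -- illustrative keys, not the actual schema:
#   elasticsearch_url: http://192.168.100.85:9200
#   ollama_url: http://192.168.100.86:11434

# Deploy and start both services (unit files assumed to ship with the repo)
sudo cp orchestrator.service gitea-webhook.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now orchestrator.service gitea-webhook.service
```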
4. Shared Storage Setup
- Mount NFS on all VMs
- Set ownership to netops:netops
- Create the directory structure (sketched below)
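A sketch for one VM, repeated on each; the NFS server address, export path, mount point, and directory names are all assumptions, as the guide does not pin them down:

```bash
# Mount the shared NFS export (server and export path are assumptions)
sudo mkdir -p /mnt/netops
echo "192.168.100.1:/export/netops /mnt/netops nfs defaults,_netdev 0 0" \
    | sudo tee -a /etc/fstab
sudo mount -a

# Ownership and an illustrative directory layout
sudo chown -R netops:netops /mnt/netops
sudo -u netops mkdir -p /mnt/netops/{incoming,processed,logs}
```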
5. Gitea Configuration
- Create netops user
- Generate API token
- Configure webhook
- Set PR labels (API calls sketched below)
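These calls sketch the token, webhook, and label setup against the Gitea REST API. The Gitea hostname, the repository (netops/configs), the orchestrator's webhook port and path, and the label name are all assumptions; on newer Gitea versions the token call also accepts a scopes field.

```bash
GITEA=http://gitea.example.local   # assumed hostname

# Generate an API token for the netops user
curl -s -u netops:PASSWORD -X POST "$GITEA/api/v1/users/netops/tokens" \
    -H 'Content-Type: application/json' -d '{"name": "netops-automation"}'

# Register the webhook pointing at the orchestrator's listener
# (repo netops/configs, port 8080, and /webhook path are assumptions)
curl -s -H "Authorization: token $TOKEN" -X POST \
    "$GITEA/api/v1/repos/netops/configs/hooks" \
    -H 'Content-Type: application/json' -d '{
  "type": "gitea",
  "active": true,
  "events": ["pull_request"],
  "config": {
    "url": "http://192.168.100.87:8080/webhook",
    "content_type": "json"
  }
}'

# Create a PR label (name and color are illustrative)
curl -s -H "Authorization: token $TOKEN" -X POST \
    "$GITEA/api/v1/repos/netops/configs/labels" \
    -H 'Content-Type: application/json' \
    -d '{"name": "ai-review", "color": "#00aabb"}'
```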
Verification
- Check service status on each VM (commands sketched below)
- Verify NFS mounts
- Test Elasticsearch connectivity
- Confirm Ollama is responding
- Create a test PR
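The first four checks can be scripted; the mount point and service names follow the assumptions above. The test PR is then created in Gitea as usual to exercise the webhook end to end.

```bash
# Services (run on the VM that hosts each unit)
systemctl is-active ai-processor orchestrator gitea-webhook

# NFS mount (mount point is an assumption)
mount | grep /mnt/netops

# Elasticsearch cluster health
curl -s http://192.168.100.85:9200/_cluster/health?pretty

# Ollama: list available models on the default port
curl -s http://192.168.100.86:11434/api/tags
```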