Add comprehensive documentation

- Architecture overview with VM specs
- API documentation for webhooks and Ollama
- Complete deployment guide
- Current system state and metrics
- AI processor integration
2025-09-04 02:34:49 +00:00
parent 66d97011ab
commit 2817795cca
6 changed files with 845 additions and 0 deletions

docs/DEPLOYMENT.md Normal file

@@ -0,0 +1,48 @@
# Deployment Guide
## Prerequisites
- Proxmox host with 128GB+ RAM
- 4 VMs provisioned
- NFS storage configured
- Network connectivity between VMs
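The prerequisites above can be sanity-checked before starting. A minimal sketch (the NFS server IP is an assumption; only the three VM addresses named in this guide are probed):

```shell
# Check host RAM (expect 128+ on the Proxmox host)
free -g | awk '/^Mem:/ {print $2 " GB RAM"}'

# Confirm the NFS server is exporting (192.168.100.1 is an assumed server address)
showmount -e 192.168.100.1

# Verify connectivity to the VMs used in this guide
for vm in 192.168.100.85 192.168.100.86 192.168.100.87; do
  ping -c1 -W2 "$vm" >/dev/null && echo "$vm reachable" || echo "$vm UNREACHABLE"
done
```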
## Step-by-Step Deployment
### 1. Elasticsearch Setup (192.168.100.85)
- Install Elasticsearch 7.x
- Configure NetFlow ingestion
- Set a 30-day retention policy
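The Elasticsearch steps might look like the following on a Debian/Ubuntu VM. The index pattern `netflow-*` and the ILM policy name are assumptions; adjust to match your NetFlow ingestion pipeline:

```shell
# Install Elasticsearch 7.x from the official Elastic apt repository
wget -qO- https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elastic.gpg
echo "deb [signed-by=/usr/share/keyrings/elastic.gpg] https://artifacts.elastic.co/packages/7.x/apt stable main" \
  | sudo tee /etc/apt/sources.list.d/elastic-7.x.list
sudo apt-get update && sudo apt-get install -y elasticsearch
sudo systemctl enable --now elasticsearch

# 30-day retention via an ILM policy (policy name is illustrative)
curl -XPUT "http://192.168.100.85:9200/_ilm/policy/netflow-retention" \
  -H 'Content-Type: application/json' -d '{
  "policy": {
    "phases": {
      "delete": { "min_age": "30d", "actions": { "delete": {} } }
    }
  }
}'
```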
### 2. AI Processor Setup (192.168.100.86)
- Install Ollama
- Pull llama2:13b model
- Install Python 3.10+
- Deploy ai_processor.py
- Enable ai-processor.service
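A sketch of the AI processor steps, assuming the upstream Ollama install script and an install path of `/opt/netops` (the path is an assumption, not specified in this guide):

```shell
# Install Ollama via the official install script, then pull the model
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama2:13b

# Install Python and deploy the processor (target path is an assumption)
sudo apt-get install -y python3.10
sudo install -D ai_processor.py /opt/netops/ai_processor.py
sudo systemctl enable --now ai-processor.service
```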
### 3. Orchestrator Setup (192.168.100.87)
- Install Python 3.10+
- Create virtual environment
- Install dependencies from requirements.txt
- Configure config.yaml
- Deploy systemd services
- Start orchestrator.service
- Start gitea-webhook.service
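The orchestrator steps above, sketched as shell commands. The `/opt/netops` directory and the unit-file locations are assumptions; the service names come from this guide:

```shell
# Create the virtual environment and install dependencies
python3 -m venv /opt/netops/venv
/opt/netops/venv/bin/pip install -r requirements.txt

# Edit config.yaml before starting the services, then deploy the units
sudo cp orchestrator.service gitea-webhook.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now orchestrator.service gitea-webhook.service
```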
### 4. Shared Storage Setup
- Mount NFS on all VMs
- Set permissions (netops:netops)
- Create directory structure
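A minimal sketch of the shared-storage setup, to be run on each VM. The NFS server address, export path, and mount point are assumptions:

```shell
# Mount the shared NFS export persistently (server/export are assumed values)
sudo mkdir -p /mnt/netops
echo "192.168.100.1:/export/netops /mnt/netops nfs defaults,_netdev 0 0" | sudo tee -a /etc/fstab
sudo mount -a

# Set ownership and create the expected layout (subdirectories are illustrative)
sudo chown -R netops:netops /mnt/netops
sudo -u netops mkdir -p /mnt/netops/{incoming,processed,logs}
```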
### 5. Gitea Configuration
- Create netops user
- Generate API token
- Configure webhook
- Set PR labels
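After creating the `netops` user and generating an API token in the Gitea UI, the webhook can be registered via the Gitea API. The repository name, webhook listener port, and endpoint path below are assumptions:

```shell
# Register a pull_request webhook pointing at the orchestrator VM
# (owner/repo "netops/network-configs" and port 8080 are hypothetical)
curl -X POST "http://GITEA_HOST/api/v1/repos/netops/network-configs/hooks" \
  -H "Authorization: token $GITEA_TOKEN" \
  -H "Content-Type: application/json" -d '{
  "type": "gitea",
  "active": true,
  "events": ["pull_request"],
  "config": {
    "url": "http://192.168.100.87:8080/webhook",
    "content_type": "json"
  }
}'
```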
## Verification
- Check service status
- Verify NFS mounts
- Test Elasticsearch connectivity
- Confirm Ollama is responding
- Create test PR
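The verification checklist can be scripted. This sketch assumes default ports (9200 for Elasticsearch, 11434 for Ollama) and the service names used in this guide:

```shell
# Services up?
systemctl is-active orchestrator.service gitea-webhook.service ai-processor.service

# NFS mounted?
mount | grep -w nfs

# Elasticsearch reachable and healthy?
curl -s http://192.168.100.85:9200/_cluster/health | python3 -m json.tool

# Ollama responding? (default API port 11434)
curl -s http://192.168.100.86:11434/api/tags | python3 -m json.tool
```

If all checks pass, open a test PR in Gitea and confirm the webhook fires and the AI processor posts a result.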