This documentation covers:

- Architecture overview with VM specs
- API documentation for webhooks and Ollama
- Complete deployment guide
- Current system state and metrics
- AI processor integration
# System Architecture

## Infrastructure Overview

### VM Specifications
| VM | IP | CPU | RAM | Storage | Purpose |
|---|---|---|---|---|---|
| Orchestrator | 192.168.100.87 | 4 vCPU | 16GB | 32GB | Main automation hub |
| AI Processor | 192.168.100.86 | 8 vCPU | 56GB | 500GB | Ollama AI processing |
| Elasticsearch | 192.168.100.85 | 4 vCPU | 32GB | 3.4TB | NetFlow analytics |
| Proxmox Host | 192.168.100.75 | - | 128GB | - | Hypervisor & NFS |
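If scripts on the Orchestrator need this inventory, it can be kept as plain data and sanity-checked before a run. The sketch below is illustrative only; the host keys, SSH-port reachability check, and dictionary structure are assumptions, not part of the deployed tooling.

```python
import socket

# VM inventory taken from the table above; illustrative, not a deployed config file.
INVENTORY = {
    "orchestrator":  {"ip": "192.168.100.87", "role": "Main automation hub"},
    "ai-processor":  {"ip": "192.168.100.86", "role": "Ollama AI processing"},
    "elasticsearch": {"ip": "192.168.100.85", "role": "NetFlow analytics"},
    "proxmox-host":  {"ip": "192.168.100.75", "role": "Hypervisor & NFS"},
}

def reachable(ip: str, port: int = 22, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to ip:port succeeds (SSH assumed open)."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, host in INVENTORY.items():
        status = "up" if reachable(host["ip"]) else "unreachable"
        print(f"{name:<14} {host['ip']:<16} {status}")
```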
### Network Topology

All VMs are connected via:

- Management Network: 192.168.100.0/24
- NFS Shared Storage: /shared/ai-gitops
- Gitea Webhooks: Port 5000 (see the listener sketch below)
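The webhook receiver on port 5000 can be as small as a standard-library HTTP server. The sketch below is a minimal illustration rather than the deployed service: the shared secret, the merged-PR check, and the deployment placeholder are assumptions, while the `X-Gitea-Signature` HMAC-SHA256 header is standard Gitea behaviour when a webhook secret is configured.

```python
# Minimal sketch of a Gitea webhook receiver on port 5000 (standard library only).
# The secret value and the deployment hook are placeholders for illustration.
import hashlib
import hmac
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

WEBHOOK_SECRET = b"change-me"  # must match the secret configured in Gitea

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))

        # Gitea signs the payload with HMAC-SHA256 in the X-Gitea-Signature header.
        signature = self.headers.get("X-Gitea-Signature", "")
        expected = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(signature, expected):
            self.send_response(403)
            self.end_headers()
            return

        event = json.loads(body)
        # React only to merged pull requests (the deployment trigger).
        if event.get("action") == "closed" and event.get("pull_request", {}).get("merged"):
            print("PR merged -> trigger deployment")  # placeholder for the deploy step

        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 5000), WebhookHandler).serve_forever()
```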
## Data Flow

1. NetFlow data (850k+ flows/day) is ingested into Elasticsearch
2. The Orchestrator queries Elasticsearch every 60 minutes
3. Smart triggers detect anomalies in the query results
4. The AI Processor generates configurations using llama2:13b (see the sketch after this list)
5. A pull request is created automatically in Gitea
6. A webhook triggers deployment when the PR is merged
7. PR feedback is fed back to the AI for learning
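One orchestrator cycle boils down to an Elasticsearch aggregation followed by an Ollama generate call. The sketch below illustrates that pairing under stated assumptions: the `netflow-*` index name, the `@timestamp`/`source.ip` field names, and the prompt wording are placeholders, while the Ollama `/api/generate` endpoint and `llama2:13b` model follow the setup described above.

```python
# Sketch of one orchestrator cycle: query NetFlow data in Elasticsearch, then ask
# Ollama (llama2:13b on the AI Processor) for a config suggestion. Index and field
# names are assumptions for illustration.
import json
import requests

ES_URL = "http://192.168.100.85:9200"
OLLAMA_URL = "http://192.168.100.86:11434"

def top_talkers_last_hour(size: int = 10) -> list[dict]:
    """Aggregate the busiest source IPs over the last hour of flows."""
    query = {
        "size": 0,
        "query": {"range": {"@timestamp": {"gte": "now-1h"}}},
        "aggs": {"talkers": {"terms": {"field": "source.ip", "size": size}}},
    }
    resp = requests.post(f"{ES_URL}/netflow-*/_search", json=query, timeout=30)
    resp.raise_for_status()
    return resp.json()["aggregations"]["talkers"]["buckets"]

def suggest_srx_config(talkers: list[dict]) -> str:
    """Ask the local Ollama instance to draft an SRX policy update."""
    prompt = (
        "Given these top NetFlow talkers from the last hour, propose a Junos SRX "
        "policy update:\n" + json.dumps(talkers, indent=2)
    )
    resp = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={"model": "llama2:13b", "prompt": prompt, "stream": False},
        timeout=600,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    buckets = top_talkers_last_hour()
    print(suggest_srx_config(buckets))
```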
## Shared Storage Structure

Mount: `192.168.100.75:/mnt/ai-gitops` → `/shared/ai-gitops`

Directory purposes (see the hand-off sketch below):

- `requests/` - AI processing queue
- `responses/` - Generated configurations
- `feedback/` - PR feedback for learning
- `learning/` - Pattern recognition data
- `state/` - System state tracking
- `configs/` - Current SRX configurations
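The hand-off between the Orchestrator and the AI Processor is file-based over this mount: a request lands in `requests/` and the matching configuration appears in `responses/`. The sketch below illustrates that pattern; the file naming, JSON fields, and polling interval are assumptions, not the actual queue format.

```python
# Sketch of the file-based hand-off over the NFS mount: drop a request into
# requests/ and poll responses/ for the matching file. Naming and fields are
# assumptions for illustration.
import json
import time
import uuid
from pathlib import Path

SHARED = Path("/shared/ai-gitops")

def submit_request(payload: dict, timeout_s: int = 900, poll_s: int = 10) -> dict:
    request_id = uuid.uuid4().hex
    request_file = SHARED / "requests" / f"{request_id}.json"
    response_file = SHARED / "responses" / f"{request_id}.json"

    # Write via a temp file and rename so the AI Processor never reads a
    # half-written request.
    tmp = request_file.with_suffix(".tmp")
    tmp.write_text(json.dumps({"id": request_id, **payload}))
    tmp.rename(request_file)

    # Poll the shared mount for the generated configuration.
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if response_file.exists():
            return json.loads(response_file.read_text())
        time.sleep(poll_s)
    raise TimeoutError(f"no response for request {request_id}")
```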