Add comprehensive documentation

- Architecture overview with VM specs
- API documentation for webhooks and Ollama
- Complete deployment guide
- Current system state and metrics
- AI processor integration

docs/CURRENT_STATE.md (new file, +27 lines)

# Current System State

## Active Services
- orchestrator.service - Running 11+ days
- gitea-webhook.service - Running 11+ days
- ai-processor.service - Running 19+ days
- ollama.service - Running 19+ days
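
A quick way to confirm these units are still active (a Python sketch; orchestrator.service and gitea-webhook.service live on 192.168.100.87, while ai-processor.service and ollama.service live on 192.168.100.86, so the check has to run on the VM that owns each unit):

```python
# Health check for the services listed above (sketch; run on each VM
# for the units it actually hosts).
import subprocess

SERVICES = [
    "orchestrator.service",
    "gitea-webhook.service",
    "ai-processor.service",
    "ollama.service",
]

for unit in SERVICES:
    result = subprocess.run(
        ["systemctl", "is-active", unit],
        capture_output=True, text=True,
    )
    state = result.stdout.strip() or "unknown"
    print(f"{unit}: {state}")
```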

## Recent Activity
- Last analysis: September 4, 2025
- PRs created: 14+
- Success rate: 100% (after learning)
- Feedback iterations: 8

## Learning Status
The AI has learned to avoid:
- any/any/any rules
- Missing logging statements
- Trailing braces
- Generic configurations
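
The learned patterns themselves live under /shared/ai-gitops/learning/; the sketch below only illustrates the kind of pre-PR check they imply, with the rules hard-coded and the function name violates_learned_patterns invented for the example:

```python
# Illustrative check only: flags generated config text that matches the
# anti-patterns listed above. The real patterns are persisted by the
# feedback loop; these rules are simplified stand-ins.
def violates_learned_patterns(config_text: str) -> list[str]:
    """Return reasons a generated config should be rejected (illustrative)."""
    reasons = []
    lowered = config_text.lower()
    # any/any/any rule: source, destination and application all left open
    if "source-address any" in lowered and "destination-address any" in lowered \
            and "application any" in lowered:
        reasons.append("any/any/any rule")
    if "log" not in lowered:
        reasons.append("missing logging statement")
    if config_text.count("{") != config_text.count("}"):
        reasons.append("trailing/unbalanced braces")
    return reasons
```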

## Performance Metrics
- Daily flows processed: 850,000+
- Analysis frequency: Every 60 minutes
- AI response time: ~82 seconds
- PR creation time: <2 minutes
- Deployment time: <30 seconds

docs/DEPLOYMENT.md (new file, +48 lines)

# Deployment Guide

## Prerequisites
- Proxmox host with 128GB+ RAM
- 4 VMs provisioned
- NFS storage configured
- Network connectivity between VMs

## Step-by-Step Deployment

### 1. Elasticsearch Setup (192.168.100.85)
- Install Elasticsearch 7.x
- Configure NetFlow ingestion
- Set retention policy for 30 days
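
One way to express the 30-day retention, assuming Elasticsearch 7.x index lifecycle management is used for the netflow-* indices (the policy name netflow-retention is arbitrary, and the policy still has to be referenced from the index template):

```python
# Sketch: create a 30-day delete-after ILM policy on the Elasticsearch VM.
import requests

ES = "http://192.168.100.85:9200"

policy = {
    "policy": {
        "phases": {
            "delete": {"min_age": "30d", "actions": {"delete": {}}}
        }
    }
}

resp = requests.put(f"{ES}/_ilm/policy/netflow-retention", json=policy, timeout=10)
resp.raise_for_status()
print(resp.json())
```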

### 2. AI Processor Setup (192.168.100.86)
- Install Ollama
- Pull llama2:13b model
- Install Python 3.10+
- Deploy ai_processor.py
- Enable ai-processor.service
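
After pulling the model, a quick sanity check against Ollama's local API confirms llama2:13b is actually available (sketch, assuming Ollama listens on its default port 11434):

```python
# Verify the pulled model is registered with the local Ollama instance.
import requests

resp = requests.get("http://192.168.100.86:11434/api/tags", timeout=10)
resp.raise_for_status()
models = [m["name"] for m in resp.json().get("models", [])]
print("llama2:13b available:", any(name.startswith("llama2:13b") for name in models))
```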

### 3. Orchestrator Setup (192.168.100.87)
- Install Python 3.10+
- Create virtual environment
- Install requirements.txt
- Configure config.yaml
- Deploy systemd services
- Start orchestrator.service
- Start gitea-webhook.service
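
The contents of config.yaml are not reproduced in this guide; the sketch below only shows loading and validating it, and the key names are hypothetical placeholders:

```python
# Illustrative config.yaml check (the key names below are hypothetical;
# adjust them to whatever the deployed config.yaml actually defines).
import yaml  # PyYAML

REQUIRED_KEYS = ["elasticsearch_url", "ollama_url", "gitea_url", "gitea_token"]

with open("config.yaml") as fh:
    cfg = yaml.safe_load(fh)

missing = [k for k in REQUIRED_KEYS if k not in cfg]
if missing:
    raise SystemExit(f"config.yaml is missing keys: {missing}")
print("config.yaml OK")
```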

### 4. Shared Storage Setup
- Mount NFS on all VMs
- Set permissions (netops:netops)
- Create directory structure
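
The directory names below mirror the layout documented in docs/architecture/README.md; the sketch assumes the NFS export is already mounted at /shared/ai-gitops and that the script can chown to netops:netops:

```python
# Create the shared directory layout on the NFS mount.
import shutil
from pathlib import Path

ROOT = Path("/shared/ai-gitops")
SUBDIRS = ["requests", "responses", "feedback", "learning", "state", "configs"]

for name in SUBDIRS:
    d = ROOT / name
    d.mkdir(parents=True, exist_ok=True)
    shutil.chown(d, user="netops", group="netops")
print("shared storage layout ready")
```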

### 5. Gitea Configuration
- Create netops user
- Generate API token
- Configure webhook
- Set PR labels
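
The webhook can also be registered through Gitea's REST API instead of the web UI; in the sketch below the Gitea base URL, repository path, and token are placeholders:

```python
# Register the orchestrator webhook against a Gitea repository (sketch).
import requests

GITEA = "http://gitea.example.local:3000"   # placeholder host
REPO = "netops/srx-configs"                 # placeholder repository
TOKEN = "REPLACE_WITH_NETOPS_API_TOKEN"

hook = {
    "type": "gitea",
    "active": True,
    "events": ["pull_request", "issue_comment"],
    "config": {
        "url": "http://192.168.100.87:5000/webhook",
        "content_type": "json",
    },
}

resp = requests.post(
    f"{GITEA}/api/v1/repos/{REPO}/hooks",
    json=hook,
    headers={"Authorization": f"token {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print("webhook id:", resp.json()["id"])
```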

## Verification
- Check service status
- Verify NFS mounts
- Test Elasticsearch connectivity
- Confirm Ollama responding
- Create test PR
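
A minimal connectivity pass over the external endpoints (sketch; NFS mounts and the test PR still need to be checked by hand, and a 405 from the webhook URL only means it accepts POST, which is expected):

```python
# Probe the three HTTP endpoints the pipeline depends on.
import requests

CHECKS = {
    "Elasticsearch": "http://192.168.100.85:9200",
    "Ollama": "http://192.168.100.86:11434/api/tags",
    "Webhook listener": "http://192.168.100.87:5000/webhook",
}

for name, url in CHECKS.items():
    try:
        resp = requests.get(url, timeout=5)
        print(f"{name}: HTTP {resp.status_code}")
    except requests.RequestException as exc:
        print(f"{name}: FAILED ({exc})")
```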

docs/api/README.md (new file, +39 lines)

# API Documentation

## Webhook Endpoints

### Gitea Webhook Listener
**URL**: http://192.168.100.87:5000/webhook
**Method**: POST
**Service**: gitea-webhook.service

#### Supported Events
- Pull Request Merged
- Pull Request Closed
- Issue Comment (for feedback)

#### Response Actions
- PR Merged → Triggers deploy_approved.py
- PR Closed → Saves feedback via gitea_pr_feedback.py
- Comment with "rejected:" → Updates learning patterns
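
The listener itself is implemented by gitea-webhook.service; the Flask sketch below only illustrates the dispatch logic described above, with run_deploy, save_feedback, and update_learning standing in for the real scripts:

```python
# Sketch of the webhook dispatch logic (not the production listener).
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    event = request.headers.get("X-Gitea-Event", "")
    payload = request.get_json(silent=True) or {}

    if event == "pull_request" and payload.get("action") == "closed":
        if payload.get("pull_request", {}).get("merged"):
            run_deploy(payload)          # PR merged -> deploy_approved.py
        else:
            save_feedback(payload)       # PR closed -> gitea_pr_feedback.py
    elif event == "issue_comment":
        body = payload.get("comment", {}).get("body", "")
        if body.lower().startswith("rejected:"):
            update_learning(body)        # rejection feeds the learning data

    return {"status": "ok"}, 200

def run_deploy(payload): ...
def save_feedback(payload): ...
def update_learning(comment): ...

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```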

## Elasticsearch Queries

### Traffic Analysis
Endpoint: 192.168.100.85:9200/netflow-*
Query frequency: Every 60 minutes
Window: 168 hours (7 days)
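
A sketch of what the hourly query looks like (the real orchestrator builds a richer aggregation; the @timestamp field name is assumed):

```python
# Count flows per hour over the 168-hour analysis window.
import requests

query = {
    "size": 0,
    "query": {"range": {"@timestamp": {"gte": "now-168h"}}},
    "aggs": {
        "flows_per_hour": {
            "date_histogram": {"field": "@timestamp", "fixed_interval": "1h"}
        }
    },
}

resp = requests.post(
    "http://192.168.100.85:9200/netflow-*/_search",
    json=query,
    timeout=30,
)
resp.raise_for_status()
buckets = resp.json()["aggregations"]["flows_per_hour"]["buckets"]
print(f"hourly buckets returned: {len(buckets)}")
```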

### Anomaly Detection Thresholds
- Traffic spike: 200% increase
- New hosts: 10+ unique IPs
- Port scan: 50+ ports from single IP
- DDoS pattern: 1000+ connections/minute
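
The thresholds translate directly into checks like the following (sketch; the input values are hypothetical and would normally come from the Elasticsearch aggregations, and the 200% spike is interpreted here as at least twice the baseline):

```python
# Apply the documented thresholds to per-window summary numbers.
def detect_anomalies(current_flows, baseline_flows, new_host_count,
                     ports_per_source, connections_per_minute):
    alerts = []
    if baseline_flows and current_flows >= 2.0 * baseline_flows:
        alerts.append("traffic spike (200%+ of baseline)")
    if new_host_count >= 10:
        alerts.append("new hosts (10+ unique IPs)")
    if any(count >= 50 for count in ports_per_source.values()):
        alerts.append("port scan (50+ ports from a single IP)")
    if connections_per_minute >= 1000:
        alerts.append("possible DDoS (1000+ connections/minute)")
    return alerts
```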

## Ollama AI API

### Model Endpoint
URL: http://192.168.100.86:11434/api/generate
Model: llama2:13b
Context window: 4096 tokens
Temperature: 0.7
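
An example request against this endpoint (sketch; the prompt is illustrative, and streaming is disabled so the reply comes back as a single JSON body):

```python
# Call the Ollama generate endpoint with the documented model and settings.
import requests

payload = {
    "model": "llama2:13b",
    "prompt": "Summarise the firewall policy change needed to block 10.0.0.0/8 to port 23.",
    "stream": False,
    "options": {"temperature": 0.7, "num_ctx": 4096},
}

resp = requests.post(
    "http://192.168.100.86:11434/api/generate",
    json=payload,
    timeout=300,   # the documented response time is ~82 seconds
)
resp.raise_for_status()
print(resp.json()["response"])
```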

docs/architecture/README.md (new file, +37 lines)

# System Architecture

## Infrastructure Overview

### VM Specifications
| VM | IP | CPU | RAM | Storage | Purpose |
|----|----|-----|-----|---------|---------|
| Orchestrator | 192.168.100.87 | 4 vCPU | 16GB | 32GB | Main automation hub |
| AI Processor | 192.168.100.86 | 8 vCPU | 56GB | 500GB | Ollama AI processing |
| Elasticsearch | 192.168.100.85 | 4 vCPU | 32GB | 3.4TB | NetFlow analytics |
| Proxmox Host | 192.168.100.75 | - | 128GB | - | Hypervisor & NFS |

### Network Topology
All VMs connected via:
- Management Network: 192.168.100.0/24
- NFS Shared Storage: /shared/ai-gitops
- Gitea Webhooks: Port 5000

### Data Flow
1. NetFlow Data (850k+ flows/day) → Elasticsearch
2. Orchestrator queries every 60 minutes
3. Smart triggers detect anomalies
4. AI Processor generates configs using llama2:13b
5. Auto PR creation in Gitea
6. Webhook triggers deployment on merge
7. Feedback loop for AI learning

## Shared Storage Structure
Mount: 192.168.100.75:/mnt/ai-gitops → /shared/ai-gitops

Directory purposes:
- requests/ - AI processing queue
- responses/ - Generated configurations
- feedback/ - PR feedback for learning
- learning/ - Pattern recognition data
- state/ - System state tracking
- configs/ - Current SRX configurations
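
A sketch of how the request/response queue can be driven from the orchestrator side (the JSON fields and file-naming convention are illustrative; the actual contract is whatever ai_processor.py expects):

```python
# Drop a request into requests/ and poll responses/ for the matching reply.
import json
import time
import uuid
from pathlib import Path

SHARED = Path("/shared/ai-gitops")

def submit_request(prompt: str, timeout_s: int = 300) -> dict | None:
    req_id = uuid.uuid4().hex
    request_file = SHARED / "requests" / f"{req_id}.json"
    response_file = SHARED / "responses" / f"{req_id}.json"

    request_file.write_text(json.dumps({"id": req_id, "prompt": prompt}))

    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if response_file.exists():
            return json.loads(response_file.read_text())
        time.sleep(5)
    return None
```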