# API Documentation
## Webhook Endpoints
### Gitea Webhook Listener
**URL**: http://192.168.100.87:5000/webhook

**Method**: POST

**Service**: gitea-webhook.service
#### Supported Events
- Pull Request Merged
- Pull Request Closed
- Issue Comment (for feedback)
#### Response Actions
- PR Merged → triggers `deploy_approved.py`
- PR Closed → saves feedback via `gitea_pr_feedback.py`
- Comment with `rejected:` → updates learning patterns
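The event-to-action mapping above can be sketched as a small dispatch function. This is illustrative only: the actual `gitea-webhook.service` code is not shown in this document, and the returned labels are just stand-ins for the scripts it launches. Payload field names follow Gitea's standard webhook JSON.

```python
# Illustrative dispatch for the Gitea webhook listener (the real
# gitea-webhook.service implementation is not shown in this document).
def dispatch(event, payload):
    """Map a Gitea event name and JSON payload to the follow-up action."""
    if event == "pull_request" and payload.get("action") == "closed":
        if payload.get("pull_request", {}).get("merged"):
            return "deploy_approved.py"        # PR merged -> deploy
        return "gitea_pr_feedback.py"          # PR closed -> save feedback
    if event == "issue_comment":
        body = payload.get("comment", {}).get("body", "")
        if body.startswith("rejected:"):
            return "update_learning_patterns"  # hypothetical label
    return None                                # ignore other events
```

In the running service this function would be called from the `POST /webhook` handler, with the event name taken from Gitea's `X-Gitea-Event` request header.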
## Elasticsearch Queries
### Traffic Analysis
**Endpoint**: 192.168.100.85:9200/netflow-*

**Query frequency**: every 60 minutes

**Window**: 168 hours (7 days)
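A minimal sketch of the hourly query body over the 168-hour window. The field names (`@timestamp`, `source.ip`) are assumptions based on typical ECS-style NetFlow mappings and are not confirmed by this document.

```python
import json

# Search URL for the 7-day traffic-analysis window (168 hours).
ES_SEARCH_URL = "http://192.168.100.85:9200/netflow-*/_search"

# Aggregation-only query: no hits returned, just counters over the window.
# Field names (@timestamp, source.ip) are assumptions.
query_body = {
    "size": 0,
    "query": {"range": {"@timestamp": {"gte": "now-168h"}}},
    "aggs": {"unique_src_ips": {"cardinality": {"field": "source.ip"}}},
}

# A scheduler (e.g. a systemd timer or cron job) would POST this body
# to ES_SEARCH_URL every 60 minutes.
payload = json.dumps(query_body)
```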
### Anomaly Detection Thresholds
- Traffic spike: 200% increase
- New hosts: 10+ unique IPs
- Port scan: 50+ ports from a single IP
- DDoS pattern: 1000+ connections/minute
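These thresholds can be sketched as a simple check over pre-aggregated per-window counters. All dict keys below are assumptions (the production detector is not shown here), and "200% increase" is read literally as traffic reaching 3x the previous window.

```python
# Illustrative anomaly check over per-window counters derived from the
# netflow-* queries. Key names are assumptions, not confirmed fields.
def check_anomalies(stats):
    alerts = []
    prev = stats.get("prev_window_bytes", 0)
    # Traffic spike: a 200% increase means the new total is 3x the old one
    if prev and stats.get("window_bytes", 0) >= 3 * prev:
        alerts.append("traffic_spike")
    if stats.get("new_unique_ips", 0) >= 10:
        alerts.append("new_hosts")
    if stats.get("max_ports_from_single_ip", 0) >= 50:
        alerts.append("port_scan")
    if stats.get("max_connections_per_minute", 0) >= 1000:
        alerts.append("ddos_pattern")
    return alerts
```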
## Ollama AI API
### Model Endpoint
**URL**: http://192.168.100.86:11434/api/generate

**Model**: llama2:13b

**Context window**: 4096 tokens

**Temperature**: 0.7
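A call with these parameters can be sketched as follows; the prompt text is a placeholder, and `num_ctx`/`temperature` are the standard Ollama request options corresponding to the context window and temperature above.

```python
import json
import urllib.request

OLLAMA_URL = "http://192.168.100.86:11434/api/generate"

def build_request_body(prompt):
    """Request body matching the parameters listed above."""
    return {
        "model": "llama2:13b",
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
        "options": {"temperature": 0.7, "num_ctx": 4096},
    }

def generate(prompt):
    """POST the prompt to Ollama and return the generated text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request_body(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]
```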