Hardware Requirements
Minimum Specs
| Fleet Size | vCPU | RAM | Disk |
|---|---|---|---|
| < 10 agents | 2 | 4 GB | 40 GB SSD |
| 10–50 agents | 2 | 8 GB | 80 GB SSD |
| 50–500 agents | 4 | 16 GB | 200 GB |
| 500–2000 agents | 8 | 32 GB | 500 GB |
| > 2000 agents | multiple nodes (Postgres replica + horizontally scaled hub-api) | | |
Consumption per Agent
- Ingest: ~1 KB per agent every 15 s → 50 agents ≈ 12 MB/hour
- Inventory: ~80 KB per agent every 6 hours
- Telemetry (30-day retention): ~50 MB per agent in TimescaleDB (after compression)
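As a sanity check, the per-agent rates above can be combined in a small shell calculation (the 50-agent fleet size is an example; the rates are the document's estimates, not measurements):

```shell
AGENTS=50

# Ingest: one ~1 KB sample every 15 s = 240 KB/hour per agent
KB_PER_HOUR=$((AGENTS * 240))
echo "ingest: ~$((KB_PER_HOUR / 1000)) MB/hour"

# Telemetry: ~50 MB per agent over 30 days (compressed)
TELEMETRY_MB=$((AGENTS * 50))
echo "telemetry (30 days): ~${TELEMETRY_MB} MB"
```

Scaling the fleet size linearly scales both figures, which is why the spec table above steps up RAM and disk with agent count.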
Backup Strategy
```shell
# Daily logical dump
pg_dump -U monsys -h postgres monsys | zstd -19 > /backups/$(date -I).sql.zst

# 30-day retention
find /backups -name "*.sql.zst" -mtime +30 -delete
```

For production, also create a hot replica via streaming replication with pg_basebackup on a second node.
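A minimal sketch of seeding that hot replica with pg_basebackup. The hostname `postgres-primary` and the role `replicator` are placeholder assumptions; the primary must have a role with REPLICATION privilege and allow replication connections in pg_hba.conf.

```shell
# Run on the standby node, with an empty data directory.
# -R writes standby.signal and primary_conninfo so the node starts as a standby;
# -X stream streams WAL while the base backup runs; -P shows progress.
pg_basebackup -h postgres-primary -U replicator \
  -D /var/lib/postgresql/data -R -X stream -P
```

After the copy finishes, starting Postgres on the standby brings it up in read-only recovery mode, continuously streaming WAL from the primary.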
Scaling Advice
- Split hub-api and hub-ingest into separate instances when > 200 agents.
- Postgres connection pool: at minimum `max_connections = 100`; use PgBouncer for > 500 agents.
- Enable TimescaleDB compression after 7 days; it saves 80–95% of disk space.
- For multi-region deployments, cross-region replication is not needed: agents choose the closest ingest endpoint via DNS.
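For the PgBouncer recommendation, a minimal `pgbouncer.ini` sketch; the paths, pool sizes, and auth settings are assumptions to adapt, not a tested production configuration:

```ini
[databases]
; route pooled connections to the monsys database
monsys = host=postgres port=5432 dbname=monsys

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling lets many agent connections share few server slots
pool_mode = transaction
default_pool_size = 20
max_client_conn = 2000
```

Point hub-api and hub-ingest at port 6432 instead of Postgres directly; transaction pooling keeps the server-side connection count well under `max_connections` even with thousands of agents.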