DMZ Security Layer — Implementation Plan
For Claude: REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
Goal: Deploy OPNsense transparent bridge VM with Suricata IPS + CrowdSec distributed agents to replace expired FortiGuard security capabilities.
Architecture: OPNsense VM 155 in bridge mode between FortiGate and server switch, Suricata IDS/IPS with ET Open rules, CrowdSec agents on NPM and NordaBiz feeding community blocklist decisions to OPNsense bouncer.
Tech Stack: OPNsense 26.1, Suricata, CrowdSec (LAPI + agents + bouncers), Proxmox VE, ET Open rulesets
Design document: docs/plans/2026-02-16-dmz-security-layer-design.md
Task 1: Install CrowdSec on NordaBiz (VM 249)
Target: 10.22.68.249 (NORDABIZ-01)
CrowdSec agent on the NordaBiz server — parses Flask/Gunicorn logs, detects brute-force attempts and honeypot hits.
Files:
- Create: /etc/crowdsec/acquis.d/nordabiz.yaml (on VM 249)
- Modify: systemd service (automatic)
Step 1: Install CrowdSec repository and package
ssh maciejpi@10.22.68.249 "curl -s https://install.crowdsec.net | sudo sh"
ssh maciejpi@10.22.68.249 "sudo apt install -y crowdsec"
Step 2: Verify CrowdSec is running
ssh maciejpi@10.22.68.249 "sudo cscli version && sudo systemctl status crowdsec --no-pager"
Expected: CrowdSec running, version displayed.
Step 3: Install collections for nginx + HTTP scenarios
NordaBiz runs behind nginx on port 80, Flask/Gunicorn on port 5000.
ssh maciejpi@10.22.68.249 "sudo cscli collections install crowdsecurity/nginx"
ssh maciejpi@10.22.68.249 "sudo cscli collections install crowdsecurity/base-http-scenarios"
Step 4: Configure acquisition for nginx access logs
ssh maciejpi@10.22.68.249 "sudo tee /etc/crowdsec/acquis.d/nordabiz.yaml << 'EOF'
filenames:
- /var/log/nginx/access.log
labels:
type: nginx
EOF"
Step 5: Configure acquisition for NordaBiz security log (honeypot)
ssh maciejpi@10.22.68.249 "sudo tee /etc/crowdsec/acquis.d/nordabiz-security.yaml << 'EOF'
filenames:
- /var/log/nordabiznes/security.log
labels:
type: syslog
EOF"
Step 6: Restart CrowdSec and verify
ssh maciejpi@10.22.68.249 "sudo systemctl restart crowdsec"
ssh maciejpi@10.22.68.249 "sudo cscli metrics"
Expected: Metrics show nginx parser processing lines from access.log.
Step 7: Verify installed collections and scenarios
ssh maciejpi@10.22.68.249 "sudo cscli collections list"
ssh maciejpi@10.22.68.249 "sudo cscli scenarios list"
Expected: crowdsecurity/nginx, crowdsecurity/base-http-scenarios, crowdsecurity/linux listed.
Step 8: Commit
# No local code changes — this is infrastructure configuration on remote server
Task 2: Install CrowdSec on NPM (VM 119)
Target: 10.22.68.250 (R11-REVPROXY-01), NPM runs in Docker
CrowdSec agent on the NPM server — parses the nginx reverse proxy logs.
Step 1: Install CrowdSec on the host (not Docker)
NPM runs in Docker, but CrowdSec can run on the host and read Docker-mounted logs.
ssh maciejpi@10.22.68.250 "curl -s https://install.crowdsec.net | sudo sh"
ssh maciejpi@10.22.68.250 "sudo apt install -y crowdsec"
Step 2: Find NPM log location
ssh maciejpi@10.22.68.250 "docker inspect nginx-proxy-manager_app_1 | grep -A5 Mounts | head -20"
ssh maciejpi@10.22.68.250 "ls -la /var/docker/npm/data/logs/ 2>/dev/null || ls -la /opt/npm/data/logs/ 2>/dev/null"
Note: Adjust path in next step based on actual log location found.
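The host-side mount paths can also be pulled straight out of the `docker inspect` JSON. A minimal sketch against a hypothetical sample of the `Mounts` array — the real container name and paths come from the commands above:

```shell
# Extract host-side mount sources from `docker inspect` output.
# `sample` is hypothetical; in practice feed in the real output via:
#   sample="$(docker inspect nginx-proxy-manager_app_1)"
sample='[{"Mounts":[{"Source":"/var/docker/npm/data","Destination":"/data"},{"Source":"/var/docker/npm/letsencrypt","Destination":"/etc/letsencrypt"}]}]'
echo "$sample" | grep -o '"Source":"[^"]*"' | cut -d'"' -f4
```

Each printed path is a candidate location for the NPM log directory (look for a `logs/` subdirectory under it).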
Step 3: Install nginx collection
ssh maciejpi@10.22.68.250 "sudo cscli collections install crowdsecurity/nginx"
ssh maciejpi@10.22.68.250 "sudo cscli collections install crowdsecurity/base-http-scenarios"
Step 4: Configure acquisition for NPM logs
# Path will be adjusted based on Step 2 findings
ssh maciejpi@10.22.68.250 "sudo tee /etc/crowdsec/acquis.d/npm.yaml << 'EOF'
filenames:
- /var/docker/npm/data/logs/proxy-host-*_access.log
labels:
type: nginx
---
filenames:
- /var/docker/npm/data/logs/dead-host-*_access.log
labels:
type: nginx
EOF"
Step 5: Restart and verify
ssh maciejpi@10.22.68.250 "sudo systemctl restart crowdsec"
ssh maciejpi@10.22.68.250 "sudo cscli metrics"
Expected: nginx parser processing NPM proxy logs.
Step 6: Decide LAPI topology (multi-server setup)
We need to decide LAPI topology. Two options:
- Option A: Each server runs its own LAPI (simpler, isolated)
- Option B: Central LAPI on one server, agents register remotely (shared decisions)
Recommended: Option A initially — each server independent, both enrolled in CrowdSec Console for community blocklist. Option B can be configured later when OPNsense is deployed.
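If Option B is adopted later, each agent is pointed at the central LAPI through its credentials file. A sketch only — the URL assumes the central LAPI would live on the OPNsense VM, and the login/password values are placeholders generated on the LAPI host (e.g. with `cscli machines add`):

```yaml
# /etc/crowdsec/local_api_credentials.yaml on the agent side (hypothetical values)
url: http://10.22.68.155:8080
login: R11-REVPROXY-01
password: <generated on the LAPI host when registering the machine>
```

After editing the file, restart the agent's crowdsec service and confirm the machine shows as validated on the LAPI host.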
Task 3: Enroll CrowdSec instances in Console
Prerequisite: CrowdSec account at https://app.crowdsec.net
Step 1: Create CrowdSec Console account
Navigate to https://app.crowdsec.net and create an account. Copy the enrollment key from Security Engine → Engines page.
Step 2: Enroll NordaBiz instance
ssh maciejpi@10.22.68.249 "sudo cscli console enroll --name NORDABIZ-01 --tags nordabiz --tags inpi YOUR-ENROLL-KEY"
Step 3: Enroll NPM instance
ssh maciejpi@10.22.68.250 "sudo cscli console enroll --name R11-REVPROXY-01 --tags npm --tags inpi YOUR-ENROLL-KEY"
Step 4: Validate enrollments in Console webapp
Open https://app.crowdsec.net → accept both pending enrollments.
Step 5: Subscribe to community blocklists
In Console → Blocklists tab → subscribe both engines to relevant blocklists.
Step 6: Verify blocklist decisions
ssh maciejpi@10.22.68.249 "sudo cscli metrics show decisions"
ssh maciejpi@10.22.68.250 "sudo cscli metrics show decisions"
Expected: Community blocklist decisions visible within 2 hours.
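Decisions can also be counted mechanically from `cscli decisions list -o json`. A sketch against a hypothetical sample of that output — real data comes from the commands above:

```shell
# Count ban-type decisions in `cscli decisions list -o json` output.
# `decisions_json` is a hypothetical sample; in practice:
#   decisions_json="$(ssh maciejpi@10.22.68.249 'sudo cscli decisions list -o json')"
decisions_json='[{"decisions":[{"type":"ban","value":"203.0.113.7"},{"type":"ban","value":"198.51.100.9"}]}]'
bans="$(echo "$decisions_json" | grep -o '"type":"ban"' | wc -l | tr -d ' ')"
echo "active bans: $bans"
```

A non-zero count within a couple of hours of subscribing indicates the community blocklist is flowing.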
Task 4: Register IP and DNS for OPNsense VM
Skills: @ipam-manager, @dns-technitium-manager
Step 1: Reserve IP 10.22.68.155 in phpIPAM
Use ipam-manager skill to add IP:
# Via phpIPAM API
TOKEN=$(curl -sk -X POST https://10.22.68.172/api/mcp/user/ \
-u "api_claude:PASSWORD" | jq -r '.data.token')
curl -sk -X POST https://10.22.68.172/api/mcp/addresses/ \
-H "token: $TOKEN" -H "Content-Type: application/json" \
-d '{"subnetId":"7","ip":"10.22.68.155","hostname":"R11-OPNSENSE-01.inpi.local","description":"OPNsense Bridge IPS VM","state":"2"}'
Step 2: Create DNS A record
Use dns-technitium-manager skill:
TECH_TOKEN=$(jq -r '.technitium.primary.api_token' ~/.claude/config/inpi-infrastructure.json)
# A record
curl -s "http://10.22.68.171:5380/api/zones/records/add?token=${TECH_TOKEN}&domain=R11-OPNSENSE-01.inpi.local&zone=inpi.local&type=A&ipAddress=10.22.68.155&ttl=3600"
# Also add a short alias
curl -s "http://10.22.68.171:5380/api/zones/records/add?token=${TECH_TOKEN}&domain=opnsense.inpi.local&zone=inpi.local&type=CNAME&cname=R11-OPNSENSE-01.inpi.local&ttl=3600"
Step 3: Create PTR record
curl -s "http://10.22.68.171:5380/api/zones/records/add?token=${TECH_TOKEN}&domain=155.68.22.10.in-addr.arpa&zone=68.22.10.in-addr.arpa&type=PTR&ptrName=R11-OPNSENSE-01.inpi.local&ttl=3600"
Step 4: Verify DNS resolution
dig @10.22.68.171 opnsense.inpi.local +short
# Expected: R11-OPNSENSE-01.inpi.local → 10.22.68.155
dig @10.22.68.171 -x 10.22.68.155 +short
# Expected: R11-OPNSENSE-01.inpi.local.
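The PTR domain used in Step 3 is just the IP octets reversed under `in-addr.arpa`; a small shell sketch that derives it, so the record never has to be typed by hand:

```shell
# Derive the reverse-DNS (PTR) name for an IPv4 address.
ip="10.22.68.155"
ptr="$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa"}')"
echo "$ptr"   # → 155.68.22.10.in-addr.arpa
```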
Task 5: Create vmbr1 bridge on r11-pve-03
Target: r11-pve-03 (10.22.68.123)
A new Proxmox bridge is needed as the "exit" from the OPNsense bridge toward the servers; vmbr0 is the "entry" bridge from the FortiGate side.
Step 1: Check current network config
ssh root@10.22.68.123 "cat /etc/network/interfaces"
Step 2: Add vmbr1 bridge
ssh root@10.22.68.123 "cat >> /etc/network/interfaces << 'EOF'
auto vmbr1
iface vmbr1 inet manual
bridge-ports none
bridge-stp off
bridge-fd 0
bridge-mcsnoop 0
#OPNsense Bridge OUT - server side
EOF"
Step 3: Bring up vmbr1
ssh root@10.22.68.123 "ifup vmbr1"
ssh root@10.22.68.123 "brctl show vmbr1"
Expected: vmbr1 bridge exists with no member ports (OPNsense VM will connect via virtio).
Step 4: Verify no impact on existing network
ssh root@10.22.68.123 "brctl show"
ssh root@10.22.68.123 "ping -c 2 10.22.68.1"
Expected: vmbr0 unchanged, gateway reachable.
Task 6: Create OPNsense VM 155 on Proxmox
Skill: @proxmox-manager
Important: OPNsense cannot use cloud-init template. Must install from ISO manually.
Step 1: Download OPNsense 26.1 ISO to Proxmox
ssh root@10.22.68.123 "cd /var/lib/vz/template/iso && wget -q https://pkg.opnsense.org/releases/26.1/OPNsense-26.1-dvd-amd64.iso.bz2 && bunzip2 OPNsense-26.1-dvd-amd64.iso.bz2"
Step 2: Verify ISO
ssh root@10.22.68.123 "ls -lh /var/lib/vz/template/iso/OPNsense-26.1-dvd-amd64.iso"
Step 3: Create VM 155 with 3 NICs
ssh root@10.22.68.123 "qm create 155 \
--name R11-OPNSENSE-01 \
--ostype other \
--bios ovmf \
--machine q35 \
--efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=0 \
--cpu host \
--cores 4 \
--memory 8192 \
--balloon 0 \
--scsihw virtio-scsi-single \
--scsi0 local-lvm:32,ssd=1,discard=on \
--ide2 local:iso/OPNsense-26.1-dvd-amd64.iso,media=cdrom \
--boot order='ide2;scsi0' \
--net0 virtio,bridge=vmbr0,firewall=0,queues=4 \
--net1 virtio,bridge=vmbr1,firewall=0,queues=4 \
--net2 virtio,bridge=vmbr0,firewall=0,queues=4 \
--onboot 1 \
--description 'OPNsense Bridge IPS - transparent filtering bridge between FortiGate and servers'"
Key settings:
- bios=ovmf, machine=q35 — UEFI for OPNsense
- balloon=0 — ballooning disabled (required for FreeBSD)
- firewall=0 — Proxmox firewall off on all NICs
- queues=4 — multiqueue matching the core count
- net0=vmbr0 — Bridge IN (from FortiGate)
- net1=vmbr1 — Bridge OUT (to servers)
- net2=vmbr0 — Management (gets IP 10.22.68.155)
Step 4: Verify VM creation
ssh root@10.22.68.123 "qm config 155"
Step 5: Start VM for installation
ssh root@10.22.68.123 "qm start 155"
Step 6: Access console for OPNsense installation
Open Proxmox Web UI (https://10.22.68.123:8006) → VM 155 → Console.
Install OPNsense:
- Boot from ISO
- Login: installer / opnsense
- Select ZFS filesystem
- Select target disk (vtbd0)
- Set root password
- Reboot
Step 7: Remove ISO after installation
ssh root@10.22.68.123 "qm set 155 --ide2 none --boot order='scsi0'"
Task 7: Configure OPNsense — Management Interface
Prerequisites: OPNsense installed and booted. Access via Proxmox console.
Step 1: Assign interfaces from OPNsense console
In Proxmox console (VNC), at OPNsense prompt:
- Option 1 — Assign interfaces
- VLANs? N
- WAN interface: vtnet0
- LAN interface: vtnet1
- Optional interface 1: vtnet2 (this becomes OPT1 = management)
Step 2: Configure OPT1 (management) with static IP
From console, option 2 — Set interface IP:
- Select 3 (OPT1/vtnet2)
- IPv4: 10.22.68.155
- Subnet: 24
- Gateway: 10.22.68.1
- DHCP: N
- Enable WebGUI on this interface: Y
- Protocol: HTTPS
Step 3: Verify management access
ping -c 3 10.22.68.155
curl -sk https://10.22.68.155/ | head -5
Expected: OPNsense WebGUI reachable at https://10.22.68.155 (default login: root / opnsense).
Step 4: Change root password via WebGUI
Login → System → Access → Users → root → Change password.
Task 8: Configure OPNsense — Transparent Bridge
Prerequisites: Management interface working at 10.22.68.155. All configuration via WebGUI.
Step 1: Set system tunables
System → Settings → Tunables:
- Add: net.link.bridge.pfil_bridge=1
- Add: net.link.bridge.pfil_member=0
Step 2: Configure WAN interface (vtnet0)
Interfaces → WAN:
- IPv4 Configuration Type: None
- IPv6 Configuration Type: None
- Uncheck "Block private networks"
- Uncheck "Block bogon networks"
- Save + Apply
Step 3: Configure LAN interface (vtnet1)
Interfaces → LAN:
- IPv4 Configuration Type: None
- IPv6 Configuration Type: None
- Disable DHCP server if enabled
- Save + Apply
Step 4: Create bridge device
Interfaces → Other Types → Bridge → Add:
- Member interfaces: vtnet0 (WAN) + vtnet1 (LAN)
- Description: IPS_BRIDGE
- Do NOT enable link-local address
- Save
Step 5: Assign bridge interface
Interfaces → Assignments:
- Assign the new bridge0 device
- Name it BRIDGE
- Enable it
- IPv4/IPv6: None
- Save + Apply
Step 6: Create bridge firewall rules
Firewall → Rules → BRIDGE:
- Rule 1: Pass, IPv4+IPv6, any protocol, source any, destination any, direction in
- Rule 2: Pass, IPv4+IPv6, any protocol, source any, destination any, direction out
This allows all traffic through the bridge (Suricata will handle inspection).
Step 7: Disable hardware offloading
Interfaces → Settings:
- Uncheck: Hardware CRC
- Uncheck: Hardware TSO
- Uncheck: Hardware LRO
- Uncheck: VLAN Hardware Filtering
- Save
Step 8: Set VirtIO checksum tunable
System → Settings → Tunables:
- Add: hw.vtnet.csum_disable=1
Step 9: Reboot OPNsense
System → Firmware → Reboot.
Step 10: Verify bridge is up
After reboot, access https://10.22.68.155:
- Interfaces → Overview: bridge0 should show UP
- Dashboard: WAN and LAN interfaces should be UP
Task 9: Configure Suricata IDS on OPNsense
Step 1: Download ET Open rulesets
Services → Intrusion Detection → Download:
- Check the ET open/emerging ruleset
- Click Download & Update Rules
- Wait for completion
Step 2: Configure IDS (NOT IPS yet)
Services → Intrusion Detection → Administration:
- Enabled: checked
- IPS mode: unchecked (start as IDS only!)
- Promiscuous mode: checked
- Pattern matcher: Hyperscan (if available, else Aho-Corasick)
- Interfaces: select WAN (vtnet0) — NOT the bridge interface!
- Log rotate: weekly
- Save alerts: checked
- Apply
Step 3: Enable rule categories
Services → Intrusion Detection → Policies:
- Create policy: "Block Malware and Exploits"
- Rulesets: ET Open
- Match: action=alert, affected_product=Server
- New action: Alert (keep as alert for IDS mode)
- Priority: 1
Step 4: Apply and verify
Click "Apply" to start Suricata.
Services → Intrusion Detection → Alerts: Wait for first alerts to appear (may take minutes depending on traffic).
Step 5: Verify Suricata is running
ssh maciejpi@10.22.68.155 "sockstat -l | grep suricata"
Or check via OPNsense WebGUI → Services → Intrusion Detection → Administration → status.
Task 10: Install CrowdSec on OPNsense
Step 1: Install os-crowdsec plugin
System → Firmware → Plugins:
- Find os-crowdsec
- Click Install
- Wait for completion
Step 2: Verify installation
Services → CrowdSec → Overview:
- CrowdSec service: Running
- LAPI: Running
- Firewall bouncer: Running
Default collections auto-installed:
- crowdsecurity/freebsd
- crowdsecurity/opnsense
Step 3: Enroll in CrowdSec Console
ssh maciejpi@10.22.68.155 "sudo cscli console enroll --name R11-OPNSENSE-01 --tags opnsense --tags ips --tags inpi YOUR-ENROLL-KEY"
Validate in Console webapp.
Step 4: Test bouncer
# Temporarily ban a test IP
ssh maciejpi@10.22.68.155 "sudo cscli decisions add -t ban -d 2m -i 192.0.2.1"
# Verify it's in the firewall
ssh maciejpi@10.22.68.155 "sudo cscli decisions list"
# Remove test ban
ssh maciejpi@10.22.68.155 "sudo cscli decisions delete --ip 192.0.2.1"
Task 11: Bridge Insertion (Service Window)
CRITICAL: This causes ~5 minutes of downtime for all INPI servers behind FortiGate.
Schedule: Outside business hours (evening/weekend).
Step 1: Pre-flight checks
# Verify OPNsense bridge is working
curl -sk https://10.22.68.155/ | head -3
# Verify FortiGate backup
~/.claude/plugins/marketplaces/inpi-infrastructure/fortigate-manager/scripts/fortigate_ssh.sh "execute backup config flash"
# Verify current connectivity
curl -sI https://nordabiznes.pl/health | head -3
Step 2: Identify current server VLAN/port assignment on Proxmox
The bridge insertion method depends on network architecture:
Method A: Proxmox bridge reassignment (preferred if all servers on same PVE node)
- Move server VMs from vmbr0 to vmbr1
- OPNsense vtnet0 stays on vmbr0 (FortiGate side)
- OPNsense vtnet1 on vmbr1 (server side)
- All traffic from FortiGate to servers flows through OPNsense bridge
Method B: Physical cable swap (if servers on separate hardware)
- Disconnect server switch uplink from FortiGate
- Connect FortiGate → OPNsense physical port
- Connect OPNsense physical port → server switch
Step 3: Execute bridge insertion (Method A)
For each server VM on r11-pve-03 that should be protected:
# List VMs to identify which ones to move to vmbr1
ssh root@10.22.68.123 "qm list"
# For each server VM (e.g., VM 119 = NPM, VM 249 = NordaBiz):
# Check current bridge
ssh root@10.22.68.123 "qm config 119 | grep net"
# Change to vmbr1
ssh root@10.22.68.123 "qm set 119 --net0 virtio=CURRENT_MAC,bridge=vmbr1"
NOTE: MAC address must be preserved! Extract from qm config first.
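Extracting the MAC from the `qm config` output can be scripted. A sketch against a sample config line — the MAC shown is hypothetical; use the real `qm config 119 | grep net` output:

```shell
# Pull the MAC out of a Proxmox `qm config` net line so it can be reused
# verbatim in: qm set 119 --net0 virtio=$mac,bridge=vmbr1
netline="net0: virtio=BC:24:11:AA:BB:CC,bridge=vmbr0,firewall=0,queues=4"   # sample line
mac="$(echo "$netline" | sed -n 's/.*virtio=\([^,]*\).*/\1/p')"
echo "$mac"   # → BC:24:11:AA:BB:CC
```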
Step 4: Verify connectivity through bridge
# From management network
curl -sI https://nordabiznes.pl/health | head -3
ping -c 3 10.22.68.250
ping -c 3 10.22.68.249
Expected: All services reachable through bridge.
Step 5: Verify Suricata sees traffic
Check OPNsense WebGUI → Services → Intrusion Detection → Alerts.
Traffic should start generating alerts within minutes.
Step 6: Monitor for 30 minutes
# Check NordaBiz is responding normally
for i in $(seq 1 30); do
curl -sI https://nordabiznes.pl/health | head -1
sleep 60
done
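The 30-minute watch above only prints status lines; a variant that counts failures makes it easy to decide at the end whether to roll back. The `probe` function is a stub standing in for the real `curl -sI https://nordabiznes.pl/health | head -1` call:

```shell
# Count non-200 responses during a monitoring window.
probe() { echo "HTTP/2 200"; }  # stub; replace body with: curl -sI https://nordabiznes.pl/health | head -1
fails=0
for i in 1 2 3; do              # in production: $(seq 1 30) with `sleep 60` between probes
  case "$(probe)" in
    *" 200"*) : ;;              # healthy response
    *) fails=$((fails + 1)) ;;  # anything else counts as a failure
  esac
done
echo "failures: $fails"         # → failures: 0
```

Any non-zero failure count during the window is a signal to run the Quick Rollback procedure.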
Task 12: Observation Period (1 week)
Duration: 7 days after bridge insertion.
Daily checks:
# 1. Check Suricata alerts
# OPNsense WebGUI → Services → Intrusion Detection → Alerts
# 2. Check CrowdSec decisions on all three servers
ssh maciejpi@10.22.68.249 "sudo cscli decisions list"
ssh maciejpi@10.22.68.250 "sudo cscli decisions list"
ssh maciejpi@10.22.68.155 "sudo cscli decisions list"
# 3. Check for false positives - are legitimate IPs being flagged?
ssh maciejpi@10.22.68.155 "sudo cscli alerts list --limit 20"
# 4. Check NordaBiz health
curl -sI https://nordabiznes.pl/health | head -3
# 5. Check OPNsense resource usage
# WebGUI → Dashboard: CPU, RAM, throughput
Tuning tasks during observation:
- Suppress noisy rules: If a Suricata SID generates many false positives:
- Services → Intrusion Detection → Policies → create suppress policy for that SID
- Whitelist known IPs: If CrowdSec blocks partner/monitoring IPs:
  - Remove the decision: ssh maciejpi@10.22.68.155 "sudo cscli decisions delete --ip TRUSTED_IP"
  - Add the IP to /etc/crowdsec/parsers/s02-enrich/mywhitelists.yaml so it is not banned again
- Document all false positives for later reference.
Task 13: Activate IPS Mode
Prerequisites: Observation period complete, false positives addressed.
Step 1: Switch Suricata from IDS to IPS
OPNsense WebGUI → Services → Intrusion Detection → Administration:
- IPS mode: checked
- Apply
Step 2: Update policy to drop malicious traffic
Services → Intrusion Detection → Policies:
- Edit "Block Malware and Exploits" policy
- New action: change from Alert to Drop
- Save + Apply
Step 3: Verify IPS is active
Services → Intrusion Detection → Administration: Check status shows "IPS mode" active.
Step 4: Monitor for 48 hours
# Check NordaBiz every 15 minutes for first 2 hours
for i in $(seq 1 8); do
echo "$(date): $(curl -sI https://nordabiznes.pl/health | head -1)"
sleep 900
done
Watch for:
- Legitimate traffic being dropped
- Services becoming unreachable
- Increased latency
Step 5: If issues — emergency rollback to IDS
OPNsense WebGUI → Services → Intrusion Detection → Administration:
- IPS mode: unchecked
- Apply
This immediately stops blocking, returns to alert-only mode.
Task 14: Final Documentation
Step 1: Update design document with actual values
Update docs/plans/2026-02-16-dmz-security-layer-design.md with:
- Actual OPNsense version installed
- CrowdSec Console account details (redacted)
- Any configuration deviations from plan
- False positive rules added during observation
Step 2: Create runbook for operations
Create docs/plans/2026-XX-XX-dmz-runbook.md with:
- Daily monitoring checklist
- CrowdSec update procedures
- OPNsense update procedures
- Rollback procedure (detailed)
- Contact info and escalation
Step 3: Update CLAUDE.md with new infrastructure
Add OPNsense VM to project context:
- VM 155 = R11-OPNSENSE-01 (10.22.68.155)
- Suricata IPS + CrowdSec bouncer
- Management: https://10.22.68.155
Step 4: Commit all documentation
git add docs/plans/
git commit -m "docs: Add DMZ security layer implementation plan and runbook"
Rollback Procedure (Any Phase)
If OPNsense bridge causes issues at any point:
Quick Rollback (< 2 minutes)
# 1. Stop OPNsense VM
ssh root@10.22.68.123 "qm stop 155"
# 2. Move server VMs back to vmbr0
ssh root@10.22.68.123 "qm set 119 --net0 virtio=ORIGINAL_MAC,bridge=vmbr0"
ssh root@10.22.68.123 "qm set 249 --net0 virtio=ORIGINAL_MAC,bridge=vmbr0"
# 3. Verify
curl -sI https://nordabiznes.pl/health | head -3
Permanent Rollback
- Execute Quick Rollback above
- Delete VM: ssh root@10.22.68.123 "qm destroy 155 --purge"
- Remove vmbr1 from /etc/network/interfaces
- Remove DNS/IPAM records
- CrowdSec on NordaBiz/NPM can remain (they work independently)
Dependencies Between Tasks
Task 1 (CrowdSec NordaBiz) ──┐
Task 2 (CrowdSec NPM) ───────┤── Can run in parallel
Task 3 (Console enrollment) ──┘── Depends on Task 1+2
Task 4 (IPAM + DNS) ─────────┐
Task 5 (vmbr1 bridge) ───────┤── Can run in parallel
└── Both needed before Task 6
Task 6 (Create VM) ──── Depends on Task 4+5
Task 7 (Management IF) ── Depends on Task 6
Task 8 (Bridge config) ── Depends on Task 7
Task 9 (Suricata IDS) ── Depends on Task 8
Task 10 (CrowdSec OPN) ── Depends on Task 8
Task 11 (Bridge insert) ── Depends on Task 9+10 (SERVICE WINDOW!)
Task 12 (Observation) ── Depends on Task 11 (1 WEEK)
Task 13 (Activate IPS) ── Depends on Task 12
Task 14 (Documentation) ── Depends on Task 13
Parallel execution opportunities:
- Tasks 1, 2, 4, 5 can all run simultaneously
- Tasks 9, 10 can run simultaneously
- Tasks 1-5 can happen days before Tasks 6-11