# DMZ Security Layer — Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Deploy OPNsense transparent bridge VM with Suricata IPS + CrowdSec distributed agents to replace expired FortiGuard security capabilities.
**Architecture:** OPNsense VM 155 in bridge mode between FortiGate and server switch, Suricata IDS/IPS with ET Open rules, CrowdSec agents on NPM and NordaBiz feeding community blocklist decisions to OPNsense bouncer.
**Tech Stack:** OPNsense 26.1, Suricata, CrowdSec (LAPI + agents + bouncers), Proxmox VE, ET Open rulesets
**Design document:** `docs/plans/2026-02-16-dmz-security-layer-design.md`
---
## Task 1: Install CrowdSec on NordaBiz (VM 249)
**Target:** 57.128.200.27 (NORDABIZ-01)
CrowdSec agent on the NordaBiz server: parses the Flask/Gunicorn logs, detects brute-force attempts and honeypot hits.
**Files:**
- Create: `/etc/crowdsec/acquis.d/nordabiz.yaml` (on VM 249)
- Modify: systemd service (automatic)
**Step 1: Install CrowdSec repository and package**
```bash
ssh maciejpi@57.128.200.27 "curl -s https://install.crowdsec.net | sudo sh"
ssh maciejpi@57.128.200.27 "sudo apt install -y crowdsec"
```
**Step 2: Verify CrowdSec is running**
```bash
ssh maciejpi@57.128.200.27 "sudo cscli version && sudo systemctl status crowdsec --no-pager"
```
Expected: CrowdSec running, version displayed.
**Step 3: Install collections for nginx + HTTP scenarios**
NordaBiz runs behind nginx on port 80, Flask/Gunicorn on port 5000.
```bash
ssh maciejpi@57.128.200.27 "sudo cscli collections install crowdsecurity/nginx"
ssh maciejpi@57.128.200.27 "sudo cscli collections install crowdsecurity/base-http-scenarios"
```
**Step 4: Configure acquisition for nginx access logs**
```bash
ssh maciejpi@57.128.200.27 "sudo tee /etc/crowdsec/acquis.d/nordabiz.yaml << 'EOF'
filenames:
  - /var/log/nginx/access.log
labels:
  type: nginx
EOF"
```
**Step 5: Configure acquisition for NordaBiz security log (honeypot)**
```bash
ssh maciejpi@57.128.200.27 "sudo tee /etc/crowdsec/acquis.d/nordabiz-security.yaml << 'EOF'
filenames:
  - /var/log/nordabiznes/security.log
labels:
  type: syslog
EOF"
```
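The security log's exact format is application-specific and must be confirmed before writing a custom CrowdSec parser or scenario for it. As a quick local sanity check that the source IP can be extracted from a honeypot entry, a sketch (the sample line below is hypothetical, not the real NordaBiz log format):

```shell
# Hypothetical honeypot entry; confirm the real /var/log/nordabiznes/security.log
# format before building a custom CrowdSec scenario around it.
line='2026-02-16 10:00:00 WARNING honeypot hit path=/wp-login.php ip=203.0.113.7'

# Pull out the offending source IP (what a custom scenario would bucket on).
ip=$(echo "$line" | sed -n 's/.*ip=\([0-9.]*\).*/\1/p')
echo "$ip"   # 203.0.113.7
```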
**Step 6: Restart CrowdSec and verify**
```bash
ssh maciejpi@57.128.200.27 "sudo systemctl restart crowdsec"
ssh maciejpi@57.128.200.27 "sudo cscli metrics"
```
Expected: Metrics show nginx parser processing lines from access.log.
**Step 7: Verify installed collections and scenarios**
```bash
ssh maciejpi@57.128.200.27 "sudo cscli collections list"
ssh maciejpi@57.128.200.27 "sudo cscli scenarios list"
```
Expected: crowdsecurity/nginx, crowdsecurity/base-http-scenarios, crowdsecurity/linux listed.
**Step 8: Commit**
```bash
# No local code changes — this is infrastructure configuration on remote server
```
---
## Task 2: Install CrowdSec on NPM (VM 119)
**Target:** 10.22.68.250 (R11-REVPROXY-01), NPM runs in Docker
CrowdSec agent on the NPM server: parses the nginx reverse proxy logs.
**Step 1: Install CrowdSec on the host (not Docker)**
NPM runs in Docker, but CrowdSec can run on the host and read Docker-mounted logs.
```bash
ssh maciejpi@10.22.68.250 "curl -s https://install.crowdsec.net | sudo sh"
ssh maciejpi@10.22.68.250 "sudo apt install -y crowdsec"
```
**Step 2: Find NPM log location**
```bash
ssh maciejpi@10.22.68.250 "docker inspect nginx-proxy-manager_app_1 | grep -A5 Mounts | head -20"
ssh maciejpi@10.22.68.250 "ls -la /var/docker/npm/data/logs/ 2>/dev/null || ls -la /opt/npm/data/logs/ 2>/dev/null"
```
Note: Adjust path in next step based on actual log location found.
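Rather than eyeballing the output, the path check can be scripted; a small helper (the candidate directories are the same guesses as above and are assumptions until confirmed against the `docker inspect` output):

```shell
# Print the first existing directory from a list of candidates, so the
# acquis.d path can be filled in from whichever layout this host uses.
find_npm_logs() {
  for dir in "$@"; do
    [ -d "$dir" ] && { echo "$dir"; return 0; }
  done
  return 1
}

# Usage (run on the NPM host):
# find_npm_logs /var/docker/npm/data/logs /opt/npm/data/logs
```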
**Step 3: Install nginx collection**
```bash
ssh maciejpi@10.22.68.250 "sudo cscli collections install crowdsecurity/nginx"
ssh maciejpi@10.22.68.250 "sudo cscli collections install crowdsecurity/base-http-scenarios"
```
**Step 4: Configure acquisition for NPM logs**
```bash
# Path will be adjusted based on Step 2 findings
ssh maciejpi@10.22.68.250 "sudo tee /etc/crowdsec/acquis.d/npm.yaml << 'EOF'
filenames:
  - /var/docker/npm/data/logs/proxy-host-*_access.log
labels:
  type: nginx
---
filenames:
  - /var/docker/npm/data/logs/dead-host-*_access.log
labels:
  type: nginx
EOF"
```
**Step 5: Restart and verify**
```bash
ssh maciejpi@10.22.68.250 "sudo systemctl restart crowdsec"
ssh maciejpi@10.22.68.250 "sudo cscli metrics"
```
Expected: nginx parser processing NPM proxy logs.
**Step 6: Register NPM agent with NordaBiz LAPI (multi-server setup)**
We need to decide LAPI topology. Two options:
- **Option A:** Each server runs its own LAPI (simpler, isolated)
- **Option B:** Central LAPI on one server, agents register remotely (shared decisions)
**Recommended: Option A initially** — each server independent, both enrolled in CrowdSec Console for community blocklist. Option B can be configured later when OPNsense is deployed.
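If Option B is adopted later, the registration flow would look roughly like this (a sketch only; port 8080 is the CrowdSec LAPI default, and the machine name follows this plan's hostnames):

```shell
# On each remote agent: register against the central LAPI (default port 8080).
# sudo cscli lapi register -u http://57.128.200.27:8080 --machine R11-REVPROXY-01
# sudo systemctl restart crowdsec

# On the LAPI host: list pending machines and validate the new agent.
# sudo cscli machines list
# sudo cscli machines validate R11-REVPROXY-01
```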
---
## Task 3: Enroll CrowdSec instances in Console
**Prerequisite:** CrowdSec account at https://app.crowdsec.net
**Step 1: Create CrowdSec Console account**
Navigate to https://app.crowdsec.net and create an account. Copy the enrollment key from Security Engine → Engines page.
**Step 2: Enroll NordaBiz instance**
```bash
ssh maciejpi@57.128.200.27 "sudo cscli console enroll --name NORDABIZ-01 --tags nordabiz --tags inpi YOUR-ENROLL-KEY"
```
**Step 3: Enroll NPM instance**
```bash
ssh maciejpi@10.22.68.250 "sudo cscli console enroll --name R11-REVPROXY-01 --tags npm --tags inpi YOUR-ENROLL-KEY"
```
**Step 4: Validate enrollments in Console webapp**
Open https://app.crowdsec.net → accept both pending enrollments.
**Step 5: Subscribe to community blocklists**
In Console → Blocklists tab → subscribe both engines to relevant blocklists.
**Step 6: Verify blocklist decisions**
```bash
ssh maciejpi@57.128.200.27 "sudo cscli metrics show decisions"
ssh maciejpi@10.22.68.250 "sudo cscli metrics show decisions"
```
Expected: Community blocklist decisions visible within 2 hours.
---
## Task 4: Register IP and DNS for OPNsense VM
**Skills:** @ipam-manager, @dns-technitium-manager
**Step 1: Reserve IP 10.22.68.155 in phpIPAM**
Use ipam-manager skill to add IP:
```bash
# Via phpIPAM API
TOKEN=$(curl -sk -X POST https://10.22.68.172/api/mcp/user/ \
-u "api_claude:PASSWORD" | jq -r '.data.token')
curl -sk -X POST https://10.22.68.172/api/mcp/addresses/ \
-H "token: $TOKEN" -H "Content-Type: application/json" \
-d '{"subnetId":"7","ip":"10.22.68.155","hostname":"R11-OPNSENSE-01.inpi.local","description":"OPNsense Bridge IPS VM","state":"2"}'
```
**Step 2: Create DNS A record**
Use dns-technitium-manager skill:
```bash
TECH_TOKEN=$(jq -r '.technitium.primary.api_token' ~/.claude/config/inpi-infrastructure.json)
# A record
curl -s "http://10.22.68.171:5380/api/zones/records/add?token=${TECH_TOKEN}&domain=R11-OPNSENSE-01.inpi.local&zone=inpi.local&type=A&ipAddress=10.22.68.155&ttl=3600"
# Also add a short alias
curl -s "http://10.22.68.171:5380/api/zones/records/add?token=${TECH_TOKEN}&domain=opnsense.inpi.local&zone=inpi.local&type=CNAME&cname=R11-OPNSENSE-01.inpi.local&ttl=3600"
```
**Step 3: Create PTR record**
```bash
curl -s "http://10.22.68.171:5380/api/zones/records/add?token=${TECH_TOKEN}&domain=155.68.22.10.in-addr.arpa&zone=68.22.10.in-addr.arpa&type=PTR&ptrName=R11-OPNSENSE-01.inpi.local&ttl=3600"
```
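The PTR owner name is just the IPv4 octets reversed under `in-addr.arpa`; a tiny helper to derive it (a convenience sketch, not part of the Technitium API):

```shell
# Build the in-addr.arpa PTR owner name for a dotted-quad IPv4 address.
ptr_name() {
  local a b c d
  IFS=. read -r a b c d <<< "$1"
  echo "${d}.${c}.${b}.${a}.in-addr.arpa"
}

ptr_name 10.22.68.155   # 155.68.22.10.in-addr.arpa
```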
**Step 4: Verify DNS resolution**
```bash
dig @10.22.68.171 opnsense.inpi.local +short
# Expected: R11-OPNSENSE-01.inpi.local → 10.22.68.155
dig @10.22.68.171 -x 10.22.68.155 +short
# Expected: R11-OPNSENSE-01.inpi.local.
```
---
## Task 5: Create vmbr1 bridge on r11-pve-03
**Target:** r11-pve-03 (10.22.68.123)
A new Proxmox bridge is needed as the "out" side of the OPNsense bridge toward the servers; vmbr0 is the "in" bridge on the FortiGate side.
**Step 1: Check current network config**
```bash
ssh root@10.22.68.123 "cat /etc/network/interfaces"
```
**Step 2: Add vmbr1 bridge**
```bash
ssh root@10.22.68.123 "cat >> /etc/network/interfaces << 'EOF'
auto vmbr1
iface vmbr1 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    bridge-mcsnoop 0
# OPNsense Bridge OUT - server side
EOF"
```
**Step 3: Bring up vmbr1**
```bash
ssh root@10.22.68.123 "ifup vmbr1"
ssh root@10.22.68.123 "brctl show vmbr1"
```
Expected: vmbr1 bridge exists with no member ports (OPNsense VM will connect via virtio).
**Step 4: Verify no impact on existing network**
```bash
ssh root@10.22.68.123 "brctl show"
ssh root@10.22.68.123 "ping -c 2 10.22.68.1"
```
Expected: vmbr0 unchanged, gateway reachable.
---
## Task 6: Create OPNsense VM 155 on Proxmox
**Skill:** @proxmox-manager
**Important:** OPNsense cannot use cloud-init template. Must install from ISO manually.
**Step 1: Download OPNsense 26.1 ISO to Proxmox**
```bash
ssh root@10.22.68.123 "cd /var/lib/vz/template/iso && wget -q https://pkg.opnsense.org/releases/26.1/OPNsense-26.1-dvd-amd64.iso.bz2 && bunzip2 OPNsense-26.1-dvd-amd64.iso.bz2"
```
**Step 2: Verify ISO**
```bash
ssh root@10.22.68.123 "ls -lh /var/lib/vz/template/iso/OPNsense-26.1-dvd-amd64.iso"
```
**Step 3: Create VM 155 with 3 NICs**
```bash
ssh root@10.22.68.123 "qm create 155 \
--name R11-OPNSENSE-01 \
--ostype other \
--bios ovmf \
--machine q35 \
--efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=0 \
--cpu host \
--cores 4 \
--memory 8192 \
--balloon 0 \
--scsihw virtio-scsi-single \
--scsi0 local-lvm:32,ssd=1,discard=on \
--ide2 local:iso/OPNsense-26.1-dvd-amd64.iso,media=cdrom \
--boot order='ide2;scsi0' \
--net0 virtio,bridge=vmbr0,firewall=0,queues=4 \
--net1 virtio,bridge=vmbr1,firewall=0,queues=4 \
--net2 virtio,bridge=vmbr0,firewall=0,queues=4 \
--onboot 1 \
--description 'OPNsense Bridge IPS - transparent filtering bridge between FortiGate and servers'"
```
Key settings:
- `bios=ovmf`, `machine=q35` — UEFI for OPNsense
- `balloon=0` — ballooning disabled (required for FreeBSD)
- `firewall=0` — Proxmox firewall off on all NICs
- `queues=4` — multiqueue matching core count
- `net0=vmbr0` — Bridge IN (from FortiGate)
- `net1=vmbr1` — Bridge OUT (to servers)
- `net2=vmbr0` — Management (gets IP 10.22.68.155)
**Step 4: Verify VM creation**
```bash
ssh root@10.22.68.123 "qm config 155"
```
**Step 5: Start VM for installation**
```bash
ssh root@10.22.68.123 "qm start 155"
```
**Step 6: Access console for OPNsense installation**
Open Proxmox Web UI (https://10.22.68.123:8006) → VM 155 → Console.
Install OPNsense:
1. Boot from ISO
2. Login: `installer` / `opnsense`
3. Select ZFS filesystem
4. Select target disk (`da0`; the VM uses virtio-scsi, so the disk appears as `da0`, not `vtbd0`)
5. Set root password
6. Reboot
**Step 7: Remove ISO after installation**
```bash
ssh root@10.22.68.123 "qm set 155 --ide2 none --boot order='scsi0'"
```
---
## Task 7: Configure OPNsense — Management Interface
**Prerequisites:** OPNsense installed and booted. Access via Proxmox console.
**Step 1: Assign interfaces from OPNsense console**
In Proxmox console (VNC), at OPNsense prompt:
1. Option `1` — Assign interfaces
2. VLANs? `N`
3. WAN interface: `vtnet0`
4. LAN interface: `vtnet1`
5. Optional interface 1: `vtnet2` (this becomes OPT1 = management)
**Step 2: Configure OPT1 (management) with static IP**
From console, option `2` — Set interface IP:
1. Select `3` (OPT1/vtnet2)
2. IPv4: `10.22.68.155`
3. Subnet: `24`
4. Gateway: `10.22.68.1`
5. DHCP: `N`
6. Enable WebGUI on this interface: `Y`
7. Protocol: `HTTPS`
**Step 3: Verify management access**
```bash
ping -c 3 10.22.68.155
curl -sk https://10.22.68.155/ | head -5
```
Expected: OPNsense WebGUI reachable at https://10.22.68.155 (default login: root / opnsense).
**Step 4: Change root password via WebGUI**
Login → System → Access → Users → root → Change password.
---
## Task 8: Configure OPNsense — Transparent Bridge
**Prerequisites:** Management interface working at 10.22.68.155. All configuration via WebGUI.
**Step 1: Set system tunables**
System → Settings → Tunables:
- Add: `net.link.bridge.pfil_bridge` = `1`
- Add: `net.link.bridge.pfil_member` = `0`
**Step 2: Configure WAN interface (vtnet0)**
Interfaces → WAN:
- IPv4 Configuration Type: `None`
- IPv6 Configuration Type: `None`
- Uncheck "Block private networks"
- Uncheck "Block bogon networks"
- Save + Apply
**Step 3: Configure LAN interface (vtnet1)**
Interfaces → LAN:
- IPv4 Configuration Type: `None`
- IPv6 Configuration Type: `None`
- Disable DHCP server if enabled
- Save + Apply
**Step 4: Create bridge device**
Interfaces → Other Types → Bridge → Add:
- Member interfaces: `vtnet0 (WAN)` + `vtnet1 (LAN)`
- Description: `IPS_BRIDGE`
- Do NOT enable link-local address
- Save
**Step 5: Assign bridge interface**
Interfaces → Assignments:
- Assign new `bridge0` device
- Name it `BRIDGE`
- Enable it
- IPv4/IPv6: `None`
- Save + Apply
**Step 6: Create bridge firewall rules**
Firewall → Rules → BRIDGE:
- Rule 1: Pass, IPv4+IPv6, any protocol, source any, destination any, direction in
- Rule 2: Pass, IPv4+IPv6, any protocol, source any, destination any, direction out
This allows all traffic through the bridge (Suricata will handle inspection).
**Step 7: Disable hardware offloading**
Interfaces → Settings:
- Uncheck: Hardware CRC
- Uncheck: Hardware TSO
- Uncheck: Hardware LRO
- Uncheck: VLAN Hardware Filtering
- Save
**Step 8: Set VirtIO checksum tunable**
System → Settings → Tunables:
- Add: `hw.vtnet.csum_disable` = `1`
**Step 9: Reboot OPNsense**
System → Firmware → Reboot.
**Step 10: Verify bridge is up**
After reboot, access https://10.22.68.155:
- Interfaces → Overview: bridge0 should show UP
- Dashboard: WAN and LAN interfaces should be UP
---
## Task 9: Configure Suricata IDS on OPNsense
**Step 1: Download ET Open rulesets**
Services → Intrusion Detection → Download:
- Check `ET open/emerging` ruleset
- Click Download & Update Rules
- Wait for completion
**Step 2: Configure IDS (NOT IPS yet)**
Services → Intrusion Detection → Administration:
- Enabled: `checked`
- IPS mode: `unchecked` (start as IDS only!)
- Promiscuous mode: `checked`
- Pattern matcher: `Hyperscan` (if available, else Aho-Corasick)
- Interfaces: Select `WAN (vtnet0)`, **NOT the bridge interface!**
- Log rotate: `weekly`
- Save alerts: `checked`
- Apply
**Step 3: Enable rule categories**
Services → Intrusion Detection → Policies:
- Create policy: "Block Malware and Exploits"
- Rulesets: ET Open
- Match: action=alert, affected_product=Server
- New action: `Alert` (keep as alert for IDS mode)
- Priority: 1
**Step 4: Apply and verify**
Click "Apply" to start Suricata.
Services → Intrusion Detection → Alerts:
Wait for first alerts to appear (may take minutes depending on traffic).
**Step 5: Verify Suricata is running**
```bash
ssh maciejpi@10.22.68.155 "sockstat -l | grep suricata"
```
Or check via OPNsense WebGUI → Services → Intrusion Detection → Administration → status.
---
## Task 10: Install CrowdSec on OPNsense
**Step 1: Install os-crowdsec plugin**
System → Firmware → Plugins:
- Find `os-crowdsec`
- Click Install
- Wait for completion
**Step 2: Verify installation**
Services → CrowdSec → Overview:
- CrowdSec service: Running
- LAPI: Running
- Firewall bouncer: Running
Default collections auto-installed:
- crowdsecurity/freebsd
- crowdsecurity/opnsense
**Step 3: Enroll in CrowdSec Console**
```bash
ssh maciejpi@10.22.68.155 "sudo cscli console enroll --name R11-OPNSENSE-01 --tags opnsense --tags ips --tags inpi YOUR-ENROLL-KEY"
```
Validate in Console webapp.
**Step 4: Test bouncer**
```bash
# Temporarily ban a test IP
ssh maciejpi@10.22.68.155 "sudo cscli decisions add -t ban -d 2m -i 192.0.2.1"
# Verify it's in the firewall
ssh maciejpi@10.22.68.155 "sudo cscli decisions list"
# Remove test ban
ssh maciejpi@10.22.68.155 "sudo cscli decisions delete --ip 192.0.2.1"
```
---
## Task 11: Bridge Insertion (Service Window)
**CRITICAL: This causes ~5 minutes of downtime for all INPI servers behind FortiGate.**
**Schedule: Outside business hours (evening/weekend).**
**Step 1: Pre-flight checks**
```bash
# Verify OPNsense bridge is working
curl -sk https://10.22.68.155/ | head -3
# Verify FortiGate backup
~/.claude/plugins/marketplaces/inpi-infrastructure/fortigate-manager/scripts/fortigate_ssh.sh "execute backup config flash"
# Verify current connectivity
curl -sI https://nordabiznes.pl/health | head -3
```
**Step 2: Identify current server VLAN/port assignment on Proxmox**
The bridge insertion method depends on network architecture:
**Method A: Proxmox bridge reassignment (preferred if all servers on same PVE node)**
- Move server VMs from vmbr0 to vmbr1
- OPNsense vtnet0 stays on vmbr0 (FortiGate side)
- OPNsense vtnet1 on vmbr1 (server side)
- All traffic from FortiGate to servers flows through OPNsense bridge
**Method B: Physical cable swap (if servers on separate hardware)**
- Disconnect server switch uplink from FortiGate
- Connect FortiGate → OPNsense physical port
- Connect OPNsense physical port → server switch
**Step 3: Execute bridge insertion (Method A)**
For each server VM on r11-pve-03 that should be protected:
```bash
# List VMs to identify which ones to move to vmbr1
ssh root@10.22.68.123 "qm list"
# For each server VM (e.g., VM 119 = NPM, VM 249 = NordaBiz):
# Check current bridge
ssh root@10.22.68.123 "qm config 119 | grep net"
# Change to vmbr1
ssh root@10.22.68.123 "qm set 119 --net0 virtio=CURRENT_MAC,bridge=vmbr1"
```
**NOTE:** MAC address must be preserved! Extract from `qm config` first.
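Extracting and reusing the MAC can be scripted to avoid typos; a sketch against a sample `qm config` net0 line (the sample MAC is made up, feed in the real output in practice):

```shell
# Sample 'qm config <vmid>' output line; substitute the real output.
net0_line='net0: virtio=BC:24:11:AA:BB:CC,bridge=vmbr0,firewall=0,queues=4'

# Pull the 17-character MAC out of the virtio=... field.
mac=$(echo "$net0_line" | sed -n 's/.*virtio=\([0-9A-Fa-f:]\{17\}\).*/\1/p')

# The command to run for the move, with the MAC preserved.
echo "qm set 119 --net0 virtio=${mac},bridge=vmbr1"
```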
**Step 4: Verify connectivity through bridge**
```bash
# From management network
curl -sI https://nordabiznes.pl/health | head -3
ping -c 3 10.22.68.250
ping -c 3 57.128.200.27
```
Expected: All services reachable through bridge.
**Step 5: Verify Suricata sees traffic**
Check OPNsense WebGUI → Services → Intrusion Detection → Alerts.
Traffic should start generating alerts within minutes.
**Step 6: Monitor for 30 minutes**
```bash
# Check NordaBiz is responding normally
for i in $(seq 1 30); do
  curl -sI https://nordabiznes.pl/health | head -1
  sleep 60
done
```
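The loop above only prints one status line per minute; to summarize a captured run afterwards, a small helper (sketch):

```shell
# Count status lines that are not HTTP 200 in a captured monitoring log.
count_failures() {
  grep -cv ' 200' <<< "$1"
}

sample='HTTP/2 200
HTTP/2 200
HTTP/2 502'
count_failures "$sample"   # 1
```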
---
## Task 12: Observation Period (1 week)
**Duration:** 7 days after bridge insertion.
**Daily checks:**
```bash
# 1. Check Suricata alerts
# OPNsense WebGUI → Services → Intrusion Detection → Alerts
# 2. Check CrowdSec decisions on all three servers
ssh maciejpi@57.128.200.27 "sudo cscli decisions list"
ssh maciejpi@10.22.68.250 "sudo cscli decisions list"
ssh maciejpi@10.22.68.155 "sudo cscli decisions list"
# 3. Check for false positives - are legitimate IPs being flagged?
ssh maciejpi@10.22.68.155 "sudo cscli alerts list --limit 20"
# 4. Check NordaBiz health
curl -sI https://nordabiznes.pl/health | head -3
# 5. Check OPNsense resource usage
# WebGUI → Dashboard: CPU, RAM, throughput
```
**Tuning tasks during observation:**
1. **Suppress noisy rules:** If a Suricata SID generates many false positives:
- Services → Intrusion Detection → Policies → create suppress policy for that SID
2. **Whitelist known IPs:** If CrowdSec blocks partner/monitoring IPs:
```bash
ssh maciejpi@10.22.68.155 "sudo cscli decisions delete --ip TRUSTED_IP"
```
Then add the IP to `/etc/crowdsec/parsers/s02-enrich/mywhitelists.yaml` so it is not flagged again.
3. **Document all false positives** for later reference.
---
## Task 13: Activate IPS Mode
**Prerequisites:** Observation period complete, false positives addressed.
**Step 1: Switch Suricata from IDS to IPS**
OPNsense WebGUI → Services → Intrusion Detection → Administration:
- IPS mode: `checked`
- Apply
**Step 2: Update policy to drop malicious traffic**
Services → Intrusion Detection → Policies:
- Edit "Block Malware and Exploits" policy
- New action: Change from `Alert` to `Drop`
- Save + Apply
**Step 3: Verify IPS is active**
Services → Intrusion Detection → Administration:
Check status shows "IPS mode" active.
**Step 4: Monitor for 48 hours**
```bash
# Check NordaBiz every 15 minutes for first 2 hours
for i in $(seq 1 8); do
  echo "$(date): $(curl -sI https://nordabiznes.pl/health | head -1)"
  sleep 900
done
```
Watch for:
- Legitimate traffic being dropped
- Services becoming unreachable
- Increased latency
**Step 5: If issues — emergency rollback to IDS**
OPNsense WebGUI → Services → Intrusion Detection → Administration:
- IPS mode: `unchecked`
- Apply
This immediately stops blocking, returns to alert-only mode.
---
## Task 14: Final Documentation
**Step 1: Update design document with actual values**
Update `docs/plans/2026-02-16-dmz-security-layer-design.md` with:
- Actual OPNsense version installed
- CrowdSec Console account details (redacted)
- Any configuration deviations from plan
- False positive rules added during observation
**Step 2: Create runbook for operations**
Create `docs/plans/2026-XX-XX-dmz-runbook.md` with:
- Daily monitoring checklist
- CrowdSec update procedures
- OPNsense update procedures
- Rollback procedure (detailed)
- Contact info and escalation
**Step 3: Update CLAUDE.md with new infrastructure**
Add OPNsense VM to project context:
- VM 155 = R11-OPNSENSE-01 (10.22.68.155)
- Suricata IPS + CrowdSec bouncer
- Management: https://10.22.68.155
**Step 4: Commit all documentation**
```bash
git add docs/plans/
git commit -m "docs: Add DMZ security layer implementation plan and runbook"
```
---
## Rollback Procedure (Any Phase)
**If OPNsense bridge causes issues at any point:**
### Quick Rollback (< 2 minutes)
```bash
# 1. Stop OPNsense VM
ssh root@10.22.68.123 "qm stop 155"
# 2. Move server VMs back to vmbr0
ssh root@10.22.68.123 "qm set 119 --net0 virtio=ORIGINAL_MAC,bridge=vmbr0"
ssh root@10.22.68.123 "qm set 249 --net0 virtio=ORIGINAL_MAC,bridge=vmbr0"
# 3. Verify
curl -sI https://nordabiznes.pl/health | head -3
```
### Permanent Rollback
1. Execute Quick Rollback above
2. Delete VM: `ssh root@10.22.68.123 "qm destroy 155 --purge"`
3. Remove vmbr1 from `/etc/network/interfaces`
4. Remove DNS/IPAM records
5. CrowdSec on NordaBiz/NPM can remain (they work independently)
---
## Dependencies Between Tasks
```
Task 1 (CrowdSec NordaBiz) ──┐
Task 2 (CrowdSec NPM) ───────┤── Can run in parallel
Task 3 (Console enrollment) ─┘── Depends on Task 1+2
Task 4 (IPAM + DNS) ─────────┐
Task 5 (vmbr1 bridge) ───────┤── Can run in parallel
                             └── Both needed before Task 6
Task 6 (Create VM) ──── Depends on Task 4+5
Task 7 (Management IF) ── Depends on Task 6
Task 8 (Bridge config) ── Depends on Task 7
Task 9 (Suricata IDS) ── Depends on Task 8
Task 10 (CrowdSec OPN) ── Depends on Task 8
Task 11 (Bridge insert) ── Depends on Task 9+10 (SERVICE WINDOW!)
Task 12 (Observation) ── Depends on Task 11 (1 WEEK)
Task 13 (Activate IPS) ── Depends on Task 12
Task 14 (Documentation) ── Depends on Task 13
```
**Parallel execution opportunities:**
- Tasks 1, 2, 4, 5 can all run simultaneously
- Tasks 9, 10 can run simultaneously
- Tasks 1-5 can happen days before Tasks 6-11