Network isolation: egress firewall + named bridge

Adds the host-level egress firewall recommended by the upstream
DeerFlow team's "run in a VLAN" guidance, adapted to a Fritzbox-only
home network where LAN VLANs are not available.

- docker/docker-compose.override.yaml: pins the upstream deer-flow
  Docker network to a stable Linux bridge name br-deerflow so the
  firewall can address it without guessing Docker's auto-generated
  br-<hash>. Used as a -f overlay on top of the upstream compose file.

- scripts/deerflow-firewall.sh: idempotent up/down/status wrapper that
  installs DOCKER-USER iptables rules. Allowlist for 10.67.67.1 (Searx)
  and 10.67.67.2 (XTTS/Whisper/Ollama-local), hard block for
  192.168.3.0/24 (home LAN), 10.0.0.0/8, 172.16.0.0/12. Stateful return
  rule keeps inbound LAN access to published ports working.

- scripts/deerflow-firewall.nix: NixOS module snippet defining a
  systemd unit ordered After=docker.service so the rules survive
  dockerd restarts and follow its lifecycle. Copy into
  configuration.nix and nixos-rebuild switch.

- HARDENING.md: new section 2.7 "Network isolation (egress firewall)"
  with allow/block tables, bring-up steps, and smoke-test commands.

Guarantees: rules match on -i br-deerflow, so if the bridge does not
exist, the rules are no-ops and do not affect any other container
(paperclip, telebrowser, openclaw-gateway, ...). Stopping the
container leaves the rules in place but inert; stopping the systemd
unit removes them.
commit 75315d958e (parent 6de0bf9f5b)
Date: 2026-04-12 14:56:26 +02:00
4 changed files, 298 insertions(+), 0 deletions(-)

HARDENING.md

@@ -127,6 +127,82 @@ DeerFlow guardrails see only `tool.name` (e.g. `web_search`), and both the
hardened and the native version export the same name. The real block is
the import-time stub above.

### 2.7 Network isolation (egress firewall)

The DeerFlow team recommends running the agent in a dedicated VLAN. Our
Fritzbox cannot do LAN VLANs, so instead we put the container behind an
egress firewall on the Docker host. The container can reach the Internet
plus a small allowlist of Wireguard hosts (Searx, local model servers),
but cannot scan or attack any device on the home LAN. Inbound traffic
from the LAN to the container's published ports is unaffected because the
rules are stateful.

**Allow** (egress from container):

| Destination | Purpose |
|---|---|
| `1.0.0.0/8` ... `223.0.0.0/8` (public Internet) | Ollama Cloud, search backends |
| `10.67.67.1` | Searx (Wireguard) |
| `10.67.67.2` | XTTS / Whisper / Ollama-local (Wireguard) |

**Block** (egress from container):

| Destination | Reason |
|---|---|
| `192.168.3.0/24` | home LAN — no lateral movement |
| `10.0.0.0/8` (except whitelisted /32) | other Wireguard subnets, RFC1918 |
| `172.16.0.0/12` | other Docker bridges |
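
For reference, once `up` has run, the chain should look roughly like this (as printed by `iptables -S DOCKER-USER`; exact option order, the `RELATED,ESTABLISHED` spelling, and the order *within* the allow and reject groups may vary by iptables version and insertion order):

```
-N DOCKER-USER
-A DOCKER-USER -i br-deerflow -m conntrack --ctstate RELATED,ESTABLISHED -j RETURN
-A DOCKER-USER -d 10.67.67.1/32 -i br-deerflow -j RETURN
-A DOCKER-USER -d 10.67.67.2/32 -i br-deerflow -j RETURN
-A DOCKER-USER -d 192.168.3.0/24 -i br-deerflow -j REJECT --reject-with icmp-net-prohibited
-A DOCKER-USER -d 10.0.0.0/8 -i br-deerflow -j REJECT --reject-with icmp-net-prohibited
-A DOCKER-USER -d 172.16.0.0/12 -i br-deerflow -j REJECT --reject-with icmp-net-prohibited
-A DOCKER-USER -j RETURN
```

The trailing `-j RETURN` is Docker's own default rule, not ours.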

**Implementation:**

| File | Role |
|---|---|
| `docker/docker-compose.override.yaml` | Pins the upstream `deer-flow` Docker network to a stable Linux bridge name `br-deerflow`, so the firewall can address it without guessing Docker's auto-generated `br-<hash>`. Used as a `-f` overlay on top of `deer-flow/docker/docker-compose.yaml`. |
| `scripts/deerflow-firewall.sh` | Idempotent `up`/`down`/`status` wrapper that installs the iptables rules in the `DOCKER-USER` chain. Rules are inserted group-by-group with `-I` (prepend), blocks first and the stateful return last, so the final chain reads top-down: stateful return, then the Wireguard allows, then the RFC1918 blocks. |
| `scripts/deerflow-firewall.nix` | NixOS module snippet defining `systemd.services.deerflow-firewall`. Ordered `After=docker.service`, `Requires=docker.service`, `PartOf=docker.service` so the rules survive `dockerd` restarts and follow its lifecycle. Copy into `configuration.nix` and `nixos-rebuild switch`. |
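
Because `-I` prepends, inserting the groups bottom-up (blocks, then allows, then the stateful return) yields the intended top-down order. A root-free sketch of that mechanic, with `ins` as an illustrative stand-in for `iptables -I` (the rule strings are simplified labels, not real iptables syntax):

```shell
#!/usr/bin/env bash
# Stand-in for `iptables -I CHAIN rule`: prepend to an array.
CHAIN=()
ins() { CHAIN=("$*" "${CHAIN[@]}"); }

# Insert bottom-up, in the same order cmd_up does.
ins "REJECT 192.168.3.0/24"
ins "REJECT 10.0.0.0/8"
ins "REJECT 172.16.0.0/12"
ins "RETURN 10.67.67.1"
ins "RETURN 10.67.67.2"
ins "RETURN ESTABLISHED,RELATED"

# Top-down result: stateful return first, allows next, rejects last.
printf '%s\n' "${CHAIN[@]}"
```

Note that `-I` also flips the order *within* each group (e.g. the `192.168.3.0/24` reject ends up below the `172.16.0.0/12` one), which is harmless: every allow is a `RETURN` and every block a `REJECT` on disjoint destinations.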

**Important guarantees:**
- The rules match on `-i br-deerflow`. If the bridge does not exist (e.g.
DeerFlow has never been started), the rules are no-ops and **do not
affect any other container** (paperclip, telebrowser, openclaw-gateway,
...). They activate automatically the moment `docker compose ... up -d`
creates the bridge.
- Stopping or removing the DeerFlow container leaves the rules in place
but inert. Stopping the systemd unit removes them.
- The script is idempotent: `up` will never duplicate a rule, `down`
removes all copies.
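
The idempotency hinges on iptables' `-C` (check) flag: `add_rule` only inserts when `-C` reports the rule absent. A root-free sketch of the same pattern, with `iptables` stubbed out by a shell function (the stub and the `RULES` array are illustrative, not part of the real script):

```shell
#!/usr/bin/env bash
RULES=()   # simulated DOCKER-USER chain contents

# Stub that understands just enough of the real CLI: -C checks, -I inserts.
iptables() {
  local action="$2"
  shift 3   # drop -w, the action flag, and the chain name
  case "$action" in
    -C) local r
        for r in "${RULES[@]}"; do [[ "$r" == "$*" ]] && return 0; done
        return 1 ;;
    -I) RULES=("$*" "${RULES[@]}") ;;
  esac
}

# Same add_rule as in deerflow-firewall.sh.
add_rule() {
  if ! iptables -w -C DOCKER-USER "$@" 2>/dev/null; then
    iptables -w -I DOCKER-USER "$@"
  fi
}

add_rule -i br-deerflow -d 192.168.3.0/24 -j REJECT
add_rule -i br-deerflow -d 192.168.3.0/24 -j REJECT   # no-op: already present
echo "${#RULES[@]}"   # prints 1
```

`del_rule` is the mirror image: it loops `iptables -D` while `-C` still succeeds, so even duplicated rules (e.g. left over from an older script version) are all removed.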

**Bring up:**
```bash
cd /home/data/deerflow-factory
docker compose \
-f deer-flow/docker/docker-compose.yaml \
-f docker/docker-compose.override.yaml \
up -d
# Then either run the script directly:
sudo scripts/deerflow-firewall.sh up
# ...or, on NixOS, copy scripts/deerflow-firewall.nix into configuration.nix
# and:
sudo nixos-rebuild switch
systemctl status deerflow-firewall
```

**Smoke tests** (run from inside the container, e.g. `docker exec -it <id> sh`):
```bash
# allowed
curl -s -o /dev/null -w "%{http_code}\n" --max-time 5 http://10.67.67.1:8888/ # Searx -> 200
curl -s -o /dev/null -w "%{http_code}\n" --max-time 5 https://api.cloudflare.com/ # Internet -> 200/4xx
# blocked (should fail with "network unreachable"/"network prohibited" or a timeout;
# the rules reject with icmp-net-prohibited)
curl -s -o /dev/null -w "%{http_code}\n" --max-time 5 http://192.168.3.1/ # FAIL
curl -s -o /dev/null -w "%{http_code}\n" --max-time 5 http://10.67.67.16/ # FAIL (blocked by 10/8 reject; .16 is not whitelisted)
```

## 3. Verification

All checks below assume `PYTHONPATH=deer-flow/backend/packages/harness`.
@@ -225,4 +301,8 @@ backend/tests/test_searx_tools.py (facto
config.yaml (hardened runtime config, references only searx tools)
.env.example (template, no secrets)
HARDENING.md (this file)
docker/docker-compose.override.yaml (named bridge br-deerflow)
scripts/deerflow-firewall.sh (egress firewall up/down/status)
scripts/deerflow-firewall.nix (NixOS systemd unit snippet)
```

docker/docker-compose.override.yaml

@@ -0,0 +1,27 @@
# DeerFlow hardening overlay.
#
# This override turns the upstream `deer-flow` bridge network into a
# stably-named Linux bridge (`br-deerflow`) so that the host-level
# egress firewall (deerflow-firewall.service) can reference it without
# guessing Docker's auto-generated `br-<hash>` name.
#
# Use it together with the upstream compose file:
#
# cd deerflow-factory
# docker compose \
# -f deer-flow/docker/docker-compose.yaml \
# -f docker/docker-compose.override.yaml \
# up -d
#
# After the network is created (or recreated), the firewall rules
# installed by deerflow-firewall.service automatically take effect on
# the new bridge — no further action required.
#
# If you ever rename or delete the bridge, restart the firewall unit:
# systemctl restart deerflow-firewall.service
networks:
  deer-flow:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.name: br-deerflow

scripts/deerflow-firewall.nix

@@ -0,0 +1,49 @@
# NixOS module snippet for the DeerFlow egress firewall.
#
# Copy the systemd.services block below into your /etc/nixos/configuration.nix
# (or import this file from there). After `sudo nixos-rebuild switch`, the
# unit `deerflow-firewall.service` is started automatically after Docker and
# stays "active (exited)" so the rules persist for the lifetime of dockerd.
#
# Activation order:
# docker.service -> deerflow-firewall.service
#
# The unit pulls the script straight from the repo at
# /home/data/deerflow-factory/scripts/deerflow-firewall.sh — change the path
# below if you check the repo out somewhere else.
#
# Disable / remove:
# sudo systemctl stop deerflow-firewall # rules go down
# sudo systemctl disable deerflow-firewall # no auto-start
# ...then remove the block from configuration.nix and rebuild.
#
# Verify:
# systemctl status deerflow-firewall
# sudo /home/data/deerflow-factory/scripts/deerflow-firewall.sh status
{ config, pkgs, ... }:
{
  systemd.services.deerflow-firewall = {
    description = "DeerFlow container egress firewall";
    # Make sure dockerd has created the DOCKER-USER chain before we touch it,
    # and rerun the unit when docker restarts so our rules are reapplied.
    after = [ "docker.service" ];
    requires = [ "docker.service" ];
    partOf = [ "docker.service" ];
    wantedBy = [ "multi-user.target" ];
    path = [ pkgs.iptables pkgs.iproute2 pkgs.bash ];
    serviceConfig = {
      Type = "oneshot";
      RemainAfterExit = true;
      # Adjust this path if you store the repo elsewhere.
      ExecStart = "/home/data/deerflow-factory/scripts/deerflow-firewall.sh up";
      ExecStop = "/home/data/deerflow-factory/scripts/deerflow-firewall.sh down";
    };
  };
}

scripts/deerflow-firewall.sh (executable)

@@ -0,0 +1,142 @@
#!/usr/bin/env bash
#
# deerflow-firewall.sh — egress firewall for the hardened DeerFlow container.
#
# Restricts outbound traffic from the DeerFlow Docker bridge so the agent
# cannot reach LAN devices, only the Internet plus a small allowlist of
# Wireguard hosts (Searx + local model servers).
#
# Threat model: a prompt-injected agent inside the container should be
# unable to scan or attack other devices on the home LAN. Inbound LAN
# traffic to the container's published ports is unaffected because the
# rules are stateful (ESTABLISHED,RELATED returns first).
#
# Idempotent: running `up` repeatedly will only ever leave one copy of
# each rule in DOCKER-USER. Running `down` removes them all.
#
# Usage:
# deerflow-firewall.sh up # install rules
# deerflow-firewall.sh down # remove rules
# deerflow-firewall.sh status # show DOCKER-USER chain
#
# Requires: iptables, root privileges, the `DOCKER-USER` chain (created
# by the Docker daemon at startup).
set -euo pipefail

readonly BRIDGE="br-deerflow"
readonly CHAIN="DOCKER-USER"

# IPs that the container IS allowed to reach inside RFC1918:
readonly -a ALLOW_HOSTS=(
  "10.67.67.1"  # Searx (Wireguard)
  "10.67.67.2"  # XTTS / Whisper / Ollama-local (Wireguard)
)

# Subnets that the container is NOT allowed to reach:
readonly -a BLOCK_NETS=(
  "192.168.3.0/24"  # home LAN
  "10.0.0.0/8"      # rest of /8 except whitelisted /32s above
  "172.16.0.0/12"   # other Docker bridges + RFC1918 leftovers
)
if [[ $EUID -ne 0 ]]; then
  echo "deerflow-firewall: must run as root" >&2
  exit 1
fi

require_chain() {
  if ! iptables -w -nL "$CHAIN" >/dev/null 2>&1; then
    echo "deerflow-firewall: chain $CHAIN does not exist — is the Docker daemon running?" >&2
    exit 2
  fi
}
# Run iptables only if the same rule is not already present.
add_rule() {
  local args=("$@")
  if ! iptables -w -C "$CHAIN" "${args[@]}" 2>/dev/null; then
    iptables -w -I "$CHAIN" "${args[@]}"
  fi
}

# Run iptables -D until the rule is gone (in case it was added multiple times).
del_rule() {
  local args=("$@")
  while iptables -w -C "$CHAIN" "${args[@]}" 2>/dev/null; do
    iptables -w -D "$CHAIN" "${args[@]}"
  done
}
cmd_up() {
  require_chain
  # NOTE: -I prepends, so the groups are inserted bottom-up. Final order
  # from the top of DOCKER-USER:
  #   1. ESTABLISHED,RELATED -> RETURN  (let inbound responses out)
  #   2. allow hosts         -> RETURN  (10.67.67.1 Searx, 10.67.67.2 XTTS/Whisper/Ollama)
  #   3. block nets          -> REJECT  (192.168.3.0/24, 10.0.0.0/8, 172.16.0.0/12)
  # -I also flips the order *within* each group, which is harmless: the
  # allows are all RETURN and the blocks are all REJECT on disjoint nets.
  for net in "${BLOCK_NETS[@]}"; do
    add_rule -i "$BRIDGE" -d "$net" -j REJECT --reject-with icmp-net-prohibited
  done
  for host in "${ALLOW_HOSTS[@]}"; do
    add_rule -i "$BRIDGE" -d "$host" -j RETURN
  done
  add_rule -i "$BRIDGE" -m conntrack --ctstate ESTABLISHED,RELATED -j RETURN

  if ! ip link show "$BRIDGE" >/dev/null 2>&1; then
    echo "deerflow-firewall: WARNING: bridge $BRIDGE does not exist yet."
    echo "deerflow-firewall: rules are installed and will activate as soon as you run"
    echo "  docker compose -f deer-flow/docker/docker-compose.yaml \\"
    echo "    -f docker/docker-compose.override.yaml up -d"
  fi
  echo "deerflow-firewall: up"
}
cmd_down() {
  require_chain
  for net in "${BLOCK_NETS[@]}"; do
    del_rule -i "$BRIDGE" -d "$net" -j REJECT --reject-with icmp-net-prohibited
  done
  for host in "${ALLOW_HOSTS[@]}"; do
    del_rule -i "$BRIDGE" -d "$host" -j RETURN
  done
  del_rule -i "$BRIDGE" -m conntrack --ctstate ESTABLISHED,RELATED -j RETURN
  echo "deerflow-firewall: down"
}

cmd_status() {
  require_chain
  echo "DOCKER-USER chain (relevant rules):"
  iptables -w -nL "$CHAIN" --line-numbers | grep -E "$BRIDGE|^Chain|^num" || true
  if ip link show "$BRIDGE" >/dev/null 2>&1; then
    echo
    echo "Bridge $BRIDGE: present"
  else
    echo
    echo "Bridge $BRIDGE: NOT present (DeerFlow container not started)"
  fi
}
case "${1:-}" in
  up)     cmd_up ;;
  down)   cmd_down ;;
  status) cmd_status ;;
  *)
    echo "Usage: $0 {up|down|status}" >&2
    exit 64
    ;;
esac