
Firewall Cannot Load Podman Network Interfaces at Boot
Open, High, Public, Bug

Description

I have a firewall zone that includes a Podman network interface, but at some point in the last year the behavior changed: the interface now fails to get added to the firewall zone at boot, though it works on a manual commit. I am thankful, however, that the firewall still loads at all, since VyOS used to fail to load the firewall entirely if there was any error in the configuration.

A simple solution for this is to just load the firewall last.

Reproduction config would be as follows:

container {
    network containers {
        description "Network for containers"
        prefix 172.18.0.0/16
    }
}
firewall {
    zone CONTAINER {
        member {
            interface pod-containers
        }
    }
}

This applies fine on a running system but will not re-apply properly on reboot, with the following log entry:

Set ['firewall' 'zone' 'CONTAINER' 'member' 'interface' 'pod-containers'] failed
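
A hedged way to pull that boot-time failure out of the journal (assuming the boot config is loaded by the vyos-router unit; adjust the unit name if your image differs):

sudo journalctl -b -u vyos-router | grep -i failed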

For now I'm just using a post-startup hook to reload the config at boot and re-apply it.
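
For reference, a minimal sketch of that workaround, assuming the stock /config/scripts/vyos-postconfig-bootup.script hook and the standard vbash script template (paths may differ between releases):

#!/bin/vbash
# /config/scripts/vyos-postconfig-bootup.script runs after the boot config load
source /opt/vyatta/etc/functions/script-template
configure
# re-load and re-commit: the pod-* interfaces exist by this point, so validation passes
load /config/config.boot
commit
exit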

Details

Version: 1.5-rolling-202502011110
Is it a breaking change? Perfectly compatible
Issue type: Bug (incorrect behavior)

Event Timeline

Priorities seem correct: container.py (450) runs before firewall.py (489), so the container network should already be configured by the time the firewall is applied.

vyos@r14# /usr/libexec/vyos/priority.py | match "container|firew"
       450  container.py                         ['container']
       489  firewall.py                          ['firewall']
[edit]
vyos@r14#

Hi @Viacheslav,

I'm also facing the same issue: interfaces named pod-XXX vanish from the zone-based firewall after a system reboot. Here's my configuration:

vyos@vyos# sudo ip netns list
netns-3ea94443-cb81-9ea7-05a1-d1e08b1cafe8 (id: 0)

vyos@vyos# ip a
......
14: pod-bgp: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:4d:3e:30:60:ea brd ff:ff:ff:ff:ff:ff
    inet 10.100.100.1/24 brd 10.100.100.255 scope global pod-bgp
       valid_lft forever preferred_lft forever
    inet6 fe80::504d:3eff:fe30:60ea/64 scope link
       valid_lft forever preferred_lft forever
15: veth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master pod-bgp state UP group default qlen 1000
    link/ether ba:30:14:95:cc:f5 brd ff:ff:ff:ff:ff:ff link-netns netns-3ea94443-cb81-9ea7-05a1-d1e08b1cafe8

and firewall zone configurations:

vyos@vyos# show firewall zone BGP
 from LOCAL {
     firewall {
         name LOCAL_to_BGP
     }
 }
 from WAN {
     firewall {
         name WAN_to_BGP
     }
 }
 member {
     interface veth0
     interface pod-bgp
 }

After rebooting, only veth0 is present, and the zone firewall blocks the container traffic.
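
One quick way to confirm which zone members actually made it into nftables after boot (just a grep over the live ruleset; no assumptions about VyOS table or chain names):

sudo nft list ruleset | grep -E 'pod-|veth0'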

I've also confirmed this problem exists on these versions:

1.5-rolling-202502130006 
1.5-rolling-202411230007
dmbaturin changed "Is it a breaking change?" from "Unspecified (possibly destroys the router)" to "Perfectly compatible".

I've got a task and a PR open for this same issue: T7177 and #4351. It looks like the same constraint is used for interface groups, zone members, and a number of other things.

Is this resolved on the latest nightly, now that T7177 has been resolved?

Yes - this has been included in the nightly builds for a few weeks now.

Everything seems to be working as it should on both 1.4.3 and the 1.5 Q2 Stream build. I recently started running Tailscale in a container, which creates tailscale0, and while I was expecting to run into issues with this, I did not. It got me thinking, though: does the interface even need to exist to be able to add it to the nftables ruleset?

Specifically for zone-based firewalling, could VyOS just lazily check whether the interface exists and, if it doesn't, add it to the nftables configuration anyway, or would nftables raise an error? A lot of this seems to stem from VyOS trying to manage every interface that might ever exist, rather than only the interfaces matching its own regex, and from VyOS being stricter here than nftables itself.

> does the interface even need to exist to be able to add it to the nftables ruleset?

Not only is it not required, I'd go as far as to say the current behavior is incorrect and will lead to issues, whether it's a race condition on an interface coming up or an interface type not covered by the regex (tailscale, nebula, netbird, zerotier... just to name a few).

Adding tailscale0 will fail if either check fails (the interface must exist on the host and must pass the interface-name regex):

Interface doesn't exist:
vyos@vyos# set firewall group interface-group test interface tailscale0

  Incorrect path /sys/class/net/tailscale0: no such file or directory

  Invalid value
  Value validation failed
  Set failed

Doesn't pass regex:
vyos@vyos# set firewall group interface-group test interface tailscale0

  Value validation failed
  Set failed

nftables doesn't care whether an interface exists, except for flowtables. I see this a lot with the FRR implementation as well: FRR doesn't care if you enable OSPF on a non-existent interface; it'll just enable it if that interface ever comes up.
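
As a quick illustration (a throwaway table created by hand, not anything VyOS generates; all names here are made up), nftables happily accepts a rule that matches an interface name that doesn't exist:

sudo nft add table inet demo_filter
sudo nft add chain inet demo_filter demo_fwd '{ type filter hook forward priority 0; }'
# ghost0 does not exist anywhere on this host, yet the rule is accepted
sudo nft add rule inet demo_filter demo_fwd iifname ghost0 counter accept
sudo nft list table inet demo_filter
# clean up the throwaway table
sudo nft delete table inet demo_filter
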

For behavior like this in nftables or FRR, any interface-existence check should be a warning instead of an error.