Sun, Nov 30
Sat, Nov 29
FRR case where there is a long explanation: https://github.com/FRRouting/frr/issues/15400
Fri, Nov 28
Wed, Nov 26
Example implementation of such a daemon:
https://github.com/vyos/vyos-1x/pull/4872
Tue, Nov 11
This is still a problem. We have found no suitable fix, and I think we've tried "everything" to no avail.
Nov 3 2025
No passthrough.
In general, the configuration looks like this:
HP DL360 G10 servers
Windows Server 2019 with the Hyper-V role
VyOS Generation 2 VMs
NIC ports are grouped at the OS level into a Team (similar to a bond in Linux).
After installing a new NIC (HPE P26253-B21, Broadcom BCM57416), we simply changed the members of this network team, nothing more.
Up to this point, everything had worked properly for several years.
@lbv2rus Do you passthrough NICs to the VM?
Nov 2 2025
Nov 1 2025
Please share the full configuration. bond0 is not listed in the example above.
Oct 29 2025
Oct 25 2025
Oct 24 2025
I did some deeper debugging:
Oct 19 2025
Oct 17 2025
Oct 14 2025
Yes, I got the information that the other side is not VyOS (sorry, I didn't expect this); it's a current Sophos box running strongSwan.
I will try to replicate it here between two VyOS boxes in the next few days.
@rherold any updates?
Oct 9 2025
We found this Debian bug report, which seems to be somewhat similar: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1116074
Oct 8 2025
lsusb @ VyOS 1.4.3:
Tried Debian too (debian-live-13.1.0-amd64-standard.iso). The 13.1.0 live boot works, and an installed Debian 13.1.0 booting from disk also works.
Oct 6 2025
The circinus release works as expected:
vyos@r14:~$ show version
Version:          VyOS 1.5-stream-202510060458
Release train:    circinus
Release flavor:   generic
It also works on the latest rolling release (checked VyOS 2025.10.05-0020-rolling).
VyOS config:
set container name radius allow-host-networks
set container name radius image 'dchidell/radius-web'
set container name radius volume accel destination '/usr/share/freeradius/dictionary.accel'
set container name radius volume accel source '/usr/share/accel-ppp/radius/dictionary.accel'
set container name radius volume clients destination '/etc/raddb/clients.conf'
set container name radius volume clients source '/config/containers/radius/clients'
set container name radius volume dictionary destination '/usr/share/freeradius/dictionary'
set container name radius volume dictionary source '/config/containers/radius/dictionary'
set container name radius volume users destination '/etc/raddb/users'
set container name radius volume users source '/config/containers/radius/users'
set service pppoe-server access-concentrator 'ACN'
set service pppoe-server authentication mode 'radius'
set service pppoe-server authentication radius server 192.168.122.14 key 'vyos-secret'
set service pppoe-server client-ip-pool FIRST range '100.64.0.0/18'
set service pppoe-server client-ipv6-pool IPv6-POOL delegate 2001:db8:8003::/48 delegation-prefix '56'
set service pppoe-server client-ipv6-pool IPv6-POOL prefix 2001:db8:8002::/48 mask '64'
set service pppoe-server default-ipv6-pool 'IPv6-POOL'
set service pppoe-server default-pool 'FIRST'
set service pppoe-server gateway-address '100.64.0.1'
set service pppoe-server interface eth1 combined
set service pppoe-server interface eth1.23
set service pppoe-server log level '5'
set service pppoe-server name-server '1.1.1.1'
set service pppoe-server name-server '1.0.0.1'
set service pppoe-server ppp-options disable-ccp
set service pppoe-server ppp-options ipv6 'allow'
set service pppoe-server session-control 'disable'
RADIUS users
client-1 Cleartext-Password := "client-1"
Service-Type = Framed-User,
Accel-VRF-Name = "red",
Framed-IP-Address = 10.0.0.11,
Stateful-IPv6-Address-Pool = "IPv6-POOL",
Delegated-IPv6-Prefix-Pool = "IPv6-POOL",
Framed-Route = "100.64.0.11/32 10.0.0.11 1",
Framed-Protocol = PPP
Oct 3 2025
Oct 1 2025
@seriv I am unable to reproduce on 1.4.3; for example:
To work around this for now, I am setting the following additional configuration to avoid frequently losing IPv6 connectivity:
Sep 29 2025
Sorry, another follow-up: rather than duplicating the control logic in both interface.py and vyos-netplug-dhcp-client (and leaving yourself open to another logic divergence in the future), you could consider simply checking in vyos-netplug-dhcp-client for the presence of the config file /var/run/dhcp6c.eth0.conf when determining whether the service should restart. That file only exists if dhcp6c is required on that specific interface, which means the decision about dhcp6c is made only by interface.py, and vyos-netplug-dhcp-client just follows its lead.
Relevant section of interface.py: https://github.com/vyos/vyos-1x/blob/5845c4b1c50bc0335284cfb3306e0a91de3efd40/python/vyos/ifconfig/interface.py#L1822-L1824
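The suggested check could be sketched roughly like this (a minimal sketch only, assuming a POSIX shell hook; the `dhcp6c_wanted` function name and the `RUN_DIR` override are mine for illustration, on a real system the directory would just be /var/run):

```shell
#!/bin/sh
# Sketch: gate the dhcp6c restart in vyos-netplug-dhcp-client on the
# per-interface config file that interface.py writes, instead of
# duplicating interface.py's decision logic here.
# RUN_DIR is parameterized purely so this is testable outside a router.
RUN_DIR="${RUN_DIR:-/var/run}"

# Returns 0 (true) if interface.py generated a dhcp6c config for the
# given interface, i.e. dhcp6c is wanted there.
dhcp6c_wanted() {
    [ -f "${RUN_DIR}/dhcp6c.$1.conf" ]
}

if dhcp6c_wanted "${1:-eth0}"; then
    echo "dhcp6c restart required"
else
    echo "dhcp6c not configured on this interface"
fi
```

This keeps a single source of truth: if interface.py stops (or starts) writing the per-interface config file, the restart hook automatically agrees without any code change.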
