All nodes set themselves as MASTER
Description
Details
- Version
- 1.1.8
Event Timeline
It works for me. We need more details about your setup and your config to say anything.
OK. More tests here on virtual machines.
This is the simple config to test:
nodeA
interfaces {
    ethernet eth0 {
        address 10.0.0.241/24
        duplex auto
        hw-id 00:15:5d:00:50:32
        smp_affinity auto
        speed auto
        vrrp {
            vrrp-group 50 {
                advertise-interval 1
                preempt true
                priority 150
                sync-group aaa
                virtual-address 10.0.0.240
            }
        }
    }
    loopback lo {
    }
}
nodeB
interfaces {
    ethernet eth0 {
        address 10.0.0.242/24
        duplex auto
        hw-id 00:15:5d:00:50:32
        smp_affinity auto
        speed auto
        vrrp {
            vrrp-group 50 {
                advertise-interval 1
                preempt true
                priority 100
                sync-group aaa
                virtual-address 10.0.0.240
            }
        }
    }
    loopback lo {
    }
}
It seems not to work between appliances with mixed x64 and x32. On 1.1.7 everything was OK, even between x64 and x32.
It also works if the master is on 1.1.8 x64 and the backup on 1.1.7 x32... But if I upgrade the backup node to 1.1.8 x32, it always sets itself as master...
On 1.1.7 we can mix x64 and x32.
On 1.1.8 VRRP works only if we use the same architecture on both.
In short:
node A | node B | result
-------+--------+-------
x64    | x64    | OK
x32    | x32    | OK
x64    | x32    | BAD
x32    | x64    | BAD
You can see a screencast here: https://www.screencast.com/t/2BAtqnALZE7C
There have been some issues with compiling keepalived on 32-bit systems recently - http://www.keepalived.org/changelog.html
BTW: In my experience, it's much better to have separate IP address spaces for VRRP itself and for the virtual addresses managed by VRRP.
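As an illustration of that suggestion (a sketch only; the addresses below are made up, with 192.0.2.0/24 taken from the documentation range), the interface address used for VRRP communication and the virtual address handed out to clients can live in different subnets:

```
interfaces {
    ethernet eth0 {
        address 192.0.2.241/24        /* transport subnet, used by VRRP itself */
        vrrp {
            vrrp-group 50 {
                advertise-interval 1
                priority 150
                virtual-address 10.0.0.240    /* service address clients point at */
            }
        }
    }
}
```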
It doesn't seem 32-bit-related, since it works 32-to-32 (or 64-to-64).
The problem appears only with mixed 32-to-64 or 64-to-32.
I am seeing the same behavior on 1.1.8<->1.1.8 (and also when I tried VRRP against a Mikrotik 6.40.5 CHR).
Both of my vyos instances are the 1.1.8 OVA downloaded here:
https://downloads.vyos.io/release/1.1.8/vyos-1.1.8-amd64.ova
Node 1
vyos@vyos1a:~$ configure
[edit]
vyos@vyos1a# show interfaces
 ethernet eth0 {
     address 10.100.0.14/27
     duplex auto
     hw-id 00:50:56:96:a6:90
     smp_affinity auto
     speed auto
     vif 4000 {
         address 10.15.15.2/24
         vrrp {
             vrrp-group 1 {
                 advertise-interval 1
                 preempt true
                 priority 100
                 virtual-address 10.15.15.1/24
             }
         }
     }
 }
 loopback lo {
 }
[edit]
vyos@vyos1a# exit
exit
vyos@vyos1a:~$ show vrrp detail
Use of uninitialized value in printf at /opt/vyatta/share/perl5/Vyatta/VRRP/OPMode.pm line 249.
--------------------------------------------------
Interface: eth0.4000
--------------
  Group: 1
  ----------
  State:                   MASTER
  Last transition:         13m49s
  Source Address:
  Priority:                100
  Advertisement interval:  1 sec
  Authentication type:     none
  Preempt:                 enabled
  VIP count:               1
    10.15.15.1/24
vyos@vyos1a:~$
Node 2
vyos@vyos1b:~$ configure
[edit]
vyos@vyos1b# show interfaces
 ethernet eth0 {
     address 10.100.0.15/27
     duplex auto
     hw-id 00:50:56:96:45:4f
     smp_affinity auto
     speed auto
     vif 4000 {
         address 10.15.15.3/24
         vrrp {
             vrrp-group 1 {
                 advertise-interval 1
                 preempt false
                 priority 75
                 virtual-address 10.15.15.1/24
             }
         }
     }
 }
 loopback lo {
 }
[edit]
vyos@vyos1b# exit
exit
vyos@vyos1b:~$ show vrrp detail
Use of uninitialized value in printf at /opt/vyatta/share/perl5/Vyatta/VRRP/OPMode.pm line 249.
--------------------------------------------------
Interface: eth0.4000
--------------
  Group: 1
  ----------
  State:                   MASTER
  Last transition:         8m4s
  Source Address:
  Priority:                75
  Advertisement interval:  1 sec
  Authentication type:     none
  Preempt:                 disabled
  VIP count:               1
    10.15.15.1/24
vyos@vyos1b:~$
I had a similar configuration working with 1.1.7 a while ago.
The 32-bit build is using an IPv6 VRRP address and the 64-bit build is using an IPv4 VRRP address.
Verify using netstat -gn and by running tshark on the two nodes.
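For anyone wanting to reproduce that check, a diagnostic sketch (run as root; the interface name eth0 is an assumption from the configs above). VRRP is IP protocol 112; IPv4 advertisements go to multicast group 224.0.0.18 and IPv6 advertisements to ff02::12, so both the capture and the joined multicast groups reveal which family each node is speaking:

```
# Capture a few VRRP advertisements and note whether they are IPv4 or IPv6
tshark -i eth0 -f "ip proto 112 or ip6 proto 112" -c 5

# Which VRRP multicast group has keepalived joined on this node?
netstat -gn | grep -E '224\.0\.0\.18|ff02::12'
```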
For some reason the 32-bit build adds native_ipv6 to the configuration in /etc/keepalived/keepalived.conf.
Is something broken in the configuration of the build system?
/opt/vyatta/sbin/vyatta-keepalived.pl in the 32-bit build writes the native_ipv6 line to the configuration.
If I comment out that line, the two routers communicate.
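For context, the generated /etc/keepalived/keepalived.conf on the affected box looks roughly like this (a sketch with values adapted from the configs in this thread; the instance name and exact output of the generator may differ):

```
vrrp_instance eth0-50 {
    state BACKUP
    interface eth0
    virtual_router_id 50
    priority 100
    advert_int 1
    native_ipv6          # present only in the 32-bit build; forces IPv6 transport
    virtual_ipaddress {
        10.0.0.240
    }
}
```

With native_ipv6 present, this instance advertises over IPv6 while the 64-bit peer advertises over IPv4, so neither node ever sees the other's advertisements and both become MASTER.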
Perhaps a new build should be created where this is fixed?
It looks like this change is the reason things break:
https://github.com/vyos/vyatta-vrrp/commit/dfbc742a6454388aa6a2523541a170c01fc42533#diff-7a3c3afc4665f422017c25f832c9c28b
This is still not the right way to get VRRP for IPv6; it will break upgrades and make the routers useless. The reason we configure VRRP is to avoid downtime, so there is no quick fix for this.
OK @aopdal.
It seems easy to fix, and urgent, since many VRRP setups with mixed x64/x32 that were upgraded to stable 1.1.8 are now broken.
@mdsmds If you have a mixed environment running VRRP, just comment out the offending line on the 32-bit router and you are good.
It's the IPv6 issue that has no quick fix ;-)
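A scripted version of that workaround might look like the following. The exact Perl line that emits native_ipv6 is an assumption (check your copy of /opt/vyatta/sbin/vyatta-keepalived.pl first); a stand-in file is used here so the sed pattern can be seen in action:

```shell
# Stand-in for /opt/vyatta/sbin/vyatta-keepalived.pl (real path from this
# thread, but the exact line emitting native_ipv6 may differ in your build).
cat > vyatta-keepalived-sample.pl <<'EOF'
    $output .= "advert_int $advert\n";
    $output .= "native_ipv6\n";
    $output .= "priority $priority\n";
EOF

# Comment out every line that emits native_ipv6, keeping a .bak backup.
sed -i.bak '/native_ipv6/ s/^/# /' vyatta-keepalived-sample.pl

cat vyatta-keepalived-sample.pl
```

On a real router you would then re-commit the VRRP configuration (or restart keepalived) so a fresh keepalived.conf is generated without the native_ipv6 line; note that the edit will be lost on the next image upgrade.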
This change seems to have found its way into current somewhere between 999.201711072137 and 999.201711160506
This can also be a problem for people using 1.2.x, as the "native_ipv6" line has also made its way into vyatta-keepalived.pl in those nightly versions at some point in the past. So upgrading from an early-2017 nightly to a current version may break VRRP in 1.2.x too, at least if you are not upgrading all devices at the same time.
While that is true, older nightly versions don't set "native_ipv6" in the keepalived.conf, so any back-to-back update will result in broken VRRP configurations. In addition, it is my understanding that you can't use both IPv4 and IPv6 in one VRRP group in newer versions of keepalived, or rather, you should not. Therefore simply adding "native_ipv6" may not be the best way to implement IPv6 support.
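To illustrate that last point, a sketch in plain keepalived syntax (not VyOS CLI; instance names, router IDs and addresses are made up): keepalived keeps the address families apart by using one vrrp_instance per family, rather than mixing IPv4 and IPv6 VIPs in a single group:

```
vrrp_instance VI_V4 {
    interface eth0
    virtual_router_id 50
    priority 150
    advert_int 1
    virtual_ipaddress {
        10.0.0.240/24
    }
}

vrrp_instance VI_V6 {
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    native_ipv6              # advertise this instance over IPv6 transport
    virtual_ipaddress {
        fd00::240/64
    }
}
```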
I'm sorry to bother you with this issue again, but as I need to upgrade some VyOS 1.2.x routers in the near future, I would be grateful for any information on how best to handle the changes described above. As I have already mentioned, older nightlies don't set "native_ipv6" in keepalived.conf, so if I upgrade one device at a time, VRRP will be broken until I have also upgraded the other devices. Do you have any advice on how I should deal with this problem? Upgrading all devices at the same time is not really an option with my setup, but maybe there is some reasonable workaround you would recommend?