1vCPU, 512MB RAM
Nov 4 2019
What is your VMs CPU/RAM configuration?
Well, then there is a bug in the interface renaming script with later ifupdown versions, which should be fixed.
No good, I'm afraid. NAT is not working when set to pppoe0, and the CLI is still offering ppp0.
The interface name ppp0 is wrong. If we configure it as pppoe0 on the CLI, it should also have that name on the bare Linux system. This sounds a bit like T1242, which was fixed in commit https://github.com/vyos/vyatta-cfg-op-pppoe/commit/4330d41fcda30553ca1b3e2588d05eebdd59fc80
The interface doesn't work if changed to ppp0. The interface itself seems to expect pppoe0, while everything else seems to expect ppp0. pppoe0 is also still showing as 'coming up', even though everything seems to be working.
More info,
You have to add a sync-group:
set high-availability vrrp sync-group intgroup member int1
set service conntrack-sync failover-mechanism vrrp sync-group intgroup
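For context, a minimal sketch of the pieces around those two commands, assuming a VRRP group named int1 on eth1 and a dedicated sync link on eth2 (group name, interfaces, VRID and address are all hypothetical):

set high-availability vrrp group int1 interface eth1
set high-availability vrrp group int1 vrid 10
set high-availability vrrp group int1 virtual-address 192.0.2.1/24
set high-availability vrrp sync-group intgroup member int1
set service conntrack-sync interface eth2
set service conntrack-sync failover-mechanism vrrp sync-group intgroup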
After a bit of fiddling, I have found the problem. The config.boot conversion from 1.2 to 1.3 did not change the NAT interface entries from pppoe0 to ppp0. Once changed, all is working. As can be seen below, NAT and monitor traffic expect the interface to be named ppp0, while the actual interface and firewall zones expect pppoe0. I think this is a little confusing and should probably be made consistent one way or the other. pppoe0 is also still showing as "Coming up" even though everything appears to work, so there is still a reporting problem there at the very least.
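For anyone hitting the same migration gap, the manual fix amounts to rewriting the NAT rules by hand, e.g. (rule number hypothetical):

set nat source rule 10 outbound-interface ppp0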
Wait, Argo Tunnel uses Cloudflare's WARP VPN system, which under the hood is basically WireGuard...
Nov 3 2019
Confirmed still present in VyOS 1.3-rolling-201911030242
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
link/ether 08:07:06:05:04:03 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP group default qlen 1000
link/ether 08:07:06:05:04:03 brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
link/ether 08:07:06:05:04:03 brd ff:ff:ff:ff:ff:ff
inet6 fe80::a07:6ff:fe05:403/64 scope link
valid_lft forever preferred_lft forever
Tested using the below configuration on VyOS 1.2-rolling-201911030217
Nov 2 2019
I've used the following script to get the Argo tunnel running and encrypting DNS. I then use 127.0.0.1 as the system nameserver and as the DNS forwarder's only upstream nameserver. It works well so far, but the integration with the VyOS config is lacking.
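The script isn't reproduced here; a minimal sketch of the idea, assuming the stock cloudflared binary in proxy-dns mode (the flags shown are cloudflared's usual ones, not taken from the original script):

#!/bin/sh
# Run cloudflared as a local DNS-over-HTTPS proxy on 127.0.0.1:53
cloudflared proxy-dns --address 127.0.0.1 --port 53 --upstream https://1.1.1.1/dns-query &

with the VyOS side then pointed at it (dns forwarding keeps listening on the LAN addresses):

set system name-server 127.0.0.1
set service dns forwarding name-server 127.0.0.1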
At first glance this looks like a very "easy" priority issue: bonding interfaces are set up before the ethernet interfaces (though that order makes no sense).
@starcraft66 is it working as expected?
Nov 1 2019
Hi, the routes are there.
Oct 31 2019
Once you pass authentication and get the config sent, can you please check that you have a default route set up? Ideally that should happen automatically. It looks like your ISP offers you IPv6, which seems to be rejected as unknown by the PPPoE client, but that's not the primary issue. Check whether your default route is being set up.
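For example, something like the following should show whether a default route arrived via the PPPoE session:

show ip route 0.0.0.0/0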
Complete
To fix this inconsistency, the output of show int ethernet | json should be:
{
"eth0": {
"address": "10.10.10.10/24"
}
}
Merged to equuleus branch.
Oct 30 2019
The ddclient config file got moved to /etc/ddclient/ddclient.conf, but ddclient is still trying to load /etc/ddclient.conf in the latest VyOS 1.3 rolling image.
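Until that is fixed, a possible workaround (untested) is to make the old path point at the new location:

sudo ln -s /etc/ddclient/ddclient.conf /etc/ddclient.conf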
PR #38
My hypothesis is that Interface.set_mac is being called AFTER the bond is applied, which sets the MAC of the interface back to what it was originally. Adding a check to see whether the interface is a bond member may solve it.
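For illustration: an enslaved interface exposes a master symlink in sysfs, so a guard along these lines might be enough (the real check would live in the Python interface code; shell is shown only to illustrate the sysfs test, and the interface name is hypothetical):

# Skip the MAC reset for bond members; the bond propagates its own MAC
if [ -L /sys/class/net/eth0/master ]; then
    echo "eth0 is enslaved to a bond, leaving its MAC alone"
fi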
Basic config that duplicates this problem
To summarise, the MACs of interfaces that are bonded should all be the same (and should also match the MAC of the bond interface). This works correctly in the older July build. However, it no longer works *on boot* in the latest builds. The screenshot above shows eth0, eth1, and bond0 all having different MACs, which is why it's not working.
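A quick way to verify after a reboot is to compare the link-layer addresses side by side:

ip -br link show dev eth0
ip -br link show dev eth1
ip -br link show dev bond0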
@xrobau WOW! What a bisection and research on that problem! Thanks a lot!
On the old image, the MACs are set correctly.
OH. MY. GLOB. I just figured it out.
This appears to be a bug in that Intel driver - there are people reporting the same issues here: https://sourceforge.net/p/e1000/bugs/649/
This is getting more and more crazy the more time I spend on it, as this is a niggly issue that shouldn't be this hard to figure out.
Oct 29 2019
Oh, just to emphasize that it's a startup-config issue: if I disable and re-enable the Ethernet port on the switch, it is still broken. The only way to get it working is to delete the member, commit, and re-add the member inside VyOS.
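Concretely, the workaround is roughly the following (assuming eth1 is the affected member):

delete interfaces bonding bond0 member interface eth1
commit
set interfaces bonding bond0 member interface eth1
commit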
While it was booting, it responded to three pings, and then nothing.
It looks like the problem occurs when both interfaces are present when the machine boots. I can remove either one, reboot the machine, and it works.
Working 1.2.0-rolling config (with the different bonding syntax)
1.2.0 complete bond0 output
@c-po, thanks for your attention. I have found this:
vyos-build/build/chroot/var/lib/dpkg/info/vyatta-cfg-system.postinst
#OpenVPN should get its own user
if ! grep -q '^openvpn' /etc/passwd; then
sudo adduser --system --group --shell /usr/sbin/nologin --home /var/lib/openvpn openvpn
fi
Oct 28 2019
Hi @hexes, without knowing the details and diving into Zabbix, I suggest you just grab the diffs from the OpenVPN rewrite to see what was crucial. The vyos-build repository should be the one, if I remember correctly, or vyatta-cfg-system - one of those.