@drac enabling such debug features is not easily possible, as we cannot install two kernels in parallel.
All Stories
Jan 2 2021
The odd thing about this is that I don't seem to have this issue consistently across systems.
I have two identical systems (hardware) one of them acting as a PPPoE concentrator with OSPF, the other is an L2TP session concentrator with OSPF and BGP.
I only see this issue on the L2TP system. It's currently only doing around 50Mbps of UDP on average.
The PPPoE system does at least twice that on average.
It feels like a bug we picked up when upgrading to the FRR 7.5 series.
I took the opportunity to update the supported protocols list of the dynamic DNS client. Thanks for the hint!
@drac are you seeing Slab in /proc/meminfo gradually increasing before the panic? If so, the sourceforge post at the top recommends disabling TUPLE "acceleration". It seems that the more traffic you have, the quicker the crash. We were getting them every ~6 hours.
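For reference, a quick way to keep an eye on that growth between panics (a generic check, nothing specific to this bug):
watch -n 60 'grep Slab /proc/meminfo'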
Loopback IP addresses are now automatically assigned to every VRF interface
47: bar: <NOARP,MASTER,UP,LOWER_UP> mtu 65536 qdisc noqueue state UP group default qlen 1000
link/ether 76:7d:c0:53:6d:89 brd ff:ff:ff:ff:ff:ff
inet 127.0.0.1/8 scope host bar
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
The system tries to bind itself to the localhost address, which is not in the VRF. This is definitely a fault; why did I not see that?
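As a generic check (not specific to this setup), you can confirm which local addresses the listening daemons have actually bound to with:
sudo ss -ltnp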
Amending /etc/snmp/snmpd.conf as follows got it working for me (albeit temporarily). Our snmp listen-address is 10.13.0.56 in this instance.
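For illustration only, since the exact lines weren't quoted here: the amendment was along the lines of binding snmpd explicitly to the listen address via the agentaddress directive.
agentaddress udp:10.13.0.56:161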
Similar issue for snmpd:
The frequency of this issue seems to have increased; we are now getting panics daily (it was every 4 days previously).
Also, your client should still not end up with 1454 set.
On our system, we have mtu set to 1500, and various clients appear to negotiate both 1500 and 1492 settings successfully via LCP stage of ppp.
The default in code is 1436 - so I really don't understand how the value of 1450 has got there unless there is a problem generating the file at /var/run/accel-pppd/l2tp.conf and it isn't being re-written.
The config you posted has the following, which is not correct; it should read 1454:
ppp-max-mtu=1450
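As a minimal sketch, assuming the rest of the generated file is unchanged, the corrected section would read:
[l2tp]
ppp-max-mtu=1454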
Jan 1 2021
I think this may be related to the MAC address bound to the device. You can modify the VyOS configuration to adjust the order.
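A rough sketch of what pinning the ordering to the MAC could look like (the address below is a placeholder):
set interfaces ethernet eth0 hw-id 00:53:00:11:22:33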
We need the 'nopmtudisc' option for tunnel interfaces. This is required for MPLS-over-GRE or Ethernet-over-GRE applications. The option is described in the iproute2 manual (ip-tunnel).
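For example, a plain iproute2 sketch with placeholder addresses (the point of the request is that the VyOS CLI does not currently expose this):
ip tunnel add gre1 mode gre local 192.0.2.1 remote 198.51.100.1 nopmtudisc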
Alternatively, we've got an i40e VyOS box in production which is stable with:
i40e is a tyre fire.
Seems i40e is a lot of fun. Given those nasty errors and Intel's development cycle, I have a recent 1.3 ISO with kernel 5.10.4 and the built-in (mainline) i40e drivers.
Frustratingly, 2.13.10 seems to have some other — very nasty — bugs in it. We've had three kernel crashes on the latest VyOS 1.3 releases (from around Christmas) as a result, and I currently believe they are the same as those problems described here:
Dec 31 2020
So we have configured the option max-mtu, which means:
ppp-max-mtu=n  Set the maximum MTU value that can be negotiated for PPP over L2TP sessions.
But I think we also need to provide the possibility to set a minimum MTU:
[ppp] min-mtu=n
vyos@oobm:~$ cat /var/run/accel-pppd/l2tp.conf
### generated by accel_l2tp.py ###
[modules]
log_syslog
l2tp
chap-secrets
auth_mschap_v2
@alainlamar We aren't going to remove web proxy support! I was only talking about the old package specifically—it's been rewritten in the new style.
We can add a new <constraintGroup> element. If you put multiple <constraint> elements inside a <constraintGroup>, they work like logical AND.
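A rough sketch of how that might look in an interface definition (the regex and validator name below are placeholders, not real ones):
<constraintGroup>
  <!-- every <constraint> in the group must match (logical AND) -->
  <constraint>
    <regex>[a-z]+</regex>
  </constraint>
  <constraint>
    <validator name="example-validator"/>
  </constraint>
</constraintGroup>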
Looks like it's not an issue anymore in the latest ISO.
show mpls table was outputting data.
I've never configured MPLS on anything.
I've loaded the latest release from yesterday, and I'm no longer seeing the issue?
That's v.odd.
As for encrypted DNS, it should cover standard solutions rather than be limited to a certain service provider. The standard solutions are as follows (although in general, there may not be many people using encrypted recursive DNS)
I used dnsdist and dnscrypt-proxy before but currently I settled with:
On server, what is in /var/run/accel-pppd/l2tp.conf ?
The setting should read ppp-max-mtu=1454 under the [l2tp] section.
Also, I'd expect something is wrong on the client side; can you see the PPP config options the Teltonika is using?
The MTU setting is aptly named "max-mtu", i.e. a lower value can still be negotiated.
Can you capture the LCP stage of the PPP negotiation from either the client or the server? It sounds like it's negotiating a smaller MTU for some reason.
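One generic way to grab that on the server side, assuming the L2TP traffic arrives on eth0 over the standard UDP port 1701:
sudo tcpdump -i eth0 -w lcp-negotiation.pcap udp port 1701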
Dec 30 2020
I am wondering if these are Zebra errors as they *seem* like Zebra errors.