I cannot immediately reproduce the issue.
Sep 29 2024
Draft PR: https://github.com/vyos/vyos-1x/pull/4108 (WIP)
Sep 28 2024
SteveP - Even when using 2.18.1 - when I try to force the speed of the USB interface to 2500 (and duplex to full, as is then required), I see error messages on the commit. See below. This happens with multiple 2.5G-capable interfaces.
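For reference, a minimal sketch of the kind of configuration that triggers this, assuming the USB NIC shows up as eth1 (the interface name is a placeholder, and 2500 is assumed to be an accepted speed value on this build):

# force fixed speed/duplex on the 2.5G USB NIC, then commit
set interfaces ethernet eth1 speed 2500
set interfaces ethernet eth1 duplex full
commit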
Does not reproduce on:
Version: VyOS 1.3-stable-202409270542
Version: VyOS 1.4-stable-202409170309
Version: VyOS 1.5-rolling-202409160007
We should have two variants: the kernel built-in driver and the OFED driver.
Sep 27 2024
@c-po, are we missing Mellanox drivers in 1.5?
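One quick way to check whether the in-kernel Mellanox driver is present on a given image (a sketch; mlx5 is assumed to be the relevant driver family here):

# does the module ship with the running kernel?
find /lib/modules/$(uname -r) -name 'mlx5_core*'
# is it loaded and bound to the NIC?
lsmod | grep mlx5
lspci -k | grep -A3 -i mellanox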
@doctorpangloss I see from the other forum thread:
https://forum.vyos.io/t/something-keeps-adding-offloads-back-to-my-interface-breaking-my-wan/15282/7
that @n.fort has confirmed the persistence of the fix.
I retested this issue.
Still reproducible on:
Version: VyOS 1.3-stable-202409270542 (DHCP Relay Agent 4.4.1)
Sep 26 2024
What is the approach for preventing offloads from being added back after boot? I deleted them before upgrading, upgraded to a nightly with this patch, but I observed that the offloads returned.
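For anyone hitting the same thing, a sketch of how the offload settings can be inspected and removed before an upgrade (the interface name is a placeholder; check what your config actually contains):

# op mode: see which offloads are configured and which are active on the NIC
show configuration commands | grep offload
ethtool -k eth0 | grep -E 'gro|segmentation|scatter'
# config mode: drop the offload settings, then commit and save
configure
delete interfaces ethernet eth0 offload
commit
save
exit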
This PR https://github.com/vyos/vyos-1x/pull/3311 has fixed the issue.
Tested in 1.4.0 and the latest rolling release, 1.5-rolling-202409250007.
Sep 25 2024
PR to follow, with tests and migration script:
https://github.com/vyos/vyos-1x/compare/current...jestabro:distinct-api
Also probably related:
I would expect the CLI line being used to appear in one of these files?
The explanation is in this PR: https://github.com/vyos/vyos-1x/pull/4096
Sep 24 2024
Yes I am overloaded (who isn't), and yes I plan to make a PR, but I want to test it a bit more first to be reasonably sure it causes no regression (a potential resource leak if something allocated by the incomplete IPv6 configuration is not freed; I'm not sure enough about accel-ppp internals). I'm working on rebuilding a replacement accel-ppp package (based on the same commit as in equuleus, with just my patch applied and no other changes) and will run it for a week or two while watching memory usage. I'm testing a fairly complex config in a production environment, so I'm not brave enough to try rolling or even sagitta just yet.
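For what it's worth, a simple sketch of one way to watch accel-pppd memory over that period (process name, log path, and sampling interval are assumptions):

# append a timestamped RSS sample for accel-pppd once an hour
while true; do
    echo "$(date -Is) $(ps -o rss= -C accel-pppd)" >> /tmp/accel-pppd-rss.log
    sleep 3600
done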
@jmaslak It's been a while since you reported this bug and we've been through multiple FRR updates since then. Could you check if the issue is fixed in the latest nightly build, or attach a config that triggers that behavior if it still exists?
We never received any info about what the incorrect behavior in question is, and the load balancing subsystem is undergoing a rewrite, so I'm closing this.
I've merged this into the feature request because the real issue is that we don't have dynamic hairpin NAT yet, while this behavior for "static" NAT is not wrong. We'll get to it.
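For context, a sketch of what the current "static" hairpin setup looks like today; all addresses, ports, rule numbers, and interface names are placeholders, and the syntax may differ slightly between 1.3 and 1.4:

# DNAT: LAN clients hitting the public address are redirected to the internal server
set nat destination rule 110 inbound-interface 'eth1'
set nat destination rule 110 destination address '203.0.113.1'
set nat destination rule 110 destination port '443'
set nat destination rule 110 protocol 'tcp'
set nat destination rule 110 translation address '192.168.0.10'
# SNAT: masquerade hairpinned traffic so replies return via the router
set nat source rule 110 outbound-interface 'eth1'
set nat source rule 110 source address '192.168.0.0/24'
set nat source rule 110 destination address '192.168.0.10'
set nat source rule 110 translation address 'masquerade'

The "dynamic" part that is missing is having the destination address track a WAN address that changes (e.g. via DHCP) instead of being hard-coded as above.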
@marekm Do you plan to make a PR?
If you are overloaded, we can import a patch ourselves, but a PR would be nice.