Was done as part of T2633.
All Stories
Jun 24 2020
No problem occurred after updating another machine from VyOS 1.2-rolling-201910102056 to 1.3-rolling-202006230700. Login succeeded immediately after reboot.
This is a weird area, as 1G interfaces are generally capped at 9K more or less (whether the limits include those overheads or not is always murky, e.g. switches saying they are 9K but actually meaning 9120). For VM NICs, you are never completely sure what the host or the switches directly connected to the hosts will allow either.
Jun 23 2020
In T2630#68467, @thomas-mangin wrote:
could have the range 68-65536 but it may be a bit on the extreme side.
https://github.com/vyos/vyos-1x/pull/473 was merged, so we now need to agree on sane limits for the XML.
I have a PR for this (not changing the range limits in the XML) up for review ATM.
New Jenkins job established at https://ci.vyos.net/job/vyos-build-netfilter/ with the pipeline from https://github.com/vyos/vyos-build/blob/current/packages/netfilter/Jenkinsfile
Just reproduced the same issue on a second system. Source: VMware vSphere host.
Same problem here: after upgrading from 1.3-rolling-202005030117 to 1.3-rolling-202006230700, no login was possible. After resetting the admin user's password through password recovery, login works. The rest of the configuration was copied over as it should be.
related to T2630
vyos@vyos# set interfaces tunnel tun0 description '*** SITE1 ***'
[edit]
vyos@vyos# set interfaces tunnel tun0 encapsulation 'gre-bridge'
[edit]
vyos@vyos# set interfaces tunnel tun0 local-ip '10.0.3.239'
[edit]
vyos@vyos# set interfaces tunnel tun0 remote-ip '10.0.32.240'
[edit]
vyos@vyos# set interfaces tunnel tun0 ip enable-arp-accept
[edit]
vyos@vyos# set interfaces tunnel tun0 ip enable-arp-announce
[edit]
It would be possible to make the scripts check whether IPv6 is enabled on the interface (or the system?) and make the minimum MTU 1280 in that case. If IPv6 is disabled or not supported on the interface, let it go as low as it can.
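A minimal sketch of such a check, assuming the per-interface disable_ipv6 sysctl is the indicator; the function names here are illustrative, not existing VyOS code:

MTU_MIN_IPV6 = 1280  # RFC 8200 minimum link MTU for IPv6
MTU_MIN_IPV4 = 68    # absolute minimum for IPv4

def ipv6_enabled(ifname: str) -> bool:
    """True if IPv6 is enabled on the interface (disable_ipv6 sysctl reads 0)."""
    try:
        with open(f'/proc/sys/net/ipv6/conf/{ifname}/disable_ipv6') as f:
            return f.read().strip() == '0'
    except FileNotFoundError:  # IPv6 not compiled in or module not loaded
        return False

def minimum_mtu(ifname: str) -> int:
    """Lowest MTU the scripts should accept for this interface."""
    return MTU_MIN_IPV6 if ipv6_enabled(ifname) else MTU_MIN_IPV4

The same check could be done system-wide by reading /proc/sys/net/ipv6/conf/all/disable_ipv6 instead.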
This was discussed already in T2404. The problem is that NICs that expose their min/max MTU are rare: none of the NICs I have expose it, neither through sysfs nor through 'ip -d link show'. To recap the discussion from T2404, there are two main ways to solve this:
a) Have no MTU limitations at all and detect an error when trying to apply the new MTU. This means there is no way to verify the new MTU beforehand, so it doesn't comply with the verify/apply separation prescribed in the developer docs. I described a possible workaround using revert code in T2404.
b) Have an MTU detection script, run by udev whenever a new NIC is detected (to support hotplugging NICs), that determines the min/max MTU with a brute-force binary search (try to set an MTU and see if it errors) and records the results in a temporary file that the config script then reads. The idea was proposed by @thomas-mangin.
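A rough sketch of the probe in option b), assuming plain 'ip link set ... mtu' attempts as the test; the function names and file locations are illustrative only, not an agreed design:

import subprocess

def mtu_accepted(ifname: str, mtu: int) -> bool:
    """Try to set the MTU; the driver rejects values outside its range."""
    cmd = ['ip', 'link', 'set', 'dev', ifname, 'mtu', str(mtu)]
    return subprocess.run(cmd, stderr=subprocess.DEVNULL).returncode == 0

def probe_max_mtu(ifname: str, lo: int = 68, hi: int = 65536) -> int:
    """Binary-search the largest MTU the driver accepts within [lo, hi].

    Assumes lo itself is accepted; needs root and briefly changes the MTU.
    """
    with open(f'/sys/class/net/{ifname}/mtu') as f:
        original = int(f.read())
    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        if mtu_accepted(ifname, mid):
            best, lo = mid, mid + 1
        else:
            hi = mid - 1
    mtu_accepted(ifname, original)  # restore the previous MTU
    return best

A udev rule could run such a probe once per NIC and drop the result into a file under /run (the exact location is hypothetical) for the config scripts to read during verify().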
@systo: marking as resolved. Reopen it if necessary.
Breaking users' existing configs should be a no-no. If the options can be used that way under Linux and it is not invalid, then we should not restrict it. If we intend to prevent it, we would need a way to warn users clearly, and we have no framework for that ATM.
Need to add the max MTU to operational mode, create a new validator using it, and apply it in the XML. The only question is whether the information is always available.
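For reference, recent kernels can report per-driver limits via netlink; a hedged sketch of reading them, assuming iproute2's JSON output (-j) and accepting that many drivers simply leave the fields at zero:

import json
import subprocess

def mtu_limits(ifname: str):
    """Return (min_mtu, max_mtu) if the driver reports them, else None."""
    out = subprocess.run(['ip', '-d', '-j', 'link', 'show', 'dev', ifname],
                         capture_output=True, text=True, check=True).stdout
    link = json.loads(out)[0]
    max_mtu = link.get('max_mtu', 0)
    if max_mtu:  # zero or missing means the driver gave no usable limit
        return link.get('min_mtu', 68), max_mtu
    return None  # fall back to the generic limits in the XML

A validator built on this would still need a fallback range for the (apparently common) case where the information is not available.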
I see no issue with the proposed solution.
Is this related to T2619? It sure looks like it to me.
Jun 22 2020
A first implementation is already live with the console-server https://github.com/vyos/vyos-1x/blob/current/src/conf_mode/service_console-server.py
Thinking about this, should ("source-address" / "remote") and ("group" / "source-interface") be mutually exclusive? I can't think of any reason you would want both set up on the same interface, and I'm not even sure you can have both. Usually ("source-address" / "remote") is used for unicast setups and ("group" / "source-interface") for multicast. An either/or, but not both, setup seems like it would be ideal.
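If that rule were adopted, the check itself would only be a few lines in the interface's verify() step; a minimal sketch, assuming a flat config dictionary with hypothetical key names (not the actual vyos-1x structure) and using ValueError in place of the real config error type:

def verify_endpoints(config: dict) -> None:
    """Reject a config that mixes unicast and multicast endpoint options."""
    unicast = {'remote', 'source_address'} & config.keys()
    multicast = {'group', 'source_interface'} & config.keys()
    if unicast and multicast:
        raise ValueError('remote/source-address cannot be combined with '
                         'group/source-interface on the same interface')

This only enforces the either/or part; requiring at least one of the two groups would be a separate check.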
This would have been fixed for isc-dhcp-client if T2590 hadn't happened while I was working on it; now it requires writing a new dhclient script for the WIDE client.