@c-po It was thought that possibly the nftables migration was doing something funny here because of the potential overlaps.
Nov 14 2020
Nov 13 2020
I will take a look to see if I can implement a short fix to generate IPv6 link-local addresses on wireguard interfaces.
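Purely as an illustration (not the actual VyOS implementation), one way to derive a stable fe80::/64 link-local address for an interface that has no MAC address, such as a wireguard tunnel, is to hash a stable per-interface token into the lower 64 bits:

# Illustrative sketch only: derive a stable fe80::/64 address from an
# interface name. Assumes Python 3; not the approach used by VyOS itself.
import hashlib
import ipaddress

def link_local_from_token(token: str) -> ipaddress.IPv6Address:
    # Use the first 8 bytes of a SHA-256 digest as the 64-bit interface identifier
    iid = int.from_bytes(hashlib.sha256(token.encode()).digest()[:8], 'big')
    return ipaddress.IPv6Address((0xfe80 << 112) | iid)

print(link_local_from_token('wg01'))  # e.g. fe80::xxxx:xxxx:xxxx:xxxx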
Further configurations and an overview via email
@cjeanneret Can you re-check it? And close it if all works fine.
Fix for "remote-host" on client side
PR https://github.com/vyos/vyos-1x/pull/606
Server conf
set interfaces openvpn vtun0 encryption cipher 'aes256gcm'
set interfaces openvpn vtun0 encryption disable-ncp
set interfaces openvpn vtun0 hash 'sha512'
set interfaces openvpn vtun0 local-host '100.64.0.1'
set interfaces openvpn vtun0 local-port '1194'
set interfaces openvpn vtun0 mode 'server'
set interfaces openvpn vtun0 openvpn-option 'tls-version-min 1.3'
set interfaces openvpn vtun0 openvpn-option 'comp-lzo no'
set interfaces openvpn vtun0 persistent-tunnel
set interfaces openvpn vtun0 protocol 'tcp-passive'
set interfaces openvpn vtun0 server client client1 ip '10.10.3.2'
set interfaces openvpn vtun0 server client client1 subnet '10.10.3.0/29'
set interfaces openvpn vtun0 server client client1 subnet '10.20.0.0/16'
set interfaces openvpn vtun0 server subnet '10.10.3.0/29'
set interfaces openvpn vtun0 server topology 'subnet'
set interfaces openvpn vtun0 tls ca-cert-file '/config/auth/ovpn/ca.crt'
set interfaces openvpn vtun0 tls cert-file '/config/auth/ovpn/central.crt'
set interfaces openvpn vtun0 tls dh-file '/config/auth/ovpn/dh.pem'
set interfaces openvpn vtun0 tls key-file '/config/auth/ovpn/central.key'
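For context, a matching client-side configuration (where "remote-host" comes into play) might look roughly like the following. The values are hypothetical and simply mirror the server config above; this is a sketch, not the reporter's actual configuration:

set interfaces openvpn vtun0 encryption cipher 'aes256gcm'
set interfaces openvpn vtun0 hash 'sha512'
set interfaces openvpn vtun0 mode 'client'
set interfaces openvpn vtun0 persistent-tunnel
set interfaces openvpn vtun0 protocol 'tcp-active'
set interfaces openvpn vtun0 remote-host '100.64.0.1'
set interfaces openvpn vtun0 remote-port '1194'
set interfaces openvpn vtun0 tls ca-cert-file '/config/auth/ovpn/ca.crt'
set interfaces openvpn vtun0 tls cert-file '/config/auth/ovpn/client1.crt'
set interfaces openvpn vtun0 tls key-file '/config/auth/ovpn/client1.key'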
The check on DH length is backwards.
Is there a reason you're assuming the proto is v6, or do those options allow fallback to v4 remotes? I can't find clear information on that in the manpages.
I have reverted the QAT driver update commit. Can you please try out this image:
In the new version, the client configuration:
Request merge PR:
I have written an "fast" fix until tunnel is rewritten. Can you test it?
@ernstjo Yeah, we also have this "situation" with wireguard tunnels. It should be fixed in general with the rewrite of tunnel to get_config_dict(); tunnel is the second-to-last interface not using this scheme, vti is the last.
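For context, the general get_config_dict() pattern used by the rewritten interface scripts in vyos-1x looks roughly like this. This is a sketch of the common idiom, not the actual tunnel implementation:

# Sketch of the usual get_config_dict() pattern in vyos-1x config scripts;
# the real tunnel script may differ in detail.
from vyos.config import Config

def get_config():
    conf = Config()
    base = ['interfaces', 'tunnel']
    # Retrieve the subtree as a dict, mangling '-' to '_' in key names
    tunnel = conf.get_config_dict(base, key_mangling=('-', '_'),
                                  get_first_key=True)
    return tunnel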
Usually all of them have a serial fallback, so they should work. Currently there is only a small subset of VyOS-verified LTE modules, as each and every module comes with its own problems:
How about these changes? https://github.com/vyos/vyos-1x/blob/current/src/op_mode/powerctrl.py#L37
diff --git a/src/op_mode/powerctrl.py b/src/op_mode/powerctrl.py
index 69af427e..c000d7d0 100755
--- a/src/op_mode/powerctrl.py
+++ b/src/op_mode/powerctrl.py
@@ -34,7 +34,11 @@ def utc2local(datetime):
 def parse_time(s):
     try:
         if re.match(r'^\d{1,2}$', s):
-            return datetime.strptime(s, "%M").time()
+            if (int(s) > 59):
+                s = str(int(s)//60) + ":" + str(int(s)%60)
+                return datetime.strptime(s, "%H:%M").time()
+            else:
+                return datetime.strptime(s, "%M").time()
         else:
             return datetime.strptime(s, "%H:%M").time()
     except ValueError:
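Stand-alone, the patched logic behaves like this. A minimal runnable sketch of the function above, outside the powerctrl.py context:

import re
from datetime import datetime

def parse_time(s):
    """Accept either a bare minute count or an HH:MM string."""
    try:
        if re.match(r'^\d{1,2}$', s):
            if int(s) > 59:
                # Convert e.g. "60" into "1:0" so it parses as hours:minutes
                s = str(int(s) // 60) + ":" + str(int(s) % 60)
                return datetime.strptime(s, "%H:%M").time()
            return datetime.strptime(s, "%M").time()
        return datetime.strptime(s, "%H:%M").time()
    except ValueError:
        return None

print(parse_time("59"))  # 00:59:00
print(parse_time("60"))  # 01:00:00
print(parse_time("61"))  # 01:01:00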
@Zer0t3ch Can you share your configuration?
There are also other Huawei LTE WWAN USB modules which use CDC/NCM drivers, such as the E3276 and E3372. So it's not possible to use these anymore? I actually haven't used those with VyOS myself yet, but I was planning to try it soon, as I have those USB WWAN modules available and I've been using them with other Linux distros (with CDC/NCM drivers).
It looks like we need to do some calculation.
vyos@vyos:~$ show version | match Version
Version: VyOS 1.3-rolling-202011130217
vyos@vyos:~$ show date
Fri 13 Nov 2020 07:18:44 AM UTC
vyos@vyos:~$ reboot in 60
Invalid time "60". The valid format is HH:MM
vyos@vyos:~$ reboot in 59
Reboot is scheduled 2020-11-13 08:18:04
vyos@vyos:~$ reboot in 61
Invalid time "61". The valid format is HH:MM
Tested on 1.3-rolling-202011130217, all works as expected.
Thanks to @ernstjo
I believe this may be related to the following error messages I have:
Nov 12 2020
The issue here is that "set protocols ospf default-information originate" propagates a default route even if the route for 0.0.0.0/0 is inactive. In that case it should only propagate if "always" is used. So the inactive route may not be in the routing table (in the routing sense), but it still seems to be taken into consideration for redistribution.
Imagine, for example, that you use BGP and don't have a default route, or have it set to blackhole.
Then you originate the default route for a neighbor.
Why should it not announce the default route to the neighbor?
This is expected behavior, even for routes not installed in the routing table.
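For reference, the two variants being discussed, in VyOS CLI syntax; the point of contention is whether only the second one should advertise 0.0.0.0/0 when no usable default route is installed:

set protocols ospf default-information originate
set protocols ospf default-information originate always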
Sure, if you want to drop me an image I can try it out. I do have a working vyos-build setup as well, so I can also try to produce my own image with that change backed out when I get some time towards the end of the week.
Nov 11 2020
@lucasec Of course this commit could be related, and we can try reverting back to the old version. Would you be willing to test a binary for us?
Nov 10 2020
Put in a PR to add miscellaneous MPLS and LDP parameters.
I will perform a few additional tests tomorrow with the oldest available rolling releases (looks like October 13th as of writing). Will see if I can binary search my way to when things broke.
A few updates... the failure still occurs on latest rolling. Similar outcome—the kernel panics and dumps a stacktrace during the initial boot-up configure process. However, this issue goes back further than I expected (and initially expressed in the ticket). I goofed up in my testing of 1.3-rolling-202010260327 by booting with a default config file without the QAT option.
Nov 9 2020
As discussed in the Slack channel, these leftover processes should be cleaned up the next time configuration mode is entered (by UnionfsCstore::setupSession). In my limited testing, I can reproduce the leftover processes as above, but they are cleaned up the next time I enter config mode. There may well be corner cases where this mechanism is not successful, but I have not reproduced any.