Mar 5 2023
Aside from the weird Duo+RADIUS thing, the version noted in this issue currently runs great.
Mar 1 2023
Currently digging through a bug with ocserv upstream maintainers; we might get a 1.1.7 once we fix that, or at least a 1.1.6-4.
Jan 29 2023
Proposed fix in https://github.com/vyos/vyos-build/pull/299
Aug 29 2022
@Viacheslav - how does the bug you were working around manifest itself? I just pulled the 1.1.6 sources and built from that repo using the same command as the Jenkinsfile. Happy to test the local build for whatever condition was being fixed.
Aug 18 2022
Aug 17 2022
I think that storing the configuration exclusively in files outside the config file breaks portability, as exporting system state through # show | commands won't produce output sufficient for a full state backup of the device.
If the configuration attributes all lived in the CLI, which then generated the relevant files on the filesystem, that would address the stateless-backing-filesystem concern by making the device config the single source of truth.
@SquirePug - could you possibly provide a link to or the contents of the changes you made? Thanks
Aug 16 2022
Aug 15 2022
Aug 13 2022
Using the pull request's filesystem copy, same place, new error:
[email protected]:~$ cat /var/log/vyatta/*log
cp[/opt/vyatta/config/tmp/new_config_1615]->[/opt/vyatta/config/tmp/tmp_1615/work]
cp w->tw failed[unknown exception]
cp[/opt/vyatta/config/tmp/new_config_2665]->[/opt/vyatta/config/tmp/tmp_2665/work]
cp w->tw failed[unknown exception]
[email protected]:~$ dpkg -l|grep vyatta-cfg
ii  libvyatta-cfg-dev      0.102.0+vyos2+current5   amd64  libvyatta-cfg development package
ii  libvyatta-cfg1         0.102.0+vyos2+current5   amd64  vyatta-cfg back-end library
ii  libvyatta-cfg1-dbgsym  0.102.0+vyos2+current5   amd64  debug symbols for libvyatta-cfg1
ii  vyatta-cfg             0.102.0+vyos2+current5   amd64  VyOS configuration system
ii  vyatta-cfg-dbgsym      0.102.0+vyos2+current5   amd64  debug symbols for vyatta-cfg
ii  vyatta-cfg-qos         0.15.42+vyos2+current1   all    VyOS Qos configuration templates/scripts
ii  vyatta-cfg-system      0.20.44+vyos2+current22  amd64  VyOS system-level configuration
[email protected]:~$ uname -r
5.15.59-amd64-vyos-sv
If Linux maintainers backport the offending delta to 5.10, this could become a rather pressing concern; for now it's merely a show-stopper for moving past the 5.10 LTS.
Created a pull request implementing a rudimentary fall-through-on-exception from the Boost version to the standard API: https://github.com/vyos/vyatta-cfg/pull/49
Have not built it yet, nor am I a formal C++ developer (hackers are informal everything-developers and rarely formal anything-developers), so I would appreciate eyes on it and sanity checks.
The exception handling can probably be scoped more tightly to only trip on EXDEV, but I don't see a logical problem with falling through like this on other errors (is this a bad assumption?).
Aug 12 2022
Feb 27 2022
Sep 16 2021
Curl checks come back with:
[email protected]:/tmp# curl 169.254.169.254/latest/meta-data
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
local-hostname
local-ipv4
placement/
public-hostname
public-ipv4
public-keys/
reservation-id
security-groups
Aug 30 2021
How are the MD5 sums matching while the SHA512 sums are not?
Aug 29 2021
This can be done via tc AFAIK. Something like FireQOS would be great to have in here, but they're pretty opinionated about how their tools do things, so it's probably not a viable drop-in solution.
This can also be done with OSSEC using active response: either build an OSSEC agent into the image (client key management is kind of a PITA), or remotely feed firewall log events showing connection attempts to an active-response script that temporarily blocks offenders, with progressively longer blocks for repeat offenses.
lshw does this already
I added the kernel NetFlow module to my pull request a while back - it collects and forwards flows to a destination defined in the module parameters set at load time.
If we want to actually process flows on-system, there's plenty of modern tooling for that; but for plain aggregation and export in the canonical format, the kernel module is IMO the best way to go, since it works at the same tier as the network code itself (ring 0).
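For reference, a sketch of what that load-time parameter setup typically looks like (module and parameter names as used by the out-of-tree ipt_NETFLOW project; the collector address is a placeholder, not anything from this ticket):

```shell
# Persist the module and its collector target across boots:
#
#   /etc/modules-load.d/netflow.conf:
#     ipt_NETFLOW
#
#   /etc/modprobe.d/netflow.conf:
#     options ipt_NETFLOW destination=192.0.2.10:2055
#
# or load it one-shot at runtime:
#   modprobe ipt_NETFLOW destination=192.0.2.10:2055
```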
Not seeing this issue when setting the "description" field - we've run it in production for years bridging our OpenStack and datacenter environments, and the names show up correctly (sensitive details blanked):
From a post-exploitation perspective, this would permit attackers who've compromised an older, vulnerable version to persist their payloads in the shell startup files (~/.bashrc and friends) across upgrades.
This seems similar to the "configuration drive" option for OpenStack, which cloud-init already handles. It might be handy to implement this as a cloud-init local datasource and just include cloud-init on all builds, since it's becoming an industry standard even on bare metal.
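As a rough sketch of that approach (standard cloud-init configuration mechanism; the file name and datasource ordering here are my illustration, not something this ticket specifies):

```shell
# Drop-in shipped in the image so cloud-init probes the config drive first:
#
#   /etc/cloud/cloud.cfg.d/90_datasource.cfg:
#     datasource_list: [ ConfigDrive, Ec2, NoCloud, None ]
```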
I've managed to get this working in our own builds by restoring the openstack target and making some changes there - it runs fine in AWS, even with a grsec kernel and hardened userspace (Xen is often the worst hypervisor for ring-0 memory defenses).
Aug 27 2021
Once I dropped that repo, everything started to work again... and while it may be a temporary thing, I think this illustrates the problem with relying on external repos. It's probably safer to mirror their content into something that can be relied upon to stay live, as companies are bought and sold all the time, resources vanish, and licenses change from real FOSS to semi-permissive.
Aug 6 2021
Seems like the repo's no longer needed: my ISO just built without it, twice, after a clean, and with a bunch of added stuff (Tor, Docker, systemd-nspawn, xtables-addons, hardened_malloc, a grsec kernel, etc.) whose dependencies are also available without it.
Either way, it's probably a good idea to keep dependencies for anything third-party in the VyOS repo itself, since third parties can turn hostile through buyouts or license changes at any time in these post-FOSS times.
What packages are we actually pulling from there? Any reason they're not in the VyOS repo itself?
I removed their repo entirely from the JSON config and my image built fine (apparently I now have to add a /debian suffix for all packages in our repo, but that's weirdness in our repo management stack):
Reading package lists...
Building dependency tree...
Reading state information...
[2021-08-06 21:39:40] lb source
P: Source stage disabled, skipping
P: Build completed successfully
Trying to use their instructions from https://repo.saltproject.io/#debian, I'm back to the certificate issue - the repo is set to https://repo.saltproject.io/py3/debian/10/amd64/latest buster main and the custom GPG key has been added, but certificate checks still fail hard:
Reading package lists...
W: https://repo.saltproject.io/py3/debian/10/amd64/latest/dists/buster/InRelease: No system certificates available. Try installing ca-certificates.
W: https://repo.saltproject.io/py3/debian/10/amd64/latest/dists/buster/Release: No system certificates available. Try installing ca-certificates.
E: The repository 'https://repo.saltproject.io/py3/debian/10/amd64/latest buster Release' does not have a Release file.
E: An unexpected failure occurred, exiting...
P: Begin unmounting filesystems...
P: Saving caches...
Reading package lists...
Building dependency tree...
Del nftables 0.9.6-1 [66.8 kB]
After cleaning the chroot and retrying, it now fails utterly with the '#' in there:
Thank you for pointing that out - updated defaults.json and it seems to have made that issue go away.
For some reason it's now breaking on our internal repo (no TLS there, inside the datacenter), but I suspect it's got something to do with the repo itself or some change in Debian since we started using it.
Aug 2 2021
I've cleaned the build space and rebuilt the Docker container - no dice locally or in the CI stack; it still fails the same way.
It looks like they're using an AWS CA in the cert chain:
Certificate chain
 0 s:CN = repo.saltstack.com
   i:C = US, O = Amazon, OU = Server CA 1B, CN = Amazon
 1 s:C = US, O = Amazon, OU = Server CA 1B, CN = Amazon
   i:C = US, O = Amazon, CN = Amazon Root CA 1
 2 s:C = US, O = Amazon, CN = Amazon Root CA 1
   i:C = US, ST = Arizona, L = Scottsdale, O = "Starfield Technologies, Inc.", CN = Starfield Services Root Certificate Authority - G2
 3 s:C = US, ST = Arizona, L = Scottsdale, O = "Starfield Technologies, Inc.", CN = Starfield Services Root Certificate Authority - G2
   i:C = US, O = "Starfield Technologies, Inc.", OU = Starfield Class 2 Certification Authority
I do recall the Starfield certs being a problem back in the day as well, but I'm guessing it's quite unlikely for the base build environment to have an AWS root CA in its trusted list.
Jul 22 2021
Feb 3 2021
To round out the effort, I've added an optional patch to the series which provides granular AAA/RBAC from ring 0 and can also deliver the W^X functionality for userspace along with those functions.
Feb 2 2021
Since 5.10 appears to be holding solid, and grsecurity is using 5.10 for their beta branch, I've completed the forward-port of these core functions to the same kernel revision used in the current branch (at the time of commit).
What's the intent with the Intel drivers there? If we want to pull them in from Intel, I think we ought to use the same in-tree patch process to build and sign the modules at build time (and enforce module-signature validation at load time).
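For what it's worth, the load-time enforcement side is just standard upstream kernel options; a .config fragment along these lines (my sketch, not something from this ticket) would make unsigned modules fail to load:

```shell
# Kernel .config fragment: sign every module during the build and refuse
# anything unsigned (or signed with the wrong key) at load time.
#
#   CONFIG_MODULE_SIG=y
#   CONFIG_MODULE_SIG_ALL=y
#   CONFIG_MODULE_SIG_FORCE=y
#   CONFIG_MODULE_SIG_SHA512=y
```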
Jan 23 2021
I've been refreshing the stack against the current branch to keep testers building, and have added the FSGSBASE backport to 5.4 as a technical argument for sticking with a properly mature LTS even when users have a good case for needing newer functionality.
What is the plan of action for this effort, and is there a written policy on which kernels are selected for the OS, and how? I can keep doing the rebase-and-push dance once a week or so, but is anyone on the VyOS team actually testing this stuff? Has anyone upstream discussed the functional security benefits to users of GeoIP firewall filters or TARPIT/DELUDE/etc. response actions separately from the system-hardening functions in here?
Jan 11 2021
systemd-container is the easiest way to get containers rapidly into VyOS, because all of the infrastructure (systemd) is already there.
We build our images with it; it works fine.
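A minimal sketch of what that looks like in practice (paths and the machine name are illustrative; assumes nothing beyond the systemd-container package and a Debian-style userland):

```shell
# Populate a rootfs, boot it as a container, then manage it like a unit.
# (requires root; shown for illustration only)
#
#   debootstrap stable /var/lib/machines/demo
#   systemd-nspawn --machine=demo --boot
#   machinectl status demo
```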
You might want to take a look at the patches in T228 - it's a 5.4 build with a bunch of C fixups, but using the Intel proprietary drivers for an in-tree build (which permits signing all modules at kernel build time).
We have this running on a host with a dual-port 740 (not doing all that much: some routing, NAT, ACLs, and a couple of OpenVPN and IPsec tunnels), and it seems fairly happy in that low-intensity environment.
I can try to beat up on it and see how it fares - probably worth a try.
Dec 30 2020
I've added the two outstanding binary-defense components:
Dec 17 2020
So how are userspace packages for this sort of thing handled? I assume we need to itemize individual Phabricator tickets?
Off the top of my head, relevant things to add to userspace would be:
- eoip binary
- eoip CLI wrapper
- Xtables userspace with GeoIP table data and an updater script (we'd need to figure out how to handle rule placement for persistence)
- Xtables-related CLI for firewall matching on GeoIP, DNS, etc.
- Xtables-related CLI for firewall actions to TARPIT or DELUDE
- UKSM userspace (or just CLI wrappers for the sysfs interface)
- hardened_malloc with a system-wide LD_PRELOAD, or a VyOS-specific libc package with it built in
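For the hardened-malloc item, the system-wide route is just the standard glibc preload mechanism (the library path here is an assumption about where a hardened_malloc build would be installed):

```shell
# System-wide: every dynamically linked process picks up the allocator.
#
#   /etc/ld.so.preload:
#     /usr/lib/libhardened_malloc.so
#
# Per-process alternative for trialing a single daemon first:
#   LD_PRELOAD=/usr/lib/libhardened_malloc.so some_daemon
```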
Dec 14 2020
Dec 7 2020
Important note on this PR: in order to build the GCC plugins which perform most of the self-protection work, the Docker container needs gcc-8-plugin-dev installed. Without it the build still succeeds, but silently downgrades the config, dropping RANDSTRUCT and STACKLEAK.
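A quick sanity check of my own (not part of the PR) to catch that silent downgrade: grep the built kernel's .config for the plugin-gated options (option names as they appear in the upstream 5.x Kconfig):

```shell
# Fails unless the GCC-plugin-based hardening actually made it into the
# final config; a missing gcc-8-plugin-dev drops these without an error.
check_plugin_cfg() {
  grep -q '^CONFIG_GCC_PLUGINS=y' "$1" &&
  grep -q '^CONFIG_GCC_PLUGIN_RANDSTRUCT=y' "$1" &&
  grep -q '^CONFIG_GCC_PLUGIN_STACKLEAK=y' "$1"
}
# usage: check_plugin_cfg path/to/linux/.config || echo "plugins missing"
```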
Pulled RSBAC out for now (there were issues building the rest while it was in there but disabled), and validated builds with and without the GCC 8 plugins package.
Added an inert patch (disabled in Kconfig) for https://www.rsbac.org/ on 5.4. This can be used to significantly harden the restrictions the CLI intends, limiting users to specifically defined roles; the same goes for applications/containers.
If adding container support to VyOS is still on the roadmap, we're going to want to take extra care to enforce the boundaries between containers and the host, since real-world use cases are pretty much guaranteed to leave old, vulnerable containers running on long-lived network appliances, making for a variable and worsening attack surface over time.
This isn't quite as integrated, and doesn't provide nearly the coverage of grsec+PaX, but a rough approximation of "role-based FS restrictions and runtime hardening" is now in the pull request along with the other pieces that seemed pertinent for upstream.
Thank you, sir. Worked through a clean build, updated the patches, rebased, and pushed.
Nov 24 2020
Created a GitHub PR against 5.4.78 with the core functions listed above, with ixgbe and QAT in-tree as well as WireGuard (this avoids the convoluted module builds and permits LTO/CFI passes).
Sep 16 2020
Sep 15 2020
While I appreciate that you have an opinion of what's "best," I'm not re-summarizing 10+ years of Linux out-of-tree history to spoon-feed someone data they can, and should (like good engineers do), acquire on their own. Several of those patches are simply in-tree integrations of things VyOS currently builds and packages as kmods on an LTS tree; the rest are well-documented, long-running projects of their own, whose source code one must research and review anyway to properly understand their function and benefit.