It seems to be broken once again - at least for devices with <=1G RAM.
All Stories
Fri, Aug 23
I messed around with it today, and made some progress!
A simple version of get_commit_scripts (née get_commit_schedule) has been added to the resolution of T6671. After that PR is merged, this task will be closed, as that version suffices for current needs.
The following simple commit will need to be backported to sagitta so that a PR for the above will backport cleanly:
https://github.com/vyos/vyos-1x/pull/4013
I think we can close this one
The only thing that worked:
- Restart instance
- load /config/config.boot
- sudo podman rm suricata
- commit
Then it works
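For reference, a rough sketch of how those manual steps could be scripted (assumptions: the container is named 'suricata', its systemd unit follows the vyos-container-<name>.service pattern shown in the status output later in this thread, and podman/systemctl are available; this is not part of vyos-1x):

#!/usr/bin/env python3
# Hypothetical cleanup helper mirroring the manual steps above; not part of vyos-1x.
import subprocess

CONTAINER = 'suricata'
UNIT = f'vyos-container-{CONTAINER}.service'

def unit_failed(unit: str) -> bool:
    # 'systemctl is-failed --quiet' exits 0 only when the unit is in the failed state
    return subprocess.run(['systemctl', 'is-failed', '--quiet', unit]).returncode == 0

if unit_failed(UNIT):
    # remove the stale container so that a subsequent 'commit' can recreate it
    subprocess.run(['podman', 'rm', '--ignore', '-f', CONTAINER], check=False)
    print(f"Removed stale container '{CONTAINER}'; re-run commit to recreate it.")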
If the service is in the 'failed' state:
vyos@VyOS-Test01:~$ systemctl status vyos-container-suricata.service
× vyos-container-suricata.service - VyOS Container suricata
     Loaded: loaded (/run/systemd/system/vyos-container-suricata.service; static)
     Active: failed (Result: exit-code) since Fri 2024-08-23 10:32:44 UTC; 43s ago
   Duration: 4min 55.702s
    Process: 2855 ExecStartPre=/bin/rm -f /run/vyos-container-suricata.service.pid /run/vyos-container-suricata.service.cid (code=exited, sta>
    Process: 2856 ExecStart=/usr/bin/podman run --conmon-pidfile /run/vyos-container-suricata.service.pid --cidfile /run/vyos-container-suric>
    Process: 2867 ExecStopPost=/usr/bin/podman rm --ignore -f --cidfile /run/vyos-container-suricata.service.cid (code=exited, status=0/SUCCE>
    Process: 2873 ExecStopPost=/bin/rm -f /run/vyos-container-suricata.service.cid (code=exited, status=0/SUCCESS)
        CPU: 129ms
You are right, there is an op-mode command to restart the container:
restart container suricata
But I think some checks/changes are needed; otherwise, at the very least, people will end up executing the native Podman command to restart the container.
Sorry, but that was my own mistake. When I checked again today, I noticed that the connections were not allowed in the firewall.
It fails because you are doing it the wrong way.
PR to follow, with smoketests and cosmetic changes:
https://github.com/vyos/vyos-1x/compare/current...jestabro:configdep-prio
Thu, Aug 22
PR https://github.com/vyos/vyos-1x/pull/4003
vyos@r14:~$ show ntp
                             .- Number of sample points in measurement set.
                            /    .- Number of residual runs with same sign.
                           |    /    .- Length of measurement set (time).
                           |   |    /      .- Est. clock freq error (ppm).
                           |   |   |      /           .- Est. error in freq.
                           |   |   |     |           /         .- Est. offset.
                           |   |   |     |          |          |   On the -.
                           |   |   |     |          |          |   samples. \
                           |   |   |     |          |          |             |
Name/IP Address            NP  NR  Span  Frequency  Freq Skew  Offset  Std Dev
==============================================================================
ec2-34-206-168-146.compu>  31  15    70m     +0.225      0.056  +1295us   112us
ec2-18-193-41-138.eu-cen>  31  13    70m     -0.305      0.070   -968us   119us
ec2-122-248-201-177.ap-s>   6   3    52m     -2.587      0.901   +109us   252us
vyos@r14:~$
Wed, Aug 21
Is there any way to incentivize the addition of this?
A workaround for the DHCP client to make it work again: https://github.com/vyos/vyos-1x/pull/4002
A few immediate notes, before preparing the solution:
(1) this is independent of whether one is running under configd or not
(2) this is more easily triggered under 1.4/1.5, which have default ['system', 'conntrack', 'modules'] entries (fixed in current), though it can be reproduced in current with the above and 'set ... conntrack modules ..'
(3) this was avoided in 1.4.0 by the global dependency pruning; however, that raised other serious issues (T6559) due to constraints of the legacy commit algorithm.
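To make note (3) a little more concrete, here is a toy model of the tradeoff (illustrative only; this is not the vyos-1x implementation, and the script names are invented): global pruning runs each dependency script at most once per commit, which avoids duplicate re-runs like the one described above, but it also means a later change in the same commit cannot trigger a fresh re-run of that dependency.

# Toy model of commit-time dependency scheduling (illustrative only;
# NOT the vyos-1x algorithm, names invented for the example).

def run_commit(changed, deps, global_prune):
    """changed: scripts run for the user's changes, in order.
    deps: mapping of script -> dependency scripts to re-run after it."""
    executed, seen = [], set()
    for script in changed:
        executed.append(script)
        for dep in deps.get(script, []):
            if global_prune and dep in seen:
                continue  # pruned: this dependency already ran in this commit
            seen.add(dep)
            executed.append(dep)
    return executed

deps = {'firewall': ['conntrack'], 'nat': ['conntrack']}
print(run_commit(['firewall', 'nat'], deps, global_prune=True))
# ['firewall', 'conntrack', 'nat']              -- conntrack runs only once
print(run_commit(['firewall', 'nat'], deps, global_prune=False))
# ['firewall', 'conntrack', 'nat', 'conntrack'] -- conntrack re-runs after nat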
It looks like you are already running this with a high reporting level. Are you seeing any or all of the messages on stderr? I would try all of this on the command line first, and use the VyOS CLI later.
Possible fix:

if 'source_interface' in config:
    # my code here: use the source interface as the interface name
    config['ifname'] = config['source_interface']
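For illustration, with a hypothetical config dict (key names as in the snippet above, values made up), the effect is simply to populate 'ifname' from 'source_interface':

config = {'source_interface': 'eth0'}
if 'source_interface' in config:
    config['ifname'] = config['source_interface']
print(config['ifname'])  # eth0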
The issue is not found in version 1.3.8, but it is present in 1.4.0 and the latest 1.5 rolling release.
@shaneshort this might look like a trivial one but we've already checked it out