Works as expected on 1.3-rolling-202001270217.
Jan 27 2020
I solved this issue by doing the following:
Jan 26 2020
All right, we will stay with Squid; however, I may drop SquidGuard, but I will ask in the forum first whether that feature is required by many users.
I can also confirm this works in 1.2.4
Sounds like a duplicate of T1632
Welcome, thanks for testing!
Maybe for future CLI designs, the following would be cleaner:
set service snmp extension name 'foo bar' script /usr/bin/echo
vyos@vyos# set service snmp script-extensions extension-name 'foo' script /usr/bin/echo
vyos@vyos# set service snmp script-extensions extension-name 'foo bar' script /usr/bin/echo
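For reference, a minimal sketch of what such an extension presumably renders to in the generated net-snmp configuration (the extend directive is standard net-snmp; the exact file path and mapping shown here are assumptions):

# /etc/snmp/snmpd.conf (assumed location of the generated config)
# each CLI extension is expected to map to one net-snmp "extend" entry
extend foo /usr/bin/echo
# its output can then be queried via NET-SNMP-EXTEND-MIB::nsExtendOutput1Line."foo"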
Restarting now no longer shows any error/warning
This is actually an "upstream" bug (see https://bugs.launchpad.net/ubuntu/+source/net-snmp/+bug/1384122), but it can be fixed via our own scripts.
Working fine on 1.3-rolling-202001260217.
Jan 25 2020
Sure, I can probably do that in a day or two and will report back! I didn't even think to try that on my test device; my mind was just stuck on not wanting to upgrade my production devices at the moment.
Ah, your version is a bit old; it could predate the migration of the ip enable-arp-ignore script to XML/Python. Could you please retest with a newer rolling release?
Exactly, there is a race condition which I am trying to reproduce, but cannot as of now with VMware.
I just tried it on a different device with a cleaner config, and it's reproducible with this config:
Jan 24 2020
PR https://github.com/vyos/vyos-1x/pull/209
Also added missing completion help values.
Confirming that I also see this on 1.3-rolling-202001240217. I just upgraded this morning and I see the same "unknown layer 3 protocol" error as reported.
This issue is still present in 1.3-rolling-202001240217
Unfortunately I can not reproduce this.
One seems to be the master (parent) process; try ps faux
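For example (the grep pattern below is only illustrative; substitute the daemon in question):

# show the full process tree; parents appear above their indented children
ps faux | less
# or narrow it down to a specific service
ps faux | grep -i DAEMON_NAME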
SNMP with VRRP works:
# snmpwalk -v2c -c public 10.0.0.1 VRRP-MIB:vrrpOperations
VRRP-MIB::vrrpNodeVersion.0 = INTEGER: 2
VRRP-MIB::vrrpNotificationCntl.0 = INTEGER: disabled(2)
VRRP-MIB::vrrpOperVrId.3.10 = INTEGER: 10
VRRP-MIB::vrrpOperVirtualMacAddr.3.10 = STRING: 52:54:0:1d:4:4e
VRRP-MIB::vrrpOperState.3.10 = INTEGER: backup(2)
VRRP-MIB::vrrpOperAdminState.3.10 = INTEGER: up(1)
VRRP-MIB::vrrpOperPriority.3.10 = INTEGER: 50
VRRP-MIB::vrrpOperIpAddrCount.3.10 = INTEGER: 1
VRRP-MIB::vrrpOperMasterIpAddr.3.10 = IpAddress: 10.0.0.2
VRRP-MIB::vrrpOperPrimaryIpAddr.3.10 = IpAddress: 10.0.0.1
VRRP-MIB::vrrpOperAuthType.3.10 = INTEGER: noAuthentication(1)
VRRP-MIB::vrrpOperAdvertisementInterval.3.10 = INTEGER: 1 seconds
VRRP-MIB::vrrpOperPreemptMode.3.10 = INTEGER: true(1)
VRRP-MIB::vrrpOperVirtualRouterUpTime.3.10 = Timeticks: (2) 0:00:00.02
VRRP-MIB::vrrpOperProtocol.3.10 = INTEGER: ip(1)
VRRP-MIB::vrrpOperRowStatus.3.10 = INTEGER: active(1)
VRRP-MIB::vrrpAssoIpAddr.3.10.10.0.0.254 = IpAddress: 10.0.0.254
VRRP-MIB::vrrpAssoIpAddrRowStatus.3.10.10.0.0.254 = INTEGER: active(1)
Jan 23 2020
@max1e6 Did you have a chance to test? Otherwise I assume the issue isn't present anymore.
Let me know if you come across any issues.
By default we don't need a delay; I think it should be a configurable feature.
The configuration for this:
set service pppoe-server pado-delay delay 100 sessions 100
set service pppoe-server pado-delay delay 200 sessions 300
set service pppoe-server pado-delay delay 300 sessions 1000
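Since the VyOS pppoe-server is backed by accel-ppp, a possible sketch of how such CLI values might map onto the accel-ppp configuration (the exact rendering is an assumption; accel-ppp's pado-delay takes a base delay plus delay:session-threshold pairs):

[pppoe]
# assumed mapping of the CLI above: no delay at first, 100 ms once 100
# sessions are reached, 200 ms at 300 sessions, 300 ms at 1000 sessions
pado-delay=0,100:100,200:300,300:1000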
@Dmitry What default delays do you suggest?
@syncer After weighing everything, because of the authentication modules Squid brings in, I would rather stay with Squid for now. Let me know what you think.
If possible, please explain to me in a private message how I can reproduce this behaviour. In my lab it does not reproduce.
This patch tested successful on VyOS 1.2.4 VM with 25 network interfaces.
Also works as expected on VyOS 1.3-rolling-202001230217
Thx @runar !
@TriJetScud Would you please make it work on the latest VyOS version?
Jan 22 2020
This also could be the same issue as described in T577
This issue is possibly fixed in current by ticket T1970; could you retry with the newest rolling release?
When using logrotate we can take full ownership of the resources that will be used, both the number of files and their size, so I think this will be the best approach.
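A minimal logrotate sketch along those lines (the log path and limits are placeholders, not the actual VyOS values):

/var/log/example.log {
    rotate 10       # keep at most 10 rotated files
    size 1M         # rotate once the active file exceeds 1 MiB
    compress        # compress rotated files to bound disk usage
    missingok
    notifempty
}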