
Flow accounting for ppp interfaces does not work
Open · Normal · Public · Bug

Description

Test scheme (ref. from https://forum.vyos.io/t/flow-accounting-not-working-on-rolling-release/4438)
eth0 - uplink; address assigned via PPPoE.
eth1 - pppoe-server for internal clients

We don't see flow counters for ppp interfaces.
NAT is not used.

sever@prim# show interfaces 
 ethernet eth0 {
     description Uplink-pppoe-client
     duplex auto
     pppoe 0 {
         default-route auto
         mtu 1492
         name-server auto
         password primpass
         user-id primlogin
     }
     smp-affinity auto
     speed auto
 }
 ethernet eth1 {
     description PPPoE-server
     duplex auto
     smp-affinity auto
     speed auto
 }

pppoe-server configuration

service {
    pppoe-server {
        access-concentrator ACNPRIM
        authentication {
            local-users {
                username secondlogin {
                    password secondpass
                }
            }
            mode local
        }
        client-ip-pool {
            start xxx.xxx.242.245
            stop xxx.xxx.242.245
        }
        dns-servers {
            server-1 1.1.1.1
            server-2 8.8.8.8
        }
        interface eth1
        local-ip xxx.xxx.242.244
    }
}

Flow accounting

sever@prim# show system flow-accounting 
 interface eth0
 interface eth1
 interface pppoe0
 netflow {
     version 9
 }
 syslog-facility daemon

All flow counters are zero.

sever@prim:~$ show flow-accounting 
flow-accounting for [eth0]
Src Addr        Dst Addr        Sport Dport Proto    Packets      Bytes   Flows

Total entries: 0
Total flows  : 0
Total pkts   : 0
Total bytes  : 0

flow-accounting for [eth1]
Src Addr        Dst Addr        Sport Dport Proto    Packets      Bytes   Flows

Total entries: 0
Total flows  : 0
Total pkts   : 0
Total bytes  : 0

flow-accounting for [pppoe0]
Src Addr        Dst Addr        Sport Dport Proto    Packets      Bytes   Flows

Total entries: 0
Total flows  : 0
Total pkts   : 0
Total bytes  : 0

The ppp1 interface was created dynamically for the internal client:

sever@prim:~$ show pppoe-server sessions 
 ifname |  username   |    calling-sid    |       ip       | type  | comp | state  |  uptime  
--------+-------------+-------------------+----------------+-------+------+--------+----------
 ppp1   | secondlogin | 55:55:00:99:55:55 | xxx.xxx.242.245 | pppoe |      | active | 00:35:43

Show routes:

sever@prim:~$ show ip route 
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route

K>* 0.0.0.0/0 [0/0] is directly connected, pppoe0, 00:42:50
C>* xxx.xx.242.17/32 is directly connected, pppoe0, 00:42:50
C>* xxx.xx.242.245/32 is directly connected, ppp1, 00:42:06

Details

Difficulty level
Unknown (require assessment)
Version
VyOS 1.2.3
Why the issue appeared?
Will be filled on close
Is it a breaking change?
Unspecified (possibly destroys the router)
Issue type
Bug (incorrect behavior)

Event Timeline

That seems to be a uacctd problem (tested in VyOS 1.3-rolling-201912201452 with the new flow-accounting python/xml conf).

I used this lab:

lab.png (416×667 px, 36 KB)

Client 2 pings client 1.

We need to put ppp0 (or ppp1, ppp2, ... pppN) into set system flow-accounting interface pppX.
This is important because on the server there is no common PPPoE interface; each client gets its own pppX pseudo-interface on the server.
That's not scalable: we would need to add an interface to flow-accounting with
set system flow-accounting interface pppX
for each client connection to our PPPoE server.
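
For illustration, with three concurrent sessions the configuration would have to grow like this (the interface names are assumptions; they depend on the order in which sessions come up, and each entry would have to be added and removed by hand):

set system flow-accounting interface 'ppp0'
set system flow-accounting interface 'ppp1'
set system flow-accounting interface 'ppp2'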

Looking at iptables on the server, you can see that no traffic is captured by rule 1 (NFLOG on eth0), but packets do traverse rule 2 (ppp0):

root@server:~# iptables -t raw -L VYATTA_CT_PREROUTING_HOOK --line-numbers -v  
Chain VYATTA_CT_PREROUTING_HOOK (1 references)
num   pkts bytes target     prot opt in     out     source               destination         
1        0     0 NFLOG      all  --  eth0   any     anywhere             anywhere             /* FLOW_ACCOUNTING_RULE */ nflog-group 2 nflog-range 128 nflog-threshold 100
2      174 14616 NFLOG      all  --  ppp0   any     anywhere             anywhere             /* FLOW_ACCOUNTING_RULE */ nflog-group 2 nflog-range 128 nflog-threshold 100

Another way to test is to send some pings from one client and capture the traffic (tcpdump) on nflog group 2:

root@server:~# service uacctd stop
root@server:~# tcpdump -s 0 -n -i nflog:2
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on nflog:2, link-type NFLOG (Linux netfilter log messages), capture size 262144 bytes
18:19:21.052092 IP 192.168.200.2 > 192.168.200.3: ICMP echo request, id 1752, seq 17379, length 64
18:19:21.054775 IP 192.168.200.2 > 192.168.200.3: ICMP echo request, id 1752, seq 17380, length 64
18:19:23.068508 IP 192.168.200.2 > 192.168.200.3: ICMP echo request, id 1752, seq 17381, length 64
18:19:23.071122 IP 192.168.200.2 > 192.168.200.3: ICMP echo request, id 1752, seq 17382, length 64

This part works as expected: packets arrive at nflog group 2 (as we saw in the iptables rules above) but are not processed by uacctd.
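
One extra check, not part of the original test (and assuming a stock kernel with nfnetlink_log support), is to confirm that uacctd is actually bound to the NFLOG group while it is running:

root@server:~# cat /proc/net/netfilter/nfnetlink_log

Each line corresponds to an NFLOG group together with the netlink portid of the process subscribed to it; if no line for group 2 appears while uacctd is running, the daemon never attached to the group and the packets have nowhere to go.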

It seems to be a uacctd problem (perhaps the aggregation variables?).

Tested with minimal aggregate options (no MAC, VLAN, interface, ...):

/etc/pmacct/uacctd.conf:

aggregate:src_host,dst_host,src_port,dst_port
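
For reference, the minimal memory-plugin setup implied by the commands in this report would look roughly like this (a sketch only; the plugin, pipe path and NFLOG group are inferred from the pmacct client invocation and the iptables rules shown here, not copied from the file VyOS generates):

! minimal uacctd.conf for an in-memory table, read via /tmp/uacctd.pipe
plugins: memory
imt_path: /tmp/uacctd.pipe
! NFLOG group the daemon subscribes to (group 2 in the rules above)
uacctd_group: 2
snaplen: 128
aggregate: src_host,dst_host,src_port,dst_port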

Still no flow entries:

root@server:~# service uacctd restart
root@server:~# /usr/bin/pmacct -s  -T flows -p /tmp/uacctd.pipe 
SRC_IP                                         DST_IP                                         SRC_PORT  DST_PORT  PACKETS               BYTES

For a total of: 0 entries

Same behaviour with a uacctd version compiled from pmacct git, 1.7.5-git (20191224-00):
packets arrive correctly at the nflog group (tcpdump shows them) but they aren't processed by uacctd.

root@server:~# uacctd -V
Linux NetFilter NFLOG Accounting Daemon, uacctd 1.7.5-git (20191224-00)

Arguments:
 [....]
root@server:~# iptables -t raw -L VYATTA_CT_PREROUTING_HOOK 
Chain VYATTA_CT_PREROUTING_HOOK (1 references)
target     prot opt source               destination         
NFLOG      all  --  anywhere             anywhere             /* FLOW_ACCOUNTING_RULE */ nflog-group 2 nflog-range 128 nflog-threshold 100
NFLOG      all  --  anywhere             anywhere             /* FLOW_ACCOUNTING_RULE */ nflog-group 2 nflog-range 128 nflog-threshold 100
NFLOG      all  --  anywhere             anywhere             /* FLOW_ACCOUNTING_RULE */ nflog-group 2 nflog-range 128 nflog-threshold 100
RETURN     all  --  anywhere             anywhere
root@server:~# iptables -t raw -S VYATTA_CT_PREROUTING_HOOK
-N VYATTA_CT_PREROUTING_HOOK
-A VYATTA_CT_PREROUTING_HOOK -i ppp1 -m comment --comment FLOW_ACCOUNTING_RULE -j NFLOG --nflog-group 2 --nflog-range 128 --nflog-threshold 100
-A VYATTA_CT_PREROUTING_HOOK -i eth0 -m comment --comment FLOW_ACCOUNTING_RULE -j NFLOG --nflog-group 2 --nflog-range 128 --nflog-threshold 100
-A VYATTA_CT_PREROUTING_HOOK -i ppp0 -m comment --comment FLOW_ACCOUNTING_RULE -j NFLOG --nflog-group 2 --nflog-range 128 --nflog-threshold 100
-A VYATTA_CT_PREROUTING_HOOK -j RETURN

Conf

set service pppoe-server authentication local-users username test password 'test'
set service pppoe-server authentication mode 'local'
set service pppoe-server client-ip-pool start '192.168.200.2'
set service pppoe-server client-ip-pool stop '192.168.200.100'
set service pppoe-server dns-servers server-1 '192.168.123.1'
set service pppoe-server interface eth0
set service pppoe-server local-ip '192.168.200.1'
set system flow-accounting interface 'ppp0'
set system flow-accounting interface 'eth0'
set system flow-accounting interface 'ppp1'
set system flow-accounting netflow server 192.168.100.2 port '2055'
set system flow-accounting netflow source-ip '192.168.123.3'
set system flow-accounting netflow version '9'

No flows detected

root@server:~# /usr/bin/pmacct -s  -T flows -p /tmp/uacctd.pipe 
IN_IFACE    SRC_MAC            DST_MAC            VLAN   SRC_IP                                         DST_IP                                         SRC_PORT  DST_PORT  PROTOCOL    TOS    PACKETS               FLOWS                 BYTES

For a total of: 0 entries
root@server:~# /usr/bin/pmacct -s  -T flows -p /tmp/uacctd.pipe 
IN_IFACE    SRC_MAC            DST_MAC            VLAN   SRC_IP                                         DST_IP                                         SRC_PORT  DST_PORT  PROTOCOL    TOS    PACKETS               FLOWS                 BYTES

For a total of: 0 entries

But the packets are arriving at nflog group 2 (ICMP and an SSH session from client1 to client2):

service uacctd stop
tcpdump -s 0 -n -i nflog:2

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on nflog:2, link-type NFLOG (Linux netfilter log messages), capture size 262144 bytes
19:22:01.173588 IP 192.168.200.2 > 192.168.200.3: ICMP echo request, id 2001, seq 870, length 64
19:22:01.175570 IP 192.168.200.3 > 192.168.200.2: ICMP echo reply, id 2001, seq 870, length 64
19:22:01.952071 IP 192.168.200.3.55946 > 192.168.200.2.22: Flags [P.], seq 3481395271:3481395355, ack 2147864937, win 1002, options [nop,nop,TS val 1330901319 ecr 2229139660], length 84
[...]

19:22:02.988109 IP 192.168.200.3.55946 > 192.168.200.2.22: Flags [.], ack 2225, win 1002, options [nop,nop,TS val 1330901528 ecr 2229152143], length 0
19:22:04.177627 IP 192.168.200.2 > 192.168.200.3: ICMP echo request, id 2001, seq 872, length 64
19:22:04.179864 IP 192.168.200.3 > 192.168.200.2: ICMP echo reply, id 2001, seq 872, length 64
19:22:04.181750 IP 192.168.200.3.55946 > 192.168.200.2.22: Flags [P.], seq 640:676, ack 2225, win 1002, options [nop,nop,TS val 1330903534 ecr 2229152143], length 36
[...]
syncer triaged this task as Normal priority. Jan 1 2020, 1:55 PM
syncer edited projects, added VyOS 1.3 Equuleus; removed VyOS 1.2 Crux.
syncer added subscribers: Unknown Object (User), syncer. Jan 1 2020, 1:58 PM

@Dmitry and I discussed that it would be wise to move to ipt-netflow for better performance.

In T1838#50692, @syncer wrote:

@Dmitry and I discussed that it would be wise to move to ipt-netflow for better performance.

It could be interesting.
I'm not sure that all the options pmacct can handle are available in ipt-netflow.
So I think the best way is to start replicating the whole netflow skeleton (flow-accounting-ng) in another tree:

set system flow-accounting-ng netflow ....
or, even better:
set service flow-accounting(-ng)

and have both systems working at the same time.

I can prepare the scripts for a basic implementation.
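
As a very rough sketch of what the ipt-netflow path might look like under the hood (the module parameters and the chain used here are assumptions, not an agreed design; the collector address is the one from the test configuration above), the backend would load the kernel module with the collector as a parameter and then match all ppp interfaces with a single wildcard rule:

# load the kernel module with the NetFlow v9 collector as a parameter
modprobe ipt_NETFLOW destination=192.168.100.2:2055 protocol=9
# one wildcard rule covers every ppp session instead of one NFLOG rule per interface
iptables -t raw -I PREROUTING -i ppp+ -j NETFLOW

Whether all the aggregation options pmacct exposes can be mapped onto ipt-netflow is exactly the open question above.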

Hi all, is there any movement on a fix for this? It would be great to get NetFlow data out of my PPPoE interfaces - happy to run any tests for you.

Hi - is there any update on a fix for this bug?

Well, you could try using ppp* as a globbing pattern - maybe that works?

Hi @c-po - do you mean 'set system flow-accounting interface ppp*'? Unfortunately I have tried this and I'm still not getting anything out.
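
For reference, iptables itself accepts a trailing '+' as an interface wildcard, so a single rule of this shape (a sketch using the same group/range/threshold values as the generated rules above, not something VyOS produces today) would match every ppp session without per-session configuration:

iptables -t raw -A VYATTA_CT_PREROUTING_HOOK -i ppp+ -m comment --comment FLOW_ACCOUNTING_RULE -j NFLOG --nflog-group 2 --nflog-range 128 --nflog-threshold 100

That said, the captures earlier in this task show the NFLOG rules already see the packets; the flows disappear inside uacctd, so a wildcard rule alone would not explain the missing counters.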

erkin renamed this task from Flow accounting for ppp interfaces not work. to Flow accounting for ppp interfaces not work. Aug 31 2021, 6:12 PM
erkin set Issue type to Bug (incorrect behavior).