Kernel support is there: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/networking/vrf.txt
I'd stick to VRF lite for now ;)
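For anyone who wants to try it by hand, the basic kernel-level workflow from the linked vrf.txt looks roughly like this (device name and table number are illustrative):

```
# create a VRF device bound to routing table 10 and bring it up
ip link add vrf-blue type vrf table 10
ip link set dev vrf-blue up
# enslave an interface to the VRF
ip link set dev eth1 master vrf-blue
# inspect the VRF's routing table
ip route show vrf vrf-blue
```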
Status | Subtype | Assigned | Task
---|---|---|---
Resolved | FEATURE REQUEST | c-po | T31 Add VRF support
Resolved | FEATURE REQUEST | c-po | T2111 VRF add route leaking support
Resolved | FEATURE REQUEST | c-po | T5234 Add bash identifier for given VRF instance
While it would be quite nice to have a Cumulus 4.0+ style default management VRF and make all services management aware (which would considerably increase the amount of work needed to get something out), I am instead proposing, as a first step, to make sure that all services running on the default VRF are reachable from any VRF.
Could be a way. But from my experience most people use VRFs to separate management from production, and as a second priority to separate customers and so on.
But the management VRF must not be the "default" VRF.
Here is a patch to implement VRF support. The daemons are set to bind across all VRFs so that BGP and other protocols will work in every VRF.
https://gist.github.com/thomas-mangin/7704c538d905190bd05cfe613bd9f4f5
It is working for Ethernet interfaces as far as I could test; other interface types have not been tested yet.
This patch implements a work-around for T2027
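I have not dug through the gist, so treat this as an assumption about the mechanism, but the usual kernel knobs for letting daemons that listen in the default VRF accept traffic arriving via any VRF are the l3mdev sysctls:

```
# allow TCP/UDP/raw sockets bound in the default VRF to service any VRF (l3mdev);
# whether the patch uses exactly this mechanism is an assumption
sysctl -w net.ipv4.tcp_l3mdev_accept=1
sysctl -w net.ipv4.udp_l3mdev_accept=1
sysctl -w net.ipv4.raw_l3mdev_accept=1
```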
```
[email protected]:~$ configure
[edit]
[email protected]# show vrf
 name blue {
     table 100
 }
 name red {
     table 200
 }
[edit]
[email protected]# show interfaces ethernet eth2
 hw-id 08:00:27:68:d0:b1
 vrf blue
[edit]
```
```
[email protected]:~$ show vrf
interface  state  mac                flags
---------  -----  -----------------  ------------------------
blue       up     36:7b:15:47:9e:df  noarp,master,up,lower_up
red        up     16:66:7f:42:a0:45  noarp,master,up,lower_up

[email protected]:~$ show vrf name blue
interface  state  mac                flags
---------  -----  -----------------  ------------------------
blue       up     36:7b:15:47:9e:df  noarp,master,up,lower_up

[email protected]:~$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:fa:12:53 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:0d:25:dc brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master blue state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:68:d0:b1 brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master blue state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:f0:17:c5 brd ff:ff:ff:ff:ff:ff
6: blue: <NOARP,MASTER,UP,LOWER_UP> mtu 65536 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 36:7b:15:47:9e:df brd ff:ff:ff:ff:ff:ff
7: red: <NOARP,MASTER,UP,LOWER_UP> mtu 65536 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 16:66:7f:42:a0:45 brd ff:ff:ff:ff:ff:ff
```
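For reference, the set commands behind the configuration shown above should be something along these lines (assuming the set vrf name syntax discussed later in this task; eth3 is derived from the ip link output):

```
set vrf name blue table 100
set vrf name red table 200
set interfaces ethernet eth2 vrf blue
set interfaces ethernet eth3 vrf blue
```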
show interfaces was not made VRF aware, as it is implemented the "old way" and I am not sure how to perform the change (the policy seems to be to only accept patches for vyos-1x).
At this point, I would need some help with the review to move it forward.
Implementing a management VRF and working out all the bindings would make this work much more complex. I am not opposed to looking into it later on.
I think we should put together, somewhere, a list of services and what level of VRF support each of them has.
OpenSSH, for example, has built-in support for VRFs.
And we should declare which services should be available in the management VRF (a small example of running a service inside a VRF follows the links below).
From my point of view:
should for multiple routing tables:
https://andir.github.io/posts/linux-ip-vrf/
http://www.allgoodbits.org/articles/view/24
https://patchwork.ozlabs.org/patch/546171/
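As a concrete illustration of the approach described in the linked articles, a test command or a daemon can be started inside a specific VRF with iproute2; the VRF name mgmt below is only an example:

```
# run a command in the context of a given VRF (name "mgmt" is illustrative)
sudo ip vrf exec mgmt ping 192.0.2.1
# the same mechanism can be used to start a daemon bound to that VRF
sudo ip vrf exec mgmt /usr/sbin/sshd -D
```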
Why do we need to explicitly create the routing table? Why not name the routing table entry after the VRF's name? We should try to keep the CLI as minimal as possible. More CLI nodes, more headache.
When I think about the Cisco VRF implementation I use, I just create a VRF with a name and refer to it. It creates a routing table for me and I do not have to care. When adding routes to the table I refer to the VRF name and not an arbitrary number.
The downside of an explicit table number is that a user now needs to remember the VRF-to-table mapping.
We could indeed create the VRF as we parse interfaces and auto-allocate the routing table number, removing this control from the user.
I would have to move the VRF creation code into the Interface class, and any typo in a name would create two VRFs as there would be no sanitisation, but auto-completion could be done via /etc/iproute2/rt_tables.
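For reference, the name-to-table mapping iproute2 uses lives in /etc/iproute2/rt_tables; the reserved entries below are the stock ones, while the blue/red lines only illustrate what auto-generated entries could look like:

```
$ cat /etc/iproute2/rt_tables
#
# reserved values
#
255     local
254     main
253     default
0       unspec
#
# local (illustrative VRF entries)
#
100     blue
200     red
```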
I am not aware of any "end of configuration" check hook which could be used to parse all the interfaces, notice that a VRF is no longer used, and remove it.
This would have to be run once ALL the configuration has been parsed and acted upon, as a VRF can be used by different interfaces, which have different handlers.
If we do not, we would end up leaking resources.
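A rough sketch of what such an end-of-commit cleanup could look like with plain iproute2 (purely illustrative, not part of the patch):

```
# list all VRF devices and delete those that no longer have member interfaces
for vrf in $(ip -br link show type vrf | awk '{print $1}'); do
    if [ -z "$(ip -br link show master "$vrf")" ]; then
        ip link del dev "$vrf"
    fi
done
```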
Auto-completion should be done via a CLI path:
```
<completionHelp>
    <path>vrf name</path>
</completionHelp>
```
Assuming the VRF name will be set using set vrf name 'red'.
Adding interfaces to VRFs seems to be a bit of a hassle:
Weighing the pros and cons, I will go with option #2 for now.
Adding a dummy interface to VRF:
```
[email protected]# show interfaces dummy dum1
 address 1.1.1.1/32
 vrf foo
```
```
[email protected]# sudo ip vrf exec foo ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=64 time=0.073 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=64 time=0.082 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=64 time=0.070 ms

--- 1.1.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 30ms
rtt min/avg/max/mdev = 0.070/0.075/0.082/0.005 ms
```
```
[email protected]# ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=59 time=7.92 ms
```
Note the latency difference: the second ping is not executed inside the VRF, so 1.1.1.1 is reached via the default routing table rather than via the local dummy interface in VRF foo.
VRF route leaking needs to be added to the CLI. Routes can be leaked using:
FRR vtysh: ip route 192.0.2.0/24 172.18.204.254 vrf red nexthop-vrf default
This will add a route into VRF red for destination 192.0.2.0/24 whose next-hop is resolved in the default VRF.
```
vyos# show ip route vrf red
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued route, r - rejected route

VRF red:
K    0.0.0.0/0 [255/8192] unreachable (ICMP unreachable), 00:33:44
S>*  192.0.2.0/24 [1/0] via 172.18.204.254, eth0.204(vrf default), 00:01:01
```