
Extend VyOS to support appliance LCDs
Closed, Resolved · Public · Enhancement


As a follow-on to T405 (Add binaries for lcdproc), perform proper integration of the lcdproc package to the VyOS command line.

lcdproc works as follows:

  • The LCDd server listens for TCP connections from clients
  • LCDd is told what driver to load for a specific LCD screen
  • The driver configures itself if needed
  • Client lcdproc collects system statistics and sends them to the server over TCP for display on the LCD
  • Client lcdexec makes a set of menus and commands available to the server over TCP and runs commands as requested
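The client/server exchange described above is a simple line-based TCP protocol (LCDd listens on port 13666 by default). A minimal sketch in Python, assuming a running LCDd instance; the screen and widget names are arbitrary, not anything defined by the project:

```python
import socket

def lcd_session_cmds(text, screen="s1", widget="w1"):
    """Build the minimal LCDproc protocol commands needed to show one
    line of text on the display. Names s1/w1 are arbitrary examples."""
    return [
        "hello",                                 # handshake; server replies with display size
        f"screen_add {screen}",                  # register a screen with LCDd
        f"widget_add {screen} {widget} string",  # add a plain string widget
        f"widget_set {screen} {widget} 1 1 {{{text}}}",  # text at column 1, row 1
    ]

def send_to_lcdd(text, host="127.0.0.1", port=13666):
    """Push the commands to a running LCDd over TCP (sketch only)."""
    with socket.create_connection((host, port)) as s:
        for cmd in lcd_session_cmds(text):
            s.sendall((cmd + "\n").encode())
            s.recv(4096)  # read the server's one-line reply
```

This mirrors what the lcdproc and lcdexec clients do at a much larger scale: open a TCP session, announce themselves, then push screens and widgets.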

Official repository for the LCDproc project.


Difficulty level
Unknown (require assessment)
Why the issue appeared?
Will be filled on close
Is it a breaking change?
Unspecified (possibly destroys the router)

Related Objects

Event Timeline

fmertz created this object in space S1 VyOS Public.
c-po added a project: VyOS 1.3 Equuleus.
c-po subscribed.

Overview of the effort:

  1. Create necessary files to start the LCDd daemon as a service
  2. Create necessary files to start the lcdproc client
  3. Create necessary files to start the lcdexec client
  4. Create the necessary XML to supplement the VyOS command line
  5. Create the template for LCDd.conf
  6. Create the template for lcdproc.conf
  7. Create the template for lcdexec.conf
  8. Create the necessary glue code

This has been on my extended todo list for a long time now, but there were always higher priority issues. I'm glad someone is working on it!

My thoughts were that this should be under 'system display'. I'd avoid the term 'lcd' as lcdproc can theoretically control any kind of text display (LCD, LED, VFD, ...).

OK, question on the approach. Looking at LCDd.conf (check the link above), there are a few server options but TONS of individual driver options. Complete support in VyOS would be fairly straightforward, but would lead to a massive XML file. The lcdproc project has been around a while, so many different devices are supported, most of them probably somewhat historic or even one-offs. We can (artificially) categorize them into two groups:

  1. Group 1 is LCDs users buy and attach to their system on their own as an add-on. Can be USB, fitting in a 5.25" bay, etc.
  2. Group 2 is LCD screens already built into a device, typically a 1U rack-mount network appliance of some sort.

So, what should we do here?

  1. Approach 1: Focus on creating a VyOS command line only for the drivers known to work in network appliances (Watchguard, Lanner, Caswell, etc.), since we are within the scope of a network OS. The idea here is that the "driver" name would be understood as a macro for a whole set of configuration values, and be super easy for users to configure. As all appliances of a given type work the same, a group of settings could be configured all at once without having users go through detailed settings like port name (/dev/...), port speed, screen size, etc., which users would likely get wrong all the time. This would lead to a short, tight set of options, likely easy to upgrade and supplement over time as needed. The downside is that we support what we support and that is it; power users would have to configure lcdproc manually and use the binaries.
  2. Approach 2: Focus on creating the necessary command line to support everything available in LCDd.conf. We never know what is popular and what is not, so users need to be empowered with any and all options.
  3. Approach 1.5: Do Approach 1, but add an "escape" mechanism where power users would be able to enter whatever they want and put that open text directly in LCDd.conf without much validation.

I am leaning towards Approach 1 to start with. It keeps the command line isolated from the implementation and makes it super easy for users to get something working. There is a limited number of network appliances, covered by a handful of drivers.

Thanks for any feedback.

I'd go with approach 3, if 2 is too complex. Have a predefined set of appliances that can be configured by a single option. For all other scenarios, one of:
a) have the user supply the path to a file in /config that the script will include in lcdd.conf as-is (as including a multi-line string in the config directly is very awkward and unreadable).
b) we could for example make a /config/lcdproc directory, containing a template conf file that would be used if the user selected that option in config, still starting the daemon via the config.
c) or split out the individual driver sections with defaults (as in lcdd.conf) into many files in /config/lcdproc/$drivername.conf that can be edited by the user, and have a config option that selects which driver to use, and have the user edit the file to configure it.

It's not optimal as the config file won't be versioned, so can't be reverted, but is far simpler than putting all driver options into the XML.

I'd go with option 1 to have a well-known list of working and supported LCD displays. Each will have its own configuration template which is used when implementing. I'm not a fan of "power user options" as this usually causes more harm than good - also users tend to be overwhelmed by the number of CLI options. Instead, we should make adding new display types super easy with proper documentation.

@fmertz please also consider making use of the new vyos.config.get_config_dict() abstraction (see the linked examples).

@c-po I tend to agree on having as many predefined templates as possible, but I'd leave the option to have a custom config if the user wants to; I don't like imposing artificial limitations. We already allow custom options with dhcp-server, openvpn..., so why not allow specifying your own conf file for the driver section to include? Some things are impossible without either this or going with approach 2 and exposing absolutely all configurable driver options through the config. I'd prefer that, but if it results in too many options/too much config size, the alternative is as I described. But in the long term I think approach 2 would be the best.

I'm no fan of the raw options from dhcp and openvpn and think we should not add more of those. Unfortunately they have been inherited from Vyatta. ISC DHCP could never be replaced by any other DHCP server due to this fact which is IMHO a super bad CLI design.

At this point, I could use a couple of wise words about the development process.

I have a VirtualBox VM running the latest ISO, configured with eth0 on DHCP. Seems fine.

I have a LXC container running Debian 10 (not 8), supplemented manually with all the installs from the Dockerfile. I somehow had to add dh-python3.

I git-cloned the vyos-1x package from GitHub.

I added my files (XML, Python, templates) and rebuilt it with: "dpkg-buildpackage -b -us -uc -tc"

No build errors; I have the file "vyos-1x_1.3dev0-1574-ga686e090_all.deb"

I scp the file over to the VM, and run "sudo dpkg -i vyos-1x_1.3dev0-1574-ga686e090_all.deb"

Tons of errors, starting with:
trying to overwrite '/opt/vyatta/share/vyatta-op/templates/show/route-map/node.def', which is also in package vyatta-op-quagga 0.11.35+vyos2+current1

And then a bunch of:

unable to restore backup version of '/opt/vyatta/share/vyatta-op/templates/show/reboot/node.def': Stale file handle

Thanks for any pointer

Hi @fmertz, this is a more or less common "issue" during peak development times.

You happened to pick an ISO image which has the old vyatta-op-quagga package installed, which still ships the /opt/vyatta/share/vyatta-op/templates/show/route-map/node.def file, but that file now also made it into vyos-1x during the ongoing rewrites.

The easiest solution would be to upgrade to a more recent rolling ISO and rebase your changes onto the latest version of vyos-1x. To avoid this problem in the future, develop on a branch in vyos-1x.

Fair point. In that case I agree with not including a raw config option.
As for the errors when installing vyos-1x, c-po already pointed out why this occurs.For this reason I don't rebase on upstream while working on a set of changes locally, I always try to keep the installed iso and local git state as much together as possible. I also run docker from the vyos-build repo and have the vyos-1x repo dir in vyos-build/packages/vyos-1x (where the included scripts/build-packages would put it) so I can just docker run and build without having to copy any files anywhere, just scp the built deb into the VM.

@fmertz for easier developing I have a bunch of BASH aliases which are also mapped into my docker container.

vyblb will launch the build container for VyOS 1.3 and dbld will generate the *deb package*

I then use this "local alias" which is non versioned to transfer the DEB to my dev device behind IP

alias v1x='scp_vyos-1x'
alias scp_vyos-1x='function _vyos_v1x() { \
    files=$(ls -1t ~/vyos-1x*.deb | head -n 2)
    scp -r $files $1:/tmp
    if [ "$?" == "0" ]; then
        ssh $1 sudo dpkg --install --force-all /tmp/vyos-1x*_all.deb
        ssh $1 sudo rm -f /tmp/vyos-1x*.deb
    fi
}; _vyos_v1x'

OK, another approach question.

This lcd stuff works with a "server" that loads a driver. It binds to an IP address (typically localhost). Then there is a separate client that collects the system information (cpu load, etc.) and sends it over TCP to the server. Having flexibility in the IP address allows for scenarios where the server and client run on different hosts. The server runs where the LCD is, the client can run on another host. There could be client connections coming from several hosts, I suppose.

Approach 1: Ignore all this flexible IP stuff, run the server and the client on localhost. Nobody really does this disconnected stuff. In the context of a router, running the client alone is not super useful because the client does not get into enough detail, like, say, SNMP does. Running the server alone is possible, but who wants to stare at a router and see the client details from another host (like a NAS or whatever). The only immediate use is the server and client running on the router together. There is no benefit to complicating the CLI to support this feature. All we want is to display router details on the router LCD. The fact that there is a server and a client is not something we want to expose in the CLI. This is the quickest route to having users actually use this feature without pitfalls. This is also possibly safer, as the server binds to localhost, so there is no immediate attack vector from the WAN or LAN.

Approach 2: Support the IP stuff. Let users choose the IP address and port for the server as well as the client. If they do not match (and the user meant to), oh well. If the user binds the server to the WAN and exposes the server to the internet, oh well. If it turns out this server has a security flaw and compromises the system by binding to a LAN or WAN address, oh well.

At this point, I am leaning towards Approach 1: keep it simple and possibly safer for the first release. We can always support more features of the lcdproc package if there is user demand for it, tempered by a general desire to keep the system secure.

Thanks for any feedback.

FWIW, this integration package is coming along nicely. I was able to create the XML CLI. The Python code is kept to a minimum by passing a dictionary of the Config to the template engine's "render". At this point, I can generate the proper LCDd.conf and lcdproc.conf based on the CLI. I now need to work on start/stop/restart as well as (basic) config validation. I have nothing for lcdexec/menu thus far.
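For illustration, that render step can be sketched with plain string formatting (the actual vyos-1x code passes the config dict to a Jinja2 template; the keys and defaults below are assumptions, not the real schema):

```python
def render_lcdd_conf(cfg):
    """Render a minimal LCDd.conf-style [server] section from a config
    dict such as the CLI would produce (keys are illustrative)."""
    lines = [
        "[server]",
        "Driver={}".format(cfg.get("model", "")),
        "Bind=127.0.0.1",   # approach 1: bind to localhost only
        "Port=13666",
    ]
    return "\n".join(lines) + "\n"

# Example: a dict resembling the 'system display' config subtree
conf_text = render_lcdd_conf({"model": "EZIO"})
```

The real template would of course cover the driver sections and the lcdproc.conf screens as well; this only shows the dict-in, text-out shape of the flow.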

Design CLI

system display type (SDEC|EZIO)

system display show host (cpu|cpu-all|cpu-hist|load-hist|memory|proc|disk|uptime)
                    network interface <intName> alias <alias>
                            units (bps|Bps|pps)
                    clock (big|mini|date-time)

system display menu

Example CLI:

vyos@vyosdev# show system display
 show {
     clock big
     host uptime
     host cpu
     host proc
     network {
         interface eth0 {
             alias WAN
         }
         interface eth1 {
         }
         interface eth3 {
             alias LAN
         }
         units pps
     }
 }
 type SDEC

That's great, I like the config syntax. What does the interface alias do regarding the display? Maybe it could be read from the interface description? Then again the display name needs to be short as the displays are small and the interface description can be longer. Maybe default to reading the alias from the interface description and override it with the display alias.
For starting services in 1.3 we use systemd; it's simple to create new service files in src/systemd that will be put in /lib/systemd/system. Just make sure they're started manually by the config script and not as part of a target (just creating a service file without the Install section should ensure that).
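A unit file along those lines might look like this; the binary path and config location are assumptions for illustration, not the merged implementation:

```ini
# /lib/systemd/system/lcdd.service (sketch; paths assumed)
[Unit]
Description=LCDd display server
After=network.target

[Service]
Type=simple
ExecStart=/usr/sbin/LCDd -f -c /run/LCDd/LCDd.conf
Restart=on-failure

# No [Install] section: the config-mode script starts and stops this
# unit explicitly, so no target ever pulls it in automatically.
```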

Update: After hooking up an actual EZIO device to my VM and working the code back and forth, I seem to have settled on this design:

system display model (SDEC|EZIO)
system display config (enabled|disabled)
system display show host (cpu|cpu-all|cpu-hist|disk|load-hist|memory|proc|uptime)
                    network interface <intName> alias <alias>
                            units (bps|Bps|pps)
                    clock (big|mini|date-time)
                    title <name>

system display duration <s>
system display hello <string>
system display bye <string>

Example CLI

display {
     bye "VyOS was here"
     config enabled
     duration 10
     hello "VyOS here!"
     model EZIO
     show {
         clock big
         host cpu
         host disk
         network {
             interface eth0 {
                 alias WAN
             }
             units pps
         }
         title "VyOS Dev"
     }
}

I have a couple of commits here: GitHub fmertz/vyos-1x/commits/system-display

Thanks for any feedback.

An initial version has been merged with initial support for some Crystalfontz LCDs. This is in alpha state.

Please note, we still need another client (best in Python) which sends data to LCDd (like VyOS version)

c-po changed the task status from Open to Backport pending.Aug 23 2020, 10:19 AM
syncer changed the subtype of this task from "Task" to "Enhancement".Sep 9 2020, 1:45 PM

We have a basic implementation available. Additional changes should be submitted via feature requests.

c-po triaged this task as Low priority.