sysfs interfacing



  • Hi,

    Been a while since my previous post on reading spidev. The main gist there was that I was looking for a reliable way to capture any changes to the digital inputs. My "solution" there was to do simple polling on groups of modbus coils, exposed via the modbus TCP server -- see modbridge. I am happy to say this already sort of works for me ...

    What I noticed while working on modbridge was that when I did not constrain the polling speed, CPU load got quite high, both for the polling program and for the modbus TCP server that needed to keep up with all the polling. Although the current setup would probably work OK, I figured that using the sysfs interface would probably perform better, as it should in principle let me deal with events provided directly from the kernel level.

    I had earlier installed the Unipi Neuron images with the sysfs drivers, and I could indeed see any input changes reflected in the sysfs files. I then looked into how I could capture changes as events using golang's fsnotify (a Go wrapper around inotify). Apparently fsnotify (/inotify) does not really work for sysfs files, so I looked at epoll instead, but it seemed to capture too many changes (probably one for every SPI-level update).

    Before digging deeper into epoll, my main question is: what do you recommend as the best way of capturing changes as events from the sysfs interface? Is there some sort of (C) API to capture these directly -- either via some library you provide or via standard Linux syscalls (epoll)? I do not have any experience with Linux syscalls atm, so I'd rather check what the "preferred" way of dealing with this is.


  • administrators

    Hi @Martijn-Hemeryck!

    Apologies for the somewhat late reply, we are quite busy at the moment. I do think standard polling is the best way to go; Linux GPIOs/IIOs do have support for proper interrupt handling, but we do not currently implement it. Some form of polling is then unfortunately necessary.

    It should be noted that the sysfs interface uses the Linux regmap subsystem to provide a form of read caching. This means that, on the one hand, reads have considerably lower overhead than reads made directly via the SPI device, but on the other hand the sys_reading_freq property limits the practical refresh rate to 1 ms -- there is no point polling any faster than ~500 us, as the reads will simply hit the cache the vast majority of the time.
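    To make that concrete, a minimal polling sketch might look as follows. Note that the sysfs path is only a placeholder (not an actual Neuron path), and the ~1 ms interval simply matches the practical refresh rate mentioned above:

```python
import time

POLL_INTERVAL = 0.001  # ~1 ms: matches the practical refresh rate of the regmap cache

def read_value(path):
    # sysfs attributes must be re-read from the start on every poll
    with open(path) as f:
        return f.read().strip()

def poll_changes(path, callback, interval=POLL_INTERVAL, iterations=None):
    """Invoke callback(old, new) whenever the value in the file changes.

    iterations=None polls forever; a finite count is handy for testing.
    """
    previous = read_value(path)
    count = 0
    while iterations is None or count < iterations:
        time.sleep(interval)
        current = read_value(path)
        if current != previous:
            callback(previous, current)
            previous = current
        count += 1

# poll_changes("/sys/.../value", print)  # placeholder path
```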

    We do not have a dedicated library ourselves, but you may be able to use libgpiod or libiio if you prefer working through one.

    I hope this helps



  • Thanks for your reply.

    To be honest, I did not fully understand your answer, but that's probably because you have more background on the underlying architecture. As I understood it, there's a kernel driver that does the spidev-level polling and then maps the results into some reserved kernel memory, which is then exposed via the sysfs (virtual) file system.

    Rather than polling a modbus TCP server that sits on top of that, I figured I'd capture any file-system-level changes using inotify or epoll (or rather, their counterparts in golang). I did test inotify on a "regular" file system, where it does indeed fire for any change, but it does not do so for sysfs. For epoll, I am still missing some background.

    I guess you are saying that the best option is to just do regular polling of the files themselves then? I'll give that a try (it probably needn't be that hard, actually).

    Edit: nice to learn about IIO -- although I probably won't try to wrap my head around that just yet.
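    As an aside on the epoll angle: on mainline-kernel sysfs GPIOs (the /sys/class/gpio interface), the conventional event mechanism is poll()/epoll with POLLPRI on the value file after writing an edge type to the edge attribute; inotify never fires because sysfs files are not backed by regular storage. Whether the Unipi driver supports this notification at all is unclear from the reply above, so treat the following only as a sketch of the generic kernel pattern:

```python
import select

def wait_for_edge(path, timeout_ms=1000):
    """Wait for a sysfs attribute to signal a change via POLLPRI.

    Assumes the attribute supports kernel-side notification (e.g.
    /sys/class/gpio/gpioN/value after `echo both > edge`); returns the
    fresh value, or None on timeout.
    """
    with open(path) as f:
        poller = select.poll()
        # sysfs change notifications arrive as POLLPRI | POLLERR, not POLLIN
        poller.register(f, select.POLLPRI | select.POLLERR)
        f.read()              # consume the current value first
        if not poller.poll(timeout_ms):
            return None       # timed out: no edge seen
        f.seek(0)
        return f.read().strip()
```

On a plain file (which never raises POLLPRI) this simply times out, which is also why testing it outside a sysfs attribute only exercises the timeout path.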



  • FYI: I figured I'd create a small proof of concept with python3 + asyncio; see https://github.com/mhemeryck/unipoll/blob/b40c73f682956b08ede27abb85a904d95d9cf1ab/unipoll.py

    Note: CPU usage is quite high though, probably because of the way I handled the scheduling with asyncio.
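    For later reference, in case the scheduling is indeed the culprit: an asyncio polling coroutine that never awaits (or only awaits asyncio.sleep(0)) turns the event loop into a busy loop. A throttled sketch -- the interval and path are made up, and this is not the code from the POC above -- would be:

```python
import asyncio

async def poll_file(path, interval=0.01, iterations=None):
    """Poll a value file, yielding the CPU between reads."""
    previous = None
    count = 0
    while iterations is None or count < iterations:
        with open(path) as f:
            current = f.read().strip()
        if previous is not None and current != previous:
            print(f"{path}: {previous} -> {current}")
        previous = current
        # this await is what keeps CPU usage low: the event loop sleeps
        # between reads instead of spinning
        await asyncio.sleep(interval)
        count += 1
    return previous

# asyncio.run(poll_file("/sys/.../value"))  # placeholder path
```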


  • administrators

    That does look quite good! CPU load will indeed be high with direct polling, unfortunately, particularly in Python. We are hoping to implement an interrupt structure that would remove the vast majority of the load, though it will have considerably higher latency due to the extra overhead the interrupts introduce, so it is a tradeoff. You may also get slightly lower overhead with libgpiod/libiio, but the underlying problem will remain.

    In practice we haven't designed the I/Os for particularly fast reaction times, as they are primarily intended for electro-mechanical components like buttons and relays, which themselves have very considerable switching delays. That is not to say they cannot be used in that way, but it does bring some limitations with it. In particular, our PLC platforms are designed to run with multi-millisecond cycles, Mervis at ~20 ms and CODESYS at ~4 ms.



  • Another update: after giving it some thought, I wanted to quickly see if I could do the same thing in Go, so here goes another POC: https://github.com/mhemeryck/unipitt/blob/37b9a07bc570301c2a11836526df1a2be8b21647/main.go

    (Don't mind the project name though; I've sort of run out of original names, I guess.)

    Anyway, performance seems much better in this case. I figure I'll put my efforts into polishing this project, as it seems to be the simplest, fastest and most robust option.

    I did notice, however, that when toggling one of the DIs, another DI somehow got toggled as well; I still need to investigate that further.

    Thanks for the continuous feedback -- it really adds a lot to the overall Unipi experience for me!