load average over 1



  • The load average on Neuron OS sits above 1.1 out of the box.
    Is that normal when running Evok, or is there some issue?

    kernel:

    pi@M203-snXXX:~ $ uname -a
    Linux M203-snXXX 4.14.79-v7+ #1159 SMP Sun Nov 4 17:50:20 GMT 2018 armv7l GNU/Linux
    

    top output:

    top - 06:59:48 up 1 day,  1:48,  1 user,  load average: 1.14, 1.11, 1.15
    Tasks: 107 total,   1 running,  64 sleeping,   0 stopped,   0 zombie
    %Cpu(s):  3.0 us,  0.7 sy,  0.1 ni, 96.2 id,  0.0 wa,  0.0 hi,  0.1 si,  0.0 st
    KiB Mem :   994100 total,   532744 free,   162256 used,   299100 buff/cache
    KiB Swap:   102396 total,   102396 free,        0 used.   771924 avail Mem
    
      PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
      794 root      20   0   39696  21572   9036 S  12.5  2.2 190:45.45 python
     3127 pi        20   0    8240   3288   2724 R   1.3  0.3   0:00.27 top
      721 pi        25   5  175880  72760  25084 S   0.7  7.3   5:19.30 node-red
      912 root      20   0    2348    428    364 S   0.7  0.0  12:18.77 unipi_tcp_serve
      390 root      20   0       0      0      0 D   0.3  0.0   1:23.63 unipispi_inv
     2741 pi        20   0   11520   3832   3112 S   0.3  0.4   0:00.33 sshd
        1 root      20   0   28244   6160   4836 S   0.0  0.6   0:06.12 systemd
    

    htop:

    
      1  [||||||||                                  14.5%]   Tasks: 34, 50 thr; 1 running
      2  [                                           0.0%]   Load average: 1.16 1.14 1.15
      3  [||                                         2.0%]   Uptime: 1 day, 01:52:37
      4  [|                                          0.7%]
      Mem[||||||||||||||||||||||||              166M/971M]
      Swp[                                      0K/100.0M]
    
      PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
      794 root       20   0 39696 21572  9036 S 11.8  2.2  3h11:19 /opt/evok/bin/python /opt/evok/lib/python2.7/site-pa
     3200 pi         20   0  6236  3780  2456 R  2.0  0.4  0:03.02 htop
      912 root       20   0  2348   428   364 S  0.7  0.0 12:20.94 /opt/unipi-bin/unipi_tcp_server -p 502 -a 255
      721 pi         25   5  171M 72872 25084 S  0.7  7.3  5:20.54 node-red
     1104 influxdb   20   0  953M 47168  9368 S  0.7  4.7 17:24.57 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
     1138 www-data   20   0 47232  2668  1408 S  0.0  0.3  0:08.42 nginx: worker process
    

  • administrators

    Hello @christian,
    yes, this load is quite normal. Evok is doing quite a lot of work, and as I can see you are also running InfluxDB (not a good idea on an SD card, by the way), so the load may very well be around 1.
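
    If you want to confirm what is contributing to that load: on Linux, tasks in uninterruptible sleep (state "D", like the unipispi_inv kernel thread in your top output) are counted in the load average just like runnable tasks, even when the CPUs are mostly idle. A rough check with standard procps tools (nothing Evok-specific) is:

    # 1/5/15-minute load averages
    cat /proc/loadavg
    # list runnable (R) and uninterruptible-sleep (D) tasks
    ps -eo state,pid,comm | awk '$1 == "D" || $1 == "R"'

    If the number of R + D tasks hovers around 1, a load average of about 1 is what you would expect.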

    Best regards,
    Martin



  • Hi @Martin-Kudláček,

    Sorry to wake this one up again.

    But I hope you can reassure me a little bit more.

    This high load kind of freaks me out, so I would really like to put my finger on what's happening.

    I have a lot of Neurons running (at least 15+) with very low load averages, always under 0.10.

    I also have a lot of (newer) Neurons running (some 20+) with very high load averages, like @cristian mentioned, way above 1.20.

    I don't have InfluxDB running on those, and I'm not doing anything unusual as far as I know.

    The only obvious difference I can find is that the low-load UniPis still use the old driver called NEURONSPI, while on the high-load UniPis the driver is called UNIPISPI.

    I'm not sure this is the culprit, but the OS version does not seem to be the issue.

    I have Debian Stretch on the older ones with low load, but I also have Stretch installs that show the high load.

    All UniPis with Buster definitely have the high load, and they are all UNIPISPI according to dmesg.
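
    For reference, this is how I check which driver each unit has actually loaded (the names are just what I see in dmesg/lsmod on my units, so the pattern may need adjusting):

    # show which UniPi SPI driver the running kernel has loaded
    lsmod | grep -i -E 'neuronspi|unipispi'
    dmesg | grep -i -E 'neuronspi|unipispi'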

    Is there anything you are aware of that would explain the load difference between UNIPISPI and NEURONSPI?

    Or do I have to look further? Or should I really not worry about this?

    Most of these UniPis are in production environments a long drive away, so if there's something to fix I would like to get that done before they break...

    Thanks a lot, Tony, Fireware Netherlands

    ![IMAGE 2019-12-17 15:55:04](0_1576594527207_IMAGE 2019-12-17 15:55:04.jpg)

