Websocket connection dropping
The response time doesn't really improve; the intended use is to allow multiple differentiated connections to listen at the same time.
That said, the drops you are seeing do not usually happen. I'm inclined to think there are timeouts somewhere on the line from EVOK to Node-RED, but it's hard to debug from our side. Do you still have the same issues if you send an occasional 'all' command? It shouldn't be necessary, but it might help in your case.
oversc0re last edited by
yup, sending cmd:all every minute prevents the connection drop.
Even so, I don't feel like writing a state machine for blinds in Node-RED is a good idea. I will test this behaviour while migrating to Python along the way.
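For anyone else hitting this, a periodic cmd:all keepalive like the one described can be sketched in Python. Note the exact payload shape (`{"cmd": "all"}`) is an assumption based on this thread, not the EVOK docs, and `send` stands in for whatever your websocket library uses to transmit a text frame:

```python
import json
import time
from typing import Callable, Optional

def keepalive_message() -> str:
    # {"cmd": "all"} asks EVOK to push the full device state; the exact
    # payload shape is an assumption based on this thread, not the docs.
    return json.dumps({"cmd": "all"})

def run_keepalive(send: Callable[[str], None],
                  interval_s: float = 60.0,
                  ticks: Optional[int] = None,
                  sleep: Callable[[float], None] = time.sleep) -> None:
    # Send the keepalive payload every interval_s seconds. `send` is
    # whatever your websocket client uses to transmit a text frame
    # (e.g. ws.send); `ticks` limits iterations (None = run forever).
    sent = 0
    while ticks is None or sent < ticks:
        send(keepalive_message())
        sent += 1
        sleep(interval_s)
```

With the default `interval_s=60.0` this matches the once-a-minute cadence that kept the connection up above; pass your client's send method as `send` and run it in a background thread.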
matthijs last edited by
@tomas_knot Hi, would it be possible to share this websocket-to-MQTT script? I have a websocket configured now that runs all the time (perfectly stable, by the way) and initiates a REST call to update a Domoticz switch when needed. However, I would like to explore making this push/pull with MQTT, both to learn and to see whether it's a bit more resource-friendly.
The websocket-to-MQTT bridge is part of a wider array of scripts used in our company, which include a number of proprietary details, so we unfortunately cannot share it without extensive modification. We may at some point in the future, but we need to discuss it internally first.
matthijs last edited by
@tomas_knot OK, thanks for responding. I'd love to see MQTT in the base Neuron by default in the future.
@TomasKnot I have one customer where I suspect the websocket connection is breaking.
What is the timeout on the websocket?
Is it also possible that the websocket is not alive yet when the system has already started? The customer also saw this right after booting the system. I have a local application running that connects to the websocket as soon as Linux auto-logs in. Could it be that it connects too soon? I might have to add a check that reconnects on disconnect.
TomasKnot last edited by
Apologies for the slightly delayed reply, I was on a brief vacation for the past few days. The timeout is set to 30 seconds of the receiving device not responding, which is the default for the Tornado server. It is possible to set this higher in the code, but that can cause issues with connections not being closed when they should be.
Unfortunately, the WebSocket server also does not run immediately at startup; it needs approximately 15 s from device boot to become fully responsive. This is mostly because of the Python backend of the server. However, you shouldn't be able to connect to it until it is fully established.
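To make the timeout behaviour concrete, here is a rough illustration (not EVOK's or Tornado's actual code) of a ping/pong watchdog that marks a connection for closing after 30 s without a pong, similar in spirit to what the server does:

```python
import time

PING_TIMEOUT_S = 30.0  # matches the default described above; purely illustrative

class PongWatchdog:
    """Tracks the last pong from a client and reports when to drop it."""

    def __init__(self, timeout_s: float = PING_TIMEOUT_S, clock=time.monotonic):
        self._timeout_s = timeout_s
        self._clock = clock          # injectable for testing
        self._last_pong = clock()    # treat connect time as the first "pong"

    def on_pong(self) -> None:
        # Called whenever the client answers a ping frame.
        self._last_pong = self._clock()

    def expired(self) -> bool:
        # True once the client has been silent longer than the timeout,
        # at which point the server would close the websocket.
        return self._clock() - self._last_pong > self._timeout_s
```

A client that answers pings regularly (or sends its own traffic, like the cmd:all workaround earlier in this thread) keeps resetting the clock and never expires.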
I hope this helps
@tomasknot Hi, thanks for your response!
I have the system running on location, used to control an escape room. I never saw the connection drop there, even though there is sometimes no communication for 10 minutes. Now I suddenly think I'm seeing this connection break. Also, at startup, one of the Neuron (L203) lights was blinking red instead of solid green (well, flashing very fast every 2 seconds, the operative mode). I don't know what that means.
I now added code that if the socket connection with Evok disconnects, it directly reconnects, as I need to be able to listen to input changes...
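A reconnect loop like the one described could look roughly like this; `connect_and_listen` is a placeholder for your own websocket client code (open the EVOK socket, block while receiving input-change events, raise on disconnect), and the backoff parameters are arbitrary choices:

```python
import time
from typing import Callable, Optional

def backoff_delays(base_s: float = 1.0, cap_s: float = 30.0):
    # Yield exponentially growing reconnect delays, capped at cap_s,
    # so a flapping connection doesn't hammer the server.
    delay = base_s
    while True:
        yield delay
        delay = min(delay * 2, cap_s)

def run_with_reconnect(connect_and_listen: Callable[[], None],
                       max_attempts: Optional[int] = None,
                       sleep: Callable[[float], None] = time.sleep) -> None:
    # Keep connect_and_listen running; reconnect whenever it raises.
    # max_attempts limits retries (None = retry forever).
    delays = backoff_delays()
    attempts = 0
    while max_attempts is None or attempts < max_attempts:
        attempts += 1
        try:
            connect_and_listen()
            return  # clean shutdown, no reconnect needed
        except Exception:  # e.g. ConnectionError from your client library
            sleep(next(delays))
```

This also covers the boot-order question above: if the application starts before the EVOK websocket is ready, the first connection attempts simply fail and the loop retries until the server comes up.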
Reconnecting when the connection drops seems to work for my clients.
I have the system running here now for around an hour and the socket connection is still up. It's so weird that I don't see the issue here. There has to be some difference, and I'll need to investigate further.
You say the connection disconnects after a certain time, but why don't I see this happening?
It should only disconnect if either a certain number of requests fails to reach the client, or no successful messages have been sent for over 10 minutes while there have been failures during that time.
The actual implementation/behaviour comes from the tornado.websocket library, though we use a slightly older version of Tornado.
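Written down as a small predicate, that rule reads roughly as follows (a paraphrase of the description above, not the actual tornado.websocket logic; the `max_failures` threshold is an assumed value, since the thread doesn't give the exact number):

```python
def should_disconnect(failed_sends: int,
                      seconds_since_success: float,
                      max_failures: int = 5,
                      stale_after_s: float = 600.0) -> bool:
    # Drop the client if too many sends failed outright, or if nothing
    # has gotten through for more than 10 minutes while there were
    # failures in that window. max_failures is an assumed threshold.
    too_many_failures = failed_sends >= max_failures
    stale_with_failures = seconds_since_success > stale_after_s and failed_sends > 0
    return too_many_failures or stale_with_failures
```

This also explains the escape-room observation above: ten quiet minutes with zero failures never trigger a disconnect, because both conditions require at least one failed send.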
If you do have the time to investigate further, we would very much welcome the feedback. We have put this on our list of things to improve, but right now I can't give you any more specifics.