In talking to a MODBUS device, is there an upper bound on how long a device can take to respond before it's considered a timeout? I'm trying to work out what to set my read timeout to. Answers for both MODBUS RTU and TCP would be great.
In the MODBUS over Serial Line Specification and Implementation Guide V1.0, section 2.5.2.1 MODBUS Message ASCII Framing suggests that inter-character delays of up to 5 seconds are reasonable in slow WAN configurations.
Section 2.6 Error Checking Methods indicates that timeouts are configured, but does not specify any values.
The current Modicon Modbus Protocol Reference Guide PI–MBUS–300 Rev. J also provides no quantitative suggestions for these settings.
Your application time-sensitivity, along with the constraints that your network enforces, will largely determine your choices.
If you identify the worst-case delay you can tolerate, take half that time to allow a single retransmission to fail, then subtract a reasonable transmission time for a message of maximal length; the result is a good candidate for a timeout. This will allow you to recover from a single error, while not reporting errors unnecessarily often.
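As a rough sketch of that arithmetic (the 256-byte maximal RTU frame and 11-bit on-wire character are illustrative assumptions, not values from the text):

```python
def modbus_timeout(worst_case_s, baud=9600, bits_per_char=11, max_frame_chars=256):
    """Candidate read timeout per the heuristic above (all defaults assumed)."""
    # Time to transmit a maximal-length frame at the given baud rate.
    tx_time = max_frame_chars * bits_per_char / baud
    # Halve the tolerable delay so one retransmission can still fail,
    # then subtract the worst-case transmission time.
    return worst_case_s / 2 - tx_time

# e.g. tolerating a 2 s worst-case delay at 9600 baud:
print(round(modbus_timeout(2.0), 3))  # → 0.707
```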
Of course, the real problem is, what to do when the error occurs. Is it likely to be a transient problem, or is it the result of a permanent fault that requires attention?
Alexandre Vinçon's comment about the ACKNOWLEDGE messages is also relevant. It may be that your device does not implement them, and extended delays may be intended.
The specification does not mention a particular value for the timeout, because it is not possible to normalize a timeout value for a wide range of MODBUS slaves.
However, it is a good assumption that you should receive a reply within a few hundreds of milliseconds.
I usually define my timeouts to 1 second with RTU and 500 ms with TCP.
Also, if the device takes a long time to reply, it is supposed to return an ACKNOWLEDGE message to prevent the expiration of the timeout.
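Applied in code, those rules of thumb might look like this (a minimal sketch; the 500 ms and 1 s figures are the answer's suggestions above, not values from the specification):

```python
import socket

# Rule-of-thumb read timeouts (assumed, per the answer above).
MODBUS_TCP_TIMEOUT = 0.5   # seconds, for Modbus TCP
MODBUS_RTU_TIMEOUT = 1.0   # seconds, e.g. for a serial library's timeout= parameter

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(MODBUS_TCP_TIMEOUT)  # recv() now raises socket.timeout after 500 ms
print(sock.gettimeout())
```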
I am designing software controlling several serial ports; the operating system is OpenWrt. The device the application runs on is a single-core ARM9 @ 450 MHz. The protocol for the serial ports is Modbus.
The problem is with the Modbus slave implementation. I designed it in a real-time manner, looping while reading data from the serial port (the port is open in non-blocking mode, with 0 characters to wait for and no timeouts). The sequence/data stream timeout is about 4 milliseconds @ 9600/8/N/1 (3.5 characters, as advised). The timeout is checked only when the application sees nothing in the buffer; therefore, if the application is slower than the incoming stream of characters, the timeout mechanism will not take place - until all characters are removed from the buffer and the bus is quiet.
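The 3.5-character figure works out as follows (assuming 10 bits per character on the wire for 8/N/1: one start bit, eight data bits, one stop bit):

```python
# Modbus RTU inter-frame gap: 3.5 character times.
baud = 9600
bits_per_char = 10                 # 1 start + 8 data + 0 parity + 1 stop (8/N/1)
char_time = bits_per_char / baud   # seconds per character on the wire
t35 = 3.5 * char_time
print(round(t35 * 1000, 2))        # → 3.65 (ms, i.e. "about 4 milliseconds")
```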
But I see that the CPU switches between threads, and this thread is not scheduled for about 40-60 milliseconds, which is far too long for measuring such timeouts. While I guess the serial port buffer still receives the data (how long is this buffer?), I am unable to assess how much time passed between characters, and may treat the next message as a continuation of the previous one and miss a Modbus request targeted at my device.
Therefore, I guess, something must be redesigned (for the slave - the master is a different story). The first idea coming to mind is to forget about timeouts, and just parse the incoming data after synchronizing with the whole stream (by finding an initial timeout). However, there are several problems: I must know everything about all types of Modbus messages to parse them correctly, find out where they end and where the next message starts, and I see no way to differentiate a Modbus request from a Modbus response from the device. If the designers of the Modbus protocol had put a special bit in the function code field identifying whether a message is a request or a response... but that is not the case, and I see no reliable way to identify whether the message I am receiving is a request or a response without collecting further bytes and checking the CRC16 at would-be byte counts; that costs time while the bytes arrive, and I may miss the window for the response to a request targeted at me.
Another idea would be using the blocking method with the VTIME timeout setting, but this value may be set in tenths of a second only (so the minimum is 100 ms), and this is too much given another +50 ms for possible CPU switching between threads; I think something like a 10 ms timeout is needed here. It is a good question whether VTIME is hardware time, or software/driver time also subject to CPU thread interruptions; likewise, what is the size of the FIFO in the chip/driver, i.e. how many bytes it can accumulate.
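A minimal sketch of that VMIN/VTIME blocking mode, using a pseudo-terminal pair so it runs anywhere (a real port would be something like /dev/ttyS0; the device path and values here are illustrative):

```python
import os
import pty
import termios
import time

# Open a pty pair standing in for a real serial port.
master, slave = pty.openpty()

attrs = termios.tcgetattr(slave)
attrs[3] &= ~(termios.ICANON | termios.ECHO)  # non-canonical ("raw") reads
attrs[6][termios.VMIN] = 0    # return as soon as the timer expires
attrs[6][termios.VTIME] = 1   # 1 decisecond = 100 ms: the floor noted above
termios.tcsetattr(slave, termios.TCSANOW, attrs)

# With no data pending, read() blocks ~100 ms, then returns empty.
start = time.monotonic()
data = os.read(slave, 16)
elapsed = time.monotonic() - start
print(data, round(elapsed, 2))
```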
Any help/idea/advice will be greatly appreciated.
Update per @sawdust's comments:
non-blocking use has turned out not to be a good approach, because I do not poll the hardware, I poll the software, and the whole design is again subject to CPU switching and execution scheduling;
"using the built-in event-based blocking mode" would work in blocking mode if I were able to configure the UART timeout (VTIME) in 100 µs periods, and if I could also be sure I am not interrupted during execution - only then would I have the timing right, and be able to restart reading in time to assess the next character's timing properly.
I am developing an application on the STM32 SPBTLE-1S module (BLE 4.2). The module connects to a Raspberry Pi.
When the connection quality is low, a disconnection will sometimes occur with error code 0x28 (Reason: Instant Passed) before the connection timeout is reached.
Current connection settings are:
Conn_Interval_Min: 10
Conn_Interval_Max: 20
Slave_latency: 5
Timeout_Multiplier: 3200
Reading more on this type of error, it seems to happen when "an LMP PDU or LL PDU that includes an instant cannot be performed because the instant when this would have occurred has passed." These packets are typically used for frequency hopping or for connection updates. In my case, they must be frequency hopping packets.
Any idea on how to prevent these disconnections caused by "Instant Passed" errors? Or are they simply a consequence of the BLE technology?
Your question sounds similar to this one
In a nutshell, there are only two possible link layer requests that can result in this type of disconnect (referred to as LL_CONNECTION_UPDATE_IND & LL_CHANNEL_MAP_IND in the latest v5.2 Bluetooth Core Spec).
If you have access to the low level firmware for the bluetooth stack on the embedded device, what I've done in the past is increase the number of slots in the future the switch "Instant" is at so there's more time for the packet to go through in a noisy environment.
Otherwise the best you can do is try to limit the amount of times you change connection parameters to make that style of disconnect less likely to occur. (The disconnect could still be triggered by channel map change but I haven't seen many BLE stacks expose a lot of configuration over when those take place.)
I currently have a game server with a customizable tick rate, but for this example let's say the server is only going to tick once per second, or 1hz. I'm wondering what's the best way to handle incoming packets if the client send rate is faster than the server's, as my current setup doesn't seem to work.
I have my blocking UDP receive, with a timeout, inside my tick function, and it works; however, if the client tick rate is higher than the server's, not all of the packets are received - only the one being read at that moment. So essentially the server is missing packets sent by clients. The image below demonstrates my issue.
So my question is, how is this done correctly? Is there a separate thread where packets are read constantly, queued up and then the queue is processed when the server ticks or is there a better way?
Image was taken from a video https://www.youtube.com/watch?v=KA43TocEAWs&t=7s but demonstrates exactly what I'm explaining
There's a bunch going on with the scenario you describe, but here's my guidance.
If you are running a server at 1hz, having a blocking socket prevent your main logic loop from running is not a great idea. There is a chance you won't be receiving messages at the rate you expect (due to packet loss, a network lag spike, or the client app closing).
A) you certainly could create another thread, continue to make blocking recv/recvfrom calls, and then enqueue them onto a thread safe data structure
B) you could also just use a non-blocking socket, and keep reading packets until it returns -1. The OS will buffer (usually configurable) a certain number of incoming messages, until it starts dropping them if you aren't reading.
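A minimal sketch of option B, using a loopback UDP socket pair for illustration (in C, recvfrom() on a non-blocking socket returns -1 with EWOULDBLOCK; Python surfaces the same condition as BlockingIOError):

```python
import socket
import time

# "Server" socket, non-blocking, bound to an ephemeral loopback port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server.setblocking(False)

# A "client" sending faster than the server ticks.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(3):
    client.sendto(b"input-%d" % i, server.getsockname())
time.sleep(0.1)  # let loopback delivery complete

# One server tick: drain everything buffered by the OS since last tick.
drained = []
while True:
    try:
        data, _addr = server.recvfrom(2048)
    except BlockingIOError:  # nothing left to read this tick
        break
    drained.append(data)
print(len(drained))
```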
Either way is fine; however, for individual game clients, I prefer the second, simpler approach when I know I'm on a thread that is servicing the socket at a reasonable rate (5hz seems pretty low, but may be appropriate for your game). If there's a chance you are stalling the servicing thread (level loading, etc.), then go with the first approach, so that missing a periodic keepalive message doesn't get detected as a disconnection.
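And a minimal sketch of option A: a dedicated thread blocks on recvfrom() and feeds a thread-safe queue, which the tick loop drains (port and payloads are illustrative):

```python
import queue
import socket
import threading

packets = queue.Queue()  # thread-safe handoff to the tick loop

def rx_loop(sock):
    """Blockingly receive datagrams and enqueue them until the socket closes."""
    while True:
        try:
            item = sock.recvfrom(2048)
        except OSError:  # socket closed -> exit thread
            return
        packets.put(item)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
threading.Thread(target=rx_loop, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(3):
    client.sendto(b"cmd-%d" % i, server.getsockname())

# One server tick: process everything queued since the last tick.
tick_batch = [packets.get(timeout=1.0)[0] for _ in range(3)]
print(sorted(tick_batch))
```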
On the server side, if I'm planning on a large number of clients/data, I go to great lengths to efficiently read from sockets - using IO Completion Ports on Windows, or epoll() on Linux.
Your server could have a thread that ticks every 5 seconds, just like the client, to receive all the packets. Anything not received during that tick would be dropped, as the server was not listening for it. You can then pass the data from that thread to the server as one chunk after 5 ticks. The more reliable option, though, is to set the server to 5hz just like the client, and thread every packet that comes in from the client, so that it does not lock up the main thread.
For example, if the client update rate is 20, and the server tick rate is 64, the client might as well be playing on a 20 tick server.
Good day!
I know this is a simple question, but I can't find its answer; whenever I look for RTT, it is usually loosely defined. So, is buffering time in the transmitting node included in the RTT reported by ping?
RTT simply means "round-trip time." I'm not sure what "buffering" you're concerned about. The exact points of measurement depend on the exact ping program you're using, and there are many. For BusyBox, the ping implementation can be found here. Reading it shows that the outgoing time is stamped when the outgoing ICMP packet is prepared shortly before sendto() is called, and the incoming time is stamped when the incoming ICMP packet is parsed shortly after recvfrom() is called. (Look for the calls to monotonic_us().) The difference between the two is what's printed. Thus the printed value includes all time spent in the kernel's networking stack, NIC handling and so on. It also, at least for this particular implementation, includes time the ping process may have been waiting for a time slice. For a heavily loaded system with scheduling contention this could be significant. Other implementations may vary.
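To illustrate that measurement style without raw ICMP sockets (which need root), here is a sketch that stamps a monotonic clock just before sendto() and just after recvfrom(), with a UDP echo over loopback standing in for the remote host:

```python
import socket
import time

# An "echo server" socket on an ephemeral loopback port.
echo = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
echo.bind(("127.0.0.1", 0))

probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
probe.settimeout(1.0)

t0 = time.monotonic()                  # outgoing stamp, right before send
probe.sendto(b"ping", echo.getsockname())
data, addr = echo.recvfrom(64)         # "remote" side receives...
echo.sendto(data, addr)                # ...and echoes it back
reply, _ = probe.recvfrom(64)
rtt = time.monotonic() - t0            # incoming stamp, right after receive

# As with BusyBox ping, this includes kernel stack time and any
# scheduling delay the measuring process itself suffered.
print(reply, rtt >= 0)
```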
I have seen a number of examples of paho clients reading sensor data then publishing, e.g., https://github.com/jamesmoulding/motion-sensor/blob/master/open.py. None that I have seen have started a network loop as suggested in https://eclipse.org/paho/clients/python/docs/#network-loop. I am wondering if the network loop is unnecessary for publishing? Perhaps only needed if I am subscribed to something?
To expand on what @hardillb has said a bit, his point 2 "To send the ping packets needed to keep a connection alive" is only strictly necessary if you aren't publishing at a rate sufficient to match the keepalive you set when connecting. In other words, it's entirely possible the client will never need to send a PINGREQ and hence never need to receive a PINGRESP.
However, the more important point is that it is impossible to guarantee that calling publish() will actually complete sending the message without using the network loop. It may work some of the time, but could fail to complete sending a message at any time.
The next version of the client will allow you to do this:
m = mqttc.publish("class", "bar", qos=2)
m.wait_for_publish()
But this will require that the network loop is being processed in a separate thread, as with loop_start().
The network loop is needed for a number of things:
To deal with incoming messages
To send the ping packets needed to keep a connection alive
To handle the extra packets needed for high QOS
Send messages that take up more than one network packet (e.g. bigger than local MTU)
The ping messages are only needed if you have a low message rate (less than 1 msg per keep alive period).
Given that you can start the network loop in the background on a separate thread these days, I would recommend starting it regardless.