Arduino WiFi Rev2 lost UDP packet mitigation

Hi, I have a simple Arduino WiFi program that waits for UDP commands sent by a Python script. When the Python script sends a command packet, it expects an acknowledge packet (and in certain circumstances some returned data packets). So basically there are two kinds of commands: SET commands, which only expect an acknowledge packet, and GET commands, which expect an acknowledge packet plus one or more data packets. Right now, when a command packet is lost from the Python script's perspective, a timeout is raised and the Python script tries again after a small delay. For now this does not cause any problems with the GET commands because, at worst, the Arduino replies twice and I receive the data. But it can cause problems with the SET commands, i.e. the Arduino could get the command to toggle an LED twice (on, off, on). What could I do to remedy this problem? Should I add some framing to the UDP packet command structure, like packet counters? The receiving Arduino needs to know if some packets were lost and tell the Python script to restart whatever action it was trying to do.

It is the nature of UDP that packets may get lost or duplicated. You have, essentially, three options.
1. If you need reliable data transmission, use a protocol that provides it. Using UDP is a bad choice where you need all the features TCP provides anyway, so switch to TCP.
2. Re-architect the protocol so that you don't need reliable data transmission. For example, your "toggle LED" command could include a sequence number, and if the sequence matches the previous one, the command is ignored (see the sketch after this list). So you send "toggle LED, sequence 2" over and over until you get an acknowledgement; then, in your next request, it's "toggle LED, sequence 3". Be careful: not only may data packets get lost, duplicated or interleaved, but responses may too. It's easy to mess this up.
3. Implement reliable data transmission. For example, each request may contain a sequence number, and you repeat it until you get an acknowledge with the same sequence. Only then do you move on to the next sequence. Do this with multi-datagram replies too. This is painful, but that's why you are offered TCP: so you don't have to re-invent it every time you need reliable data transmission.
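For illustration, here is a minimal stop-and-wait sketch of the Python side for options 2 and 3; the address, port, packet layout, and opcode are assumptions, not anything from the question:

```python
import socket
import struct

ARDUINO_ADDR = ("192.168.1.50", 8888)   # hypothetical address and port
TIMEOUT_S = 0.5
MAX_RETRIES = 10

def send_command(sock, seq, opcode, payload=b""):
    """Send one command and retransmit until an ACK with the same seq arrives."""
    packet = struct.pack("!HB", seq, opcode) + payload   # seq(2B) | opcode(1B) | payload
    for _ in range(MAX_RETRIES):
        sock.sendto(packet, ARDUINO_ADDR)
        try:
            reply, _ = sock.recvfrom(1024)
        except socket.timeout:
            continue                                     # command or ACK lost: retry
        (ack_seq,) = struct.unpack("!H", reply[:2])
        if ack_seq == seq:                               # ignore stale ACKs from old retries
            return reply[2:]                             # any piggybacked data
    raise RuntimeError("no ACK after %d retries" % MAX_RETRIES)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(TIMEOUT_S)

# The Arduino side must remember the last seq it executed and, on seeing a
# duplicate, resend the ACK *without* re-running the command, so a retried
# SET toggles the LED once instead of twice.
send_command(sock, seq=1, opcode=0x01)                   # hypothetical "toggle LED" SET
```

The point is that the sequence number makes duplicates harmless: a retransmitted SET is recognized and acknowledged without being executed a second time.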

Related

With TCP NAGLE off, how much delay between send calls will prevent messages from being combined in one packet

I am sending multiple fixed-length messages via TCP to a server that requires that messages not be split between two packets. I know that is not the way TCP works, but I have no control over the server. I have Nagle turned off, but messages are still being combined into one large packet, and the last message in the packet is split and continues in the next one. I assume this is because there is very little delay between my calls to "send" for each message. Does anyone know how much delay between send calls would be required to trigger sending the packet? Or is there some other way to prevent the contents of a single send call from being split between two packets?
OS: Ubuntu 18.04
Kernel: 4.15.0-91-generic
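For reference, the setup the question describes (Nagle disabled) corresponds to the TCP_NODELAY socket option. A minimal Python sketch; the endpoint and message length are placeholders:

```python
import socket

MSG_LEN = 32                                   # hypothetical fixed message length

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)   # turn Nagle off
sock.connect(("server.example.com", 9000))     # placeholder endpoint

# Even with Nagle off, TCP remains a byte stream: the kernel may still
# coalesce consecutive send() calls or split one across segments, which is
# exactly the behaviour described above. No delay value is guaranteed to
# prevent it.
for msg in (b"A" * MSG_LEN, b"B" * MSG_LEN):
    sock.sendall(msg)
```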

Can't get OK response from XBee upon "+++"

I have been trying to set up two XBees to communicate for the last three days. X-CTU seems to be the perfect option to do so; however, it is a real menace when it comes to discovering XBees on serial ports.
I was able to detect one XBee by luck just once and the other one never showed up. I have even replaced both my XBees. I am trying to figure out the alternative, i.e. using a serial console to perform the operation. I haven't been able to receive an OK response from the device upon issuing +++.
Since I haven't had good experiences using a PC to communicate with ESP8266 devices in the past, I tried to work around the problem by using the second serial port of an Arduino to send such configuration messages and read the response by printing it out on the default serial console.
It also appears that configuration messages can differ depending on the mode of the device. If it's in API mode, the frame has to be generated in a specific format (I use the X-CTU frame generator for this purpose).
Why am I not able to receive a response from the XBee upon issuing a +++?
The devices are Series 1 XBees and the exact part number is XB24-AWI-001. Any help is highly appreciated.
Have you considered that the XBee might be in API mode? Maybe you should consider reflashing the device in AT mode to start playing with it.
To test if it's in API mode, you can refer to the guide, chapter 9 for the API mode structure:
http://eewiki.net/download/attachments/24313921/XBee_ZB_User_Guide.pdf?version=1&modificationDate=1380318639117&api=v2
Basically, a datagram in API mode starts with ~, and it's built as follows:
[0x7E|length(2B)|Command(1B)|Payload(length-1B)|Checksum(1B)]
As 0x7E is ~ on the ASCII table, you should try typing a bogus datagram in a serial terminal session like:
~ <C-d> AAAA
N.B.: <C-d> means Control-D under Unix, which is the EOF character.
Obviously such a message isn't likely to work, and you will receive a reply asking you to send that datagram again. That's because the EOF character is ASCII code 4, which means the declared length of the datagram is 4 bytes. You then send four bogus bytes, the checksum byte will be A, which is very unlikely to be right, and the receiver will assume the transmission has been corrupted. So the datagram will be asked for again, meaning you will receive a datagram making that request.
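For contrast with the bogus datagram above, here is a rough Python sketch of how a well-formed API frame (API mode 1, without escaping) is assembled, checksum included; the AT command VR (firmware version) is just an example:

```python
def api_frame(frame_data: bytes) -> bytes:
    """Wrap frame data in the XBee API envelope: 0x7E | length(2B) | data | checksum."""
    length = len(frame_data).to_bytes(2, "big")
    checksum = (0xFF - (sum(frame_data) & 0xFF)) & 0xFF
    return b"\x7e" + length + frame_data + bytes([checksum])

# AT Command frame: type 0x08, frame ID 0x01, command "VR" (firmware version)
frame = api_frame(b"\x08\x01VR")
print(frame.hex(" "))   # 7e 00 04 08 01 56 52 4e
```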
Though I can only advise you to consider running it only in API mode (more reliable and a better API, even though you cannot just play around with it and understand what's going on by tapping on the line with a logic analyzer… although given enough time, you'll start to read API datagrams like they're English ☺).
I wrote a page with a few resources to check on how to reflash the XBees:
https://github.com/hackable-devices/polluxnzcity/wiki/Flash-zigbee
and here is some other advice from another, totally unrelated project:
https://github.com/andrewrapp/xbee-api#documentation
And I also wrote a lib (aimed at beaglebones but you can tweak it for your use) that handles API mode 2 with XBees:
https://github.com/hackable-devices/polluxnzcity/blob/master/PolluxGateway/include/xbee/xbee_communicator.h
https://github.com/guyzmo/polluxnzcity/blob/master/PolluxGateway/src/xbee/xbee_communicator.C
but I bet that with a little Google search you can find more widely used libraries than those, and even some meant to run on Arduinos (N.B.: that lib was originally written for Arduino and then adapted to run on the BeagleBone, so reversing the operation shouldn't be hard).

How can I get send AND receive timestamps from tcpdump for packets I send over local loopback?

I'm trying to run tests on a simulated network I'm running on my machine and would like to get timing information on packets I'm sending and then receiving over local loopback.
When I run tcpdump -i lo I see two packets for every packet of data I send over local loopback: a data-carrying packet with a sequence number, and an associated ack packet. Each has only 1 timestamp associated with it.
I'd like to see when the data-carrying packet is sent and received, and when the ack packet is sent and received-- that is, 4 timestamps in total. I can't figure out how to do this in tcpdump no matter what Google searches I try or flags I pass it.
Right now I'm only getting 2 timestamps, one for each packet. I'm pretty sure they are both receive times for the packets.
I could probably run this test using two different machines, but I don't have another one on hand right now, and if I did that the clock between the two wouldn't be synchronized perfectly so the timestamps would be off.
It turns out what I'm asking for here is impossible. When sending over local loopback, the kernel uses a purely software layer, so there are no TCP packets actually being sent.
This is actually true for using any device and sending to yourself-- the kernel automatically optimizes and doesn't actually use the hardware to send packets.
In order to get send and receive times, you need to route through some other external agent. Alternatively, you can pretend there are two different interfaces running on your computer using netns, connect them with a virtual Ethernet (veth) pair, and then log tcpdump data over that connection.
See this blog post on setting up a connected netns namespace.
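The netns/veth setup described above can also be scripted; a rough sketch driving iproute2 from Python's subprocess (interface names and addresses are arbitrary, and it needs root):

```python
import subprocess

def sh(cmd):
    """Run one shell command, raising if it fails."""
    subprocess.run(cmd, shell=True, check=True)

# One extra namespace, connected to the default one by a veth pair.
sh("ip netns add testns")
sh("ip link add veth0 type veth peer name veth1")
sh("ip link set veth1 netns testns")
sh("ip addr add 10.0.0.1/24 dev veth0")
sh("ip link set veth0 up")
sh("ip netns exec testns ip addr add 10.0.0.2/24 dev veth1")
sh("ip netns exec testns ip link set veth1 up")

# Traffic between 10.0.0.1 and 10.0.0.2 now crosses the veth pair, so
# `tcpdump -i veth0` (and `tcpdump -i veth1` inside the namespace) records
# real per-interface timestamps instead of a single loopback event.
```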

Reliable full-duplex serial comms

I'm designing a device that will encrypt a long (assume infinite) stream of data sent from the PC and send it back. I'm planning to use a single serial port on the device running full duplex with hardware handshaking and "block" the data, sending a CRC value after every block. The device will only buffer a limited number of blocks- ideally just one buffer accumulating the block being received and one buffer holding the block presently being sent, switching them over at each block boundary and using hardware handshaking to keep things in sync.
The problem I'm considering is what happens when there's corruption and there's a mismatch between the CRC value calculated by the receiver (which could be either the PC or the device) and the one sent. If the receiver detects an error, it sets a break condition on its transmit line (because although TX and RX are doing different things, that's all we CAN do) and then we drop into a recovery sequence.
Recovery is easy when the error condition is detected before the data disappears from the sender, but on the PC receiving side in particular there may be a significant amount of buffer space, and by the time the PC catches up and detects the corruption, the data may have disappeared from the device and we can't simply retransmit. It's difficult to "rewind" cipher generation, so resending the source data and picking things up in the middle is hard, and indeed the source data may not be available to resend, depending on where it ultimately comes from.
I considered having each side send its "last frame successfully received" counter along with its last frame sent CRC value, and having the device drop RTS if there's too much unconfirmed data waiting at the output, but that would then deadlock- the device never gets the confirmation that the PC's receive thread has caught up.
I've also considered having the PC send a block and then not send another block until the first block has been confirmed as processed and received back, but that essentially degenerates to half-duplex or block-synchronous operation, and the system runs slower than it could. A compromise is to have a number of buffers in the device, with the PC knowing how many buffers there are and throttling its own output based on what it thinks the device is doing, but needing that degree of 'intelligence' on the PC side seems inelegant and hacky.
Serial comms is quite ancient tech. Surely there's a good way of doing this?
Designing a reliable protocol is not that easy. Some notes on what you've talked about so far:
Only use RTS to do what it is designed to do: avoid receive buffer overflow. It is not suitable for more.
Strongly consider not having multiple unacknowledged frames in flight. That is only important if the connection suffers from high latency, which is not a problem with serial ports.
Achieve full-duplex operation by layering; use the OSI model as a guide.
Be sure to treat the input and output of your protocol as plain byte streams. Framing is only a detail of the protocol implementation, and the actual frame size does not matter. If the app signals by using messages, then that should be implemented on top of the protocol; otherwise it is the automatic outcome of proper layering.
Keep in mind that a frame can do more than just transmit data; it can also include an ACK for a received frame. In other words, you only need a separate ACK frame if there isn't anything to transmit back. A sketch of such a frame follows below.
And avoid reinventing the wheel: this has been done before. I can recommend RATP, the subject of RFC 916. Widely ignored, by the way, so you are not likely to find code you can copy. I've implemented it and had good success. It has only one flaw that I know of: it is not resilient to multiple connection attempts that are present in the receive buffer, so intentionally purging the buffer when you open the port is important.
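As a sketch of the piggybacked-ACK idea from the notes above (the frame layout, field sizes, and CRC choice are illustrative assumptions, not RATP itself):

```python
import struct
import binascii

def make_frame(seq: int, ack: int, payload: bytes) -> bytes:
    """seq(1B) | ack(1B) | len(2B) | payload | crc16(2B).

    `ack` carries the last sequence number received correctly, so a data
    frame travelling one way acknowledges traffic going the other way; a
    standalone ACK is just a frame with an empty payload.
    """
    header = struct.pack("!BBH", seq & 0xFF, ack & 0xFF, len(payload))
    crc = binascii.crc_hqx(header + payload, 0xFFFF)   # CRC-CCITT
    return header + payload + struct.pack("!H", crc)

frame = make_frame(seq=7, ack=6, payload=b"ciphertext block")
```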

XBee Send To All

I have a simple XBee network operating, where a bunch of slaves at remote locations all talk to one master, which is connected to the server computer. That works, no problem.
The slaves all send their ID as part of the packet, and I'd like to have the master deliberately send an ACK after a delay. I'm trying to figure out how to do this efficiently, and it seems that the only plausible way that doesn't involve reprogramming the master before each ACK is to send the ACK to all slaves and have them ignore the packet if it isn't meant for them.
That solution is OK; I just can't figure out the command to use to do it. Is there some sort of Serial sendAll command? All of the devices are on the same ATID.
Typically in this situation, you would configure the master in API mode so that you get "Receive Explicit" frames with source addressing information, and can send with the "Transmit Explicit" frame type, including addressing information in your frames.
If you use AT mode (transparent serial mode), then you're stuck having to change the DH and DL parameters on your coordinator every time you want to change who you send to. You should avoid using broadcast packets, since each one results in lots of network traffic (IIRC, each router will send the broadcast packet three times).
I do not know of a good XBee library on the Arduino, but it might be possible to port Digi's Open Source ANSI C XBee Host Library to that platform.
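For illustration, a unicast versus broadcast Transmit Request built by hand in Python; frame type 0x10 is the ZigBee Transmit Request, 0x000000000000FFFF is the broadcast address the answer suggests avoiding, and the slave address is made up:

```python
import struct

def api_frame(frame_data: bytes) -> bytes:
    """Standard API envelope: 0x7E | length(2B) | frame data | checksum."""
    checksum = (0xFF - (sum(frame_data) & 0xFF)) & 0xFF
    return b"\x7e" + struct.pack("!H", len(frame_data)) + frame_data + bytes([checksum])

BROADCAST64 = 0x000000000000FFFF   # all nodes; costly, as noted above

def tx_request(dest64: int, payload: bytes, frame_id: int = 1) -> bytes:
    """ZigBee Transmit Request (type 0x10): unicast when dest64 is one slave's
    64-bit serial number, broadcast when it is BROADCAST64."""
    header = struct.pack("!BBQHBB", 0x10, frame_id, dest64, 0xFFFE, 0x00, 0x00)
    return api_frame(header + payload)

# Ack a specific slave without touching DH/DL (address is hypothetical):
ack = tx_request(0x0013A20012345678, b"ACK:3")
```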
