MBed/Arduino RS-232 Serial communication issue

I am receiving messages from a CAN interface into my mBed device. The mBed device then parses the information and sends it out over serial to another device, in the following format:
"< msg>xxxxxxxxxxxxxxxxxxx< /msg>" where x = a hex number.
The other device receiving this message will receive the information split in half (I've accounted for this in the code). The problem I'm having is that the messages usually arrive in the format above, but there are times where the format is lost, for example:
[1]xxxx< /msg>< msg>xxxxx
[2]xxxxxxxx< msg>xxxxxxx
[3]< /msg>< msg>xxxxxxxxx
[4]xxx< /msg>< msg>xxxxxx
**Please ignore the space in the msg tag; it was necessary to show it on Stack Overflow.**
The baud rate is set to 38400 bps on the mBed. I'm not using any parity, stop bit, start bit, etc. as I'm not too familiar with how they work. Can anyone suggest how I might fix this loss of format, or am I going to have to include code in the receiving device to handle it?
Many thanks in advance!

This is entirely normal; serial ports are not smart enough to recognize XML. You will have to write the code yourself. A basic approach is a state machine. Declare a buffer that's large enough to store a complete message. Then:
1. Throw away everything you receive until you get '<'.
2. Store the bytes you receive in the buffer until you get '>'.
3. Check that you got "<msg>"; go back to state 1 if you did not.
4. Store the bytes you receive in the buffer until you get '>'.
5. Check that the buffer now ends with "</msg>"; go back to state 1 if you did not.
6. Process the message, then go back to state 1.
This ensures that you'll properly sync with the bus when you open the port and that you don't care how many bytes you receive in one read() call.
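For illustration, here is a minimal sketch of that state machine in C++; feed_byte() and process_message() are hypothetical names, with feed_byte() called for every byte read from the port, no matter how the reads were fragmented:

```cpp
#include <cstddef>
#include <cstdio>
#include <cstring>

// Hypothetical callback: do whatever you need with one complete payload.
static void process_message(const char *payload, size_t len)
{
    printf("got %u payload bytes\n", (unsigned)len);
    (void)payload;
}

enum State { WAIT_OPEN_TAG, READ_PAYLOAD };

static const size_t BUF_SIZE = 64;   // large enough for one complete message
static char   buf[BUF_SIZE];
static size_t pos   = 0;
static State  state = WAIT_OPEN_TAG;

// Call this for every byte received, regardless of how the reads were split up.
void feed_byte(char c)
{
    switch (state) {
    case WAIT_OPEN_TAG:
        if (pos == 0 && c != '<') return;                  // discard until '<'
        buf[pos++] = c;
        if (c == '>') {
            if (pos == 5 && memcmp(buf, "<msg>", 5) == 0) {
                pos = 0;                                   // start collecting the payload
                state = READ_PAYLOAD;
            } else {
                pos = 0;                                   // not the opening tag, resync
            }
        } else if (pos >= BUF_SIZE) {
            pos = 0;                                       // overflow, resync
        }
        break;

    case READ_PAYLOAD:
        buf[pos++] = c;
        if (c == '>') {
            // The buffer should now end with "</msg>".
            if (pos >= 6 && memcmp(buf + pos - 6, "</msg>", 6) == 0)
                process_message(buf, pos - 6);             // payload without closing tag
            pos = 0;
            state = WAIT_OPEN_TAG;                         // back to state 1 either way
        } else if (pos >= BUF_SIZE) {
            pos = 0;
            state = WAIT_OPEN_TAG;                         // overflow, resync
        }
        break;
    }
}
```

The parser never assumes anything about how many bytes arrive per read, so the fragmentation shown in the question stops mattering.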

Related

How to reverse engineer buffer data received from a serial port

I am trying to decode some buffer data that I have received from a serial port. The data seems to make no sense to me - I am not even sure if I am splitting the messages up correctly.
The data comes from a concrete crusher, and while the concrete is being crushed we get an almost continuous stream of data. I get about 10 "messages" a second (though each of these might actually contain multiple messages), and I am splitting them up by waiting 50 ms after each one. The data looks like this:
[0,0,0,224,0,224,0,0,224,0,0,224,0,0,0,0,0,0,0,224,0,0,0,0,224,0,0,0,224,0,224,0,0,224,0,0,0,224,0,0,0,0,0,0,0,224,0,0,0,224,0,0,224,0,0,224,0,224,0,0,224,0,0,0,0,0,0,0,0,224,224,0]
[0,0,0,224,0,224,0,0,224,0,0,224,0,0,0,0,0,0,0,224,0,0,0,0,0,0,0,224,0,224,0,0,0,0,0,0,0,0,0,0,0,0,0,224,0,0,0,224,0,0,224,0,0,224,0,0,0,0,224,0,0,0,0,0,0,0,0,224,224,0]
[0,0,0,224,0,224,0,0,224,0,0,224,0,0,0,0,0,0,0,224,0,224,0,0,224,0,0,0,224,0,224,224,224,0,0,0,224,0,0,0,0,0,0,0,0,224,0,0,0,224,0,224,224,224,0,0,224,0,0,0,224,0,0,0,224,0,0,0,0,0,224,224,0]
as you can see there are no values at all other than 0 and 224...
The last message is:
[0,0,0,0,0,0,224,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,224,0,0,224,0,0,0,0,0,0,0,0,0,0,0,0,0,0,224,0,0,0,0,224,224,0,0,224,0,0,224,0,0,0,224,0,0,0,0,0,0,0,0,0,0,0,224,0,0,0,0,0,224,0,224,224,0,0,224,0,0,0,224,224,0]
The value displayed on the machine was 427.681 kN, but I can't see any way that this data could produce that.
Each message ends with 224,224,0 so I am wondering if that is the split sequence?
I am getting this data with Node-RED, and this is the format in which I can copy it from the debug panel.
I am very lost so any guidance or directions that I can look in would be much appreciated.

Read a well-defined frame with QtSerialPort

I'm developing a desktop application with Qt which communicates with an STM32 to send and receive data.
The thing is, the data to transfer follows a well-defined format with predefined fields. My problem is that I can't figure out how read() or readAll() work, or how QSerialPort even treats the data. So my question is: how can I read data (in real time, whenever there is data in the buffer) and analyze it field by field (or byte by byte) in order to display it in the GUI?
There's nothing to it. read() and readAll() give you bytes, optionally wrapped in a QByteArray. How you deal with those bytes is up to you. The serial port doesn't "treat" or interpret the data in any way.
The major point of confusion is that somehow people think of a serial port as if it was a packet oriented interface. It's not. When the readyRead() signal fires, all that you're guaranteed is that there's at least one new byte available to read. You must cope with such fragmentation.
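To make that concrete, here is a minimal sketch of a readyRead() handler that only accumulates bytes and then extracts frames from the accumulated buffer. The port name, the fixed 16-byte frame size and the 0x55 start byte are placeholder assumptions, not anything QSerialPort defines; substitute whatever your STM32 protocol specifies:

```cpp
#include <QCoreApplication>
#include <QSerialPort>
#include <QByteArray>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    const int  FRAME_SIZE = 16;                       // assumed frame length
    const char START_BYTE = 0x55;                     // assumed start-of-frame marker

    QSerialPort port;
    port.setPortName("ttyUSB0");                      // placeholder port name
    port.setBaudRate(QSerialPort::Baud115200);
    if (!port.open(QIODevice::ReadOnly))
        qWarning() << "open failed:" << port.errorString();

    QByteArray buffer;                                // bytes accumulated across reads
    QObject::connect(&port, &QSerialPort::readyRead, [&]() {
        // readyRead() only guarantees "at least one new byte": append, then parse.
        buffer.append(port.readAll());
        while (buffer.size() >= FRAME_SIZE) {
            if (buffer.at(0) != START_BYTE) {
                buffer.remove(0, 1);                  // resync on the marker byte
                continue;
            }
            QByteArray frame = buffer.left(FRAME_SIZE);
            buffer.remove(0, FRAME_SIZE);
            qDebug() << "frame:" << frame.toHex();    // hand the fields to the GUI here
        }
    });

    return app.exec();
}
```

The key design point is that the buffer outlives each readyRead() call, so a frame split across two reads is simply completed on the next one.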

Handling messages over TCP

I'm trying to send and receive messages over TCP, with the size of each message prepended before it starts.
Say the first three bytes are the length and the message follows:
As a small example:
005Hello003Hey002Hi
I'll be using this method for large messages too, but the buffer size will be a constant, say 200 bytes. So there is a chance that a complete message may not be received, e.g. instead of 005Hello I get 005He, or that a complete length may not be received, e.g. I get only 2 bytes of the length.
So, to get over this problem, I'll need to wait for the next chunk and append it to the incomplete message, etc.
My question is: am I the only one having these difficulties appending chunks to each other, handling lengths, etc. to make messages complete? Or is this really how we usually need to handle individual messages over TCP? Or is there a better way?
What you're seeing is 100% normal TCP behavior. It is completely expected that you'll loop receiving bytes until you get a "message" (whatever that means in your context). It's part of the work of going from a low-level TCP byte stream to a higher-level concept like "message".
And "usr" is right above. There are higher level abstractions that you may have available. If they're appropriate, use them to avoid reinventing the wheel.
So there is a chance that a complete message may not be received, e.g. instead of 005Hello I get 005He, or that a complete length may not be received, e.g. I get only 2 bytes of the length.
Yes. TCP gives you at least one byte per read, that's all.
Or is this really how we usually need to handle individual messages over TCP? Or is there a better way?
Try using higher-level primitives. For example, BinaryReader allows you to read exactly N bytes (it will internally loop). StreamReader lets you forget this peculiarity of TCP as well.
Even better is to use higher-level abstractions still, such as HTTP (request/response pattern - very common), protobuf as a serialization format, or web services, which automate pretty much all transport-layer concerns.
Don't do TCP if you can avoid it.
So, to get over this problem, I'll need to wait for the next chunk and append it to the incomplete message, etc.
Yep, this is how things are done in socket-level code. For each socket you would want to allocate a buffer at least as large as the kernel socket receive buffer, so that you can read the entire kernel buffer in one read/recv/recvmsg call. Reading from the socket in a loop may starve other sockets in your application (this is one reason epoll is level-triggered by default: edge-triggered mode forces application writers to read in a loop).
The first incomplete message is always kept at the beginning of the buffer, and reading from the socket continues at the next free byte, so new data is automatically appended to the incomplete message.
Once reading is done, a higher-level callback is normally called with pointers to all of the read data in the buffer. That callback should consume all complete messages in the buffer and return how many bytes it has consumed (possibly 0 if there is only an incomplete message). The buffer-management code should then memmove the remaining unconsumed bytes (if any) to the beginning of the buffer. Alternatively, a ring buffer can be used to avoid moving those unconsumed bytes, but then the higher-level code has to be able to cope with ring-buffer iterators, which it may not be ready to do. Hence keeping the buffer linear may be the most convenient option.
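Here is a minimal sketch of that buffer management in C++, using the question's 3-digit ASCII length prefix. The function names are hypothetical, and a std::vector stands in for the linear receive buffer:

```cpp
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical consumer: handles one complete message ("Hello", "Hey", ...).
static void handle_message(const std::string &msg)
{
    printf("message: %s\n", msg.c_str());
}

// Consumes as many complete "NNN<payload>" messages as the buffer holds and
// returns how many bytes were used (0 if only an incomplete message is there).
static size_t consume_messages(const char *data, size_t len)
{
    size_t consumed = 0;
    while (len - consumed >= 3) {                      // need the 3-digit length first
        size_t msg_len = (data[consumed]     - '0') * 100 +
                         (data[consumed + 1] - '0') * 10 +
                         (data[consumed + 2] - '0');
        if (len - consumed - 3 < msg_len)              // payload not fully here yet
            break;
        handle_message(std::string(data + consumed + 3, msg_len));
        consumed += 3 + msg_len;
    }
    return consumed;
}

// Call this with every chunk returned by recv(); it keeps incomplete data around.
static void on_bytes_received(std::vector<char> &buffer, const char *chunk, size_t n)
{
    buffer.insert(buffer.end(), chunk, chunk + n);     // append the new chunk
    size_t consumed = consume_messages(buffer.data(), buffer.size());
    buffer.erase(buffer.begin(), buffer.begin() + consumed);  // keep the incomplete tail
}

int main()
{
    std::vector<char> buffer;
    on_bytes_received(buffer, "005He", 5);             // fragmented: nothing consumed yet
    on_bytes_received(buffer, "llo003Hey002Hi", 14);   // completes all three messages
}
```

A real implementation would also reject non-digit length bytes and cap the buffer size, but the consume-and-compact structure is the part that matters.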

AT command to disable Radio Signal Strength Indication and the like?

I'm working on a program to send and receive SMS using a GSM modem and my computer.
I have gotten sending and receiving to work - well, sort of.
Once in a while my program crashes completely because the modem mixes unsolicited information such as Radio Signal Strength Indication into the stream while it is also serving my program the hex code for the message.
My code can handle the hex code just fine, but I have seen the following line pop up while I'm decoding a byte stream:
^RSSI: 2
So far I've seen it send out values between 1 and 10.
Is there an AT Command that can disable them? I have no need for them.
Or alternative: Is there a general syntax for them, so I can filter them out before decoding?
I'm leaning towards a filter solution, but that would be easier to implement if I knew whether the modem always sends these in the form "^SOMETHING: xxx". It would also be nice to know whether they are always followed by a delimiter, for instance "\r".
You should try turning off periodic messages using AT^CURC=0.
Information regarding the AT^CURC command:
AT^CURC? Current setting of periodic status messages
AT^CURC=? See what the possible values are
AT^CURC=0 Turn off periodic status messages
The best way to tackle this scenario would be to replace that part of the response with an empty string, because otherwise it is difficult to check whether the command sent to disable these messages is actually working.
The following regex will match all of them; you can replace the matches, ideally with an empty string.
(\\n|\\r|\\r\\n)\\^.*(\\n|\\r|\\r\\n)
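For illustration, here is a sketch of that filter in C++ with std::regex, assuming (as in the question) that the unsolicited reports occupy their own line preceded by a \r or \n delimiter:

```cpp
#include <iostream>
#include <regex>
#include <string>

// Removes unsolicited result codes such as "^RSSI: 2" before further decoding.
// Assumes they sit on their own line, preceded by \r and/or \n, as in the question.
static std::string strip_unsolicited(const std::string &raw)
{
    static const std::regex unsolicited(R"((\r\n|\r|\n)\^[^\r\n]*)");
    return std::regex_replace(raw, unsolicited, "");
}

int main()
{
    // Placeholder modem response; the PDU content here is made up.
    std::string response = "+CMGR: 0,,24\r\n^RSSI: 2\r\n07914...\r\nOK\r\n";
    std::cout << strip_unsolicited(response);
}
```

Whether or not you also disable the reports with AT^CURC=0, running the response through a filter like this keeps the decoder robust against any stragglers.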

Data Error Checking

I've got a bit of an odd question. A friend of mine and I thought it would be funny to make a serial port kind of communication between computers using sound. Basically, computers emit a series of beeps to send data, and listen for beeps over a microphone to receive data. In short, the world's most annoying serial port. I have all of the basics worked out. I can filter out sounds of only one frequency and I have sent data from one computer to another. Although the transmission is fairly error free, being affected only by very loud noises, some issues still exist. My question is, what are some good ways to check the data for errors and, more importantly, recover from these errors.
My serial communication is very standard once you get past the fact that it uses sound waves. I use one start bit, 8 data bits, and one stop bit in every frame. I have already considered Cyclic Redundancy Checks, and I plan to factor this into my error checking, but CRCs don't account for some of the more insidious issues. For example, consider sending two bytes of data. You send the first one and it is received correctly, but just after the stop bit of the first byte and the start bit of the next, a large book falls on the floor, which the receiver interprets as a start bit. Now the true start bit is read as part of the data, and the receiver could be reading garbage data for many bytes to come. Eventually, a pause in the data could get things back on track.
That isn't the worst of it though. Bits can be dropped too, and most error checking schemes I can think of rely on receiving a certain number of bytes. What happens when the receiver keeps waiting for bytes that may not come?
So, you can see the complexity of this question. If you can direct me to any resources, or just give me a few tips, I would greatly appreciate your help.
A CRC is just a part of the solution. You can check for bad data, but then you have to do something about it. The transmitter has to re-send the data, and it needs to be told to do that. A protocol.
The starting point is that you split up the data into packets. A common approach is a start byte that indicates the start of the packet, followed by a packet number, followed by a length byte that indicates the length of the packet, followed by the data bytes and the CRC (see the sketch below). The receiver sends an ACK or NAK back to indicate success.
This solves several problems:
you don't care about a bad start bit anymore, the pause you need to recover is always there
you start a timer when you receive the first bit or byte, declare failure when the timer expires before the entire packet is received
the packet number helps you recover from bad ACK/NAK returns. The transmitter times out and resends the packet, you can detect the duplicate
RFC 916 describes such a protocol in detail. I never heard of anybody actually implementing it (other than me). Works pretty well.
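As a rough sketch of such a packet layout in C++ (the 0x7E start byte and the CRC-8 polynomial are arbitrary choices here; any values both ends agree on will do):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Assumed frame layout: [START][seq][len][payload ...][crc],
// where the CRC covers seq, len and the payload.
static const uint8_t START_BYTE = 0x7E;

// A simple CRC-8 (polynomial 0x07); CRC-16 would catch more errors at 2 bytes' cost.
static uint8_t crc8(const uint8_t *data, size_t len)
{
    uint8_t crc = 0;
    for (size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; ++bit)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07) : (uint8_t)(crc << 1);
    }
    return crc;
}

static std::vector<uint8_t> build_packet(uint8_t seq, const std::vector<uint8_t> &payload)
{
    std::vector<uint8_t> pkt;
    pkt.push_back(START_BYTE);
    pkt.push_back(seq);
    pkt.push_back((uint8_t)payload.size());
    pkt.insert(pkt.end(), payload.begin(), payload.end());
    pkt.push_back(crc8(pkt.data() + 1, pkt.size() - 1));   // CRC over seq, len, payload
    return pkt;
}

// Receiver side: true means "send ACK", false means "send NAK and wait for a resend".
static bool check_packet(const std::vector<uint8_t> &pkt)
{
    if (pkt.size() < 4 || pkt[0] != START_BYTE) return false;
    return crc8(pkt.data() + 1, pkt.size() - 2) == pkt.back();
}

int main()
{
    std::vector<uint8_t> pkt = build_packet(1, { 'H', 'i' });
    printf("packet valid: %s\n", check_packet(pkt) ? "yes" : "no");
}
```

The packet number lets the receiver drop a duplicate when the transmitter resends after a lost ACK, and the length byte tells the receiver when the whole packet should have arrived, which is what makes the timeout check above possible.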

Resources