UDP API in Rust - networking

1) The API for send here returns Result<usize>. Why is that? In my head, a UDP send is all or nothing. The return value seems to suggest that a send can succeed without the entire buffer being written, which makes me write code like:
let mut bytes_written = 0;
while bytes_written < data.len() {
    bytes_written += match udp_socket.send_to(&data[bytes_written..], peer_addr) {
        Ok(bytes_tx) => bytes_tx,
        Err(_) => break,
    };
}
Recently someone told me this loop is completely unnecessary, but I don't understand why. If that were true, why is the return type not Result<()> instead, which is also what I was expecting?
2) For reads, though, I understand: I could give it a buffer of 100 bytes but the datagram might only be 50 bytes long, so I should only use read_buf[..size_read]. My question here is: what happens if the buffer size is 100 but the datagram is, say, 150 bytes? Will recv_from fill in only 100 bytes and return Ok((100, some_peer_addr))? If I re-read, will it fill in the remainder of the datagram? What if another datagram of 50 bytes arrived before my second read? Will I get just the remaining 50 bytes the second time and the 50 bytes of the new datagram the third time, or a complete 100 bytes the second time which also contains the new datagram? Or will it be an error, so that I lose the rest of the first datagram from my initial read and can never recover it?

The answer to both of these questions lies in the documentation of the respective BSD sockets functions, sendto() and recvfrom(). If you use some *nix system (OS X or Linux, for example), you can use man sendto and man recvfrom to find it.
1) The sendto() man page is rather vague on this; the Windows API page explicitly says that it is possible for the return value to be less than the len argument. See also this question. This behavior seems to be somewhat under-documented, but I think it is safe to assume that the return value will always be equal either to len or to an error code. Problems may happen if the length of the data passed to sendto() exceeds the internal buffer size inside the OS kernel, but it seems that at least Windows will return an error in that case.
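In other words, for UDP you can treat a short send as a bug rather than something to retry with the remaining suffix. A minimal Rust sketch of that assumption (the peer address and payload here are invented for illustration):
use std::net::UdpSocket;

fn main() -> std::io::Result<()> {
    let socket = UdpSocket::bind("127.0.0.1:0")?; // ephemeral local port
    let data = b"hello";
    // "127.0.0.1:9000" is a hypothetical peer.
    let sent = socket.send_to(data, "127.0.0.1:9000")?;
    // A successful UDP send_to should report the whole datagram;
    // treat anything else as a logic error instead of looping.
    assert_eq!(sent, data.len());
    Ok(())
}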
2) The recvfrom() man page unambiguously states that the part of a datagram which does not fit into the buffer will be discarded:
The recvfrom() function shall return the length of the message
written to the buffer pointed to by the buffer argument. For
message-based sockets, such as SOCK_RAW, SOCK_DGRAM, and
SOCK_SEQPACKET, the entire message shall be read in a single
operation. If a message is too long to fit in the supplied buffer,
and MSG_PEEK is not set in the flags argument, the excess bytes shall
be discarded.
So yes, recv_from() will fill exactly 100 bytes, the rest will be discarded, and further calls to recv_from() will return new datagrams.
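You can see this for yourself by sending an oversized datagram to a local socket and reading it with a too-small buffer. A sketch, with both sockets in one process (note that on Windows the truncated read is reported as an error rather than a short count):
use std::net::UdpSocket;

fn main() -> std::io::Result<()> {
    let receiver = UdpSocket::bind("127.0.0.1:0")?;
    let sender = UdpSocket::bind("127.0.0.1:0")?;
    let addr = receiver.local_addr()?;

    sender.send_to(&[1u8; 150], addr)?; // 150-byte datagram
    sender.send_to(&[2u8; 50], addr)?;  // 50-byte datagram

    let mut buf = [0u8; 100];
    let (n, _) = receiver.recv_from(&mut buf)?;
    println!("first read: {} bytes", n);  // 100: the tail of datagram 1 is gone
    let (n, _) = receiver.recv_from(&mut buf)?;
    println!("second read: {} bytes", n); // 50: the next datagram, not the remainder
    Ok(())
}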

If you dig down, it's just wrapping the C sendto function. That function returns the number of bytes sent, so Rust simply passes that on (while handling the -1 case and turning errno into an actual Err).

Related

recv function gives malformed data Winsock2 C++

In my simple TCP client-server application, the server repeatedly sends a 1 kB message to the client, and the client sends a reply acknowledgement (just 'ACK') for each packet. Think of this scenario as the client and server passing 1 kB messages back and forth in an infinite loop.
I send the same message every time, and the first byte (first char) is always 1. But while testing this client and server application on the same machine for a long time, I noticed that the first character of some of the received messages is something else in the receive buffer, even though recv also returned 1024 (1 kB). This does not happen frequently.
This is how I receive:
char recvBuff[DEFAULT_BUFFER_SIZE];
int iResult = SOCKET_ERROR;
iResult = recv(curSocket, recvBuff, DEFAULT_BUFFER_SIZE, 0);
if (iResult == SOCKET_ERROR)
{
    return iResult;
}
if (recvBuff[0] != 1)
{
    // malformed receive
}
MessageHeader *q = (MessageHeader *)recvBuff;
message.header = *q;
q++;
std::string temp((char *)q, message.header.fragmentSize);
message.message = temp;
Actually the problem is in constructing the temp string: it breaks because the correct fragment size is not received. I tried to drop this kind of malformed data, but then there is a gap between the last successfully received fragment ID and the first successfully received fragment ID after the malformed receives. Any idea why these malformed receives happen?
You’re assuming that you’ve received a complete message when the recv() call completes. If this is a TCP connection (as opposed to UDP), it is byte-oriented, and that means that recv() will return whenever there are any bytes available.
Put more explicitly, there is no reason that doing
send (toServerSocket, someMessage, 1024, 0);
on the client side will cause
recv (fromClientSocket, myBuffer, 1024, 0);
to receive 1,024 bytes. It could just as well receive 27 bytes, with the remaining 997 coming from future calls to recv().
What's happening in your program, then, is that you're getting one of these short returns, and it's causing your program to lose sync with the message stream. How do you fix it? Use recv() to read enough of your message that you know the length (or use a fixed length, though that's inefficient in many cases). Then keep calling recv() into your buffer until you have read at least that many bytes. Note that you might read more bytes than the length of your message; that is, you may read some bytes that belong to the next message, so you will need to keep those in the buffer after processing the current message.
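The same fix expressed in Rust terms: a length prefix plus read_exact(), which loops internally until the requested number of bytes has arrived. A minimal sketch, assuming each message is preceded by a 4-byte big-endian length (this framing is my own choice for the example):
use std::io::Read;
use std::net::TcpStream;

fn read_message(stream: &mut TcpStream) -> std::io::Result<Vec<u8>> {
    let mut len_buf = [0u8; 4];
    stream.read_exact(&mut len_buf)?;  // may take several reads internally
    let len = u32::from_be_bytes(len_buf) as usize;
    let mut body = vec![0u8; len];
    stream.read_exact(&mut body)?;     // likewise loops until the body is full
    Ok(body)
}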

byte aligning in serial communication

So I am trying to define a communication protocol for serial communication. I want to be able to send 4-byte numbers to the device, but I'm unsure how to make sure that the device starts to pick them up at the right byte.
For instance if I want to send
0x1234abcd 0xabcd3f56 ...
how do I make sure that the device doesn't start reading at the wrong spot and get the first word as:
0xabcdabcd
Is there a clever way of doing this? I thought of using a marker for the start of a message, but what if I want to send the number I choose as data?
Why not send a start-of-message byte followed by a length-of-data byte if you know how big the data is going to be?
Alternatively, do as other binary protocols do and send only fixed-size packets with a fixed header. Say you will only ever send 4 bytes of data; then you know that you'll have one or more bytes of header before the actual data content.
Edit: I think you're misunderstanding me. What I mean is that the client should always regard bytes as either header or data, based not on their value but on their position in the stream. Say you're sending four bytes of data; then one byte would be the header byte.
+-+-+-+-+-+
|H|D|D|D|D|
+-+-+-+-+-+
The client would then be a pretty basic state machine, along the lines of:
int state = READ_HEADER;
int nDataBytesRead = 0;
while (true) {
    byte read = readInput();
    if (state == READ_HEADER) {
        // Process the byte as a header byte
        state = READ_DATA;
        nDataBytesRead = 0;
    } else {
        // Process the byte as incoming data
        ++nDataBytesRead;
        if (nDataBytesRead == 4) {
            state = READ_HEADER;
        }
    }
}
The thing about this setup is that what determines if the byte is a header byte is not the actual content of a byte, but rather the position in the stream. If you want to have a variable number of data bytes, add another byte to the header to indicate the number of data bytes following it. This way, it will not matter if you are sending the same value as the header in the data stream since your client will never interpret it as anything but data.
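In Rust terms, the variable-length variant of that idea might look like the following sketch (read_frame and the frame layout are my own names for this example, and any type implementing Read stands in for the serial port):
use std::io::Read;

// Frame layout assumed here: one header byte, one length byte,
// then exactly `length` data bytes.
fn read_frame<R: Read>(port: &mut R) -> std::io::Result<(u8, Vec<u8>)> {
    let mut hdr = [0u8; 2];
    port.read_exact(&mut hdr)?;  // [header, length]
    let mut data = vec![0u8; hdr[1] as usize];
    port.read_exact(&mut data)?; // data bytes are never mistaken for a header
    Ok((hdr[0], data))
}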
netstring
For this application, perhaps the relatively simple "netstring" format is adequate.
For example, the text "hello world!" encodes as:
12:hello world!,
The empty string encodes as the three characters:
0:,
which can be represented as the series of bytes
'0' ':' ','
The word 0x1234abcd in one netstring (using network byte order), followed by the word 0xabcd3f56 in another netstring, encodes as the series of bytes
'\n' '4' ':' 0x12 0x34 0xab 0xcd ',' '\n'
'\n' '4' ':' 0xab 0xcd 0x3f 0x56 ',' '\n'
(The newline character '\n' before and after each netstring is optional, but makes it easier to test and debug).
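A small encoder makes the format concrete. This sketch is my own illustration, and it writes the optional surrounding newlines:
// Encode a payload as a netstring: "<decimal length>:<bytes>,".
fn netstring_encode(payload: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    out.push(b'\n'); // optional leading newline
    out.extend_from_slice(payload.len().to_string().as_bytes());
    out.push(b':');
    out.extend_from_slice(payload);
    out.push(b',');
    out.push(b'\n'); // optional trailing newline
    out
}

fn main() {
    // The two example words, in network (big-endian) byte order.
    let a = netstring_encode(&0x1234abcdu32.to_be_bytes());
    let b = netstring_encode(&0xabcd3f56u32.to_be_bytes());
    assert_eq!(&a[..3], b"\n4:");
    println!("{:02x?} {:02x?}", a, b);
}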
frame synchronization
how do I make sure that the device doesn't start reading at the wrong spot
The general solution to the frame synchronization problem is to read into a temporary buffer, hoping that we have started reading at the right spot.
Later, we run some consistency checks on the message in the buffer.
If the message fails the check, something has gone wrong,
so we throw away the data in the buffer and start over.
(If it was an important message, we hope that the transmitter will re-send it).
For example, if the serial cable is plugged in halfway through the first netstring,
the receiver sees the byte string:
0xab 0xcd ',' '\n' '\n' '4' ':' 0xab 0xcd 0x3f 0x56 ',' '\n'
Because the receiver is smart enough to wait for the ':' before expecting the next byte to be valid data, the receiver is able to ignore the first partial message and then receive the second message correctly.
In some cases, you know ahead of time what the valid message length(s) will be;
that makes it even easier for the receiver to detect it has started reading at the wrong spot.
sending start-of-message marker as data
I thought of using a marker for the start of a message, but what if I want to send the number I choose as data?
After sending the netstring header, the transmitter sends the raw data as-is -- even if it happens to look like the start-of-message marker.
In the normal case, the receiver already has frame sync. The netstring parser has already read the "length" and the ":" header, so it puts the raw data bytes directly into the correct location in the buffer, even if those data bytes happen to look like the ":" header byte or the "," footer byte.
pseudocode
// netstring parser for receiver
// WARNING: untested pseudocode
// 2012-06-23: David Cary releases this pseudocode as public domain.
const int max_message_length = 9;
char buffer[1 + max_message_length]; // do we need room for a trailing NULL ?
long int latest_commanded_speed = 0;
int data_bytes_read = 0;
int bytes_read = 0;
int state = WAITING_FOR_LENGTH;

reset_buffer()
    bytes_read = 0; // reset buffer index to start-of-buffer
    state = WAITING_FOR_LENGTH;

void check_for_incoming_byte()
    if( inWaiting() ) // Has a new byte come into the UART?
        // If so, then deal with this new byte.
        if( NEW_VALID_MESSAGE == state )
            // oh dear. We had an unhandled valid message,
            // and now another byte has come in.
            reset_buffer();
        char newbyte = read_serial(1); // pull out 1 new byte.
        buffer[ bytes_read++ ] = newbyte; // and store it in the buffer.
        if( max_message_length < bytes_read )
            reset_buffer(); // reset: avoid buffer overflow
        switch state:
            WAITING_FOR_LENGTH:
                // FIXME: currently only handles messages of 4 data bytes
                if( '4' != newbyte )
                    reset_buffer(); // doesn't look like a valid header.
                else
                    // otherwise, it looks good -- move to next state
                    state = WAITING_FOR_COLON;
            WAITING_FOR_COLON:
                if( ':' != newbyte )
                    reset_buffer(); // doesn't look like a valid header.
                else
                    // otherwise, it looks good -- move to next state
                    state = WAITING_FOR_DATA;
                    data_bytes_read = 0;
            WAITING_FOR_DATA:
                // FIXME: currently only handles messages of 4 data bytes
                data_bytes_read++;
                if( data_bytes_read >= 4 )
                    state = WAITING_FOR_COMMA;
            WAITING_FOR_COMMA:
                if( ',' != newbyte )
                    reset_buffer(); // doesn't look like a valid message.
                else
                    // otherwise, it looks good -- move to next state
                    state = NEW_VALID_MESSAGE;

void handle_message()
    // FIXME: currently only handles messages of 4 data bytes
    long int temp = 0;
    temp = (temp << 8) | buffer[2];
    temp = (temp << 8) | buffer[3];
    temp = (temp << 8) | buffer[4];
    temp = (temp << 8) | buffer[5];
    reset_buffer();
    latest_commanded_speed = temp;
    print( "commanded speed has been set to: " & latest_commanded_speed );

void loop() // main loop, repeated forever
    // check to see if a byte has arrived yet
    check_for_incoming_byte();
    if( NEW_VALID_MESSAGE == state ) handle_message();
    // While we're waiting for bytes to come in, do other main loop stuff.
    do_other_main_loop_stuff();
more tips
When defining a serial communication protocol,
I find it makes testing and debugging much easier if the protocol always uses human-readable ASCII text characters, rather than any arbitrary binary values.
frame synchronization (again)
I thought of using a marker for the start of a message, but what if I want to send the number I choose as data?
We already covered the case where the receiver already has frame sync.
The case where the receiver does not yet have frame sync is pretty messy.
The simplest solution is for the transmitter to send a series of harmless bytes
(perhaps newlines or space characters),
the length of the maximum possible valid message,
as a preamble just before each netstring.
No matter what state the receiver is in when the serial cable is plugged in,
those harmless bytes eventually drive the receiver into the
"WAITING_FOR_LENGTH" state.
And then, when the transmitter sends the packet header (length followed by ":"),
the receiver correctly recognizes it as a packet header and has recovered frame sync.
(It's not really necessary for the transmitter to send that preamble before every packet.
Perhaps the transmitter could send it for 1 out of 20 packets; then the receiver is guaranteed to recover frame sync in 20 packets (usually less) after the serial cable is plugged in).
other protocols
Other systems use a simple Fletcher-32 checksum or something more complicated to detect many kinds of errors that the netstring format can't detect ( a, b ),
and can synchronize even without a preamble.
Many protocols use a special "start of packet" marker, and use a variety of "escaping" techniques to avoid actually sending a literal "start of packet" byte in the transmitted data, even if the real data we want to send happens to have that value.
( Consistent Overhead Byte Stuffing, bit stuffing, quoted-printable and other kinds of binary-to-text encoding, etc.).
Those protocols have the advantage that when the receiver sees the "start of packet" marker, it can be sure it is the actual start of a packet (and not some data byte that coincidentally happens to have the same value).
This makes handling loss of synchronization much easier -- simply discard bytes until the next "start of packet" marker.
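As a concrete instance of such an escaping scheme, here is a minimal sketch of a COBS encoder (my own illustration, not part of the original answer). It removes every 0x00 from the payload, so a literal 0x00 can then serve as an unambiguous packet delimiter:
// COBS-encode `data` so the output contains no 0x00 bytes.
// Each code byte holds the distance to the next (implicit) zero.
fn cobs_encode(data: &[u8]) -> Vec<u8> {
    let mut out = vec![0u8]; // placeholder for the first code byte
    let mut code_idx = 0;    // where the current code byte lives
    let mut code: u8 = 1;    // 1 + number of literal bytes in this block
    for &b in data {
        if b == 0 {
            out[code_idx] = code; // close the block; the zero is implied
            code_idx = out.len();
            out.push(0);
            code = 1;
        } else {
            out.push(b);
            code += 1;
            if code == 0xFF { // a full 254-byte block with no zero in it
                out[code_idx] = code;
                code_idx = out.len();
                out.push(0);
                code = 1;
            }
        }
    }
    out[code_idx] = code;
    out
}
A receiver that loses sync simply skips bytes until the next 0x00 delimiter and starts decoding fresh, which is exactly the easy resynchronization described above.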
Many other formats, including the netstring format, allow any possible byte value to be transmitted as data.
So receivers have to be smarter about handling the start-of-header byte that might be an actual start-of-header, or might be a data byte -- but at least they don't have to deal with "escaping" or the surprisingly large buffer required, in the worst case, to hold a "fixed 64-byte data message" after escaping.
Choosing one approach really isn't any simpler than the other -- it just pushes the complexity to another place, as predicted by waterbed theory.
Would you mind skimming over the discussion of various ways of handling the start-of-header byte, including these two ways, at the Serial Programming Wikibook,
and editing that book to make it better?

return value of QTcpSocket::write(QByteArray& buf);

Does this function always return buf.size() or -1?
If not, does that mean I need to call the function again to write the remaining data that was not written?
For example, suppose I have a 100-byte QByteArray.
When I call tcpSocket.write(buf_100_bytes), is it possible that I get 60 or something else?
Additionally, does this function return immediately?
As with POSIX write(), QIODevice::write() returns the number of bytes written. That can be any number between 0 and the buffer size. In case of an error, it returns a negative number, which you should check for separately.
QIODevice::write() does not block for sockets (they are set to non-blocking mode), the bytes are just added to a buffer and written later.
To get a notification when bytes are written, you can connect to the bytesWritten(qint64) signal. To block until the bytes are actually written, you can use waitForBytesWritten() (usually not a good idea in the main/UI thread).
I quote Qt documentation:
Writes at most maxSize bytes of data from data to the device. Returns the number of bytes that were actually written, or -1 if an error occurred.
It means it will return the number of bytes written, or -1 in case of an error. You can get the error by calling the error() method or by connecting to the error() signal.

when to use writeUTF() and writeUTFBytes() in ByteArray of AS3

I am trying to create a file format for myself, and I am now forming the header for my file. To write a known-length string into a ByteArray, which method should I use: writeUTF() or writeUTFBytes()?
From the Flex 3 language reference, I can tell that writeUTF() prepends the length of the string and throws a RangeError, whereas writeUTFBytes() does not.
Any suggestions would be appreciated.
The only difference between the two is that writeUTFBytes() doesn't prepend the message with the length of the string (The RangeError is because 65535 is the highest number you can store in 16 bits)
Where you'd use one over the other depends on what you're doing. For example, I use writeUTFBytes() when copying a XML object over to be compressed. In this case, I don't care about the length of the string, and it'd introduce something extra to the code.
writeUTF() can be useful if you're writing a streaming/network server, because by prefixing the message length to the message you know how many bytes to read on the other end before the message is complete. E.g., I have 200 bytes worth of incoming data. I read the length (a 16-bit integer), which tells me the message is 100 bytes. I read in 100 bytes and I know it's a complete message; everything after that is another message. If the message length said the message was 300 bytes, then I'd know I have to wait a bit before I have the full message.
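The same convention is easy to reproduce in other languages. A sketch of writeUTF()-style framing in Rust (a 16-bit big-endian length followed by the UTF-8 bytes; the function names are my own):
use std::io::{Read, Write};

// writeUTF()-style: 16-bit big-endian length prefix, then the bytes.
fn write_utf<W: Write>(w: &mut W, s: &str) -> std::io::Result<()> {
    let bytes = s.as_bytes();
    assert!(bytes.len() <= u16::MAX as usize); // the RangeError case
    w.write_all(&(bytes.len() as u16).to_be_bytes())?;
    w.write_all(bytes)
}

fn read_utf<R: Read>(r: &mut R) -> std::io::Result<String> {
    let mut len_buf = [0u8; 2];
    r.read_exact(&mut len_buf)?;
    let mut buf = vec![0u8; u16::from_be_bytes(len_buf) as usize];
    r.read_exact(&mut buf)?;
    Ok(String::from_utf8_lossy(&buf).into_owned())
}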
I think I have found the solution myself. It came to me when I was writing the code to read the data back. The corresponding functions for reading from a ByteArray are readUTF() and readUTFBytes(length:uint); the latter requires the length to be passed to it.
So if you know the length of the string you are going to write, you can use writeUTFBytes() and call readUTFBytes() with that size. Otherwise you can use writeUTF(), letting AS3 write the size of the data, which can then be read back with readUTF() without needing to know the length of the string.
Hope this is useful to someone else as well.

How the "OK" message looks like?

I have a device that sends data to a server.
Data
[ Client ] == > [ Server ]
After the validation on the server I want to return a message:
OK
[ Client ] < == [ Server ]
Is there a standard "OK" message to return? And an "ERROR" message? What do they look like? (e.g. ":0011", ":110F")
You've got to design an application-level protocol. TCP is a byte stream, so even the definition of "Data" in your client->server piece needs some protocol so that the receiver can know what bytes make up the data (when to stop reading).
A couple of common types of protocols are...
Length-delimited chunks. Every message starts with a 16 or 32-bit length prefix. Then that many bytes follow. The length needs to be in a defined byte order (see htons, ntohs, etc). Everyone who uses this protocol knows to read the length prefix then read that many bytes. Having defined that "chunk" on the network, you might put a header on the contents of the chunk. Maybe a message type (ACK, NAK, Data, etc) followed by some contents.
ASCII newline delimited. Each message is a line of ASCII (or UTF8, etc) text. It ends at a newline. Newline endings for the lines play the same role as the length prefix for the chunks above. You then define what's in each line (like space or comma-delimited ASCII/UTF8/whatever fields). Somewhere in that you'd define what data looks like, ACK, etc.
I'm sure you could come up with other ideas, but that's the basic job: defining your application-level protocol on top of TCP's byte stream.
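There is no standard wire format for "OK"; you define one as part of that protocol. As an illustration of the length-delimited option, here is a minimal sketch in Rust (the frame layout and type codes are invented for this example):
use std::io::{Read, Write};

const MSG_DATA: u8 = 0x01; // invented message-type codes
const MSG_ACK: u8 = 0x02;
const MSG_NAK: u8 = 0x03;

// Frame: 4-byte big-endian length (of type + payload), 1-byte type, payload.
fn send_msg<W: Write>(w: &mut W, kind: u8, payload: &[u8]) -> std::io::Result<()> {
    w.write_all(&((payload.len() as u32 + 1).to_be_bytes()))?;
    w.write_all(&[kind])?;
    w.write_all(payload)
}

fn recv_msg<R: Read>(r: &mut R) -> std::io::Result<(u8, Vec<u8>)> {
    let mut len_buf = [0u8; 4];
    r.read_exact(&mut len_buf)?;
    let len = u32::from_be_bytes(len_buf) as usize;
    if len == 0 {
        return Err(std::io::Error::new(
            std::io::ErrorKind::InvalidData,
            "empty frame",
        ));
    }
    let mut body = vec![0u8; len];
    r.read_exact(&mut body)?;
    Ok((body[0], body[1..].to_vec())) // (type, payload)
}
After validating the data, the server would reply with something like send_msg(&mut stream, MSG_ACK, b""), and with MSG_NAK plus an error payload otherwise.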
