TCP data is getting merged together - tcp

I have a problem using TCP.
When I write data from the client or the server, the other side should read it as one packet each time.
But sometimes, when I write data very fast (in a loop, for example), the client receives it all as one piece of data instead of handling it as, say, three different packets.
Sending data:
messageToSend = Encoding.ASCII.GetBytes(data);
c.GetStream().Write(messageToSend, 0, messageToSend.Length);
Receiving in the client:
byte[] message = new byte[1024];
int i = c.GetStream().Read(message, 0, message.Length);
string received = Encoding.ASCII.GetString(message, 0, i);
// Handle the new data....
Hope it was clear enough and thanks in advance!

TCP is by design a stream protocol: the bytes pile up in a buffer if you are not fast enough to read them out, and message boundaries are not preserved. UDP, for instance, is a datagram protocol with fixed packets that are readable separately. If you need message boundaries over TCP, you have to add your own framing, for example a length prefix or a delimiter in front of each logical message.
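As an illustration of that framing idea, here is a minimal, untested sketch of length-prefixed messages. It is written in C++ over plain BSD-style sockets purely as an example (the send_all/recv_all helpers, the raw socket descriptor, and the 4-byte big-endian length prefix are all choices of this sketch, not anything from the question), and the same approach carries straight over to the C# NetworkStream code above:
// Untested sketch: every logical message is sent as a 4-byte big-endian
// length prefix followed by the payload, so the receiver can rebuild the
// original message boundaries regardless of how TCP chunks the stream.
#include <arpa/inet.h>   // htonl, ntohl
#include <sys/socket.h>  // send, recv
#include <sys/types.h>   // ssize_t
#include <cstdint>
#include <string>
#include <vector>

// Write exactly len bytes, looping over short writes.
static bool send_all(int fd, const void *buf, size_t len) {
    const char *p = static_cast<const char *>(buf);
    while (len > 0) {
        ssize_t n = send(fd, p, len, 0);
        if (n <= 0) return false;
        p += n;
        len -= static_cast<size_t>(n);
    }
    return true;
}

// Read exactly len bytes, looping over short reads.
static bool recv_all(int fd, void *buf, size_t len) {
    char *p = static_cast<char *>(buf);
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0) return false;  // error or connection closed
        p += n;
        len -= static_cast<size_t>(n);
    }
    return true;
}

// Send one framed message: length prefix, then the payload bytes.
static bool send_message(int fd, const std::string &msg) {
    uint32_t len_be = htonl(static_cast<uint32_t>(msg.size()));
    return send_all(fd, &len_be, sizeof(len_be)) &&
           send_all(fd, msg.data(), msg.size());
}

// Receive one framed message: read the prefix, then exactly that many bytes.
static bool recv_message(int fd, std::string &out) {
    uint32_t len_be = 0;
    if (!recv_all(fd, &len_be, sizeof(len_be))) return false;
    std::vector<char> buf(ntohl(len_be));
    if (!recv_all(fd, buf.data(), buf.size())) return false;
    out.assign(buf.begin(), buf.end());
    return true;
}
With framing like this, the receiver always gets back whole messages, no matter how TCP splits or merges the bytes on the wire.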

Related

Processing: How do I set a client to read multiple messages sent at once from a server as separate messages?

I am working on setting up a basic network system in Processing:
import processing.net.*;
Server myServer;
Client myClient;
However, I'm having trouble with my server- and client-side communication setup. My client interprets all incoming messages as Strings, and the issue is that whenever multiple messages are sent from the server in the same frame, they get combined into a single string, which my program cannot interpret. After testing, I found that multiple messages sent from a single client to the server are given the same treatment.
My client reader looks like this:
while (myClient.available() > 0) {
  String dataIn = myClient.readString();
  // handle dataIn ...
}
As of now, I don't know if the problem is in the reader (combining the strings), or in the fact that I'm using write() multiple times in a single frame (and the data is being sent as a single string).
I am wondering if the messages can somehow be sent/read separately or, if not, whether there is some method to test if a message has already been sent (one that works for both the client and the server side) so that I can set up a queue to keep track of messages to be sent.
Well, I decided to forgo the idea of checking whether a message had already been sent, as I did not see any functions that would help do that. Instead, I went ahead and created an ArrayList of Strings for the server, called serverQ, to act as a queue of messages to be sent:
ArrayList <String> serverQ = new ArrayList<String>();
I also added a function writeSQ(String) that would place any input string into the queue:
void writeSQ(String s) {
  serverQ.add(s);
}
I then proceeded to replace every usage of myServer.write(String) with writeSQ(String). At the end of my ServerUpdate function, I added a section that empties the queue, sending the next string to all clients one frame at a time:
  // send data
  if (serverQ.size() > 0) {
    myServer.write(serverQ.get(0));
    serverQ.remove(0);
  }
} // end of ServerUpdate()
However, for some reason, messages still got compounded, so I speculated that it might be due to how close together (every frame) the messages were being sent; so I added a boolean serverSent to send messages only every other frame. The new code looks like this:
// send data
if (serverQ.size() > 0) {
  if (!serverSent) {
    myServer.write(serverQ.get(0));
    serverQ.remove(0);
    serverSent = true;
  } else {
    serverSent = false;
  }
}
This worked perfectly and the messages were interpreted individually by the clients. I added the exact same support code to the clients (changing everything from server to client where needed) and, after a good amount of testing, I can confirm that this new support works properly both ways.

sending a message over a socket using gen_tcp:send/2

How do I send a message of the form [Integer, String] in Erlang using gen_tcp?
E.g., I am looking to send messages of the form [25, "Hello"] or [50, "Hi"] over a socket using gen_tcp:send/2.
I tried [ID | Msg], but that doesn't help.
Thanks
In Erlang, a string is just a list of integers, so the list [25, "Hello"]
is actually represented as [25, [72,101,108,108,111]].
The question remains, how to send that information over a Socket in Erlang.
According to the documentation, in Erlang you can send the data as binary or as a list, which you can set either as {mode, binary} or {mode, list} when you create the Socket.
In the case that you are working with binary data (which is advisable, since sending and receiving binaries is faster than lists), you can do something like:
{ok, Socket} = gen_tcp:connect(Host, Port, [{mode, binary}]).
Data = list_to_binary([25, "Hello"]).
gen_tcp:send(Socket, Data).
Now, if you use your socket in list mode, you just send the list directly:
{ok, Socket} = gen_tcp:connect(Host, Port, [{mode, list}]).
Data = [25, "Hello"].
gen_tcp:send(Socket, Data).
On the server side, if you receive the data on a socket that is in list mode, you convert it back to your original format with:
[Integer|String] = ListReceived.
If you receive the data on a socket in binary mode, then you have to transform it from a binary to a list first, like:
[Integer|String] = binary_to_list(BinaryReceived).

UDP API in Rust

1) The API for send here returns Result<usize>. Why is that? In my head, a UDP send is all or nothing. The return value seems to suggest that sending can succeed without the entire data being written, which makes me write code like:
let mut bytes_written = 0;
while bytes_written < data.len() {
    bytes_written += match udp_socket.send_to(&data[bytes_written..], addr) {
        Ok(bytes_tx) => bytes_tx,
        Err(_) => break,
    };
}
Recently someone told me this is completely unnecessary. But I don't understand: if that were true, why is the return type not Result<()> instead, which is also what I was expecting?
2)
For reads, though, I understand. I could give it a buffer of 100 bytes but the datagram might only be 50 bytes long, so essentially I should use only read_buf[..size_read]. My question here is: what happens if the buffer size is 100 but the datagram is, say, 150 bytes? Will recv_from fill in only 100 bytes and return Ok((100, some_peer_addr))? If I re-read, will it fill in the rest of the datagram? What if another datagram of 50 bytes arrives before my second read? Will I get just the remaining 50 bytes the second time and the 50 bytes of the new datagram the third time, or the complete 100 bytes the second time, which would then also contain the new datagram? Or will it be an error, so that I lose the first datagram on my initial read and can never recover it?
The answer to both of these questions lies in the documentation of the respective BSD sockets functions, sendto() and recvfrom(). If you use some *nix system (OS X or Linux, for example), you can use man sendto and man recvfrom to find it.
1) The sendto() man page is rather vague on this; the Windows API page explicitly says that it is possible for the return value to be less than the len argument. See also this question. It looks like this particular point is somewhat under-documented. I think it is probably safe to assume that the return value will always be equal either to len or to the error code. Problems may happen if the length of the data sent through sendto() exceeds the internal buffer size inside the OS kernel, but it seems that at least Windows will return an error in this case.
2) The recvfrom() man page unambiguously states that the part of a datagram which does not fit into the buffer will be discarded:
The recvfrom() function shall return the length of the message
written to the buffer pointed to by the buffer argument. For
message-based sockets, such as SOCK_RAW, SOCK_DGRAM, and
SOCK_SEQPACKET, the entire message shall be read in a single
operation. If a message is too long to fit in the supplied buffer,
and MSG_PEEK is not set in the flags argument, the excess bytes shall
be discarded.
So yes, recv_from() will fill exactly 100 bytes, the rest will be discarded, and further calls to recv_from() will return new datagrams.
If you dig down, it is just wrapping the C sendto function. That function returns the number of bytes sent, so Rust just passes that on (while handling the -1 case and turning errno into actual errors).
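As a small illustration of that point, here is an untested sketch at the C level (the layer the answer refers to), written in C++; the names send_datagram, fd, data and dest are made up for this example. It simply treats a short sendto() return as an error instead of retrying in a loop:
// Untested sketch: in practice a UDP datagram goes out whole or not at all,
// so a return value shorter than the data length is treated as a failure
// rather than something to retry with the remaining bytes.
#include <sys/socket.h>
#include <sys/types.h>
#include <cstddef>
#include <vector>

static bool send_datagram(int fd, const std::vector<char> &data,
                          const sockaddr *dest, socklen_t dest_len) {
    ssize_t n = sendto(fd, data.data(), data.size(), 0, dest, dest_len);
    return n >= 0 && static_cast<size_t>(n) == data.size();
}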

recv function gives malformed data Winsock2 C++

In my simple TCP client-server application, the server repeatedly sends a 1 kB message to the client and the client sends an acknowledgement (just 'ACK') for each packet. Think of the scenario as the client and server passing 1 kB messages back and forth in an infinite loop.
I send the same message every time and the first byte (first char) is always 1. But while testing this client and server application on the same machine for a long time, I noticed that the first character of some received messages in the receive buffer is something else, and the recv function still returned 1024 (1 kB). This does not happen frequently.
This is how I receive:
char recvBuff[DEFAULT_BUFFER_SIZE];
int iResult = SOCKET_ERROR;
iResult = recv(curSocket, recvBuff, DEFAULT_BUFFER_SIZE, 0);
if (iResult == SOCKET_ERROR)
{
return iResult;
}
if (recvBuff[0] != 1)
{
//malformed receive
}
MessageHeader *q = (MessageHeader*)recvBuff;
message.header = *q; q++;
std::string temp((char*)q, message.header.fragmentSize);
message.message = temp;
Actually, the problem is in constructing the temp string: it breaks because the correct fragment size is not received. I tried to drop this kind of malformed data, but then there is a gap between the last successfully received fragment ID and the first successfully received fragment ID after the malformed receives. Any idea why these malformed receives happen?
You’re assuming that you’ve received a complete message when the recv() call completes. If this is a TCP connection (as opposed to UDP), it is byte-oriented, and that means that recv() will return whenever there are any bytes available.
Put more explicitly, there is no reason that doing
send (toServerSocket, someMessage, 1024, 0);
on the client side will cause
recv (fromClientSocket, myBuffer, 1024, 0);
to receive 1,024 bytes. It could just as well receive 27 bytes, with the remaining 997 coming from future calls to recv().
What's happening in your program, then, is that you're getting one of these short returns, and it's causing your program to lose sync with the message stream. How do you fix it? Use recv() to read enough of your message that you know the length (or use a fixed length, though that's inefficient in many cases). Then continue calling recv() into your buffer until you have read at least that many bytes. Note that you might read more bytes than the length of your message; that is, you may read some bytes that belong to the next message, so you will need to keep those in the buffer after processing the current message.
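A minimal, untested sketch of that approach against a blocking Winsock socket follows; recvExact and recvMessage are made-up helper names, while MessageHeader and fragmentSize are taken from the question's code:
#include <winsock2.h>
#include <string>

// Keep calling recv() until exactly 'want' bytes have arrived.
// Returns false on error or if the peer closes the connection.
static bool recvExact(SOCKET s, char *buf, int want)
{
    int got = 0;
    while (got < want)
    {
        int n = recv(s, buf + got, want - got, 0);
        if (n == SOCKET_ERROR || n == 0)
            return false;
        got += n;
    }
    return true;
}

// Read one complete message: the fixed-size header first, then exactly
// header.fragmentSize payload bytes.
static bool recvMessage(SOCKET s, MessageHeader &header, std::string &body)
{
    if (!recvExact(s, reinterpret_cast<char *>(&header), static_cast<int>(sizeof(header))))
        return false;
    body.assign(header.fragmentSize, '\0');
    if (!recvExact(s, &body[0], static_cast<int>(body.size())))
        return false;
    return true;
}
Because recvExact() never returns early, the header and the payload always line up, so the first byte of each message stays where the code expects it.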

byte aligning in serial communication

So I am trying to define a protocol for serial communication. I want to be able to send 4-byte numbers to the device, but I'm unsure how to make sure that the device starts to pick them up at the right byte.
For instance if I want to send
0x1234abcd 0xabcd3f56 ...
how do I make sure that the device doesn't start reading at the wrong spot and read the first word as:
0xabcdabcd
Is there a clever way of doing this? I thought of using a marker for the start of a message, but what if I want to send, as data, the very number I chose as the marker?
Why not send a start-of-message byte followed by a length-of-data byte if you know how big the data is going to be?
Alternatively, do as other binary protocols do and only send fixed-size packets with a fixed header. Say you will only ever send 4 bytes of data; then you know that you'll have one or more bytes of header before the actual data content.
Edit: I think you're misunderstanding me. What I mean is that the client is supposed to always regard a byte as either header or data, based not on its value but on its position in the stream. Say you're sending four bytes of data; then one byte would be the header byte.
+-+-+-+-+-+
|H|D|D|D|D|
+-+-+-+-+-+
The client would then be a pretty basic state machine, along the lines of:
int state = READ_HEADER;
int nDataBytesRead = 0;
while (true) {
    byte read = readInput();
    if (state == READ_HEADER) {
        // process the byte as a header byte
        state = READ_DATA;
        nDataBytesRead = 0;
    } else {
        // Process the byte as incoming data
        ++nDataBytesRead;
        if (nDataBytesRead == 4) {
            state = READ_HEADER;
        }
    }
}
The thing about this setup is that what determines whether a byte is a header byte is not its actual content, but rather its position in the stream. If you want to have a variable number of data bytes, add another byte to the header to indicate the number of data bytes following it. This way, it will not matter if you are sending the same value as the header in the data stream, since your client will never interpret it as anything but data.
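For example, here is a minimal, untested C++ sketch of that variable-length variant (one header byte, one length byte, then that many data bytes); the Parser and feed() names are made up for illustration:
// Untested sketch: whether a byte is header, length or data is decided purely
// by its position in the stream, never by its value.
#include <cstdint>
#include <vector>

enum State { READ_HEADER, READ_LENGTH, READ_DATA };

struct Parser {
    State state = READ_HEADER;
    uint8_t expected = 0;          // data bytes still to read in this packet
    std::vector<uint8_t> payload;  // data bytes of the packet being assembled

    // Feed one received byte; returns true when a complete packet has arrived.
    bool feed(uint8_t b) {
        switch (state) {
        case READ_HEADER:
            // Header byte: its value does not matter, its position says what it is.
            state = READ_LENGTH;
            return false;
        case READ_LENGTH:
            expected = b;
            payload.clear();
            state = (expected == 0) ? READ_HEADER : READ_DATA;
            return expected == 0;  // a zero-length packet is complete immediately
        case READ_DATA:
            payload.push_back(b);
            if (payload.size() == expected) {
                state = READ_HEADER;  // the next byte starts the next packet
                return true;          // payload now holds the packet's data
            }
            return false;
        }
        return false;
    }
};
Even if a data byte happens to equal the header or length byte, the parser never mistakes it for one, because it is only ever read while in the READ_DATA state.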
netstring
For this application, perhaps the relatively simple "netstring" format is adequate.
For example, the text "hello world!" encodes as:
12:hello world!,
The empty string encodes as the three characters:
0:,
which can be represented as the series of bytes
'0' ':' ','
The word 0x1234abcd in one netstring (using network byte order), followed by the word 0xabcd3f56 in another netstring, encodes as the series of bytes
'\n' '4' ':' 0x12 0x34 0xab 0xcd ',' '\n'
'\n' '4' ':' 0xab 0xcd 0x3f 0x56 ',' '\n'
(The newline character '\n' before and after each netstring is optional, but makes it easier to test and debug).
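As a small, untested illustration (in C++; encode_word_netstring is a made-up name), this is one way to produce exactly the byte sequences listed above for a 32-bit word:
#include <cstdint>
#include <string>

// Encode a 32-bit word as a netstring: "<length>:" + 4 payload bytes in
// network byte order + ",", with the optional '\n' framing around it.
static std::string encode_word_netstring(uint32_t word) {
    std::string out = "\n4:";  // the payload length is always 4 bytes here
    out += static_cast<char>((word >> 24) & 0xFF);
    out += static_cast<char>((word >> 16) & 0xFF);
    out += static_cast<char>((word >> 8) & 0xFF);
    out += static_cast<char>(word & 0xFF);
    out += ",\n";
    return out;
}
// encode_word_netstring(0x1234abcd) yields the bytes
// '\n' '4' ':' 0x12 0x34 0xab 0xcd ',' '\n'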
frame synchronization
how do I make sure that the device doesn't start reading at the wrong spot
The general solution to the frame synchronization problem is to read into a temporary buffer, hoping that we have started reading at the right spot.
Later, we run some consistency checks on the message in the buffer.
If the message fails the check, something has gone wrong,
so we throw away the data in the buffer and start over.
(If it was an important message, we hope that the transmitter will re-send it).
For example, if the serial cable is plugged in halfway through the first netstring,
the receiver sees the byte string:
0xab 0xcd ',' '\n' '\n' '4' ':' 0xab 0xcd 0x3f 0x56 ',' '\n'
Because the receiver is smart enough to wait for the ':' before expecting the next byte to be valid data, the receiver is able to ignore the first partial message and then receive the second message correctly.
In some cases, you know ahead of time what the valid message length(s) will be;
that makes it even easier for the receiver to detect it has started reading at the wrong spot.
sending start-of-message marker as data
I thought of using a marker for the start of a message, but what if I want to send, as data, the very number I chose as the marker?
After sending the netstring header, the transmitter sends the raw data as-is -- even if it happens to look like the start-of-message marker.
In the normal case, the receiver already has frame sync.
The netstring parser has already read the "length" and the ":" header,
so it puts the raw data bytes directly into the correct location in the buffer -- even if those data bytes happen to look like the ":" header byte or the "," footer byte.
pseudocode
// netstring parser for receiver
// WARNING: untested pseudocode
// 2012-06-23: David Cary releases this pseudocode as public domain.
const int max_message_length = 9;
char buffer[1 + max_message_length]; // do we need room for a trailing NULL ?
long int latest_commanded_speed = 0;
int data_bytes_read = 0;
int bytes_read = 0;
int state = WAITING_FOR_LENGTH;
reset_buffer()
  bytes_read = 0; // reset buffer index to start-of-buffer
  state = WAITING_FOR_LENGTH;

void check_for_incoming_byte()
  if( inWaiting() ) // Has a new byte come into the UART?
    // If so, then deal with this new byte.
    if( NEW_VALID_MESSAGE == state )
      // oh dear. We had an unhandled valid message,
      // and now another byte has come in.
      reset_buffer();
    char newbyte = read_serial(1); // pull out 1 new byte.
    buffer[ bytes_read++ ] = newbyte; // and store it in the buffer.
    if( max_message_length < bytes_read )
      reset_buffer(); // reset: avoid buffer overflow
    switch state:
      WAITING_FOR_LENGTH:
        // FIXME: currently only handles messages of 4 data bytes
        if( '4' != newbyte )
          reset_buffer(); // doesn't look like a valid header.
        else
          // otherwise, it looks good -- move to next state
          state = WAITING_FOR_COLON;
      WAITING_FOR_COLON:
        if( ':' != newbyte )
          reset_buffer(); // doesn't look like a valid header.
        else
          // otherwise, it looks good -- move to next state
          state = WAITING_FOR_DATA;
          data_bytes_read = 0;
      WAITING_FOR_DATA:
        // FIXME: currently only handles messages of 4 data bytes
        data_bytes_read++;
        if( 4 <= data_bytes_read )
          state = WAITING_FOR_COMMA;
      WAITING_FOR_COMMA:
        if( ',' != newbyte )
          reset_buffer(); // doesn't look like a valid message.
        else
          // otherwise, it looks good -- move to next state
          state = NEW_VALID_MESSAGE;

void handle_message()
  // FIXME: currently only handles messages of 4 data bytes
  long int temp = 0;
  temp = (temp << 8) | buffer[2];
  temp = (temp << 8) | buffer[3];
  temp = (temp << 8) | buffer[4];
  temp = (temp << 8) | buffer[5];
  reset_buffer();
  latest_commanded_speed = temp;
  print( "commanded speed has been set to: " & latest_commanded_speed );
void loop() // main loop, repeated forever
  // check to see if a byte has arrived yet
  check_for_incoming_byte();
  if( NEW_VALID_MESSAGE == state ) handle_message();
  // While we're waiting for bytes to come in, do other main loop stuff.
  do_other_main_loop_stuff();
more tips
When defining a serial communication protocol,
I find it makes testing and debugging much easier if the protocol always uses human-readable ASCII text characters, rather than any arbitrary binary values.
frame synchronization (again)
I thought of using a marker for the start of a message, but what if I want to send the number I choose as data?
We already covered the case where the receiver already has frame sync.
The case where the receiver does not yet have frame sync is pretty messy.
The simplest solution is for the transmitter to send a series of harmless bytes
(perhaps newlines or space characters),
the length of the maximum possible valid message,
as a preamble just before each netstring.
No matter what state the receiver is in when the serial cable is plugged in,
those harmless bytes eventually drive the receiver into the
"WAITING_FOR_LENGTH" state.
And then when the transmitter sends the packet header (length followed by ":"),
the receiver correctly recognizes it as a packet header and has recovered frame sync.
(It's not really necessary for the transmitter to send that preamble before every packet.
Perhaps the transmitter could send it for 1 out of every 20 packets; then the receiver is guaranteed to recover frame sync within 20 packets (usually fewer) after the serial cable is plugged in).
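A tiny, untested sketch of that preamble trick (in C++; with_preamble and max_message_length are made-up names for this example):
#include <string>
#include <cstddef>

// Prepend a run of harmless '\n' bytes, at least as long as the longest valid
// message, so that a receiver in any state falls back to WAITING_FOR_LENGTH
// before the real packet header arrives.
static std::string with_preamble(const std::string &netstring, std::size_t max_message_length) {
    return std::string(max_message_length, '\n') + netstring;
}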
other protocols
Other systems use a simple Fletcher-32 checksum or something more complicated to detect many kinds of errors that the netstring format can't detect ( a, b ),
and can synchronize even without a preamble.
Many protocols use a special "start of packet" marker, and use a variety of "escaping" techniques to avoid actually sending a literal "start of packet" byte in the transmitted data, even if the real data we want to send happens to have that value.
( Consistent Overhead Byte Stuffing, bit stuffing, quoted-printable and other kinds of binary-to-text encoding, etc.).
Those protocols have the advantage that the receiver can be sure that when we see the "start of packet" marker, it is the actual start of a packet (and not some data byte that coincidentally happens to have the same value).
This makes handling loss of synchronization much easier -- simply discard bytes until the next "start of packet" marker.
Many other formats, including the netstring format, allow any possible byte value to be transmitted as data.
So receivers have to be smarter about handling the start-of-header byte that might be an actual start-of-header, or might be a data byte -- but at least they don't have to deal with "escaping" or the surprisingly large buffer required, in the worst case, to hold a "fixed 64-byte data message" after escaping.
Choosing one approach really isn't any simpler than the other -- it just pushes the complexity to another place, as predicted by waterbed theory.
Would you mind skimming over the discussion of various ways of handling the start-of-header byte, including these two ways, at the Serial Programming Wikibook,
and editing that book to make it better?
