What if Poco StreamSocket.available() returns 0?

In what situation will StreamSocket.available() return 0?
What should I do in that case? Should I close the connection, or just ignore it? My guess is that the system is really busy, leading to 0 bytes being returned by StreamSocket.

What you should do depends on the logic of your application. If available() returns 0 and you expect some data, it means the data has not yet arrived; you may want to keep inquiring, or simply call receiveBytes() (which, if the socket is in blocking mode, will block until data arrives). If receiveBytes() returns 0, then the remote end has closed or reset the connection and you should drop it, too.
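For illustration, a minimal sketch of that logic, assuming a connected Poco::Net::StreamSocket in blocking mode; the helper name and buffer size are made up for the example:

#include <Poco/Net/StreamSocket.h>
#include <string>

// Returns false once the peer has closed or reset the connection.
bool readSome(Poco::Net::StreamSocket& socket, std::string& out)
{
    // available() == 0 only means "nothing buffered yet"; in blocking mode
    // receiveBytes() simply waits until data arrives (or the peer closes).
    char buffer[1024];
    int n = socket.receiveBytes(buffer, sizeof(buffer));
    if (n == 0)
    {
        socket.close();   // remote end closed/reset: drop the connection too
        return false;
    }
    out.append(buffer, n);
    return true;
}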

Related

Will the write() system call block further operation till read() is invoked, or vice versa?

Written as part of a TCP/IP client-server:
Server:
write(nfds,data1,sizeof(data1));
usleep(1000);
write(nfds,data2,sizeof(data2));
Client:
read(fds,s,sizeof(s));
printf("%s",s);
read(fds,s,sizeof(s));
printf("%s",s);
Without usleep(1000) between the two calls to write(), the client prints data1 twice. Why is this?
Background:
I am writing a client-server program where the server has to send two consecutive pieces of information, right after acquiring them, over the network (socket); nfds is the file descriptor we get from accept().
On the client side, we receive this information via read(); here fds is the file descriptor obtained via socket().
My issue is that when I am NOT using usleep(1000) between the write() calls, the client just prints the data represented by data1 twice, instead of printing data1 and then data2. When I put in the usleep() it is fine. Exactly WHY is this happening? Is write() blocking further operation till the buffer is read, or is read() blocking till data is written into the buffer? Or am I completely off the page?
You are making several false assumptions. There is nothing in TCP that guarantees that one send equals one receive. There is a lot of buffering at both ends, and there are deliberate delays in sending so as to coalesce packets (the Nagle algorithm). When you call read(), or recv() and friends, you need to store the result in a variable and examine it for each of the following cases:
-1: an error: examine/log/print errno, or strerror(), or call perror(), and in most cases close the socket and exit the reading loop.
0: end of stream; the peer has closed the connection; close the socket and exit the reading loop.
a positive value but less than you expected: keep reading and accumulate the data until you have everything you need.
a positive value that is more than you expected: process the data you expected, and save the rest for next time.
exactly what you expected: process the data, discard it all, and repeat. This is the easy case, and it is rare, but it is the only case you are currently programming for.
Don't add sleeps into networking code. It doesn't solve problems, it only delays them.
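For illustration, a minimal sketch of such a loop, assuming a connected socket fd and a caller that knows how many bytes it expects; the helper name readExactly is made up for the example:

#include <sys/types.h>
#include <unistd.h>
#include <cstdio>

// Reads exactly `expected` bytes unless an error occurs or the peer closes early.
ssize_t readExactly(int fd, char* buf, size_t expected)
{
    size_t got = 0;
    while (got < expected)
    {
        ssize_t n = read(fd, buf + got, expected - got);
        if (n < 0) {                 // error: inspect errno / perror and give up
            perror("read");
            return -1;
        }
        if (n == 0)                  // end of stream: the peer closed the connection
            break;
        got += n;                    // short read: keep accumulating
    }
    return (ssize_t)got;             // == expected only in the "easy case"
}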

How exactly does set_timeout() work on a TcpStream?

I'm working with TcpStream. The basic structure I'm working with is:
loop {
    if /* new data in the stream */ { /* handle it */ }
    /* do a lot of other stuff */
}
So set_timeout() appears to be what I need, but I'm a little puzzled about how it works. The documentation says:
This function will set a timeout for all blocking operations (including reads and writes) on this stream. The timeout specified is a relative time, in milliseconds, into the future after which point operations will time out. This means that the timeout must be reset periodically to keep it from expiring.
So I would expect to have to reset the timeout each time before checking whether new data is available; otherwise I would only get Err(TimeOut) after some time.
But that appears not to be the case: if I set a very low timeout (like 10 ms) once and for all, the loop does exactly what I want. It returns new data if there is some, and returns Err(TimeOut) if there is none.
Am I misunderstanding the documentation? Is it safe for me to rely on this behavior?
I would have expected it to work like a socket timeout, the kind most operating systems offer as a property of the socket and that is exposed in programming languages as SO_TIMEOUT or similar. With such a socket timeout, the timer starts whenever you begin a blocking operation on the socket, like read, write, or connect. Either the operation succeeds within the time frame, or the timer fires and the operation fails with a timeout. The timeout is a property of the socket and not of the operation, so there is no need to set it again before each operation.
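For comparison, a minimal sketch of that OS-level behavior (not the Rust API), assuming a connected POSIX socket fd; the function name and the 10-second value are just for illustration:

#include <sys/socket.h>
#include <sys/time.h>

void setClassicReceiveTimeout(int fd)
{
    timeval tv;
    tv.tv_sec  = 10;   // the timer applies to each blocking receive individually
    tv.tv_usec = 0;
    // The timeout is a property of the socket, so it is set only once; every
    // later recv() fails with EAGAIN/EWOULDBLOCK if nothing arrives within
    // 10 seconds of that particular call.
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
}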
But according to the documentation, Rust implemented something completely different. If I interpret the documentation correctly, they don't set a timeout per operation; instead they set a deadline for all operations of this type on the socket. That is, when the timer is set to 10 seconds you can have multiple reads within this time, but if a read is still active after 10 seconds it will be stopped.
For someone used to working with socket timeouts in other languages this is not the expected behavior, and it looks like the Rust developers have similar objections to this (experimental) API. In https://github.com/rust-lang/rust/issues/15802 they suggest renaming these functions from set..timeout to set..deadline so that the name reflects the behavior.

Handling messages over TCP

I'm trying to send and receive messages over TCP, prepending the size of each message before the message itself.
Say the first three bytes are the length and the rest is the message.
As a small example:
005Hello003Hey002Hi
I'll be using this method for large messages, but the buffer size will be a constant integer, say 200 bytes. So there is a chance that a complete message may not be received in one read, e.g. instead of 005Hello I get 005He, or that the length itself arrives incomplete, e.g. I get only 2 bytes of the length.
So, to get over this problem, I'll need to wait for the next chunk and append it to the incomplete message, and so on.
My question is: am I the only one having these difficulties with appending chunks, tracking lengths, etc. to make messages complete, or is this really how we usually need to handle individual messages on TCP? Or is there a better way?
What you're seeing is 100% normal TCP behavior. It is completely expected that you'll loop receiving bytes until you get a "message" (whatever that means in your context). It's part of the work of going from a low-level TCP byte stream to a higher-level concept like "message".
And "usr" is right above. There are higher level abstractions that you may have available. If they're appropriate, use them to avoid reinventing the wheel.
So there is a chance that a complete message may not be received in one read, e.g. instead of 005Hello I get 005He, or that the length itself arrives incomplete, e.g. I get only 2 bytes of the length.
Yes. TCP gives you at least one byte per read, that's all.
Or is this really how we usually need to handle individual messages on TCP? Or is there a better way?
Try using higher-level primitives. For example, BinaryReader allows you to read exactly N bytes (it will loop internally). StreamReader lets you forget this peculiarity of TCP as well.
Even better is to use higher-level abstractions such as HTTP (the request/response pattern is very common), protobuf as a serialization format, or web services, which automate pretty much all transport-layer concerns.
Don't do TCP if you can avoid it.
So, to get over this problem, I'll need to wait for the next chunk and append it to the incomplete message, and so on.
Yep, this is how things are done in socket-level code. For each socket you should allocate a buffer at least as large as the kernel socket receive buffer, so that you can read the entire kernel buffer in one read/recv/recvmsg call. Reading from a socket in a loop may starve other sockets in your application (this is why epoll is level-triggered by default: edge-triggered mode forces application writers to read in a loop).
The first incomplete message is always kept at the beginning of the buffer, and reading from the socket continues at the next free byte, so new data is automatically appended to the incomplete message.
Once reading is done, a higher-level callback is normally invoked with pointers to all the data read into the buffer. That callback should consume all complete messages in the buffer and return how many bytes it consumed (possibly 0 if there is only an incomplete message). The buffer management code should then memmove the remaining unconsumed bytes (if any) to the beginning of the buffer. Alternatively, a ring buffer can be used to avoid moving those unconsumed bytes, but then the higher-level code has to cope with ring-buffer iterators, which it may not be ready to do. Hence keeping the buffer linear may be the most convenient option.
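For illustration, a minimal sketch of that buffer management, assuming a connected socket fd; the callback type and names are made up for the example:

#include <sys/types.h>
#include <sys/socket.h>
#include <cstring>
#include <cstddef>

// The higher-level callback: consumes complete messages, returns bytes consumed.
typedef size_t (*ConsumeFn)(const char* data, size_t len);

char   buf[64 * 1024];   // ideally at least as large as the kernel receive buffer
size_t used = 0;         // bytes currently held (may begin with a partial message)

bool onReadable(int fd, ConsumeFn consumeMessages)
{
    ssize_t n = recv(fd, buf + used, sizeof(buf) - used, 0);
    if (n <= 0)
        return false;                       // error or peer closed
    used += n;                              // new bytes land right after the partial message

    size_t consumed = consumeMessages(buf, used);   // may be 0 (only a partial message)
    // Move the unconsumed tail (an incomplete message) back to the front so
    // the next recv() appends to it.
    std::memmove(buf, buf + consumed, used - consumed);
    used -= consumed;
    return true;
}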

How to properly determine that the end-of-input has been reached in a QTCPSocket?

I have the following code that reads from a QTCPSocket:
QString request;
while (pSocket->waitForReadyRead())
{
    request.append(pSocket->readAll());
}
The problem with this code is that it reads all of the input and then pauses at the end for 30 seconds. (Which is the default timeout.)
What is the proper way to avoid the long timeout and detect that the end of the input has been reached? (An answer that avoids signals is preferred because this is supposed to be happening synchronously in a thread.)
The only way to be sure is when you have received the exact number of bytes you are expecting. This is commonly done by sending the size of the data at the beginning of the data packet. Read that first and then keep looping until you get it all. An alternative is to use a sentinel, a specific series of bytes that mark the end of the data but this usually gets messy.
If you're dealing with a situation like an HTTP response that doesn't contain a Content-Length, and you know the other end will close the connection once the data is sent, there is an alternative solution.
Use socket.setReadBufferSize to make sure there's enough read buffer for all the data that may be sent.
Call socket.waitForDisconnected to wait for the remote end to close the connection
Use socket.bytesAvailable as the content length
This works because a close of the connection doesn't discard any buffered data in a QTcpSocket.
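A minimal sketch of those three steps, assuming the request has already been written and that the peer really does close the connection when it is done sending; the function name and timeout value are made up for the example:

#include <QTcpSocket>
#include <QByteArray>

QByteArray readUntilPeerCloses(QTcpSocket& socket)
{
    socket.setReadBufferSize(0);              // 0 = unlimited, so nothing is dropped
    if (!socket.waitForDisconnected(30000))   // wait for the remote end to close
        return QByteArray();                  // still connected after 30 s: give up
    // The buffered data survives the close, so bytesAvailable() is the content length.
    return socket.read(socket.bytesAvailable());
}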

Receiving an image through Winsock

I have a proxy server running on my local machine, used to cache images while surfing. I set up my browser with a proxy to 127.0.0.1, receive the HTTP requests, take the data and send it back to the browser. It works fine for everything except large images: when I receive the image data, only half the image is displayed (e.g. the top half of the Google logo). Here's my code:
char buffer[1024] = "";
string ret("");
while (true)
{
    valeurRetour = recv(socketClient_, buffer, sizeof(buffer), 0);
    if (valeurRetour <= 0) break;
    string t;
    t.assign(buffer, valeurRetour);
    ret += t;
    longueur += valeurRetour;
}
closesocket(socketClient_);
valeurRetour = send(socketServeur_, ret.c_str(),longueur, 0);
the socketClient_ is non-blocking. Any idea how to fix this problem?
You're not making fine enough distinctions among the possible return values of recv.
There are two levels here.
The first is, you're lumping 0 and -1 together. 0 means the remote peer closed its sending half of the connection, so your code does the right thing here, closing its socket down, too. -1 means something happened besides data being received. It could be a permanent error, a temporary error, or just a notification from the stack that something happened besides data being received. Your code lumps all such possibilities together, and on top of that treats them the same as when the remote peer closes the connection.
The second level is that not all reasons for getting -1 from recv are "errors" in the sense that the socket is no longer useful. I think if you start checking for -1 and then calling WSAGetLastError to find out why you got -1, you'll get WSAEWOULDBLOCK, which is normal since you have a non-blocking socket. It means the recv call cannot return data because it would have to block your program's execution thread to do so, and you told Winsock you wanted non-blocking calls.
A naive fix is to not break out of the loop on WSAEWOULDBLOCK but that just means you burn CPU time calling recv again and again until it returns data. That goes against the whole point of non-blocking sockets, which is that they let your program do other things while the network is busy. You're supposed to use functions like select, WSAAsyncSelect or WSAEventSelect to be notified when a call to the API function is likely to succeed again. Until then, you don't call it.
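For illustration, a minimal sketch of that approach, assuming a non-blocking SOCKET like the poster's socketClient_; the helper name and buffer size are made up, and error handling is reduced to the distinctions described above:

#include <winsock2.h>
#include <string>

// Reads until the peer closes its sending half; returns false on a real error.
bool readAll(SOCKET sock, std::string& out)
{
    char buffer[1024];
    for (;;)
    {
        fd_set readSet;
        FD_ZERO(&readSet);
        FD_SET(sock, &readSet);
        // Wait until the socket is readable instead of spinning on recv().
        if (select(0, &readSet, NULL, NULL, NULL) == SOCKET_ERROR)
            return false;

        int n = recv(sock, buffer, sizeof(buffer), 0);
        if (n > 0)
            out.append(buffer, n);                      // data: keep accumulating
        else if (n == 0)
            return true;                                // peer closed its sending half: done
        else if (WSAGetLastError() == WSAEWOULDBLOCK)
            continue;                                   // not an error: just not readable yet
        else
            return false;                               // real error: inspect WSAGetLastError()
    }
}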
You might want to visit The Winsock Programmer's FAQ. (Disclaimer: I'm its maintainer.)
Have you analyzed the transaction at the HTTP level, i.e. checked the headers?
Are you accounting for things like chunked transfers?
I do not have a definite answer, in part because of the lack of detail given here.
