Relationship with recvfrom, sleep - networking

I don't know exactly what is going on in my case.
I was testing UPnP on Linux, just using recvfrom().
I got an unexpected number of HTTP responses (this time, I expected 3).
So I put sleep(1) in the while() loop, and it works!
My question is: why?
recvfrom() returns one packet per call into the buffer <-- this is what I know; is there a relationship with this?

You can use the recvfrom() function for both connection-oriented and connectionless sockets. If you are using this function on a connectionless socket and a message is too long to fit in the supplied buffer, the excess bytes are discarded. To avoid this kind of situation, you can set the MSG_WAITALL flag, which "requests that the function block until the full amount of data requested can be returned. The function may return a smaller amount of data if a signal is caught, if the connection is terminated, if MSG_PEEK was specified, or if an error is pending for the socket."
If you are using recvfrom() on a stream-based socket such as SOCK_STREAM, message boundaries are ignored. In this case, data is returned to the user as soon as it becomes available, and no data is discarded.
In your case, instead of using sleep() you can set the MSG_WAITALL flag, which will block until the full amount of data requested can be returned. There is no relationship between the recvfrom() and sleep() functions.
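For illustration, here is a minimal sketch (assuming a UDP socket that has already sent its request, e.g. an SSDP M-SEARCH for UPnP discovery) that collects the replies with a receive timeout instead of sleep(); each recvfrom() call returns exactly one datagram, and SO_RCVTIMEO bounds the wait:

#include <stdio.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>

/* Collect UDP responses with a bounded wait instead of sleep().
 * Assumes sock is a UDP socket whose request has already been sent. */
void collect_responses(int sock)
{
    struct timeval tv = { .tv_sec = 2, .tv_usec = 0 };
    /* Make recvfrom() give up after 2 seconds instead of blocking forever. */
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    char buf[1500];
    struct sockaddr_storage peer;
    socklen_t peerlen;

    for (;;) {
        peerlen = sizeof(peer);
        /* One recvfrom() returns exactly one datagram (or times out). */
        ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                             (struct sockaddr *)&peer, &peerlen);
        if (n < 0)
            break; /* timeout or error: stop collecting responses */
        printf("got a %zd-byte response\n", n);
    }
}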

Count sent and received bytes in Go in an http.Handler ServeHTTP function?

How can sent and received bytes be counted from within a ServeHTTP function in Go?
The count needs to be relatively accurate. Skipping connection establishment is not ideal, but acceptable. But headers must be included.
It also needs to be fast; iterating over the request headers to estimate sizes is generally too slow.
The counting itself doesn't need to occur within ServeHTTP, as long as the count for a given connection can be made available to ServeHTTP.
This must also not break HTTPS or HTTP/2.
Things I've Tried
It's possible to get a rough, slow estimate of received bytes by iterating over the Request headers. This is far too slow, and the Go standard library removes and combines headers, so it's not accurate either.
I tried writing an intercepting Listener, which created an internal tls.Listen or net.Listen Listener, and whose Accept() function got a net.Conn from the internal Listener's Accept(), and then wrapped that in an intercepting net.Conn whose Read and Write functions call the real net.Conn and count their reads and writes. It's then possible to make those counts available to the ServeHTTP function via mutexed shared variables.
The problem is, the intercepting Conn breaks HTTP/2, because Go's internal libraries cast the net.Conn as a *tls.Conn (e.g. https://golang.org/src/net/http/server.go#L1730), and it doesn't appear possible in Go to wrap the object while still making that cast succeed (if it is, it would solve this problem).
Counting sent bytes can be done relatively accurately by counting what is written to the ResponseWriter. Counting received bytes in the HTTP body is also achievable, via Request.Body. The critical issue here appears to be quickly and accurately counting request header bytes. Though again, also counting connection establishment bytes would be ideal.
Is this possible? How?
I think it is possible, but I can't say I've done it. However, based on browsing the stdlib implementation of the HTTP server and TLS listener, I don't see why it shouldn't be possible; the key is wrapping the connection before TLS instead of after. This also gets you a more accurate count of bytes on the wire, rather than a count of decrypted bytes.
You've already got an intercepting Listener; you just need to insert it at the right spot. Rather than passing your Listener to http.Serve (or wherever you're inserting it), you want to pass it to tls.NewListener first, which wraps it in the TLS handler, and then pass the result, which will be a TLS listener (making Go's HTTP/2 support happy), into the HTTP server.
Of course, if you want a count of decrypted bytes rather than wire bytes, you may be SOL - wrapping the net.Conn just won't get you there. You'll likely have to do the best you can with counting headers & body.
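To make that concrete, here is a minimal sketch of the arrangement described above. The type names and certificate paths are illustrative assumptions; it keeps per-listener totals, and per-connection bookkeeping (for example via http.Server.ConnContext) is left out for brevity:

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"net/http"
	"sync/atomic"
)

// countingConn wraps a net.Conn and atomically counts wire bytes.
type countingConn struct {
	net.Conn
	read, written *int64
}

func (c *countingConn) Read(p []byte) (int, error) {
	n, err := c.Conn.Read(p)
	atomic.AddInt64(c.read, int64(n))
	return n, err
}

func (c *countingConn) Write(p []byte) (int, error) {
	n, err := c.Conn.Write(p)
	atomic.AddInt64(c.written, int64(n))
	return n, err
}

// countingListener hands out countingConns sharing its counters.
type countingListener struct {
	net.Listener
	read, written int64
}

func (l *countingListener) Accept() (net.Conn, error) {
	c, err := l.Listener.Accept()
	if err != nil {
		return nil, err
	}
	return &countingConn{Conn: c, read: &l.read, written: &l.written}, nil
}

func main() {
	inner, err := net.Listen("tcp", ":8443")
	if err != nil {
		panic(err)
	}
	counting := &countingListener{Listener: inner}

	cert, err := tls.LoadX509KeyPair("cert.pem", "key.pem") // hypothetical paths
	if err != nil {
		panic(err)
	}
	cfg := &tls.Config{
		Certificates: []tls.Certificate{cert},
		NextProtos:   []string{"h2", "http/1.1"}, // let ALPN negotiate HTTP/2
	}
	// Wrap in TLS *after* the counting listener, so the HTTP server
	// sees real *tls.Conn values and HTTP/2 keeps working.
	tlsLn := tls.NewListener(counting, cfg)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "wire bytes so far: %d in / %d out\n",
			atomic.LoadInt64(&counting.read),
			atomic.LoadInt64(&counting.written))
	})
	http.Serve(tlsLn, nil)
}

Because tls.NewListener produces the outermost listener, the connections handed to the HTTP server really are *tls.Conn values, so the stdlib's type assertion succeeds; the counts are of encrypted wire bytes, per the caveat below.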

Will the write() system call block further operation till read() is involved, or vice versa?

Written as part of a TCP/IP client-server:
Server:
write(nfds,data1,sizeof(data1));
usleep(1000);
write(nfds,data2,sizeof(data2));
Client:
read(fds,s,sizeof(s));
printf("%s",s);
read(fds,s,sizeof(s));
printf("%s",s);
Without usleep(1000) between the two calls to write(), the client prints data1 twice. Why is this?
Background:
I am writing a client-server program where the server has to send two consecutive pieces of information over the network (a socket) after acquiring them; nfds is the file descriptor we get from accept().
On the client side, we receive this information via read(); here fds is the file descriptor obtained via socket().
My issue is that when I am NOT using usleep(1000) between the write() calls, the client just prints the info represented by data1 twice, instead of printing data1 and then data2. When I put in the usleep() it's fine. Exactly WHY is this happening? Is write() blocking further operation till the buffer is read, or is read() blocking till info is written into the buffer? Or am I completely off the page?
You are making several false assumptions. Nothing in TCP guarantees that one send equals one receive. There is a lot of buffering at both ends, and there are deliberate delays in sending so as to coalesce packets (the Nagle algorithm). When you call read(), or recv() and friends, you need to store the result into a variable and examine it for each of the following cases:
-1: an error: examine/log/print errno, or strerror(), or call perror(), and in most cases close the socket and exit the reading loop.
0: end of stream; the owner has closed the connection; close the socket and exit the reading loop.
a positive value but less than you expected: keep reading and accumulate the data until you have everything you need.
a positive value that is more than you expected: process the data you expected, and save the rest for next time.
exactly what you expected: process the data, discard it all, and repeat. This is the easy case, and it is rare, but it is the only case you are currently programming for.
Don't add sleeps into networking code. It doesn't solve problems, it only delays them.
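For illustration, a minimal sketch of a reading loop that handles the cases above (read_exactly is a hypothetical helper; it assumes a blocking TCP socket and a known amount of expected data):

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Read exactly `want` bytes into buf, handling short reads.
 * Returns 0 on success, -1 on error or premature EOF. */
int read_exactly(int fd, char *buf, size_t want)
{
    size_t got = 0;
    while (got < want) {
        ssize_t n = read(fd, buf + got, want - got);
        if (n < 0) {
            if (errno == EINTR)
                continue;          /* interrupted: retry */
            perror("read");        /* real error: give up */
            return -1;
        }
        if (n == 0) {              /* peer closed the connection */
            fprintf(stderr, "EOF after %zu of %zu bytes\n", got, want);
            return -1;
        }
        got += (size_t)n;          /* short read: keep accumulating */
    }
    return 0;
}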

Handling messages over TCP

I'm trying to send and receive messages over TCP, prefixing each message with its size before it starts.
Say the first three bytes are the length, and the message follows:
As a small example:
005Hello003Hey002Hi
I'll be using this method for large messages, but the buffer size will be a constant integer, say 200 bytes. So there is a chance that a complete message may not be received in one read, e.g. instead of 005Hello I get 005He, or even a complete length may not be received, e.g. I get only 2 bytes of the length.
So, to get over this problem, I'll need to wait for the next read and append it to the incomplete message, etc.
My question is: am I the only one having these difficulties of appending partial messages to each other, reassembling lengths, etc. to make them complete, or is this really how we usually need to handle individual messages on TCP? Or is there a better way?
What you're seeing is 100% normal TCP behavior. It is completely expected that you'll loop receiving bytes until you get a "message" (whatever that means in your context). It's part of the work of going from a low-level TCP byte stream to a higher-level concept like "message".
And "usr" is right above. There are higher level abstractions that you may have available. If they're appropriate, use them to avoid reinventing the wheel.
So there is a chance that a complete message may not be received in one read, e.g. instead of 005Hello I get 005He, or even a complete length may not be received, e.g. I get only 2 bytes of the length.
Yes. TCP gives you at least one byte per read, that's all.
Or is this really how we usually need to handle individual messages on TCP? Or is there a better way?
Try using higher-level primitives. For example, BinaryReader allows you to read exactly N bytes (it will internally loop). StreamReader lets you forget this peculiarity of TCP as well.
Even better is using higher-level abstractions such as HTTP (a request/response pattern - very common), protobuf as a serialization format, or web services, which automate pretty much all transport-layer concerns.
Don't do TCP if you can avoid it.
So, to get over this problem, I'll need to wait for the next read and append it to the incomplete message, etc.
Yep, this is how things are done in socket-level code. For each socket you would allocate a buffer of at least the same size as the kernel socket receive buffer, so that you can read the entire kernel buffer in one read/recv/recvmsg call. Reading from the socket in a loop may starve other sockets in your application (this is why epoll was changed to be level-triggered by default: the original edge-triggered behavior forced application writers to read in a loop).
The first incomplete message is always kept in the beginning of the buffer, reading the socket continues at the next free byte in the buffer, so that it automatically appends to the incomplete message.
Once reading is done, normally a higher level callback is called with the pointers to all read data in the buffer. That callback should consume all complete messages in the buffer and return how many bytes it has consumed (may be 0 if there is only an incomplete message). The buffer management code should memmove the remaining unconsumed bytes (if any) to the beginning of the buffer. Alternatively, a ring-buffer can be used to avoid moving those unconsumed bytes, but in this case the higher level code should be able to cope with ring-buffer iterators, which it may be not ready to. Hence keeping the buffer linear may be the most convenient option.
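Here is a minimal sketch of that buffer-management scheme, using the question's 3-digit ASCII length prefix (the buffer size and function names are illustrative assumptions):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

#define BUF_SIZE 200

static char buf[BUF_SIZE];
static size_t used = 0;          /* bytes currently held in buf */

/* Consume complete "NNNpayload" messages; return bytes consumed. */
static size_t consume_messages(const char *b, size_t len)
{
    size_t off = 0;
    while (len - off >= 3) {                 /* need the full length prefix */
        char lenstr[4] = {0};
        memcpy(lenstr, b + off, 3);
        size_t msglen = (size_t)atoi(lenstr);
        if (len - off < 3 + msglen)          /* message still incomplete */
            break;
        printf("message: %.*s\n", (int)msglen, b + off + 3);
        off += 3 + msglen;
    }
    return off;
}

void on_readable(int fd)
{
    ssize_t n = recv(fd, buf + used, BUF_SIZE - used, 0);
    if (n <= 0)
        return;                              /* error/EOF handling omitted */
    used += (size_t)n;
    size_t consumed = consume_messages(buf, used);
    /* Shift any trailing incomplete message back to the front,
     * so the next recv() appends to it automatically. */
    memmove(buf, buf + consumed, used - consumed);
    used -= consumed;
}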

Is gen_tcp:send/2 blocking?

Is gen_tcp:send/2 asynchronous? Suppose I send some byte array using gen_tcp:send/2. Will the process continue working:
a) immediately;
b) once the data arrives in the target's internal buffer; or
c) when the target reads the data from that buffer?
Thank you in advance.
gen_tcp:send/2 is synchronous: the call returns only after the given packet has really been sent. Usually that happens immediately, but if the TCP window is full, gen_tcp:send/2 blocks until the data is sent. That means the call can theoretically block forever (for example, when the receiver does not read data from the socket on its side).
Fortunately, there are options to avoid this situation: the socket options {send_timeout, Integer} and {send_timeout_close, Boolean}, which can be set with inet:setopts/2. The first one specifies the longest time to wait for a send operation.
When the limit is exceeded, the send operation returns {error, timeout}. The default value of the option is infinity (which is why the infinite block is possible). Unfortunately, when {error, timeout} is returned, it is unknown how much of the data was actually sent, so in that case it is better to close the socket. If the second option, {send_timeout_close, Boolean}, is set to true, the socket is closed automatically when {error, timeout} occurs.
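A minimal sketch of how those options fit together (the module name, port, and payload are hypothetical):

-module(send_with_timeout).
-export([send/2]).

%% Bound gen_tcp:send/2 with send_timeout so it cannot block forever.
send(Host, Payload) ->
    {ok, Sock} = gen_tcp:connect(Host, 5000,
                                 [binary, {active, false}]),
    ok = inet:setopts(Sock, [{send_timeout, 5000},          % milliseconds
                             {send_timeout_close, true}]),
    case gen_tcp:send(Sock, Payload) of
        ok ->
            gen_tcp:close(Sock),
            ok;
        {error, timeout} ->
            %% It is unknown how much was sent; with send_timeout_close
            %% set, the socket is closed for us automatically.
            {error, timeout}
    end.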

What is the difference between isend and issend?

I need some clarification of my understanding of isend and issend as described in Send Types.
My understanding is that isend will return once the send buffer is free, i.e. when all the data has been released. Issend, on the other hand, returns only when it receives an ack from the receiver about getting/not getting the entire data. Is this all there is to it?
Both MPI_Isend() and MPI_Issend() return immediately, but in both cases you can't use the send buffer immediately.
Think of the difference between MPI_Send() and MPI_Ssend():
MPI_Send() can be buffered, or it can be synchronous if the buffer is too large to be buffered locally; in that case it waits to complete sending the data to the corresponding receive operation.
MPI_Ssend() is always synchronous: it always waits to complete sending the data to the corresponding receive operation.
The inner workings of the corresponding "I"-operations are very similar, except that neither of them blocks (both return immediately); the difference is only in when the MPI library signals to the user program that the send buffer can be reused (that is, when MPI_Wait() returns or MPI_Test() returns true - the so-called send-complete operation of the non-blocking send):
With MPI_Isend() this can happen either when the data has been copied locally into a buffer owned by the MPI library (if it is below the "synchronous threshold") or when the data has actually been moved to the sibling task; the send-complete operation can be local, in case the underlying send operation is buffered.
With MPI_Issend(), MPI never buffers the data locally, and the "buffer-free" condition is reported only after the data has actually been transferred (and probably ack'ed, at a low level); the send-complete operation is non-local.
The MPI standard document is quite pedantic on these aspects. See section 3.7 Nonblocking Communication.
Correct. Obviously, both of those will only be true once the request you get back from the call to MPI_ISEND or MPI_ISSEND has been completed via an MPI_WAIT* or MPI_TEST* function.
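For illustration, a minimal sketch of that pattern (the message size and tag are arbitrary): both calls return immediately, but the send buffer is only safe to reuse after MPI_Wait() completes the request. Run with at least 2 ranks, e.g. mpirun -np 2.

#include <mpi.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char buf[64];
    MPI_Request req;

    if (rank == 0) {
        strcpy(buf, "hello");
        /* MPI_Issend's request completes only once the matching receive
         * has started; MPI_Isend's may complete as soon as the library
         * has buffered the data locally. Either way, the call below
         * returns immediately. */
        MPI_Issend(buf, 64, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &req);
        /* buf must NOT be touched here, even though the call returned. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        /* Now buf is safe to reuse. */
    } else if (rank == 1) {
        MPI_Recv(buf, 64, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}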
