Receiving an image through Winsock - HTTP

I have a proxy server running on my local machine, used to cache images while surfing. I set my browser's proxy to 127.0.0.1, receive the HTTP requests, take the data and send it back to the browser. It works fine for everything except large images: when I receive the image data, the browser only displays half the image (e.g. the top half of the Google logo). Here's my code:
char buffer[1024] = "";
string ret("");
int valeurRetour = 0;
int longueur = 0;
while (true)
{
    valeurRetour = recv(socketClient_, buffer, sizeof(buffer), 0);
    if (valeurRetour <= 0) break;
    ret.append(buffer, valeurRetour);
    longueur += valeurRetour;
}
closesocket(socketClient_);
valeurRetour = send(socketServeur_, ret.c_str(), longueur, 0);
socketClient_ is non-blocking. Any idea how to fix this problem?

You're not making fine enough distinctions among the possible return values of recv.
There are two levels here.
The first is, you're lumping 0 and -1 together. 0 means the remote peer closed its sending half of the connection, so your code does the right thing here, closing its socket down, too. -1 means something happened besides data being received. It could be a permanent error, a temporary error, or just a notification from the stack that something happened besides data being received. Your code lumps all such possibilities together, and on top of that treats them the same as when the remote peer closes the connection.
The second level is that not all reasons for getting -1 from recv are "errors" in the sense that the socket is no longer useful. I think if you start checking for -1 and then calling WSAGetLastError to find out why you got -1, you'll get WSAEWOULDBLOCK, which is normal since you have a non-blocking socket. It means the recv call cannot return data because it would have to block your program's execution thread to do so, and you told Winsock you wanted non-blocking calls.
A naive fix is to not break out of the loop on WSAEWOULDBLOCK, but that just means you burn CPU time calling recv again and again until it returns data. That goes against the whole point of non-blocking sockets, which is that they let your program do other things while the network is busy. You're supposed to use functions like select, WSAAsyncSelect or WSAEventSelect to be notified when a call to the API function is likely to succeed again. Until then, you don't call it.
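For illustration, here's a minimal sketch of that pattern, assuming socketClient_ is the non-blocking socket from the question (receiveAll is just a made-up helper name):

#include <winsock2.h>
#include <string>

// Read everything the peer sends, blocking in select() rather than
// spinning on recv() when no data is ready yet.
std::string receiveAll(SOCKET socketClient_)
{
    std::string ret;
    char buffer[1024];
    for (;;)
    {
        int n = recv(socketClient_, buffer, sizeof(buffer), 0);
        if (n > 0)                  // got data: accumulate it
        {
            ret.append(buffer, n);
        }
        else if (n == 0)            // peer closed its sending half: done
        {
            break;
        }
        else if (WSAGetLastError() == WSAEWOULDBLOCK)
        {
            // No data right now; wait until the socket is readable.
            fd_set readable;
            FD_ZERO(&readable);
            FD_SET(socketClient_, &readable);
            if (select(0, &readable, NULL, NULL, NULL) <= 0)
                break;              // select failed: treat as fatal
        }
        else
        {
            break;                  // a real error: give up on this socket
        }
    }
    return ret;
}

A single-connection proxy can get away with blocking in select() like this; with many sockets you'd put them all in the fd_set (or use WSAEventSelect) so one slow connection doesn't stall the rest.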
You might want to visit The Winsock Programmer's FAQ. (Disclaimer: I'm its maintainer.)

Have you analyzed the transaction at the HTTP level, i.e. checked the headers?
Are you accounting for things like chunked transfers?
I don't have a definite answer, in part because of the lack of detail given here.
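For example, a chunked response carries no Content-Length; the body arrives as hex-length-prefixed chunks and ends with a zero-length chunk, roughly like this (illustrative bytes only):

HTTP/1.1 200 OK
Content-Type: image/png
Transfer-Encoding: chunked

1000
...first 0x1000 bytes of image data...
1000
...next 0x1000 bytes...
0

A proxy that decides the image is complete by counting on Content-Length, or that stops reading at the first pause in the data, will hand the browser only part of the body.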

Related

How to efficiently decode gobs and wait for more to arrive via tcp connection

I'd like to have a TCP connection for a gaming application. It's important to be time-efficient: I want to receive many objects quickly. It's also important to be CPU-efficient, because of the load.
So far, I can make sure handleConnection is called every time a connection is dialed, using Go's net library. However, once the connection is created, I have to poll (check over and over to see whether new data is ready on the connection). This seems inefficient: I don't want that check needlessly sucking up CPU when no new data is ready.
I was looking for something such as the following two options but didn't find what I was looking for.
(1) Do a read operation that somehow blocks (without sucking CPU) and then unblocks when new stuff is ready on the connection stream. I could not find that.
(2) Do an async approach where a function is called when new data arrives on the connection stream (not just when a new connection is dialed). I could not find that.
I don't want to put any sleep calls in here, because that would increase the latency of responding to single messages.
I also considered dialing out for every single message, but I'm not sure if that's efficient or not.
So I came up with the code below, but it's still doing a whole lot of checking for new data with the Decode(p) call, which does not seem optimal.
How can I do this more efficiently?
func handleConnection(conn net.Conn) {
    defer conn.Close()
    dec := gob.NewDecoder(conn)
    p := &P{}
    for {
        // Decode blocks until a complete value arrives or the connection fails.
        if err := dec.Decode(p); err != nil {
            // EOF or a real error: stop reading so we don't spin forever.
            return
        }
        fmt.Printf("Received: %+v\n", p)
    }
}
You say:
So I came up with the code below, but it's still doing a whole lot of checking for new data with the Decode(p) call.
Why do you think that? The gob decoder will issue a Read to the conn and wait for it to return data before figuring out what it is and decoding it. This is a blocking operation, and will be handled asynchronously by the runtime behind the scenes. The goroutine will sleep until the appropriate io signal comes in. You should not have to do anything fancy to make that more performant.
You can trace this yourself in the code for decoder.Decode.
I think your code will work just fine. CPU will be idle until it receives more data.
Go is not Node. Every API is "blocking" for the most part, but that is not as much of a problem as it is on other platforms. The runtime manages goroutines very efficiently and delivers the appropriate signals to sleeping goroutines as needed.

Will the write() system call block further operation till read() is invoked, or vice versa?

Written as part of a TCP/IP client-server:
Server:
write(nfds,data1,sizeof(data1));
usleep(1000);
write(nfds,data2,sizeof(data2));
Client:
read(fds,s,sizeof(s));
printf("%s",s);
read(fds,s,sizeof(s));
printf("%s",s);
Without usleep(1000) between the two calls to write(), the client prints data1 twice. Why is this?
Background:
I am doing a Client-Server program where the server has to send two consecutive pieces of information after their acquisition, via the network (socket); nfds is the file descriptor we get from accept().
On the client side, we receive this information via read; here fds is the file descriptor obtained via socket().
My issue is that when I am NOT using the usleep(1000) between the write() functions, the client just prints the info represented by data1 twice, instead of printing data1 and then data2. When I put in the usleep() it's fine. Exactly WHY is this happening? Is write() blocking the operation till the buffer is read or is read() blocking the operation till info is written into the buffer? Or am I completely off the page?
You are making several false assumptions. There is nothing in TCP that guarantees that one send equals one receive. There is a lot of buffering, at both ends, and there are deliberate delays in sending so as to coalesce packets (the Nagle algorithm). When you call read(), or recv() and friends, you need to store the result into a variable and examine it for each of the following cases (a sketch of such a loop follows the list):
-1: an error: examine/log/print errno, or strerror(), or call perror(), and in most cases close the socket and exit the reading loop.
0: end of stream; the peer has closed the connection; close the socket and exit the reading loop.
A positive value less than you expected: keep reading and accumulate the data until you have everything you need.
A positive value that is more than you expected: process the data you expected, and save the rest for next time.
Exactly what you expected: process the data, discard it all, and repeat. This is the easy case, and it is rare, but it is the only case you are currently programming for.
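Here is a minimal sketch of such a loop, where read_exact is a made-up helper that keeps reading until it has the number of bytes the caller asked for:

#include <unistd.h>
#include <errno.h>
#include <stdio.h>

// Keep calling read() until 'need' bytes arrive, handling every outcome.
ssize_t read_exact(int fd, char *buf, size_t need)
{
    size_t got = 0;
    while (got < need)
    {
        ssize_t n = read(fd, buf + got, need - got);
        if (n < 0)
        {
            if (errno == EINTR)
                continue;          // interrupted: just retry
            perror("read");        // real error: caller should close
            return -1;
        }
        if (n == 0)                // end of stream: peer closed
            break;
        got += (size_t)n;          // short read: accumulate and keep going
    }
    return (ssize_t)got;
}

With fixed-size messages the client becomes read_exact(fds, s, sizeof(s)) per message; for variable-size messages you'd first read a length prefix, then read exactly that many bytes.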
Don't add sleeps into networking code. It doesn't solve problems; it only delays them.

TCP connection slows down after 10,000 packets

I am using the Qt implementation of the TCP stack to control a robot. We exchange short messages (<200 bytes) and have a round-trip time of about 8 ms. After maybe 10,000 packets in each direction, the connection slows down and I have to wait about 1 s for the answer to my packet. If I restart my program and reconnect, I get the 8 ms RTT again.
To me it sounds like some kind of buffer is filling up, but I haven't worked with TCP much, so maybe someone could give me a hint.
The problem is in the code that you're not showing. Likely the slot that gets executed on readyRead() is not emptying the buffer.
Case 1: it is acceptable for the buffer not to be completely empty, say when you're reading complete lines/packets.
Case 2: it is not acceptable for the buffer size to be constantly growing.
At the end of your reading slot, check whether bytesAvailable() is non-zero. It can only be non-zero in case 1. Even then, you should be able to place an upper bound on it, say some small multiple of the packet size or maximum line length. If the bound is ever exceeded, you've got a bug in your code.
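A sketch of what that reading slot might look like, assuming newline-delimited packets (onReadyRead, processPacket, m_socket and MAX_LINE are made-up names):

void Robot::onReadyRead()
{
    // Drain every complete packet that has arrived so far.
    while (m_socket->canReadLine())
        processPacket(m_socket->readLine());

    // Whatever remains is at most one partial packet. If this grows
    // without bound across calls, the reading code has a bug.
    Q_ASSERT(m_socket->bytesAvailable() < MAX_LINE);
}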
It is just a wild guess, but a common catch when using Qt sockets is that you need to delete the socket object yourself (for example with deleteLater()) on error and disconnection.
Example code:
connect(socket, SIGNAL(disconnected()), socket, SLOT(deleteLater()));
The event loop will then remove the socket the next time it gets the chance.
QTcpSocket and the other QAbstractSocket classes don't delete themselves on close() or on leaving scope (because then the signals/slots wouldn't work).

Pcap Dropping Packets

// Open the ethernet adapter
handle = pcap_open_live("eth0", 65356, 1, 0, errbuf);

// Make sure it opens correctly
if (handle == NULL)
{
    printf("Couldn't open device : %s\n", errbuf);
    exit(1);
}

// Compile filter
if (pcap_compile(handle, &bpf, "udp", 0, PCAP_NETMASK_UNKNOWN))
{
    printf("pcap_compile(): %s\n", pcap_geterr(handle));
    exit(1);
}

// Set filter
if (pcap_setfilter(handle, &bpf) < 0)
{
    printf("pcap_setfilter(): %s\n", pcap_geterr(handle));
    exit(1);
}

// Set signals
signal(SIGINT, bailout);
signal(SIGTERM, bailout);
signal(SIGQUIT, bailout);

// Setup callback to process the packet
pcap_loop(handle, -1, process_packet, NULL);
The process_packet function gets rid of the header and does a bit of processing on the data. However, when it takes too long, I think packets are being dropped.
How can I use pcap to listen for UDP packets and do some processing on the data without losing packets?
Well, you don't have infinite storage so, if you continuously run slower than the packets arrive, you will lose data at some point.
Of course, if you have a decent amount of storage and, on average, you don't run behind (for example, you may run slow during bursts but there are quiet times where you can catch up), that would alleviate the problem.
Some network sniffers do this, simply writing the raw data to a file for later analysis.
It's a trick you too can use though not necessarily with a file. It's possible to use a massive in-memory structure like a circular buffer where one thread (the capture thread) writes raw data and another thread (analysis) reads and interprets. And, because each thread only handles one end of the buffer, you can even architect it without locks (or with very short locks).
That also makes it easy to detect if you've run out of buffer and raise an error of some sort rather than just losing data at your application level.
Of course, this all hinges on your "simple and quick as possible" capture thread being able to keep up with the traffic.
Clarifying what I mean, modify your process_packet function so that it does nothing but write the raw packet to a massive circular buffer (detecting overflow and acting accordingly). That should make it as fast as possible, avoiding pcap itself dropping packets.
Then, have an analysis thread that takes stuff off the queue and does the work formerly done in process_packet (the "gets rid of header and does a bit of processing on the data" bit).
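A sketch of that split, assuming a single capture thread and a single analysis thread (the ring size and names are illustrative, and a real version would signal the reader instead of spinning):

#include <pcap.h>
#include <atomic>
#include <cstring>

#define RING_SIZE 1024

struct Slot
{
    struct pcap_pkthdr hdr;
    unsigned char data[65536];      // large enough for the snapshot length
};

static Slot ring[RING_SIZE];
static std::atomic<size_t> head(0); // written only by the capture thread
static std::atomic<size_t> tail(0); // written only by the analysis thread

// pcap callback: copy the raw packet and return as fast as possible.
void process_packet(unsigned char *, const struct pcap_pkthdr *h,
                    const unsigned char *bytes)
{
    size_t next = (head + 1) % RING_SIZE;
    if (next == tail)
    {
        // Ring full: we detected the overflow instead of silently losing
        // data inside pcap; count it or log it as appropriate.
        return;
    }
    ring[head].hdr = *h;
    memcpy(ring[head].data, bytes, h->caplen);
    head = next;
}

// Analysis thread: the work formerly done in process_packet.
void analysis_loop()
{
    while (true)
    {
        if (tail == head)
            continue;               // nothing queued (a real version would block)
        Slot &s = ring[tail];
        // ... strip the header and process s.data / s.hdr.caplen here ...
        tail = (tail + 1) % RING_SIZE;
    }
}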
Another possible solution is to bump up the pcap internal buffer size. As per the man page:
Packets that arrive for a capture are stored in a buffer, so that they do not have to be read by the application as soon as they arrive.
On some platforms, the buffer's size can be set; a size that's too small could mean that, if too many packets are being captured and the snapshot length doesn't limit the amount of data that's buffered, packets could be dropped if the buffer fills up before the application can read packets from it, while a size that's too large could use more non-pageable operating system memory than is necessary to prevent packets from being dropped.
The buffer size is set with pcap_set_buffer_size().
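Note that pcap_set_buffer_size() only works on a handle that hasn't been activated yet, so the pcap_open_live() call would be replaced with the pcap_create()/pcap_activate() sequence. A sketch (the 16 MiB figure is arbitrary):

#include <pcap.h>

pcap_t *open_with_buffer(const char *dev, char *errbuf)
{
    pcap_t *h = pcap_create(dev, errbuf);
    if (h == NULL)
        return NULL;
    pcap_set_snaplen(h, 65535);
    pcap_set_promisc(h, 1);
    pcap_set_timeout(h, 1000);                  // read timeout in ms
    pcap_set_buffer_size(h, 16 * 1024 * 1024);  // 16 MiB kernel buffer
    if (pcap_activate(h) < 0)                   // errors are negative
    {
        pcap_close(h);
        return NULL;
    }
    return h;
}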
The only other possibility that springs to mind is to ensure that the processing you do on each packet is as optimised as it can be.
The splitting of processing into collection and analysis should alleviate a problem of not keeping up but it still relies on quiet time to catch up. If your network traffic is consistently more than your analysis can handle, all you're doing is delaying the problem. Optimising the analysis may be the only way to guarantee you'll never lose data.

Non-blocking TCP write(2) succeeds but the request is not sent

I am seeing that a small set of messages written to a non-blocking TCP socket using write(2) are not seen on the source interface and are also not received by the destination.
What could be the problem? Is there any way that the application can detect this and retry?
while (len > 0) {
    res = write(c->sock_fd, tcp_buf, len);
    if (res < 0) {
        switch (errno) {
        case EAGAIN:
        case EINTR:
            <handle case>
            break;
        default:
            <close connection>
        }
    }
    else {
        tcp_buf += res;  /* advance past what was written (a short write is possible) */
        len -= res;
    }
}
A non-blocking write(2) means that, whatever the difficulties, the call will return immediately. The proper way to detect what happened is to inspect the return value of the call.
If it returns -1, check errno. A value of EAGAIN means the write did not happen and you have to try it again.
It could also return a short write (i.e. a value less than the size of the buffer you passed it), in which case you'll probably want to retry the missing part.
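Putting both cases together, a sketch of a retry loop for a non-blocking socket (write_all is a made-up name; it waits in select() on EAGAIN instead of spinning):

#include <sys/select.h>
#include <unistd.h>
#include <errno.h>

int write_all(int fd, const char *buf, size_t len)
{
    while (len > 0)
    {
        ssize_t n = write(fd, buf, len);
        if (n >= 0)
        {
            buf += n;               // skip what the kernel accepted
            len -= (size_t)n;
        }
        else if (errno == EAGAIN || errno == EWOULDBLOCK)
        {
            // Send buffer full: wait until the socket is writable again.
            fd_set w;
            FD_ZERO(&w);
            FD_SET(fd, &w);
            if (select(fd + 1, NULL, &w, NULL, NULL) < 0 && errno != EINTR)
                return -1;
        }
        else if (errno != EINTR)
        {
            return -1;              // real error: caller closes the connection
        }
    }
    return 0;
}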
If this is happening on short-lived sockets, also read The ultimate SO_LINGER page, or: why is my tcp not reliable. It explains a particular problem regarding the closing part of a transmission:
when we naively use TCP to just send the data we need to transmit, it often fails to do what we want - with the final kilobytes or sometimes megabytes of data transmitted never arriving.
and the conclusion is:
The best advice is to send length information, and to have the remote program actively acknowledge that all data was received.
It also describes a hack for Linux.
write() returns the number of bytes written; this might be less than the number of bytes you passed in, and even 0! Make sure you check this and retransmit whatever was dropped (due to not enough buffer space on the NIC, or whatever).
You want to read up on TCP_NODELAY option and the nature of the TCP send buffer.
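If the messages are small and must go on the wire immediately rather than being coalesced by Nagle's algorithm, TCP_NODELAY can be set like this (sketch, reusing the question's c->sock_fd):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <stdio.h>

int one = 1;
if (setsockopt(c->sock_fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0)
    perror("setsockopt(TCP_NODELAY)");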
