I wrote a simple web server that handles long-polling, which means the server doesn't send a full HTTP response to the client (web browser, curl, etc.), but only sends the HTTP headers and then holds the connection open.
I use command-line curl to make a request to the server; it prints the HTTP response headers as expected, and curl hangs, as it should. Then I press CTRL+C to terminate the curl process, but the server never detects this close (on_disconnect() is never called).
Some of the code:
void request_handler(struct evhttp_request *req, void *arg) {
    // Send only the response headers and one piece of chunked data.
    evhttp_send_reply_start(req, HTTP_OK, "OK");
    struct evbuffer *buf = evbuffer_new();
    evbuffer_add_printf(buf, "...\n");
    evhttp_send_reply_chunk(req, buf);
    evbuffer_free(buf);
    // Register the connection-close callback (sub is per-client state, not shown).
    evhttp_connection_set_closecb(req->evcon, on_disconnect, sub);
}

void on_disconnect(struct evhttp_connection *evcon, void *arg) {
    printf("disconnected\n");
}
evhttp_set_gencb(http, request_handler, NULL);
My question is: how can the server detect this kind of client close (a received TCP FIN)?
Bug report on GitHub
I think that this is a bug in libevent.
I have a similar issue with libevent 2.0.21, and here's what happens in my case: When evhttp_read_header() in libevent's http.c is done reading the HTTP headers from the client, it disables any future read events by calling bufferevent_disable(..., EV_READ). However, read events need to be enabled for the underlying bufferevent to report EOF. That's why the bufferevent never tells libevent's HTTP code when the client closes the connection.
I fixed the issue in my case by simply removing the call to bufferevent_disable() in evhttp_read_header(). I use "Connection: close" and nothing but GET requests, so the client never sends any data after the headers and this solution seems to work well for this very specific and simple use case. I am not sure, however, that this doesn't break other use cases, especially when you use persistent connections or things like "Expect: 100-continue".
You may want to take this to the libevent mailing list.
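The underlying mechanism can be illustrated without libevent: with plain sockets, a peer's clean close (the TCP FIN) only becomes visible when you actually read, at which point recv() returns 0. A minimal sketch using a Unix-domain socketpair:

```c
#include <sys/socket.h>
#include <unistd.h>

/* Returns 1 if EOF (the peer's FIN) was observed after the peer closed,
 * 0 otherwise. Illustrates why reads must stay enabled to detect a close. */
static int demo_fin_detection(void) {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) return 0;
    if (write(sv[0], "hi", 2) != 2) return 0;
    close(sv[0]);                                        /* the "client" disconnects */
    char buf[16];
    if (recv(sv[1], buf, sizeof buf, 0) != 2) return 0;  /* buffered data arrives first */
    int eof = (recv(sv[1], buf, sizeof buf, 0) == 0);    /* then EOF: the FIN */
    close(sv[1]);
    return eof;
}
```

This is why removing the bufferevent_disable(..., EV_READ) call makes the close callback fire: once reads are enabled again, the EOF can actually be delivered.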
Related
I have written a simple HTTP server based on Webdis. The problem is that when a client sends HTTP requests without reading the responses (i.e., it only sends and never receives), the server receives multiple HTTP requests in a single read, which makes the parsing module fail (maybe this is a bug in the parsing module). In case that's unclear, here is some of my code:
/* client... */
int fd = connect_server();
while (1) {
    send(fd, buf, sz, 0);
    continue; /* never receives a response... */
}

/* server... */
/* some event triggers the following code */
char buffer[4096]; /* a stack-based receive buffer, buggy */
ret = recv(fd, buffer, sizeof(buffer), 0);
If the client sends 10 HTTP requests (totalling less than 4096 bytes) while the server is sleeping (for debugging), the next recv() returns all 10 requests at once, but the parser cannot handle multiple requests, so they all fail. If the requests together exceed 4096 bytes, one of them is cut off and parsing still fails.
I have browsed the Nginx source code; maybe its callback design addresses this, but I haven't found its solution.
Is there any way to do the following: control recv() so that it receives only one request at a time? Or is there some TCP-level mechanism that makes it possible to receive exactly one send() per recv()?
This has nothing to do with synchrony or events, but with the streaming nature of such sockets. You will have to buffer previously received data, and you will have to implement enough of HTTP to recognize when an incoming request is complete; then you can remove it from the buffer and parse it.
TCP is by definition stream-based, so you can never be guaranteed that message borders will be respected; strictly speaking, message boundaries do not exist in TCP. However, using blocking sockets and making sure that Nagle's algorithm is disabled reduces the chance of each recv() containing more than one segment. Just for testing, you could also insert a sleep after each send(). You could also play around with TCP_CORK.
However, instead of hacking something together, I would recommend you implement receiving "properly", as you will have to do it at some point: after each recv() call, check whether the buffer contains the end of an HTTP request (\r\n\r\n), and only then process it.
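The "check for the end of a request" step can be sketched as a small framing function. This is a sketch under a simplifying assumption: it only handles header-only requests (e.g., GETs without bodies); requests carrying a Content-Length body need the body consumed as well before the next request begins.

```c
#include <string.h>
#include <stddef.h>

/* Scan an accumulation buffer for one complete HTTP request head.
 * Returns the length of the first complete request (including the
 * terminating blank line), or 0 if more data must be recv()'d. */
static size_t first_request_len(const char *buf, size_t len) {
    for (size_t i = 0; i + 3 < len; i++) {
        if (memcmp(buf + i, "\r\n\r\n", 4) == 0)
            return i + 4;
    }
    return 0;
}
```

The server loop then appends each recv() result to the buffer, calls first_request_len(), parses and removes that many bytes, and repeats until the function returns 0. This handles both "ten requests in one recv()" and "one request split across several recv() calls".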
To support a protocol (the Icecast source protocol) based on HTTP, I need to be able to keep using a socket from Node.js's http.Server once the HTTP request is finished. A sample exchange looks like this:
Client->Server: GET / HTTP/1.0
Client->Server: Some-Headers:header_value
Client->Server:
Server->Client: HTTP/1.0 200 OK
Server->Client:
Client->Server: <insert stream of binary data here>
This is to support the source of an internet radio stream, the source of the stream data being the client in this case.
Is there any way I can use Node.js's built in http.Server? I have tried this:
this.server = http.createServer(function (req, res) {
    console.log('connection!');
    res.writeHead(200, {test: 'woot!'});
    res.write('test');
    res.write('test2');
    req.connection.on('data', function (data) {
        console.log(data);
    });
}).listen(1337, '127.0.0.1');
If I telnet to port 1337 and make a request, I can see the first couple of characters of what I type in the server console window, but then the server closes the connection. Ideally, I'd keep that socket open indefinitely and take the HTTP part out of the loop once the initial request is made.
Is this possible with the stock http.Server class?
Since the client reports HTTP/1.0 as the protocol version, the server is probably closing the connection. If the client is something you control, you might try setting the keep-alive header (Connection: Keep-Alive is the right one, I think).
My solution to this problem was to reinvent the wheel and write my own HTTP-ish server. Not perfect, but it works. Hopefully the innards of some of these stock Node.js classes will be exposed some day.
I was in a similar situation, here's how I got it to work:
http.createServer(function (req, res) {
    // Prepare the response headers
    res.writeHead(200);
    // Flush the headers to the socket (undocumented internal API)
    res._send('');
    // Inform the http.ServerResponse instance that we've sent the headers
    res._headerSent = true;
}).listen(1234);
The socket will now remain open, since http.ServerResponse.end() has not been called, but the headers have been flushed.
If you want to send response data (not that you'll need to for an Icecast source connection), simply:
res.write(buffer_or_string);
res._send('');
When closing the connection just call res.end().
I have successfully streamed MP3 data using this method, but haven't tested it under stress.
I'm writing a small networking program in C++. Among other things, it has to download Twitter profile pictures. I have a list (std::vector) of URLs, and my plan was a for-loop that sends GET messages through a socket and saves each picture to its own PNG file. When I send the very first message, I receive the response segments and save the PNG data just fine. But on the very next iteration, the same message sent through the same socket makes recv() return 0 bytes. I worked around the problem by moving the socket-creation code into the loop body, but I'm confused about the socket concepts: does the socket really have to be closed and recreated to send the next message to the same server (to get the next image), or is it possible to receive several HTTP response messages through the same socket?
Thanks in advance.
UPD: Here is the code with the loop where I create a socket.
// Get links from xml.
...
// Load images in a loop.
int i = 0;
for (i = 0; i < imageLinks.size(); i++)
{
    // A new socket is returned from serverConnect. Why do we need a new one each iteration?
    string srvAddr = "207.123.60.126";
    int sockImg = serverConnect(srvAddr);
    // Create a message.
    ...
    string message = "GET " + relativePart;
    message += " HTTP/1.1\r\n";
    message += "Host: " + hostPart + "\r\n";
    message += "\r\n";
    // Send a message.
    BufferArray tempImgBuffer = sendMessage(sockImg, message, false);
    fstream pFile;
    string name;
    // Form the name.
    ...
    pFile.open(name.c_str(), ios::app | ios::out | ios::in | ios::binary);
    // Write the file contents.
    ...
    pFile.close();
    // Close the socket.
    close(sockImg);
}
The other side is closing the connection. That's how HTTP/1.0 works. You can:
Make a different connection for each HTTP GET
Use HTTP/1.0 with the unofficial Connection: Keep-Alive
Use HTTP/1.1. In HTTP 1.1 all connections are considered persistent unless declared otherwise.
Obligatory xkcd link Server Attention Span
Wiki HTTP:
"The original version of HTTP (HTTP/1.0) was revised in HTTP/1.1. HTTP/1.0 uses a separate connection to the same server for every request-response transaction, while HTTP/1.1 can reuse a connection multiple times."
HTTP in its original form (HTTP 1.0) is indeed a "one request per connection" protocol. Once you get the response back, the other side has probably closed the connection. There were unofficial mechanisms added to some implementations to support multiple requests per connection, but they were not standardized.
HTTP 1.1 turns this around. All connections are by default "persistent".
To use this, you need to add "HTTP/1.1" to the end of your request line. Instead of GET http://someurl/, do GET http://someurl/ HTTP/1.1. You'll also need to make sure you provide the "Host:" header when you do this.
Note well, however, that even some otherwise-compliant HTTP servers may not support persistent connections. Note also that the connection may in fact be dropped after very little delay, a certain number of requests, or just randomly. You must be prepared for this, and ready to re-connect and resume issuing your requests where you left off.
See also the HTTP 1.1 RFC.
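To make the "request line plus Host header" advice concrete, here is a sketch of building such a request; build_get is a hypothetical helper, not part of any library. Connection: keep-alive is already the HTTP/1.1 default, but stating it explicitly does no harm.

```c
#include <stdio.h>

/* Build one HTTP/1.1 GET suitable for reuse on a persistent connection.
 * HTTP/1.1 requires the Host header. Returns the request length,
 * or -1 if the output buffer is too small. */
static int build_get(char *out, size_t cap, const char *host, const char *path) {
    int n = snprintf(out, cap,
                     "GET %s HTTP/1.1\r\n"
                     "Host: %s\r\n"
                     "Connection: keep-alive\r\n"
                     "\r\n",
                     path, host);
    return (n > 0 && (size_t)n < cap) ? n : -1;
}
```

The same socket can then carry several such requests in sequence, provided the client parses each response's framing (Content-Length or chunked encoding) to know where one response ends and the next begins, and is prepared to reconnect if the server drops the connection anyway.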
I have a web application in which, after making an HTTP request to the server, the client quits (or the network connection is broken) before the response has been completely received.
In this scenario the server side of the application needs to do some cleanup work. Is there a way built into the HTTP protocol to detect this condition? How does the server know whether the client is still waiting for the response or has quit?
Thanks
Vijay Kumar
No, there is nothing built in to the protocol to do this (after all, you can't tell whether the response has been received by the client itself yet, or just a downstream proxy).
Just have your client make a second request to acknowledge that it has received and stored the original response. If you don't see a timely acknowledgement, run the cleanup.
However, make sure that you understand the implications of the Two Generals' Problem.
You might have a network problem... Usually, when you send an HTTP request to the server, you first send the headers and then the content of the POST (if it is a POST method). Likewise, the server responds with the headers and the document body. The first line of the header is the status. Status 200 means success; if you get that, there should be no problem getting the rest of the document. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec6.html for details on the HTTP response status codes.
LE:
Sorry, I misread your question. Basically, you don't get a trigger when the user disconnects. If you use OOP, you could use a class destructor to clean up whatever you need to clean.
I have an HTTP server that returns large bodies in response to POST requests (it is a SOAP server). These bodies are "streamed" via chunking. If I encounter an error midway through streaming the response how can I report that error to the client and still keep the connection open? The implementation uses a proprietary HTTP/SOAP stack so I am interested in answers at the HTTP protocol level.
Once the server has sent the status line (the very first line of the response) to the client, you can't change the status code of the response anymore. Many servers delay sending the response by buffering it internally until the buffer is full. While the buffer is filling up, you can still change your mind about the response.
If your client has access to the response headers, you could use the fact that chunked encoding allows the server to add a trailer with headers after the chunked-encoded body. So, your server, having encountered the error, could gracefully stop sending the body, and then send a trailer that sets some header to some value. Your client would then interpret the presence of this header as a sign that an error happened.
Also keep in mind that chunked responses can contain "footers", which are just like HTTP headers. After failing, you can send a footer such as:
X-RealStatus: 500 Some bad stuff happened
Or if you succeed:
X-RealStatus: 200 OK
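On the wire, a chunked response carrying such a trailer might look like this (every line ends in CRLF; X-RealStatus is an application-defined header, not part of the HTTP standard, and the Trailer header announces it up front):

```
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Trailer: X-RealStatus

9
<partial>
0
X-RealStatus: 500 Some bad stuff happened

```

Here "9" is the size of the chunk payload in hex, "0" introduces the zero-length final chunk, and the trailer headers plus a final blank line complete the response without closing the connection.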
You can change the status code as long as response.isCommitted() returns false.
(That is for HttpServletResponse in Java; I'm sure an equivalent exists in other languages.)