Keeping socket open after HTTP request/response to Node.js server - http

To support a protocol (Icecast Source Protocol) based on HTTP, I need to be able to use a socket from Node.js's http.Server once the HTTP request is finished. A sample request looks like this:
Client->Server: GET / HTTP/1.0
Client->Server: Some-Headers:header_value
Client->Server:
Server->Client: HTTP/1.0 200 OK
Server->Client:
Client->Server: <insert stream of binary data here>
This is to support the source of an internet radio stream, the source of the stream data being the client in this case.
Is there any way I can use Node.js's built in http.Server? I have tried this:
this.server = http.createServer(function (req, res) {
    console.log('connection!');
    res.writeHead(200, {test: 'woot!'});
    res.write('test');
    res.write('test2');
    req.connection.on('data', function (data) {
        console.log(data);
    });
}).listen(1337, '127.0.0.1');
If I telnet to port 1337 and make a request, I can see the first couple of characters of what I type in the server console window, but then the server closes the connection. Ideally, I'd keep that socket open indefinitely and take the HTTP part out of the loop once the initial request is made.
Is this possible with the stock http.Server class?

Since the client is reporting HTTP/1.0 as the protocol version, the server is probably closing the connection. If the client is something you have control over, you might want to try setting the keep-alive header (Connection: Keep-Alive is the right one, I think).
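For example, the initial request from the source client would then look something like this (headers other than Connection are just placeholders):
Client->Server: GET / HTTP/1.0
Client->Server: Connection: Keep-Alive
Client->Server: Some-Headers: header_value
Client->Server: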

My solution to this problem was to reinvent the wheel and write my own HTTP-ish server. Not perfect, but it works. Hopefully the innards of some of these stock Node.js classes will be exposed some day.
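A minimal sketch of that kind of HTTP-ish server, using Node's net module (the port and the onSourceData handler are illustrative, not the original code):
const net = require('net');

// Speak just enough HTTP to accept the source connection, then treat the
// socket as a raw stream of audio data.
const server = net.createServer(function (socket) {
  let headerBuf = '';
  let headersDone = false;
  socket.on('data', function (chunk) {
    if (headersDone) {
      onSourceData(chunk); // raw stream data from the source client
      return;
    }
    headerBuf += chunk.toString('binary');
    const end = headerBuf.indexOf('\r\n\r\n');
    if (end === -1) return; // still reading the request headers
    // Headers complete: acknowledge the source, then stop speaking HTTP.
    socket.write('HTTP/1.0 200 OK\r\n\r\n');
    headersDone = true;
    const leftover = headerBuf.slice(end + 4);
    if (leftover.length) onSourceData(Buffer.from(leftover, 'binary'));
  });
});

function onSourceData(chunk) {
  console.log('got ' + chunk.length + ' bytes of stream data');
}

server.listen(1337, '127.0.0.1');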

I was in a similar situation; here's how I got it to work:
http.createServer(function (req, res) {
    // Prepare the response headers
    res.writeHead(200);
    // Flush the headers to the socket
    res._send('');
    // Inform the http.ServerResponse instance that we've sent the headers
    res._headerSent = true;
}).listen(1234);
The socket will now remain open, since http.ServerResponse.end() has not been called, but the headers have been flushed.
If you want to send response data (not that you'll need to for an Icecast source connection), simply:
res.write(buffer_or_string);
res._send('');
When closing the connection, just call res.end().
I have successfully streamed MP3 data using this method, but haven't tested it under stress.

Related

HTTP/2 Push promise behavior

I am working on writing a resilient client for HTTP/2.
I am wondering what the behavior of the client should be if the server sent a PUSH_PROMISE and then failed to send the push response related to that PUSH_PROMISE.
I went through the HTTP/2 spec's section on push responses, but it does not state what should be done in such scenarios.
Should we send the original request again if the push response is not received? If the original request was sent successfully, sending it again may cause issues, won't it?
Or should we ignore the PUSH_PROMISE and continue? In that case, say the server promised to send a file and did not send it, what will happen?
Is there a defined way to resolve this ?
The client is certainly free to request the same resource again. Consider, for example, that the server has no way to know if the client is making a simultaneous request for the same resource when the server sends the PUSH_PROMISE.
Client                       Server
------                       ------
HEADERS[sid:1, GET /]
                             HEADERS[sid:1, /], DATA[sid:1], PUSH_PROMISE[sid:2]
HEADERS[sid:3, GET /css]     HEADERS[sid:2, /css], DATA[sid:2]
                             HEADERS[sid:3, /css], DATA[sid:3]
The standard way for the client to then cancel the push is to reset the promised stream with a RST_STREAM frame (typically with the CANCEL error code).
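As a rough illustration, here is how a client built on Node's http2 module could cancel a promise that never materializes and fall back to a normal request (the URL, the 5-second inactivity timeout, and the fallback choice are assumptions, not something the spec mandates):
const http2 = require('http2');

const client = http2.connect('https://localhost:8443'); // assumed endpoint

client.on('stream', (pushedStream, requestHeaders) => {
  const path = requestHeaders[':path'];
  // If nothing arrives on the promised stream for 5 seconds, cancel the push
  // and fetch the resource with an ordinary request instead.
  pushedStream.setTimeout(5000, () => {
    pushedStream.close(http2.constants.NGHTTP2_CANCEL); // sends RST_STREAM(CANCEL)
    const req = client.request({ ':path': path });
    req.on('response', () => { /* consume the normal response */ });
    req.end();
  });
  // Response headers for a pushed stream arrive via the 'push' event.
  pushedStream.on('push', (responseHeaders) => {
    pushedStream.on('data', (chunk) => { /* consume the pushed data */ });
  });
});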
PUSH_PROMISE - All server push streams are initiated via PUSH_PROMISE frames, which signal the server's intent to push the described resources to the client and need to be delivered ahead of the response data that references the pushed resources. The simplest strategy to satisfy this requirement is to send all PUSH_PROMISE frames, which contain just the HTTP headers of the promised resource, ahead of the parent's response.
PUSH_PROMISE frames are how HTTP/2 server push is applied: the server attaches the PUSH_PROMISE to the response part of a normal, browser-initiated stream. Push is driven from a response object that still has the live HTTP connection. For example, in ASP.NET, code running in an application's Page_Load method (which still has the HTTP connection) can call Response.PushPromise to push all the relevant scripts, styles, and images without the client having to request each one explicitly.
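For comparison, a minimal sketch of server push in Node's http2 module (not the ASP.NET API described above; the paths, certificate files, and pushed body are placeholders):
const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('key.pem'),   // placeholder certificate files
  cert: fs.readFileSync('cert.pem'),
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/') {
    // The PUSH_PROMISE for the stylesheet goes out on the browser-initiated stream.
    stream.pushStream({ ':path': '/style.css' }, (err, pushStream) => {
      if (err) return; // e.g. the client has disabled push
      pushStream.respond({ ':status': 200, 'content-type': 'text/css' });
      pushStream.end('body { color: #222; }');
    });
    stream.respond({ ':status': 200, 'content-type': 'text/html' });
    stream.end('<link rel="stylesheet" href="/style.css"><h1>hello</h1>');
  } else {
    stream.respond({ ':status': 404 });
    stream.end();
  }
});

server.listen(8443);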

libevent2 http server how to detect client close

I'm writing a simple web server that handles long polling, which means the server doesn't send a full HTTP response to the client (web browser, curl, etc.), but only sends the HTTP headers and holds the connection open.
I use command-line curl to produce a request to the server; it prints the HTTP response headers as expected, and curl hangs, also as expected. Then I press CTRL+C to terminate the curl process, but the server never notices the close (on_disconnect() is never called).
Some of the code:
void request_handler(struct evhttp_request *req, void *arg){
    // Only send the response headers and one piece of chunked data.
    struct evbuffer *buf = evbuffer_new();
    evhttp_send_reply_start(req, HTTP_OK, "OK");
    evbuffer_add_printf(buf, "...\n");
    evhttp_send_reply_chunk(req, buf);
    evbuffer_free(buf);
    // Register the connection-close callback ("sub" comes from elsewhere in the application).
    evhttp_connection_set_closecb(req->evcon, on_disconnect, sub);
}

void on_disconnect(struct evhttp_connection *evcon, void *arg){
    printf("disconnected\n");
}

evhttp_set_gencb(http, request_handler, NULL);
My question is: how do I detect this kind of client close (i.e., a TCP FIN being received)?
I think that this is a bug in libevent (there is an issue filed about it on GitHub).
I have a similar issue with libevent 2.0.21, and here's what happens in my case: When evhttp_read_header() in libevent's http.c is done reading the HTTP headers from the client, it disables any future read events by calling bufferevent_disable(..., EV_READ). However, read events need to be enabled for the underlying bufferevent to report EOF. That's why the bufferevent never tells libevent's HTTP code when the client closes the connection.
I fixed the issue in my case by simply removing the call to bufferevent_disable() in evhttp_read_header(). I use "Connection: close" and nothing but GET requests, so the client never sends any data after the headers and this solution seems to work well for this very specific and simple use case. I am not sure, however, that this doesn't break other use cases, especially when you use persistent connections or things like "Expect: 100-continue".
You may want to take this to the libevent mailing list.

Netty http server responses

This is probably simple, but I couldn't figure it out. My Netty 4 based HTTP server causes HTTP clients to hang on its response. It manages to send its response payload (as observed with curl as a client), but the clients don't realize the response has finished and wait indefinitely for it to complete. Observed with curl, as well as Firefox and Chrome.
Only if I modify the code to close the channel (channel.close, as seen inline below) do the clients acknowledge that the response is done; otherwise they just keep waiting for it to complete. I want the channel to stay open so that the next client request does not require opening a new connection (i.e. keep-alive behavior), so closing the channel isn't an option. So I'm not sure how the server should mark the response as over without closing the connection.
The server code:
val response = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK)
val buf = new StringBuilder
buf.append("hello")
response.data.writeBytes(Unpooled.copiedBuffer(buf, CharsetUtil.UTF_8))
ctx.write(response).addListener(new ChannelFutureListener() {
    def operationComplete(channelFuture: ChannelFuture) {
        if (channelFuture.isSuccess) {
            println("server write finished successfully")
            //channelFuture.channel.close <===== if uncommented, clients receive the response, otherwise they just keep waiting forever
        }
        else
            println("server write failed: " + channelFuture.cause + "\n" + channelFuture.cause.getStackTraceString)
    }
})
What am I missing??
You need a Content-Length header; without one, the client has no way to know where the response body ends, so it keeps waiting for more data. Set it to the number of readable bytes in the response payload before writing the response.
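On the wire, the complete response the client needs to see would look like this (the body here is the five-byte "hello", so Content-Length is 5):
HTTP/1.1 200 OK
Content-Length: 5

hello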

Why does recv() return 0 bytes on every for-loop iteration except the first one?

I'm writing a small networking program in C++. Among other things, it has to download Twitter profile pictures. I have a list (std::vector) of URLs, and my plan is to loop over it, send a GET request through the socket for each one, and save each picture to its own PNG file. The problem is that the very first message works fine: I send it, receive the response segments, and save the image data. But on the next iteration, the same message sent through the same socket results in recv() returning 0 bytes. I worked around the problem by creating a new socket inside the loop body, but I'm a bit confused about the socket concepts. It looks as though, after sending one message, the socket has to be closed and recreated in order to send the next message to the same server (to get the next image). Is this the right way to do socket programming, or is it possible to receive several HTTP responses through the same socket?
Thanks in advance.
UPD: Here is the code with the loop where I create a socket.
// Get links from xml.
...
// Load images in cycle.
int i = 0;
for (i = 0; i < imageLinks.size(); i++)
{
    // New socket is returned from serverConnect. Why do we need to create a new one at each iteration?
    string srvAddr = "207.123.60.126";
    int sockImg = serverConnect(srvAddr);
    // Create a message.
    ...
    string message = "GET " + relativePart;
    message += " HTTP/1.1\r\n";
    message += "Host: " + hostPart + "\r\n";
    message += "\r\n";
    // Send a message.
    BufferArray tempImgBuffer = sendMessage(sockImg, message, false);
    fstream pFile;
    string name;
    // Form the name.
    ...
    pFile.open(name.c_str(), ios::app | ios::out | ios::in | ios::binary);
    // Write the file contents.
    ...
    pFile.close();
    // Close the socket.
    close(sockImg);
}
The other side is closing the connection. That's how HTTP/1.0 works. You can:
Make a different connection for each HTTP GET
Use HTTP/1.0 with the unofficial Connection: Keep-Alive
Use HTTP/1.1. In HTTP 1.1 all connections are considered persistent unless declared otherwise.
Obligatory xkcd link Server Attention Span
From the Wikipedia article on HTTP: "The original version of HTTP (HTTP/1.0) was revised in HTTP/1.1. HTTP/1.0 uses a separate connection to the same server for every request-response transaction, while HTTP/1.1 can reuse a connection multiple times."
HTTP in its original form (HTTP 1.0) is indeed a "one request per connection" protocol. Once you get the response back, the other side has probably closed the connection. There were unofficial mechanisms added to some implementations to support multiple requests per connection, but they were not standardized.
HTTP 1.1 turns this around. All connections are by default "persistent".
To use this, you need to add "HTTP/1.1" to the end of your request line. Instead of GET http://someurl/, do GET http://someurl/ HTTP/1.1. You'll also need to make sure you provide the "Host:" header when you do this.
Note well, however, that even some otherwise-compliant HTTP servers may not support persistent connections. Note also that the connection may in fact be dropped after very little delay, a certain number of requests, or just randomly. You must be prepared for this, and ready to re-connect and resume issuing your requests where you left off.
See also the HTTP 1.1 RFC.
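To make that concrete, here is roughly what reusing one HTTP/1.1 connection looks like on the wire (the host, paths, and Content-Length value are made up); read each response completely, using its Content-Length to find the end of the body, before sending the next request:
Client->Server: GET /profile_images/1/avatar.png HTTP/1.1
Client->Server: Host: a0.twimg.com
Client->Server:
Server->Client: HTTP/1.1 200 OK
Server->Client: Content-Length: 12345
Server->Client:
Server->Client: <12345 bytes of PNG data>
Client->Server: GET /profile_images/2/avatar.png HTTP/1.1
Client->Server: Host: a0.twimg.com
Client->Server:
Server->Client: HTTP/1.1 200 OK
Server->Client: ... (second response on the same connection)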

HTTP server detecting a broken network connection from an HTTP client

I have a web application in which, after making an HTTP request to the server, the client quits (or the network connection is broken) before the response has been completely received by the client.
In this scenario, the server side of the application needs to do some cleanup work. Is there a way built into the HTTP protocol to detect this condition? How does the server know whether the client is still waiting for the response or has quit?
Thanks
Vijay Kumar
No, there is nothing built in to the protocol to do this (after all, you can't tell whether the response has been received by the client itself yet, or just a downstream proxy).
Just have your client make a second request to acknowledge that it has received and stored the original response. If you don't see a timely acknowledgement, run the cleanup.
However, make sure that you understand the implications of the Two Generals' Problem.
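A rough sketch of that acknowledgement pattern in Node.js (the /ack route, the X-Response-Id header, the 30-second window, and the cleanup function are all illustrative, not part of the original answer):
const http = require('http');
const crypto = require('crypto');

const pending = new Map(); // response id -> cleanup timer
const ACK_TIMEOUT_MS = 30 * 1000;

http.createServer((req, res) => {
  if (req.url.startsWith('/ack/')) {
    // Client confirms it received and stored the earlier response.
    const id = req.url.slice('/ack/'.length);
    const timer = pending.get(id);
    if (timer) {
      clearTimeout(timer);
      pending.delete(id);
    }
    res.end('ok');
    return;
  }
  // Normal request: answer it, then wait for an acknowledgement.
  const id = crypto.randomUUID();
  pending.set(id, setTimeout(() => {
    pending.delete(id);
    cleanup(id); // no ack in time: assume the client never got the response
  }, ACK_TIMEOUT_MS));
  res.setHeader('X-Response-Id', id);
  res.end('...response body...');
}).listen(8080);

function cleanup(id) {
  console.log('cleaning up after unacknowledged response', id);
}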
You might have a network problem... Usually, when you send an HTTP request to the server, you first send the headers and then the content of the POST (if it is a POST method). Likewise, the server responds with the headers and the document body. The first line in the headers is the status; usually status 200 is the success status, and if you get that, there should be no problem getting the rest of the document. See this for details on HTTP response status codes: http://www.w3.org/Protocols/rfc2616/rfc2616-sec6.html
Later edit:
Sorry, I misread your question. Basically, you don't have a trigger for when the user disconnects. If you use OOP, you could use the destructor of a class to clean up whatever it is you need to clean up.
