This is probably simple, but I couldn't figure it out. My Netty 4 based http server is causing http clients to hang on its response. It manages to send its response payload (as observed using curl as a client), but the clients don't seem to realize that the response has finished, and they wait indefinitely for it to complete. Observed with curl, as well as Firefox and Chrome.
Only if I modify the code to close the channel (channel.close, as seen inline below) do the clients acknowledge that the response is done. Otherwise, they just keep waiting for it to complete. I want the channel to stay open so that the next client request won't require opening a new connection (i.e. keep-alive behavior), so closing the channel isn't a viable option. So I'm not sure how the server should mark the response as over without closing the connection.
The server code:
val response = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK)
val buf = new StringBuilder
buf.append("hello")
response.data.writeBytes(Unpooled.copiedBuffer(buf, CharsetUtil.UTF_8))
ctx.write(response).addListener(new ChannelFutureListener(){
  def operationComplete(channelFuture: ChannelFuture) {
    if (channelFuture.isSuccess) {
      println("server write finished successfully")
      //channelFuture.channel.close <===== if uncommented, clients receive the response, otherwise they just keep waiting forever
    }
    else
      println("server write failed: " + channelFuture.cause + "\n" + channelFuture.cause.getStackTraceString)
  }
})
What am I missing??
You need a Content-Length header, or else the client won't know when to stop reading and will just keep waiting for more data.
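A minimal sketch of how that could look with the API from your snippet (hedged: I'm assuming request is the incoming HttpRequest, which isn't shown in your code, and note that in later Netty 4.0 builds response.data became response.content() and ctx.write is usually paired with a flush or replaced by ctx.writeAndFlush):

response.data.writeBytes(Unpooled.copiedBuffer(buf, CharsetUtil.UTF_8))
// Tell the client exactly how many bytes to expect, so it knows when the response ends
response.headers().set(HttpHeaders.Names.CONTENT_LENGTH, response.data.readableBytes())
if (HttpHeaders.isKeepAlive(request)) {
  // Keep the connection open for the client's next request
  response.headers().set(HttpHeaders.Names.CONNECTION, HttpHeaders.Values.KEEP_ALIVE)
  ctx.write(response)
} else {
  // No keep-alive requested: close once the write completes
  ctx.write(response).addListener(ChannelFutureListener.CLOSE)
}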
Related
I'm using Akka's HTTP client to make a connection to an infinitely streaming HTTP endpoint. I am having difficulty getting the client to close the upstream to the HTTP server.
Here's my code (StreamRequest().stream returns a Source[T, Any]. It's generated by Http().outgoingConnectionHttps and then a Flow[HttpResponse, T, NotUsed] to convert HttpResponse to a stream of T):
val (killSwitch, tFuture) = StreamRequest()
  .stream
  .takeWithin(timeToStreamFor)
  .take(toPull)
  .viaMat(KillSwitches.single)(Keep.right)
  .toMat(Sink.seq)(Keep.both)
  .run()
Then I have
tFuture.onComplete { _ =>
  info(s"Shutting down the connection")
  killSwitch.shutdown()
}
When I run the code I see the 'Shutting down the connection' log message but the server tells me that I'm still connected. It disconnects only when the JVM exits.
Any ideas what I'm doing wrong or what I should be doing differently here?
Thanks!
I suspect you should invoke Http().shutdownAllConnectionPools() when tFuture completes. The pool does not close connections because they can be reused by different stream materialisations, so when the stream completes it does not close the pool. The connection shutdown you see in the log may be because the idle timeout has triggered for one of the connections.
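A hedged sketch of that, reusing the names from your snippet and assuming an implicit ActorSystem and ExecutionContext are in scope:

tFuture.onComplete { _ =>
  info(s"Shutting down the connection")
  killSwitch.shutdown()
  // Ask the Http extension to close all pooled connections for this ActorSystem;
  // it returns a Future[Unit] that completes once the pools have been drained.
  Http().shutdownAllConnectionPools()
}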
Abstract
Hi, I was wondering whether it is possible to lose a message with SignalR. Suppose a client disconnects but eventually reconnects in a short amount of time, for example 3 seconds. Will the client get all of the messages that were sent to it while it was disconnected?
For example, let's consider the LongPolling transport. As far as I'm aware, long polling is a simple http request that is issued in advance by the client in order to wait for a server event.
As soon as a server event occurs, the data gets published on that http request, which causes the issued http request's connection to close. After that, the client issues a new http request that repeats the whole loop again.
The problem
Suppose two events happened on the server, first A then B (nearly instantly). The client gets message A, which results in the http connection closing. Now, to get message B, the client has to issue a second http request.
Question
If the B event happened while the client was disconnected from the server and was trying to reconnect, will the client get the B message automatically, or do I have to invent some sort of mechanism to ensure no messages are lost?
The question applies not only to long polling but to the general situation of client reconnection.
P.S.
I'm using SignalR Hubs on the server side.
EDIT:
I've found out that the order of messages is not guaranteed, but I was not able to make SignalR lose messages.
The answer to this question lies in the EnqueueOperation method here...
https://github.com/SignalR/SignalR/blob/master/src/Microsoft.AspNet.SignalR.Core/Transports/TransportDisconnectBase.cs
protected virtual internal Task EnqueueOperation(Func<object, Task> writeAsync, object state)
{
    if (!IsAlive)
    {
        return TaskAsyncHelper.Empty;
    }

    // Only enqueue new writes if the connection is alive
    Task writeTask = WriteQueue.Enqueue(writeAsync, state);
    _lastWriteTask = writeTask;

    return writeTask;
}
When the server sends a message to a client it calls this method. In your example above, the server would enqueue two messages to be sent; the client would reconnect after receiving the first, and then the second message would be sent.
If the server queues and sends the first message and the client then reconnects, there is a small window in which the second message could be enqueued while the connection is not alive, so it would be dropped on the server end. After the reconnect, the client wouldn't get the second message.
Hope this helps
I'm writing a simple web server that handles long polling, which means the server doesn't send a full HTTP response to the client (web browser, curl, etc.), but only sends the HTTP headers and leaves the connection hanging.
I use command-line curl to make a request to the server; it prints out the HTTP response headers fine, and curl hangs as expected. Then I press CTRL+C to terminate the curl process, but the server never notices this close (on_disconnect() is never called).
Some of the codes:
void request_handler(struct evhttp_request *req, void *arg){
    // only send response headers, and one piece of chunked data
    struct evbuffer *buf = evbuffer_new();
    evhttp_send_reply_start(req, HTTP_OK, "OK");
    evbuffer_add_printf(buf, "...\n");
    evhttp_send_reply_chunk(req, buf);
    evbuffer_free(buf);
    // register connection close callback
    evhttp_connection_set_closecb(req->evcon, on_disconnect, sub);
}
void on_disconnect(struct evhttp_connection *evcon, void *arg){
    printf("disconnected\n");
}
evhttp_set_gencb(http, request_handler, NULL);
My question is: how can the server detect this kind of client close (a TCP FIN being received)?
Bug at github
I think that this is a bug in libevent.
I have a similar issue with libevent 2.0.21, and here's what happens in my case: When evhttp_read_header() in libevent's http.c is done reading the HTTP headers from the client, it disables any future read events by calling bufferevent_disable(..., EV_READ). However, read events need to be enabled for the underlying bufferevent to report EOF. That's why the bufferevent never tells libevent's HTTP code when the client closes the connection.
I fixed the issue in my case by simply removing the call to bufferevent_disable() in evhttp_read_header(). I use "Connection: close" and nothing but GET requests, so the client never sends any data after the headers and this solution seems to work well for this very specific and simple use case. I am not sure, however, that this doesn't break other use cases, especially when you use persistent connections or things like "Expect: 100-continue".
You may want to take this to the libevent mailing list.
To support a protocol (Icecast Source Protocol) based on HTTP, I need to be able to use a socket from Node.js's http.Server once the HTTP request is finished. A sample request looks like this:
Client->Server: GET / HTTP/1.0
Client->Server: Some-Headers:header_value
Client->Server:
Server->Client: HTTP/1.0 200 OK
Server->Client:
Client->Server: <insert stream of binary data here>
This is to support the source of an internet radio stream, the source of the stream data being the client in this case.
Is there any way I can use Node.js's built in http.Server? I have tried this:
this.server = http.createServer(function (req, res) {
  console.log('connection!');
  res.writeHead(200, {test: 'woot!'});
  res.write('test');
  res.write('test2');
  req.connection.on('data', function (data) {
    console.log(data);
  });
}).listen(1337, '127.0.0.1');
If I telnet into port 1337 and make a request, I am able to see the first couple characters of what I type on the server console window, but then the server closes the connection. Ideally, I'd keep that socket open indefinitely, and take the HTTP part out of the loop once the initial request is made.
Is this possible with the stock http.Server class?
Since the client is reporting HTTP/1.0 as the protocol version, the server is probably closing the connection. If your client is something you have control over, you might want to try setting the keep-alive header (Connection: Keep-Alive is the right one, I think).
My solution to this problem was to reinvent the wheel and write my own HTTP-ish server. Not perfect, but it works. Hopefully the innards of some of these stock Node.js classes will be exposed some day.
I was in a similar situation, here's how I got it to work:
http.createServer(function (req, res) {
  // Prepare the response headers
  res.writeHead(200);
  // Flush the headers to the socket
  res._send('');
  // Inform the http.ServerResponse instance that we've sent the headers
  res._headerSent = true;
}).listen(1234);
The socket will now remain open, as no http.serverResponse.end() has been called, but the headers have been flushed.
If you want to send response data (not that you'll need to for an Icecast source connection), simply:
res.write(buffer_or_string);
res._send('');
When closing the connection just call res.end().
I have successfully streamed MP3 data using this method, but haven't tested it under stress.
I'm writing a small networking program in C++. Among other things it has to download Twitter profile pictures. I have a list (std::vector) of URLs, and my next step is to create a for-loop that sends GET messages through the socket and saves the pictures to separate png files.

The problem is that when I send the very first message, receive the response segments and save the png data, everything seems to be fine. But on the very next iteration the same message, sent through the same socket, makes recv() return 0 bytes. I solved the problem by moving the socket-creation code into the loop body, but I'm a bit confused about the socket concepts. It looks like after I send a message, the socket has to be closed and recreated in order to send the next message to the same server (to get the next image). Is this the right way to do socket programming, or is it possible to receive several HTTP responses through the same socket?
Thanks in advance.
UPD: Here is the code with the loop where I create a socket.
// Get links from xml.
...
// Load images in cycle.
int i=0;
for (i=0; i<imageLinks.size(); i++)
{
    // A new socket is returned from serverConnect. Why do we need to create a new one at each iteration?
    string srvAddr = "207.123.60.126";
    int sockImg = serverConnect(srvAddr);

    // Create a message.
    ...
    string message = "GET " + relativePart;
    message += " HTTP/1.1\r\n";
    message += "Host: " + hostPart + "\r\n";
    message += "\r\n";

    // Send a message.
    BufferArray tempImgBuffer = sendMessage(sockImg, message, false);

    fstream pFile;
    string name;
    // Form the name.
    ...
    pFile.open(name.c_str(), ios::app | ios::out | ios::in | ios::binary);
    // Write the file contents.
    ...
    pFile.close();

    // Close the socket.
    close(sockImg);
}
The other side is closing the connection. That's how HTTP/1.0 works. You can:
Make a different connection for each HTTP GET
Use HTTP/1.0 with the unofficial Connection: Keep-Alive
Use HTTP/1.1. In HTTP 1.1 all connections are considered persistent unless declared otherwise.
Obligatory xkcd link Server Attention Span
From the Wikipedia article on HTTP:
The original version of HTTP (HTTP/1.0) was revised in HTTP/1.1. HTTP/1.0 uses a separate connection to the same server for every request-response transaction, while HTTP/1.1 can reuse a connection multiple times.
HTTP in its original form (HTTP 1.0) is indeed a "one request per connection" protocol. Once you get the response back, the other side has probably closed the connection. There were unofficial mechanisms added to some implementations to support multiple requests per connection, but they were not standardized.
HTTP 1.1 turns this around. All connections are by default "persistent".
To use this, you need to add "HTTP/1.1" to the end of your request line. Instead of GET http://someurl/, do GET http://someurl/ HTTP/1.1. You'll also need to make sure you provide the "Host:" header when you do this.
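For illustration only (hypothetical host and path), a complete keep-alive request on the wire would look something like this, with a blank line terminating the headers (Connection: keep-alive is the HTTP/1.1 default and is shown here just for explicitness):

GET /images/avatar_42.png HTTP/1.1
Host: example.com
Connection: keep-alive
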
Note well, however, that even some otherwise-compliant HTTP servers may not support persistent connections. Note also that the connection may in fact be dropped after very little delay, a certain number of requests, or just randomly. You must be prepared for this, and ready to re-connect and resume issuing your requests where you left off.
See also the HTTP 1.1 RFC.