SignalR queue limit per user - asynchronous

I have this code to test asynchronous programming in SignalR. This code sends the text back to the client after 10 seconds.
public class TestHub : Hub
{
    public async Task BroadcastMessage(string text)
    {
        await DelayResponse(text);
    }

    async Task DelayResponse(string text)
    {
        await Task.Delay(10000);
        Clients.All.displayText(text);
    }
}
This code works fine, but there is an unexpected behavior: when 5 messages are sent in less than 10 seconds, the client can't send any more messages until the previous "DelayResponse" calls end. It happens per connection, and if I close the connection and reopen it before the 10 seconds are up, the client can send 5 messages again. I tested it with Chrome, Firefox and IE.
Did I make some mistake, or is this a SignalR limitation?

You are most likely hitting a browser limit. When using the longPolling and serverSentEvents transports, each send is a separate HTTP request. Since you are delaying the response, these requests are long running, and browsers limit how many concurrent connections can be open to the same host. Once you reach the limit, a new connection will not be opened until one of the previous ones completes.
More details on concurrent requests limit:
Max parallel http connections in a browser?

That's not really what SignalR is meant for, having the client wait on a "long running" task. For that, SignalR supports a server push mechanism.
So if you have something that needs more time, you can trigger it from the client.
When the calculation is finished, you can send a message from the server to the client.

Related

Does HTTP long polling support heartbeat message?

I am using HTTP long polling for pushing server events to a client.
On the client side, I send a long polling request to the server and block there waiting for an event from the server.
On the server side, we use the CometD framework (I work on the client side and do not really know much about the server side).
The problem is that after some time the connection is broken and the client cannot detect this, so it blocks there forever. We are trying to implement some kind of heartbeat message, sent every N minutes, to keep the connection active, but this does not seem to work.
My question is: does HTTP long polling support heartbeat messages? As far as I understand, HTTP long polling only allows the server to send one event and will close the connection immediately thereafter. The client must reconnect and send a new request in order to receive the next event. Is it possible for the server to send heartbeat messages every N minutes while still keeping the connection open until a real server event happens?
If you use the CometD framework, then it takes care of notifying the application (both on client and on server) about when the connection is broken, and it does send heartbeat messages.
What you call "HTTP long polling" is just a normal HTTP request, so in itself it does not support heartbeat messages.
You can use HTTP long polling requests to implement heartbeat messages, and this is what CometD does for you under the covers.
In CometD, the response to an HTTP long poll request may deliver multiple messages, and the connection will not be closed afterwards. The client will send another HTTP long poll request without the need to reconnect, possibly reusing the previous connection.
CometD offers to your application a higher level API that is independent from the transport, so you can use WebSocket rather than HTTP, which is way more efficient, without changing a single line in your application.
You need to use the CometD libraries both on the client (JavaScript and Java) and on the server, and everything will just work.
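For illustration, here is a minimal sketch of a CometD Java client subscribing to a channel, assuming the CometD 3.x client API and Jetty's HttpClient; the server URL and the /events channel name are placeholders, not taken from the question. The point is that the handshake, the outstanding long-poll "connect", heartbeats and reconnects are all handled inside the library:

import java.util.HashMap;
import java.util.Map;

import org.cometd.client.BayeuxClient;
import org.cometd.client.transport.ClientTransport;
import org.cometd.client.transport.LongPollingTransport;
import org.eclipse.jetty.client.HttpClient;

public class EventSubscriber {
    public static void main(String[] args) throws Exception {
        // Jetty's HttpClient backs CometD's long-polling transport.
        HttpClient httpClient = new HttpClient();
        httpClient.start();

        Map<String, Object> options = new HashMap<>();
        ClientTransport transport = new LongPollingTransport(options, httpClient);
        BayeuxClient client = new BayeuxClient("http://example.com/cometd", transport);

        // Handshake; afterwards CometD keeps a long-poll "connect" request
        // outstanding and handles heartbeats and reconnects internally.
        client.handshake();
        client.waitFor(5000, BayeuxClient.State.CONNECTED);

        // Subscribe to an application channel; a single long-poll response
        // may carry several messages.
        client.getChannel("/events").subscribe((channel, message) ->
                System.out.println("Received: " + message.getData()));
    }
}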

Does SignalR provide message integrity mechanisms which ensure that no messages are lost during client reconnect

Abstract
Hi, I was pondering whether it is possible to lose a message with SignalR. Suppose a client disconnects but eventually reconnects in a short amount of time, for example 3 seconds. Will the client get all of the messages that were sent to him while he was disconnected?
For example, let's consider the LongPolling transport. As far as I'm aware, long polling is a simple HTTP request that is issued in advance by the client in order to wait for a server event.
As soon as a server event occurs, the data is published on that HTTP request, which closes the connection for that request. After that, the client issues a new HTTP request and the whole loop repeats.
The problem
Suppose two events happen on the server, first A then B (nearly instantly). The client gets message A, which results in the HTTP connection being closed. Now, to get message B, the client has to issue a second HTTP request.
Question
If event B happened while the client was disconnected from the server and was trying to reconnect, will the client get message B automatically, or do I have to invent some sort of mechanism to ensure message integrity?
The question applies not only to long polling but to the general situation of client reconnection.
P.S.
I'm using SignalR Hubs on the server side.
EDIT:
I've found out that the order of messages is not guaranteed, but I was not able to make SignalR lose messages.
The answer to this question lies in the EnqueueOperation method here...
https://github.com/SignalR/SignalR/blob/master/src/Microsoft.AspNet.SignalR.Core/Transports/TransportDisconnectBase.cs
protected virtual internal Task EnqueueOperation(Func<object, Task> writeAsync, object state)
{
    if (!IsAlive)
    {
        return TaskAsyncHelper.Empty;
    }

    // Only enqueue new writes if the connection is alive
    Task writeTask = WriteQueue.Enqueue(writeAsync, state);
    _lastWriteTask = writeTask;

    return writeTask;
}
When the server sends a message to a client it calls this method. In your example above, the server would enqueue 2 messages to be sent, then the client would reconnect after receiving the first, then the second message would be sent.
If the server queues and sends the first message and the client reconnects, there is a small window in which the second message could be enqueued while the connection is not alive, and that message would be dropped at the server end. Then, after the reconnect, the client wouldn't get the second message.
Hope this helps

Async Netty HttpServer and HttpClient

I have been exploring Netty for the past few days, as I am writing a quick and tight HTTP server that should receive lots of requests, and Netty's HTTP server implementation is quite simple and does the job.
My next step, as part of the request handling, is to launch an HTTP request to an external web server. My intuition is to implement an asynchronous client that can send a lot of requests simultaneously, but I am a little confused about the right approach. My understanding is that the Netty server uses a worker thread for each incoming message; therefore, that worker thread would not be freed to accept new messages until my handler finishes its work.
Here is the punch: even if I have an asynchronous HTTP client in hand, it won't matter if I need to wait for each response and process it back with my server handler - the same worker thread would remain blocked all this time. The alternative is to use the async nature of the client, returning a future object quickly to release the thread and placing a listener (meaning I have to return a 200 or 202 status to the client), then use my future object to know when the response is received so I can push it to the client.
Does this make sense? Am I way off with my assumptions? What is a good practice for implementing this kind of Netty acceptor server + external client with high concurrency?
Thanks,
Assuming you're asking about Netty 4.
Netty configured with a ServerBootstrap will have a fixed number of worker threads that it uses to accept requests and execute the channel, like so:
Two threads accepting / processing requests
bootstrap.group(new NioEventLoopGroup(2));
One thread accepting requests, two threads processing.
bootstrap.group(new NioEventLoopGroup(1), new NioEventLoopGroup(2));
In your case, you have a channel pipeline that includes a bunch of HTTP codec decoding/encoding stuff and your own handler, which itself makes an outgoing HTTP request. You're right that you don't want to block the server from accepting incoming requests, or from decoding the incoming HTTP message, and there are two things you can do to mitigate that; you've struck on the first already.
Firstly, you want to use an async Netty client to make the outgoing requests, and have a listener write the response to the original request's channel when the outgoing request returns. This means you don't block and wait, meaning you can handle many more concurrent outgoing requests than the number of threads available to process those requests.
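As a rough sketch of that first point (not from the original answer): a handler that fires the outgoing request asynchronously and only writes to the original channel from the completion callback. For brevity it uses the JDK 11 HttpClient as the async client alongside Netty 4.1 types; the backend URL is a placeholder, and a Netty Bootstrap-based client would slot into the callback the same way:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.util.CharsetUtil;

public class ForwardingHandler extends SimpleChannelInboundHandler<FullHttpRequest> {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest request) {
        // Fire the outgoing request without blocking the event loop thread.
        HttpRequest outgoing = HttpRequest.newBuilder(URI.create("http://backend.example/api")).build();

        CLIENT.sendAsync(outgoing, HttpResponse.BodyHandlers.ofString())
              .whenComplete((backendResponse, error) -> {
                  // Runs on the HTTP client's executor; Netty hands writes from
                  // foreign threads off to the channel's event loop, so this is safe.
                  String body = error == null ? backendResponse.body() : "upstream error";
                  HttpResponseStatus status = error == null ? HttpResponseStatus.OK
                                                            : HttpResponseStatus.BAD_GATEWAY;
                  FullHttpResponse response = new DefaultFullHttpResponse(
                          HttpVersion.HTTP_1_1, status,
                          Unpooled.copiedBuffer(body, CharsetUtil.UTF_8));
                  response.headers().set(HttpHeaderNames.CONTENT_TYPE, "text/plain; charset=utf-8");
                  response.headers().set(HttpHeaderNames.CONTENT_LENGTH, response.content().readableBytes());
                  ctx.writeAndFlush(response).addListener(ChannelFutureListener.CLOSE);
              });
    }
}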
Secondly, you can have your custom handler run in its own EventExecutorGroup, which means it runs in a separate threadpool from the acceptor / http codec channel handlers, like so:
// Two separate threads to execute your outgoing requests..
EventExecutorGroup separateExecutorGroup = new DefaultEventExecutorGroup(2);

bootstrap.childHandler(new ChannelInitializer<SocketChannel>() {
    @Override
    public void initChannel(SocketChannel ch) {
        ChannelPipeline pipeline = ch.pipeline();
        // .... http codec stuff ....
        pipeline.addLast(separateExecutorGroup, customHandler);
    }
});
Meaning your outgoing requests don't hog the threads that would be used for accepting / processing incoming ones.

Detecting aborted requests in a HttpServlet

Is there a way to find out if a HttpServletRequest is aborted?
I'm writing an instant browser application (some kind of chat): the client asks for new events in a loop using AJAX HTTP requests. The server (Tomcat) handles the requests in an HttpServlet. If there are no new events for this client, the server delays the reply until a new event arrives or a timeout occurs (30 sec).
Now I want to identify clients that are no longer polling. Therefore, I start a kick timer at the end of a request, which is stopped when a new request arrives. If the client closes the browser window, the TCP connection is closed and the HTTP request is aborted.
Problem: the client does not run into the kick timeout, because the servlet is still handling the event request - sleeping and waiting for an event or the timeout.
It would be great if I could somehow listen for connection abort events and then notify the waiting request in order to stop it. But I couldn't find anything like that in the HttpServletRequest or HttpServletResponse...
This probably won't help the OP any more, but it might help others trying to detect aborted HTTP connections in HttpServlet in general, as I was having a similar problem and finally found an answer.
The key is that when the client cancels the request, normally the only way for the server to find out is to send some data back to the client, which will fail in that case. I wanted to detect when a client stops waiting for a long computation on the server, so I ended up periodically writing a single character to the response body through HttpServletResponse's writer. To force sending the data to the client, you must call HttpServletResponse.flushBuffer(), which throws a ClientAbortException if the connection is aborted.
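A minimal sketch of that approach, assuming Tomcat (ClientAbortException is Tomcat-specific and extends IOException, so catching IOException is the portable variant); the work and cleanup methods are placeholders, not part of the original answer:

import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LongComputationServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/plain");
        PrintWriter out = resp.getWriter();

        while (!computationFinished()) {          // placeholder for the real work loop
            doOneChunkOfWork();                   // placeholder
            try {
                out.print(' ');                   // heartbeat character in the body
                resp.flushBuffer();               // forces the write; fails if the client is gone
            } catch (IOException clientAborted) { // ClientAbortException on Tomcat
                cancelComputation();              // placeholder: stop wasting CPU
                return;
            }
        }
        out.print(resultAsText());                // placeholder

    }

    // Placeholders so the sketch is self-contained; replace with real logic.
    private boolean computationFinished() { return true; }
    private void doOneChunkOfWork() { }
    private void cancelComputation() { }
    private String resultAsText() { return "done"; }
}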
You are probably using some sort of thread notification (semaphores or Object.wait) to hold and release the servlet threads. How about adding a timeout (~10 s) to the wait, then somehow checking whether the connection is still alive, and then continuing the wait for another 10 s if the connection is still there?
I don't know whether there are reliable ways to poll the "liveness" of the connection (e.g. resp.getOutputStream not throwing an exception) and, if so, which way is the best (most reliable, least CPU intensive).
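A rough sketch of what that wait loop could look like (the mutex/pendingEvent fields and the write-a-byte liveness probe are assumptions, not a verified recipe):

import java.io.IOException;

import javax.servlet.http.HttpServletResponse;

public class EventWaiter {

    private final Object mutex = new Object();
    private String pendingEvent;                          // set by the event-producing thread

    // Called from the servlet thread; returns the event, or null if the
    // client went away or the long-poll window expired.
    public String awaitEvent(HttpServletResponse resp, long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        synchronized (mutex) {
            while (pendingEvent == null && System.currentTimeMillis() < deadline) {
                mutex.wait(10_000);                       // wake up roughly every 10 s
                if (pendingEvent != null) {
                    break;
                }
                try {
                    resp.getOutputStream().write(' ');    // liveness probe
                    resp.flushBuffer();
                } catch (IOException clientGone) {
                    return null;                          // connection aborted; stop waiting
                }
            }
            return pendingEvent;
        }
    }

    // Called by whoever produces events for this client.
    public void publish(String event) {
        synchronized (mutex) {
            pendingEvent = event;
            mutex.notifyAll();
        }
    }
}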
It seems like having waiting requests could degrade the performance of your system pretty quickly. The threads that respond to requests would get used up fast if requests are held open. You could try completing all requests (and returning "null" to your clients if there is no message), and having a thread on the back-end that keeps track of how long it's been since clients have polled. The thread could mark a client as being inactive.

Can a http server detect that a client has cancelled their request?

My web app must process and serve a lot of data to display certain pages. Sometimes, the user closes or refreshes a page while the server is still busy processing it. This means the server will continue to process data for several minutes only to send it to a client who is no longer listening.
Is it possible to detect that the connection has been broken, and react to it?
In this particular project, we're using Django and NginX, or Apache. I assumed this is possible because the Django development server appears to react to cancelled requests by printing Broken Pipe exceptions. I'd love to have it raise an exception that my application code could catch. It appears JSP can do this. So can node.js here.
Alternatively, I could register an unload event handler on the page in question, have it do a synchronous XHR requesting that the previous request from this user be cancelled, and do some kind of inter-process communication to make it so. Perhaps if the slower data processing were handed to another process that I could more easily identify and kill, without killing the responding process...
While @Oded is correct that HTTP is stateless between requests, app servers can indeed detect when the underlying TCP/IP connection has broken for the request being processed. Why is this? Because TCP is a stateful protocol for reliable connections.
A common technique for .Net web apps processing a resource intensive request is to check Response.IsClientConnected (docs) before starting the resource intensive work. There is no point in wasting CPU cycles to send an expensive response to a client that isn't there anymore.
private void Page_Load(object sender, EventArgs e)
{
    // Check whether the browser remains
    // connected to the server.
    if (Response.IsClientConnected)
    {
        // If still connected, do work
        DoWork();
    }
    else
    {
        // If the browser is not connected
        // stop all response processing.
        Response.End();
    }
}
Please reply with your target app server stack so I can provide a more relevant example.
Regarding your 2nd alternative to use XHR to post client page unload events to the server, @Oded's comment about HTTP being stateless between requests is spot on. This is unlikely to work, especially in a farm with multiple servers.
HTTP is stateless, hence the only way to detect a disconnected client is via timeouts.
See the answers to this SO question (Java Servlet: How to detect browser closing?).
