SignalR connection is endless - asp.net

I have a SignalR connection working, but something very weird happens: sometimes it connects perfectly within a few seconds, and other times, when I track the request, it spends more than 10 minutes trying to connect and gives me something like that.
Can anyone give me an explanation for this? Any hints on how to track down the problem?

The request you're looking at, /connect?transport=serverSentEvents&..., is supposed to be endless.
SignalR is using a Comet technique called Server-Sent Events (SSE). The basic idea is that SignalR responds to SSE requests in chunks but never actually closes the response unless the client asks it to.
Browsers with SSE support can read the chunks as the server sends them, even though the response never ends. This allows an unlimited number of messages to be sent in response to a single request.
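To make the never-ending response concrete, here is a minimal sketch of that SSE pattern as a plain ASP.NET handler. This is written for illustration only and is not SignalR's actual code; the handler name and the one-second delay are placeholders.

```csharp
using System;
using System.Threading;
using System.Web;

// Illustrative only: set the event-stream content type, then keep writing
// chunks and flushing without ever ending the response, until the client
// disconnects. This is the behaviour that makes the request look "endless".
public class SseDemoHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return false; }
    }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/event-stream";
        context.Response.BufferOutput = false; // push each chunk immediately

        while (context.Response.IsClientConnected)
        {
            // Each "data: ...\n\n" block is one SSE message; the response
            // itself never completes.
            context.Response.Write("data: " + DateTime.UtcNow.ToString("o") + "\n\n");
            context.Response.Flush();
            Thread.Sleep(1000); // stand-in for waiting on real messages
        }
    }
}
```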

Related

SignalR long polling is making requests frequently

I'm using SignalR in Mono. It's working fine, but it always uses long polling. I'm fine with sticking to long polling. But as I understand long polling, the browser makes a request to the server and the server holds that request open. Once the server has something to respond with, it sends a response to that request. If the request times out, the client sends another request to the server. Please correct me if my understanding is wrong.
In my SignalR implementation, however, the browser makes a new request every 15 seconds. I'm not sure whether the timeout for SignalR long polling is 15 seconds, and if it is, I don't know how to change it. Or is this not normal behaviour? Please help.
Update 1:
Please find the log entries,
To be precise, it takes exactly 17 seconds for SignalR to make the next request. I can see a 'Long polling complete' message in the logs; I assume it appears after the current request times out. My question is: is there a way to increase this timeout?
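For reference, in the .NET-hosted SignalR server the interval after which an idle connection is completed is controlled through GlobalHost.Configuration. Below is a hedged sketch only: exact behaviour differs between SignalR versions and transports, and whether the Mono host honours this setting identically is an assumption.

```csharp
using System;
using Microsoft.AspNet.SignalR;

public static class SignalRConfig
{
    // Call this from Application_Start, before the SignalR routes are mapped.
    public static void Configure()
    {
        // How long a transport connection (such as an open long-poll request)
        // is kept open before it is completed and the client reconnects.
        // 110 seconds is the SignalR default; adjust as needed.
        GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(110);
    }
}
```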

SignalR over websocket timeout issue

Our system now defaults to WebSockets for our SignalR communications.
When I send a whole bunch of data, the WebSocket seems to time out after about 2-3 seconds. Setting the transport to 'longPolling' works just fine. I don't see anywhere that a timeout needs to be set explicitly for a WebSocket.
How do I prevent the WebSocket transport from timing out when sending large amounts of data in one go?
Edit:
We are using the JavaScript client, version 1.1.3, and apparently sending more than 2 seconds' worth of data :) I am sending 500 message structures, each around 90 bytes.
But it seems something else may be going on, as the messages are sent one by one.

Implementing a background process responding to the client in an Atmosphere + Netty/Jetty application

We have a requirement to support 10k+ users, where every user initiates a request and waits for a response from the server (the response can take as long as 20-30 seconds to arrive). It is only one request from the client; after lengthy processing by the server, a response is transmitted and then the connection is closed.
In the background, the server will do some DB searches and wait for other background processes to signal completion before responding to the client.
After doing some research, I figured out that we will need to use something like the Atmosphere framework to support WebSockets/SSE/long polling, along with an asynchronous server like Netty (=> Nettosphere) or Jetty.
As for my experience: mostly the Java EE world and the Tomcat server.
My questions are:
What will be easier to implement given my experience and our requirements: Atmosphere + Netty or Atmosphere + Jetty? Which one scales better, has an easier learning curve, and is easier to integrate with other Java technologies?
How do you implement in Atmosphere a response that is sent only to the originating client and not broadcast to the rest of the clients? (All the examples I found are broadcasts.)
How can I implement our response in Netty (or Jetty) when using the Atmosphere framework? That is, the client sends a request, some background processes run on the server after it is received, and when they finish I need to locate the connection and transmit the response. Is that achievable?
Some thoughts:
At 10k+ users with 20-30 second response latency, you will likely hit file descriptor limits if you use just one network interface. Consider a solution that uses multiple network interfaces.
Your description of your request/response can be handled entirely with standard Servlet 3.0, standard HTTP/1.1, Async request handling, and large timeouts.
If your clients are web browsers, and you don't start sending a response from the server until the 20-30 second window, you might hit browser idle timeouts.
Atmosphere and CometD do the same things: they support long-duration connections, with fallbacks between connection techniques and with logical channel APIs.
I believe the Akka framework will handle this sort of need. I am looking at using it to handle scaling issues, possibly with RabbitMQ to help offload work to other servers that may be added later to scale as needed.

SignalR duplicating responses

I'm using SignalR with Redis as a message bus on a server that sits behind an Nginx proxy for load balancing. I used SignalR's PersistentConnection class to write a simple chat program that broadcasts messages to users belonging to the same group. Users are added to a group in OnConnectedAsync, removed in OnDisconnectAsync, and the user-to-group mapping is deterministic.
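For context, a connection of the kind described looks roughly like the sketch below. This is a hedged illustration, not the asker's code: it uses the SignalR 1.x method names (OnConnected/OnReceived/OnDisconnected) rather than the older *Async names mentioned above, and GroupFor is a hypothetical stand-in for the deterministic user-to-group mapping.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class ChatConnection : PersistentConnection
{
    protected override Task OnConnected(IRequest request, string connectionId)
    {
        // Add the user to its deterministic group on connect.
        return Groups.Add(connectionId, GroupFor(request));
    }

    protected override Task OnReceived(IRequest request, string connectionId, string data)
    {
        // One incoming POST should produce exactly one group broadcast.
        return Groups.Send(GroupFor(request), data);
    }

    protected override Task OnDisconnected(IRequest request, string connectionId)
    {
        return Groups.Remove(connectionId, GroupFor(request));
    }

    private static string GroupFor(IRequest request)
    {
        // Hypothetical: derive the group from something on the request,
        // e.g. a query-string value.
        return request.QueryString["room"];
    }
}
```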
Currently, the client side falls back to long polling for whatever reason (I'm not entirely sure why), and whenever the client sets up a new connection after waiting for and receiving a response, the server will sometimes, seemingly at random, respond to the new connection immediately with the previous response, despite there having been only one POST.
The message IDs tend to differ by exactly one (the smaller ID coming first), with the rest of the response remaining the same. I logged some debug info and am quite positive that my override of OnReceivedAsync is sending one response per request. I tried the same implementation without the Redis message bus and got the same problem. Running locally (with long polling), however, yielded good results, so I suspect the problem might lie in the way the message bus buffers messages to refresh clients that might not be caught up, combined with some odd timing in the cutting and setting up of connections with the Nginx load balancer. Beyond that, I am very much at a loss.
Any help would be appreciated.
EDIT: Further investigation reveals that duplication occurs at somewhat regular intervals of approximately 20-30 seconds. I'm led to believe that the message expiration in the message bus might have something to do with the bug.
EDIT: Bug can be seen here: http://tinyurl.com/9q5t3va
The server is simply broadcasting a counter being sent by the client. You will notice some responses are duplicated every 20 or so.
Reducing the number of worker processes in the IIS (6.0) Server Manager from 2 to 1 solved the problem.

How does SignalR work internally?

Can anyone explain, at a high level, how SignalR works internally?
I am guessing it flushes the data using Response.Flush, and on the client side it sends Ajax requests at certain intervals. Is that correct?
No. SignalR is an abstraction over a connection. It gives you two programming models over that connection (hubs and persistent connections). SignalR has a concept of transports; each transport decides how data is sent and received and how it connects and disconnects.
SignalR has a few built in transports:
WebSockets
Server Sent Events
Forever Frame
Long polling
SignalR tries to choose the "best" transport supported by both the server and the client (you can also force it to use a specific transport).
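For example, forcing a transport from the .NET client looks roughly like this (the JavaScript client takes an equivalent transport option when starting the connection); the URL, hub name and method name below are placeholders:

```csharp
using System;
using Microsoft.AspNet.SignalR.Client;
using Microsoft.AspNet.SignalR.Client.Transports;

class ForcedTransportExample
{
    static void Main()
    {
        // Placeholder URL and hub/method names.
        var connection = new HubConnection("http://localhost:8080/");
        var hub = connection.CreateHubProxy("MyHub");

        // Force long polling instead of letting the client pick the best
        // available transport (WebSocketTransport and
        // ServerSentEventsTransport can be passed the same way).
        connection.Start(new LongPollingTransport()).Wait();

        hub.Invoke("Send", "hello from a forced transport").Wait();
    }
}
```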
That's the high level. If you want to see how each transport is implemented, you can look at the source code.
There's also client code for each transport:
https://github.com/SignalR/SignalR/tree/master/src/Microsoft.AspNet.SignalR.Client.JS
If you're asking about how the long polling transport works in particular:
It sends an Ajax request to the server, which waits asynchronously for a signal before responding. When a signal arrives or the request times out, the request returns from the server, the client sends another request, and the process continues. (I left out some details about how the client keeps track of what it has already seen so that it doesn't miss messages.)
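A rough sketch of that loop in C#, conceptual only: it is not SignalR's actual wire protocol, and the URL shape, the "messageId" parameter and the Process helper are invented for illustration.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Conceptual long-polling loop: poll, wait for data or the server's poll
// timeout, remember how far we've read, then immediately poll again.
class LongPollingSketch
{
    static async Task PollForever(string baseUrl)
    {
        using (var http = new HttpClient { Timeout = TimeSpan.FromMinutes(2) })
        {
            string lastMessageId = null;

            while (true)
            {
                // The server holds this request open until it has data or its
                // poll timeout elapses, then completes the response.
                var url = baseUrl + "/poll?messageId=" + (lastMessageId ?? "");
                var body = await http.GetStringAsync(url);

                // Dispatch whatever arrived and remember the cursor so the
                // next poll doesn't miss or repeat messages.
                lastMessageId = Process(body);

                // ...then loop and poll again (error handling omitted).
            }
        }
    }

    static string Process(string body)
    {
        Console.WriteLine(body);
        return "latest-id-from-payload"; // placeholder
    }
}
```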
Hopefully that answers most of your question.
@davidfowl has already answered the major portion. However, to provide some more detail on how the transports differ in behavior, specifically WebSockets versus the other transports, here are some points.
WebSocket is the only transport that establishes a true persistent, two-way connection between client and server. However, WebSocket is supported only on IIS 8 or later and in the latest versions of Internet Explorer, Google Chrome and Mozilla Firefox.
Server-Sent Events, Forever Frame and long polling, on the other hand, all use a one-way (server-to-client) channel and are supported by most browsers.
