How does SignalR work internally?

Can anyone explain, at a high level, how SignalR works internally?
I am guessing it flushes data using Response.Flush on the server and that the client sends Ajax requests at certain intervals. Is that correct?

No, SignalR is an abstraction over a connection. It gives you two programming models over that connection (hubs and persistent connections). SignalR has a concept of transports; each transport decides how data is sent and received and how it connects and disconnects.
SignalR has a few built in transports:
WebSockets
Server Sent Events
Forever Frame
Long polling
SignalR tries to choose the "best" transport supported by both the server and the client (you can also force it to use a specific transport).
That's the high level. If you want to see how each transport is implemented, you can look at the source code.
There's also client code for each transport:
https://github.com/SignalR/SignalR/tree/master/src/Microsoft.AspNet.SignalR.Client.JS
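For illustration, here is a minimal sketch of forcing specific transports with the jQuery-based SignalR 2.x JavaScript client (written as TypeScript; the page is assumed to already have jquery.js and jquery.signalR.js loaded):

    declare const $: any; // jQuery with the SignalR client plugin attached

    $.connection.hub.logging = true; // log transport negotiation to the console
    $.connection.hub
        .start({ transport: ["webSockets", "longPolling"] }) // try these, in order
        .done(() => console.log("connected via", $.connection.hub.transport.name))
        .fail((err: unknown) => console.error("could not connect", err));

Leaving out the transport option lets the client negotiate the best available transport, as described above.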
If you're asking about how the long polling transport works in particular:
The client sends an Ajax request that the server holds open asynchronously, waiting for a signal to respond to. When a signal arrives or the request times out, the server responds, the client immediately sends another request, and the process continues. (I've left out some details about how the client keeps track of what it has already seen so that it doesn't miss messages.)
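As a rough illustration of that loop (not SignalR's actual client code), a long-polling client can be sketched like this; the /poll endpoint and lastId parameter are invented for the example:

    // Each request waits on the server until there is data or a timeout,
    // then the client immediately reconnects.
    async function longPoll(url: string): Promise<void> {
        let lastId = 0; // track the last message seen so nothing is missed
        while (true) {
            try {
                const res = await fetch(`${url}?lastId=${lastId}`);
                if (res.ok) {
                    const messages: { id: number; body: string }[] = await res.json();
                    for (const m of messages) {
                        lastId = Math.max(lastId, m.id);
                        console.log("received", m.body);
                    }
                }
            } catch {
                // network error: back off briefly before reconnecting
                await new Promise((resolve) => setTimeout(resolve, 2000));
            }
        }
    }

    longPoll("/poll"); // hypothetical endpoint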
Hopefully that answers most of your question.

@davidfowl has already answered the major portion. To add a little more detail on how the transports differ in behavior, specifically WebSocket versus the others:
WebSocket is the only transport that establishes a true persistent, two-way connection between client and server. It does, however, require IIS 8 or above on the server and a recent browser (the latest versions of Internet Explorer, Google Chrome or Mozilla Firefox).
Server Sent Events, Forever Frame and Long Polling all use one-way (server-to-client) communication and are supported by most browsers.
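To make the one-way vs. two-way distinction concrete, compare the browser primitives these transports build on (the URLs are placeholders):

    // Server-Sent Events: the server can push to the client over this channel,
    // but the client cannot send data back on it; it needs separate HTTP requests.
    const events = new EventSource("/updates");
    events.onmessage = (e: MessageEvent) => console.log("pushed by server:", e.data);

    // WebSocket: one full-duplex connection; both sides can send at any time.
    const socket = new WebSocket("wss://example.com/hub");
    socket.onopen = () => socket.send("hello from the client");
    socket.onmessage = (e: MessageEvent) => console.log("pushed by server:", e.data);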

Related

implementing a background process responding to the client in an atmosphere+netty/jetty application

We have a requirement to support 10k+ users, where every user initiates a request and waits for a response from the server (the response can take as long as 20-30 seconds to arrive). It is only one request from the client; after the long processing on the server, a response is transmitted and then the connection disconnects.
In the background, the server does some DB searching and waits for other background processes to notify it of completion before responding to the client.
After doing some research, I figured we would need to use something like the Atmosphere framework to support WebSockets/SSE/long polling, along with an asynchronous server like Netty (=> Nettosphere) or Jetty.
As for my experience: mostly the Java EE world and the Tomcat server.
My questions are:
Which will be easier to implement given my experience and our requirement: Atmosphere + Netty or Atmosphere + Jetty? Which one scales better, has the easier learning curve, and integrates more easily with other Java technologies?
How do you implement, in Atmosphere, a response that is sent only to the originating client rather than broadcast to all clients? (All the examples I found are broadcasts.)
How can I implement our response in Netty (or Jetty) when using the Atmosphere framework? That is, the client sends a request, some background processes run after it is received on the server, and when they finish I need to locate the connection and transmit the response. Is that achievable?
Some thoughts:
At 10k+ users, with 20-30 second response latency, you likely hit file descriptor limits if using just 1 network interface. Consider a solution that uses multiple network interfaces.
Your request/response flow as described can be handled entirely with standard Servlet 3.0 async request handling, standard HTTP/1.1, and large timeouts (a generic sketch of the pattern follows these notes).
If your clients are web browsers, and you don't start sending a response from the server until the 20-30 second window, you might hit browser idle timeouts.
Atmosphere and Cometd do the same things, supporting long duration connections, with connection technique fallbacks, and with logical channel APIs.
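The hold-the-request pattern mentioned above is stack-agnostic; here is a rough sketch of the same idea using Node's built-in http module rather than Servlet 3.0 (the 25-second job is a stand-in for the real background work):

    import * as http from "http";

    // Stand-in for the 20-30 second DB search / completion notification.
    function slowBackgroundJob(): Promise<string> {
        return new Promise((resolve) => setTimeout(() => resolve("done"), 25_000));
    }

    const server = http.createServer(async (_req, res) => {
        // Hold this particular response open until the background work finishes,
        // then answer only the client that made the request (no broadcast involved).
        const result = await slowBackgroundJob();
        res.writeHead(200, { "Content-Type": "text/plain" });
        res.end(result);
    });

    // Note: for very long holds, the server's (and any intermediary's) idle and
    // request timeouts may need to be raised.
    server.listen(8080);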
I believe the Akka framework will handle this sort of need. I am looking at using it to handle scaling issues, possibly together with RabbitMQ to offload work to other servers that may be added later to scale as needed.

Would you see a significant speedup using a single websocket connection for all requests on a website?

Imagine I'm building an ordinary old website. Not a game, not a chat program, an ordinary website. Let's say it's a stack overflow clone.
The client side simply makes service calls to the server. The server is essentially a dumb data store and never sends down HTML; the client handles all templating via JavaScript.
If I established a single websocket connection and did all requests through that, would I see a significant speedup over doing ajax requests?
The obvious advantage to using a single connection is that it only has to be established once. But how much time does that actually save? I know establishing a TCP connection can be costly, but in the grand scheme of things, does it matter?
I would not recommend WebSockets for ordinary web pages. HTTP 1.1 can reuse a TCP connection for multiple requests; it was only HTTP 1.0 that had to open a new TCP connection for each request.
SPDY is probably the protocol that does what you are looking for. See SPDY: An experimental protocol for a faster web, but it's only supported by Chrome.
If you use websockets, the requests will not be cached.
One HTTP connection can only be used for one HTTP request at a time. Say a page requests a 100 KB document: nothing else will be sent from the client to the server over that connection until the 100 KB document has been transferred. This is called head-of-line blocking. The client can establish additional connections to the server, but there is also a limit on the number of concurrent connections to the same server.
One of the primary reasons for developing SPDY and later HTTP/2 was solving this exact problem. However, support for SPDY and HTTP/2 is not yet as widespread as for WebSocket. WebSocket can get you there earlier because it supports multiple streams in full-duplex mode.
Once HTTP/2 is better supported it will be the preferred solution for this problem, but WebSocket will still be better for real-time web applications where the server needs to push data to the client.
Have a look at the N2O framework, it was created to address the problems I described above. In N2O WebSocket is used to send all assets associated with a page.
How much speed you could gain from using WebSocket instead of standard HTTP requests depends pretty much on your specific website: how often it requests data from the server, how large a typical response is, and so on.
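To illustrate what "all requests through one WebSocket" typically involves, here is a rough sketch of correlating requests and responses over a single socket; the endpoint and the message format are invented for the example:

    // Minimal request/response multiplexing over one WebSocket.
    // Each request carries an id; the server is assumed to echo the id back.
    type Pending = { resolve: (value: unknown) => void; reject: (err: Error) => void };

    const socket = new WebSocket("wss://example.com/api");
    const pending = new Map<number, Pending>();
    let nextId = 0;

    socket.onmessage = (e: MessageEvent) => {
        const msg = JSON.parse(e.data as string) as { id: number; result: unknown };
        pending.get(msg.id)?.resolve(msg.result);
        pending.delete(msg.id);
    };

    function call(method: string, params: unknown): Promise<unknown> {
        const id = ++nextId;
        return new Promise((resolve, reject) => {
            pending.set(id, { resolve, reject });
            socket.send(JSON.stringify({ id, method, params }));
        });
    }

    // usage, once socket.onopen has fired:
    //   call("getQuestion", { id: 42 }).then(console.log);

Note that this hand-rolled correlation, plus the caching and error handling you would still need, is exactly the kind of work SPDY and HTTP/2 do for you.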

What's the behavioral difference between HTTP Keep-Alive and Websockets?

I've been working with websockets lately in detail. Created my own server and there's a public demo. I don't have such detailed experience or knowledge re: http. (Although since websocket requests are upgraded http requests, I have some.)
On my end, the server reports details of each hit. Among them are a bunch of http keep-alive requests. My server doesn't handle them because they're not websocket requests. But it got my curiosity up.
The whole big thing about websockets is that the connection stays alive. Then you can pass messages in both directions (simultaneously even). I've read that the Keep-Alive HTTP connection is a relatively new development (I don't know how many years in people time, just that it's only included in the latest standard - 1.1 - is that actually old now?)
I guess I can assume that there's a behavioral difference between the two or there would have been no reason for a websocket standard? What's the difference?
The Keep-Alive header has existed since HTTP 1.0; it indicates that an HTTP client would like to maintain a persistent connection with the HTTP server. The main objective is to eliminate the need to open a new TCP connection for each HTTP request. However, while the persistent connection is open, communication between client and server still follows the basic HTTP request/response pattern. In other words, the server side can't push data to the client.
WebSocket is a completely different mechanism, used to set up a persistent, full-duplex connection. Over this full-duplex connection the server side can push data to the client, and the client should expect to process data from the server side at any time.
Quoting corresponding entries on Wikipedia for reference:
1) http://en.wikipedia.org/wiki/HTTP_persistent_connection
2) http://en.wikipedia.org/wiki/WebSocket
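The practical difference shows up in client code (the URLs are placeholders): over a kept-alive HTTP connection the client still has to initiate every exchange, while over a WebSocket the server may speak whenever it wants.

    // HTTP keep-alive: the TCP connection is reused, but each exchange is still a
    // client-initiated request followed by a server response.
    setInterval(async () => {
        const res = await fetch("/inbox");
        console.log(await res.json());
    }, 5_000);

    // WebSocket: after the upgrade, either side may send at any time.
    const ws = new WebSocket("wss://example.com/socket");
    ws.onmessage = (e: MessageEvent) => console.log("server pushed:", e.data);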
You should read up on COMET, a design pattern which shows the limits of HTTP Keep-Alive. Keep-Alive is over 12 years old now, so it's not a new feature of HTTP. The problem is that it's not sufficient; the client and server cannot communicate in a truly asynchronous manner. The client must always use a "hanging" request in order to get a message back from the server; the server may not just send a message to the client at any time it wants.
HTTP vs Websockets
REST (HTTP)
Resources benefit from caching when the representation of a resource changes rarely or multiple clients are expected to retrieve the resource.
HTTP methods have well-known idempotency and safety properties. A request is "idempotent" if issuing it multiple times has the same effect as issuing it once.
The HTTP design allows for responses to describe errors with the request, with the resource, or to provide nuanced status information to differentiate between success scenarios.
HTTP has built-in request/response semantics.
Although HTTP/1.1 allows multiple requests to reuse a single connection, there will generally be small timeout periods intended to control resource consumption.
You might be using HTTP incorrectly if…
Your design relies on a client polling the service often, without the user taking action.
Your design requires frequent service calls to send small messages.
The client needs to quickly react to a change to a resource, and it cannot predict when the change will occur.
The resulting design is cost-prohibitive. Ask yourself: Is a WebSocket solution substantially less effort to design, implement, test, and operate?
WebSockets
WebSocket design does not allow explicit or transparent proxies to cache messages, which can degrade client performance.
The WebSocket protocol only reports errors that affect establishing the connection; once the connection is up and messages are being exchanged, any additional error scenarios must be addressed in the design of the messaging layer. On the other hand, WebSockets can be more efficient than REST because they do not incur the HTTP request/response overhead for each message sent and received.
When a client needs to react quickly to a change (especially one it cannot predict), a WebSocket may be best.
This makes the protocol well suited to “fire and forget” messaging scenarios and poorly suited for transactional requirements.
WebSockets were designed specifically for long-lived connection scenarios; they avoid the overhead of repeatedly establishing connections and sending HTTP request/response headers, which can result in a significant performance boost.
You might be using WebSockets incorrectly if..
The connection is used only for a very small number of events, or for a very small amount of time, and the client does not need to react quickly to the events.
Your feature requires multiple WebSockets to be open to the same service at once.
Your feature opens a WebSocket, sends messages, then closes it—then repeats the process later.
You’re re-implementing a request/response pattern within the messaging layer.
The resulting design is cost-prohibitive. Ask yourself: Is an HTTP solution substantially less effort to design, implement, test, and operate?
Ref: https://blogs.windows.com/buildingapps/2016/03/14/when-to-use-a-http-call-instead-of-a-websocket-or-http-2-0/

What TCP protocols are usable for client to client communication?

Many times clients ask for features like instant messaging (IM) and other client-to-client (P2P) communication for their web apps. How is this typically done in normal web browsers? For example, I've seen demos of Google Wave (and Gmail) that are able to IM from a regular browser. Is this via HTTP? Or does XmlHttpRequest (AJAX) provide the necessary backend for such communication?
More than anything, I wonder how a server can "wake up" the remote client, say to deliver an IM. Or does the client have to keep "polling" the message server for new IMs?
Typically the browser will poll the server for new messages. A common approach to make this more efficient is the 'long poll' (see also this link): the server responds immediately if it has anything; otherwise, it waits, keeping the connection open for a while. If a message comes in, it immediately wakes up and sends it; otherwise it comes back with a 'nope, check back' after a few tens of seconds. The client then immediately reconnects to go back into the long-polling state.
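A server-side sketch of that behaviour, using Node's http module (the endpoints, queue, and 30-second timeout are invented for the example):

    import * as http from "http";

    // Clients currently parked in a long poll.
    const waiters: http.ServerResponse[] = [];

    function deliver(message: string): void {
        // A new message wakes up every parked client immediately.
        while (waiters.length > 0) {
            waiters.pop()!.end(JSON.stringify({ message }));
        }
    }

    const server = http.createServer((req, res) => {
        if (req.url === "/poll") {
            waiters.push(res);
            // If nothing arrives, answer "no news" after ~30s so the client redials.
            setTimeout(() => {
                const i = waiters.indexOf(res);
                if (i !== -1) {
                    waiters.splice(i, 1);
                    res.end(JSON.stringify({ message: null }));
                }
            }, 30_000);
        } else if (req.url === "/send") {
            deliver("hello"); // stand-in for a real message body
            res.end("ok");
        } else {
            res.statusCode = 404;
            res.end();
        }
    });

    server.listen(8080);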

How do I create a chat server that is not driven by polling?

I have created a simple chat server that is driven by client polling. Clients send requests for data every few seconds, and get handed any new messages as well as information about whether their peer is still connected.
Since the client is running on a mobile platform (iPhone), I've been looking for ways of getting rid of the polling, which quickly drains the battery. I've read that it's possible to keep an http connection open indefinitely, but haven't understood how to utilize this technique in practice. I'm also wondering whether such connections are stable enough to use in a mobile setting.
The ideal scenario would be that the server only sends data to clients when an event that affects them has occurred (such as a peer posting a message or going off line).
Is it advisable to try to accomplish this over HTTP, or would I have to write my own protocol over TCP? How hard would it be to customize XMPP to my needs (my chat server has some specialized features that I would need to be able to implement easily)?
How about push technology? see http://en.wikipedia.org/wiki/Comet_(programming)
I think you're describing XMPP over BOSH.
http://xmpp.org/extensions/xep-0206.html
I've used this http-binding method between a chat server and javascript client on non-mobile devices. It worked well for me.
You might like to check out this project, which uses a variety of techniques including Comet. Release details are here; here's a snippet from that page:
It's my distinct pleasure to be able to announce the first public showing of a project that I've been working on in my spare time over the last month or two: a new web-based IRC chat application.
This project brings together a lot of new technologies which had to be developed to make this feasible, scalable and efficient.
Some of the underlying tools built to make this possible that I consider 'stable enough' are already released, such as the PHP socket daemon library I wrote to be able to deal with hundreds up to many thousands of "Comet" HTTP connections, and an equal number of IRC client connections.
I just found this article myself, which describes the following technique (which I referred to in the question):
... have the client make an HTTP request and have the server hold the request on the queue until there is a message to push. if the TCP/IP connection is lost or times-out, the client will make a new HTTP request, and the delay will only be the round trip time for a request/response pair . . . this model effectively requires two TCP/IP connections for HTTP, client to server, though none permanent and hence mobile friendly
I think this is nearly impossible, and dangerous. The internet works stateless and connectionless, meaning that the connection between client and server always has to be treated as unreliable. And that is not without reason.
By trying to get a stateful connection you are introducing new issues, especially from a 3G application. What if the connection breaks? You have no control over the server and cannot push.
I think it would even be easier to send SMS/text messages and have an application that handles those.
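If you do go the persistent-connection route despite the concern above, the broken-connection case is usually handled on the client with automatic reconnection; a minimal sketch (the URL is a placeholder):

    // Reconnect a WebSocket with simple exponential backoff when the
    // (inevitably unreliable) connection drops.
    function connect(url: string, attempt = 0): void {
        const ws = new WebSocket(url);
        ws.onopen = () => { attempt = 0; };
        ws.onmessage = (e: MessageEvent) => console.log("message:", e.data);
        ws.onclose = () => {
            const delay = Math.min(30_000, 1_000 * 2 ** attempt);
            setTimeout(() => connect(url, attempt + 1), delay);
        };
    }

    connect("wss://example.com/chat");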
