Implementing a background process that responds to the client in an Atmosphere + Netty/Jetty application (asynchronous)

We have a requirement to support 10k+ users, where every user initiates a request and waits for a response from the server (the response can take as long as 20-30 seconds to arrive). There is only one request from the client; after lengthy processing on the server, a response is transmitted and the connection is then closed.
In the background, the server will do some DB searching and wait for other background processes to signal completion before responding to the client.
After doing some research, I figured we will need to use something like the Atmosphere framework to support WebSockets/SSE/long polling, along with an asynchronous server like Netty (=> Nettosphere) or Jetty.
As for my experience: mostly the Java EE world and the Tomcat server.
My questions are:
Given my experience and our requirement, which will be easier to implement: Atmosphere + Netty or Atmosphere + Jetty? Which one scales better, has the gentler learning curve, and integrates more easily with other Java technologies?
How do you implement, in Atmosphere, a response that is sent only to the originating client and not broadcast to the rest of the clients? (All the examples I found broadcast.)
How can I implement our response flow in Netty (or Jetty) when using the Atmosphere framework? I.e., the client sends a request; after it is received on the server, some background processes run, and when they finish I need to locate the connection and transmit the response. Is that achievable?

Some thoughts:
At 10k+ users with 20-30 second response latency, you will likely hit file descriptor limits if you are using just one network interface. Consider a solution that uses multiple network interfaces.
The request/response flow you describe can be handled entirely with standard Servlet 3.0, standard HTTP/1.1, async request handling, and large timeouts.
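To make that concrete, here is a minimal Servlet 3.0 async sketch of that approach; the URL pattern, thread-pool size, and runSlowLookup placeholder are illustrative assumptions, not part of the original answer.

    import java.io.IOException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Minimal Servlet 3.0 async sketch: the container thread is released while the
    // slow work runs, and the response is written only to the originating client.
    @WebServlet(urlPatterns = "/search", asyncSupported = true)
    public class SearchServlet extends HttpServlet {

        private final ExecutorService workers = Executors.newFixedThreadPool(8);

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            AsyncContext ctx = req.startAsync();
            ctx.setTimeout(35_000);                       // a bit longer than the expected 20-30 s
            String query = req.getParameter("query");     // read request data before leaving this thread

            workers.submit(() -> {
                try {
                    String result = runSlowLookup(query); // stand-in for the DB search and other background work
                    HttpServletResponse asyncResp = (HttpServletResponse) ctx.getResponse();
                    asyncResp.setContentType("application/json");
                    asyncResp.getWriter().write(result);
                } catch (IOException e) {
                    // log and fall through; complete() must still run
                } finally {
                    ctx.complete();                       // finishes this client's request only
                }
            });
        }

        private String runSlowLookup(String query) {
            return "{\"status\":\"done\"}";               // placeholder result
        }
    }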
If your clients are web browsers, and you don't start sending any response from the server until the 20-30 seconds have passed, you might hit browser idle timeouts.
Atmosphere and CometD do the same things: they support long-duration connections, with connection-technique fallbacks and logical channel APIs.
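For the "only to the originating client" question, a common Atmosphere pattern is to suspend the AtmosphereResource and later look it up by its UUID to write and resume it. A rough sketch, assuming an Atmosphere 2.x-style API; the handler wiring and the supplyAsync stand-in for the real background work are assumptions:

    import java.util.concurrent.CompletableFuture;
    import org.atmosphere.cpr.AtmosphereResource;
    import org.atmosphere.cpr.AtmosphereResourceFactory;

    // Sketch only: suspend the connection on arrival, remember its UUID, and when
    // the background work for that client completes, find the resource again and
    // resume it with the result. Nothing is broadcast to other clients.
    public class PerClientResponder {

        // Called from your AtmosphereHandler / @ManagedService when the request arrives.
        public void onRequest(AtmosphereResource resource) {
            resource.suspend();              // keep this client's connection open
            String uuid = resource.uuid();   // identifies this connection only

            // Stand-in for the DB search plus waiting on other background processes.
            CompletableFuture.supplyAsync(() -> "{\"status\":\"done\"}")
                             .thenAccept(result -> respond(uuid, result));
        }

        // Called when the background processing signals completion.
        private void respond(String uuid, String result) {
            AtmosphereResource resource = AtmosphereResourceFactory.getDefault().find(uuid);
            if (resource != null) {
                resource.getResponse().write(result); // goes to this client, not a broadcast
                resource.resume();                    // ends the long-lived request
            }
        }
    }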

I believe the Akka framework will handle this sort of need. I am looking at using it to handle scaling issues, possibly with RabbitMQ to help offload work to other servers that may be added later to scale as needed.
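For a flavour of what that might look like, here is a minimal sketch against Akka's classic Java actor API; the Search message type and the callback that completes the client's suspended response are assumptions for illustration, and RabbitMQ is left out.

    import akka.actor.AbstractActor;
    import akka.actor.ActorRef;
    import akka.actor.ActorSystem;
    import akka.actor.Props;

    // Minimal Akka (classic Java API) sketch: each client request becomes a message,
    // a worker actor does the slow lookup, and the reply goes to a per-request
    // callback so only the originating client receives the result.
    public class SearchWorker extends AbstractActor {

        // Hypothetical message type: the query plus a callback that completes the
        // suspended HTTP response for this one client.
        public static final class Search {
            final String query;
            final java.util.function.Consumer<String> replyToClient;
            public Search(String query, java.util.function.Consumer<String> replyToClient) {
                this.query = query;
                this.replyToClient = replyToClient;
            }
        }

        @Override
        public Receive createReceive() {
            return receiveBuilder()
                    .match(Search.class, msg -> {
                        String result = "{\"result\":\"for " + msg.query + "\"}"; // stand-in for DB work
                        msg.replyToClient.accept(result);
                    })
                    .build();
        }

        public static void main(String[] args) {
            ActorSystem system = ActorSystem.create("search-system");
            ActorRef worker = system.actorOf(Props.create(SearchWorker.class), "worker");
            worker.tell(new Search("jetty vs netty", System.out::println), ActorRef.noSender());
        }
    }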

Related

Did server successfully receive request

I am working on a C# mobile application that requires major interaction with a PHP web server. However, the application also needs to support an "offline mode", as the connection will be over a cellular network. This network may drop requests at random times. The problem I have experienced with previous "offline mode" applications is that when a request results in a timeout, the server may or may not have already processed that request. In cases where sending the request more than once would create a duplicate, this is a problem. I was thinking this through and came up with the following idea.
Mobile sets a header value such as UniqueRequestID: 1 to be sent with the request.
Upon receiving the request, the PHP server adds the UniqueRequestID to the current user session $_SESSION['RequestID'][] = $headers['UniqueRequestID'];
The server implements a GetRequestByID that returns true if the ID exists for the current session or false if not. Alternatively, this could return the cached result of the request.
This seems to be a somewhat reliable way of seeing whether a request successfully contacted the server. On mobile, upon reconnecting to the server, we check whether the request was received; if so, we skip that pending offline message and go to the next one (a client-side sketch of this flow follows).
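A client-side sketch of that flow, written here in Java rather than C#; the URLs, the getRequestById endpoint shape, and the "true" response format are assumptions for illustration.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.UUID;

    // Sketch: every request carries a UniqueRequestID header; after a timeout or
    // dropped connection, ask the server whether it already saw that ID before
    // resending with the SAME ID.
    public class OfflineAwareClient {

        private final HttpClient client = HttpClient.newHttpClient();

        public String sendWithDeduplication(String payload) throws Exception {
            String requestId = UUID.randomUUID().toString(); // the UniqueRequestID for this message
            while (true) {
                try {
                    HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/api/submit"))
                            .header("UniqueRequestID", requestId)
                            .POST(HttpRequest.BodyPublishers.ofString(payload))
                            .build();
                    return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
                } catch (java.io.IOException timeoutOrDrop) {
                    // On reconnect: ask the server's GetRequestByID whether this ID was processed.
                    // (A real client would also handle still being offline at this point.)
                    HttpRequest check = HttpRequest.newBuilder(
                            URI.create("https://example.com/api/getRequestById?id=" + requestId)).build();
                    if ("true".equals(client.send(check, HttpResponse.BodyHandlers.ofString()).body())) {
                        return "already processed"; // skip this pending message, move to the next one
                    }
                    // Otherwise loop and resend with the same UniqueRequestID.
                }
            }
        }
    }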
Question
Have I reinvented the wheel here? Is this method prone to failure (or am I going down a rabbit hole)? Is there a better way / alternative?
- I was pitching this to other developers here, and we thought this seemed simple enough that such a "system" would likely already exist somewhere.
- Apologies if my Google skills are failing me today.
As you correctly stated, this problem is not new. There have been multiple attempts to solve it at different levels.
Transport level
The HTTP transport protocol itself does not provide any mechanism for reliable data transfer. One of the reasons is that HTTP is stateless and doesn't care much about previous requests and responses. There was an attempt by IBM to make a reliable transport protocol called HTTPR, which was based on HTTP, but it never became popular. You can read more about it here.
Messaging level
Most web services out there still use HTTP as the transport protocol and the SOAP messaging protocol on top of it. SOAP over HTTP is not sufficient when an application-level messaging protocol must also guarantee some level of reliability and security. This is why the WS-Reliability and WS-ReliableMessaging protocols were introduced. Those protocols allow SOAP messages to be reliably delivered between distributed applications in the presence of software component, system, or network failures, and at the same time they provide additional security. You can read more about those protocols here and here.
Your solution
I guess there is nothing wrong with your approach if you need a simple way to ensure that a message has not already been processed. I would recommend using a database instead of the session to store the processing result for each request. If you use $_SESSION['RequestID'][] you will run into trouble if the session is lost (the user is offline for a certain time, the server is restarted or has crashed, etc.). Also, if you use a database instead of the session, you can scale out more easily later on just by adding extra web servers.
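A rough sketch of that database-backed check, written here in Java/JDBC rather than PHP; the processed_requests table and its columns are assumptions for illustration.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Sketch of a database-backed idempotency check: look the request ID up before
    // processing, and record the response afterwards so retries can be answered
    // from the cache even if the session is gone.
    public class RequestDeduplicator {

        private final Connection connection;

        public RequestDeduplicator(Connection connection) {
            this.connection = connection;
        }

        /** Returns the cached response if this UniqueRequestID was already processed, else null. */
        public String findCachedResponse(String userId, String requestId) throws SQLException {
            String sql = "SELECT response_body FROM processed_requests WHERE user_id = ? AND request_id = ?";
            try (PreparedStatement stmt = connection.prepareStatement(sql)) {
                stmt.setString(1, userId);
                stmt.setString(2, requestId);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getString("response_body") : null;
                }
            }
        }

        /** Records the result after processing so that retries can be answered from the cache. */
        public void recordResponse(String userId, String requestId, String responseBody) throws SQLException {
            String sql = "INSERT INTO processed_requests (user_id, request_id, response_body) VALUES (?, ?, ?)";
            try (PreparedStatement stmt = connection.prepareStatement(sql)) {
                stmt.setString(1, userId);
                stmt.setString(2, requestId);
                stmt.setString(3, responseBody);
                stmt.executeUpdate();
            }
        }
    }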

Would you see a significant speedup using a single websocket connection for all requests on a website?

Imagine I'm building an ordinary old website. Not a game, not a chat program, an ordinary website. Let's say it's a Stack Overflow clone.
The client side would simply make service calls to the server side. The server is essentially a dumb data store and never sends down HTML. The client handles all templating via JavaScript.
If I established a single websocket connection and did all requests through that, would I see a significant speedup over doing ajax requests?
The obvious advantage to using a single connection is that it only has to be established once. But how much time does that actually save? I know establishing a TCP connection can be costly, but in the grand scheme of things, does it matter?
I would not recommend WebSockets for ordinary web pages. HTTP/1.1 can reuse a TCP connection for multiple requests; it was only HTTP/1.0 that had to open a new TCP connection for each request.
SPDY is probably a protocol that does what you are looking for. See SPDY: An experimental protocol for a faster web, but it's only supported by Chrome.
If you use WebSockets, the requests will not be cached.
One HTTP connection can only be used for one HTTP request at a time. Say a page requests a 100 KB document: nothing else will be sent from the client to the server until that 100 KB document has been transferred. This is called head-of-line blocking. The client can establish additional connections with the server, but there is also a limit on the number of concurrent connections to the same server.
One of the primary reasons for developing SPDY and later HTTP/2 was solving this exact problem. However, support for SPDY and HTTP/2 is not yet as widespread as for WebSocket. WebSocket can get you there earlier because it supports multiple streams in full-duplex mode.
Once HTTP/2 is better supported, it will be the preferred solution for this problem, but WebSocket will still be better for real-time web applications where the server needs to push data to the client.
Have a look at the N2O framework; it was created to address the problems I described above. In N2O, WebSocket is used to send all assets associated with a page.
How much speed you could gain from using WebSocket instead of standard HTTP requests pretty much depends on your specific website: how often it requests data from the server, how big a typical response is, etc.
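If you do route all service calls over a single WebSocket, you have to re-implement request/response correlation yourself. A minimal sketch of that idea with the standard javax.websocket client API follows; the "<id>|<payload>" wire format and a server that understands it are assumptions for illustration.

    import java.net.URI;
    import java.util.Map;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;
    import javax.websocket.ClientEndpoint;
    import javax.websocket.ContainerProvider;
    import javax.websocket.OnMessage;
    import javax.websocket.Session;

    // Sketch: multiplex many logical requests over one WebSocket by tagging each
    // message with a correlation ID and matching replies to pending futures.
    @ClientEndpoint
    public class MultiplexedClient {

        private final Map<Long, CompletableFuture<String>> pending = new ConcurrentHashMap<>();
        private final AtomicLong nextId = new AtomicLong();
        private Session session;

        public void connect(String url) throws Exception {
            session = ContainerProvider.getWebSocketContainer()
                    .connectToServer(this, URI.create(url));
        }

        /** Sends one logical request; the returned future completes when its reply arrives. */
        public CompletableFuture<String> call(String payload) throws Exception {
            long id = nextId.incrementAndGet();
            CompletableFuture<String> reply = new CompletableFuture<>();
            pending.put(id, reply);
            session.getBasicRemote().sendText(id + "|" + payload);
            return reply;
        }

        @OnMessage
        public void onMessage(String message) {
            int sep = message.indexOf('|');
            long id = Long.parseLong(message.substring(0, sep));
            CompletableFuture<String> reply = pending.remove(id);
            if (reply != null) {
                reply.complete(message.substring(sep + 1));
            }
        }
    }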

What's the behavioral difference between HTTP Keep-Alive and Websockets?

I've been working with websockets lately in detail. Created my own server and there's a public demo. I don't have such detailed experience or knowledge re: http. (Although since websocket requests are upgraded http requests, I have some.)
On my end, the server reports details of each hit. Among them are a bunch of http keep-alive requests. My server doesn't handle them because they're not websocket requests. But it got my curiosity up.
The whole big thing about websockets is that the connection stays alive. Then you can pass messages in both directions (simultaneously even). I've read that the Keep-Alive HTTP connection is a relatively new development (I don't know how many years in people time, just that it's only included in the latest standard - 1.1 - is that actually old now?)
I guess I can assume that there's a behavioral difference between the two or there would have been no reason for a websocket standard? What's the difference?
The Keep-Alive HTTP header has existed since HTTP 1.0 and is used to indicate that an HTTP client would like to maintain a persistent connection with the HTTP server. The main objective is to eliminate the need to open a new TCP connection for each HTTP request. However, while a persistent connection is open, the communication between client and server still follows the basic HTTP request/response pattern. In other words, the server side can't push data to the client.
WebSocket is a completely different mechanism, used to set up a persistent, full-duplex connection. With this full-duplex connection, the server side can push data to the client, and the client should be prepared to process data from the server side at any time.
Quoting corresponding entries on Wikipedia for reference:
1) http://en.wikipedia.org/wiki/HTTP_persistent_connection
2) http://en.wikipedia.org/wiki/WebSocket
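To make that difference concrete, here is a small sketch using the HTTP and WebSocket clients that ship with Java 11+ (java.net.http); the URLs are placeholders, not real endpoints.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.net.http.WebSocket;
    import java.util.concurrent.CompletionStage;

    // Contrast sketch: a keep-alive HTTP connection still yields exactly one
    // response per request, while a WebSocket listener can receive server-pushed
    // messages at any time.
    public class KeepAliveVsWebSocket {

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // HTTP: the underlying connection may be reused (keep-alive), but data
            // only ever arrives as the response to a request we sent.
            HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/api/status")).build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("HTTP status: " + response.statusCode());

            // WebSocket: after the handshake, onText can fire whenever the server
            // decides to send something, with no request from our side.
            client.newWebSocketBuilder()
                  .buildAsync(URI.create("wss://example.com/events"), new WebSocket.Listener() {
                      @Override
                      public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
                          System.out.println("server pushed: " + data);
                          webSocket.request(1); // ask for the next message
                          return null;
                      }
                  })
                  .join();

            Thread.sleep(30_000); // keep the demo alive long enough to observe pushes
        }
    }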
You should read up on COMET, a design pattern which shows the limits of HTTP Keep-Alive. Keep-Alive is over 12 years old now, so it's not a new feature of HTTP. The problem is that it's not sufficient; the client and server cannot communicate in a truly asynchronous manner. The client must always use a "hanging" request in order to get a message back from the server; the server may not just send a message to the client at any time it wants.
HTTP vs WebSockets
REST (HTTP)
Resources benefit from caching when the representation of a resource changes rarely or multiple clients are expected to retrieve the resource.
HTTP methods have well-known idempotency and safety properties. A request is "idempotent" if issuing it multiple times has the same effect as issuing it once.
The HTTP design allows for responses to describe errors with the request, with the resource, or to provide nuanced status information to differentiate between success scenarios.
HTTP has request and response functionality built in.
While HTTP/1.1 may allow multiple requests to reuse a single connection, there will generally be small timeout periods intended to control resource consumption.
You might be using HTTP incorrectly if…
Your design relies on a client polling the service often, without the user taking action.
Your design requires frequent service calls to send small messages.
The client needs to quickly react to a change to a resource, and it cannot predict when the change will occur.
The resulting design is cost-prohibitive. Ask yourself: Is a WebSocket solution substantially less effort to design, implement, test, and operate?
WebSockets
WebSocket design does not allow explicit or transparent proxies to cache messages, which can degrade client performance.
The WebSocket protocol offers support only for error scenarios affecting the establishment of the connection. Once the connection is established and messages are being exchanged, any additional error scenarios must be addressed in the messaging-layer design. On the other hand, WebSockets allow for higher efficiency than REST because they do not require the HTTP request/response overhead for each message sent and received.
When a client needs to react quickly to a change (especially one it cannot predict), a WebSocket may be best.
This makes the protocol well suited to “fire and forget” messaging scenarios and poorly suited for transactional requirements.
WebSockets were designed specifically for long-lived connection scenarios; they avoid the overhead of establishing connections and sending HTTP request/response headers, resulting in a significant performance boost.
You might be using WebSockets incorrectly if..
The connection is used only for a very small number of events, or for a very small amount of time, and the client does not need to react quickly to the events.
Your feature requires multiple WebSockets to be open to the same service at once.
Your feature opens a WebSocket, sends messages, then closes it—then repeats the process later.
You’re re-implementing a request/response pattern within the messaging layer.
The resulting design is cost-prohibitive. Ask yourself: Is an HTTP solution substantially less effort to design, implement, test, and operate?
Ref: https://blogs.windows.com/buildingapps/2016/03/14/when-to-use-a-http-call-instead-of-a-websocket-or-http-2-0/

Server-to-server communication in IIS with HTTP

I have two servers running in IIS on different computers, and they need to communicate with each other. Because of firewalls, etc., HTTP is the only real option. It's bidirectional communication, not just request/response. WebSockets would be good, but the spec for that isn't finished, so I'm wondering if there are any other options I should look at.
I would spend a few minutes thinking about whether the communication could actually be done using request/response; a lot of the time you can convert your communication into request/response. HTTP really is very request/response based, so otherwise you kind of have to hack around it using various Comet techniques like long polling. If you have control over both ends of the communication, there is no reason not to use WebSockets even if the standard isn't settled yet. So long as you agree with yourself on how it should be implemented, complying with the final standard isn't important.

How do I create a chat server that is not driven by polling?

I have created a simple chat server that is driven by client polling. Clients send requests for data every few seconds, and get handed any new messages as well as information about whether their peer is still connected.
Since the client is running on a mobile platform (iPhone), I've been looking for ways of getting rid of the polling, which quickly drains the battery. I've read that it's possible to keep an http connection open indefinitely, but haven't understood how to utilize this technique in practice. I'm also wondering whether such connections are stable enough to use in a mobile setting.
The ideal scenario would be that the server only sends data to clients when an event that affects them has occurred (such as a peer posting a message or going off line).
Is it advisable to try to accomplish this over HTTP, or would I have to write my own protocol over TCP? How hard would it be to customize XMPP to my needs (my chat server has some specialized features that I would need to be able to implement easily)?
How about push technology? See http://en.wikipedia.org/wiki/Comet_(programming)
I think you're describing XMPP over BOSH.
http://xmpp.org/extensions/xep-0206.html
I've used this http-binding method between a chat server and javascript client on non-mobile devices. It worked well for me.
You might like to check out this project, which uses a variety of techniques including Comet. Release details are here; here's a snippet from that page:
It's my distinct pleasure to be able to announce the first public showing of a project that I've been working on in my spare time in the last month or two, a new Web Based IRC chat application.
This project brings together a lot of new technologies which had to be developed to make this a feasible, scalable and efficient.
Some of the underlying tools build to make this posible that i consider 'stable enough' are already released, such as the php Socket Daemon library i wrote to be able to deal with hundreds up to many thousands of "Comet" http connections, and an equal amount of IRC client connections.
I just found this article myself, which describes the following technique (which I referred to in the question):
... have the client make an HTTP request and have the server hold the request on the queue until there is a message to push. If the TCP/IP connection is lost or times out, the client will make a new HTTP request, and the delay will only be the round-trip time for a request/response pair... this model effectively requires two TCP/IP connections for HTTP, client to server, though none permanent and hence mobile friendly.
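A bare-bones sketch of the client side of that long-polling loop, using the java.net.http client from Java 11+; the endpoint URL and the 60-second hold are placeholders.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    // Long-polling sketch: each request hangs on the server until a message is
    // available (or the hold times out), then the client immediately re-polls.
    public class LongPollingClient {

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            URI endpoint = URI.create("https://example.com/chat/poll");

            while (true) {
                HttpRequest request = HttpRequest.newBuilder(endpoint)
                        .timeout(Duration.ofSeconds(60)) // server holds the request up to ~60 s
                        .GET()
                        .build();
                try {
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    if (response.statusCode() == 200 && !response.body().isEmpty()) {
                        System.out.println("new message: " + response.body());
                    }
                    // Empty body or 204: the hold expired with nothing to deliver.
                } catch (java.io.IOException timedOutOrDropped) {
                    // Connection lost or timed out: just loop and poll again.
                }
                // Loop immediately; the "delay" is only the round trip of a new request.
            }
        }
    }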
I think this is nearly impossible and dangerous. The internet is stateless and connectionless, meaning that the connection between client and server always has to be treated as unreliable. And this is not for fun.
By trying to maintain a stateful connection you are introducing new issues, especially for a 3G application. What if the connection breaks? You have no control over the server and cannot push.
I think it would even be easier to send SMS/text messages and have an application that handles those.
