Updating data in a client application, how to avoid polling? - http

I have a desktop client application that is talking to a server application through a REST API using simple HTTP posts. I currently have the client polling every X minutes, but I would like the data to be refreshed more frequently. Is it possible to have the server notify the client of any new data, or is that outside the scope of what an HTTP server is meant to do? Any thoughts on the best way to approach this would be much appreciated. Thanks!

You may want to check the accepted answer to the following Stack Overflow post, which describes, with a very basic example, how to implement long polling using PHP on the server side:
Simple “Long Polling” example code
When using long polling, your client application starts a request to the HTTP server with an infinite timeout (or a very long one). As soon as new data is available, the server finds an active connection ready, so it can push the data immediately. In traditional polling, you would have to wait until the application initiates a new poll, plus the network latency to reach the server, before new data is sent.
When the data is sent, the connection is closed, but your application should open a new one immediately in order to keep a connection to the server open at all times. There will actually be a very small gap during which there is no active connection, but this is negligible in many applications.
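As a rough sketch of the client side of that loop (shown here in Java with java.net.http; the http://example.com/updates endpoint and the 5-minute timeout are just placeholder assumptions):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpTimeoutException;
import java.time.Duration;

public class LongPollClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        URI updates = URI.create("http://example.com/updates"); // hypothetical endpoint
        while (true) {
            HttpRequest request = HttpRequest.newBuilder(updates)
                    .timeout(Duration.ofMinutes(5)) // the "very long" request timeout
                    .GET()
                    .build();
            try {
                // Blocks until the server responds, i.e. when it finally has new data.
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println("New data: " + response.body());
            } catch (HttpTimeoutException e) {
                // Nothing arrived within the timeout; fall through and reconnect.
            }
            // The loop immediately opens the next request, keeping the gap minimal.
        }
    }
}

The point is that the pending request, not a timer, is what waits; the server answers only when it has something to say.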

If you hold the HTTP connection open on the server side, you can write data whenever there's an update and then flush the connection to actually send it. This may cause issues with the TCP/IP stack if tens of thousands of connections are required, though.
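A minimal sketch of that idea (not tied to any particular framework; here using the JDK's built-in com.sun.net.httpserver, with a single shared queue standing in for "whenever there's an update"):

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class HoldOpenServer {
    // Other parts of the application would put updates into this queue.
    static final BlockingQueue<String> updates = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/updates", exchange -> {
            try {
                String data = updates.take(); // hold the connection until there is data
                byte[] body = data.getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                    out.flush(); // actually push the bytes to the waiting client
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                exchange.sendResponseHeaders(204, -1); // give up without a body
            }
        });
        // One thread per held request: fine for a sketch, not for tens of thousands of clients.
        server.setExecutor(Executors.newCachedThreadPool());
        server.start();
    }
}

A real server would need a queue per client and non-blocking I/O to avoid exactly the thread-per-connection problem mentioned above; this only shows the hold-and-flush idea.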

Related

lost server replies/errors with netty's object decoder

I have a very simple netty app which serves both as server and a client.
Client uses channel.writeAndFlush() to send request to server and then blocks on monitor.wait().
In client's InboundAdapter in channelRead() I find the appropriate monitor and do monitor.notify() to let the requesting client thread to proceed working on the server's reply.
On the server in ChannelHandler's channelRead() I do the following:
To limit the amount of requests being processed, I submit a task which does the real work to the existing EventLoop: ctx.executor().submit(new Task()). In that task I do heavy IO operations and after that I writeAndFlush() the results back to the client.
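In code, that pattern presumably looks roughly like this (SearchRequestHandler and doHeavyIo are placeholder names for the real handler and work):

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class SearchRequestHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // Hand the work off to the channel's own event loop so channelRead() returns quickly.
        ctx.executor().submit(() -> {
            Object reply = doHeavyIo(msg);  // placeholder for the heavy IO operations
            ctx.writeAndFlush(reply);       // send the result back over the same channel
        });
    }

    private Object doHeavyIo(Object request) {
        // ... the real work would happen here ...
        return request;
    }
}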
Here is my pipeline setup:
new ObjectEncoder(),
new ObjectDecoder(LibConstants.Search.MAX_REQUIEST_SIZE, ClassResolvers.cacheDisabled(null))
Here is the bootstrap config:
new ServerBootstrap()
.channel(NioServerSocketChannel.class)
.option(ChannelOption.SO_BACKLOG, 1000)
.option(ChannelOption.SO_KEEPALIVE, true)
I have 2 problems:
Rather often I get io.netty.handler.codec.DecoderException: java.io.UTFDataFormatException on the client when receiving a reply from the server. I cannot find any obvious reason for this, since my pipeline setup is so simple.
A reply from the server just doesn't appear on the client. In the logs I see a successful flush on the server, but the reply never arrives at the client. This is very hard to deal with, since my app is very latency-sensitive; any timeout I set would kill the user experience.
This all happens over a VPN, so there is a possibility that the VPN device misbehaves in some weird way, but I was hoping that TCP would handle any packet loss or corruption that can happen in the channel.
Any advice or experience you can share will be much appreciated!

Constantly retrieves information from the stream

How do Facebook, Google Plus, and other information websites constantly retrieve new information from the stream?
I suppose the retrieval is asynchronous, but how does it happen constantly? Is it like an infinite loop?
Which technology is used?
There are a few different approaches to displaying updates in near-real time on the web. Here are some of the most common ones:
Short polling
The simplest approach to the problem is to continuously poll the server on a short interval (hence the name). This means that every few seconds, client-side code sends an asynchronous request to the server and displays the result. The downside to this approach is that if updates happen less frequently than the server is queried, the client is doing a lot of work for little payoff. There may also be a slight delay between when the event happens on the server and when the client receives it, based on the polling frequency.
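A bare-bones short-polling loop (sketched here in Java rather than browser JavaScript; the /updates endpoint and the 5-second interval are arbitrary assumptions) might look like:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ShortPollClient {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("http://example.com/updates")) // hypothetical endpoint
                .GET()
                .build();

        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Every 5 seconds, ask the server for fresh data whether or not anything changed.
        scheduler.scheduleAtFixedRate(() ->
                        client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                              .thenAccept(resp -> System.out.println("Latest: " + resp.body())),
                0, 5, TimeUnit.SECONDS);
    }
}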
Long polling
The next evolutionary step from short polling is what's known as long polling, where the client-side JavaScript fires off an asynchronous request to the server as soon as the page loads. The server only responds to the request when an update is made, and once the response reaches the client, another request is fired off immediately. The key part of this approach is that the asynchronous request can wait for the server for a long time.
Long polling saves bandwidth and computation time, since the response is only handled when the server has something that changed. It does require more complex server-side logic, but it allows for near-instant updates on the client side.
This question has a decent sample: How do I implement basic "Long Polling"?
WebSockets
WebSockets are a relatively new technology, and allow for two-way communication in a way that's similar to standard network sockets. The server or client can send messages across the socket that trigger events on the other side of the connection. As nice as this is, browser support isn't yet widespread enough to make it a dependable solution.
For the current WebSocket specification, take a look at RFC 6455.
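For a rough feel of the two-way model (shown with the Java 11 WebSocket client API; ws://example.com/updates is a made-up endpoint):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class WebSocketSketch {
    public static void main(String[] args) throws Exception {
        WebSocket.Listener listener = new WebSocket.Listener() {
            @Override
            public CompletionStage<?> onText(WebSocket ws, CharSequence data, boolean last) {
                System.out.println("Server pushed: " + data); // event triggered by the server
                ws.request(1); // ask for the next message
                return null;
            }
        };
        WebSocket ws = HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .buildAsync(URI.create("ws://example.com/updates"), listener) // made-up endpoint
                .join();
        ws.sendText("subscribe", true); // messages can flow the other way, too
        Thread.sleep(60_000); // keep the demo alive long enough to receive pushes
    }
}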

Is polling the way to go for live chat on web?

I'm trying to implement a custom live chat program on the web, but I'm not sure how to handle the real-time (or near real-time) updates for users. Would it make more sense to send Ajax requests from the client side every second or so, polling the database for new comments?
Is there a way to somehow broadcast from the database each time a comment is added? If this is possible, how would that work? I'm using SQL Server 2008 with ASP.NET (C#).
Thanks!
Use long polling/server side push/comet:
http://en.wikipedia.org/wiki/Comet_(programming)
Also see:
http://en.wikipedia.org/wiki/Push_technology
I think when you use long polling you'll also want your web server to provide some support in the form of non-blocking io for requests, so that you aren't holding a thread per connection.
You could have each client poll the server, and at the server side keep the connection open without responding.
As soon as a message is detected at the server side, this data is returned through the already open connection. On receipt, your client immediately issues a new request.
There's some complexity, as you need to keep track on the server side of which connection is associated with which session, and which connections should be responded to in order to prevent timeouts.
I never actually did this, but it should be the most resource-efficient way.
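A sketch of that server-side bookkeeping (the class and method names here are made up; the 25-second timeout is an arbitrary value chosen to stay under typical request timeouts):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.TimeUnit;

public class PendingReplies {
    // One parked "future response" per session, completed when a comment arrives.
    private final ConcurrentMap<String, CompletableFuture<String>> waiting =
            new ConcurrentHashMap<>();

    // Called when a client's poll request arrives; the web layer holds the request
    // open until this future completes (or it times out and returns an empty answer).
    public CompletableFuture<String> awaitComment(String sessionId) {
        CompletableFuture<String> pending = new CompletableFuture<>();
        waiting.put(sessionId, pending);
        return pending.completeOnTimeout("", 25, TimeUnit.SECONDS);
    }

    // Called when a new comment is stored; answers every waiting client at once.
    public void publish(String comment) {
        waiting.forEach((session, pending) -> pending.complete(comment));
        waiting.clear();
    }
}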
Nope. Use queuing systems like RabbitMQ or ActiveMQ. Check MongoDB too.
A queuing system will give you publish-subscribe facilities.
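For a rough idea of the publish-subscribe shape this gives you (sketched with the RabbitMQ Java client, amqp-client 5.x assumed; the "chat" fanout exchange and the localhost broker are assumptions):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;
import java.nio.charset.StandardCharsets;

public class ChatQueueSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker location
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            channel.exchangeDeclare("chat", "fanout");

            // Subscriber: every consumer gets its own queue bound to the fanout exchange.
            String queue = channel.queueDeclare().getQueue();
            channel.queueBind(queue, "chat", "");
            DeliverCallback onMessage = (tag, delivery) ->
                    System.out.println(new String(delivery.getBody(), StandardCharsets.UTF_8));
            channel.basicConsume(queue, true, onMessage, tag -> { });

            // Publisher: anyone posting a comment just publishes to the exchange.
            channel.basicPublish("chat", "", null,
                    "hello room".getBytes(StandardCharsets.UTF_8));

            Thread.sleep(1_000); // give the broker a moment to deliver before closing
        }
    }
}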

Work-around needed: Windows Azure load balancers close idle connections after 60 seconds

A simple problem. I have an ASHX handler which generates a report. Unfortunately, this process can take 2 or more minutes to finish and Azure will close the connection before this handler can respond. Why? Because the connection is idle for too long, thus it is killed off.
So, I need to keep this connection alive in some way. To make it a bit more complex, the handler is called from a Silverlight application which will call the handler from a frame on the current webpage or (when not running from a browser) create a new browser instance to call the handler.
My challenge is to get around this timeout with a minimum amount of code. But also, the code needs to work exactly as it does now!
Opening the handler in a separate frame or browser window allows the report to be saved anywhere on the user's system. If I downloaded it from within the Silverlight code, I would not have proper write access: a Silverlight application is not given permission to write to the local disk, hence the work-around with the browser/frame.
Not too sure about HTTP transport, but you can certainly use TCP keep-alives at the socket level. However, then you would need to create a socket listener to download the HTTP content (way overkill).
Perhaps there is a much simpler solution? Why don't you have the client make the request to generate the report and have the handler return a SAS signature (time limited, read-only signature) to where the report will eventually be put in blob storage. This is very quick and requires no open TCP connection. The report generator should simply create the report in a file to be downloaded at the blob location it sent to the client (any GUID would work here) instead of streaming it back over the response. Finally, the client just needs to poll the location until it gets a file. Now you are nice and asynchronous with short open connections and don't have to worry about this TCP timeout issue. The code to do this is far, far less complex than anything to work around a TCP timeout.
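The polling step on the client can be as simple as probing the SAS URL until the blob exists, then handing that same URL to the existing browser/frame download. A sketch (in Java; the URL shown is a placeholder):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReportPoller {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Placeholder for the time-limited, read-only SAS URL returned by the handler.
        URI sasUrl = URI.create("https://myaccount.blob.core.windows.net/reports/abc?sig=xyz");

        while (true) {
            HttpRequest probe = HttpRequest.newBuilder(sasUrl)
                    .method("HEAD", HttpRequest.BodyPublishers.noBody())
                    .build();
            int status = client.send(probe, HttpResponse.BodyHandlers.discarding()).statusCode();
            if (status == 200) {
                break; // the report has been written to blob storage
            }
            Thread.sleep(5_000); // not there yet (e.g. 404); these probes are short and cheap
        }
        // At this point, hand sasUrl to the browser/frame so the user can save the file.
        System.out.println("Report ready at: " + sasUrl);
    }
}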

How do I create a chat server that is not driven by polling?

I have created a simple chat server that is driven by client polling. Clients send requests for data every few seconds, and get handed any new messages as well as information about whether their peer is still connected.
Since the client is running on a mobile platform (iPhone), I've been looking for ways of getting rid of the polling, which quickly drains the battery. I've read that it's possible to keep an http connection open indefinitely, but haven't understood how to utilize this technique in practice. I'm also wondering whether such connections are stable enough to use in a mobile setting.
The ideal scenario would be that the server only sends data to clients when an event that affects them has occurred (such as a peer posting a message or going off line).
Is it advisable to try to accomplish this over HTTP, or would I have to write my own protocol over TCP? How hard would it be to customize XMPP to my needs (my chat server has some specialized features that I would need to be able to implement easily)?
How about push technology? See http://en.wikipedia.org/wiki/Comet_(programming)
I think you're describing XMPP over BOSH.
http://xmpp.org/extensions/xep-0206.html
I've used this http-binding method between a chat server and javascript client on non-mobile devices. It worked well for me.
You might like to check out this project, which uses a variety of techniques including Comet. Release details are here; here's a snippet from that page:
It’s my distinct pleasure to be able to announce the first public showing of a project that I’ve been working on in my spare time in the last month or two, a new web-based IRC chat application.
This project brings together a lot of new technologies which had to be developed to make this feasible, scalable and efficient.
Some of the underlying tools built to make this possible that I consider ‘stable enough’ are already released, such as the PHP socket daemon library I wrote to be able to deal with hundreds up to many thousands of “Comet” HTTP connections, and an equal amount of IRC client connections.
I just found this article myself, which describes the following technique (which I referred to in the question):
... have the client make an HTTP request and have the server hold the request on the queue until there is a message to push. If the TCP/IP connection is lost or times out, the client will make a new HTTP request, and the delay will only be the round-trip time for a request/response pair ... this model effectively requires two TCP/IP connections for HTTP, client to server, though none permanent and hence mobile friendly.
I think this is nearly impossible and dangerous. The internet is stateless and connectionless, meaning that the connection between client and server is always treated as unreliable, and that is by design.
By trying to maintain a stateful connection you are introducing new issues, especially for a 3G application. What if the connection breaks? You have no control over the server and cannot push.
I think it would even be easier to send SMS/text messages and have an application that handles those.
