Server -> Many Clients: Simultaneous Events - networking

Not sure what category this question falls into; perhaps general networking / design / algorithms.
For a project I am looking at having one server with multiple connected clients. After some time, when all clients have connected, the server should send a message to each client instructing them to take some action. I need to guarantee that each client will execute this action at exactly the same time. Theoretically, how can this be done? What are the practical complications I will come up against? My target platform is mobile.
One solution I can think of:
The server actively and continuously keeps track of the round-trip latency for each client. Provided this latency doesn't change too quickly over time, the server should be able to compensate for each client's lag and stagger the messages so that all clients start execution at roughly the same time. Is there a better way?
One not-really related question: Client side and server side events not firing simultaneously

It can easily be done.
You don't care about latency, nor do you need the same machine time on the clients.
The key here is to create a precise appointment.
Since clients communicate with the server, and not vice versa (you didn't say anything about that, though), I can suggest the following solution:
When a client connects to the server, it should send its local time.
When the server decides it's time to set the event, it should send an appointment message to each client with the appointment expressed in that client's local time; the server can calculate this from the time the client reported when it connected.
Each client then knows exactly when it needs to act: it simply sets a timer that fires when its appointment time arrives.
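To make the idea concrete, here's a minimal sketch of the appointment scheme (illustrative only; the handshake, the message shape, and the 5-second lead time are assumptions, not anything from the question):

// Server side (TypeScript sketch): remember each client's clock offset at connect time.
type ClientInfo = { send: (msg: object) => void; clockOffsetMs: number };
const clients = new Map<string, ClientInfo>();

// Assumed handshake: the client reports its local time when it connects.
function onClientHello(id: string, send: (msg: object) => void, clientLocalTimeMs: number): void {
  clients.set(id, { send, clockOffsetMs: clientLocalTimeMs - Date.now() });
}

// When the server decides the event should fire (e.g. 5 seconds from now),
// translate that moment into each client's own clock and send the appointment.
function scheduleEventForAll(leadTimeMs = 5000): void {
  const serverFireAt = Date.now() + leadTimeMs;
  for (const { send, clockOffsetMs } of clients.values()) {
    send({ type: "appointment", fireAt: serverFireAt + clockOffsetMs });
  }
}

// Client side: set a timer for the appointment, which is already in local time.
function onAppointment(msg: { fireAt: number }, act: () => void): void {
  setTimeout(act, Math.max(0, msg.fireAt - Date.now()));
}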

In theory yes, you can, but not in real life.
At the very least you should add a validity time-slot: an action counts as valid only if it falls within that predefined window.
So basically "same moment" = "a predefined time slot".
The time-slot can be any window small enough to count as "the same moment" or real time for your purposes.
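As a tiny illustration of that validity check (the 200 ms slot width is an arbitrary assumption):

// An action counts as "simultaneous" if its timestamp lands inside the agreed slot.
function isActionValid(actionTimeMs: number, slotStartMs: number, slotWidthMs = 200): boolean {
  return actionTimeMs >= slotStartMs && actionTimeMs <= slotStartMs + slotWidthMs;
}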

Related

SockJS and Meteor: what if the load balancer does not support sticky sessions?

I'm exploring balancing options for Meteor. This article looks very cool and it says that the following should be supported to load balance Meteor:
Mongo oplog tailing. Otherwise it may take up to ten seconds for one Meteor instance to get updates from another, because the polling Mongo driver will be used, which polls and diffs the DB every ten seconds.
WebSocket. That's clear too - otherwise clients will fall back to HTTP and long-polling, which will work, but it's not as nice as WebSocket.
Sticky sessions, 'which are required by SockJS'. Here's where my question comes in:
As I understand it, 'sticky session support' means that a client is assigned to the same server for the duration of its session. Is it essential? What may happen if I don't configure sticky sessions at all?
Here's what I came up with myself:
Because Meteor stores all data sent to a client in memory, if a client connects to X servers, then X times more memory will be consumed.
Some minor (or major, if there is no oplog) lag may appear for the same user in, say, different tabs or windows, which may be surprising.
If SockJS reconnects and wants some data to persist across reconnections, it's going to have a bad time. I'm not sure how SockJS works; is this point valid?
What bad things can happen? These three points don't look very bad: the data is valid and available, perhaps at the cost of extra memory consumption.
Basics
Sticky sessions are required to ensure that the browser's in-memory session can be managed correctly by the server.
First let me explain why you need sticky sessions:
Each publish that uses an ordinary publish cursor keeps track of whatever collections the client may have, so when something changes it knows what to send back down to the client. This applies to every Meteor app that needs a DDP connection, which is the case with both WebSockets and SockJS.
Additionally, there may be other client session state stored in variables, but those would be edge cases (e.g. you store the user's state in a variable).
The problem happens when the connection drops and reconnects, but the connection somehow gets transferred to the other node (without re-establishing a new session); that node has no idea about the client's data, so the behaviour can turn out a bit weird.
The issue with SockJS & Long Polling
With SockJS there is an additional issue. SockJS uses websocket emulation when it falls back to long polling.
With long polling, a new connection attempt/new HTTP request is made every time new data is available.
If sticky sessions are not enabled, each of these requests can be randomly assigned to a different node/dyno.
So you have a 50% chance (in the case it's random and there are two nodes) that the server handling the request has no idea about the client's DDP session, every time new data is available.
That node would then force the client to re-negotiate a connection, or ignore the client's DDP commands, and you would end up with very weird behaviour on the client.
Half of these requests would go to the wrong node.
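For illustration only (the two-node setup and the hashing scheme below are assumptions, not how any particular balancer works), the difference boils down to how a backend is picked for each long-poll request:

// Non-sticky: every long-poll request may land on a different node.
const nodes = ["node-a", "node-b"];

function pickRandom(): string {
  return nodes[Math.floor(Math.random() * nodes.length)];
}

// Sticky: key the choice on something stable (e.g. the SockJS session id),
// so the same client always reaches the node holding its DDP session state.
function pickSticky(sessionId: string): string {
  let hash = 0;
  for (const ch of sessionId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return nodes[hash % nodes.length];
}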

How to throttle SignalR clients on the server side

I am using a PersistentConnection for publishing large amounts of data (many small packages) to the connected clients.
It is basically one-way data flow (each client will call endpoints on other servers to set up various subscriptions, so they will not push any data back to the server via the SignalR connection).
Is there any way to detect that the client cannot keep up with the messages sent to it?
The example could be a mobile client on a poor connection (e.g. in a roaming situation, the speed may vary a lot). If we are sending 100 messages per second, but the client can only handle 10, we will eventually lose the messages (due to the message buffer on the server side).
I was looking for a server side event, similar to what has been done on the (SignalR) client, e.g.
protected override Task OnConnectionSlow(IRequest request, string connectionId) {}
but that is not part of the framework (for good reasons, I assume).
I have considered using the approach (suggested elsewhere on Stack Overflow) of letting the client tell the server (e.g. every 10-30 seconds) how many messages it has received; if that number differs a lot from the number of messages sent to the client, it is likely that the client cannot keep up.
The event would be used to tell the distributed backend that the client cannot keep up, and then turn down the data generation rate.
There's no way to do this right now other than coding something custom. We have discussed this in the past as a potential feature, but it isn't anywhere on the roadmap right now. It's also not clear what "slow" means, as that's up to the application to decide. There'd probably be some kind of bandwidth/time/message-based setting that would make this hypothetical event trigger.
If you want to hook in at a really low level, you could use OWIN middleware to replace the client's underlying stream with one that you own, so that you'd see all of the data going over the wire (you'd have to do the same for websockets too, and that might be non-trivial).
Once you have that, you could write some time-based logic that determines whether the flush is taking too long, and kill the client connection that way.
That's very fuzzy but it's basically a brain dump of how a feature like this could work.
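Whatever the transport, the counting idea from the question can be sketched generically. The following is illustrative TypeScript only, not SignalR's API; the report interval and the backlog threshold of 500 are assumptions:

// Server keeps a per-connection count of messages sent; the client periodically
// reports how many it has actually received, and a large gap means it is slow.
interface ConnState { sent: number; lastAcked: number }
const connections = new Map<string, ConnState>();

function onMessageSent(connectionId: string): void {
  const s = connections.get(connectionId) ?? { sent: 0, lastAcked: 0 };
  s.sent++;
  connections.set(connectionId, s);
}

// Called when the client's periodic report (e.g. every 10-30 seconds) arrives.
function onClientReport(connectionId: string, receivedCount: number, onSlow: (id: string) => void): void {
  const s = connections.get(connectionId);
  if (!s) return;
  s.lastAcked = receivedCount;
  if (s.sent - receivedCount > 500) onSlow(connectionId); // threshold is arbitrary
}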

Constantly retrieving information from the stream

How do Facebook, Google Plus, and other information websites constantly retrieve information from the stream?
I suppose there is some asynchronous retrieval, but how does it keep getting data constantly? Is it like an infinite loop?
Which technology is used?
There are a few different approaches to displaying updates in near-real time on the web. Here are some of the most common ones:
Short polling
The simplest approach to the problem is to continuously poll the server on a short interval (hence the name). This means that every few seconds, client-side code sends an asynchronous request to the server and displays the result. The downside to this approach is that if updates happen less frequently than the server is queried, the client is doing a lot of work for little payoff. There may also be a slight delay between when the event happens on the server and when the client receives it, based on the polling frequency.
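For example, a minimal short-polling loop might look like this (the /updates endpoint and the 5-second interval are placeholders):

// Ask the server for updates every few seconds and display whatever comes back.
const POLL_INTERVAL_MS = 5000; // placeholder interval

function render(updates: unknown): void {
  console.log("new updates:", updates); // replace with real display logic
}

async function pollOnce(): Promise<void> {
  const res = await fetch("/updates"); // placeholder endpoint
  render(await res.json());
}

setInterval(() => { pollOnce().catch(console.error); }, POLL_INTERVAL_MS);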
Long polling
The next evolutionary step from short polling is what's known as long polling, where the client-side JavaScript fires off an asynchronous request to the server as soon as the page loads. The server only responds to the request when an update is made, and once the response reaches the client, another request is fired off immediately. The key part of this approach is that the asynchronous request can wait for the server for a long time.
Long polling saves bandwidth and computation time, since the response is only handled when the server has something that changed. It does require more complex server-side logic, but it does allow for near-instant updates on the client side.
This question has a decent sample: How do I implement basic "Long Polling"?
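A bare-bones long-polling client, again with placeholder names, could be structured like this:

// Keep one request pending at all times; the server holds the response until
// something changes, and the client immediately re-issues the request.
async function longPoll(): Promise<void> {
  while (true) {
    try {
      const res = await fetch("/updates?wait=true"); // placeholder endpoint
      if (res.ok) {
        console.log("update:", await res.json()); // replace with real display logic
      }
    } catch {
      // Network hiccup or timeout: back off briefly before reconnecting.
      await new Promise((resolve) => setTimeout(resolve, 1000));
    }
  }
}

longPoll();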
WebSockets
WebSockets are a relatively new technology, and allow for two-way communication in a way that's similar to standard network sockets. The server or client can send messages across the socket that trigger events on the other side of the connection. As nice as this is, browser support isn't widespread enough to make it a dependable solution.
For the current WebSocket specification, take a look at RFC 6455.
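A minimal browser-side WebSocket client looks roughly like this (the URL and message shapes are placeholders):

// Open a socket and react to pushed messages; no polling involved.
const socket = new WebSocket("ws://example.com/stream"); // placeholder URL

socket.addEventListener("open", () => {
  socket.send(JSON.stringify({ type: "subscribe", feed: "news" })); // placeholder message
});

socket.addEventListener("message", (event) => {
  console.log("server pushed:", event.data); // replace with real display logic
});

socket.addEventListener("close", () => {
  console.log("connection closed; a real client would reconnect here");
});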

IIS6 HTTP request prioritization

I am submitting POST requests to an external server running IIS6. This is a time critical request where I want to ensure that my request is processed at a specific time (say 10:00:00 AM). No earlier. And I want to ensure that at that specific time, my request is assigned the highest priority over other requests. Would any of this help:
Sending most of the message a few seconds early and sending the last byte or so a few milliseconds prior to 10:00:00. Not sure if this will help as I will be competing with other requests that come in around that time. Will IIS assign a higher priority to my request based on how long I am connected?
Anything that I can add to the message header to tell the server to queue my request and process only at a specific time?
Any known hacks that I can leverage?
No. HTTP is not a real-time protocol, and it usually runs on top of TCP/IP, which is not a real-time protocol either. While you can get near real-time behaviour out of such an architecture, it's far from simple; don't take my word for it, go read the source code for xntpd.
Having said that, you give no details of the actual level of precision you require, but your post implies it could be up to a second, which is a very long time for submitting a request to a webserver. On the other hand, scheduling such an event to fire client-side with this level of accuracy is very difficult. I've not tried measuring the accuracy of the scheduler on MS Windows NT, but elsewhere I'd only expect it to be accurate to about 5 minutes. So you'd need to schedule the job to start 5 minutes early, then sleep for 10 milliseconds at a time until the target time rolls around.
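A sketch of that "start early, then sleep in small slices" idea (the figures are just the ones from the answer, and the OS timer accuracy caveats still apply):

// Wake up well before the target time, then sleep in short slices until it arrives.
async function fireAt(targetMs: number, action: () => void): Promise<void> {
  const earlyWakeMs = 5 * 60 * 1000; // start checking 5 minutes early
  const sliceMs = 10;                // 10 ms sleep slices near the target

  const untilEarly = targetMs - earlyWakeMs - Date.now();
  if (untilEarly > 0) await new Promise((resolve) => setTimeout(resolve, untilEarly));

  while (Date.now() < targetMs) {
    await new Promise((resolve) => setTimeout(resolve, sliceMs));
  }
  action();
}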
But then again, thinking about why you need to run any job with any sort of timing accuracy makes me think that you're trying to solve the problem the wrong way.
C.
It sounds like you need more of a scheduler system than trying to use HTTP. HTTP is a stateless protocol: you send a request to IIS, you get a response.
What you might want to consider is taking that request and storing the information you require somewhere (a database). Then, using some sort of scheduler (cron jobs, scheduled tasks), you act on that information at the desired time.
What you want, you probably can't achieve with IIS; it's not what it is designed to do.
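A rough sketch of that store-and-schedule idea (the in-memory "table", field names, and the one-second tick stand in for a real database and cron job; all are assumptions):

// The web request only records what to do and when; a separate scheduler
// loop fires the work at the desired time.
interface PendingJob { runAt: number; payload: string; done: boolean }
const jobTable: PendingJob[] = []; // stand-in for a database table

// Handler for the incoming POST: just persist the job.
function handleRequest(runAt: number, payload: string): void {
  jobTable.push({ runAt, payload, done: false });
}

// Scheduler tick, run e.g. once a second by cron or a scheduled task.
function schedulerTick(): void {
  const now = Date.now();
  for (const job of jobTable) {
    if (!job.done && job.runAt <= now) {
      job.done = true;
      console.log("processing", job.payload); // replace with the real time-critical work
    }
  }
}

setInterval(schedulerTick, 1000);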

Detecting (on the server side) when a Flex client disconnects from BlazeDS destination

I'd like to know whether it's possible to easily detect (on the server side) when Flex clients disconnect from a BlazeDS destination, please. My scenario is simply that I'd like to use this to figure out how long each of my clients is connected for each session. I need to be able to differentiate between clients as well (i.e. not just counting the number of currently connected clients, which I can see in ds-console).
Whilst I could program an "I'm now logging out" process into my clients, I don't know whether this will fire if the client simply navigates away to another web page rather than going through said logout process.
Can anyone suggest if there's an easy way to do this type of monitoring on the server side, please?
Many thanks,
Alex
Implement your own adapter by extending ServiceAdapter.
Then override this method:
@Override
public boolean handlesSubscriptions() {
    return true;
}
so that you can handle subscription and unsubscription events.
Then you can manage those in the manage function:
@Override
public Object manage(CommandMessage commandMessage) {
    switch (commandMessage.getOperation()) {
        case CommandMessage.SUBSCRIBE_OPERATION:
            // a client subscribed to the destination - record the connect time here
            break;
        case CommandMessage.UNSUBSCRIBE_OPERATION:
            // a client unsubscribed (or was disconnected) - work out the session length here
            break;
    }
    return null; // nothing special to return for these commands
}
You can also catch different commands.
Hope this helps.
The only way to do it right is to implement a heartbeat mechanism in one way or another. You can use HTTP keep-alive coupled with session expiry, as suggested before, but my preference is to use the messaging mechanism from BlazeDS (send a message every X seconds). You can control the time interval and other aspects (maybe you want to detect that the client has done nothing for several hours and invalidate the session even if the client is still connected).
If you want to be notified instantly (as in a chat application) when a client disconnects, a solution is to have a socket (RTMP) or some emulation of it (HTTP streaming), which will detect immediately that the client has disconnected. However, the disconnection can be temporary (maybe the network was down for one second and then came back), and you should detect that as well.
I would assume BlazeDS would provide a callback or event for when a client disconnects, but I haven't worked with Blaze so that would just be a guess. First step would be to check the documentation to see if it does though, as that would be your best bet.
What I've done in cases where there isn't a disconnect event (or it's not reliable) is to add a keepalive message. For instance, the client would be configured to send a keepalive message every 30 seconds, and if the server goes more than, say, 2 minutes without seeing a keepalive then it assumes the client has disconnected. This would let you differentiate between different clients, and you can play around with the rate and check times to get something you're happy with.
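A generic sketch of that keepalive bookkeeping (plain TypeScript, nothing BlazeDS-specific; the 30-second rate and 2-minute timeout are the figures from the answer):

// Remember when each client last pinged, and periodically sweep for clients
// that have gone quiet for longer than the timeout.
const KEEPALIVE_TIMEOUT_MS = 2 * 60 * 1000;
const lastSeen = new Map<string, number>();

// Called whenever a keepalive message arrives (clients send one every ~30 seconds).
function onKeepalive(clientId: string): void {
  lastSeen.set(clientId, Date.now());
}

// Sweep: anything silent for longer than the timeout is treated as disconnected.
setInterval(() => {
  const now = Date.now();
  for (const [clientId, seenAt] of lastSeen) {
    if (now - seenAt > KEEPALIVE_TIMEOUT_MS) {
      lastSeen.delete(clientId);
      console.log(`client ${clientId} assumed disconnected`); // record the session length here
    }
  }
}, 30 * 1000);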
