Flex and data concurrency

I am about to embark on a medium-scale project. This isn't a very high priority on my long list of things to do, but I have been thinking about how I could effectively handle data concurrency.
I will be using a stateless EJB backend for my Flex application.
Ideally I am looking for a simple method to deal with data concurrency, e.g. if data is saved in one interface it is refreshed in another, or the user is warned that the data has changed before a new version is saved.
Has anyone any ideas? I am at a loss at the moment. As I mentioned, it's not a high priority, but I would feel a lot better if I had some mechanism to improve the process.

If you are planning on using AMF channels for communication, you can use the long-polling feature to effectively give your application "push message" type support. Both BlazeDS and GraniteDS support this capability for exactly the reasons you mention.

Version control systems store a user_id and a datetime for every revision; you can use the same method. The client app gets the current datetime along with the requested data and saves it. When the app sends changed data back, it includes that saved datetime. The server compares the datetime of the last revision with the received datetime and replies to the app accordingly.
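A minimal sketch of that server-side check in Node-style JavaScript (the db.query helper and the table/column names are assumptions for illustration):
// Optimistic concurrency via a last-modified timestamp.
async function saveRecord(id, newData, clientLoadedAt) {
  const [row] = await db.query(
    'SELECT last_modified FROM records WHERE id = ?', [id]);
  // If the row changed after the client loaded it, reject the save so
  // the client can warn the user and refresh its copy.
  if (row.last_modified > clientLoadedAt) {
    return { saved: false, conflict: true, serverVersion: row.last_modified };
  }
  await db.query(
    'UPDATE records SET data = ?, last_modified = NOW() WHERE id = ?',
    [newData, id]);
  return { saved: true };
}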
A second method is using broadcast messages from the server to the clients, but I don't think it's applicable in your case. That method is usually put into practice on a LAN (an environment with a stable connection).

Related

How to show updated data to users as fast as possible (not real-time)?

In the database, some entity is being updated by a backend process. We want to show this updated value to the user on the website, not in real time but as fast as possible.
Problems we are facing with these approaches:
Polling: as we know, there are better techniques than polling, like SSE and WebSockets.
SSE: in SSE the connection stays open for a long time (from what I found online, it behaves like long polling), which might cause problems as the number of users increases.
WebSockets: as we need only one-way communication (from server to client), SSE is better than this.
Our Solution
We check the database on every user request and update the value. (This is not very good, as it depends on the user's next request.)
Is this a good approach, or is there a better way to do this? Am I missing something about SSE (have I misunderstood it)?
Is it fine to use SignalR instead of all this? (Does it have a long-connection issue or not?)
Thanks.
It really just depends on your requirements what you should use.
Options:
Your clients only need the updated information when they make a request -> go your way (check on each request).
If you need a solution with different client types (web client, WinForms client, Android client, ...) and, for example, different browser types that you have to support: not all browsers support all transport mechanisms. SignalR was designed to automatically choose the right transport mechanism according to what a client supports --> SignalR is an option. (Read more details here: https://www.asp.net/signalr) It also has options to keep your connection alive.
There are also alternatives like https://pusher.com/ (in short, this is a queue where you can send messages, and also subscribe for messages), but such services are only free up to, for example, a certain data volume.
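Since SSE comes up in the question, here is a minimal sketch of that approach for comparison (Node.js server plus browser EventSource; the /updates endpoint, the payload, and updateUi are assumptions):
// Server (Node.js): stream events over a long-lived HTTP connection.
const http = require('http');
http.createServer((req, res) => {
  if (req.url === '/updates') {
    res.writeHead(200, {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
    });
    // A real app would push when the backend process updates the entity;
    // here a timer simulates the change notification.
    const timer = setInterval(() => {
      res.write('data: ' + JSON.stringify({ value: Date.now() }) + '\n\n');
    }, 5000);
    req.on('close', () => clearInterval(timer));
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);

// Client (browser): each open tab holds one connection, which is the
// scaling concern raised in the question.
const source = new EventSource('/updates');
source.onmessage = (e) => updateUi(JSON.parse(e.data)); // updateUi is hypothetical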
You can use event-based communication: whenever there is a change (event) in the backend/database, the server should send a message to the clients.
Your app should register for the respective events and refresh the UI whenever there is an update.
We used Socket.IO for this use case in our apps and it worked well.
Here is the website https://socket.io/
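A minimal sketch of that pattern with Socket.IO (the event name, the port, and the updateUi handler are illustrative):
// Server: broadcast an event to all connected clients whenever the
// backend detects a change (however that detection is wired up).
const io = require('socket.io')(3000);
function onDatabaseChange(change) {
  io.emit('entity:updated', change); // made-up event name
}

// Client (using the socket.io-client library): register for the event
// and refresh the UI on each update.
const socket = io('http://localhost:3000');
socket.on('entity:updated', (change) => {
  updateUi(change); // hypothetical UI refresh function
});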

Meteor.js: how real-time is it really?

I have had a chance to play with this tool for a while now, and I made a chat application instead of a hello world. My project has two Meteor applications sharing the same Mongo database:
client
operator
When I type a message from the operator console, it sometimes takes as much as 7-8 seconds to appear for the subscribed client. So my question is: how much real-time behavior can I actually expect from Meteor? Right now I see better results with other services such as PubNub or Pusher.
Could the delay come from the fact that two applications are subscribed to the same db?
P.S. I need two applications because the client and operator apps are totally different, mostly in design and media libraries (CSS/jQuery plugins etc.), and this is the only way I found to make the client app much lighter.
If your two apps write to the same database without DDP between them, they are not going to operate in real time. You should either use one complete app, or use DDP to relay messages to the other instance (via Meteor.connect).
This is a bit of an issue for the moment if you want to do the subscription on the server, as there isn't really server-to-server DDP support with subscriptions yet. So you need to use the client to make the subscription:
connection = Meteor.connect("http://YourOtherMeteorInstanceUrl");
connection.subscribe("messages");
Instead of
Meteor.subscribe("messages");
In your client app, of course using the same subscription names as the corresponding publish functions on the other Meteor instance.
Akshat's answer is good, but here's a bit more explanation of why:
When Meteor is running it adds an observer to the collection, so any changes to data in that collection are immediately reactive. But, if you have two applications writing to the same database (and this is how you are synchronizing data), the observer is not in place. So it's not going to be fully real-time.
However, the server does regularly poll the database for outside changes, hence the 7-8 second delay.
It looks like your applications are designed this way to overcome the limitation Meteor has right now where all client code is delivered to all clients. Fixing this is on the roadmap.
In the meantime, in addition to Akshat's suggestion, I would also recommend using Meteor methods to insert messages. Then, from the client application, use Meteor.call('insertMessage', options, ...) to add messages via DDP, which will keep the application real-time (sketched below).
You would also want to separate the databases.
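A minimal sketch of that approach, assuming a Messages collection; the method name and fields are made up:
// In the app that owns the writes: define the method on the server.
Meteor.methods({
  insertMessage: function (options) {
    return Messages.insert({ text: options.text, createdAt: new Date() });
  }
});

// In the other app: call the method over the DDP connection instead of
// writing to the shared database directly, so Meteor's observers fire
// immediately rather than waiting for the poll cycle.
var connection = Meteor.connect('http://YourOtherMeteorInstanceUrl');
connection.call('insertMessage', { text: 'hello' }, function (err, id) {
  if (err) console.error(err);
});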

Disconnected meteor application

I am interested in creating an application using the Meteor framework that will be disconnected from the network for long periods of time (multiple hours). I believe Meteor stores local data in RAM in a mini-MongoDB JS structure. If the user closes the browser or refreshes the page, all local changes are lost. It would be nice if local changes were persisted to disk (localStorage? IndexedDB?). Any chance that's coming soon for Meteor?
Related question: how does Meteor deal with document conflicts? In other words, if two users edit the same MongoDB JSON doc, how is that conflict resolved? Optimistic locking?
Conflict resolution is "last writer wins".
More specifically, each MongoDB insert/update/remove operation on a client maps to an RPC. RPCs from a given client always play back in order. RPCs from different clients are interleaved on the server without any particular ordering guarantee.
If a client tries to issue RPCs while disconnected, those RPCs queue up until the client reconnects, and then play back to the server in order. When multiple clients are executing offline RPCs, the order they finally run on the server is highly dependent on exactly when each client reconnects.
For some offline mutations like MongoDB's $inc and $addToSet, this model works pretty well as is. But many common modifiers like $set won't behave very well across long disconnects, because the mutation will likely conflict with intervening changes from other clients.
So building "offline" apps is more than persisting the local database. You also need to define RPCs that implement some type of conflict resolution. Eventually we hope to have turnkey packages that implement various resolution schemes.
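A concrete illustration of the $inc vs. $set point above (the collection and field names are hypothetical):
// A counter starts at 5 and two clients increment it while offline.
// $inc is commutative: the queued RPCs replay as 5 -> 6 -> 7, which is
// what both users intended.
Counters.update(counterId, { $inc: { value: 1 } });

// $set is last-writer-wins: if both clients computed the new value (6)
// from their stale copies, the replays go 5 -> 6 -> 6 and one user's
// increment is silently lost.
Counters.update(counterId, { $set: { value: 6 } });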

Caching data and notifying clients about changes in data in ASP.NET

We are thinking to make some architectural changes in our application, which might affect the technologies we'll be using as a result of those changes.
The change that I'm referring in this post is like this:
We've found out that some parts of our application have common data and common services, so we extracted those into a GlobalServices service, with its own master data db.
Now, this service will probably have its own cache, so that it won't have to retrieve data from the db on each call.
Now, when one client makes a call to that service and updates data, other clients might be interested in that change, or not; that depends on whether we decide to keep a cache on the clients too.
Meaning that if the clients will have their own local cache, they will have to be notified somehow (and first register for notifications). If not, they will always get the data from the GlobalServices service.
I need your educated advice here guys:
1) Is it a good idea to keep a local cache on the clients to begin with?
2) If we do decide to keep a local cache on the clients, would you use SqlCacheDependency to notify the clients, or would you use WCF for notifications (each might have its pros and cons)?
Thanks a lot folks,
Avi
I like the sound of your SqlCacheDependency, but I will answer this from a different perspective, as I have worked with a team on a similar scenario. We created a master database and used triggers to create XML representations of data that was being changed in the master, and stored it in a TransactionQueue table, with a bit of metadata about what changed, when, and who changed it. The client databases would periodically check the queue for items they were interested in, process the XML, and update their own tables as necessary (a rough sketch of that polling loop is below).
We also did the same in reverse for the client to update the master. We set up triggers and a TransactionQueue table on the client databases to send data back to the master. This in turn would update all of the other client databases when they next polled.
The nice thing about this is that it is fairly agnostic about client platform and client data structure, so we were able to use the method on a range of legacy and third-party systems. The other great point is that you can take any of the databases out of the loop (including the master, e.g. on connection failure) and the others will still work fine. This worked well for us, as our master database was behind our corporate firewall and the simpler web databases were sitting with our ISP.
There are obviously cons to this approach, like race hazards, so we were careful with the order of transaction processing, error handling, de-duping, etc. We also built a management GUI to provide a human interaction layer before important data was changed in the master.
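A rough Node-style sketch of the client-side polling loop; the TransactionQueue schema, the masterDb.query helper, and applyChangeLocally are all assumptions based on the description above:
// Each client remembers the last queue id it applied, so every client
// can consume the same queue independently.
let lastSeenId = 0; // persisted per client database in a real system

async function pollTransactionQueue() {
  const items = await masterDb.query(
    'SELECT id, table_name, xml_payload, changed_at, changed_by ' +
    'FROM TransactionQueue WHERE id > ? ORDER BY id', [lastSeenId]);
  for (const item of items) {
    await applyChangeLocally(item); // parse the XML, upsert the local row
    lastSeenId = item.id;
  }
}

setInterval(pollTransactionQueue, 60 * 1000); // poll once a minute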
Good luck! Tim

Flex - best strategy for keeping client data in synch with backend database?

In an Adobe Flex application using BlazeDS AMF remoting, what is the best strategy for keeping local data fresh and in sync with the backend database?
In a typical web application, web pages refresh the view each time they are loaded, so the data in the view is never too old.
In a Flex application, there is the temptation to load more data up-front to be shared across tabs, panels, etc. This data is typically refreshed from the backend less often, so there is a greater chance of it being stale, leading to problems when saving, etc.
So, what's the best way to overcome this problem?
a. build the Flex application as if it was a web app - reload the backend data on every possible view change
b. ignore the problem and just deal with stale data issues when they occur (at the risk of annoying users who are more likely to be working with stale data)
c. something else
In my case, keeping the data channel open via LiveCycle RTMP is not an option.
a. Consider optimizing back-end changes through a proxy that does its own notification or polling: it knows if any of the data is dirty, and will quick-return (a la a 304) if not (see the sketch below).
b. Often, users look more than they touch. Consider one level of refresh for looking and another when they start and continue to edit.
Look at Buzzword: it locks on edit, but also automatically saves and unlocks frequently.
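A minimal Express-style sketch of option (a); the endpoint, loadData(), and the timestamp bookkeeping are all illustrative:
// A proxy that quick-returns when nothing is dirty, so refreshing on
// every view change stays cheap.
const express = require('express');
const app = express();

let lastModified = new Date(); // bumped whenever backend data changes

app.get('/data', (req, res) => {
  const since = req.get('If-Modified-Since');
  if (since && new Date(since) >= lastModified) {
    return res.status(304).end(); // nothing changed: cheap quick-return
  }
  res.set('Last-Modified', lastModified.toUTCString());
  res.json(loadData()); // hypothetical data loader
});

app.listen(3000);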
Cheers
If you can't use the messaging protocol in BlazeDS, then I would have to agree that you should do RTMP polling over HTTP. The data is compressed when using RTMP in AMF, which helps speed things up so the client isn't waiting long between updates. This would also allow you to later scale up to the push methods if the product's customer decides to pay for the extra hardware and licenses.
You don't need LiveCycle and RTMP in order to have a notification mechanism; you can do it with the channels from BlazeDS and use a streaming/long-polling strategy.
In the past I have gone with choice "a". If you were using Remote Objects, you could set up some cache-style logic to keep them in sync on the remote end.
Sam
Can't you use RTMP over HTTP (HTTP polling)?
That way you can still use RTMP, and although it is much slower than real RTMP, you can still broadcast updates this way.
We have an app that uses RTMP to signal inserts, updates and deletes by simply broadcasting RTMP messages containing the Table/PrimaryKey pair, leaving the app to automatically update its data. We do this over HTTP using RTMP.
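A JavaScript sketch of the client-side handler in that scheme (the real Flex code would be ActionScript; the message shape, localModel, and fetchRow are assumptions):
// Each broadcast message identifies a changed row by table + primary key;
// the client re-fetches just that row and patches its local model.
function onChangeMessage(msg) {
  // assumed shape: { op: 'insert' | 'update' | 'delete', table: 'companies', pk: 42 }
  if (msg.op === 'delete') {
    localModel[msg.table].remove(msg.pk); // drop the row locally
  } else {
    fetchRow(msg.table, msg.pk).then(function (row) {
      localModel[msg.table].put(msg.pk, row); // insert or replace the row
    });
  }
}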
I found this article about synchronization:
http://www.databasejournal.com/features/sybase/article.php/3769756/The-Missing-Sync.htm
It doesn't go into technical details, but you can guess what kind of coding will implement these strategies.
I also don't have fancy notifications from my server, so I need synchronization strategies.
For instance, I have a list of companies in my modelLocator. It doesn't change very often and it's not big enough to consider pagination. I don't want to reload it all (removeAll()) on each user action, but I also don't want my application to crash or UPDATE corrupted data in case it has been UPDATED or DELETED from another instance of the application.
What I do now is save the SELECT datetime in a SESSION. When I come back to refresh the data, I SELECT WHERE last_modified > $SESSION['lastLoad'].
This way I get only rows modified after I loaded the data (most of the time 0 rows).
Obviously you need to UPDATE last_modified on each INSERT and UPDATE.
For DELETE it's trickier. As the author points out in the article:
"How can we send up a record that no longer exists?"
You need to tell Flex which item it should delete (say, by ID), so you cannot really DELETE on DELETE :)
When a user deletes a company, you do an UPDATE instead: deleted=1.
Then when refreshing companies, for rows where deleted=1 you just send back the ID to Flex so that it makes sure this company isn't in the model anymore.
Last but not least, you need to write a function that cleans up rows where deleted=1 and last_modified is older than... 3 days or whatever suits your needs.
The good thing is that if a user deletes a row by mistake, it's still in the database, and you can rescue it from real deletion within those 3 days.
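A sketch of the refresh call that strategy implies (Node-style JavaScript; the db.query helper is an assumption, and table/column names follow the description above):
// Return only rows modified since the client's last load, plus the ids
// of soft-deleted rows so the client can prune its model.
async function refreshCompanies(lastLoad) {
  const changed = await db.query(
    'SELECT * FROM companies WHERE last_modified > ? AND deleted = 0',
    [lastLoad]);
  const deletedIds = await db.query(
    'SELECT id FROM companies WHERE last_modified > ? AND deleted = 1',
    [lastLoad]);
  // The client merges `changed` into its model, removes `deletedIds`,
  // and stores `loadedAt` as the lastLoad value for the next refresh.
  return { changed: changed, deletedIds: deletedIds, loadedAt: new Date() };
}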
Rather than caching on the Flex client, why not do the caching on the server side? Some reasons:
1) When you cache data on the server side, it's centralized and you can make sure all clients have the same state of data.
2) There are much better caching options available on the server side than in Flex. You can also have a cron job which refreshes the data at some frequency, say every 24 hours (sketched below).
3) As the data is cached on the server and doesn't need to be fetched from the db every time, communication with Flex will be much faster.
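A minimal sketch of such a server-side cache with a timed refresh (the db.query helper, the Express-style app, and the table name are illustrative):
// Load once at startup, refresh on a timer, serve every request from memory.
let cache = null;

async function refreshCache() {
  cache = await db.query('SELECT * FROM master_data');
}

refreshCache(); // warm the cache at startup
setInterval(refreshCache, 24 * 60 * 60 * 1000); // the "every 24 hours" job

app.get('/master-data', (req, res) => res.json(cache)); // served from memory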
Regards,
Tejas
