Adobe AIR: online/offline database sync - apache-flex

I am working on an AIR app which should work in both online and offline modes. The user can perform various actions while offline, and the results get saved in a local DB; these then need to be synced with the global DB once the user goes back online. I googled a bit, and it seems that Adobe LCDS (LiveCycle Data Services) is the only available option to do it. However, it is an enterprise solution, and way too costly.
Is there any other implementation for this? Has anyone used CouchDB for online/offline synchronization?
Thanks and Regards,
Kapil Kaushik

For doing a DB sync with your server when the AIR app is online, you do not have any requirements as to which backend technology you use. LCDS makes it simpler, but it's not your only option. Heck, you could use just a normal PHP script to do the sync for you if you'd like.
The hard part of it all is figuring out your syncing algorithm so that you don't lose any information. Normally what I do is this: when the app is connected again, it sends all the information that was modified/created while offline (with timestamps on when it was modified) to the server; the server then has an algorithm that checks whether the offline information is newer than what was done previously (or applies some other business rule, depending on your situation). Once the server decides which data is good, it sends the updated data back to the client and effectively syncs everything.
This can be done through a normal HTTP request, polling or pushing.
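For illustration, here is a minimal sketch of that timestamp-based reconciliation in C#. The type and member names (OfflineChange, Reconcile, the in-memory store) are invented for the example; a real server would persist its copy in a database.

```csharp
// Minimal last-write-wins reconciliation, as described above.
// All type and member names here are hypothetical.
using System;
using System.Collections.Generic;

public record OfflineChange(string RecordId, string Payload, DateTime ModifiedUtc);

public class SyncService
{
    // Server-side copy: recordId -> (payload, last modification time).
    private readonly Dictionary<string, (string Payload, DateTime ModifiedUtc)> _store = new();

    // Apply a batch of changes uploaded when the client comes back online,
    // and return the records the client must refresh in exchange.
    public List<OfflineChange> Reconcile(IEnumerable<OfflineChange> clientChanges)
    {
        var serverWins = new List<OfflineChange>();
        foreach (var change in clientChanges)
        {
            if (!_store.TryGetValue(change.RecordId, out var server) ||
                change.ModifiedUtc > server.ModifiedUtc)
            {
                // The offline edit is newer than the server's copy: accept it.
                _store[change.RecordId] = (change.Payload, change.ModifiedUtc);
            }
            else
            {
                // The server changed since the client went offline: send the
                // authoritative version back so the client can update itself.
                serverWins.Add(new OfflineChange(change.RecordId, server.Payload, server.ModifiedUtc));
            }
        }
        return serverWins;
    }
}
```

Last-write-wins by timestamp is the simplest rule; as noted above, your business rules may demand something smarter, such as per-field merges or prompting the user.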

Related

When using Realm, where does business logic sit?

We are currently designing a mobile app, and the client has requested the ability to work offline and sync data when online again. I'm looking at using Realm, as it seems to make having an offline state really easy; however, I'm a bit confused about where any server-side logic would live. Am I right that Realm isn't really designed to have server-side logic? You are just persisting data to the cloud when the cloud is available, but you aren't actually in charge of building an API with any logic behind it?
Edit.
Reading further, maybe Azure offline data sync is a better option, because you can write the server-side code? Am I correct that if I want server-side code I can't use Realm?
Thanks, Michael
Realm Mobile Platform is designed for offline data access since it uses the full power of Realm Mobile Database as the client data store. However, that doesn't mean you are limited to only client-side interactions. We offer a Node.js SDK in the Professional and Enterprise editions where you can work with the same copy of Realm data from the mobile clients in a Linux environment.
The Node.js SDK offers the ability to open any Realm, query or perform write transactions on the server which will push data out to the client(s). In addition, it has event-handling capabilities, where you can register callbacks to perform logic in response to data changes performed on client devices.
To make it easier to get started with the event-handling functionality, we launched Realm Functions, which allows you to create JavaScript functions through the Realm Object Server dashboard that will then run in response to data changes. Under the hood, this uses the Node.js SDK.
The sum total of all of this is that you should be able to build any server-side business logic that you need. By using Realm's sync as the transport layer, your mobile development can focus on the application logic versus networking and data transformation. Likewise, your server has an exact copy of the data to perform logic as well. Both sides operate independently, so offline changes will always sync back up!
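The answer above describes Realm's Node.js SDK, but the same register-a-callback pattern exists in Realm's other SDKs. As a rough illustration only, here is what a change-notification subscription looks like with the Realm .NET SDK; the API names are from memory and vary between Realm versions, so treat this as a sketch of the pattern, not as the Realm Object Server API itself.

```csharp
// Sketch of Realm's register-a-callback pattern using the .NET SDK's
// collection notifications; exact API shape may differ by Realm version.
using System;
using Realms;

public class Order : RealmObject
{
    public string Status { get; set; }
}

public static class OrderWatcher
{
    public static IDisposable Watch()
    {
        var realm = Realm.GetInstance();
        // The callback fires whenever an Order is inserted, modified or
        // deleted (including via sync from clients); business logic that
        // should react to those changes goes in the body.
        return realm.All<Order>().SubscribeForNotifications((sender, changes, error) =>
        {
            if (changes == null) return; // initial callback, no delta yet
            foreach (var i in changes.InsertedIndices)
                Console.WriteLine($"New order received: {sender[i].Status}");
        });
    }
}
```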

Send data to DynamoDB over an intermittent connection

I have an application that needs to send data to a cloud database (DynamoDB).
The app runs on a computer that can lose internet connectivity or be switched off at any time, but I must ensure that all data eventually gets to the cloud database.
I can assume the application will eventually be switched on, and will eventually get internet access back.
The app is written in VB.NET.
What are some schemes for achieving this, and are there any ready-made products that already achieve this?
You could implement a write-through cache using a local DynamoDB instance (or even SQLite). But without specific details about what kind of data you'd be storing into the database, and what data should be made available "offline", it's hard to say exactly how you should structure your application. You definitely won't want to keep everything local, unless the overall volume of data is really small.
Then there is the problem of resolving conflicts that may occur during network partitions (i.e. a client goes offline and makes some database modifications while other clients also make modifications to the database; these need to be reconciled, and it's up to you and your users to determine how).
It's not a simple problem to solve.
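A common shape for the local-buffer part is an "outbox": every write lands in a durable local store first, and a background loop drains it to DynamoDB whenever connectivity returns. The question is VB.NET, but here is a C# sketch using the same .NET libraries (Microsoft.Data.Sqlite and the AWS SDK for .NET); the SQLite schema, DynamoDB table name, and key layout are all invented for the example.

```csharp
// Durable local "outbox": writes land in SQLite first; a retry loop drains
// them to DynamoDB when connectivity returns. Names are illustrative.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Microsoft.Data.Sqlite;

public class Outbox
{
    private readonly SqliteConnection _db;
    private readonly AmazonDynamoDBClient _dynamo = new();

    public Outbox(string path = "outbox.db")
    {
        _db = new SqliteConnection($"Data Source={path}");
        _db.Open();
        using var cmd = _db.CreateCommand();
        cmd.CommandText =
            "CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, payload TEXT)";
        cmd.ExecuteNonQuery();
    }

    // Called for every record the app produces; always succeeds locally,
    // even with no network and even if the machine is switched off later.
    public void Enqueue(string payload)
    {
        using var cmd = _db.CreateCommand();
        cmd.CommandText = "INSERT INTO outbox (payload) VALUES ($p)";
        cmd.Parameters.AddWithValue("$p", payload);
        cmd.ExecuteNonQuery();
    }

    // Called periodically; a row is deleted only after DynamoDB accepts it,
    // so a crash or dropped connection just means a retry on the next pass.
    public async Task DrainAsync()
    {
        var pending = new List<(long Id, string Payload)>();
        using (var select = _db.CreateCommand())
        {
            select.CommandText = "SELECT id, payload FROM outbox ORDER BY id";
            using var reader = select.ExecuteReader();
            while (reader.Read())
                pending.Add((reader.GetInt64(0), reader.GetString(1)));
        }

        foreach (var (id, payload) in pending)
        {
            try
            {
                await _dynamo.PutItemAsync(new PutItemRequest
                {
                    TableName = "Readings", // hypothetical table
                    Item = new Dictionary<string, AttributeValue>
                    {
                        ["Id"] = new AttributeValue { S = id.ToString() },
                        ["Payload"] = new AttributeValue { S = payload }
                    }
                });
            }
            catch (Exception) { return; } // offline again; retry on next pass

            using var del = _db.CreateCommand();
            del.CommandText = "DELETE FROM outbox WHERE id = $id";
            del.Parameters.AddWithValue("$id", id);
            del.ExecuteNonQuery();
        }
    }
}
```

Note the delete-after-send ordering gives at-least-once delivery: a crash between the put and the delete causes a resend, but reusing the row id as the DynamoDB key makes that retry idempotent.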

How to Design a Database Monitoring Application

I'm designing a database monitoring application. Basically, the database will be hosted in the cloud, and record-level access to it will be provided via custom-written clients for Windows, iOS, Android, etc. The basic scenario can be implemented via web services (ASP.NET Web API): for example, the client makes a GET request to the web service to fetch an entry. However, one of the requirements is that the client should automatically refresh its UI when another user (using a different instance of the client) updates the same record, and the auto-refresh needs to happen within a second of the record being updated - so that the info is always up to date.
Polling could be an option, but the active clients could number in the hundreds of thousands, so I'm looking for a more robust solution that is lightweight on the server. I'm versed in .NET and C++/Windows, and I could roll out a complete solution in C++/Windows using I/O completion ports, but that feels like overkill and would require too much development time. I looked into ASP.NET Web API, but its limitation is that it cannot push notifications out to clients. Are there any frameworks/technologies in the Windows ecosystem that can address this scenario and scale easily? Any good options outside the Windows ecosystem, e.g. Node.js?
You did not specify which database you can use, so if you are able to use MS SQL Server, you may want to look up the SqlDependency feature. If configured and used correctly, it will notify you whenever there are changes in the database.
Pair this with SignalR, or any real-time front-end framework of your choice, and you'll have real-time updates as you described.
One catch, though, is that SqlDependency only tells you that something changed; whatever it was, you are responsible for tracking down which record it is. That adds an extra layer of difficulty, but it is much better than polling.
You may want to search through the sqldependency tag here on SO to go from here to where you want your app to be.
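As a rough illustration of that wiring, here is a condensed C# sketch pairing SqlDependency with an ASP.NET Core SignalR hub. The connection string, table, and client method name are placeholders, and note that SqlDependency additionally requires Service Broker to be enabled on the database.

```csharp
// Condensed SqlDependency + SignalR wiring. SqlDependency only reports
// *that* something changed, so the handler re-queries and re-subscribes.
using System.Data.SqlClient;
using Microsoft.AspNetCore.SignalR;

public class RecordsHub : Hub { }

public class RecordWatcher
{
    private readonly string _cs = "...";           // placeholder connection string
    private readonly IHubContext<RecordsHub> _hub; // injected by ASP.NET Core DI

    public RecordWatcher(IHubContext<RecordsHub> hub)
    {
        _hub = hub;
        SqlDependency.Start(_cs); // needs Service Broker enabled on the DB
        Subscribe();
    }

    private void Subscribe()
    {
        using var conn = new SqlConnection(_cs);
        // Notification queries must use two-part table names and no SELECT *.
        using var cmd = new SqlCommand("SELECT Id, Value FROM dbo.Records", conn);
        var dependency = new SqlDependency(cmd);
        dependency.OnChange += (s, e) =>
        {
            Subscribe(); // subscriptions are one-shot: re-register first
            // Tell every client to re-fetch; finding *which* record changed
            // is up to you (e.g. by comparing against a cached snapshot).
            _ = _hub.Clients.All.SendAsync("recordsChanged");
        };
        conn.Open();
        using var reader = cmd.ExecuteReader(); // executing arms the subscription
        while (reader.Read()) { /* optionally cache the current snapshot */ }
    }
}
```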
My first thought was to have a web service call that "stays alive", or to use the HTML5 protocol WebSockets. You can maintain lots of connections that way, but hundreds of thousands seems too large. Therefore the web service needs a way to contact the clients over stateless connections - so build a web service into the client that the server can communicate with. This may be an issue due to firewalls, though.
If firewalls are not an issue, then you may not need a web service in the client; you can instead implement a server socket on the client.
For mobile clients, if implementing a server socket is not a possibility then use push notifications. Perhaps look at https://stackoverflow.com/a/6676586/4350148 for a similar issue.
Finally you may want to consider a content delivery network.
One last point: hopefully you don't need to contact all 100,000 users within one second. I am assuming that with so many users you have quite a few servers.
Take a look at Maximum concurrent Socket.IO connections regarding the maximum number of open WebSocket connections.
Also consider whether your estimate on the order of 100,000 simultaneous users is accurate.

Architecture For A Real-Time Data Feed And Website

I have been given access to a real time data feed which provides location information, and I would like to build a website around this, but I am a little unsure on what architecture to use to achieve my needs.
Unfortunately the feed I have access to will only allow a single connection per IP address, therefore building a website that talks directly to the feed is out - as each user would generate a new request, which would be rejected. It would also be desirable to perform some pre-processing on the data, so I guess I will need some kind of back end which retrieves the data, processes it, then makes it available to a website.
From a front-end connection perspective, web services sound like they may work, but would this also create multiple connections to the feed, one per user? I would also like the back-end connection to be persistent, so that data is retrieved and processed even when the site is not being visited - I believe IIS will recycle web services and websites when they are idle?
I would like to keep the design fairly flexible - in future I will be adding some mobile clients, so the API needs to support remote connections.
The simple solution would have been to log all the processed data to a database, which could then be picked up by the website, but this loses the real-time aspect of the data. Ideally I would like to push the data to the website every time it changes or new data is received.
What is the best way of achieving this, and what technologies are there out there that may assist here? Comet architecture sounds close to what I need, but that would require building a back end that can handle multiple web based queries at once, which seems like quite a task.
Ideally I would be looking for a C# / ASP.NET based solution with Javascript client side, although I guess this question is more based on architecture and concepts than technological implementations of these.
Thanks in advance for all advice!
Realtime Data Consumer
The simplest solution would seem to be having one component that is dedicated to reading the realtime feed. It could then publish the received data on to a queue (or multiple queues) for consumption by other components within your architecture.
This component (A) would be a standalone process, maybe a service.
Queue consumers
The queue(s) can be read by:
a component (B) dedicated to persisting data for future retrieval or querying. If the amount of data is large you could add more components that read from the persistence queue.
a component (C) that publishes the data directly to any connected subscribers. It could also do some processing, but if you are looking at doing large amounts of processing you may need multiple components that perform this task.
Realtime web technology components (D)
If you are using a .NET stack then it seems like SignalR is getting the most traction. You could also look at XSockets (there are more options in my realtime web tech guide; just search for '.NET').
You'll want to use SignalR to manage subscriptions and then publish messages to registered clients (PubSub - this SO post seems relevant; maybe you can ask for a bit more info).
You could also look at offloading the PubSub component to a hosted service such as Pusher, who I work for. This will handle managing subscriptions and component C would just need to publish data to an appropriate channel. There are other options all listed in the realtime web tech guide.
All these components come with a JavaScript library.
Summary
Components:
A - .NET service - that publishes info to queue(s)
Queues - MSMQ, NServiceBus etc.
B - Could also be a simple .NET service that reads a queue.
C - this really depends on D since some realtime web technologies will be able to directly integrate. But it could also just be a simple .NET service that reads a queue.
D - Realtime web technology that offers a simple way of routing information to subscribers (PubSub).
If you provide any more info I'll update my answer.
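To make components C and D concrete, here is a hedged C# sketch of a queue consumer that publishes to SignalR subscribers. An in-process BlockingCollection stands in for the queue, since the answer deliberately leaves the queue technology open (MSMQ, NServiceBus, etc.); all names are illustrative.

```csharp
// Sketch of components C + D: a background worker drains a queue
// (stand-in for MSMQ/NServiceBus) and publishes each message to all
// connected SignalR subscribers (PubSub). Names are illustrative.
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.Hosting;

public class FeedHub : Hub { }

public class FeedPublisher : BackgroundService
{
    // Component A would write pre-processed feed items into this queue.
    public static readonly BlockingCollection<string> Queue = new();

    private readonly IHubContext<FeedHub> _hub;
    public FeedPublisher(IHubContext<FeedHub> hub) => _hub = hub;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        foreach (var item in Queue.GetConsumingEnumerable(stoppingToken))
        {
            // Fan the update out to every connected browser.
            await _hub.Clients.All.SendAsync("locationUpdate", item, stoppingToken);
        }
    }
}
```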
A good solution to this would be something like http://rubyeventmachine.com/ or http://nodejs.org/ . It's not ASP.NET, but it can easily solve the issue of distributing real-time data to other users. Since user connections, subscriptions and broadcasting to channels are built into each, that makes coding the rest super simple. Your clients would just connect over standard TCP.
If you needed clients to poll for updates, then you would need a queue system to store info for the next request. That could be a simple array, or a more complicated queue system, depending on your requirements and the number of users.
There may be solutions for .NET that I am not aware of that do the same thing, but those are the two I know of.

How to ensure HTTP upload came from authentic executable

We are in the process of writing a native Windows app (MFC) that will be uploading some data to our web app. The Windows app will allow the user to log in, and after that it will periodically upload some data via a simple HTTP POST to our web app. The concern I have is how we can ensure that the upload actually came from our app, and not from curl or something like that. I guess we're looking at some kind of public/private key encryption here. But I'm not sure if we can somehow just embed a public key in our win app executable and be done with it. Or would that public key be too easy to extract and use outside of our app?
Anyway, we're building both sides (client and server), so pretty much anything is an option, but it has to work over HTTP(S). However, we do not control the execution environment of the Windows (client) app, and the user running the app on his/her system is the only one who stands to gain by gaming the system.
Ultimately, it's not possible to prove the identity of an application this way when it's running on a machine you don't own. You could embed keys, play with hashes and checksums, but at the end of the day, anything that relies on code running on somebody else's machine can be faked. Keys can be extracted, code can be reverse-engineered- it's all security through obscurity.
Spend your time working on validation and data cleanup, and if you really want to secure something, secure the end-user with a client certificate. Anything else is just a waste of time and a false sense of security.
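To show what the client-certificate suggestion looks like in practice, here is a small sketch using .NET's HttpClient for brevity (the question's MFC client would use WinHTTP instead, as the next answer notes); the certificate path, passphrase, and URL are placeholders.

```csharp
// Attaching a per-user client certificate to an HTTPS upload. The server
// validates the certificate during the TLS handshake, tying the upload to
// a user rather than to the executable. Paths and URLs are placeholders.
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;
using System.Threading.Tasks;

public static class Uploader
{
    public static async Task<HttpResponseMessage> UploadAsync(string json)
    {
        var handler = new HttpClientHandler();
        // The certificate is issued to the user (e.g. a passphrase-protected
        // .pfx), not baked into the binary, so there is no app-wide secret
        // for an attacker to extract from the executable.
        handler.ClientCertificates.Add(
            new X509Certificate2("user.pfx", "passphrase"));

        using var client = new HttpClient(handler);
        return await client.PostAsync("https://example.com/upload",
            new StringContent(json, System.Text.Encoding.UTF8, "application/json"));
    }
}
```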
About the best you could do would be to use HTTPS with client certificates. Presumably with WinHTTP's interface.
But I'm not sure if we can somehow just embed a public key in our win app executable and be done with it.
If the client is to be identifying itself to the server, it would have to be the private key embedded.
Or would that be too easy to extract and use outside of our app?
If you don't control the client app's execution environment, anything your app can do can be analysed, automated and reproduced by an attacker that does control that environment.
You can put obfuscatory layers around the communications procedure if you must, but you'll never fix the problem. Multiplayer games have been trying to do this for years to combat cheating, but in the end it's just an obfuscation arms race that can never be won. Blizzard have way more resources than you, and they can't manage it either.
You have no control over the binaries once your app is distributed. If all the signing and encryption logic resides in your executable, it can be extracted. Clever coders will figure out the code and build interoperable systems when there's enough motivation to do so. That's why DRM doesn't work.
A complex system tying a key to the MAC address of a PC, for instance, is sure to fail.
Don't trust a particular executable or system; trust your users. Entrust each of them with a private key file protected by a passphrase, and explain to them how that key identifies them as a submitter of content on your service.
Since you're controlling the client, you might as well embed the key in the application and make sure users don't have read access to the application image. You'll need to separate the logic into two tiers - one that the user runs, and another that connects to the service over HTTP(S) - since the user will always have read access to an application he's running.
If I understand correctly, the data is sent automatically after the user logs on - this sounds like only the service part is needed.
