Realm Data Sync after Changing Realm Object Server

Before I launch a Realm app that lots of people rely on, I'm trying to get acquainted with all the possible Realm Object Server failure scenarios so that I'm prepared. :)
Let's say I have the ROS deployed successfully on an Ubuntu VPS and everything works great. Then suddenly my VPS provider shuts down and I have to migrate immediately to another.
I push a client app update with a new SyncConfiguration (new server URL and Realm URL) that points to a freshly installed instance of the ROS with no Realm data.
When the users' apps connect to the new server and re-authenticate, what happens to their data when they sync?
All their local data syncs and pushes up to the ROS and everything resumes like the situation never happened.
The new ROS overwrites the local realm with no data.
Something else
I know I can do server-side backups (and I will), but I'm just trying to anticipate what a server migration would look like.
Thanks!

You will receive an error called Client Reset. This happens when the server and the client disagree about the history of the Realm. If a Client Reset happens, the local Realm file is backed up to another location on the device, after which the original file is deleted so the server state can be synced.
You can read more here: https://realm.io/docs/realm-object-server/#client-recovery-from-a-backup
How to handle it depends a little on which binding you are using, but the overall concept is the same. This is the Swift way: https://realm.io/docs/swift/latest/#client-reset

Related

Best practice to maintain PSQL+R Shiny connections

I've built an application that does a lot of processing of user data (they can load in their data, map the variables, run analyses, review a dashboard, and download the results/report). It's a pretty heavy application, and I'm running into an issue that I'm not sure how best to solve.
The problem is that sometimes the session will unexpectedly disconnect from the psql database. This causes problems because just about every corner of the application depends on retrieving or sending information. Basically, the app doesn't work at all without the connection. What's even worse is that the UI doesn't really inform the user of the problem; it kind of sits there, all lazy-like.
The application exists on an EC2 instance within a docker container, served through an HTTPS proxy (Caddy) to the public via a registered domain name. Each new session searches for a global pool connection, and if one does not exist, it creates one, then checks out a local connection and passes that into all the downstream modules.
I'm wondering how others have addressed this problem. Should I,
use a global pool, then check out a single connection and test for a severed connection at the start of each function? This is my current (unfinished) approach and seems not great.
search for a pool connection and check out a connection at the start of each function, then return it at the end? This would take a bit of time to implement (and test), but seems like a reasonable solution (see the sketch after this list).
check for a connection every minute and if one doesn't exist, create one. I'm guessing this would need to happen in each module independently.
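To make option 2 concrete, here is roughly the pattern I have in mind, sketched in Python with psycopg2 rather than R (the DSN, query shape, and helper names are placeholders):

    import psycopg2
    from psycopg2 import pool

    # Placeholder DSN; the real app would read this from config.
    PG_POOL = pool.ThreadedConnectionPool(1, 10, dsn="dbname=app user=app host=localhost")

    def checkout_live_connection():
        """Check out a connection from the global pool and make sure it is still alive."""
        conn = PG_POOL.getconn()
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT 1")        # cheap liveness check
            return conn
        except (psycopg2.OperationalError, psycopg2.InterfaceError):
            PG_POOL.putconn(conn, close=True)  # discard the severed connection
            return PG_POOL.getconn()           # and take a fresh one

    def fetch_all(sql, params=None):
        """Check out per call, run the query, and always hand the connection back."""
        conn = checkout_live_connection()
        try:
            with conn.cursor() as cur:
                cur.execute(sql, params)
                return cur.fetchall()
        finally:
            PG_POOL.putconn(conn)
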
Any direction will be greatly appreciated.
Thanks,

Wrong dependency to IIS restart for getting changed data in SQL Server

I am working on an ASP.NET WebForms application with Entity Framework. For some reports it also uses a DLL, and in that DLL we have an explicit query (via ADO) to get the records from SQL Server.
The problem is that when I change a column such as ParentID in SQL Server, I have to restart the website in IIS to see the change, and that solves the problem. This dependency is not logical, and I want to know why it happens. Is it related to caching because of the method call in the DLL?
How can I solve this problem?
When you run a query against SQL Server (or any database, really), the result that you see is not the data "in the database", so to speak. The query returns a copy of that data that belongs only to you. The copy of the data gets sent over the network, to the client - in your case, an ASP.NET web application - and the application does whatever it needs to do, such as show it to a user.
Once the query which retrieved the data is complete, there is no longer any link between the data in the client, and the data in the database. There is no continuous, "live" connection between the two, even if your actual database connection is still open. The database connection is merely a way to send queries to the server, and for it to send copies of the data back.
It's like taking a copy of a file from a different machine. If you copy a file from my machine, and then I update my copy, your copy doesn't instantly get updated.
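To make the "copy" idea concrete, here is a minimal sketch in Python using SQLite (the table and values are made up; the same principle applies to SQL Server and ADO):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, parent_id INTEGER)")
    conn.execute("INSERT INTO items VALUES (1, 100)")

    # The SELECT returns a copy of the data as it exists right now.
    rows = conn.execute("SELECT id, parent_id FROM items").fetchall()
    print(rows)                     # [(1, 100)]

    # The database changes afterwards...
    conn.execute("UPDATE items SET parent_id = 200 WHERE id = 1")

    # ...but the copy we already fetched is not "live"; it still holds the old value.
    print(rows)                     # still [(1, 100)]

    # Only a new query sees the change.
    print(conn.execute("SELECT id, parent_id FROM items").fetchall())   # [(1, 200)]
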
If you want data in some user interface to stay perfectly up to date with the data that actually exists in the database, you have a difficult problem to solve. There is no "easy" way to do this. Or perhaps more accurately, there is no simple or efficient way to do this.
This might seem odd to you. You're thinking "well, why not? Why doesn't it just show me the values as they actually exist?". The reason is that these systems need to be able to support many users - often thousands at once - who are all both reading the database and writing to it. Imagine someone was in the middle of updating data in the database, but then they rollback their transaction. Should you see the data as it was being modified, but not committed? What if two users are trying to update "the same" data at once? All sorts of concurrency questions come into play, which basically boils down to questions about locking.
What you are encountering here is a basic principle of multi-threaded environments, which translates to systems with multiple clients: Data can't be accessed directly by multiple people at the same time. Instead, you give each person their own immutable copy.
In a web application things are even more disconnected. When the browser requests the web page, the server side of the web application gets a copy of the data from the database, and then transmits that to the browser. Once the page is loaded there is no longer any link between the web server and the database server, or any link between the web server and the web browser at the client, and certainly no link between the web browser and the database.
Ultimately, this is one of the "hard problems" in computer science. You want to know how to tell the client to invalidate their "cache", and refresh their local data. There are a few mechanisms provided by .NET to do this with SQL Server, but they are quite technical. One of them is query notifications.

How do I keep TCP/IP socket open in IIS?

I have the following usage scenario: a user logs in to an ASP.NET application and at some point makes a connection to a remote TCP/IP server. The server's response may come after a significant delay (say, a few hours). Imagine that the user submits a batch job, and the job may be running for a long time. So, the user may close the browser, get some coffee, and come back to see the results later.
However, if the client closes the connection, the server will never return the results. So, keeping the Socket info in the Application object won't work - once the user closes the browser, it goes away.
Is there any other way to persist this open socket while IIS is up? Also, if a second user logs in, I would prefer to use the same connection. Finally, I realize that the solution is brittle and may occasionally break. That's OK.
The remote server is a 20-year-old mainframe application, so there is no chance for changes there. And as long as the user doesn't log out, everything works fine now. Everything is on the LAN, so there are no routing issues to complicate the situation.
The contents of the application dictionary are not lost when a user logs out. Your scheme will work (in a brittle way, but you say that's ok).
Note that worker processes can exit for many reasons, so expect to be killed at arbitrary points in time.
You have several options for persisting session state: MSDN - Session-State Modes
InProc mode: you disconnect, state is lost. If you use cookies and store info/data somewhere on the backend, then you can map the GUID to the data, regardless of session. Or use application state.
StateServer: persisted across disconnects and application restarts, but not across IIS/pool/server restarts, unless you use another server or cookie/auth. Can be problematic sometimes.
SQLServer: as the name implies, uses a specially formatted db/table structure to persist state data across all sorts of scenarios.
Custom/Off: allows you to build your own provider, or turns it off completely.
Here's the cookie method, by far the simplest (you have to generate a GUID, then store it in the cookie and application state or a backend DB): MSDN - Maintaining Session State with Cookies
You can persist cookies on the user's client. Then, on server reboot/client disconnect/any other scenario, just pull the GUID from app/session state or from a backend DB, which will also store the data for the reports/output.
Also, as a caution: even though cookies can be used to auth a user to an account/db record via GUID, it is considered insecure for all other purposes except unidentifiable information, such as: view shopping cart, simple reports, status, etc.
Oh, and the stuff on IIS session timeouts (20 mins by default): MSDN - Configure Session Time-out (IIS 7) and MSDN - Configure Idle Time-out Settings for an Application Pool (IIS 7)
Completely forgot to add the links on ASP.NET Application State Overview and ASP.NET Session State Overview, but note that storing large amounts of data in application state on a busy server is not recommended. Oh yeah, and MSDN - Caching Application Data

Flask manage db connection :memory:

I have a Flask application that needs to store some information from requests. The information is quite short-lived, and if the server is restarted I do not need it any more - so I do not really need persistence.
I have read here that an SQLite database held in memory can be used for that. What is the best way to manage the database connection? In the Flask documentation, connections to the database are created on demand, but my database will be deleted if I close the connection.
The problem with using an in-memory SQLite db is that SQLite in-memory databases cannot be accessed from multiple threads.
http://www.sqlite.org/inmemorydb.html
To compound the problem, you are likely going to have more than one process running your app, which makes using an in-memory global variable out of the question as well.
So unless you can be certain that your app will only ever require a single thread or a single process (which is unlikely), you're going to need to either:
Use the disk to store state, such as an on-disk sqlite db, or even just some file you parse (see the sketch after this list).
Use a daemonized process that runs separately from your application to manage the state.
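If you go with the on-disk sqlite route, a minimal sketch of the usual Flask pattern looks like this (the file path, table, and route are placeholders):

    import sqlite3
    from flask import Flask, g

    app = Flask(__name__)
    DATABASE = "/tmp/app_state.db"   # placeholder path; any writable location works

    def get_db():
        """Open one connection per request context and reuse it via flask.g."""
        db = getattr(g, "_database", None)
        if db is None:
            db = g._database = sqlite3.connect(DATABASE)
            db.execute("CREATE TABLE IF NOT EXISTS state (key TEXT PRIMARY KEY, value TEXT)")
        return db

    @app.teardown_appcontext
    def close_db(exception):
        db = getattr(g, "_database", None)
        if db is not None:
            db.close()   # safe to close: the data lives on disk, not in the connection

    @app.route("/remember/<key>/<value>")
    def remember(key, value):
        db = get_db()
        db.execute("INSERT OR REPLACE INTO state (key, value) VALUES (?, ?)", (key, value))
        db.commit()
        return "ok"
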
I'd personally go with option 2.
You can use memcached for this, running on a central server or even on your app server if you've only got one. This will allow you to store state (including python objects!) temporarily, in memory and you can even set timeout values for when the data should expire, which from the sound of things might be useful for your app.
Since you're using Flask, you've got some really good built-in support for using a memcached cache, check it out here: http://flask.pocoo.org/docs/patterns/caching/
As for getting memcached running on your server, it's really just an apt-get or yum install away. Let me know if you have questions or challenges and I'll be happy to update.
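For instance, with the python-memcached client (the key names and timeout below are just illustrative), storing and reading back a Python object from a Flask view looks roughly like this:

    import memcache
    from flask import Flask, jsonify

    app = Flask(__name__)
    # Assumes memcached is listening on its default port on the same host.
    mc = memcache.Client(["127.0.0.1:11211"])

    @app.route("/job/<job_id>")
    def job_status(job_id):
        status = mc.get("job:" + job_id)               # returns None if missing or expired
        if status is None:
            status = {"state": "unknown"}
            mc.set("job:" + job_id, status, time=300)  # expire after 5 minutes
        return jsonify(status)
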

Disconnected meteor application

I am interested in creating an application using the Meteor framework that will be disconnected from the network for long periods of time (multiple hours). I believe Meteor stores local data in RAM in a mini-MongoDB JS structure. If the user closes the browser, or refreshes the page, all local changes are lost. It would be nice if local changes were persisted to disk (localStorage? IndexedDB?). Any chance that's coming soon for Meteor?
Related question... how does Meteor deal with document conflicts? In other words, if 2 users edit the same MongoDB JSON doc, how is that conflict resolved? Optimistic locking?
Conflict resolution is "last writer wins".
More specifically, each MongoDB insert/update/remove operation on a client maps to an RPC. RPCs from a given client always play back in order. RPCs from different clients are interleaved on the server without any particular ordering guarantee.
If a client tries to issue RPCs while disconnected, those RPCs queue up until the client reconnects, and then play back to the server in order. When multiple clients are executing offline RPCs, the order they finally run on the server is highly dependent on exactly when each client reconnects.
For some offline mutations like MongoDB's $inc and $addToSet, this model works pretty well as is. But many common modifiers like $set won't behave very well across long disconnects, because the mutation will likely conflict with intervening changes from other clients.
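As a rough illustration of why, here is a toy replay in Python (not Meteor code) standing in for the Mongo modifiers; the queued $inc operations compose, while the later $set simply overwrites the earlier one:

    # Toy model of last-writer-wins replay.
    doc = {"count": 0, "status": "new"}

    # Mutations each client queued while it was offline.
    client_a = [("inc", "count", 1), ("set", "status", "done")]
    client_b = [("inc", "count", 1), ("set", "status", "cancelled")]

    def replay(queue, doc):
        for op, field, value in queue:
            if op == "inc":
                doc[field] += value    # composes: both increments survive
            elif op == "set":
                doc[field] = value     # last writer wins: the earlier value is lost

    # Client A reconnects first, then client B.
    replay(client_a, doc)
    replay(client_b, doc)

    print(doc)   # {'count': 2, 'status': 'cancelled'} -- A's $set to 'done' was overwritten
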
So building "offline" apps is more than persisting the local database. You also need to define RPCs that implement some type of conflict resolution. Eventually we hope to have turnkey packages that implement various resolution schemes.
