I'm working on a solution to intercept changes to the data from our Node.js server and validate/alter them before they are stored/synced to other clients.
Any strategies or suggestions on how to solve this with the current code base?
Currently, it seems like the only option is to rewrite the data after the sync operation. That would mean each client (including the server) would receive the sync, then the server would rewrite the data and trigger a second sync.
To help understand the context of the question, here's what seems like an ideal strategy for my needs:
1. The server gets a special token/key not available to clients (once security features arrive).
2. The server registers a hook, dependency-injection style: firebase.child('widgets').beforeSync(myCallback).
3. A client syncs data.
4. The server callback is notified.
5. The server modifies or validates the data.
6. If valid, it returns the data to Firebase for the sync operation.
7. If invalid, it aborts the sync with an error, which is returned to the client.
Thanks for sharing your ideas!
We've considered this type of approach. You can actually simulate this sort of behavior by structuring your data so that there is an "unvalidated" tree and a "validated" tree.
The "unvalidated" tree would be writeable by the client and the server would monitor it for changes. When a change occurs it would validate the data, and if it passes it would copy it into the "validated" tree which is only writeable by the server. You could pass errors back to the client through Firebase data as well when the validation fails.
This behavior could be packaged into a library that provides the behavior you describe. We may add this as core functionality as well, but we're still researching a variety of options.
My current application developed in Unity uses Firebase Realtime Database with database persistence enabled. This works great for offline use of the application (such as in areas of no signal).
However, if a new user runs the application for the first time without an internet connection, the application freezes. I am guessing it's because it needs to pull down the database for the first time in order for persistence to work.
I am aware of threads such as: 'Detect if Firebase connection is lost/regained' that talk about handling database disconnection from Firebase.
However, is there any way I can check whether it is the user's first time using the application (e.g. via the presence of the persisted database)? I could then inform them that they must go online for 'first time setup'.
In addition to #frank-van-puffelen's answer, I do not believe that Firebase RTDB should itself cause your game to lock up until a network connection occurs. If your game is playable on first launch without a network connection (i.e. your logic itself doesn't require some initial state from the network), you may want to check the following issues:
Make sure you can handle null. If your game logic is in a Coroutine, Unity may decide to silently stop it rather than fully failing out.
If you're interacting with the database via Transactions, generally assume that a transaction will run twice (once against your local cache, then again when the cache is synced with the server, if the value differs). This means that the first time you perform a change via a transaction, you'll likely have a null previous state (see the sketch after this list).
If you can, prefer to listen to ValueChanged over GetValueAsync. You'll always get this callback on your main Unity thread, you'll always get the callback once on registration with the data in your local cache, and the data will be updated whenever the server's data changes. Further, per #frank-van-puffelen's answer elsewhere, if you're using GetValueAsync you may not get the data you expect (including a null if the user is offline). If your game is frozen because it's waiting on a ContinueWithOnMainThread (always prefer this to ContinueWith in Unity unless you have a reason not to) or an await statement, switching to ValueChanged may work around this as well (although I don't think it should be necessary).
Double-check your object lifetimes. There are a ton of reasons that an application may freeze, but when dealing with asynchronous logic, definitely make sure you're aware of the differences between Unity's GameObject lifecycle and C#'s typical object lifecycle (see this post and my own on interacting with asynchronous logic with Unity and Firebase). If an object's OnDestroy is invoked before your await, ContinueWith[OnMainThread], or ValueChanged callback runs, you're in danger of null references in your own code. This can happen when a scene changes, on the frame after Destroy is called, or immediately after a DestroyImmediate.
Finally, many Firebase functions have an async and a synchronous variant (ex: CheckDependencies and CheckDependenciesAsync). I don't think there are any to call out for Realtime Database proper, but if you use the non-async variant of a function (or if you spin-lock on the task completing, including forgetting to yield in a coroutine), the game will definitely freeze for a bit. Remember that any cloud product is I/O-bound by nature and will typically run slower than your game's update loop (although Firebase does its best to be as fast as possible).
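On the transaction point above: the question is about Unity/C#, but the run-twice semantics are the same in the JavaScript SDK, so here is an illustrative sketch of handling the null first run (path and logic are made up):

    firebase.database().ref('scores/player1').transaction(function (current) {
      // On the first, local run `current` can be null if the value is not
      // in the cache yet; always handle that case instead of crashing.
      return (current || 0) + 1;
    });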
I hope this helps!
--Patrick
There is nothing in the Firebase Database API to detect whether its offline cache was populated.
But you can detect when you make a connection to the database, for example by listening to the .info/connected node. The first time it is set to true, you can set a flag in local storage, for example in PlayerPrefs.
With that code in place, you can then check whether the flag is set in PlayerPrefs, and if not, show a message telling the user they need a network connection so you can download the initial data.
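Since the question is about Unity, the real implementation would be C# with PlayerPrefs; purely to illustrate the idea, here is the equivalent flow in the JavaScript SDK, with localStorage standing in for PlayerPrefs and the key name made up:

    // Remember the first time we ever reach the Firebase backend.
    firebase.database().ref('.info/connected').on('value', function (snap) {
      if (snap.val() === true) {
        localStorage.setItem('hasConnectedOnce', 'true');
      }
    });

    // On startup: no flag means the initial download never happened.
    if (!localStorage.getItem('hasConnectedOnce')) {
      // Show your "first-time setup needs a connection" message here.
    }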
I'm learning Meteor and fundamentally enjoy how fast I can build data-driven applications. However, as I went through the Creating Posts chapter in the Discover Meteor book, I learned about using server-side Methods. Specifically, the primary reason (among a number of very valid reasons to use them) was the timestamp: you wouldn't want to rely on the client date/time, you'd want to use the server date/time.
Makes sense, except that in almost every application I've ever built, we store the create/update date/time of a row in a column. Effectively every single create or update to the database records a date/time, which in Meteor now looks like it would require server-side Methods to ensure data integrity.
If I'm understanding correctly, that pretty much eliminates the ease of use and real-time nature of a client-side Collection, because I'll need to use Methods for almost every single create and update to our databases.
Just wanted to check and see how everyone else is doing this in the real world. Are you calling a server-side Method that just returns the date/time and then using a client-side Collection, or something else?
Thanks!
The short answer to this question is that yes, every operation that affects the server's database will go through a server-side method. The only difference is whether you are defining this method explicitly or not.
When you are just getting started with Meteor, you will probably do insert/update/remove operations directly on client collections, using validators to check whether the operation is allowed. This usage actually calls predefined methods on both the server and the client (for a collection named foo, you have /foo/insert, for example), which simply check the specified validators before doing the operation. As you become more familiar with Meteor, you will probably override these default methods, for the reasons you described (among others).
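For concreteness, that validator layer looks something like this (the collection name and ownership rule are made up):

    Foo = new Meteor.Collection('foo');

    // These validators run inside the predefined /foo/insert etc. methods.
    Foo.allow({
      insert: function (userId, doc) {
        // Only logged-in users may insert documents they own.
        return !!userId && doc.owner === userId;
      },
      update: function (userId, doc, fields, modifier) {
        return doc.owner === userId;
      },
      remove: function (userId, doc) {
        return doc.owner === userId;
      }
    });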
When using your own methods, you will typically want to define a method both on the server and the client, just as the default collection functions do for you. This is because of Meteor's latency compensation, which allows most client operations to be reflected immediately in the browser without any noticeable lag, as long as they are permitted. Meteor does this by first simulating the effect of a method call in the client, updating the client's cached data temporarily, then sending the actual method call to the server. If the server's method causes a different set of changes than the client's simulation, the client's cache will be updated to reflect this when the server method returns. This also means that if the client's method would have done the same as the server, we've basically allowed for an instant operation from the perspective of the client.
By defining your own methods on the server and client, you can extend this to fit your own needs. For example, if you want to insert timestamps on updates, have the client insert its own timestamp in the simulation. The server will insert an authoritative timestamp, which will replace the client's timestamp when the method returns. From the client's perspective, the insert operation will be instant, except for an update to the timestamp if the client's clock happens to be way off. (By the way, you may want to check out my timesync package for displaying relative server time accurately on the client.)
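A minimal sketch of that timestamp pattern (the method and collection names are made up); defining it in shared code gives you both the client simulation and the authoritative server run:

    Posts = new Meteor.Collection('posts');

    Meteor.methods({
      createPost: function (title) {
        return Posts.insert({
          title: title,
          // The client's clock is only a placeholder for the simulation;
          // the server's value wins and replaces it when the method returns.
          createdAt: new Date()
        });
      }
    });

    // Client code: the post appears instantly thanks to the simulation.
    // Meteor.call('createPost', 'Hello world');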
A final note: it's good to understand what scope you are doing collection operations in, as this was one of the things that originally confused me about Meteor. For example, if you have a client-side collection instance Foo, then Foo.insert() in normal client code will call the default pair of client/server methods. However, Foo.insert() inside a client method runs only in a simulation and will never call server code, so you will need to define the same method on the server and make sure you do Foo.insert() there as well for the method to work properly.
A good rule of thumb for moving forward is to replace groups of validated collection operations with your own methods that do the same operations, and then add specific extra features on the server and client respectively.
In short: yes!
Publications exist to send out a live, dynamic subset of the database to the client, sending DDP added messages for existing records, followed by a ready message, and then added, changed, and removed messages to keep the client's cache consistent.
Methods exist to cause Mongo updates, directly or indirectly, and as Andrew mentioned, they are always in use.
But truly, because of Meteor's publication architecture, any edit to a collection that is currently being published to at least one client will go out via DDP, regardless of the source of the change to Mongo, even an outside process.
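To make the publication side concrete, a sketch (names are illustrative):

    // Server: publish a live cursor; Meteor observes it and sends the
    // DDP added/ready/changed/removed messages described above.
    Meteor.publish('recentWidgets', function () {
      return Widgets.find({}, { sort: { createdAt: -1 }, limit: 20 });
    });

    // Client:
    Meteor.subscribe('recentWidgets');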
I'm building a somewhat big application. When I change code, the server restarts and forces a refresh on the client.
The client keeps its session data, but I seem to lose the previously synced Meteor.Collection data, forcing my users to re-sync everything.
I use 0.5.7 (and did not see anything about this in 0.5.8).
Is that the expected behavior, or am I missing something?
This can be tested by adding something like the following at client startup (assuming Components is your Meteor.Collection):

    console.log("Length: ", Components.find().fetch().length);
No, you're not missing anything. The collection data should be re-synced on code pushes. However, if your collection data takes more than a second or two to load in, you should look into trying to send less data to the client by creating finer-grained subscriptions that only send data the client needs at the moment.
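As a sketch of that narrowing (names and fields are made up), publish only what the current view needs instead of the whole collection:

    // Server: limit both the documents and the fields sent down.
    Meteor.publish('componentsForPage', function (pageId) {
      return Components.find({ pageId: pageId },
                             { fields: { name: 1, pageId: 1 } });
    });

    // Client: re-subscribe as the user navigates.
    Meteor.subscribe('componentsForPage', Session.get('currentPage'));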
I am interested in creating an application using the Meteor framework that will be disconnected from the network for long periods of time (multiple hours). I believe Meteor stores local data in RAM in a mini-MongoDB JS structure. If the user closes the browser or refreshes the page, all local changes are lost. It would be nice if local changes were persisted to disk (localStorage? IndexedDB?). Any chance that's coming soon for Meteor?
Related question... how does Meteor deal with document conflicts? In other words, if 2 users edit the same MongoDB JSON doc, how is that conflict resolved? Optimistic locking?
Conflict resolution is "last writer wins".
More specifically, each MongoDB insert/update/remove operation on a client maps to an RPC. RPCs from a given client always play back in order. RPCs from different clients are interleaved on the server without any particular ordering guarantee.
If a client tries to issue RPCs while disconnected, those RPCs queue up until the client reconnects, and then play back to the server in order. When multiple clients are executing offline RPCs, the order they finally run on the server is highly dependent on exactly when each client reconnects.
For some offline mutations, like MongoDB's $inc and $addToSet, this model works pretty well as-is. But many common modifiers like $set won't behave very well across long disconnects, because the mutation will likely conflict with intervening changes from other clients.
So building "offline" apps takes more than persisting the local database. You also need to define RPCs that implement some type of conflict resolution. Eventually we hope to have turnkey packages that implement various resolution schemes.
At a glance, it seems that the source code of a Meteor app is open to clients, due to the 'write one JavaScript file, run it on client and server at once' theme.
If the server-side source code of a particular app is open to clients, wouldn't it be easy for a random person to copy it and create a very similar-looking app?
Wouldn't it be easy for a person with evil intent to find security holes in the app, because its server-side code is open to the public?
For instance, in Meteor 0.5.0's new parties example app, the model.js file seems to be sent to the client side as well.
Am I misunderstanding something here?
Edit
Here is the part that I do not understand.
According to http://docs.meteor.com/#structuringyourapp,
Files outside the client and server subdirectories are loaded on both the client and the server! That's the place for model definitions and other functions
I really do not understand it. If every model implementation (including DB interaction) is sent to the client, wouldn't the app be less secure and easily copied by other developers?
Any code in the server/ folder will not get sent to the client (see http://docs.meteor.com/#structuringyourapp)
EDIT
Regarding the second part:
Any code not in client/ or server/ is code you want to run both client and server side. So obviously it must be sent to the client.
The reason you would place model code there is latency compensation. If you want to make updates to your data, it's best to apply them immediately client-side and then run the same code server-side to 'commit' them for real. There are many cases where this makes sense.
If there is 'secret' model code that you don't want to run client side, you can certainly have a second server/models.js file.
The best way to secure a client-server app is by writing explicit security checks on the server, rather than hiding the database update logic from the client.
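As an illustrative sketch of such an explicit check (the collection, method name, and ownership field are made up):

    Meteor.methods({
      deletePost: function (postId) {
        var post = Posts.findOne(postId);
        // The client may simulate this too, but only the server's verdict
        // counts; this check, not hidden code, is what protects the data.
        if (!post || post.owner !== this.userId) {
          throw new Meteor.Error(403, "Not authorized");
        }
        Posts.remove(postId);
      }
    });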
For a longer explanation of the security model, see https://stackoverflow.com/a/13334986/791538.