SignalR and client data consistency

I'm about to implement a client check that warns if the data state is not as it should be. The server pushes data to the clients whenever it changes, and the server always knows the current state. Is there anything in the SignalR framework to help me write these checks? I guess such checks are necessary (when using SignalR and a push strategy) if consistency is important?
I'm thinking of adding a continuous ID to every data entity sent. Then both the client and the server could hash the IDs and compare their state.
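A minimal sketch of that ID-hash comparison (the names are hypothetical; Node's crypto module is assumed here, a browser client would use SubtleCrypto instead):

```typescript
import { createHash } from "crypto";

// Both sides compute a digest over the sorted entity IDs they currently hold.
function stateDigest(ids: number[]): string {
  const canonical = [...ids].sort((a, b) => a - b).join(",");
  return createHash("sha256").update(canonical).digest("hex");
}

// The server pushes its digest alongside updates; the client compares it with its own.
function isConsistent(clientIds: number[], serverDigest: string): boolean {
  return stateDigest(clientIds) === serverDigest;
}
```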

SignalR helps with the communication channel, but the data validation is up to your app to handle.
Also, if you want to know when something actually changed, rather than just accepting an unchanged copy of the data, you would need to impose that check yourself. You could expose the incoming data as an Rx Observable and use DistinctUntilChanged to guard against this case.
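For that second point, a small RxJS sketch (the stream and its fields are made up for illustration) showing how DistinctUntilChanged drops pushes that carry no actual change:

```typescript
import { Subject } from "rxjs";
import { distinctUntilChanged } from "rxjs/operators";

// Hypothetical stream of resource snapshots received from the hub.
const updates$ = new Subject<{ id: number; version: number }>();

updates$
  .pipe(
    // Drop a push if it is identical (by id and version) to the previous one.
    distinctUntilChanged((a, b) => a.id === b.id && a.version === b.version)
  )
  .subscribe((resource) => {
    // Only genuinely changed resources reach this handler.
    console.log("changed:", resource);
  });
```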

Related

Is end-to-end encryption possible with Realm Mobile Platform?

On the client device, a synced Realm can be set up with an encryption key that's unique to the user and stored in the device keychain, so data is stored encrypted on the client.
(related question: Can "data at rest" in the Realm Mobile Platform be encrypted?)
Realm Object Server and the clients can communicate via TLS, so data is encrypted in transit.
But the Realm Object Server does not appear to store data using encryption, since an admin user is able to access all the database contents via Realm Browser (https://realm.io/docs/realm-object-server/#data-browser).
Is it possible to set up Realm Mobile Platform so that user data is encrypted end-to-end, such that no one but the user (not even a server admin) has access to the decryption key?
Due to the way we handle conflict resolution, we currently are unable to provide end-to-end encryption, as you correctly deduced. Let's go a tiny bit into detail with regards to the conflict resolution.
In order to handle conflicts the way we do, we use something called operational transformation. This means that instead of sending the data over directly, the client tells the server the intent of the change rather than the result. For example, when two users edit a text field, we would tell the server insert(data='new text', offset=0) because the first user prepended data at the beginning of the text field, and insert(data='some more stuff', offset=10) because the second user added data in the middle of the field. These two separate operations allow the server to uniquely resolve what happened and merge the two writes without conflict.
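To illustrate the idea only (this is a toy sketch of operational transformation with two concurrent inserts, not Realm's actual implementation):

```typescript
type Insert = { data: string; offset: number };

// Apply an insert operation to a string.
function apply(text: string, op: Insert): string {
  return text.slice(0, op.offset) + op.data + text.slice(op.offset);
}

// Shift op2 so it still points at the intended position after op1 has been applied.
function transform(op2: Insert, op1: Insert): Insert {
  return op1.offset <= op2.offset
    ? { ...op2, offset: op2.offset + op1.data.length }
    : op2;
}

const base = "0123456789 original text";
const op1: Insert = { data: "new text", offset: 0 };         // first user prepends
const op2: Insert = { data: "some more stuff", offset: 10 }; // second user inserts mid-field

// The server applies op1, transforms op2 against it, and both edits survive.
const merged = apply(apply(base, op1), transform(op2, op1));
```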
This also means that if we encrypt everything, the server would be unable to handle this conflict resolution.
This being said, that's how things stand in the current version. We do have a number of thoughts on how we could handle this in the future while providing (some degree of) encryption. Mainly this would mean more work on the client, and perhaps finding a new algorithm that would allow us to tell the client the intent and let the client figure out how to merge everything. This is a quadratic problem, though, so we're reluctant to put too much work on the client side, as it could really drain the battery.
That might be acceptable for some users, which is why we're looking into it. Basically, there will be a trade-off. As the old adage goes: fast, secure, convenient: pick two. We just have to figure out how to handle this properly.
I just opened a feature request around possibly using Tresorit's ZeroKit to solve the end-to-end encryption question posed. It sounds like the conflict resolution implementation will still cause an issue, but maybe there is a different conflict resolution level that can be applied for those who don't need real-time dynamic editing of individual data fields (like patient health data, where only a single clinician ever really edits a record at any given time).
https://github.com/realm/realm-mobile-platform/issues/96

Headers in SignalR response

I am writing an application using SignalR which acts as a push service of resources for clients. The service sends notifications to its clients regarding state changes of resources at regular intervals. Sometimes the resource state remains the same, and in such cases I need a way to convey that to clients without sending the same resource again. In a way, I want to implement something like ETags with SignalR. To do that I need to modify the SignalR response headers (or I could use the query string).
Is there a way to do this?
You're saying you want to pass the same state to clients (but with a NOT MODIFIED kind of header), and at the same time not to pass it if it is unchanged.
In that case you should actually not send any message to the clients when the server has nothing new to give them. This is how real-time applications are supposed to work.
Also, instead of adding the overhead of extra headers, why don't you just send a simple SignalR message to the clients saying "Hey, nothing modified. Just chill!"?
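A minimal client-side sketch of that idea, assuming the @microsoft/signalr TypeScript client and hypothetical hub method names (resourceUpdated / resourceUnchanged):

```typescript
import * as signalR from "@microsoft/signalr";

const cache = new Map<string, unknown>();

const connection = new signalR.HubConnectionBuilder()
  .withUrl("/resourceHub") // hypothetical hub endpoint
  .build();

// Full payload is pushed only when the resource has actually changed.
connection.on("resourceUpdated", (resource: { id: string }) => {
  cache.set(resource.id, resource);
});

// Lightweight "nothing modified" ping instead of re-sending the same resource.
connection.on("resourceUnchanged", (resourceId: string) => {
  // Keep whatever is already cached; optionally record when it was last checked.
  console.log(`resource ${resourceId} unchanged`);
});

connection.start();
```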

Rebus Pub / Sub Encryption

Is it possible to turn off message encryption on subscription messages?
We want to let external parties subscribe to messages via Azure Service Bus and SAS keys per queue, but we use message encryption and don't want to expose that key to external parties.
One way would be to create another bus for that, but that just seems complicated; is there another way?
Also, I want to thank you for the quick responses here on Stack Overflow and for a great product.
You could probably achieve what you want by wrapping the outgoing step that encrypts the message body in something that would decide to invoke the step (and thus encrypt the body) or not, depending on the message's intent value in the headers, but I think that is a little bit icky.
The reason I think it is icky, is because I think you should treat your encrypted bus as an internally used bus only, and then have a completely separate bus that you use to publish messages to third parties.
This separation would have the benefit that you can update internal message schemas etc. without worrying about breaking anything in the integration with your external parties.

Would real world Meteor application use server side Methods almost exclusively?

I'm learning Meteor and fundamentally enjoy how fast I can build data-driven applications. However, as I went through the Creating Posts chapter in the Discover Meteor book, I learned about using server-side Methods. Specifically, the primary reason given (and there are a number of very valid reasons to use these) was the timestamp: you wouldn't want to rely on the client date/time, you'd want to use the server date/time.
Makes sense, except that in almost every application I've ever built we store the date/time of row creation/update in a column. Effectively every single create or update to the database records a date/time, which in Meteor looks like it would require server-side Methods to ensure data integrity.
If I'm understanding correctly, that pretty much eliminates the ease of use and real-time nature of a client-side Collection, because I'll need to use Methods for almost every single update and create to our databases.
Just wanted to check and see how everyone else is doing this in the real world. Are you just querying a server side Method that just returns the date/time and then using client side Collection or something else?
Thanks!
The short answer to this question is that yes, every operation that affects the server's database will go through a server-side method. The only difference is whether you are defining this method explicitly or not.
When you are just getting started with Meteor, you will probably do insert/update/remove operations directly on client collections using validators, which check whether the operation is allowed. This usage is actually calling predefined methods on both the server and client (for a collection named foo you have /foo/insert, for example) which simply check the specified validators before doing the operation. As you become more familiar with Meteor you will probably override these default methods, for the reasons you described (among others).
When using your own methods, you will typically want to define a method both on the server and the client, just as the default collection functions do for you. This is because of Meteor's latency compensation, which allows most client operations to be reflected immediately in the browser without any noticeable lag, as long as they are permitted. Meteor does this by first simulating the effect of a method call in the client, updating the client's cached data temporarily, then sending the actual method call to the server. If the server's method causes a different set of changes than the client's simulation, the client's cache will be updated to reflect this when the server method returns. This also means that if the client's method would have done the same as the server, we've basically allowed for an instant operation from the perspective of the client.
By defining your own methods on the server and client, you can extend this to fill your own needs. For example, if you want to insert timestamps on updates, have the client insert whatever timestamp in the simulation method. The server will insert an authoritative timestamp, which will replace the client's timestamp when the method returns. From the client's perspective, the insert operation will be instant, except for an update to the timestamp if the client's time happens to be way off. (By the way, you may want to check out my timesync package for displaying relative server time accurately on the client.)
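A minimal sketch of that pattern (the collection and method names are made up), where the shared definition acts as the client simulation and the server's run is authoritative:

```typescript
import { Meteor } from "meteor/meteor";
import { Mongo } from "meteor/mongo";

// Hypothetical collection, shared by client and server code.
export const Posts = new Mongo.Collection("posts");

// Defined in shared code, so it runs as a client-side simulation and again on the server.
Meteor.methods({
  "posts.insert"(title: string) {
    Posts.insert({
      title,
      owner: this.userId,
      // The simulation uses the client's clock; when the server method returns,
      // its authoritative timestamp replaces this value in the client's cache.
      createdAt: new Date(),
    });
  },
});
```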
A final note: it's good to understand what scope you are doing collection operations in, as this was one of the things that originally confused me about Meteor. For example, if you have a collection instance in the client Foo, Foo.insert() in normal client code will call the default pair of client/server methods. However, Foo.insert() in a client method will run only in a simulation and will never call server code, so you will need to define the same method on the server and make sure you do Foo.insert() there as well for the method to work properly.
A good rule of thumb for moving forward is to replace groups of validated collection operations with your own methods that do the same operations, and then adding specific extra features on the server and client respectively.
In short: yes!
Publications exist to send out a 'live', and dynamic, subset of the database to the client, sending DDP added messages for existing records, followed by a ready, and then added, changed, and deleted messages to keep the client's cache consistent.
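For example, a small publication plus the matching subscription (reusing the hypothetical Posts collection from the sketch above):

```typescript
import { Meteor } from "meteor/meteor";
import { Posts } from "./posts"; // hypothetical shared collection module

if (Meteor.isServer) {
  // Publish a live subset; Meteor streams added/changed/removed DDP messages
  // to every subscriber as the underlying cursor results change.
  Meteor.publish("posts.recent", function () {
    return Posts.find({}, { sort: { createdAt: -1 }, limit: 20 });
  });
}

if (Meteor.isClient) {
  // Subscribing fills the client's local cache and keeps it consistent.
  Meteor.subscribe("posts.recent");
}
```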
Methods exist to (directly or indirectly) cause Mongo updates, and as Andrew mentioned, they are always in use.
But truly, because of Meteor's publication architecture, any edit to a collection that is currently being published to at least one client will be pushed out via DDP, regardless of the source of the change to Mongo, even an outside process.

intercept data changes and alter/validate from a server

I'm working on a solution to intercept changes to the data from our node.js server and validate/alter them before they are stored/synced to other clients.
Any strategies or suggestions on how to solve this with the current code base?
Currently, it seems like the only option is to rewrite the data after the sync operation. That would mean each client (including the server) would probably receive the sync, then the server would rewrite the data and trigger a second sync.
To help understand the context of the question, here's what seems like an ideal strategy for my needs:
server gets a special token/key not available to clients (when security comes about)
server registers a dependency injection like firebase.child('widgets').beforeSync(myCallback)
client syncs data
server callback is notified
server modifies or validates the data
if valid, it returns it to firebase for sync ops
if invalid, aborts the sync with an error which is returned to the client
Thanks for sharing your ideas!
We've considered this type of approach. You can actually simulate this sort of behavior by structuring your data so that there is an "unvalidated" tree and a "validated" tree.
The "unvalidated" tree would be writeable by the client and the server would monitor it for changes. When a change occurs it would validate the data, and if it passes it would copy it into the "validated" tree which is only writeable by the server. You could pass errors back to the client through Firebase data as well when the validation fails.
This behavior could be packaged into a library that provides the behavior you describe. We may add this as core functionality as well, but we're still researching a variety of options.
