Meteor DDP - "ready" and "updated" messages clarification

I'm currently implementing a DDP client based on the specs available on this page:
https://github.com/meteor/meteor/blob/master/packages/livedata/DDP.md
I just have a doubt concerning the two message types called "ready" and "updated".
Let's start with "ready". According to the spec:
When one or more subscriptions have finished sending their initial
batch of data, the server will send a ready message with their IDs.
Does this mean that we can receive several "added" messages from the server until the whole collection has been completely transferred to the client? Should we store these in a temporary place and wait for the "ready" semaphore before making them public, i.e. in the real collection?
The same question applies to remote procedure calls. Should I store the result in a temporary collection and only return (process) the result once the "updated" message is received?
This part is obscure:
Once the server has finished sending the client all the relevant data messages based on this procedure call, the server should send an
updated message to the client with this method's ID.
"Should", so I'm stuck if I do rely on it but nothing ?

Should we store these in a temporary place and wait for the "ready" semaphore before making them public, i.e. in the real collection?
The standard Meteor JavaScript client makes added documents available in the client collection as they come in from the server. So if, for example, the collection is being displayed on the web page and 5 of 100 documents have arrived so far, the user will be able to see the 5 documents.
When the subscription "ready" message arrives, the subscription on the client is marked as "ready", which the client can use if they're doing something that needs to wait for all the data to arrive.
Whether you want to wait in your client for all the data to arrive before making it public is up to you... it depends on what you're doing with your client and if you want to show documents as they arrive or not.
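For illustration, here is a minimal sketch of that behavior in a custom client (not Meteor's actual implementation; the data structures are assumptions):

// Minimal sketch: apply "added" messages to a local collection as they
// arrive, and flip a ready flag when the "ready" message lists the sub.
const collections = new Map(); // collection name -> Map of id -> fields
const readySubs = new Set();   // IDs of subscriptions marked ready

function handleMessage(msg) {  // msg: a parsed DDP message object
  if (msg.msg === 'added') {
    if (!collections.has(msg.collection)) {
      collections.set(msg.collection, new Map());
    }
    // Documents become visible immediately, as the Meteor client does.
    collections.get(msg.collection).set(msg.id, msg.fields || {});
  } else if (msg.msg === 'ready') {
    // msg.subs is an array of subscription IDs that are now complete.
    msg.subs.forEach((subId) => readySubs.add(subId));
  }
}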
"Should", so I'm stuck if I do rely on it but nothing ?
The Meteor server does send the "updated" message, so you can rely on it.
The same question applies to remote procedure calls. Should I store the result in a temporary collection and only return (process) the result once the "updated" message is received?
There are two outcomes from making a method call: the return value (or error) returned by the method (the "result" message), and documents that may have been inserted / updated / removed by the method call (the "updated" message). Which one you want to listen to is up to you: whether it's important for you to know when you've received all the document changes coming from the method call or if you just want the method return value.
The "updated" message is used by the Meteor client to perform "latency compensation": when the client changes a local document, the change is applied immediately to the local document (and the changes will be visible to the user)... on the assumption that the changes will likely be accepted by the server. Then the client makes a method call requesting the change, and waits for the updated documents to be sent from the server (which may include the changes if they were accepted, or not, if they were rejected). When the "update" message is received, the local changes are thrown away and replaced by the real updates from the server. If you're not doing latency compensation in your own client then you may not care about the "updated" message.

Related

What is the HTTP network flow used by Firestore JS SDK?

We use the Firebase JS SDK heavily, and at one point encountered an issue where updates made by the SDK (using updateDoc) did not reach the server (though no network issues were detected). The same happened for onSnapshot calls (no updates were retrieved from the server, although there were updates).
In order to better understand what happens, we wanted to understand what happens on the network layer (in terms of HTTP requests flow), especially when we listen for updates, or perform updates.
The question was created in order to share our findings with others who are wondering about this, and in case someone from the Firebase team can confirm.
We performed updateDoc and onSnapshot calls using Firebase JS SDK for Firestore, and expected to be able to associate the network requests seen in Chrome Dev Tools to these SDK calls.
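Roughly, the calls we exercised looked like this (Firebase v9 modular API; the collection and document names here are placeholders):

import { initializeApp } from 'firebase/app';
import { getFirestore, doc, onSnapshot, updateDoc, setDoc } from 'firebase/firestore';

const app = initializeApp({ /* your Firebase config */ });
const db = getFirestore(app);

// Listening: maps to the LISTEN addTarget POST + pending GETs described below.
const unsubscribe = onSnapshot(doc(db, 'rooms', 'roomA'), (snap) => {
  console.log('current data:', snap.data());
});

// Updating: maps to the write POST/GET/terminate sequence described below.
updateDoc(doc(db, 'rooms', 'roomA'), { name: 'updated name' });

// Setting data: also covered in the observations below.
setDoc(doc(db, 'rooms', 'roomB'), { name: 'initial name' });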
Below are the observations regarding the HTTP calls flow.
When we initialize Firestore, a Web Channel is created, through which the traffic between the client and the server goes.
The Web Channel uses gRPC over HTTP/2 (thanks Frank van Puffelen for the tip)
The default behavior is as follows:
Commands (e.g. start listen, set data) are sent using POST requests.
The response includes SID, which identifies the command lifecycle. Note that start listen commands (on different docs) will share the same SID.
SSE-based (server-sent events) GET requests with a ±1 minute interval are used to listen for results from the commands (with the appropriate SID).
After the interval ends, a new SSE-based GET request will be created unless the command finished (e.g. stop listen was called, data was set).
If the command finished, and no additional GET request is required, a POST terminate request will be sent with the relevant SID.
The listen (onSnapshot) behavior by default is as follows:
Whenever onSnapshot is called, we have a POST request to a LISTEN endpoint to addTarget which returns immediately with SID. The addTarget includes the document path and a targetId generated by the client for later use. If there's already an open onSnapshot, it will use the same SID.
Following the POST response, a pending GET request is created which receives changes. It will use the same SID.
The pending GET request receives changes for all target (listened) objects (e.g. documents).
The GET request remains open for about a minute (sometimes less, sometimes more).
Every time new data arrives, it is processed, but the GET request remains open until its interval (±1 minute) has passed.
The GET request also receives the initial values once a target was added (via onSnapshot).
Once the interval has passed, the pending GET request ends (and we can see the response).
Once it ends, it is replaced by a new pending GET request, which behaves the same way.
Whenever we unsubscribe from a listener, there is a POST request to a LISTEN endpoint to removeTarget.
After unsubscribing from all targets, the GET request will still end after the interval completes. However, no new GET request will initiate, and a terminate POST request will be sent.
Setting data for a doc (setDoc) behaves as follows:
A POST request is sent to the backend in order to start a session and receive SID.
A pending GET request is created, on which we aim to receive updates when the data is set in the backend. The pending GET request lives for ±1 minute.
Another POST request is created with the update details (e.g. update field f1 on document d1).
Once the Firestore backend has updated the data, we'll receive the update operation details on the GET request, which will remain pending until the ±1 minute interval completes.
After the interval completes, and if the data was stored in the backend DB, the GET request will end, and a terminate POST request will be sent.
With long-polling (set by experimentalForceLongPolling) we have the following notable changes:
Every time there's a change in an object we listen on, the pending GET request is ended and a new one is created instead.
Instead of ±1 minute GET request, we have a ±30 seconds GET request followed by a ±15 seconds GET request.
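For reference, forcing long polling is done at initialization; a sketch with the v9 modular API:

import { initializeApp } from 'firebase/app';
import { initializeFirestore } from 'firebase/firestore';

const app = initializeApp({ /* your Firebase config */ });
// Replaces getFirestore(app); forces the long-polling transport.
const db = initializeFirestore(app, { experimentalForceLongPolling: true });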

Database rollback on API response failure

A customer of ours is very insistent that they expect this "from any API" (meaning they don't want to pay for the changes). I have had trouble finding clear information on this, though.
Say we have an API that creates an appointment for a calendar. Server-side everything was successful, data is committed to the database. API tries to send the HTTP 201 (Created) response, but something goes wrong there. Client ignores the response, or connection dropped, ...
They want our API to undo the database changes in that particular situation.
The question is not how to do this, but rather if this is something most APIs do? Is this standard behavior? Or something similar like refusing duplicate create requests?
The difficult part, of course, is to actually know whether the API failed to send the response. As far as the crux of the question is concerned, this is not usual behavior to implement. If the user willingly inputs the data, you can go ahead and store it. If the response doesn't return properly due to a timeout (you are not responsible for the user "ignoring" the response), then the client-side code can refresh on failure and load fresh data, and the user can delete the inputted data themselves (given you provide an endpoint for that).
Depending on the database, it is possible to make all database changes of an API reversible. For example, with SQL you use transactions, with commit, rollback and savepoints. There is most likely a similar mechanism available for NoSQL databases.
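As an illustration of that reversibility, a sketch using node-postgres as an assumed driver (the table and columns are hypothetical):

const { Client } = require('pg');

async function createAppointment(client, appt) {
  try {
    await client.query('BEGIN'); // start the transaction
    await client.query(
      'INSERT INTO appointments (id, starts_at) VALUES ($1, $2)',
      [appt.id, appt.startsAt]
    );
    await client.query('COMMIT'); // changes become durable here
  } catch (err) {
    await client.query('ROLLBACK'); // undo everything since BEGIN
    throw err;
  }
}

Note that this protects against failures before the commit; once the commit has happened (i.e. before the 201 response is even sent), undoing the change would be a separate, application-level compensation step.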

Meteor: Does subscribing to a publication download all the published data to the client?

Question sounds pretty dumb, but I couldn't find an explicit answer to my question. I know that subscribing to a publication makes the data available to the client.
So what I want to understand is when the data gets downloaded into Minimongo. Does the client download all the data in a publication as soon as it subscribes to it?
Or (which would make more sense), does the client only download the data as soon as it starts querying for it? My terminology might be off, apologies for that. But maybe some code makes it clearer. All of the below is code run on the client side.
Subscribing:
const eventSub = Meteor.subscribe('getEvents');
const loading = !eventSub.ready();
Querying:
const fin = {_id:someid};
const eventData = loading ? null : Events.find(fin).fetch()[0]
Pub/sub in Meteor (as well as pretty much all client-server communication) is done using a protocol called DDP, usually over a web socket (where web sockets are not supported, there are fallbacks).
When a client subscribes to a publication, it sends a message to the server requesting a subscription. This invokes the handler (the publication function that you define and supply to Meteor.publish, in your case), which can return a Mongo cursor, an array of cursors, or handle the lower-level details of publishing on its own.
If the function returns a cursor, the server observes it and sends messages describing the data as soon as it can. At first, all matching documents are sent as added messages to the client, which are automatically translated into documents in MiniMongo.
Later changes are sent by the server's cursor observer when the server notices them.
ready is another message sent by the server and it tells the client that the server has sent it whatever it had at that moment.
This means that the data is sent to the client immediately (or, at least, ASAP), but not synchronously, and not in a single message.
A reactive computation (using Tracker) can be used to subscribe, get the subscription's ready state and query for data as needed, as the ready() method of that object is "reactive".
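Building on the snippet from the question, such a reactive computation might look like this (a sketch; someid is assumed to be in scope):

Tracker.autorun(() => {
  const eventSub = Meteor.subscribe('getEvents');
  if (eventSub.ready()) {
    // Re-runs when ready() flips; by now the initial batch has arrived.
    const eventData = Events.find({ _id: someid }).fetch()[0];
    // ... use eventData ...
  }
});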

When I run Meteor.disconnect() and then Meteor.reconnect(), Meteor clears minimongo, how can I prevent this?

We are using fast render in our app, so all the data the app needs is sent down with the app itself. We are not using any Meteor.subscribe calls since minimongo is populated by fast render.
Once rendered we run Meteor.disconnect()
At some point in the future we want to reconnect to call a specific method, but when we reconnect, minimongo gets cleared.
How can we prevent Meteor from clearing all documents in minimongo upon reconnect?
I suspect that it's actually fast render that is causing your problem. Checking the meteor docs for Meteor.disconnect()...
Call this method to disconnect from the server and stop all live data updates. While the client is disconnected it will not receive updates to collections, method calls will be queued until the connection is reestablished, and hot code push will be disabled.
Call Meteor.reconnect to reestablish the connection and resume data transfer.
This can be used to save battery on mobile devices when real time updates are not required.
This implies that your client data is never deleted, otherwise you could not "resume data transfer" upon reconnection. It also would mean that one of their primary intended use cases for this method (e.g. "used to save battery on mobile devices when real time updates are not required") would not actually work.
Just to be absolutely sure, I checked the Meteor source to see what happens on a disconnect, and all it does is set a connection state var to false, clear the connection and heartbeat timers, and cancel any pending meteor method calls.
Similarly, Meteor.reconnect() simply sets the connection state var back to true, re-establishes the connection and heartbeat timers, re-establishes any subscriptions (so that new data can be acquired... this action does not delete client data), and calls any queued-up meteor method calls.
After reading more about how fast render works, I understand that a lot of hacking was done to get it to actually work. The main hack that jumped out to me is the "fake ready" hack, which tricks the client into thinking the subscription is ready before the actual subscription is ready (since the data was sent to the client on the initial page load).
Since you have no subscriptions in your app and a Meteor.reconnect() does not cause your page to reload, I'm wondering if the client is never doing anything because it never receives another ready message. Or maybe, since Meteor isn't aware of any subscriptions (since fast render bypasses Meteor to transport data), it clears the client minimongo cache so it's in a good state if a new subscription is started. Or, there could be something else about fast render that is getting in the way.
Long story short, I'm quite certain that Meteor.disconnect() and Meteor.reconnect() have no impact on your client minimongo data, based upon reviewing the documentation and the source, and based upon my experience testing my meteor apps offline.
I can confirm Meteor.reconnect() does not delete data, as I have a meteor app in production that calls Meteor.reconnect() whenever it detects that it has lost a connection (e.g. the computer goes offline, network outage, etc.); see the sketch below.
Hopefully this long winded answer helps you track down what's going on with your app.
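The reconnect-on-lost-connection pattern mentioned above can be sketched with Meteor's reactive connection status:

Tracker.autorun(() => {
  const status = Meteor.status(); // reactive connection status
  if (!status.connected && status.status === 'waiting') {
    Meteor.reconnect(); // retry immediately instead of waiting out the backoff
  }
});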
I tried Meteor.disconnect() and Meteor.reconnect() and the Minimongo DB was not cleared. I confirmed it using:
a) Minimongo explorer: https://chrome.google.com/webstore/detail/meteor-minimongo-explorer/bpbalpgdnkieljogofnfjmgcnjcaiheg
b) A helper to return a message if at some point during reconnection my collection would have zero records (a sketch of such a helper appears at the end of this answer).
You are right, though: all the data in the subscription was sent from server to client after reconnection (leaving the local DB to do the sync work). This happens because the Meteor server treats the reconnection as a completely new connection. It seems that in the future (uncertain) Meteor will implement a real reconnection, as stated in their docs:
Currently when a client reconnects to the server (such as after temporarily losing its Internet connection), it will get a new connection each time. The onConnection callbacks will be called again, and the new connection will have a new connection id.
In the future, when client reconnection is fully implemented, reconnecting from the client will reconnect to the same connection on the server: the onConnection callback won't be called for that connection again, and the connection will still have the same connection id.
Source: https://docs.meteor.com/api/connections.html
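For completeness, the zero-records check mentioned in (b) could be sketched like this (MyCollection is a placeholder for your collection):

Tracker.autorun(() => {
  // Reactively watch the document count; warn if it ever drops to zero.
  if (MyCollection.find().count() === 0) {
    console.warn('minimongo collection is empty during/after reconnection');
  }
});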

SMPP optional parameters

Good day guys! I'm currently working on a system using JMS queues that sends messages over SMPP (using the Logica SMPP library).
My problem is that I need to attach an internal ID (that we manage within our system) to the message sequence ID, so that when I receive a response in async mode, the proper action can be taken for that particular message.
The first option I tried was to use optional parameters, as established in SMPP 3.4. However, I do not receive the optional parameters in the response (I've read that whether the response carries the optional parameters depends on the provider).
A second approach was to keep a mapping in memory for those messages until their responses are received (it saturates the memory, so it is a no-go).
Can anyone else think on a viable solution for correlating an internal system ID of a message to its sequence number within an asynchronous SMPP environment?
Thank you for your time.
You need to keep a map of seq_nr to internal message ID and delete entries from this map as soon as you get an async response back from the SMSC.
It should not saturate the memory, as it will only hold in-flight messages, but you need to periodically iterate over the map and delete orphaned entries (sometimes you will not get a response back from the SMSC). A sketch follows below.
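A sketch of that map with a periodic sweep (shown in JavaScript for brevity; the timeout value is an assumption):

const inflight = new Map(); // seq_nr -> { internalId, sentAt }

function onSubmit(seqNr, internalId) {
  inflight.set(seqNr, { internalId, sentAt: Date.now() });
}

function onResponse(seqNr) {
  const entry = inflight.get(seqNr);
  inflight.delete(seqNr); // free the slot as soon as the response arrives
  return entry ? entry.internalId : null; // correlate back to your system ID
}

// Periodically drop orphaned entries that never got a response.
setInterval(() => {
  const cutoff = Date.now() - 60 * 1000; // example timeout: 60 seconds
  for (const [seqNr, entry] of inflight) {
    if (entry.sentAt < cutoff) inflight.delete(seqNr);
  }
}, 30 * 1000);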
