In a Meteor app, if the server is unreachable, all pending requests are queued and resent when the server becomes available again;
this is great, but I would like to:
monitor the status of the connection, to show users that the app is currently offline
notify the user about how many pending requests are currently queued, and monitor them in order to notify when they have been successfully sent.
To be more clear: I'd like a way to know how many pending requests are currently queued (if any) and to get information about their status (to know when they are no longer pending).
As Mark Uretsky suggested in his comment, you can use a package like francocatena:status to get and display the connection status.
As for monitoring pending requests, there isn't a public API for that. However, it looks like you could currently use the _methodInvokers and/or _outstandingMethodBlocks properties of Meteor.connection to determine which method calls are still outstanding.
firebaseFirestore.collection("Test").document("test123").set(hashMap);
I want to write this document to the collection. When there is an internet connection, it should do the normal job and add the document to the server. But when the user is offline, the operation is scheduled and executed once the internet returns; instead, if there is no internet at the time of the write, I want the operation to be cancelled entirely, so that it is not sent even after the internet comes back.
Not having an internet connection is not considered an error condition by the Firestore SDK. Instead, it will keep the pending write in its local memory cache (and on disk, unless you disabled that) and send it to the server once the connection is established.
There is no configuration option or API call to prevent the Firestore SDK from caching pending writes. So you'll have to do something in your application code to accomplish the behavior you want.
The most common options are:
Detect whether there's an internet connection before writing to the database, and only perform the set call when there is a connection.
Add a completion listener to the set call, and set up a timer for how long you think a write should reasonably take. If the timer fires before the write has completed, clear the local cache to cancel all pending writes.
I would like to ask about the Rebus timeout manager. I know there are internal and external timeout managers, and I have been using the internal timeout manager for quite some time, sharing one timeout database (SQL Server) across all my endpoints.
I would like to know if this is correct.
Secondly I would like to know if I can also use one external Timeout Manager for all my endpoints.
My question comes from the fact that the information contained in the Timeouts table (id, due_time, headers, body) has no connection to the endpoint that sent a message to the timeout manager.
I just would like to get assurance.
Regards
You can definitely use the internal timeout manager like you're currently doing.
The MSSQL-based timeout storage is safe to use concurrently from multiple instances, as it uses some finely trimmed lock hints when reading due messages, thus preventing issues that could otherwise arise from concurrent access.
But it's also a valid (and often very sensible) approach to create a dedicated timeout manager and then configure all other Rebus instances to use that.
And you are absolutely right that the sender of the timeout is irrelevant. The recipient is determined when sending the timeout, so that
await bus.DeferLocal(TimeSpan.FromMinutes(2), "HELLO FROM THE PAST");
will send the string to the bus' own input queue, and
await bus.Defer(TimeSpan.FromMinutes(2), "HELLO FROM THE PAST");
will send the string to the queue mapped as the owner of string:
.Routing(r => r.TypeBased().Map<string>("string-owner"))
In both cases, the message will actually be sent to the timeout manager, which will read the rbs2-deferred-until and rbs2-defer-recipient headers and keep the message until it is due.
I made a custom service to cancel an order. After triggering it, when I get the server response 'OK' I reload the order list, but the order status isn't refreshed; it takes several seconds to do so.
Is there any service or flag that I could use to refresh the list when the data store state has changed?
I tried several services without success...
this.userOrderService.getCancelOrderSuccess().subscribe... // state still not cancelled
this.userOrdersEffect.resetUserOrders$.subscribe // neither this
I checked with my colleagues who are responsible for the order status implementation in the Commerce Cloud backend. Unfortunately, the behavior you have observed is working as designed: order status is updated in the Commerce Cloud backend asynchronously, and there is nothing we can do to catch the event when the order status actually changes. Perhaps you can check whether an email is sent once the order status change is done.
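Given that the backend updates the status asynchronously, one hedged workaround on the client is to poll the order until its status differs from the one it had before the cancellation. The sketch below is framework-agnostic; loadStatus stands in for whatever call reloads the order (e.g. via your order service) and the parameter names are illustrative:

```typescript
// Poll until the reported status differs from `previousStatus`,
// giving up after a fixed number of attempts.

async function pollUntilChanged(
  loadStatus: () => Promise<string>,
  previousStatus: string,
  { attempts = 5, delayMs = 2000 }: { attempts?: number; delayMs?: number } = {}
): Promise<string> {
  for (let i = 0; i < attempts; i++) {
    const status = await loadStatus();
    if (status !== previousStatus) return status; // backend caught up
    await new Promise((res) => setTimeout(res, delayMs));
  }
  return previousStatus; // gave up; status unchanged so far
}
```

After the cancel call returns 'OK', calling pollUntilChanged and then refreshing the list avoids showing the stale status for those few seconds.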
Best Regards,
Jerry
I am using SignalR in my web api to provide real-time functionality to my client apps (mobile and web). Everything works ok but there is something that worries me a bit:
The clients get updated when different things happen in the backend. For example, when one of the clients does a CRUD operation on a resource, that will be notified by SignalR. But what happens when something happens on the client, let's say the mobile app, and the device's data connection is dropped?
It could happen that another client has performed some action on a resource, and when SignalR broadcasts the message it doesn't reach that client. So that client will have a stale view state.
As I have read, it seems there's no way to know whether a message has been sent and received correctly by all the clients. So, besides checking the network state and doing a full reload of the resource list when this happens, is there any way to be sure message synchronization has been accomplished correctly on all the clients?
As you've suggested, ASP.NET Core SignalR places the responsibility for message buffering, if that's required, on the application.
If an eventually consistent view is a problem (because the order of operations is important, for example) and a full reload proves to be an expensive operation, you could maintain a persistent queue of message events, going as far back as it makes sense (until a full reload would be preferable), and take a page from message buses and event sourcing, with the onus on the client in a "dumb broker / smart consumer" approach.
It's not an exact match for your case, but credit where credit is due: there's a well thought-out example of queuing up SignalR events here: https://stackoverflow.com/a/56984518/13374279 You'd have to adapt it somewhat and give a numerical order to the queued events.
The initial state load and any subsequent events could have an aggregate version attached to them. Any time the client receives an event from SignalR, it can compare its currently known state against what was received and determine whether it has missed events, whether from a disconnection or from a delay in the hub connection starting up after the initial fetch. If the client's version is out of date and within the depth of your queue, you can ask the server to replay the missed events to that connection to bring the client back in sync.
Some reading into immediate consistency vs eventual consistency may be helpful to come up with a plan. Hope this helps!
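The version comparison described above can be sketched as pure client-side logic; the names here are illustrative, not part of the SignalR API:

```typescript
// Each broadcast carries a monotonically increasing version; the client
// compares it with the last version it applied and requests a replay
// when a gap indicates missed events.

interface VersionedEvent {
  version: number;
  payload: unknown;
}

type SyncAction =
  | { kind: "apply" }                      // next expected version: apply it
  | { kind: "ignore" }                     // stale or duplicate: drop it
  | { kind: "replayFrom"; from: number };  // gap detected: ask for a replay

function decide(lastApplied: number, incoming: VersionedEvent): SyncAction {
  if (incoming.version <= lastApplied) return { kind: "ignore" };
  if (incoming.version === lastApplied + 1) return { kind: "apply" };
  return { kind: "replayFrom", from: lastApplied + 1 };
}

// On "replayFrom", the client would invoke a hub method such as
// connection.invoke("ReplayEvents", from) -- a method your server
// would have to expose; it is not built into SignalR.
```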
We are using fast render in our app, so all the data the app needs is sent down with the app itself. We are not using any Meteor.subscribe calls since minimongo is populated by fast render.
Once rendered we run Meteor.disconnect()
At some point in the future we want to reconnect to call a specific method, but when we reconnect, minimongo gets cleared.
How can we prevent Meteor from clearing all documents in minimongo upon reconnect?
I suspect that it's actually fast render that is causing your problem. Checking the meteor docs for Meteor.disconnect()...
Call this method to disconnect from the server and stop all live data updates. While the client is disconnected it will not receive updates to collections, method calls will be queued until the connection is reestablished, and hot code push will be disabled.
Call Meteor.reconnect to reestablish the connection and resume data transfer.
This can be used to save battery on mobile devices when real time updates are not required.
This implies that your client data is never deleted, otherwise you could not "resume data transfer" upon reconnection. It also would mean that one of their primary intended use cases for this method (e.g. "used to save battery on mobile devices when real time updates are not required") would not actually work.
Just to be absolutely sure, I checked the Meteor source to see what happens on a disconnect, and all it does is set a connection state variable to false, clear the connection and heartbeat timers, and cancel any pending Meteor method calls.
Similarly, Meteor.reconnect() simply sets the connection state variable back to true, re-establishes the connection and heartbeat timers, re-establishes any subscriptions (so that new data can be acquired; this action does not delete client data), and calls any queued-up Meteor method calls.
After reading more about how fast render works, I understand that a lot of hacking was done to get it to actually work. The main hack that jumped out to me is the "fake ready" hack which tricks the client to thinking the subscription is ready before the actual subscription is ready (since the data was sent to the client on the initial page load).
Since you have no subscriptions in your app and Meteor.reconnect() does not cause your page to reload, I'm wondering if the client never does anything because it never receives another ready message. Or maybe, since Meteor isn't aware of any subscriptions (fast render bypasses Meteor to transport data), it clears the client minimongo cache so it's in a good state if a new subscription is started. Or there could be something else about fast render getting in the way.
Long story short, I'm quite certain that Meteor.disconnect() and Meteor.reconnect() have no impact on your client minimongo data, based on the documentation, the source, and my experience testing my Meteor apps offline.
I can confirm that Meteor.reconnect() does not delete data, as I have a Meteor app in production that calls Meteor.reconnect() whenever it detects a lost connection (e.g. the computer goes offline, a network outage, etc.).
Hopefully this long winded answer helps you track down what's going on with your app.
I tried Meteor.disconnect() and Meteor.reconnect() and the Minimongo DB was not cleared. I confirmed it using:
a) Minimongo explorer: https://chrome.google.com/webstore/detail/meteor-minimongo-explorer/bpbalpgdnkieljogofnfjmgcnjcaiheg
b) A helper that returns a message if, at some point during reconnection, my collection had zero records.
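That second check can be sketched as a tiny tracker that a template helper calls on each rerun; the Orders collection and the helper wiring in the comments are illustrative:

```typescript
// Records whether a collection count was ever observed at zero,
// e.g. across the reruns of a reactive helper during reconnection.

function makeEmptinessTracker() {
  let wasEverEmpty = false;
  return {
    observe(count: number): void {
      if (count === 0) wasEverEmpty = true;
    },
    wasEmptyAtSomePoint(): boolean {
      return wasEverEmpty;
    },
  };
}

// Illustrative helper wiring in a Meteor client:
//   const tracker = makeEmptinessTracker();
//   Template.status.helpers({
//     reconnectMessage() {
//       tracker.observe(Orders.find().count());
//       return tracker.wasEmptyAtSomePoint()
//         ? "collection was cleared during reconnection"
//         : "";
//     },
//   });
```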
You are right, although all the data in the subscription was sent from server to client again after reconnection (letting the local DB handle the sync). This happens because the Meteor server treats the reconnection as a completely new connection. It seems that at some (uncertain) point in the future Meteor will implement real reconnection, as stated in their docs:
Currently when a client reconnects to the server (such as after
temporarily losing its Internet connection), it will get a new
connection each time. The onConnection callbacks will be called again,
and the new connection will have a new connection id.
In the future, when client reconnection is fully implemented,
reconnecting from the client will reconnect to the same connection on
the server: the onConnection callback won't be called for that
connection again, and the connection will still have the same
connection id.
Source: https://docs.meteor.com/api/connections.html