Spartacus: refresh order list after cancellation fails to reload

I made a custom service to cancel an order. After triggering it, when I get the server's 'OK' response I reload the order list, but the order status isn't refreshed; it takes several seconds to do so.
Is there any service or flag I could use to refresh the list when the data store state has changed?
I tried several services without success...

    this.userOrderService.getCancelOrderSuccess().subscribe(...); // state still not cancelled
    this.userOrdersEffect.resetUserOrders$.subscribe(...);        // nor this one

I checked with my colleagues who are responsible for the order status implementation in the Commerce Cloud backend. Unfortunately, the behavior you have observed is working as designed: order status is updated in the Commerce Cloud backend asynchronously, and there is nothing we can do to catch the event when the order status actually changes. Perhaps you can check whether an email is sent once the order status change is done.
Best Regards,
Jerry
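
Since there is no event to subscribe to, one workaround on the storefront side is to simply re-request the order list a few times after the cancel succeeds, until the asynchronous backend update becomes visible. A minimal sketch, assuming a Spartacus-style UserOrderService with a loadOrderList(pageSize) method (the method name and signature are assumptions; verify them against your Spartacus version):

    import { interval, Subscription } from 'rxjs';
    import { take } from 'rxjs/operators';

    // Polling sketch: after the cancel request returns OK, force a reload of
    // the order list every 3 seconds, up to 5 times, so the list eventually
    // reflects the asynchronous backend status change.
    // `loadOrderList(pageSize)` is an assumed UserOrderService method.
    pollOrdersAfterCancel(pageSize: number): Subscription {
      return interval(3000)
        .pipe(take(5))
        .subscribe(() => this.userOrderService.loadOrderList(pageSize));
    }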

Wait for firebase cloud trigger function to end before launching http request

I have a frontend user-registration page. Once the user clicks 'Register', a user document gets created in my Firestore database. I made an 'onCreate' trigger function which listens for new user documents being created. This trigger function creates and updates other documents in my database (it adds extra information and creates some documents based on the user document that was just created).
Once the user clicks Register in my frontend, they are redirected to a new page. On initState of that new page, it makes an HTTP (cloud function) request which needs this newly created info, made by my trigger function, to build its response correctly. The problem is that this HTTP request gets launched before my trigger function finishes, which causes the HTTP function to error: it cannot yet find the information it needs because the trigger function hasn't ended and hasn't fully updated the database yet.
Does anyone know how to resolve this problem? The first thing I did, which was just a quick and ugly workaround, was to add a delay of a few seconds in my frontend right after the 'Register' click, so that it would delay the redirect to the new page and therefore delay the HTTP request. But this was only a temporary fix, and I would like a real solution now. Is there a way to tell the HTTP request not to run while this specific trigger function is running? Or, if that's not possible, is there another way to architect my functions or my frontend to prevent this? I've thought about it and tried to find someone with the same problem, but I can't find any questions regarding this. (This scares me a little, because it makes me feel like I'm missing something basic.)
If you could help, thanks in advance.
What I am hearing here is that you have a classic race condition. As I understand it, your user clicks "Register", which causes a new document to be added to the database. There is then a trigger on that insertion which updates other documents/fields. It is here that you have introduced parallelism, and hence the race.
Your design needs to change so that there is a "signal" sent back from GCP when the updates that were performed asynchronously have completed. Since the browser doesn't receive unsolicited signals, you will have to design a solution where your browser calls back to GCP in such a way that the call doesn't return until the asynchronous changes have been completed. One way might be to NOT have an onCreate trigger, but instead have the browser explicitly call a Cloud Function to perform the updates after the initial insertion (or even a Cloud Function that performs the insertion itself). The Cloud Function should not return until the data is consistent and ready to be read back by the browser.
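
A minimal sketch of that approach, with the onCreate trigger replaced by an HTTPS callable that does all the writes before returning (the collection and field names here are illustrative, not from the question):

    import * as functions from 'firebase-functions';
    import * as admin from 'firebase-admin';

    admin.initializeApp();

    // Hypothetical callable replacing the onCreate trigger: it creates the
    // user document AND the derived documents in one atomic batch, and only
    // returns once the commit has completed, so the client can safely
    // redirect and issue its follow-up HTTP request.
    export const registerUser = functions.https.onCall(async (data) => {
      const db = admin.firestore();
      const userRef = db.collection('users').doc();       // names illustrative
      const profileRef = db.collection('profiles').doc(userRef.id);

      const batch = db.batch();
      batch.set(userRef, {
        email: data.email,
        createdAt: admin.firestore.FieldValue.serverTimestamp(),
      });
      batch.set(profileRef, { userId: userRef.id /* derived fields */ });
      await batch.commit();                               // data now readable

      return { userId: userRef.id };                      // redirect after this
    });

On the client, await the callable's result before navigating; the race disappears because the "signal" is simply the callable's own response.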

Monitor Meteor pending requests

In a Meteor app, if the server is unreachable, all pending requests are queued and resent when the server becomes available again.
This is great, but I would like to:
monitor the status of the connection, to show users that the app is currently offline;
notify the user about how many pending requests are currently queued, and monitor them in order to notify when they are successfully sent.
To be clearer, I'd like to find a way to know how many pending requests are currently queued (if any) and get information about their status (to know when they are no longer pending).
As Mark Uretsky suggested in his comment, you can use a package like francocatena:status to get and display the connection status.
As for monitoring pending requests, there isn't a public API for that. However, it looks like you could currently use the _methodInvokers and/or _outstandingMethodBlocks properties of Meteor.connection to determine which method calls are still outstanding.
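
A sketch combining the public reactive API with those undocumented internals (the internals are private and may change between Meteor releases):

    import { Meteor } from 'meteor/meteor';
    import { Tracker } from 'meteor/tracker';

    // Public, reactive connection status -- reruns whenever it changes.
    Tracker.autorun(() => {
      const status = Meteor.status(); // { connected, status, retryCount, ... }
      console.log(status.connected ? 'online' : 'offline');
    });

    // Outstanding method calls via the undocumented internals named above.
    // Not reactive, so poll; treat as best-effort since these are private.
    setInterval(() => {
      const conn = Meteor.connection as any;
      const pending = Object.keys(conn._methodInvokers || {}).length;
      console.log(`${pending} method call(s) still pending`);
    }, 1000);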

Should POST data have a re-try on timeout condition or not?

When POSTing data - either using AJAX, or from a mobile device, or what have you - there is often a "retry" condition, so that should something like a timeout occur, the data is POSTed again.
Is this actually a good idea?
POST data is not meant to be idempotent, so if you
1. make a POST to the server,
2. the server receives the request,
3. it takes time to execute, and
4. then sends the data back,
and the timeout is hit sometime after 3., then the next retry will resend data that was never meant to be repeated.
The question, then, is whether a retry (when calling from the client side) should be set for POST data, or whether the server should be designed to always handle POST data appropriately (with tokens and so on) - or am I missing something?
Update, as per the questions: this is for a mobile app. As it happens, during testing it was noticed that with too short a timeout the app would retry. Meanwhile, the back-end server had in fact accepted and processed the initial request, and got very upset when the new (otherwise identical) re-request came in.
Nonces are a (partial) solution to this. The server generates a nonce and gives it to the client. The client sends the POST including the nonce; the server checks whether the nonce is valid and unused and, if so, acts on the POST and invalidates the nonce; if not, it reports back that the nonce is used and discards the data. This is also very useful for avoiding the 'double post' problem of users clicking a submit button twice.
However, this moves the problem from the client to a different one on the server. If you invalidate the nonce before the action, the action might still fail or hang; if you invalidate it after, the nonce is still valid for requests that arrive during processing. So, a possible scenario on the server, on receiving a request, becomes:
Lock nonce
Do action
On any processing error preventing action completion, rollback, release lock on nonce.
On no errors, invalidate / remove nonce.
Semaphores on the server side are most helpful with this; most backend languages have libraries for them.
So, implementing all of this (a minimal sketch follows this list):
It is safe to retry: if the action has already been performed, it won't be done again.
A reply that the nonce has already been used can be understood as a confirmation that the original POST has been acted upon.
If you need the result of an action where the second request shows that the first came through, a short-lived server-side cache would be needed.
It is up to you to set a sane limit on subsequent tries (what if the 2nd fails? or the 3rd?).
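
A minimal server-side sketch of that lock/act/invalidate flow (Express-style TypeScript; the in-memory store, routes, and performAction are illustrative, and a real setup would need atomic storage shared across processes):

    import express from 'express';
    import { randomUUID } from 'crypto';

    const app = express();
    app.use(express.json());

    // Illustrative in-memory store; use a DB/Redis with atomic ops in practice.
    const nonces = new Map<string, 'issued' | 'locked'>();

    async function performAction(body: unknown): Promise<void> {
      /* the actual business logic goes here */
    }

    app.get('/nonce', (_req, res) => {
      const nonce = randomUUID();
      nonces.set(nonce, 'issued');
      res.json({ nonce });
    });

    app.post('/action', async (req, res) => {
      const { nonce } = req.body;
      if (nonces.get(nonce) !== 'issued') {
        // Used or unknown: confirms the original POST was (or is being) handled.
        return res.status(409).json({ status: 'nonce already used or invalid' });
      }
      nonces.set(nonce, 'locked');          // lock nonce
      try {
        await performAction(req.body);      // do action
        nonces.delete(nonce);               // no errors: invalidate/remove nonce
        res.json({ status: 'ok' });
      } catch {
        nonces.set(nonce, 'issued');        // error: rollback, release the lock
        res.status(500).json({ status: 'failed, safe to retry' });
      }
    });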
The issue with automatic retries is the server needs to know if the prior POST was successfully processed to avoid unintended consequences. For example, if each POST inserts a DB record, but the last POST timed out after the insert, the automatic re-POST will cause a duplicate DB record, which is probably not what you want.
In a robust setup you may be able to catch that there was a timeout and roll back any POSTed updates. That would safely allow an automatic re-POST. But many (most?) web systems aren't that elaborate, so you'll typically require human intervention to determine whether the POST should happen again.
If you can't avoid a long wait until the server responds, a better practice would be to return an immediate response (200 OK with the message "processing") and have the client send a new request later on to check whether the action was performed.
AJAX was not designed to be used in such a way ("heavy" actions).
By the way, the default HTTP timeout is 7200 secs, so I don't think you'll reach it easily - but regardless, you should avoid having the user wait for long periods of time.
Providing more information about the process (like what exactly you're trying to do) would help in suggesting ways to avoid such obstacles.
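
A client-side sketch of that "respond immediately, check later" pattern (/action and /status/:id are hypothetical endpoints):

    // Submit the work, then poll a status endpoint until the server reports
    // that the action has actually been performed.
    async function submitAndWait(payload: unknown): Promise<void> {
      const res = await fetch('/action', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(payload),
      });
      const { jobId } = await res.json(); // server replies right away with a job id

      for (;;) {
        await new Promise((r) => setTimeout(r, 2000)); // poll every 2 seconds
        const status = await fetch(`/status/${jobId}`).then((r) => r.json());
        if (status.state === 'done') return;
        if (status.state === 'failed') throw new Error('processing failed');
      }
    }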
If your requests are failing enough to instigate something like this, you have major problems that have nothing to do with implementing a retry condition. I have never seen a Web app that needed this type of functionality. Programming your app to automatically beat an overloaded server with the F5 hammer is not the solution to anything.
If this ajax request is triggered from a button click, disable that button until it returns, successful or not. If it failed, let the user click it again.
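
For example (plain TypeScript; the element id is illustrative):

    const button = document.querySelector<HTMLButtonElement>('#submit')!;
    button.addEventListener('click', async () => {
      button.disabled = true; // block double submits while the request is in flight
      try {
        await fetch('/action', { method: 'POST' });
      } finally {
        button.disabled = false; // re-enable so the user can retry on failure
      }
    });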

Handling changes in user Google Tasks using GTasks API

We are building a service that will synchronize with users' Google Tasks data, so that if a user adds/edits/deletes a task in GTasks, it will be added/edited/deleted in our service.
And there is a big problem with synchronization: as far as I can see, the GTasks API does not provide any onUpdate/onChange event listeners. The perfect solution would be a Google Tasks API method for setting a callback URL that is requested whenever a user adds/edits/deletes tasks.
But I can't find such a method in the Google Tasks API, so for now there seems to be only one very bad way to sync with it: request all of a user's tasks and compare them with the service's tasks. This is a very bad way to sync, because if we have 10k users and want their task lists synchronized within 1 minute, we will need to make >10k GTasks API requests per minute :(
I hope that I'm wrong and there is some way to set an onChange/onUpdate callback for user tasks. Or maybe there is some other way to receive notifications of users' GTasks changes (by email, etc.).
Does anybody know?
Thank you.
You could use the updatedMin parameter to only get tasks that have been updated since a given timestamp, as described in the documentation.
You should also be able to rely on ETag and If-None-Match headers when querying a user's task lists, to get a 304 Not Modified if no tasks in the list have changed. (Note that this should also work when polling individual tasks.)
This way you can effectively poll for just the tasks that have changed since the last time you synced.
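
A minimal polling sketch along those lines, assuming you already hold an OAuth2 access token for the user (error handling omitted; check the endpoint and parameters against the current Tasks API docs):

    // Fetch only the tasks changed since the last sync, using updatedMin.
    // showDeleted=true also surfaces deletions, so they can be mirrored.
    async function fetchChangedTasks(
      taskListId: string,
      lastSync: Date,
      accessToken: string,
    ): Promise<any[]> {
      const url =
        `https://www.googleapis.com/tasks/v1/lists/${taskListId}/tasks` +
        `?updatedMin=${encodeURIComponent(lastSync.toISOString())}` +
        `&showDeleted=true`;
      const res = await fetch(url, {
        headers: { Authorization: `Bearer ${accessToken}` },
      });
      const body = await res.json();
      return body.items ?? []; // only tasks updated since lastSync
    }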

How to time-delay email deliveries?

I'm currently learning about the Drupal email functions, such as drupal_mail, hook_mail, and hook_mail_alter, and I have a problem before me where I'll need to be able to queue emails for delayed delivery. For example, an event signup notification that needs to wait for an hour after the event was registered by a user. And that email has user-specific data, so it can't just be a generic template email...
I'm using the MailQ module, mainly for its logging capabilities, but I wonder if it (or something else) could be modified to add a configurable delay function?
Any ideas?
======================
Some additional thoughts:
There are 2 modules in D6 that are useful for queuing emails: MailQ and Job queue (in Nikit's list below). They both have mail queue functionality, which can be very useful, and each has different approaches, strengths, and weaknesses, from a cursory investigation. E.g., MailQ has a great logging function, with a very useful admin interface for dealing with categories of mail such as queued, sending, sent, failed, etc., and you can set how many emails go out with each cron run.
But while Job Queue doesn't have those categories and logging/archiving as far as I can tell, it does allow for prioritizing different jobs, which can be very useful if you have different email types going out simultaneously, such as contact confirmations, event signup confirmations, and "invite-a-friend" emails, for example.
So my idea of a perfect queue would have both of those modules integrated (they don't get along very well), with an additional delay function for each job type.
But writing such a thing is way beyond my meager talents, so I'll just have to write a little custom module that is activated by cron, that will look for flagged contacts with a creation date at least an hour in the past, and find a way to write the relevant info directly into the MailQ DB table.
There seems to be some collision going on when both my new function and the MailQ function are triggered by the same cron run, so I'm trying to sort out how to cron-prioritize MailQ over my function, so that my function adds the table data AFTER MailQ has run.
It's not the most elegant solution, but I did something in D6, before I knew of transactional email services, with hook_mail_alter: I loaded the email parameters into a database table, set the message value to null so it didn't send after the hook, and then a cron job running every 5 minutes would check the table and send a couple of messages at a time if any were waiting. Once a message was sent, I flipped its status to sent. You will get a watchdog error saying the email failed to send.
With this method, you could either call something like Mailgun/Mandrill and send a scheduled email within the hook, or put the message in a database table with a timestamp field like strtotime('+2 weeks') and then have a cron job check the table every x minutes and send when the timestamp conditions are met. hook_mail_alter gives you a field to identify the message, so you could have a switch that catches only specific emails if you want.
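
The pattern itself is language-agnostic; here is the same timestamp-queue idea as a TypeScript sketch (in Drupal this would live in PHP via hook_mail_alter and hook_cron, and the fields and sendMail transport here are illustrative):

    // A row in the queue table: one email with a "don't send before" time.
    interface QueuedMail {
      id: number;
      to: string;
      subject: string;
      body: string;
      sendAt: number;   // unix timestamp, e.g. now + 3600 for a one-hour delay
      status: 'queued' | 'sent';
    }

    // Runs on every cron tick: send whatever is due, a few at a time.
    async function processMailQueue(queue: QueuedMail[], batchSize = 5): Promise<void> {
      const now = Date.now() / 1000;
      const due = queue
        .filter((m) => m.status === 'queued' && m.sendAt <= now)
        .slice(0, batchSize);

      for (const mail of due) {
        await sendMail(mail);  // your transport (SMTP, Mailgun, Mandrill, ...)
        mail.status = 'sent';  // flip the status so it isn't resent
      }
    }

    declare function sendMail(mail: QueuedMail): Promise<void>;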
If you're comfortable with custom modules, take a look at the Mandrill module. It likely has a more elegant solution for taking over Drupal's email than what I've described above.
