Polly NuGet Package WaitAndRetryAsync Method

I am using the Polly NuGet package to utilize its retry feature for Web API/WCF calls in my project.
I am just curious to know: what is the maximum wait duration we can give to the WaitAndRetryAsync method?

Polly imposes no limit on the wait in wait-and-retry. The maximum wait duration you can specify is therefore TimeSpan.MaxValue (which is probably more than you need ;-).
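Polly itself is a .NET library, so the real call site would be C#; purely to illustrate the wait-and-retry shape, here is a minimal TypeScript sketch of an equivalent helper (hypothetical, not Polly's API), where the per-attempt wait comes from a caller-supplied provider and nothing in the loop itself limits how long that wait may be:

```typescript
// Hypothetical helper illustrating the wait-and-retry idea (not Polly's API):
// retry a failing async call, waiting a caller-supplied duration before each
// retry. Nothing in the loop caps how long that wait may be.
async function waitAndRetry<T>(
  action: () => Promise<T>,
  retryCount: number,
  sleepDurationProvider: (attempt: number) => number, // wait in milliseconds
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await action();
    } catch (err) {
      if (attempt >= retryCount) throw err; // retries exhausted: rethrow
      const delayMs = sleepDurationProvider(attempt + 1);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage: 3 retries with exponential backoff (2^attempt seconds). The only cap
// on the wait is whatever the provider returns -- the analogue of being free
// to pass a very large TimeSpan to WaitAndRetryAsync.
waitAndRetry(() => fetch("https://example.org/api"), 3, (attempt) => 2 ** attempt * 1000)
  .then((response) => console.log("succeeded:", response.status))
  .catch((err) => console.error("gave up:", err));
```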

Related

Cloud Tasks - waiting for a result

My application needs front-end searching. It searches an external API, for which I'm limited to a few calls per second.
So, I wanted to keep ALL queries related to this external API on the same Cloud Tasks queue, so I could control the number of calls per second.
That means the user would most likely have to wait for a second or two when searching.
However, using Google's @google-cloud/tasks library (const { CloudTasksClient } = require('@google-cloud/tasks')), I can create a task, but when I go to check its status using .getTask() it says:
The task no longer exists, though a task with this name existed recently.
Is there any way to poll a task until it's complete and retrieve response data? Or any other recommended methods for this? Thanks in advance.
No. GCP Cloud Tasks provides no way to gather information on the body of requests that successfully completed.
(Which is a shame, because it seems quite natural. I just wrote an email to my own GCP account rep asking about this possible feature. If I get an update I'll put it here.)
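For context, a minimal sketch of the create-then-poll pattern the question describes, assuming the Node @google-cloud/tasks client and a hypothetical project, queue, and worker URL. Once the worker has handled the task's HTTP request, the task is removed, so getTask fails with the "task no longer exists" error quoted above, and the worker's response body is never retrievable through the Tasks API:

```typescript
import { CloudTasksClient } from "@google-cloud/tasks";

const client = new CloudTasksClient();

// Enqueue a search as an HTTP task (project, location, queue and worker URL
// are hypothetical placeholders).
async function enqueueSearch(query: string): Promise<string> {
  const parent = client.queuePath("my-project", "us-central1", "external-api-queue");
  const [task] = await client.createTask({
    parent,
    task: {
      httpRequest: {
        httpMethod: "POST",
        url: "https://example.org/search-worker",
        headers: { "Content-Type": "application/json" },
        body: Buffer.from(JSON.stringify({ query })).toString("base64"),
      },
    },
  });
  return task.name!; // projects/.../queues/.../tasks/...
}

// Polling only works while the task is still pending or retrying.
async function pollTask(name: string): Promise<void> {
  try {
    const [task] = await client.getTask({ name });
    console.log("still pending, dispatch count:", task.dispatchCount);
  } catch (err) {
    // After the task completes it is deleted, so this fails with NOT_FOUND
    // ("The task no longer exists, though a task with this name existed
    // recently") -- there is no API call that returns the worker's response.
    console.error("task gone:", (err as Error).message);
  }
}
```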

Firebase Functions: What scheduling algorithm is used when retrying events? (exponential backoff, etc)

When Firebase Function events are retried, what replay algorithm is used? Or how often can I expect events to be retried?
Cloud Functions does not offer a guaranteed behavior on this. The documentation merely states:
When you enable retries on a background function, Cloud Functions will retry a failed function invocation until it completes successfully, or the retry window (by default, 7 days) expires.
And that's all you're given. I would take this to mean that the system is free to adjust the retry frequency as it sees fit, based on internal configuration and current conditions.
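Given that, about the only thing you can design for is your own tolerance: make the function idempotent and have it give up on events past an age of your choosing, so a persistently failing handler cannot keep retrying for the full 7 days. A minimal sketch, assuming the v1 firebase-functions API, a Pub/Sub trigger, and retries enabled via the failurePolicy runtime option (the topic name, cutoff, and handler are hypothetical):

```typescript
import * as functions from "firebase-functions";

// How long we are willing to keep retrying an event -- an application choice,
// well inside the platform's 7-day retry window.
const MAX_EVENT_AGE_MS = 60 * 60 * 1000; // 1 hour

export const processEvent = functions
  .runWith({ failurePolicy: true })  // opt this function in to retries
  .pubsub.topic("incoming-events")   // hypothetical topic
  .onPublish(async (message, context) => {
    const ageMs = Date.now() - Date.parse(context.timestamp);
    if (ageMs > MAX_EVENT_AGE_MS) {
      // Returning normally marks the event as handled, which stops retries.
      console.warn("Dropping stale event", context.eventId);
      return;
    }
    // Throwing (or rejecting) signals failure; the platform then retries on
    // its own, unspecified schedule until success or the window expires.
    await handle(message.json);
  });

// Hypothetical downstream work that may fail transiently.
async function handle(payload: unknown): Promise<void> {
  /* ... */
}
```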

How to reliably schedule events in Corda?

Many examples of using event scheduling in Corda -- including its own reference implementation of the Interest Rate Swap contract -- rely on the SchedulableState mechanism. Unfortunately there is not much information available in the Corda documentation regarding the use of the scheduler in a real business context. Reliable execution of scheduled business events is important, and, as a party who depends on a CorDapp to automatically trigger scheduled events in order to fulfil my contractual obligations, I'd like to implement robust controls for whether things that had been scheduled have actually run. For example, it is possible to envisage a situation whereby a scheduled flow kicked off, but then ended prematurely due to a bug in the code or node misconfiguration, without achieving its intended purpose. The question, therefore, is what a CorDapp designer and/or node operator can do to ensure reliable execution of scheduled activities?
SchedulableState is normally used to represent recurring events.
Please check out this HeartBeat sample: https://github.com/corda/samples/tree/release-V4/heartbeat
You can think of a monthly payment or a paycheck that recurs every month.

Can the Google Calendar API events watch be used without risking exceeding the usage quotas?

I am using the Google Calendar API to preprocess events that are being added (adjust their content depending on certain values they may contain). This means that theoretically I need to update any number of events at any given time, depending on how many are created.
The Google Calendar API has usage quotas, especially one stating a maximum of 500 operations per 100 seconds.
To tackle this I am using a time-based trigger (every 2 minutes) that does up to 500 operations (and only updates sync tokens when all events are processed). The downside of this approach is that I have to run a check every 2 minutes, whether or not anything has actually changed.
I would like to replace the time-based trigger with a watch. I'm not sure though if there is any way to limit the amount of watch calls so that I can ensure the 100 seconds quota is not exceeded.
My research so far shows me that it cannot be done. I'm hoping I'm wrong. Any ideas on how this can be solved?
AFAIK, that is one of the best practices suggested by Google. Using watch and push notifications allows you to eliminate the extra network and compute costs involved with polling resources to determine whether they have changed. Here are some tips from this blog to best manage working within the quota:
Use push notifications instead of polling.
If you cannot avoid polling, make sure you only poll when necessary (for example, poll very seldom at night).
Use incremental synchronization with sync tokens for all collections instead of repeatedly retrieving all the entries.
Increase page size to retrieve more data at once by using the maxResults parameter.
Update events when they change, avoid re-creating all the events on every sync.
Use exponential backoff for error retries.
Also, if you cannot avoid exceeding your current limit, you can always request additional quota.
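To make the first and third tips concrete, here is a rough sketch, assuming the Node googleapis client rather than Apps Script, a hypothetical webhook address, and credentials set up elsewhere: register a watch channel once, then on each notification run an incremental sync with the stored sync token and a larger page size, persisting the new token only after every page has been processed (the behaviour already described in the question):

```typescript
import { google } from "googleapis";

// Assumes credentials are available to GoogleAuth (e.g. a service account).
const auth = new google.auth.GoogleAuth({
  scopes: ["https://www.googleapis.com/auth/calendar"],
});
const calendar = google.calendar({ version: "v3", auth });

// 1. Register a watch channel so Google pushes change notifications to a
//    webhook instead of a trigger polling every 2 minutes.
async function registerWatch(calendarId: string): Promise<void> {
  await calendar.events.watch({
    calendarId,
    requestBody: {
      id: `channel-${Date.now()}`,            // hypothetical channel id
      type: "web_hook",
      address: "https://example.org/notify",  // hypothetical HTTPS endpoint
    },
  });
}

// 2. On each notification, fetch only what changed using the stored sync
//    token and a larger page size, instead of re-reading the whole calendar.
async function incrementalSync(calendarId: string, syncToken: string): Promise<string> {
  let pageToken: string | undefined;
  let nextSyncToken = syncToken;
  do {
    const res = await calendar.events.list({
      calendarId,
      syncToken,
      pageToken,
      maxResults: 250, // bigger pages mean fewer list operations against the quota
    });
    for (const event of res.data.items ?? []) {
      // Adjust the event's content here; each events.update is one quota
      // operation, so throttle these if many events changed at once.
      console.log("changed:", event.id);
    }
    pageToken = res.data.nextPageToken ?? undefined;
    if (res.data.nextSyncToken) nextSyncToken = res.data.nextSyncToken;
  } while (pageToken);
  return nextSyncToken; // persist only once every page has been processed
}
```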

Does OpenCPU support asynchronous calls for time-consuming R functions?

I have recently created an R package that makes use of sparklyr's capabilities. I invoke the package's main function from OpenCPU and pass as an argument a URL with all my data as a stream. The data stream is successfully analysed in a distributed way via Spark and produces some results.
My only problem is that it needs a lot of time to complete the execution. I tried to invoke my package both via opencpu.call and opencpu.rpc, but both of them make me wait until the end of the process.
Since OpenCPU is an amazing approach to microservice architecture, it would be extremely useful to have the possibility of truly asynchronous calls.
Is something like the following supported, or planned to be supported in the near future?
Option A: instantly receive a session ID (even though the process is still executing). The client is then responsible for asking for the status of the process using its session ID.
Option B: define a callback URL that the OCPU server triggers, passing the session ID upon completion of the analytic process.
Thank you very much for your help!
No, current OpenCPU does not support background jobs. You'll have to create a middle layer yourself that performs the request and does the waiting on behalf of the user.
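For what it's worth, such a middle layer can be fairly small. A minimal sketch, assuming Node with Express and an OpenCPU endpoint reached over its HTTP API (the package, function, and host names are hypothetical; the /json output path is used so the call returns the function's value directly): the client gets a job ID back immediately and polls for the result, which is essentially the questioner's Option A implemented outside OpenCPU:

```typescript
import express from "express";
import { randomUUID } from "crypto";

// In-memory job store; a real deployment would persist this.
type Job = { status: "running" | "done" | "failed"; result?: string; error?: string };
const jobs = new Map<string, Job>();

const app = express();
app.use(express.json());

app.post("/jobs", (req, res) => {
  const id = randomUUID();
  jobs.set(id, { status: "running" });

  // Fire-and-forget: this HTTP call to OpenCPU may take a long time; the
  // middle layer, not the browser, does the waiting.
  fetch("http://ocpu-host/ocpu/library/mypackage/R/analyze_stream/json", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url: req.body.url }), // argument for the R function
  })
    .then(async (r) => {
      if (!r.ok) throw new Error(`OpenCPU returned ${r.status}`);
      jobs.set(id, { status: "done", result: await r.text() });
    })
    .catch((err) => jobs.set(id, { status: "failed", error: String(err) }));

  res.status(202).json({ jobId: id }); // respond to the client immediately
});

// The client polls this endpoint until status is "done" or "failed".
app.get("/jobs/:id", (req, res) => {
  const job = jobs.get(req.params.id);
  if (!job) return res.status(404).end();
  res.json(job);
});

app.listen(3000);
```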
