I have a Durable Function that always seems to be in a Running state, even after it has completed successfully.
It appears to complete, but then the runtime status is set back to Running for some reason.
This is the end of my logs:
[2021-07-22T08:52:59.211Z] Executing 'SavingsOrchestrator' (Reason='(null)', Id=36d509f3-4655-4382-9b72-ccb6fc39f413)
[2021-07-22T08:52:59.230Z] Executed 'SavingsOrchestrator' (Succeeded, Id=36d509f3-4655-4382-9b72-ccb6fc39f413, Duration=51ms)
[2021-07-22T08:53:02.206Z] SaveCustomerSavings stored 34 records.
[2021-07-22T08:53:02.210Z] Orechestrator_SaveCustomerSavings: Function 'SavingsOrchestrator (Orchestrator)' awaited. IsReplay: False. State: Awaited. HubName: TestHubName. AppName: . SlotName: . ExtensionVersion: 2.5.0. SequenceNumber: 24.
[2021-07-22T08:53:02.214Z] Orechestrator_SaveCustomerSavings: Function 'SavingsOrchestrator (Orchestrator)' completed. ContinuedAsNew: False. IsReplay: False. Output: (null). State: Completed. HubName: TestHubName. AppName: . SlotName: . ExtensionVersion: 2.5.0. SequenceNumber: 25. TaskEventId: -1
[2021-07-22T08:53:02.214Z] Orechestrator_SaveCustomerSavings: Orchestration 'SavingsOrchestrator' awaited and scheduled 0 durable operation(s).
[2021-07-22T08:53:02.254Z] Orechestrator_SaveCustomerSavings: Appended 3 new events to the history table in 35ms
[2021-07-22T08:53:02.290Z] Orechestrator_SaveCustomerSavings: Updated Instances table and set the runtime status to 'Running'
[2021-07-22T08:53:02.292Z] Orechestrator_SaveCustomerSavings: Deleting [TaskCompleted#3] message from testhubname-control-00
What I basically do in my code is the following:
I start my Orchestrator (as a singleton):
await client.StartNewAsync("SavingsOrchestrator", _instanceId);
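(A singleton start like this is typically guarded by checking the instance's current status before starting. Below is a minimal sketch of that guard; the HTTP starter function and its names are purely illustrative, not my actual code:)

[FunctionName("SavingsStarter")]
public async Task<IActionResult> HttpStart(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
    [DurableClient] IDurableOrchestrationClient client)
{
    // Only start a new instance if no instance with this ID is active.
    var existing = await client.GetStatusAsync(_instanceId);
    if (existing == null
        || existing.RuntimeStatus == OrchestrationRuntimeStatus.Completed
        || existing.RuntimeStatus == OrchestrationRuntimeStatus.Failed
        || existing.RuntimeStatus == OrchestrationRuntimeStatus.Terminated)
    {
        await client.StartNewAsync("SavingsOrchestrator", _instanceId);
        return new OkResult();
    }
    return new ConflictResult();
}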
My Orchestrator function basically looks like this:
[FunctionName(nameof(SavingsOrchestrator))]
public async Task SavingsOrchestrator([OrchestrationTrigger] IDurableOrchestrationContext context, ILogger logger)
{
    var a1 = await context.CallActivityAsync<List<MyModel>>("Activity1", null);
    if (a1?.Any() != true)
    {
        return;
    }

    // Fan out: run as 4 parallel tasks.
    var batchSize = a1.Count / 4 + 1;
    var tasks = new List<Task<int>>();
    foreach (var batch in a1.Batch(batchSize))
    {
        tasks.Add(context.CallActivityAsync<int>("Activity2", batch));
    }

    // Fan in: wait for all batches, then sum their results.
    await Task.WhenAll(tasks);
    var total = tasks.Sum(x => x.Result);
    logger.LogInformation($"Done: {total}");
}
Why is my Durable Function still in a Running state, even after it successfully ran?
Update
In my code I do a "Fan out" by using a Task.WhenAll. When I remove that code and simply call Activity2 in a single line, passing all the items to it instead of batches:
// Pass all items to the Activity function
var total = await context.CallActivityAsync<int>("Activity2", a1);
Then the runtime status is updated to Completed.
There seems to be an issue with using Task.WhenAll, but I got this idea directly from the Microsoft documentation.
This could be because a large amount of data or return values is being serialized into the queue messages, and Azure Storage queues only support messages up to 64 KB. For further information, refer to the Durable Functions documentation.
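If message size is indeed the culprit, one workaround is to shrink what gets serialized onto the queue. Here is a minimal sketch of the idea, assuming MyModel exposes an Id property and Activity2 can re-load its records by ID (both of these are assumptions, not taken from the question's code):

// Fan out with small payloads: send only record IDs across the queue
// and let the activity load the full records itself.
foreach (var batch in a1.Batch(batchSize))
{
    var ids = batch.Select(m => m.Id).ToList(); // a short list of IDs instead of full models
    tasks.Add(context.CallActivityAsync<int>("Activity2", ids));
}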
Related
Is there a way to notify a system's users in real time that the system is being deployed (published to production)? The purpose is to prevent them from starting atomic operations.
The system is ASP.NET-based and already has the SignalR DLLs, but I don't know exactly where in the application to hook in so that I can tell the system is deploying right now.
This is highly dependent on your deployment process, but I achieved something similar in the following way:
I created a method in one of my controllers called AnnounceUpdate:
[HttpPost("announce-update")]
public async Task<IActionResult> AnnounceUpdate([FromQuery] int secondsUntilUpdate, string updateToken)
{
await _tenantService.AnnounceUpdate(secondsUntilUpdate, updateToken);
return Ok();
}
The controller method takes in the number of seconds until the update, as well as a secret token to ensure that not just anyone can call this endpoint.
The idea is that we call this controller just before we deploy, to announce the pending deployment. I make my deployments using Azure DevOps, so I was able to create a release task that automatically runs the following PowerShell code to call my endpoint:
$domain = $env:LOCALURL;
$updateToken = $env:UPDATETOKEN;
$minutesTillUpdate = 5;
$secondsUntilUpdate = $minutesTillUpdate * 60;
$len = $secondsUntilUpdate / 10;

# Notify users every 10 seconds about the update.
for ($num = 1; $num -le $len; $num++)
{
    $url = "$domain/api/v1/Tenant/announce-update?secondsUntilUpdate=$secondsUntilUpdate&updateToken=$updateToken";
    $r = Invoke-WebRequest $url -Method Post -UseBasicParsing;

    $minsLeft = [math]::Floor($secondsUntilUpdate / 60);
    $secsLeft = $secondsUntilUpdate - $minsLeft * 60;
    if ($minsLeft -eq 0) {
        $timeLeft = "$secsLeft seconds";
    } elseif ($secsLeft -eq 0) {
        $timeLeft = "$minsLeft minute(s)";
    } else {
        $timeLeft = "$minsLeft minute(s) $secsLeft seconds";
    }

    $code = $r.StatusCode;
    Write-Output "";
    Write-Output "Notified users $num/$len times.";
    Write-Output "Response: $code.";
    Write-Output "$timeLeft remaining.";
    Write-Output "_________________________________";

    Start-Sleep -Seconds 10;
    $secondsUntilUpdate = $secondsUntilUpdate - 10;
}

Write-Output "Allowing users to log out.";
Write-Output "";
Start-Sleep -Seconds 1;
Write-Output "Users notified! Proceeding with update.";
As you can see, in the script I have set the time until the update to 5 minutes. I then call my AnnounceUpdate endpoint every 10 seconds for the duration of those 5 minutes. I do this because if I announce an update that will occur in 5 minutes and someone connects 2 minutes later, they would never see the update message. On the client side I set a variable called updatePending to true when the client receives the update notification, so that clients do not keep getting a message every 10 seconds; only clients that have not yet seen the update message will get it.
In the tenant service I then have this code:
public async Task AnnounceUpdate(int secondsUntilUpdate, string updateToken)
{
    if (updateToken != _apiSettings.UpdateToken) throw new ApiException("Invalid update token");
    await _realTimeHubWrapper.AnnouncePendingUpdate(secondsUntilUpdate);
}
I simply check whether the token is valid and then continue to call my hub wrapper.
The hub wrapper wraps SignalR's hub context, which allows you to invoke SignalR methods from within your own code. More info can be read here.
In the hub wrapper, I have the following method:
public Task AnnouncePendingUpdate(int secondsUntilUpdate) =>
    _hubContext.Clients.All.SendAsync("UpdatePending", secondsUntilUpdate);
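For completeness, here is a minimal sketch of how such a wrapper can be wired up; the NotificationHub class and the IRealTimeHubWrapper interface names are assumptions for illustration, not necessarily the originals:

public class NotificationHub : Hub { }

public interface IRealTimeHubWrapper
{
    Task AnnouncePendingUpdate(int secondsUntilUpdate);
}

// IHubContext is registered by services.AddSignalR() and lets ordinary
// services invoke client methods from outside a hub instance.
public class RealTimeHubWrapper : IRealTimeHubWrapper
{
    private readonly IHubContext<NotificationHub> _hubContext;

    public RealTimeHubWrapper(IHubContext<NotificationHub> hubContext) =>
        _hubContext = hubContext;

    public Task AnnouncePendingUpdate(int secondsUntilUpdate) =>
        _hubContext.Clients.All.SendAsync("UpdatePending", secondsUntilUpdate);
}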
On the client side I have set up this handler:
// When an update is on the way, clients will be notified every 10 seconds.
private listenForUpdateAnnouncements() {
  // Note: the event name must match the one the server uses in SendAsync.
  this.hubConnection.on('UpdatePending', (secondsUntilUpdate: number) => {
    if (!this.updatePending) {
      const updateTime = currentTimeString(true, secondsUntilUpdate);
      const msToUpdate = secondsUntilUpdate * 1000;
      const message =
        secondsUntilUpdate < 60
          ? `The LMS will update in ${secondsUntilUpdate} seconds.
            \n\nPlease save your work and close this page to avoid any loss of data.`
          : `The LMS is ready for an update.
            \n\nThe update will start at ${updateTime}.
            \n\nPlease save your work and close this page to avoid any loss of data.`;
      this.toastService.showWarning(message, msToUpdate);
      this.updatePending = true;
      setTimeout(() => {
        this.authService.logout(true, null, true);
        this.stopConnection();
      }, msToUpdate);
    }
  });
}
I show a toast message to the client, notifying them of the update. I then set a timeout (using the value of secondsUntilUpdate) that logs the user out and stops the connection. This was specific to my use case; you can do whatever you want at this point.
To sum it up, the logical flow is:
PowerShell Script -> Controller -> Service -> Hub Wrapper -> Client
The main takeaway is that something still needs to trigger the call to the endpoint to announce the update. I am lucky enough to be able to have it run automatically during my release process. If you are manually publishing and copying the published code, perhaps you can run the PowerShell script manually and then deploy when it's done.
In the case of network connectivity loss, the following code just loops endlessly and keeps making API calls. Is there a way to cancel with a timeout (for example, 5000 ms) using the Firebase API? Or would I have to write my own coroutine to handle this?
fun updateUserFieldInDB(
    collectionPath: String,
    strArr: ArrayList<String>,
    onSuccess: (() -> Unit),
    onFail: (() -> Unit)
) {
    // Bail out if there is no signed-in user.
    val fbUser = Firebase.auth.currentUser
    if (fbUser == null) {
        Log.i(TAG, "user is null....")
        return
    }
    val db = Firebase.firestore
    when (strArr.size) {
        2 -> {
            // update() returns as soon as the write is recorded locally;
            // the listeners below only fire once the server commits or rejects it.
            db.collection(collectionPath).document(fbUser.uid).update(strArr[0], strArr[1])
                .addOnSuccessListener { onSuccess() }
                .addOnFailureListener { onFail() }
        }
    }
}
The onSuccess and onFail completion handlers for Firestore only fire once the write operation has been committed or rejected on the server. You should only use them if you're interested in detecting that situation, in which case the looping is to be expected.
If you only care whether the write operation was recorded by the Firestore client (in its local cache), the best way to detect that is when the update(strArr[0], strArr[1]) call completes.
So pretty much: when the next line of code executes, the write has been recorded locally; when the completion listeners fire, the write has been handled on the server.
This trigger detects when a sequence in a schedule has been updated, and then updates the schedule's overall status and finish time.
But it doesn't always work; sometimes an internal error occurs, as below:
Error: 13 INTERNAL: An internal error occurred. at Object.exports.createStatusError
(/srv/node_modules/grpc/src/common.js:91:15) at Object.onReceiveStatus
(/srv/node_modules/grpc/src/client_interceptors.js:1204:28) at InterceptingListener._callNext
(/srv/node_modules/grpc/src/client_interceptors.js:568:42) at InterceptingListener.onReceiveStatus
(/srv/node_modules/grpc/src/client_interceptors.js:618:8) at callback
(/srv/node_modules/grpc/src/client_interceptors.js:845:24)
Here is my code:
export const calc_status = functions.firestore
  .document("users/{userid}/schedule/{scheduledid}")
  .onUpdate(async (change, context) => {
    // before error occurred ...
    const data = change.after.data();
    let curStatus = data.status;
    ...
    ...
    // after error occurred ...
    if (data.status !== curStatus) {
      data.status = curStatus;
      if (curStatus === 'finished') {
        data.finish_time = new Date().toISOString();
      }
      if (curStatus !== 'expired') {
        data.update_time = data.expired_time;
        data.finish_time = data.expired_time;
      } else {
        data.update_time = new Date().toISOString();
      }
      await change.after.ref.update(data);
      return Status.SUCCEEDED;
    }
    return Status.SUCCEEDED;
  });
I'm very confused about why the error occurs, because this function works fine most of the time.
Has anyone met the same problem?
Why does the error happen, and what is the solution?
Thank you.
This appears to be a long-standing framework bug (github.com/firebase/firebase-functions/issues/536) with no resolution as of yet.
Though you can't get around the error itself, which anecdotally and very intermittently happens on a cold start, you can work around it by enabling retries for the function via the full Google Cloud console; see Retry Cloud Functions for Firebase until it succeeds for instructions.
This assumes your code handles internal errors well, as enabling retries means the function will be retried on any failure. In my case the function's onCreate handler was just queuing up some later processing via Pub/Sub, so any failure meant it should retry.
Oct 2020 Update
Since v3.11 of firebase-functions you can set the retry mode in your function code by setting failurePolicy to true:
module.exports = functions
  .runWith({ failurePolicy: true })
  .firestore.document('collection/doc')
  .onWrite(async (change, context) => {
    // do function stuff
  });
I've got a Google Cloud app with several cloud functions that call an API, then save the response in Firebase. I have this scheduled function set up to retry the API on error and it works great. But I want to retry the call if the data I need isn't there yet. Calling again right away could work if I throw an error, but it's highly unlikely that the missing data will be available seconds later, so I'd rather check once an hour after that until I have valid data.
Below is a shortened version of my function. I can imagine adding a setTimeout and having the function call itself again, but I'm not sure how I would do that, or whether it's a good idea, since it would keep the function alive for a long time. Is there a way to automatically retry this scheduled function on an arbitrary time interval?
exports.fetchData = functions.pubsub
  .schedule('every Tuesday 6:00')
  .timeZone('America/New_York')
  .onRun(async context => {
    const response = fetch(...)
      .then(res => {
        if (res.status < 400) {
          return res;
        } else {
          throw new Error(`Network response was not ok. ${res}`);
        }
      })
      .then(res => res.json());
    const resObj = await response;
    resObj.map(x => {
      // check response for valid data
    });
    if (/* data is valid */) {
      // save to Firebase
    } else {
      // retry in 1 hour
    }
  });
Scheduled functions only run on the schedule you specify. There is no "arbitrary" scheduling. If you think that the function might frequently fail, consider just increasing the frequency of the schedule, and bail out of function invocations that don't need to run because of recent success.
If you enable retries, and the function generates an error by throwing an exception, returning a rejected promise, or timing out, then Cloud Functions will automatically retry the function on a schedule that you can't control.
setTimeout is not a feasible option to keep a function alive for longer than its configured timeout. Cloud Functions will terminate the function and all of its ongoing work after the timeout expires (and you would be paying for the time the function sits idle, which is kind of a waste).
According to the docs, Realm can notify you when certain actions are taking place, like "every time a write transaction is committed". I am using the Realm Object Server, and the first time a user opens my app a large set of data is synced from the server down to the app. I would like to show a loading screen and not present the main UI of my app until Realm has completed its initial sync. Is there a way to be notified / determine when this process is complete?
The realm.io website just posted documentation on how to do this.
Asynchronously Opening Realms
If opening a Realm might require a time-consuming operation, such as applying migrations or downloading the remote contents of a synchronized Realm, you should use the openAsync API to perform all work needed to get the Realm to a usable state on a background thread before dispatching to the given queue. You should also use openAsync with Realms that are set read-only.
For example:
Realm.openAsync({
  schema: [PersonSchema],
  schemaVersion: 42,
  migration: function(oldRealm, newRealm) {
    // perform migration (see "Migrations" in docs)
  }
}, (error, realm) => {
  if (error) {
    return;
  }
  // do things with the realm object returned by openAsync to the callback
  console.log(realm);
});
The openAsync command takes a configuration object as its first parameter and a callback as its second; the callback function receives an error argument and the opened Realm.
Initial Downloads
In some cases, you might not want to open a Realm until it has all remote data available. In such a case, use openAsync. When used with a synchronized Realm, this will download all of the Realm’s contents before the callback is invoked.
var carRealm;
Realm.openAsync({
schema: [CarSchema],
sync: {
user: user,
url: 'realm://object-server-url:9080/~/cars'
}
}, (error, realm) => {
if (error) {
return;
}
// Realm is now downloaded and ready for use
carRealm = realm;
});