I see the following logs in Application Insights after running an Azure Durable Function:
Does 'Response time' indicate the execution time of each function? If so, is there a way to run a Kusto query to return the response time and name of each function?
Yes, Response Time is the time taken to complete execution; in other words:
Response Time = Latency + Processing Time
You can use the following KQL query to pull the function name and response time:
requests
| project
    timestamp,
    functionName = name,
    FuncexecutionTime = parse_json(customDimensions).FunctionExecutionTimeMs,
    operation_Id,
    functionappName = cloud_RoleName
I am using the ADX Command activity in ADF v2 (Azure Data Factory) to append data to one of my Kusto tables, but very frequently this fails with an error after an hour. If the underlying activity finishes within an hour it succeeds, but if it tries to run beyond one hour it is terminated (times out).
When I check the operation status through Kusto Explorer, using the operation id from the ADF error, I see that after 59 minutes the operation has failed:
"The admin command execution timed out at..."
This is happening despite specifying a 2-hour timeout for the ADX Command activity in the data factory. Why is it timing out after only an hour, and how do I avoid this?
The ADX Command activity limits execution time via the specified Command timeout parameter, and the maximum for that parameter is 1 hour. Please see the docs:
ADX Command activity - Command timeout
I am trying to call an API every minute for ski lift status and check for changes. I am going to store whether the lift is open or closed in Firebase (Realtime Database), read it to see if the value from the API is different, and only update/write to that node when it is a different value. Then I can set up a Cloud Function that listens for database changes and sends push notifications to the list of FCM tokens for that channel (roughly sketched below). I am not sure if this is the most efficient way, but I was going to set up scheduled functions to call the third-party API.
I have been using these docs:
https://firebase.google.com/docs/functions/schedule-functions
I was planning to do something like this:
exports.scheduledFunction = functions.pubsub.schedule('every 5 minutes').onRun((context) => {
  // Call my API in here and update the database if the snapshot that comes back is different
});
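For the notification side, I had something like this in mind (a rough sketch only; the '/lifts/{liftId}/open' path, the topic name, and the payload shape are placeholders I made up):
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Fires when a lift's open/closed flag changes in the Realtime Database,
// then fans out a push notification via an FCM topic.
exports.notifyLiftChange = functions.database
  .ref('/lifts/{liftId}/open')
  .onUpdate(async (change, context) => {
    const isOpen = change.after.val();
    await admin.messaging().send({
      topic: `lift-${context.params.liftId}`, // assumes clients subscribe per lift
      notification: {
        title: 'Lift status changed',
        body: isOpen ? 'The lift is now open' : 'The lift is now closed',
      },
    });
    return null;
  });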
I was wondering how I would run it only between set times, say 8am-6pm EST. I am struggling to find anything about restricting run times. Should I just run the function every minute and then pause and resume by checking the time? In that case, how would it know to keep checking the time while it is paused?
Firebase scheduled functions use Cloud Scheduler to implement the schedule. It accepts cron-style time specifiers to indicate when a job should run; the full spec for that can be found here. You will have to use ranges of numbers to indicate the valid times and frequency of the schedule. For example, you might use "8-18" in the hour field to limit the hours of execution.
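For instance, a minimal sketch of that window (assumptions on my part: the America/New_York timezone for EST, and a hypothetical callMyApi() helper):
const functions = require('firebase-functions');

// Cron fields are: minute hour day-of-month month day-of-week.
// '* 8-18 * * *' fires every minute from 08:00 through 18:59.
exports.liftStatusCheck = functions.pubsub
  .schedule('* 8-18 * * *')
  .timeZone('America/New_York')
  .onRun(async (context) => {
    await callMyApi(); // hypothetical: fetch lift status, write to RTDB on change
    return null;
  });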
I need to run a cron job to perform a specific Cloud Function after a set interval, only once, but I'm a bit unsure how to do it. Is there any way to do this through the current Google Cloud Platform?
Update, following our discussion in the comments below:
If you want to "change a document in your Firestore database 2 hours after it has been created" you could do as follows:
When creating the document in Firestore, save the date/time of creation, e.g. with firebase.firestore.FieldValue.serverTimestamp()
Have an HTTP Cloud Function that you call regularly as explained below (every minute? every 5 minutes?) which first selects the documents that were created 2 hours ago (based on the saved timestamp) and then performs the desired action on those docs, as in the sketch right after this list.
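A minimal sketch of that function, assuming a 'docs' collection with 'createdAt' and 'processed' fields (all three names are made up for illustration):
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.processExpiredDocs = functions.https.onRequest(async (req, res) => {
  // Select documents created two hours ago or earlier that are still pending.
  // Note: this compound query needs a composite index in Firestore.
  const cutoff = admin.firestore.Timestamp.fromMillis(Date.now() - 2 * 60 * 60 * 1000);
  const snapshot = await admin.firestore()
    .collection('docs')
    .where('processed', '==', false)
    .where('createdAt', '<=', cutoff)
    .get();

  const batch = admin.firestore().batch();
  snapshot.forEach((doc) => batch.update(doc.ref, { processed: true })); // the desired action
  await batch.commit();
  res.send(`Processed ${snapshot.size} document(s).`);
});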
If you want to trigger a Cloud Function through a cron job, note that you would normally do that through an HTTP Cloud Function, calling the Cloud Function URL via the cron job.
You can either use an external service like cron-job.org, or you can use GCP's App Engine and Cloud Pub/Sub.
See this video: https://www.youtube.com/watch?v=fEBPAMSk5_8
and this Blog post: https://firebase.googleblog.com/2017/03/how-to-schedule-cron-jobs-with-cloud.html
both from the Firebase team.
Finally, note that GCP recently launched a new product, Cloud Scheduler, which can be used to call HTTP Cloud Functions.
Sorry for the late answer; I was stuck on this issue once too. You can indeed schedule a job to execute once at a particular time, but you must involve another platform, because Firebase Cloud Functions have an execution time limit. If you look at the Quotas and limits documentation, you can see that Firebase Cloud Functions are canceled after a set time limit (540 seconds, i.e. 9 minutes), so you can't schedule a job to run more than 9 minutes in the future from within a function invocation itself. However, you can use a Heroku server to cron a job without paying. Unfortunately, Heroku apps sleep after 30 minutes without traffic, but you can keep your app awake indefinitely with an external service such as cron-job.org, pinging it at intervals of less than 30 minutes. You can use node-schedule to schedule a job that executes exactly once, with code like this:
const schedule = require('node-schedule');

// Months are zero-indexed, so this is December 21, 2012 at 05:30:00
const date = new Date(2012, 11, 21, 5, 30, 0);
const job = schedule.scheduleJob(date, function () {
  console.log('The world is going to end today.');
});
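For example, a one-off run at a computed time, say two hours from now (a sketch; the two-hour offset is arbitrary):
// Build the target Date by adding the desired interval to the current time
const runAt = new Date(Date.now() + 2 * 60 * 60 * 1000);
const oneOffJob = schedule.scheduleJob(runAt, function () {
  console.log('Ran once, two hours after scheduling.');
});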
As the sketch shows, you can take the current time (or a Firestore timestamp) and add an interval to it to schedule whatever you like. Don't forget the timezone; you can use a RecurrenceRule to set it, like this:
const rule = new schedule.RecurrenceRule();
rule.dayOfWeek = [0, new schedule.Range(0, 6)]; // all days
rule.hour = req.body.hour;
rule.minute = req.body.minute;
rule.second = req.body.second;
rule.tz = "Europe/Istanbul"; // you can specify a timezone
Here, you can get the time specification from the user via a request from the client side, and then use the scheduleJob function for the task like this:
const job = schedule.scheduleJob(rule, function (data) {
  console.log("Job ran #", new Date().toString());
}.bind(null, dataFuture));
Here you can pass user data into the callback with .bind(), as with the dataFuture variable. If your users are on the native Android platform, you can build the time interval from an hour-of-day and minute like this:
import java.util.Calendar;
import java.util.Date;
import java.util.Locale;

// Take the current time and push it forward by 1 minute and 4 hours
Date currentTime = Calendar.getInstance().getTime();
Locale aLocale = Locale.forLanguageTag("tr-TR");
Calendar calendar = Calendar.getInstance(aLocale);
calendar.setTime(currentTime);
calendar.add(Calendar.MINUTE, 1);
calendar.add(Calendar.HOUR_OF_DAY, 4);
Alternatively, you can use the Cloud Tasks platform, but it may be a bit harder to use.
I have an application that's been running since 2015. It both reads and writes to approximately 16 calendars via a service account, using the Google Node.js client library (Calendar v3 API). We also have G Suite for Education.
The general process is:
Every 30 seconds it caches all calendar data via a list operation.
Periodically a student will request an appointment "slot"; the app first checks whether the slot is still open (via a list call), then performs an insert.
That's all it does. It had been running fine until the past few days, when API insert calls started failing:
{
  "code": 403,
  "errors": [{
    "domain": "usageLimits",
    "reason": "quotaExceeded",
    "message": "Calendar usage limits exceeded."
  }]
}
This isn't all that special - the documentation has three "solutions":
Read more on the Calendar usage limits in the G Suite Administrator help.
If one user is making a lot of requests on behalf of many users of a G Suite domain, consider using a Service Account with authority delegation (setting the quotaUser parameter).
Use exponential backoff.
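For completeness, the kind of backoff they mean would look roughly like this (a sketch only; insertEvent() is a hypothetical wrapper around the Calendar insert call, assumed to reject with err.code === 403 on quota errors):
// Exponential backoff with jitter around a hypothetical insertEvent() helper
async function insertWithBackoff(event, maxRetries = 5) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await insertEvent(event); // hypothetical Calendar API wrapper
    } catch (err) {
      if (err.code !== 403 || attempt === maxRetries) throw err;
      // Wait 2^attempt seconds plus up to one second of random jitter
      const delayMs = 1000 * 2 ** attempt + Math.random() * 1000;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}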
I'm not exceeding any of the stated limits as far as I can tell.
While I'm using a service account, it isn't making requests on behalf of a user. The service account has write access to the calendar and adds the user as an attendee.
Finally, I do not think exponential backoff will help, although I have not implemented it. The time between one insert request and the next is measured in seconds, not milliseconds. Additionally, running calls directly from the command line with a simple script produces the same problem.
Some stats:
2015 - 2,466 inserts, 186 errors
2016 - 25,747 inserts, 237 errors
2017 - 42,815 inserts, 225 errors
2018 - 41,390 inserts, 1,074 errors (990 of which are in the past 3 days)
I have updated the code over the years, but it has remained largely untouched this term.
At this point I'm unsure what to do - there is no channel to reach Google, and while I have not implemented a backoff strategy, the way timings work in this application means subsequent calls are already delayed by seconds and processed in a queue that handles requests sequentially. The only concurrent requests would be list operations.
I am using UFT 12.02 to create UFT API tests, the same tests I am using in LoadRunner to check transaction response times.
The challenge I am facing is checking for success and failure during execution. In LoadRunner we can easily check the response for different success indicators (e.g. response code '200 OK', or a 'user ID' or 'Success ID' generated by the system), but in a UFT API script we can add Start and End Transaction activities to the flow and still cannot check the application status based on any indicator.
Please let me know if there is any way to check whether a completed transaction is a pass or a failure.
Currently I am getting all transactions as passed, but the records inserted in the DB are far fewer than the passed transactions.
Unfortunately, you cannot set pass/fail criteria when using LoadRunner transactions in UFT. All transactions are considered finished with a "passed" status. The core purpose of the UFT-LoadRunner transaction integration is to measure response times, nothing else.