What's the difference between Polymer's this.async, Promise.then, and the setTimeout function?
My understanding:
this.async and Promise.then move a task to the end of the current stack, while setTimeout is handled as a new task and executed on the next loop iteration, when the event loop takes a new task from the queue?
Please correct me if I am wrong.
TL;DR: Yes, but note that this.async uses setTimeout if a timeout is specified.
Polymer.Async.run (this.async) without timeout - queues a microtask (via a MutationObserver callback)
Polymer.Async.run (this.async) with timeout - queues a macrotask
Promise.then - queues a microtask
setTimeout - queues a macrotask
See also: Difference between microtask and macrotask within an event loop context
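To see the ordering difference, here is a minimal sketch in plain TypeScript; it uses only Promise.then and setTimeout (not Polymer's this.async), so it runs in any browser console or in Node:

setTimeout(() => console.log("macrotask: setTimeout callback"), 0);
Promise.resolve().then(() => console.log("microtask: Promise.then callback"));
console.log("synchronous code");

// Expected output:
// synchronous code
// microtask: Promise.then callback
// macrotask: setTimeout callback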
We have a cloud function set up with pub/sub triggers.
The function is invoked via topic(NAME).onPublish().
If the function is invoked when it is cold, it always runs twice. Here is the log (newest entries first):
Function execution took 284 ms, finished with status: 'ok' METHOD_NAME METHOD_ID
Received message from pub sub METHOD_NAME METHOD_ID
Function execution started METHOD_NAME METHOD_ID
Function execution took 24271 ms, finished with status: 'ok' METHOD_NAME METHOD_ID
Received message from pub sub METHOD_NAME METHOD_ID
Function execution started METHOD_NAME METHOD_ID
After that all future messages only run once, until the function goes cold again.
Is this because it takes a long time for the first invocation to complete and the timeout causes it to be run again? Any way to prevent this?
Startup time is almost certainly the issue. To verify this, try the following:
comment out portions of the function until it runs fast, to see if the problem goes away (time it locally if you can, e.g. with the timeit module)
increase the Acknowledgement Deadline (in seconds) on the subscription; it defaults to 10, so it could easily be the problem; try 20, 40, etc.
ensure that on the first (cold) run the function takes less time than its Timeout value (it defaults to 60 seconds, so this is not likely to be the problem)
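To help confirm the first point, here is a rough sketch (assuming the Node.js firebase-functions v1-style API and the topic name from the question) that logs how long each invocation's handler body takes, so it can be compared against the acknowledgement deadline:

import * as functions from "firebase-functions";

export const handler = functions.pubsub.topic("NAME").onPublish(async (message) => {
  const start = Date.now();
  console.log("Received message from pub sub");
  // ... the function's actual work goes here ...
  console.log(`Handler body finished in ${Date.now() - start} ms`);
  return null;
});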
I'm puzzled by what we see when running this setup:
FuncA: Google Cloud Function trigger-http
FuncB: Google Cloud Function trigger-topic
FuncA is called by an HTTP client. Upon being called, FuncA does some light work setting up a JSON object describing a task to perform, and stores this JSON in Google Cloud Storage. When the file has been written, FuncA publishes a message to a topic with a pointer to the gs file. At this point FuncA responds to the client and exits. Duration is typically 1-2 seconds.
FuncB is informed that a message has been published on the topic and is invoked. The task JSON is picked up and work begins. After processing, FuncB stores the result in the Firebase Realtime Database. FuncB exits at this point. Duration is typically 10-20 seconds.
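For reference, a minimal sketch of the setup just described (bucket, topic, and file names are made up; the @google-cloud/storage and @google-cloud/pubsub Node.js client libraries are assumed):

import { Storage } from "@google-cloud/storage";
import { PubSub } from "@google-cloud/pubsub";

const storage = new Storage();
const pubsub = new PubSub();

// FuncA (trigger-http): write the task JSON to GCS, publish a pointer to it, respond, exit.
export async function funcA(req: any, res: any): Promise<void> {
  const task = { action: "process", payload: req.body };            // light setup work
  const file = storage.bucket("task-bucket").file(`tasks/${Date.now()}.json`);
  await file.save(JSON.stringify(task));                            // store the task JSON
  await pubsub.topic("task-topic").publishMessage({ data: Buffer.from(file.name) });
  res.status(200).send("queued");                                   // respond and exit (1-2 s total)
}

// FuncB (trigger-topic): pick up the task pointer, process it, store the result, exit.
export async function funcB(message: { data: string }): Promise<void> {
  const fileName = Buffer.from(message.data, "base64").toString();  // pub/sub data is base64
  const [contents] = await storage.bucket("task-bucket").file(fileName).download();
  const task = JSON.parse(contents.toString());
  // ... 10-20 s of processing, then write the result to the Firebase Realtime Database ...
}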
Since FuncA and FuncB are in no way associated (they live their individual process lifecycles under different function names and triggers, and only communicate through one-directional pub/sub message passing from A to B), we would expect FuncA to run again and again, publishing messages at whatever rate, and FuncB to be triggered and fan out to keep up with whatever pace FuncA is called at.
This is, however, not what happens.
In the logs we see results following this pattern:
10:00:00.000 FuncA: Function execution started
10:00:02.000 FuncA: Function execution took 2000 ms, finished with status: 'ok'
10:00:02.500 FuncB: Function execution started
10:00:17.500 FuncB: Function execution took 15000 ms, finished with status: 'ok'
10:00:18.000 FuncA: Function execution started
10:00:20.000 FuncA: Function execution took 2000 ms, finished with status: 'ok'
10:00:20.500 FuncB: Function execution started
10:00:35.500 FuncB: Function execution took 15000 ms, finished with status: 'ok'
...
The client calling FuncA apparently has to wait for both FuncA and FuncB to finish before it is let through with the next request. The expectation was that FuncA would finish and allow a new call in immediately, at whatever pace the calling client can throw at it.
Beefing the client up with more threads only repeats this pattern, such that "paired" calls to FuncA->FuncB always wait for each other.
Discuss, clarify, ... Stack Overflow, do your magic! :-)
Thanks in advance.
I have a task that listens for certain events and kicks off other functions.
This function (the listener) subscribes to a Kafka topic and runs forever, or at least until it gets a 'stop' event.
Wrapping this in an Airflow operator doesn't seem to work properly.
Meaning, if I send the stop event, it does not process it, or anything else for that matter.
Is it possible to run busy-loop functions in Airflow?
No, do not run infinite loops in an Airflow task.
Airflow is designed as a batch processor; long-running or infinite tasks run counter to its entire scheduling and processing model, and while it might "work", it will lock up a task runner slot.
I wrote my job using JSR-352 and deployed it on WildFly. How can I schedule the job so that each run starts after a delay following the previous run's end time, as in the timeline below, where = is execution time and - is delay time:
===============--=====--========--
Note: the maximum number of concurrent job executions is one.
The JBeret EJB scheduler supports repeating job executions, with either a fixed interval duration or a certain delay duration after the start of a job execution. A delay after the end of a job execution is currently not supported. If your job execution duration is relatively predictable, you can approximate this with either an interval or a delay after the start of a job execution.
To achieve this kind of job scheduling with finer control, you can try the following:
schedule a single-action job execution
configure a job listener in job.xml to watch for the end of the above job execution, and schedule the next single-action job execution with a short initial delay
specifically, the job listener's afterJob() method should be able to look up or inject TimerSchedulerBean, which is a local singleton EJB, and invoke its org.jberet.schedule.TimerSchedulerBean#schedule method. The job listener is responsible for creating an instance of org.jberet.schedule.JobScheduleConfig and passing it when calling the EJB business method. The job listener should already have all the info needed to create the JobScheduleConfig.
In Flex, I'm making a set of asynchronous calls:
service.method1.send().addResponder(responder1);
service.method2.send().addResponder(responder2);
service.method3.send().addResponder(responder3);
I want to execute some code after all of these service calls have returned (either success or failure, I don't care which). How can I do this?
Implement a CallResponder that monitors the results from each responder and increments a variable in the listener after each result. When the variable hits three, execute your code.
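The counting idea, sketched here in TypeScript purely for illustration since the pattern itself is language-agnostic (in Flex you would do the increment inside each responder's result and fault handlers; the names below are made up):

const total = 3;        // number of outstanding service calls
let settled = 0;        // how many have returned so far (success or failure)

function onCallSettled(): void {
  settled++;
  if (settled === total) {
    // all three calls have returned; run the follow-up code here
    console.log("all service calls finished");
  }
}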