Firebase Cloud Function with a cron job: what happens if the connection is closed while something is executing?

I want to know what happens to a Cloud Function if the connection is dropped while it is still executing its tasks.
cron-job.org says this:
You should design your scripts in a way that they send as little data as possible, ideally just a short status message at the end of the execution, e.g. "OK" — or simply nothing. In case your script is written in PHP and needs more than 30 seconds of run-time, you can use the following trick to let it continue to execute in the background: Use the PHP function ignore_user_abort(true) to tell PHP to continue the script execution after disconnection.
Let's say the task is something like querying the database with a certain condition and deleting the matched data.
If there is too much data and the task does not finish within 30 seconds, what will happen?
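For reference, here is a rough TypeScript sketch of the kind of function being described, written against the Firebase Admin SDK; the collection name, field, and cutoff are hypothetical placeholders and not part of the original question:
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

// HTTP endpoint intended to be called by cron-job.org.
// Hypothetical example: delete documents that match a condition.
export const cleanup = functions.https.onRequest(async (req, res) => {
    const cutoff = Date.now() - 24 * 60 * 60 * 1000;    // placeholder condition
    const snapshot = await admin.firestore()
        .collection("items")                            // hypothetical collection
        .where("createdAt", "<", cutoff)                // hypothetical field
        .get();

    // A Firestore batch holds at most 500 operations; a real job would page
    // through the results.
    const batch = admin.firestore().batch();
    snapshot.docs.forEach((doc) => batch.delete(doc.ref));
    await batch.commit();

    // Respond with a short status message, as cron-job.org recommends.
    res.send("OK");
});
Note that such a function is also subject to its own Cloud Functions timeout, which is configured separately from cron-job.org's 30-second client-side limit.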

Related

Running process not shown in active process instances, and difference between ASYNC & SYNC tasks

My workflow is quite simple: I have two script tasks, the first is ASYNC and the second is SYNC. In each script I have a loop from 0 to Integer.MAX_VALUE, as follows:
for (int i = 0; i < Integer.MAX_VALUE; i++)
    System.out.println("value is " + i);
When I run my process, it starts working and I can see my log file being filled. But when I want to stop it, I find nothing in my active process instances, nor in the completed or aborted ones. Even if I check my database, there is nothing related to this process in ProcessInstanceInfo or ProcessInstanceLog. So weird, isn't it? What could be the reason?
The goal of creating this workflow is to see the difference between ASYNC and SYNC tasks. As I understand it, when an ASYNC task starts running, the workflow does not have to wait for it to finish; yet what I observe is that my ASYNC task keeps running and the flow never moves on to the next task. So my second question is: can anyone explain the difference between ASYNC and SYNC tasks with a good example? I would appreciate an answer to at least one of my two questions. Thanks.
What do you mean by stopping it? Do you abort the process instance?
In the scripts you can populate process variables with kcontext.setVariable("variable_name", "variable_value"). This will be reflected in the database if you have defined the process variable as persistent in the process model.
As for the tasks: a sync task returns flow control to the process only once it has completed. With an async task, by contrast, the process flow continues immediately after the task has been dispatched for execution.
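Outside of jBPM, the same distinction can be illustrated with a small TypeScript sketch (purely an analogy, not jBPM code): a synchronous step is awaited before the flow moves on, while an asynchronous step is merely dispatched.
async function syncTask(): Promise<void> {
    // Simulates a long-running task that the flow must wait for.
    await new Promise<void>((resolve) => setTimeout(resolve, 1000));
    console.log("sync task finished");
}

function asyncTask(): void {
    // Fire-and-forget: the flow does not wait for this to finish.
    new Promise<void>((resolve) => setTimeout(resolve, 1000))
        .then(() => console.log("async task finished"));
}

async function flow(): Promise<void> {
    await syncTask();   // flow control returns only when the task completes
    asyncTask();        // flow continues immediately after dispatching the task
    console.log("next node reached");
}

flow();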

How to have the same execution id in logs on Cloud Functions for Firebase

There are multiple executions happening interleaved/concurrently in Firebase. The execution id in the logs changes whenever a new execution starts, and the old execution id is forgotten, so the execution id only moves forward. When the old function invocation resumes, it logs under a new execution id. Is there a way to keep the old execution id for the old invocation and a new execution id for the new one?
Workflow:
Let's say Function1 & Function2 are different triggers of the same function.
1. Function1 does some DB reads and makes HTTP requests. This returns an HTTP promise, which takes some time (maybe a few ms). Let's assume its execution-id from the log is 154690519665944.
2. Function2 gets triggered while Function1 is waiting. Function2 gets execution-id 154690574405903. Function2 also does the same thing and waits for its HTTP response.
3. Function1 resumes once it gets the HTTP response, and while logging it uses another execution-id, 154694739233261.
What happened to execution-id 154690519665944?
Since there are multiple triggers happening simultaneously, the only way to find out whether a function completed successfully is to check the logs. By filtering on the execution-id, I could have determined whether the function executed successfully or not. But because Firebase changes the execution-id mid-invocation, I guess I have to find another solution.
PS: There's an update call which will trigger the same function. Does that change the parent function's execution-id?
Without seeing your code or complete logs, I don't think a definitive answer can be provided.
However, it sounds like what you're really asking is how to keep data available across asynchronous transactions. Firebase or not, you basically have two options:
1. Commit that data to the database during the first part of the transaction and then retrieve it in the second part.
2. Pass the data from the first part to the second part so that it already has it.
You seem to be relying on the execution-id, so I would recommend taking the latter approach: pass that id along as part of the input to your HTTP request and have the server you're calling return it in its response.
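As a minimal TypeScript sketch of that idea (the URL, field names, and the use of node-fetch are assumptions, not anything from the original question):
import fetch from "node-fetch";   // assumed HTTP client

// Attach our own correlation id to the request so the response can be matched
// back to the invocation that started it, independent of the platform's ids.
async function callWithCorrelationId(correlationId: string): Promise<void> {
    console.log(`[${correlationId}] sending request`);
    const response = await fetch("https://example.com/api", {   // hypothetical URL
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ correlationId, payload: "..." }),
    });
    const result = (await response.json()) as { correlationId: string };
    // Log with the id we sent, not whatever execution id the platform assigns
    // when the function resumes.
    console.log(`[${result.correlationId}] got response`);
}
This only works if the endpoint you call echoes the id back (or if you simply keep the id in a local variable across the await), but either way the correlation no longer depends on Firebase's own execution ids.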

Is it possible to auto-update data every day on Firebase [duplicate]

Is it possible on Firebase or Parse to set up something kinda like a cron job?
Is there a way to set up some sort of timed operation that runs over the stored user data?
For example, I'm writing a program that allows people to RSVP for lunch every day. If you have RSVPed by noon, then you get paired up with somebody else who has also RSVPed. Using JavaScript, the user can submit their RSVP in the browser.
The question is, can Firebase/Parse execute the code to match everyone at 12:00pm every day?
Yes, this can be done with Parse. You'll need to write your matching function as a background job in cloud code, and then you'll need to schedule the task in the dashboard. In terms of the flexibility in scheduling, it's not as flexible as cron but you can definitely run a task at the same time every day, or every x minutes/hours.
Tasks can take 15 mins max to execute before they're killed, so depending on the size of your database or the complexity of your task, you may need to break it up into different tasks or make it resumable.
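For illustration, here is a rough sketch of such a background job in the legacy Parse-hosted Cloud Code style, written as TypeScript; the job name, class name, and matching logic are hypothetical, and newer Parse Server versions use a promise-based job signature instead of the status callbacks shown here:
declare const Parse: any;   // Parse SDK global provided inside Cloud Code

// Background job, scheduled from the Parse dashboard (e.g. daily at noon).
Parse.Cloud.job("matchRsvps", (request: any, status: any) => {
    const query = new Parse.Query("Rsvp");   // hypothetical class name
    query.find().then((rsvps: any[]) => {
        // Pair up everyone who RSVPed before noon (matching logic omitted).
        status.success("matched " + rsvps.length + " RSVPs");
    }, (error: any) => {
        status.error(error.message);
    });
});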
Just to confirm about Firebase:
As @rickerbh said, it can be done with Parse, but currently there is no way for you to run your code on Firebase's servers. There are two options for you to solve this:
You could use Firebase Queue and run your code in Node.js (a sketch of scheduling your own Node.js code follows this answer)
You could use a different service such as Microsoft Azure (I haven't tried this yet; I'm not sure if it provides job scheduling for Android)
However, Firebase is working on something called Firebase Trigger, which should solve this problem, but it has not been released yet and there is no confirmed release date.
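To make the first option concrete, here is a minimal TypeScript sketch that schedules a hypothetical matching routine at noon every day using the node-cron package; node-cron, the database path, and the routine itself are assumptions on my part, not part of Firebase or of the answer above:
import * as cron from "node-cron";        // assumed scheduling library
import * as admin from "firebase-admin";  // credentials/config omitted

admin.initializeApp();

// Hypothetical matching routine: read today's RSVPs and pair people up.
async function matchRsvps(): Promise<void> {
    const snapshot = await admin.database().ref("rsvps").once("value");   // hypothetical path
    const rsvps = snapshot.val() || {};
    // ...pair up users and write the matches back (logic omitted)...
    console.log(`processed ${Object.keys(rsvps).length} RSVPs`);
}

// Run every day at 12:00 server time.
cron.schedule("0 12 * * *", () => {
    matchRsvps().catch((err) => console.error(err));
});
This process has to run somewhere you control (a Node.js server or worker), which is exactly the limitation the answer above describes.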

Asynchronous calls in OpenCPU

I would like to run an OpenCPU job asynchronously and collect its results from a different session. With Rserve + RSclient I can do the following:
RS.eval(connection, expression, wait = FALSE)
# do something while the job is running
and then when I'm ready to receive results call either:
RS.collect(connection)
to try to collect the results, waiting until they are ready if the job is still running, or:
RS.collect(connection, timeout = 0)
if I want to check the job state and let it run if it is still not finished.
Is it possible with OpenCPU to receive the tmp/*/... path with the result id before the job has finished?
It seems, according to this post, that OpenCPU does not support asynchronous jobs. Every request between the browser and the OpenCPU server must stay alive in order to execute a script or function and receive a response successfully.
If you find any workaround, I would be pleased to hear about it.
In my case, I need to run a long process (it may take a few hours) and I can't keep the client request alive until the process finishes.

Pagodabox or PHPfog + Gearman

All,
I'm looking for a good way to do some job backgrounding through either of these two services.
I see PHPFog supports IronWorks, but I need something more real-time. Through these cloud-based PaaS services, I'm not able to use popen(background.php --token=1234). So I'm thinking the best solution might be to kick off a Gearman worker to handle the job. (Actually, my preferred method would be to use WebSockets to keep a connection open and receive feedback from the job, rather than long-polling a DB table through AJAX, but neither of these services supports WebSockets.)
Question 1: is there a better solution than using Gearman to offload the job?
Question 2: I see PagodaBox supports 'worker listeners' (http://help.pagodabox.com/customer/portal/articles/430779) ... has anybody set this up with Gearman? Would it work?
Thanks
I am using PagodaBox with a background worker in an application I am building right now. Basically, PagodaBox daemonizes a PHP process for you (meaning it will continually run in the background), so all you really have to do is create a script that checks a database table for tasks to run, runs them, and then sleeps a bit so it's not running too many queries against your database.
This is a simplified version of what I have running:
// Remove time limit
set_time_limit(0);
// Show ALL errors
error_reporting(-1);
// Run daemon
echo "--- Starting Daemon ---\n";
while (true) {
    // Query 'work_queue' table for new tasks
    // Loop over items and do whatever tasks are associated with them
    // Update row to mark task as completed
    // Wait a bit
    sleep(30);
}
A benefit to this approach is that it's easy to test via CLI:
php tasks.php
You will see all the echo statements come through in the console as it runs, and of course this is much easier than a more complicated setup with other dependencies like Gearman.
So whenever you add a new task to the table, the maximum time you'll wait for that task to be picked up is 30 seconds (or whatever your sleep time is). This is preferable to cron jobs: if you set up a cron job to run every minute (the lowest possible interval) and the work takes longer than a minute, another cron process will start working on the same queue, and you can end up with a lot of duplicated task work that is hard to debug and troubleshoot. If you instead have either a single background worker that runs all tasks, or multiple background workers that each handle different task types, you will never run into this issue.
