Is there a best practice for scheduling a task to run at a specified date and hour in Meteor (e.g. what Buffer does with tweets)? I thought of a collection (sendlater) that contains all the info needed (date, time, code to run), and server code that checks (maybe every minute) whether there is anything due to run, and runs it.
I have a DAG that inserts data into a SQL Server database. Some of the tasks take 24+ hours to run, as the database it's inserting into is not high-performing.
I need to mark the tasks as complete automatically if they take more than 24 hours to run, as I need to move on from them so I can start inserting the next day's worth of data (the DAG runs daily and the data source has new data coming in every day). How can I do this programmatically, without having to go into the UI to mark them as 'Success' or 'Failed'?
You could follow an approach similar to the one shown in this Stack Overflow post: kill or terminate subprocess when timeout. Then, once the timeout occurs, you just need to make sure you don't raise any exception, so the task still finishes in a 'Success' state.
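A minimal sketch of that idea, assuming the long-running insert is launched as a subprocess from a PythonOperator callable (the command below is only a placeholder):

```python
import subprocess

def run_insert_with_timeout():
    """Hypothetical PythonOperator callable that caps the insert at 24 hours."""
    try:
        # subprocess.run kills the child process and raises TimeoutExpired
        # once the timeout is reached.
        subprocess.run(
            ["python", "insert_daily_data.py"],  # placeholder command
            timeout=24 * 60 * 60,
        )
    except subprocess.TimeoutExpired:
        # Swallow the timeout instead of re-raising, so the callable returns
        # normally and Airflow marks the task as successful.
        print("Insert hit the 24h limit; moving on to the next day's load.")
```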
So in my Android app, I am using the Realtime Database to store information about my users. That information should be updated every Monday at 00:00. I am using a Cloud Function to do this, but the problem here is the time zones. Right now I have set the time zone to 'Europe/Sofia' for testing purposes. The documentation says that the time zone for Cloud Functions must be from the TZ database. So I figured I could ask users for their preferred time zone before they register in my app and save it in the database. My question is: after getting the user's preferred time zone, is there a way to write only one Cloud Function and execute it dynamically for each time zone in the TZ database, or do I have to create individual functions for each time zone in the TZ database?
If I correctly understand your question, you could have a scheduled Cloud Function which runs every hour from 00:00 to 23:00 UTC+14:00 on Mondays, and, for every execution (i.e. for every hour within this range), query for the users that should be updated and execute the updates.
I'm not able to go into more detail, based on the info you have provided.
It's not possible to schedule a Cloud Function using a dynamic timezone. You must know the timezone at the time you write the function and declare it statically in your code.
If you want to schedule something dynamically, read through your options in this other question: https://stackoverflow.com/a/42796988/807126
So, you could schedule a repeating function that runs every hour, and check to see if something should be run for a user at the moment in time that it was invoked. Or, you can schedule a single future invocation of a function with a service like Cloud Run, and keep rescheduling it if needed.
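As a rough sketch of that hourly check (independent of how the function itself is scheduled or which runtime you use; the user lookup and the actual database write are left out because they depend on your data layout), each run can work out which TZ-database zones are currently in their Monday 00:00 hour and update only the users who saved one of those zones:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo, available_timezones

def zones_at_monday_midnight(now_utc: datetime) -> set[str]:
    """Return the TZ-database zones where it is currently Monday, in the 00:xx hour."""
    due = set()
    for name in available_timezones():
        local = now_utc.astimezone(ZoneInfo(name))
        if local.weekday() == 0 and local.hour == 0:  # Monday, midnight hour
            due.add(name)
    return due

if __name__ == "__main__":
    # In the hourly job: fetch the users whose saved time zone is in this set
    # and run the weekly update for them only.
    print(zones_at_monday_midnight(datetime.now(timezone.utc)))
```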
I have built a microservice where there is an API called deleteToken. This API (when invoked) is supposed to change the status of the tuple in the db corresponding to the token (identified by token id) to "MARK_DELETE". Once that tuple has status "MARK_DELETE", then after 30 days a REST call should be made to a downstream service API called deleteTokenFromPartner. There is no mandate that the call to deleteTokenFromPartner has to be made right after 30 days; it can also happen a few hours after the 30 days.

So what I thought was: I will write a scheduler (using Quartz or a Java Executor service) with a scheduled period such that it runs once every day. It will query the db and find all rows which have status="MARK_DELETE" and whose status update is older than 30 days. It will then iteratively call deleteTokenFromPartner for each and every row.

There is one db which is highly available, and we may not have any issue with consistency as we delete after 30 days. But the problem I am seeing is that, as this is a microservice with N instances, every instance will query the db, get the same set of rows, and make the call for the same rows. Can I make any tweak so that these duplicated calls can be avoided? FYI, we don't make any config changes using hostnames, and if only one instance is capable of running the scheduler, that too will be fine.
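For reference, a minimal sketch of the daily sweep described above, written in Python for illustration (the question mentions Quartz or a Java executor), with placeholder query and REST helpers; it does not address the duplicate-call problem across instances:

```python
from datetime import datetime, timedelta, timezone

def find_expired_tokens(cutoff):
    # Placeholder for the real db query, e.g.:
    #   SELECT token_id FROM tokens
    #   WHERE status = 'MARK_DELETE' AND status_updated_at < :cutoff
    return []

def call_delete_token_from_partner(token_id):
    # Placeholder for the REST call to the downstream deleteTokenFromPartner API.
    print(f"deleteTokenFromPartner({token_id})")

def daily_sweep():
    # Rows marked MARK_DELETE more than 30 days ago are due for the partner call.
    cutoff = datetime.now(timezone.utc) - timedelta(days=30)
    for token_id in find_expired_tokens(cutoff):
        call_delete_token_from_partner(token_id)

daily_sweep()
```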
We have source files that arrive in HDFS every day except holidays.
Our Oozie coordinator watches for these files to start every day. I do not want Oozie to run on the defined holidays. How can I do that? The coordinator should not time out if it is a holiday.
One possible solution is to run the job regularly and skip all of the job actions on holidays via a switch case, using decision nodes. To do this, start with a java action which checks whether today is a holiday, propagate this value to a decision node, and then decide based on it whether the required actions will run or not (Oozie supports propagating values in a workflow from one action to another). For each of the two scenarios, provide a different message for your confirmation: 'today is a holiday, required actions skipped' versus 'no holiday, job succeeded'.
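As a rough sketch of that holiday check (the answer suggests a java action; the same logic is shown here as a script whose output an action configured with capture-output could expose to the decision node, and the holiday dates, the is_holiday key, and the check_holiday action name are all hypothetical):

```python
import datetime

# Hypothetical holiday list; in practice this might come from a file or
# table that the workflow passes in as a parameter.
HOLIDAYS = {
    datetime.date(2024, 1, 1),
    datetime.date(2024, 12, 25),
}

# Emit the flag in Java-properties format so an action with <capture-output/>
# can expose it to the decision node, e.g. via
# ${wf:actionData('check_holiday')['is_holiday']}.
today = datetime.date.today()
print(f"is_holiday={'true' if today in HOLIDAYS else 'false'}")
```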
My requirement is to create a job in Informatica which will run every 15 minutes and look at a status column in the abc table. If it is “Approved”, it will exit and kick off the rest of the jobs.
If the status is not approved, it will not do anything and will run again after 15 minutes. This process will continue until we have an approved status.
So, no matter what happens in the above two scenarios, this process will run every 15 minutes.
I have worked on the same requirement in Unix using loops and conditional statements, but I am not sure how this can be achieved using Informatica. Could you please help me with this?
Regards,
Karthik
I would try adding a scheduler that runs every 15 minutes. The best way that I've found to "loop" sessions in Informatica is:
run the session once, check if it failed using conditional links
if it did fail, run a timer task for an amount of time (a minute, an hour, whatever)
then try to run the same session again by copying and pasting the session up ahead of the timer task, and repeat a few times as necessary.
So if you added a scheduler into the mix, you could set the scheduler to have the workflow run every 15 minutes, and have the timer tasks halt the workflow for 4 or 5 minutes each. Then you could use the SESSSTARTTIME variable in some pre-/post-session task to determine when the scheduler will fire off again and simply abort the workflow before that time.