Scheduled tasks in Windows Server 2016 don't run after being disabled then enabled

I am having a problem with the Task Scheduler on Windows Server 2016 not running repetitive tasks after they are disabled and re-enabled.
I create a task with a “Triggers->Start” of five minutes in the future and set the task to run every five minutes. The “Settings->Run task as soon as possible after scheduled start is missed” option is checked.
Everything works fine: the task runs at the scheduled start time and then every five minutes thereafter.
If I then disable the task, wait more than five minutes, and re-enable it, it does not run again.
The Next Run Time continues to update every five minutes, but the task does not run and the Last Run Time is never updated.
There are no entries in the Task History once the task is re-enabled and no events in the Windows event logs.

The task was set to run every day and repeat either indefinitely or for a day. I changed the trigger to run once and repeat indefinitely, and now the task resumes running on schedule after being re-enabled.
I believe this works because the scheduler shows that the task will "After triggered, repeat every x minutes indefinitely" even while the task is disabled.
The task stays in its "has been triggered" state when disabled, and so resumes running once enabled.
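If you script task creation, the same workaround can be expressed with schtasks: a one-time trigger that repeats, instead of a daily trigger. A minimal sketch in Python; the task name, command, and repetition duration are illustrative assumptions:

import subprocess

# Create a task with a ONCE trigger that repeats every 5 minutes, rather
# than a DAILY trigger, so it keeps firing after a disable/enable cycle.
subprocess.run(
    [
        "schtasks", "/Create", "/F",
        "/TN", "MyRepeatingTask",      # hypothetical task name
        "/TR", r"C:\scripts\job.cmd",  # hypothetical command to run
        "/SC", "ONCE",                 # one-time trigger ...
        "/ST", "00:00",                # ... starting at midnight
        "/RI", "5",                    # repeat every 5 minutes
        "/DU", "9999:59",              # for the longest duration schtasks accepts
    ],
    check=True,
)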

Related

DAG stops running once it reaches 50 successful runs

I am using the latest Airflow. I ran a DAG file that just executes a print and a 10-second sleep command.
Once that DAG completed 50 successful runs, it stopped automatically. When I restart the web server, scheduler, and worker, it runs for another 50. I did this twice with the same result.
There is some issue with the schedule. The most likely reason is that the DAG is defined as a catchup or backfill run with specific start_date and end_date parameters, so the scheduler stops once the bounded window has been filled.
For more details: https://airflow.apache.org/docs/stable/dag-run.html
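A minimal sketch of a DAG that reproduces this behaviour, assuming Airflow 2.x (the DAG id, dates, and hourly interval are illustrative):

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

# With catchup=True and a bounded start_date/end_date, the scheduler
# backfills the window and then has nothing left to schedule, so the
# DAG appears to "stop" after a fixed number of runs.
with DAG(
    dag_id="print_and_sleep",              # illustrative DAG id
    start_date=datetime(2021, 1, 1),
    end_date=datetime(2021, 1, 3),         # bounded window -> finite run count
    schedule_interval=timedelta(hours=1),
    catchup=True,
) as dag:
    BashOperator(
        task_id="print_and_sleep",
        bash_command="echo running && sleep 10",
    )

Dropping end_date (or scheduling against an open-ended window) lets the DAG keep running indefinitely.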

Airflow Dependencies Blocking Task From Getting Scheduled

I have an Airflow instance that had been running with no problems for two months until Sunday. There was a blackout in a system my Airflow tasks depend on, and some tasks were queued for two days. After that we decided it was better to mark all the tasks for that day as failed and just lose that data.
Nevertheless, now all the new tasks get triggered at the proper time but are never set to any state (neither queued nor running). I checked the logs and I see this output:
Dependencies Blocking Task From Getting Scheduled
All dependencies are met but the task instance is not running. In most cases this just means that the task will probably be scheduled soon unless:
- The scheduler is down or under heavy load
- The following configuration values may be limiting the number of queueable processes: parallelism, dag_concurrency, max_active_dag_runs_per_dag, non_pooled_task_slot_count
- This task instance already ran and had its state changed manually (e.g. cleared in the UI)
I get the impression the third point is the reason it is not working.
The scheduler and the webserver were both up; I restarted the scheduler anyway and still got the same outcome. I also deleted the data in the MySQL database for one job, and it is still not running.
I also saw a couple of posts saying a task won't run when depends_on_past is set to true and a previous run failed, since the next run will then never be executed. I checked, and that is not my case.
Any ideas? Thanks
While debugging a similar issue I found the setting AIRFLOW__SCHEDULER__MAX_DAGRUNS_PER_LOOP_TO_SCHEDULE (see http://airflow.apache.org/docs/apache-airflow/2.0.1/configurations-ref.html#max-dagruns-per-loop-to-schedule). Reading the Airflow code, the scheduler queries for DAG runs to examine (i.e. to consider running task instances for), and that query is limited to this number of rows (20 by default). So if you have more than 20 DAG runs that are blocked in some way (in our case because task instances were up-for-retry), the scheduler won't consider the other DAG runs even though they could run fine.
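If that limit is the bottleneck, it can be raised in the scheduler section of airflow.cfg or through the environment variable named above; the value 100 here is purely illustrative:

# airflow.cfg
[scheduler]
max_dagruns_per_loop_to_schedule = 100

# or equivalently, as an environment variable:
AIRFLOW__SCHEDULER__MAX_DAGRUNS_PER_LOOP_TO_SCHEDULE=100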

task must be cleared before being run

I have a task that's scheduled to run hourly, however it's not being triggered. When I look at the Task Instance Details, it says:
All dependencies are met but the task instance is not running. In most cases this just means that the task will probably be scheduled soon unless:
- The scheduler is down or under heavy load
- The following configuration values may be limiting the number of queueable processes: parallelism, dag_concurrency, max_active_dag_runs_per_dag, non_pooled_task_slot_count
- This task instance already ran and had its state changed manually (e.g. cleared in the UI)
If this task instance does not start soon please contact your Airflow administrator for assistance.
If I clear the task in the UI, I am able to execute it through the terminal, but it does not run when scheduled.
Why do I have to manually clear it after every run?
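For reference, the clear step described above can also be done from the command line; a sketch assuming Airflow 2.x, where the DAG id, task id, and dates are placeholders:

# Clear the task instances so the scheduler will consider them again.
airflow tasks clear my_dag --task-regex my_task --start-date 2021-01-01 --end-date 2021-01-02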

how to create a wait job in informatica

My requirement is to create a job in Informatica that runs every 15 minutes and checks a status column in the abc table. If the status is "Approved", it exits and kicks off the rest of the jobs.
If the status is not approved, it does nothing and runs again after 15 minutes. This process continues until we have an approved status.
So, no matter which of the two scenarios occurs, the process runs every 15 minutes.
I have implemented the same requirement in Unix using loops and conditional statements, but I am not sure how this can be achieved in Informatica. Could you please help me with this?
Regards,
Karthik
I would try adding a scheduler that runs every 15 minutes. The best way that I've found to "loop" sessions in Informatica is:
- run the session once and check whether it failed, using conditional links
- if it did fail, run a timer task for some amount of time (a minute, an hour, whatever)
- then run the same session again by copying and pasting it ahead of the timer task, repeating as many times as necessary
So if you added a scheduler into the mix, you could set it to run the workflow every 15 minutes and have the timer tasks halt the workflow for four or five minutes each. Then you could use the SESSSTARTTIME variable in a pre- or post-session task to determine when the scheduler will fire again, and simply abort the workflow before that time.
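For comparison, the polling logic being replicated is just the loop below; a minimal Python sketch with the abc table and status column taken from the question, and the database connection details assumed:

import time
import sqlite3  # stand-in for whichever database driver you actually use

def status_is_approved(conn):
    # Check the status column of the abc table described in the question.
    row = conn.execute("SELECT status FROM abc LIMIT 1").fetchone()
    return row is not None and row[0] == "Approved"

def wait_for_approval(conn, poll_minutes=15):
    # Poll every 15 minutes until the status reads 'Approved', then return
    # so the rest of the jobs can be kicked off.
    while not status_is_approved(conn):
        time.sleep(poll_minutes * 60)

conn = sqlite3.connect("example.db")  # hypothetical database
wait_for_approval(conn)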

Quartz.NET job configured to run once daily works the first day but fails to fire on subsequent days

I've configured Quartz.NET within my ASP.NET application and I have a job set to run daily at 1am. If I change the job-config.xml file so the job runs in two minutes' time, the change is picked up automatically without restarting the app pool, and the job fires. However, if I then revert the change so the job fires at 1am again, it doesn't fire.
The obvious explanation would be that the app pool or IIS was restarted, shutting down my ASP.NET app (and with it Quartz.NET, since it runs in the same ASP.NET process). But to test whether Quartz.NET is still running, without any iisreset or app-pool recycle, I change job-config.xml to fire within two minutes again and the job runs. So it doesn't seem that an app-pool recycle or IIS reset occurred, and I don't understand what's happening.
I would like to keep the job running under my asp.net application without having to create an additional windows service, so any help would greatly be appreciated. Below is a snippet of my quartz.net config file.
<job>
  <job-detail>
    <name>xJob</name>
    <group>MyJobs</group>
    <description>blah blah</description>
    <job-type>yyy.xxx,yyy</job-type>
    <volatile>false</volatile>
    <durable>true</durable>
    <recover>false</recover>
  </job-detail>
  <trigger>
    <cron>
      <name>xJobTrigger</name>
      <group>MyJobs</group>
      <description>blah blah</description>
      <job-name>xJob</job-name>
      <job-group>MyJobs</job-group>
      <cron-expression>0 0 1 * * ?</cron-expression>
    </cron>
  </trigger>
</job>
Thank you
I think there are a couple of things going on. As you mention, if you run the scheduler under ASP.NET, the process might get recycled, and then your scheduler is not available to run the job at the time it is supposed to run. If having the job fire at a given time is important, you need to set up Quartz.NET as a service.
The other thing is that I think you are using a RAM job store (RAMJobStore), which means that when the scheduler restarts it reschedules the job from scratch, so in your case it will not fire again until 1am. If the scheduler is not running at 1am, the job won't fire, even if the scheduler restarts later (and so on). If you use an ADO.NET job store instead, the trigger is persisted, and the job will run once the scheduler starts again and the misfire is detected.
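A minimal sketch of the quartz properties for a persistent ADO.NET job store; the delegate, table prefix, connection string, and provider value are assumptions to adapt to your environment:

# quartz.config
quartz.jobStore.type = Quartz.Impl.AdoJobStore.JobStoreTX, Quartz
quartz.jobStore.driverDelegateType = Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz
quartz.jobStore.tablePrefix = QRTZ_
quartz.jobStore.dataSource = default
quartz.dataSource.default.connectionString = Server=.;Database=Quartz;Trusted_Connection=True;
quartz.dataSource.default.provider = SqlServer

With the trigger persisted, the cron trigger's misfire instruction determines whether a 1am firing missed during downtime runs immediately on restart or waits for the next scheduled time.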
