Airflow - Specify time of the day for execution timeout parameter

My DAG is scheduled to run daily at 7 AM. Can I specify a time of day for the execution timeout parameter instead of a duration?
For example, I want to add a specific time, 12 PM, so that the job fails if it is still running at 12 PM.

Such a param is not present in BaseOperator or DAG.
You'll have to build it. Here's a hint at how you could go about it (not certain whether this would work); a rough sketch follows below.
Write a custom TimeSensor (not to be confused with TimeDeltaSensor) by subclassing it, that kills the DAG upon failure (i.e. when it times out).
You'll have to override the execute() method.
For killing, you can look into the _mark_dagrun_state_as_failed() method.
With the specified datetime timeout, add that custom sensor task as one of the starting tasks (tasks that don't have an upstream task) of your DAG.
In case you have to time out some specific task(s) instead of the entire DAG,
you can write another custom TimeSensor that marks a specific task as failed upon timing out.
You can use the _mark_task_instance_state() method for it.
You can wire up this custom TimeSensor with that task in parallel (so that both the task and its sensor launch together).
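A minimal sketch of the whole-DAG variant, assuming Airflow 2.x. The class name is made up, and instead of calling the private _mark_dagrun_state_as_failed() helper it simply waits until the cut-off time and then flips the DagRun state through the task context, so treat it as a starting point rather than a tested solution.

from airflow.sensors.time_sensor import TimeSensor
from airflow.utils.session import provide_session
from airflow.utils.state import State


class KillDagAtTimeSensor(TimeSensor):
    """Hypothetical sensor: wait until target_time, then fail the still-running DAG run."""

    @provide_session
    def execute(self, context, session=None):
        # Inherited TimeSensor behaviour: poke until target_time (e.g. 12 PM) is reached
        super().execute(context)
        # If we get here, the cut-off time has passed while the DAG run is still active
        dag_run = context["dag_run"]
        dag_run.state = State.FAILED
        session.merge(dag_run)
        session.commit()

Wired in as a root task with target_time=time(12, 0), it would run alongside the real work and mark the run failed at noon; the obvious caveat is that if the rest of the DAG finishes early, this sensor keeps occupying a slot until noon unless you also time it out or skip it.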

Related

New task in DAG blocks further DAG executions

We have an ETL DAG which is executed daily. DAG and tasks have the following parameters:
catchup=False
max_active_runs=1
depends_on_past=True
When we add a new task, due to depends_on_past property, no new DAG runs get scheduled, as all previous states for new task are missing.
We would like to avoid having to run manual backfill or manually marking previous runs from UI as it can be easily forgotten, and we also have some dynamic DAGs where tasks get added automatically and halt future DAG executions.
Is there a way to automatically set past executions for new tasks as skipped by default, or some other approach that will allow future DAG runs to execute without human intervention?
We also considered creating a maintenance DAG that would insert missing task executions with skipped state, but would rather not go this route.
Are we missing something as the flow looks like a common thing to do?
Defined in Airflow documentation on BaseOperator:
depends_on_past (bool) – when set to true, task instances will run
sequentially and only if the previous instance has succeeded or has
been skipped. The task instance for the start_date is allowed to run.
As long as there exists a previous instance of the task, if that previous instance is not in the success state, the current instance of the task cannot run.
When adding a task to a DAG with existing dagruns, Airflow will create the missing task instances in the None state for all dagruns. Unfortunately, it is not possible to set the default state of task instances.
I do not believe there is a way to allow future task instances of a DAG with existing dagruns to run without human intervention. Personally, for depends_on_past enabled tasks, I will mark the previous task instance as success either through the CLI or the Airflow UI.
Looks like there is a GitHub issue describing exactly what you are experiencing! Feel free to bump this PR or take a stab at it if you would like.
A hacky solution is to set depends_on_past to False, as max_active_runs=1 will implicitly guarantee the same behavior. As of the current Airflow version, the scheduler orders both dag runs and task instances by execution date before running them (checked on 1.10.x and also 2.0).
Another difference is that the next execution will be scheduled even if the previous one fails. We solved this by retrying an unlimited number of times (setting a very large retry count), and alerting if the retry number grows larger than some threshold.
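A rough sketch of that workaround, assuming Airflow 2.x; the dag_id, task, retry threshold and alerting logic are all illustrative placeholders.

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator


def alert_on_excessive_retries(context):
    # Placeholder alerting hook: swap in Slack, email, PagerDuty, etc.
    ti = context["ti"]
    if ti.try_number > 5:  # illustrative threshold
        print(f"Task {ti.task_id} is on retry number {ti.try_number}")


with DAG(
    dag_id="etl_dag",  # illustrative name
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    max_active_runs=1,  # serializes runs even with depends_on_past=False
    default_args={
        "depends_on_past": False,
        "retries": 10_000,  # effectively "unlimited" retries
        "retry_delay": timedelta(minutes=10),
        "on_retry_callback": alert_on_excessive_retries,
    },
) as dag:
    BashOperator(task_id="extract", bash_command="echo extract")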

Airflow: how to stop next dag run from starting after failure

I'm trying to see whether or not there is a straightforward way to not start the next dag run if the previous dag run has failures. I already set depends_on_past=True, wait_for_downstream=True, max_active_runs=1.
What I have is tasks 1, 2, 3 where they:
create resources
run job
tear down resources
Task 3 always runs with trigger_rule=all_done to make sure we always tear down resources. What I'm seeing is that if task 2 fails and task 3 then succeeds, the next dag run starts. With wait_for_downstream=False it runs task 1, since the previous task 1 was a success; with wait_for_downstream=True it doesn't start the dag, as I expect, which is perfect.
The problem is that if tasks 1 and 2 succeed but task 3 fails for some reason, the next dag run starts and task 1 runs immediately, because both task 1 and task 2 (due to wait_for_downstream) were successful in the previous run. This is the worst-case scenario, because task 1 creates resources and then the job is never run, so the resources just sit there allocated.
What i ultimately want is for any failure to stop the dag from proceeding to the next dag run. If my previous dag run is marked as fail then the next one should not start at all. Is there any mechanism for doing this?
My current two best-effort ideas are:
Use a subdag so that there's only one task in the parent dag, and therefore the next dag run will never start at all if the previous single-task dag failed. This seems like it would work, but I've seen mixed reviews on the use of subdag operators.
Do some sort of logic within the dag as a first task that manually queries the DB to see if the dag has previous failures, and fail the task if it does. This seems hacky and not ideal, but it could work as well.
Is there any out-of-the-box solution for this? It seems fairly standard to not want to continue on failure, and to not want step 1 of run 2 to start if not all steps of run 1 were successful or if run 1 itself was marked as failed.
The reason depends_on_past is not helping you is that it's a task parameter, not a dag parameter.
Essentially what you're asking for is for the dag to be disabled after a failure.
I can imagine valid use cases for this, and maybe we should add an AirflowDisableDagException that would trigger this.
The problem with this is you risk having your dag disabled and not noticing for days or weeks.
A better solution would be to build recovery or abort logic into your pipeline so that you don't need to disable the dag.
One way you can do this is to add a cleanup task to the start of your dag, which can check whether resources were left sitting there and tear them down if appropriate, and just fail the dag run immediately if you get an appropriate error. You can consider using an Airflow Variable or XCom to store the state of your resources; a sketch of this idea follows below.
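A sketch of that first option, assuming the resource state is tracked in an Airflow Variable; the Variable name, dag/task ids, and teardown stub are placeholders.

from datetime import datetime

from airflow import DAG
from airflow.exceptions import AirflowFailException
from airflow.models import Variable
from airflow.operators.python import PythonOperator


def tear_down_resources():
    pass  # placeholder: delete the leftover resources here


def check_and_clean_resources():
    # "resources_allocated" is an illustrative Variable maintained by the create/teardown tasks
    if Variable.get("resources_allocated", default_var="false") == "true":
        tear_down_resources()
        Variable.set("resources_allocated", "false")
        raise AirflowFailException("Previous run left resources allocated; failing this run")


with DAG(
    dag_id="resource_pipeline",  # illustrative name
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    cleanup_guard = PythonOperator(
        task_id="cleanup_guard",  # first task; wire it upstream of the create-resources task
        python_callable=check_and_clean_resources,
    )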
The other option, notwithstanding the risks, is the disable dag approach: if your process fails to tear down resources appropriately, disable the dag. Something along these lines should work:
from airflow.models import BaseOperator, DagModel


class MyOp(BaseOperator):

    def disable_dag(self):
        # Pause the DAG so the scheduler stops creating new runs
        orm_dag = DagModel(dag_id=self.dag_id)
        orm_dag.set_is_paused(is_paused=True)

    def execute(self, context):
        try:
            print('something')  # your teardown logic goes here
        except TeardownFailedError:  # your own exception type for a failed teardown
            self.disable_dag()
The ExternalTaskSensor may work, with an execution_delta of datetime.timedelta(days=1). From the docs:
execution_delta (datetime.timedelta) – time difference with the previous execution to look at, the default is the same execution_date as the current task or DAG. For yesterday, use [positive!] datetime.timedelta(days=1). Either execution_delta or execution_date_fn can be passed to ExternalTaskSensor, but not both.
I've only used it to wait for upstream DAGs to finish, but it seems like it should work as self-referencing, because the dag_id and task_id are arguments for the sensor. You'll want to test it first, of course; a rough sketch of the self-referencing setup is below.
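A rough sketch of that setup, assuming Airflow 2.x; the dag_id, task ids, and timeout are illustrative and would need tuning.

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.sensors.external_task import ExternalTaskSensor


with DAG(
    dag_id="my_pipeline",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    max_active_runs=1,
) as dag:
    # Waits for yesterday's run of this same DAG to have finished its last task;
    # if yesterday failed, the sensor times out and this run fails before doing any work
    wait_for_previous_run = ExternalTaskSensor(
        task_id="wait_for_previous_run",
        external_dag_id="my_pipeline",      # self-referencing
        external_task_id="tear_down",       # last task of the previous run
        execution_delta=timedelta(days=1),  # look one schedule interval back
        timeout=60 * 60,                    # give up after an hour
        mode="reschedule",
    )

    create = BashOperator(task_id="create_resources", bash_command="echo create")
    run_job = BashOperator(task_id="run_job", bash_command="echo run")
    tear_down = BashOperator(task_id="tear_down", bash_command="echo teardown")

    wait_for_previous_run >> create >> run_job >> tear_down

Note that the very first run has no previous run to find, so the sensor would just time out there; you'd either mark it success once by hand or short-circuit the first execution date.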

Airflow: Trigger DAG to run even when cross-dependent DAG is not finished yet

I am in the process of setting up cross-dependent DAGS using the Airflow documentation. I have a particular use case where my DAG B requires that DAG A runs first - however, if DAG A is delayed long enough DAG B should still run. So I'm essentially looking for a way to wire an OR operation between 2 sensors.
Say DAG B needs to run daily by 5PM then this is how I would do it in code:
while True:
    CURRENT_TIME = getCurrentTime()
    if DAG A completed OR CURRENT_TIME > 5pm:
        run DAG B
This is much simpler to do in code however not seeing how this is done with Airflow.
Interesting problem; here's how I think it can be accomplished.
You need an ExternalTaskSensor at the beginning of your DAG-B, as described in the Cross-DAG Dependencies guide, to hold off the execution of DAG-B until DAG-A completes.
Here you must also set the timeout param so that the sensor fails after a certain maximum time.
Then, in the first actual task of DAG-B (the one that comes immediately after the ExternalTaskSensor), set trigger_rule=TriggerRule.ALL_DONE to ensure that the actual processing of your DAG-B starts irrespective of whether DAG-A completes within the stipulated time or not; in other words:
Execution of DAG-B will be held off until DAG-A completes, but only for a maximum delta duration.
If DAG-A completes within this duration, then DAG-B will begin executing immediately after that,
but if DAG-A fails to complete within this duration, DAG-B will begin executing anyway after this duration has passed.
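A sketch of that wiring, assuming Airflow 2.x; dag_a / dag_b and the six-hour timeout are illustrative stand-ins for your DAG names and the 5 PM cut-off.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.sensors.external_task import ExternalTaskSensor
from airflow.utils.trigger_rule import TriggerRule


with DAG(
    dag_id="dag_b",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag_b:
    # Hold off until the whole DAG-A run for the same execution date completes,
    # but stop waiting (and fail the sensor) after the cut-off duration
    wait_for_dag_a = ExternalTaskSensor(
        task_id="wait_for_dag_a",
        external_dag_id="dag_a",
        external_task_id=None,   # None means "wait for the entire DAG run"
        timeout=6 * 60 * 60,     # e.g. roughly "until 5 PM" for a morning schedule
        mode="reschedule",
    )

    # Runs whether the sensor succeeded (DAG-A finished) or failed (cut-off reached)
    process = BashOperator(
        task_id="process",
        bash_command="echo processing",
        trigger_rule=TriggerRule.ALL_DONE,
    )

    wait_for_dag_a >> process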

Airflow: Only allow one instance of task

Is there a way to specify that a task can only run once concurrently? So in the tree above, where DAG concurrency is 4, Airflow will start task 4 instead of a second instance of task 2?
This DAG is a little special because there is no order between the tasks. These tasks are independent but related in purpose, and are therefore kept in one DAG so as not to create an excessive number of single-task DAGs.
max_active_runs is 2 and dag_concurrency is 4. I would like it to start all 4 tasks and only start a task in the next run if the same task in the previous run is done.
I may have misunderstood your question, but I believe you want all the tasks in a single dagrun to finish before the tasks begin in the next dagrun. So a DAG will only execute once the previous execution is complete.
If that is the case, you can make use of the max_active_runs parameter of the dag to limit how many running concurrent instances of a DAG there are allowed to be.
More information here (refer to the last dotpoint): https://airflow.apache.org/faq.html#why-isn-t-my-task-getting-scheduled
max_active_runs defines how many running concurrent instances of a DAG there are allowed to be.
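For illustration (dag_id assumed), limiting a DAG to one active run at a time looks like this:

from datetime import datetime

from airflow import DAG

with DAG(
    dag_id="independent_tasks",  # illustrative name
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    max_active_runs=1,  # the next run only starts once the previous one has finished
) as dag:
    ...  # define the four independent tasks here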
The Airflow operator documentation describes the argument task_concurrency. Just set it to one; a minimal illustration follows.
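This would go inside the DAG definition (operator choice and ids assumed); note that in newer Airflow 2.x releases the parameter has been renamed max_active_tis_per_dag.

from airflow.operators.bash import BashOperator

task_2 = BashOperator(
    task_id="task_2",
    bash_command="echo task 2",
    task_concurrency=1,  # at most one running instance of this task across active DAG runs
)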
From the official docs for trigger rules:
depends_on_past (boolean) when set to True, keeps a task from getting triggered if the previous schedule for the task hasn’t succeeded.
So the future DAGs will wait for the previous ones to finish successfully before executing.
In airflow.cfg under [core] you will find:
dag_concurrency = 16
# The number of task instances allowed to run concurrently by the scheduler
You're free to change this to whatever you desire.

Airflow Controller triggers Target DAG with a future execution date; Target DAG stalls

I have a Controller DAG (SampleController) that will call a Target DAG (SampleWait), both with a start_date of datetime.now() and a schedule_interval of None.
I trigger the Controller DAG from the command line or the webserver UI, and it will run right away with an execution date of "right now" in my system time zone. In the screenshot, it is 17:25, which isn't my "real" UTC time; it is my local time.
However, when the triggered DAG Run for the target is created, the execution date is "adjusted" to the UTC time, regardless of how I try to manipulate the start_date - it will ALWAYS be in the future (21:25 here). In my case, it is four hours in the future, so the target DAG just sits there doing nothing. I actually have a sensor in the Controller that waits for the Target DAG to finish, so that guy is going to be polling for no reason too.
Even the examples in the GitHub repo for the Controller-Target pattern exhibit the exact same behavior when I run them, and I can't find any proper documentation on how to actually handle this issue, just that it is a "pitfall".
It is strange that Airflow seems to be aware of my time zone and adjusts within one operator, but not when I do it from command line or the web server UI.
What gives?
