Airflow trigger_rule all_done not working as expected

I have the following DAG in Airflow 1.10.9, where the clean_folder task should run once all the previous tasks have either succeeded, failed or been skipped. To ensure this, I set the trigger_rule parameter of the clean_folder operator to "all_done":
t_clean_folders = BashOperator(
    bash_command=f"python {os.path.join(custom_resources_path, 'cleaning.py')} {args['n_branches']}",
    trigger_rule='all_done',
    task_id="clean_folder",
)
This logic works properly when all tasks in the preceding branches are skipped:
[Graph view][1]
However, when a branch is executed successfully, the clean_folder task is skipped:
[Graph view][2]
The branches are defined dynamically as follows:
for b in range(args['n_branches']):
    t_file_sensing = FileSensor(
        filepath=f"{input_path}/input_{b}",
        task_id=f"file_sensing_{b}",
        poke_interval=60,
        timeout=60*60*5,
        soft_fail=True,
        retries=3,
    )
    t_data_staging = BashOperator(
        bash_command=f"python {os.path.join(custom_resources_path, 'staging.py')} {b}",
        task_id=f"data_staging_{b}",
    )
    ...
...
The documentation gives the following definition of "all_done": all parents are done with their execution. Is this the expected behavior of this trigger_rule? What can I change to ensure clean_folder runs in every case (and runs last)? Thanks!
[1]: https://i.stack.imgur.com/AGqqD.png
[2]: https://i.stack.imgur.com/7CG3r.png

If possible, you should consider upgrading your Airflow version to at least 1.10.15 in order to benefit from more recent bug fixes.
It really surprises me that clean_folder and dag_complete both get executed when all parent tasks are skipped. The behaviour when a task is skipped is to directly skip its child tasks without first checking their trigger_rules.
According to the Airflow 1.10.9 documentation on trigger rules,
Skipped tasks will cascade through trigger rules all_success and all_failed but not all_done [...]
For your use case, you could split the workflow into 2 DAGs:
- one DAG to do everything you want except the t_clean_folder task
- one DAG to execute the t_clean_folder task, preceded by an ExternalTaskSensor (sketched below)
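A minimal sketch of that second DAG, assuming the first DAG is named main_dag and ends with a dag_complete task (both names are illustrative), and that both DAGs share the same schedule; os, args and custom_resources_path are taken from your snippet:

import os

from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from airflow.sensors.external_task_sensor import ExternalTaskSensor

cleanup_dag = DAG(
    dag_id='cleanup_dag',
    default_args=args,
    schedule_interval='@daily',  # must match the schedule of the first DAG
)

# Waits for the final task of the first DAG to finish for the same execution date.
# By default the sensor waits for state "success"; widen allowed_states if needed.
wait_for_main_dag = ExternalTaskSensor(
    task_id='wait_for_main_dag',
    external_dag_id='main_dag',
    external_task_id='dag_complete',
    dag=cleanup_dag,
)

t_clean_folder = BashOperator(
    task_id='clean_folder',
    bash_command=f"python {os.path.join(custom_resources_path, 'cleaning.py')} {args['n_branches']}",
    dag=cleanup_dag,
)

wait_for_main_dag >> t_clean_folder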

Related

run next tasks in dag if another dag is complete

dag1:
start >> clean >> end
I have a dag where I run a few tasks, but I want to modify it such that the clean step only runs if another dag, "dag2", is not running at the moment.
Is there any way I can import information regarding my "dag2", check its status, and, if it is in success mode, proceed to the clean step?
Something like this:
start >> wait_for_dag2 >> clean >> end
How can I achieve the wait_for_dag2 part?
There are different answers depending on what you want to do:
- if you have two dags with the same schedule interval and you want the run of the second dag to wait for the same run of the first one, you can use an ExternalTaskSensor on the last task of the first dag
- if you want to run dag2 after each run of dag1, even when dag1 is triggered manually, you need to update dag1 to add a TriggerDagRunOperator and set the schedule interval of dag2 to None
I want to modify it such that the clean step only runs if another dag "dag2" is not running at the moment.
- if you have two dags and you don't want them to run at the same time, to avoid a conflict on an external server/service, you can use one of the first two propositions, or give the task of the first dag a higher priority and put the conflicting tasks in the same pool (with 1 slot); you will lose the parallelism on these tasks, though (sketched below)
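As a sketch of that last option (the pool name here is made up, and the 1-slot pool must be created beforehand in the Airflow UI or via the CLI), both conflicting tasks point at the same pool and the first dag's task gets the higher priority:

from airflow.operators.bash_operator import BashOperator

# in dag1
clean = BashOperator(
    task_id='clean',
    bash_command='echo "doing the conflicting work"',
    pool='external_service_pool',  # shared pool with a single slot, so only one of the two tasks runs at a time
    priority_weight=10,            # higher priority, so dag1's task gets the slot first
    dag=dag1,
)

# in dag2, the conflicting task uses the same pool with the default priority
conflicting_task = BashOperator(
    task_id='conflicting_task',
    bash_command='echo "doing the conflicting work"',
    pool='external_service_pool',
    dag=dag2,
)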
Hossein's approach is the way people usually go. However, if you want to get info about any dag run data, you can use Airflow's own functionality to get that info. The following approach is good when you do not want (or are not allowed) to modify the other dag:
from airflow.models.dagrun import DagRun
from airflow.utils.state import DagRunState

dag_runs = DagRun.find(dag_id='the_dag_id_you_want_to_check')
last_run = dag_runs[-1]
if last_run.state == DagRunState.SUCCESS:
    print('the dag run was successful!')
else:
    print('the dag state is -->: ', last_run.state)
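If you want to plug that check into the wait_for_dag2 slot from the question, one option (a sketch assuming Airflow 2, like the snippet above, and reusing the dag, start, clean and end objects from the question) is a ShortCircuitOperator that skips clean and end whenever dag2 still has a run in the RUNNING state:

from airflow.models.dagrun import DagRun
from airflow.operators.python import ShortCircuitOperator
from airflow.utils.state import DagRunState

def dag2_not_running():
    # continue only if dag2 has no active run right now
    running = DagRun.find(dag_id='dag2', state=DagRunState.RUNNING)
    return len(running) == 0

wait_for_dag2 = ShortCircuitOperator(
    task_id='wait_for_dag2',
    python_callable=dag2_not_running,
    dag=dag,
)

start >> wait_for_dag2 >> clean >> end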

Can I create an ad-hoc parameterized DAG from a scheduler-job DAG running once a minute?

I'm researching Airflow to see if it is a viable fit for my use case, and it is not clear from the documentation whether it fits this scenario. I'd like to schedule a job workflow per customer based on some very dynamic criteria that doesn't fall into the standard "CRON" loop of running every X minutes, etc. (since running the jobs together has some impact).
Customer DB:
Customer_id | desired "CRON" interval (best case)
1           | 30 minutes
2           | 2 hours
...         | ...        <- thousands of these potential jobs
Every minute I'd like to query the state of the current work in the system as well as real-world "sensor" data that changes often (such as load on some DBs, quotas on other resources, ad-hoc priorities, boosting, etc.).
Based on that, I'd need to create a "DAG" (pipeline) of work for each customer deemed worthy of running at this time (since perhaps we want to delay work relative to the "CRON" schedule given some complicated analysis).
For instance:
Every minute run this test:
for customer in DB:
    if shouldRunDagForCustomer(customer):
        create a DAG with states ..... and run it
"Sleep for a minute"

def shouldRunDagForCustomer(...):
    currentStateOfJobs = ....
    situationalAwarenessOfWorld = ....  # check all sorts of interesting metrics
    if some criteria is met for this customer: return True  # run the dag for this customer
    else: return False
From the material I've read, it seems that DAGs are given a specified schedule and are static in their structure. It also seems that DAGs run on all their inputs, rather than being generated per input.
It also wasn't clear to me how the scheduling works if a given DAG hasn't completed but its scheduled time has arrived. Would I potentially have multiple runs of the same pipeline for the same input (bad)? Since I have pipelines whose time to complete varies depending on the customer, the dynamic load of the system, etc., I'd like to manage the scheduling aspect and the generation of the "DAG" myself.
This is possible through a "controller" DAG that is scheduled every minute, which then triggers runs of a "target" DAG when the desired conditions are met. Airflow has a good example of this; see example_trigger_controller_dag and example_trigger_target_dag. The controller uses the TriggerDagRunOperator, an operator that kicks off a run of any desired DAG.
trigger = TriggerDagRunOperator(
    task_id="test_trigger_dagrun",
    trigger_dag_id="example_trigger_target_dag",  # Ensure this equals the dag_id of the DAG to trigger
    conf={"message": "Hello World"},
    dag=dag,
)
Then the target DAG doesn't need to do anything special, except that it should have schedule_interval=None. Note that on trigger you can populate a conf dictionary that the target can later consume, in case you want to customize each triggered run.
bash_task = BashOperator(
    task_id="bash_task",
    bash_command='echo "Here is the message: \'{{ dag_run.conf["message"] if dag_run else "" }}\'"',
    dag=dag,
)
Back to your case: your scenario is similar, but where you differ from the example is that you won't kick off a DAG every time, and you have multiple target DAGs that you could kick off. This is where the ShortCircuitOperator comes into play. It is basically a task that runs a Python method you specify, which just needs to return True or False. If it returns True, it continues on to the next downstream task as usual; otherwise it "short-circuits" and skips the downstream tasks. It's worth giving example_short_circuit_operator a run if you want to see this demonstrated. Combining that with dynamic creation of tasks in a for-loop, I think you'll get something like this in your controller DAG:
dag = DAG(
    dag_id='controller_customer_pipeline',
    default_args=args,
    schedule_interval='* * * * *',
)

def shouldRunDagForCustomer(customer, ...):
    currentStateOfJobs = ....
    situationalAwarenessOfWorld = ....  # check all sorts of interesting metrics
    if some criteria is met for this customer: return True  # run the dag for this customer
    else: return False

for customer in DB:
    check_run_conditions = ShortCircuitOperator(
        task_id='check_run_conditions_' + customer,
        python_callable=shouldRunDagForCustomer,
        op_args=[customer],
        op_kwargs={...},  # extra params if needed
        dag=dag,
    )
    trigger_run = TriggerDagRunOperator(
        task_id='trigger_run_' + customer,
        trigger_dag_id='target_customer_pipeline_' + customer,  # standardize on DAG ids for per-customer DAGs
        conf={"foo": "bar"},  # pass extra info on to the triggered DAG if needed
        dag=dag,
    )
    check_run_conditions >> trigger_run
Then your target DAG is just the per-customer work.
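A minimal sketch of one such target DAG (names are illustrative; args and the customer value are reused from above). The important parts are schedule_interval=None, so it only runs when the controller triggers it, and reading dag_run.conf if you passed extra info:

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

target_dag = DAG(
    dag_id='target_customer_pipeline_' + customer,
    default_args=args,
    schedule_interval=None,  # never scheduled on its own, only runs when triggered by the controller
)

do_customer_work = BashOperator(
    task_id='do_customer_work',
    bash_command='echo "customer conf: \'{{ dag_run.conf["foo"] if dag_run else "" }}\'"',
    dag=target_dag,
)

# when generating these in a loop, register each DAG in globals() so the scheduler picks it up
globals()[target_dag.dag_id] = target_dag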
This is probably not the only way you could implement something like this, but basically, yes, I think it's viable to implement in Airflow.

When does an airflow dag definition get evaluated?

Suppose I have an airflow dag file that creates a graph like so...
def get_current_info(filename):
    current_info = {}
    <fill in info in current_info relevant for today's date for given file>
    return current_info

files = [
    get_current_info("file_001"),
    get_current_info("file_002"),
    ....
]

for f in files:
    <some BashOperator bo1 using f's current info dict>
    <some BashOperator bo2 using f's current info dict>
    ....
    bo1 >> bo2
    ....
Since the values in the current_info dict that is used to define the dag change periodically (here, daily), I would like to know by what process / on what schedule the dag definition gets updated. (I print the current_info values on each run and the values appear to be updating, but I'm curious as to how and when exactly this happens.)
When does an airflow dag definition get evaluated? Is this referenced anywhere in the docs?
The DAGs are evaluated in every run of the scheduler.
This article describes how the scheduler works and at what stage the DAG files are picked up for evaluation.
After some discussion on the airflow mailing list, it turns out that airflow builds the dag for each task when it is run, so each task includes the overhead of building the dag again (which in my case was very significant).
See more details on this here: https://stackoverflow.com/a/59995882/8236733

Manual DAG run set individual task state

I have a DAG without a schedule (it is run manually as needed). It has many tasks. Sometimes I want to 'skip' some initial tasks by changing their task state to SUCCESS manually. Changing the task state of a manually executed DAG fails, seemingly because of a bug in parsing the execution_date.
Is there another way of individually setting task states for a manually executed DAG?
Example run below. The execution date of the Task is 01-13T17:27:13.130427, and I believe the fractional seconds are not being parsed correctly.
Traceback
Traceback (most recent call last):
File "/opt/conda/envs/jumpman_prod/lib/python3.6/site-packages/airflow/www/views.py", line 2372, in set_task_instance_state
execution_date = datetime.strptime(execution_date, '%Y-%m-%d %H:%M:%S')
File "/opt/conda/envs/jumpman_prod/lib/python3.6/_strptime.py", line 565, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "/opt/conda/envs/jumpman_prod/lib/python3.6/_strptime.py", line 365, in _strptime
data_string[found.end():])
ValueError: unconverted data remains: ..130427
It doesn't work from the Task Instances page, but you can do it from another page:
- open the DAG graph view
- select the needed Run (screen 1) and click Go
- select the needed task
- in the popup window click Mark Success (screen 2)
- then confirm.
PS: this relates to Airflow version 1.9.
Screen 1
Screen 2
What you may want to do to accomplish this is to use branching, which, as the name suggests, allows you to follow different execution paths according to some conditions, just like an if in any programming language.
You can use the BranchPythonOperator (documented here) to attain this goal: the idea is that this operator is configured with a python_callable, a function that returns the task_id to execute next (which should, of course, be a task directly downstream of the BranchPythonOperator itself).
Using branching will set the skipped tasks to the proper state automatically, as mentioned in the documentation:
All other “branches” or directly downstream tasks are marked with a state of skipped so that these paths can’t move forward. The skipped states are propagated downstream to allow for the DAG state to fill up and the DAG run’s state to be inferred.
The resulting DAG would look something like the following:
[branching example DAG graph from the Airflow docs] (source: apache.org)
Branching is documented here, in the official Apache Airflow documentation.
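A short sketch of how that could look for the "skip some initial tasks" case (the dag object, task names and the skip_initial flag are made up for illustration; the operator imports are the Airflow 1.x paths):

from airflow.operators.dummy_operator import DummyOperator
from airflow.operators.python_operator import BranchPythonOperator

def choose_path(**context):
    # return the task_id of the branch to follow, here driven by a flag
    # passed in the trigger conf when the DAG is started manually
    conf = context['dag_run'].conf or {}
    if conf.get('skip_initial'):
        return 'later_task'
    return 'initial_task'

choose_start = BranchPythonOperator(
    task_id='choose_start',
    python_callable=choose_path,
    provide_context=True,  # needed on Airflow 1.x so the callable receives the context
    dag=dag,
)

initial_task = DummyOperator(task_id='initial_task', dag=dag)
# trigger_rule='none_failed' (Airflow 1.10.2+) lets later_task run even when initial_task was skipped by the branch
later_task = DummyOperator(task_id='later_task', trigger_rule='none_failed', dag=dag)

choose_start >> initial_task >> later_task
choose_start >> later_task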

Airflow depends_on_past for whole DAG

Is there a way in Airflow of applying depends_on_past to an entire DagRun, not just to a Task?
I have a daily DAG, and the Friday DagRun errored on the 4th task; however, the Saturday and Sunday DagRuns still ran as scheduled. Using depends_on_past=True would have paused those DagRuns at the same 4th task, but the first 3 tasks would still have run.
I can see in the DagRun DB table there is a state column that contains failed for the Friday DagRun. What I want is a way of configuring a DagRun to not start at all if the previous DagRun failed, rather than starting and running until it hits a Task that previously failed.
Does anyone know if this is possible?
On your first task, set depends_on_past=True and wait_for_downstream=True; the combination will result in the current dag-run running only if the last run succeeded.
This is because setting this on the first task makes the current dag-run wait for the previous run of that task (depends_on_past) and of all tasks (wait_for_downstream) to succeed.
This question is a bit old, but it turns out as a first Google search result and the highest rated answer is clearly misleading (and it has made me struggle a bit), so it definitely demands a proper answer. Although the second rated answer should work, there's a cleaner way to do this, and I personally find using xcom ugly.
Airflow has a special operator class designed for monitoring the status of tasks from other dag runs or other dags as a whole. So what we need to do is add a task preceding all the tasks in our dag, checking whether the previous run has succeeded.
from airflow.sensors.external_task_sensor import ExternalTaskSensor

previous_dag_run_sensor = ExternalTaskSensor(
    task_id='previous_dag_run_sensor',
    dag=our_dag,
    external_dag_id=our_dag.dag_id,
    execution_delta=our_dag.schedule_interval,  # note: execution_delta expects a timedelta, so this assumes the DAG's schedule_interval is defined as one
)

previous_dag_run_sensor.set_downstream(vertices_of_indegree_zero_from_our_dag)
One possible solution would be to use xcom:
- Add 2 PythonOperators, start_task and end_task, to the DAG.
- Make all other tasks depend on start_task.
- Make end_task depend on all other tasks (set_upstream).
- end_task will always push a variable last_success = context['execution_date'] to xcom (xcom_push). (This requires provide_context=True in the PythonOperators.)
- start_task will always check xcom (xcom_pull) to see whether there exists a last_success variable with a value equal to the previous DagRun's execution_date, or to the DAG's start_date (to let the process start).
Example use of xcom:
https://github.com/apache/incubator-airflow/blob/master/airflow/example_dags/example_xcom.py
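A rough sketch of those two tasks; the first-run handling and the exact comparison will likely need adapting to your schedule (the dag object is assumed to be defined already):

from airflow.exceptions import AirflowException
from airflow.operators.python_operator import PythonOperator

def push_success(**context):
    # end_task: record this run's execution_date as the last successful run
    context['ti'].xcom_push(key='last_success', value=context['execution_date'].isoformat())

def check_previous(**context):
    # start_task: fetch the most recent last_success pushed by end_task in earlier runs
    last_success = context['ti'].xcom_pull(
        task_ids='end_task', key='last_success', include_prior_dates=True)
    if last_success is None:
        return  # nothing pushed yet: treat this as the first run and let it through
    prev = context.get('prev_execution_date')
    if prev is not None and last_success != prev.isoformat():
        raise AirflowException('previous DagRun did not reach end_task')

start_task = PythonOperator(task_id='start_task', python_callable=check_previous,
                            provide_context=True, dag=dag)
end_task = PythonOperator(task_id='end_task', python_callable=push_success,
                          provide_context=True, dag=dag)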
Here is a solution that addresses Marc Lamberti's concern, namely that wait_for_downstream is not "recursive".
The solution entails "embedding" your original DAG in between two dummy tasks, a start_task and an end_task.
Such that:
- start_task precedes all your original initial tasks (i.e., no other task in your DAG can start until start_task has completed).
- end_task follows all your original ending tasks (i.e., all branches in your DAG converge on that dummy end_task).
- start_task also directly precedes end_task.
These conditions are provided by the following code:
start_task >> [_all_your_initial_tasks_here_]
[_all_your_ending_tasks_here] >> end_task
start_task >> end_task
Additionally, start_task needs to have depends_on_past=True and wait_for_downstream=True.
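Putting that together, a sketch of the two dummy tasks (the import path is the Airflow 1.x one):

from airflow.operators.dummy_operator import DummyOperator

start_task = DummyOperator(
    task_id='start_task',
    depends_on_past=True,       # the previous run's start_task must have succeeded
    wait_for_downstream=True,   # and so must the tasks immediately downstream of it, which include end_task
    dag=dag,
)
end_task = DummyOperator(task_id='end_task', dag=dag)

Because start_task directly precedes end_task, wait_for_downstream on start_task means the previous run's end_task, which only succeeds once the whole previous run has finished, must have succeeded before the current run can start, without needing wait_for_downstream to be recursive.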
