on_failure_callback not working in airflow DAG()

I want to call two different functions on DAG failure and success. For that I want to use on_failure_callback and on_success_callback in the DAG() constructor.
As per my requirement, these callbacks should be at the DAG level, not the task level. That's why I am passing them inside the DAG() call while declaring the dag variable.
But these callback functions are never called. If I attach the same function at the task level, it works fine.
This is my code:
def success():
    print("successful")

dag = DAG(
    dag_id='callback_test',
    schedule_interval=None,
    default_args=default_args,
    on_success_callback=success,
)

def fun1(**kwargs):
    print("function called")

task1 = PythonOperator(
    task_id='task1',
    provide_context=True,
    python_callable=fun1,
    dag=dag,
)

task1

However, I think it should also work at DAG level, according to this:
https://airflow.apache.org/docs/apache-airflow/1.10.10/_api/airflow/models/dag/index.html?highlight=on_failure_callback
right?
I am not able to make it work, though :(

If you want to pass arguments, you can do it by creating a higher-order function, just like writing a decorator: an outer function that returns the actual callback function. For example:
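A minimal sketch of that pattern, reusing the question's DAG; the make_callback name and its message parameter are illustrative, not from the original post:

def make_callback(message):
    # The returned inner function is what Airflow actually invokes;
    # it receives the task context dict as its single argument.
    def callback(context):
        print(f"{message} (run_id: {context['dag_run'].run_id})")
    return callback

dag = DAG(
    dag_id='callback_test',
    schedule_interval=None,
    default_args=default_args,
    on_success_callback=make_callback('DAG succeeded'),
    on_failure_callback=make_callback('DAG failed'),
)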

Related

Create dynamic workflows in Airflow with XCOM value

Now, I create multiple tasks using a variable like this and it works fine.
with DAG(....) as dag:
    body = Variable.get("config_table", deserialize_json=True)
    for i in range(len(body.keys())):
        simple_task = Operator(
            task_id='task_' + str(i),
            .....
But I need to use an XCom value for some reason instead of a Variable.
Is it possible to dynamically create tasks from an XCom pull value?
I tried to set the value like this, and it's not working:
body = "{{ ti.xcom_pull(key='config_table', task_ids='get_config_table') }}"
It's possible to dynamically create tasks from XComs generated by a previous task; there are more extensive discussions on this topic, for example in this question. One of the suggested approaches follows this structure; here is a working example I made:
sample_file.json:
{
    "cities": [ "London", "Paris", "BA", "NY" ]
}
Get your data from an API, a file, or any other source, and push it as an XCom.
def _process_obtained_data(ti):
    list_of_cities = ti.xcom_pull(task_ids='get_data')
    Variable.set(key='list_of_cities',
                 value=list_of_cities['cities'], serialize_json=True)

def _print_greeting(city_name, greeting):
    # helper used by the dynamic tasks declared below
    print(f'{greeting} from {city_name}!')

def _read_file():
    with open('dags/sample_file.json') as f:
        data = json.load(f)
        # push to XCom using return
        return data
with DAG('dynamic_tasks_example', schedule_interval='@once',
         start_date=days_ago(2),
         catchup=False) as dag:

    get_data = PythonOperator(
        task_id='get_data',
        python_callable=_read_file)
Add a second task, which will pull from XCom and set a Variable with the data you will iterate over later on.
    preparation_task = PythonOperator(
        task_id='preparation_task',
        python_callable=_process_obtained_data)
Of course, if you want, you can merge both tasks into one. I prefer not to, because usually I take only a subset of the fetched data to create the Variable.
Read from that Variable and later iterate over it. It's critical to define default_var.
    end = DummyOperator(
        task_id='end',
        trigger_rule='none_failed')

    # Top-level code within DAG block
    iterable_list = Variable.get('list_of_cities',
                                 default_var=['default_city'],
                                 deserialize_json=True)
Declare the dynamic tasks and their dependencies within a loop. Make the task_ids unique. TaskGroup is optional; it helps you keep the UI organized.
    with TaskGroup('dynamic_tasks_group',
                   prefix_group_id=False,
                   ) as dynamic_tasks_group:
        if iterable_list:
            for index, city in enumerate(iterable_list):
                say_hello = PythonOperator(
                    task_id=f'say_hello_from_{city}',
                    python_callable=_print_greeting,
                    op_kwargs={'city_name': city, 'greeting': 'Hello'}
                )
                say_goodbye = PythonOperator(
                    task_id=f'say_goodbye_from_{city}',
                    python_callable=_print_greeting,
                    op_kwargs={'city_name': city, 'greeting': 'Goodbye'}
                )

                # TaskGroup level dependencies
                say_hello >> say_goodbye

    # DAG level dependencies
    get_data >> preparation_task >> dynamic_tasks_group >> end
Imports:
import json
from airflow import DAG
from airflow.utils.dates import days_ago
from airflow.models import Variable
from airflow.operators.python_operator import PythonOperator
from airflow.operators.dummy import DummyOperator
from airflow.utils.task_group import TaskGroup
Things to keep in mind:
If you have simultaneous dag_runs of this same DAG, all of them will use the same Variable, so you may need to make it 'unique' by differentiating their names.
You must set a default value while reading the Variable; otherwise, the first execution may not be processable by the Scheduler, since the Variable does not exist yet on the first parse.
The Airflow Graph View UI may not refresh the changes immediately. This happens especially on the first run after adding or removing items from the iterable that the dynamic task generation is based on.
If you need to read many values, remember that it's recommended to store them in one single JSON Variable to avoid constantly creating connections to the metadata database (example in this article).
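A sketch of that last point, using a single JSON Variable; the variable name and fields are made up for illustration:

# Hypothetical: one JSON Variable holding several settings, fetched with a
# single metadata-DB query instead of one query per value.
config = Variable.get('my_dag_config',
                      default_var={'cities': [], 'batch_size': 10},
                      deserialize_json=True)
cities = config['cities']
batch_size = config['batch_size']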
Good luck!
Edit:
Another important point to take into consideration:
With this approach, the call to the Variable.get() method is top-level code, so it is read by the scheduler every 30 seconds (the default of the min_file_process_interval setting). This means that a connection to the metadata DB will happen each time.
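For reference, that interval is controlled in the scheduler section of airflow.cfg; the 30 seconds shown here is the default:

[scheduler]
min_file_process_interval = 30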
Edit:
Added an if clause to handle the empty iterable_list case.
This is not possible, and in general dynamic tasks are not recommended:
The way the Airflow scheduler works is by reading the dag file, loading the tasks into memory, and then checking which dags and which tasks it needs to schedule. XComs, on the other hand, are runtime values related to a specific dag run, so the scheduler cannot rely on XCom values.
When using dynamic tasks you're making debugging much harder for yourself, as the values you use for creating the DAG can change and you'll lose access to logs without even understanding why.
What you can do is use a branch operator, so those tasks always exist, and simply skip them based on the XCom value.
For example:
def branch_func(**context):
    # `key` is the XCom key pushed by the upstream task
    return f"task_{context['ti'].xcom_pull(key=key)}"

branch = BranchPythonOperator(
    task_id="branch",
    python_callable=branch_func
)

tasks = [BaseOperator(task_id=f"task_{i}") for i in range(3)]  # placeholder; use a concrete operator

branch >> tasks
In some cases it's also not good to use this method (for example, when there are 100 possible tasks); in those cases I'd recommend writing your own operator or using a single PythonOperator.

Is there a way for me to know when a DAG finished running in airflow?

I'm running a script that checks the status of my database before a DAG runs and compares it to the status after the DAG has finished running.
def pre_dag_db():
    pass

def run_dag():
    pass

def post_dag_db():
    pass
Is there a way for me to know when the DAG has finished running, so that my script knows when to run post_dag_db? The idea is that post_dag_db runs after the DAG has finished, because the DAG manipulates the db.
The easiest way to do this would be to just run the script as the last task in your dag, for example using a BashOperator:
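A minimal sketch of that idea; the task name and script path are hypothetical:

from airflow.operators.bash import BashOperator

# Hypothetical final task: runs the post-check script once everything
# upstream in the DAG has finished. Wire it in as the very last task.
post_db_check = BashOperator(
    task_id='post_dag_db',
    bash_command='python /path/to/post_dag_db.py',  # illustrative path
    dag=dag,
)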
Another option would be to trigger a separate dag (TriggerDagRunOperator) and implement there a dag that calls your script.
If you really cannot call your script from Airflow itself, you might want to check the REST API https://airflow.apache.org/docs/stable/api.html and use it to retrieve information about the dag_run. But this seems overly complicated to me.
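If you do go the REST route, here is a sketch against the Airflow 2 stable REST API; the host, credentials, dag_id and run_id are all assumptions for illustration:

import requests

# Poll the state of a specific dag run via the stable REST API.
resp = requests.get(
    'http://localhost:8080/api/v1/dags/my_dag/dagRuns/my_run_id',
    auth=('airflow', 'airflow'),
)
resp.raise_for_status()
print(resp.json()['state'])  # e.g. 'queued', 'running', 'success', 'failed'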
A quick and easy way is to add one task to the DAG that runs as its last task; this will work like magic for you.
You can use any operator (PythonOperator, BashOperator, etc.).
I think you can use the following code:
dag = get_dag(args)
dr = DagRun.find(dag.dag_id, execution_date=args.execution_date)
print(dr[0].state if len(dr) > 0 else None)
This code is taken from the Airflow CLI.
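A more self-contained sketch of the same idea, runnable anywhere the Airflow metadata DB is reachable; the dag_id is illustrative:

from airflow.models import DagRun

# Look up the runs of a DAG directly via the ORM and print their states.
runs = DagRun.find(dag_id='my_dag')
for run in runs:
    print(run.run_id, run.state)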
Make a custom class that inherits from DAG and whose dependencies are the same as your dag's.
Something like this (custom_dag.py):
from typing import List

from airflow.models.baseoperator import BaseOperator
from airflow.models.dag import DAG

class PreAndPostDAG(DAG):
    @property
    def tasks(self) -> List[BaseOperator]:
        return [self.pre_graph] + list(self.task_dict.values()) + [self.post_graph]

    @property
    def pre_graph(self):
        # whatever crazy things you want to do here, before the DAG starts
        pass

    @property
    def post_graph(self):
        # whatever crazy things you want to do here, AFTER the DAG finishes
        pass
That's the easiest I can think of; then you just import it when defining your dags:
from custom_dag import PreAndPostDAG

with PreAndPostDAG(
    'LS',
    default_args=default_args,
    description='A simple tutorial DAG',
    schedule_interval=timedelta(days=1),
    start_date=days_ago(2),
    tags=['example'],
) as dag:
    # t1 is an example of a task created by instantiating an operator
    t1 = BashOperator(
        task_id='list',
        bash_command='ls',
    )
You get the rest. Hope that helps!

Airflow Access Variable From Previous Python Operator

I'm new to Airflow and I'm currently building a DAG that will execute a PythonOperator, a BashOperator, and then another PythonOperator structured like this:
def authenticate_user(**kwargs):
    ...
    list_prev = [...]

AUTHENTICATE_USER = PythonOperator(
    task_id='AUTHENTICATE_USER',
    python_callable=authenticate_user,
    provide_context=True,
    dag=dag)

CHANGE_ROLE = BashOperator(
    task_id='CHANGE_ROLE',
    bash_command='...',
    dag=dag)

def calculations(**kwargs):
    list_prev
    ...

CALCULATIONS = PythonOperator(
    task_id='CALCULATIONS',
    python_callable=calculations,
    provide_context=True,
    dag=dag)
My issue is, I create a list of values in the first PythonOperator (AUTHENTICATE_USER) that I would like to use later in my second PythonOperator (CALCULATIONS), after executing the BashOperator (CHANGE_ROLE). Is there a way for me to carry that list over into the other PythonOperators in my current DAG?
Thank you
I can think of 3 possible ways (to avoid confusion with Airflow's concept of a Variable, I'll call the data that you want to share between tasks "values"):
Airflow XComs: Push your values from the AUTHENTICATE_USER task and pull them in your CALCULATIONS task. You can either publish and access each value separately or wrap them all into a Python dict or list (better, as it reduces db reads and writes); see the sketch after this list.
External system: Persist your values from the 1st task into some external system such as a database, files, or S3 objects, and access them from downstream tasks when needed.
Airflow Variables: This is a specific case of point (2) above (as Variables are stored in Airflow's backend meta-db). You can programmatically create, modify or delete Variables by exploiting the underlying SQLAlchemy model. See this for hints.
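A minimal sketch of option (1), wrapping everything in a single list; the values are illustrative, and the task ids follow the question:

def authenticate_user(**kwargs):
    list_prev = ['value_1', 'value_2']  # illustrative values
    # push the whole list as one XCom to keep db writes to a minimum
    kwargs['ti'].xcom_push(key='list_prev', value=list_prev)

def calculations(**kwargs):
    # pull the list pushed by the AUTHENTICATE_USER task
    list_prev = kwargs['ti'].xcom_pull(task_ids='AUTHENTICATE_USER', key='list_prev')
    print(list_prev)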

Apache Airflow - How to retrieve dag_run data outside an operator in a flow triggered with TriggerDagRunOperator

I set up two DAGs; let's call the first one orchestrator and the second one worker. The orchestrator's job is to retrieve a list from an API and, for each element in this list, trigger the worker DAG with some parameters.
The reason why I separated the two workflows is that I want to be able to replay only the "worker" workflows that fail (if one fails, I don't want to replay all the worker instances).
I was able to make things work, but now I see how hard it is to monitor, as my task_ids are the same for all runs, so I decided to have dynamic task_ids based on a value retrieved from the API by the "orchestrator" workflow.
However, I am not able to retrieve the value from the dag_run object outside an operator. Basically, I would like this to work :
with models.DAG('specific_workflow', schedule_interval=None, default_args=default_dag_args) as dag:
    name = context['dag_run'].name
    hello_world = BashOperator(task_id='hello_{}'.format(name), bash_command="echo Hello {{ dag_run.conf.name }}", dag=dag)
    bye = BashOperator(task_id='bye_{}'.format(name), bash_command="echo Goodbye {{ dag_run.conf.name }}", dag=dag)
    hello_world >> bye
But I am not able to define this "context" object, although I am able to access it from within an operator (PythonOperator and BashOperator, for instance).
Is it possible to retrieve the dag_run object outside an operator?
Yup, it is possible.
What I tried, and what worked for me, is shown in the following code block, where I try to demonstrate all the possible ways of using the passed configuration directly in different operator arguments:
pyspark_task = DataprocSubmitJobOperator(
    task_id="task_0001",
    job=PYSPARK_JOB,
    location=f"{{{{dag_run.conf.get('dataproc_region','{config_data['cluster_location']}')}}}}",
    project_id="{{dag_run.conf['dataproc_project_id']}}",
    gcp_conn_id="{{dag_run.conf.gcp_conn_id}}"
)
So you can use it either like
"{{dag_run.conf.field_name}}" or "{{dag_run.conf['field_name']}}"
Or, if you want to use a default value in case the configuration field is optional:
f"{{{{dag_run.conf.get('field_name', '{local_variable['field_name_0002']}')}}}}"
I don't think it's easily possible currently. For example, as part of the worker run process, the DAG is retrieved without any TaskInstance context provided besides where to find the DAG: https://github.com/apache/incubator-airflow/blob/f18e2550543e455c9701af0995bc393ee6a97b47/airflow/bin/cli.py#L353
The context is injected later: https://github.com/apache/incubator-airflow/blob/c5f1c6a31b20bbb80b4020b753e88cc283aaf197/airflow/models.py#L1479
The run_id of the DAG would be a good place to store this information.
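For the orchestrator/worker setup itself, a sketch of passing the per-element value through the trigger's conf, which the worker can then read at runtime via templating (even though it isn't available at parse time); the names are illustrative:

from airflow.operators.trigger_dagrun import TriggerDagRunOperator

# The orchestrator triggers the worker once per element, passing the
# element's name through conf; the worker reads it as {{ dag_run.conf.name }}.
trigger_worker = TriggerDagRunOperator(
    task_id='trigger_worker',
    trigger_dag_id='specific_workflow',
    conf={'name': 'some_element_name'},  # illustrative value
)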

Why is an airflow downstream task done even if branch tasks fail and the trigger rule is one_success?

I worked my way through an example script using BranchPythonOperator and I noticed the following:
The final task gets queued before the follow_branch_x task is done. I found this strange, because before queueing the final task, it should know whether its upstream task is a success (the TriggerRule is ONE_SUCCESS).
To test this, I replaced 3 of the 4 follow_branch_ tasks with tasks that would fail, and noticed that regardless of the follow_x branch tasks' states, the downstream task gets done.
Could anyone explain this behaviour to me? It does not feel intuitive, as failed tasks normally prevent downstream tasks from being executed.
Code defining the join task:
join = DummyOperator(
    task_id="join",
    trigger_rule=TriggerRule.ONE_SUCCESS,
    dag=dag
)
Try setting the trigger_rule to all_done on your branching operator, like this:
branch_task = BranchPythonOperator(
    task_id='branching',
    python_callable=decide_which_path,  # pass the callable itself, don't call it
    trigger_rule="all_done",
    dag=dag)
No idea if this will work but it seems to have helped some people out before:
How does Airflow's BranchPythonOperator work?
Definition from the Airflow API:
one_success: fires as soon as at least one parent succeeds; it does not wait for all parents to be done
If you continue to use one_success, you need all four of your follow_branch_x tasks to fail for join not to be run.
