Fusing operators together - airflow

I'm still in the process of deploying Airflow and I've already felt the need to merge operators together. The most common use-case would be coupling an operator with its corresponding sensor. For instance, one might want to chain together the EmrAddStepsOperator and EmrStepSensor.
I'm creating my DAGs programmatically, and the biggest one of them contains 150+ (identical) branches, each performing the same series of operations on different bits of data (tables). Therefore, clubbing together tasks that make up a single logical step in my DAG would be of great help.
Here are 2 motivating examples from my project.
1. Deleting data from S3 path and then writing new data
This step comprises 2 operators
DeleteS3PathOperator: Extends from BaseOperator & uses S3Hook
HadoopDistcpOperator: Extends from SSHOperator
2. Conditionally performing MSCK REPAIR on Hive table
This step contains 4 operators
BranchPythonOperator: Checks whether Hive table is partitioned
MsckRepairOperator: Extends from HiveOperator and performs MSCK REPAIR on (partitioned) table
Dummy(Branch)Operator: Makes up alternate branching path to MsckRepairOperator (for non-partitioned tables)
Dummy(Join)Operator: Makes up the join step for both branches
Using operators in isolation certainly offers smaller modules and more fine-grained logging / debugging, but in large DAGs, reducing the clutter might be desirable. From my current understanding, there are 2 ways to chain operators together:
1. Hooks: Write the actual processing logic in hooks and then use as many hooks as you want within a single operator (certainly the better way, in my opinion)
2. SubDagOperator: A risky and controversial way of doing things; additionally, the naming convention for SubDagOperator makes me frown
My questions are
Should operators be composed at all or is it better to have discrete steps?
Any pitfalls, improvements in above approaches?
Any other ways to combine operators together?
In the taxonomy of Airflow, is the primary motive of Hooks the same as above, or do they serve some other purposes too?
UPDATE-1
3. Multiple Inheritance
While this is a Python feature rather than something Airflow-specific, it's worthwhile to point out that multiple inheritance can come in handy for combining the functionalities of operators. QuboleCheckOperator, for instance, is already written using it. However, in the past I tried this to fuse EmrCreateJobFlowOperator and EmrJobFlowSensor, but at the time I ran into issues with the @apply_defaults decorator and abandoned the idea.

Should operators be composed at all or is it better to have discrete steps?
I have combined various hooks to create a single operator based on my needs. A simple example: I clubbed the GCS hook's delete, copy, list and get_size methods together to create a single operator called GcsDataValidationOperator. A rule of thumb would be to keep it idempotent, i.e. running it multiple times should produce the same result.
Any pitfalls, improvements in above approaches?
The only pitfall is maintainability: sometimes when the hooks change in the master branch, you will need to update all your operators manually if there are any breaking changes.
Any other ways to combine operators together?
You can use PythonOperator and call the built-in hooks' .execute method, but it would still mean a lot of detail in the DAG file. Hence, I would still go for the new-operator approach.
In the taxonomy of Airflow, is the primary motive of Hooks the same as above, or do they serve some other purposes too?
Hooks are just interfaces to external platforms and databases like Hive, GCS, etc. and form the building blocks for operators. This allows the creation of new operators. It also means you can customize templated fields, add a Slack notification on each granular step inside your new operator, and have your own logging details.
FWIW: I am a PMC member and a contributor of the Airflow project.
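To make the hooks-in-one-operator idea concrete, here is a minimal sketch of such an operator. The class name mirrors the GcsDataValidationOperator mentioned above, but the specific checks, parameter names and Airflow 1.10-style import paths are illustrative assumptions rather than the answerer's actual code:

from airflow.contrib.hooks.gcs_hook import GoogleCloudStorageHook
from airflow.models import BaseOperator
from airflow.utils.decorators import apply_defaults


class GcsDataValidationOperator(BaseOperator):
    """Compose several GCS hook calls (list, get_size, delete) in one logical step."""
    template_fields = ('bucket', 'prefix')

    @apply_defaults
    def __init__(self, bucket, prefix, gcp_conn_id='google_cloud_default', *args, **kwargs):
        super(GcsDataValidationOperator, self).__init__(*args, **kwargs)
        self.bucket = bucket
        self.prefix = prefix
        self.gcp_conn_id = gcp_conn_id

    def execute(self, context):
        hook = GoogleCloudStorageHook(google_cloud_storage_conn_id=self.gcp_conn_id)
        objects = hook.list(self.bucket, prefix=self.prefix)
        for obj in objects:
            size = hook.get_size(self.bucket, obj)
            self.log.info("Found gs://%s/%s (%s bytes)", self.bucket, obj, size)
            if size == 0:
                # drop empty files; re-running produces the same end state (idempotent)
                hook.delete(self.bucket, obj)
        return objects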

This is not a generic answer to the question (although I think the approach is generic enough that it can be extended to other operators and sensors). It just shows how EmrCreateJobFlowOperator and EmrJobFlowSensor can be fused together using Python's multiple inheritance.
Code for Fused Operator and Sensor
# Airflow 1.x-era import paths (matching the original answer's use of apply_defaults)
from airflow.contrib.operators.emr_create_job_flow_operator import EmrCreateJobFlowOperator
from airflow.contrib.sensors.emr_job_flow_sensor import EmrJobFlowSensor
from airflow.utils.decorators import apply_defaults


class EmrStartCluster(EmrCreateJobFlowOperator, EmrJobFlowSensor):
    ui_color = "#c70039"
    NON_TERMINAL_STATES = ["STARTING", "BOOTSTRAPPING"]

    @apply_defaults
    def __init__(self, *args, **kwargs):
        kwargs["job_flow_id"] = None  # EmrJobFlowSensor requires `job_flow_id` to be initialized
        super(EmrStartCluster, self).__init__(*args, **kwargs)

    def execute(self, context):
        # create the job flow first, then run the sensor's poll loop on it
        self.job_flow_id = super(EmrStartCluster, self).execute(context)
        super(EmrJobFlowSensor, self).execute(context)
        return self.job_flow_id
The fused operator can be invoked like:
JOB_FLOW_OVERRIDES = {}  # Job flow config goes here
dag = DAG()              # DAG invocation goes here

cluster_creator = EmrStartCluster(
    dag=dag,
    task_id='start_cluster',
    job_flow_overrides=JOB_FLOW_OVERRIDES,
    aws_conn_id='aws_default',
    emr_conn_id='emr_default',
)
I have tested it with EmrStepSensor, EmrAddStepsOperator, and EmrTerminateJobFlowOperator and so far have had no issues.


Airflow DAGs recreation in each operator

We are using Airflow 2.1.4 and running on Kubernetes.
We have separate pods for the web server and the scheduler, and we are using the Kubernetes executor.
We use a variety of operators, such as PythonOperator, KubernetesPodOperator, etc.
Our setup handles ~2K customers (businesses), and each one of them has its own DAG.
Our code looks something like:
def get_customers():
    logger.info("querying database to get all customers")
    return sql_connection.query("SELECT id, name, offset FROM customers_table")

customers = get_customers()

for id, name, offset in customers:
    dag = DAG(
        dag_id=f"{id}-{name}",
        schedule_interval=offset,
    )
    with dag:
        first = PythonOperator(..)
        second = KubernetesPodOperator(..)
        third = SimpleHttpOperator(..)

        first >> second >> third

    globals()[id] = dag
The snippet above is a simplified version of what we've got; in reality each DAG has a few dozen operators, not just three.
The problem is that for each one of the operators in each one of the DAGs we see the querying database to get all customers log - which means we query the database far more often than we want to.
The database isn't updated frequently, and updating the DAGs once or twice a day would be enough for us.
I know that the DAGs are being saved in the metadata database or something..
Is there a way to build those DAGs only once / via the scheduler, and not once per operator?
Should we change the design to support our multi-tenancy requirement? Is there a better option than that?
In our case, ~60 operators X ~2,000 customers = ~120,000 queries to the database.
Yes, this is entirely expected. The DAGs are parsed by Airflow regularly (every 30 seconds by default), so any top-level code (the code that is executed while parsing the file, rather than the "execute" methods of operators) is executed then.
The simple answer (and best practice) is: do not put any heavy operations in the top-level code of your DAGs. Specifically, do not run DB queries there. But if you want some more specific answers and possible ways to handle it, there are dedicated chapters about it in the Airflow best-practices documentation:
This is the explanation of why top-level code should be "light": https://airflow.apache.org/docs/apache-airflow/stable/best-practices.html#top-level-python-code
This one is about strategies you can use to avoid "heavy" operations in top-level code when you do dynamic DAG generation, as in your case: https://airflow.apache.org/docs/apache-airflow/stable/best-practices.html#dynamic-dag-generation
In short, there are three proposed ways:
1. using environment variables
2. generating a configuration file (for example .json) from your DB automatically (and periodically) with an external script, putting it next to your DAG, and having the DAG read the JSON file instead of running a SQL query (see the sketch below)
3. generating many DAG Python files dynamically (for example using Jinja), also automatically and periodically, with an external script
You could use either 2) or 3) to achieve your goal, I believe.
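As a hedged sketch of option 2 for your setup: an external script periodically dumps the customers table to a JSON file that sits next to the DAG file, and the top-level DAG code only reads that file at parse time. The file name customers.json and its fields are assumptions, not something prescribed by Airflow:

import json
import os
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# an external script refreshes this file from the customers table periodically
CONFIG_PATH = os.path.join(os.path.dirname(__file__), "customers.json")

with open(CONFIG_PATH) as f:
    customers = json.load(f)  # e.g. [{"id": "42", "name": "acme", "offset": "@daily"}, ...]

for customer in customers:
    dag = DAG(
        dag_id=f"{customer['id']}-{customer['name']}",
        schedule_interval=customer["offset"],
        start_date=datetime(2021, 1, 1),
    )
    with dag:
        first = PythonOperator(task_id="first", python_callable=lambda: None)
        # ... the remaining operators of the customer pipeline go here ...
    globals()[customer["id"]] = dag

Reading a small local JSON file on every parse is cheap compared to ~120,000 DB queries, and the refresh script can run on whatever once-or-twice-a-day schedule suits you.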

In Airflow, what do they want you to do instead of pass data between tasks

In the docs, they say that you should avoid passing data between tasks:
This is a subtle but very important point: in general, if two operators need to share information, like a filename or small amount of data, you should consider combining them into a single operator. If it absolutely can’t be avoided, Airflow does have a feature for operator cross-communication called XCom that is described in the section XComs.
I fundamentally don't understand what they mean. If there's no data to pass between two tasks, why are they part of the same DAG?
I've got half a dozen different tasks that take turns editing one file in place, and each sends an XML report to a final task that compiles a report of what was done. Airflow wants me to put all of that in one Operator? Then what am I gaining by doing it in Airflow? Or how can I restructure it in an Airflowy way?
Fundamentally, each instance of an operator in a DAG is mapped to a different task.
This is a subtle but very important point: in general if two operators need to share information, like a filename or small amount of data, you should consider combining them into a single operator
The sentence above means that if some information needs to be shared between two different tasks, it is best to combine them into one task instead of using two. On the other hand, if you must use two different tasks and you need to pass some information from one to the other, you can do it using Airflow's XCom, which is similar to a key-value store.
In a data engineering use case, the file schema is important before processing. Imagine two tasks as follows:
Files_Exist_Check: the purpose of this task is to check whether particular files exist in a directory or not before continuing.
Check_Files_Schema: the purpose of this task is to check whether the file schema matches the expected schema or not.
It would only make sense to start your processing if the Files_Exist_Check task succeeds, i.e. you have some files to process.
In this case, you can "push" a key like "file_exists" to XCom in the Files_Exist_Check task, with the value being the count of files present in that particular directory.
Now, you "pull" this value using the same key in the Check_Files_Schema task; if it returns 0, there are no files for you to process, so you can raise an exception and fail the task, or handle it gracefully.
Hence, sharing information across tasks using XCom does come in handy in this case (a short sketch follows below).
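For illustration, here is a minimal sketch of that Files_Exist_Check / Check_Files_Schema idea, using XCom to pass the file count between the two tasks (Airflow 2-style imports; the directory, DAG id and task ids are assumptions):

import os
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

DATA_DIR = "/tmp/incoming"  # assumed input directory

def files_exist_check(**context):
    file_count = len(os.listdir(DATA_DIR))
    # push the count under an explicit key
    context["ti"].xcom_push(key="file_count", value=file_count)

def check_files_schema(**context):
    file_count = context["ti"].xcom_pull(task_ids="files_exist_check", key="file_count")
    if not file_count:
        raise ValueError("No files to process")
    # ... actual schema validation would go here ...

with DAG("file_checks", start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:
    exist_check = PythonOperator(task_id="files_exist_check", python_callable=files_exist_check)
    schema_check = PythonOperator(task_id="check_files_schema", python_callable=check_files_schema)
    exist_check >> schema_check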
You can refer to the following links for more info:
https://www.astronomer.io/guides/airflow-datastores/
Airflow - How to pass xcom variable into Python function
What you have to do to avoid having everything in one operator is to save the data somewhere. I don't quite understand your flow, but if, for instance, you want to extract data from an API and insert it into a database, you would need to have:
a PythonOperator (or BashOperator, whatever) that takes the data from the API and saves it to S3 / a local file / Google Drive / Azure Storage...
a SQL-related operator that takes the data from that storage and inserts it into the database
Anyway, if you know which files you are going to edit, you may also use Jinja templates or read the info from a text file and build a loop or something in the DAG (a sketch of this two-step pattern follows below). I could help you more if you clarify your actual flow a little bit.
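A minimal sketch of that two-step pattern, assuming a hypothetical API URL and a local file as the intermediate storage (with multiple workers you would swap the local path for S3/GCS/Azure Storage):

import json
from datetime import datetime

import requests
from airflow import DAG
from airflow.operators.python import PythonOperator

# a shared location both tasks can reach; only works if both run on the same worker
DATA_PATH = "/tmp/api_payload.json"

def fetch_from_api():
    payload = requests.get("https://example.com/api/data").json()
    with open(DATA_PATH, "w") as f:
        json.dump(payload, f)

def load_into_db():
    with open(DATA_PATH) as f:
        payload = json.load(f)
    # insert `payload` into the database here, e.g. via a DB hook or SQL operator
    print(f"loaded {len(payload)} records")

with DAG("api_to_db", start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:
    fetch = PythonOperator(task_id="fetch_from_api", python_callable=fetch_from_api)
    load = PythonOperator(task_id="load_into_db", python_callable=load_into_db)
    fetch >> load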
I've decided that, as mentioned by @Anand Vidvat, they are making a distinction between Operators and Tasks here. What I think is that they don't want you to write two Operators that inherently need to be paired together and pass data to each other. On the other hand, it's fine to have one task use data from another; you just have to provide filenames etc. in the DAG definition.
For example, many of the builtin Operators have constructor parameters for files, like the S3FileTransformOperator. Confusing documentation, but oh well!

Airflow dynamic dag creation

Could someone please tell me whether a DAG in Airflow is just a graph (like a placeholder) without any actual data (like arguments) associated with it, OR whether a DAG is like an instance (for a fixed set of arguments)?
I want a system where the set of operations to perform (given a set of arguments) is fixed, but the input will be different every time the set of operations is run. In simple terms, the pipeline is the same but the arguments to the pipeline will be different every time it runs.
I want to know how to configure this in Airflow. Should I create a new DAG for every new set of arguments, or is there another method?
In my case, the graph is the same but I want to run it on different data (from different users) as it comes in. So, should I create a new DAG every time for new data?
Yes, you are correct; a DAG is basically a kind of one-way graph. You can create a DAG once by chaining together multiple operators to form your "structure".
Each operator can then take multiple arguments that you can pass from the DAG definition file itself (if needed).
Or you can pass a configuration object to the DAG and access custom data from there using the context (see the sketch below).
I would recommend reading the Airflow docs for more examples: https://airflow.apache.org/concepts.html#tasks
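As a minimal sketch of the configuration-object approach (the dag_id, the conf key user_id and the trigger command are illustrative assumptions):

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def process(**context):
    conf = context["dag_run"].conf or {}
    user_id = conf.get("user_id", "unknown")
    print(f"processing data for user {user_id}")

with DAG(
    dag_id="parameterised_pipeline",
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,  # triggered externally with a different conf each time
) as dag:
    PythonOperator(task_id="process", python_callable=process)

# trigger the same DAG with different arguments, e.g.:
#   airflow dags trigger parameterised_pipeline --conf '{"user_id": "42"}'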
You can think of an Airflow DAG as a program made of other programs, with the exception that it can't contain loops (acyclic). Will you change your program every time the input data changes? Of course, it all depends on how you write your program, but usually you'd like your program to generalise, right? You don't want two different programs to do 2+2 and 3+3. But you will have different programs to show Facebook pages and to play Pokemon Go. If you want to do the same thing to similar data, then you want to write your DAG once and maybe only change environment arguments (DB connection, date, etc.) - Airflow is perfectly suitable for that.
You do not need to create a new DAG every time, if the structure of the graph is the same.
Airflow DAGs are created via code, so you are free to create a code structure that allows you to pass in arguments each time. How you do that will require some creative thinking.
You could, for example, create a web form that accepts the arguments, stores them in a DB and then triggers the DAG with the Airflow REST API (see the sketch below). The DAG code would then need to be written to retrieve the params from the database.
There are several other ways to accomplish what you are asking; they all just depend on your use case.
One caveat: the Airflow scheduler does not perform well if you change the start date of a DAG. For the idea above you will need to set the start date earlier than your first DAG run and then turn the schedule off (schedule_interval=None). This way you have a start date that doesn't change and dynamically triggered DAG runs.
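For the web-form idea above, here is a hedged sketch of triggering such a DAG run with per-run arguments through the Airflow 2 stable REST API (the URL, credentials, dag_id and conf payload are assumptions):

import requests

# trigger a run of the (hypothetical) "parameterised_pipeline" DAG with this run's arguments
resp = requests.post(
    "http://localhost:8080/api/v1/dags/parameterised_pipeline/dagRuns",
    auth=("admin", "admin"),  # assumes the basic-auth API backend is enabled
    json={"conf": {"user_id": "42"}},
)
resp.raise_for_status()
print(resp.json()["dag_run_id"])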

Best way to copy tasks from one DAG into another?

Say I have two pre-existing DAGs, A and B. Is it possible in Airflow to "copy" all tasks from B into A, preserving dependencies and propagating default arguments and the like for all of B's tasks? With the end goal being to have a new DAG A' that contains all of the tasks of both A and B.
I understand that it might not be possible or feasible to reconcile DAG-level factors, e.g. scheduling, and propagate across the copying, but is it possible to at least preserve the dependency orderings of the tasks such that each task runs as expected, when expected—just in a different DAG?
If it is possible, what would be the best way to do so? If it's not supported, is there work in progress to support this sort of "native DAG composition"?
UPDATE-1
Based on the clarification of the question, I infer that the requirement is not to replicate one DAG into another but to append it after another DAG.
The techniques mentioned in the original answer below are still applicable (to a variable extent).
But for this specific use-case there are a few more options:
i. Use TriggerDagRunOperator: invoke your 2nd DAG at the end of the 1st DAG (see the sketch below)
ii. Use SubDagOperator: wrap your 2nd DAG into a SubDAG and attach it at the end of the 1st DAG
But do check out the Wiring top-level DAGs together thread (question / answer plus comments) for ideas / loopholes in each of the above techniques.
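A minimal sketch of option i., assuming Airflow 2-style import paths and hypothetical dag_ids dag_a and dag_b:

from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.operators.trigger_dagrun import TriggerDagRunOperator

with DAG(
    dag_id="dag_a",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
) as dag_a:
    last_task_of_a = DummyOperator(task_id="last_task_of_a")

    trigger_b = TriggerDagRunOperator(
        task_id="trigger_dag_b",
        trigger_dag_id="dag_b",  # dag_id of the DAG being appended
    )

    last_task_of_a >> trigger_b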
ORIGINAL ANSWER
I can think of 3 possible ways:
1. The recommended way would be to programmatically construct your DAG. In other words, if possible, iterate over a list of configs (each config for one task) read from an external source (such as an Airflow Variable, a database or JSON files) and build your DAG as per your business logic. Here, you'll just have to alter the dag_id and you can re-use the same script to build a DAG identical to your original one.
2. A modification of the 1st approach above is to generalize your DAG-construction logic by employing a simple idea like ajbosco/dag-factory or a full-fledged wrapper framework like etsy/boundary-layer.
3. Finally, if none of the above approaches is easily adaptable for you, you can hand-code the task-replication logic to regenerate the same structure as your original DAG. You can write a single robust script and re-use it across your entire project to replicate DAGs as and when needed. Here you'll have to go through DAG traversal and some traditional data-structure and algorithmic stuff. Here's an example of a BFS-like traversal over the tasks of an Airflow DAG (a minimal sketch follows below).
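As a starting point for option 3, here is a minimal BFS-like sketch over an existing DAG's tasks; how you re-create each task in the target DAG is left out, since that depends on your operators:

from collections import deque

def walk_dag(source_dag):
    """BFS over a DAG's tasks, starting from tasks that have no upstream dependencies."""
    queue = deque(t for t in source_dag.tasks if not t.upstream_list)
    seen = set()
    while queue:
        task = queue.popleft()
        if task.task_id in seen:
            continue
        seen.add(task.task_id)
        # here you would re-create `task` in the target DAG and re-wire its edges
        print(task.task_id, "->", [t.task_id for t in task.downstream_list])
        queue.extend(task.downstream_list)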

How can one set a variable for use only during a certain dag_run

How do I set a variable for use during a particular dag_run? I'm aware of setting values in XCom, but not all the operators that I use have XCom support. I also would not like to store the value in the Variables datastore, in case another DAG run begins while the current one is running and needs to store different values.
The question is not entirely clear, but from what I can infer, I'll try to clear up your doubts.
not all the operators that I use have XCom support
Apparently you've mistaken XCom for some other construct, because the XCom feature is part of TaskInstance, and the functions xcom_push() and xcom_pull() are defined in BaseOperator itself (which is the parent of all Airflow operators).
I also would not like to store the value in the Variables datastore, in case another DAG run begins while the current one is running and needs to store different values.
It is straightforward (and a no-brainer) to separate out Variables on a per-DAG basis (see point (6)); but yes, for different DagRuns of a single DAG, this kind of isolation would be a challenge. I can think of XCom as the easiest workaround for this. Have a look at this for some insights on the usage of XComs.
Additionally, if you want to manipulate Variables (or any other Airflow model) at runtime (though I would recommend you avoid it, particularly for Variables), Airflow also gives you complete liberty to exploit the underlying SQLAlchemy ORM framework for that. You can take inspiration from this. A hedged sketch of both options follows below.
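To make both options concrete, here is a hedged sketch: (a) XCom, which is already scoped to the DagRun, and (b) a Variable whose key embeds the run_id so concurrent DagRuns don't clash. The task names and keys are illustrative assumptions, and with option (b) you should remember to clean the Variable up afterwards:

from airflow.models import Variable

def producer(**context):
    value = "computed-for-this-run"
    # (a) preferred: XCom is already scoped to this DagRun / task instance
    context["ti"].xcom_push(key="my_value", value=value)
    # (b) workaround: namespace a Variable by run_id so concurrent runs don't clash
    Variable.set(f"my_value_{context['run_id']}", value)

def consumer(**context):
    from_xcom = context["ti"].xcom_pull(task_ids="producer", key="my_value")
    from_var = Variable.get(f"my_value_{context['run_id']}", default_var=None)
    # remove the per-run Variable in a clean-up task so these keys don't pile up
    print(from_xcom, from_var)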
