How to create operators from a list in Airflow?

I need to copy tables from MySQL to BigQuery daily.
My workflow is:
MySqlToGoogleCloudStorageOperator
GoogleCloudStorageToBigQueryOperator
This works for a single process (say Categories).
Example:
BQ_TABLE_NAME_CATEGORIES = Variable.get("tables_categories")
...
import_categories_op = MySqlToGoogleCloudStorageOperator(
    task_id='import_categories',
    mysql_conn_id='c_mysql',
    google_cloud_storage_conn_id='gcp_a',
    approx_max_file_size_bytes=100000000,  # 100MB per file
    sql='import_categories.sql',
    bucket=GCS_BUCKET_ID,
    filename=file_name_categories,
    dag=dag)
gcs_to_bigquery_categories_op = GoogleCloudStorageToBigQueryOperator(
    dag=dag,
    task_id='load_categories_to_BigQuery',
    bucket=GCS_BUCKET_ID,
    destination_project_dataset_table=table_name_template_categories,
    source_format='NEWLINE_DELIMITED_JSON',
    source_objects=[uri_template_categories_read_from],
    schema_fields=Categories(),
    src_fmt_configs={'ignoreUnknownValues': True},
    create_disposition='CREATE_IF_NEEDED',
    write_disposition='WRITE_TRUNCATE',
    skip_leading_rows=1,
    google_cloud_storage_conn_id=CONNECTION_ID,
    bigquery_conn_id=CONNECTION_ID)
import_categories_op >> gcs_to_bigquery_categories_op
Now, say I want to scale it up and have it work with 20 more tables. Is there a way to do it without writing the same code 20 times?
I'm looking for a way to do something like:
BQ_TABLE_NAME_CATEGORIES = Variable.get("tables_categories")
BQ_TABLE_NAME_PRODUCTS = Variable.get("tables_products")
....
BQ_TABLE_NAME_ORDERS = Variable.get("tables_orders")
list = [BQ_TABLE_NAME_CATEGORIES, BQ_TABLE_NAME_PRODUCTS, BQ_TABLE_NAME_ORDERS]
for item in list:
    # generate the operators per table
so that it will create import_categories_op, import_products_op, import_orders_op, etc.

Yes, in fact it's exactly what you described. Simply instantiate your operators in your for loop. Make sure your task ids are unique and you're set:
BQ_TABLE_NAME_CATEGORIES = Variable.get("tables_categories")
BQ_TABLE_NAME_PRODUCTS = Variable.get("tables_products")
tables = [BQ_TABLE_NAME_CATEGORIES, BQ_TABLE_NAME_PRODUCTS]
for table in tables:
    import_op = MySqlToGoogleCloudStorageOperator(
        task_id=f'import_{table}',
        mysql_conn_id='c_mysql',
        google_cloud_storage_conn_id='gcp_a',
        approx_max_file_size_bytes=100000000,  # 100MB per file
        sql=f'import_{table}.sql',
        bucket=GCS_BUCKET_ID,
        filename=file_name,
        dag=dag)
    gcs_to_bigquery_op = GoogleCloudStorageToBigQueryOperator(
        dag=dag,
        task_id=f'load_{table}_to_BigQuery',
        bucket=GCS_BUCKET_ID,
        destination_project_dataset_table=table_name_template,
        source_format='NEWLINE_DELIMITED_JSON',
        source_objects=[uri_template_read_from],
        schema_fields=Categories(),
        src_fmt_configs={'ignoreUnknownValues': True},
        create_disposition='CREATE_IF_NEEDED',
        write_disposition='WRITE_TRUNCATE',
        skip_leading_rows=1,
        google_cloud_storage_conn_id=CONNECTION_ID,
        bigquery_conn_id=CONNECTION_ID)
    import_op >> gcs_to_bigquery_op
You can simplify this if you store all tables in a single variable:
# bq_tables = "table_products,table_orders"
BQ_TABLES = Variable.get("bq_tables").split(',')
for table in BQ_TABLES:
    ...
Edit: Task references vs IDs
Luis asked how it is that only the task IDs need to change (and not the references to the tasks). Actually, you don't need to refer to your tasks at all except to add details to them after creation (like upstream and downstream dependencies), because they're stored in the DAG object on creation, and that's all the DAG parser is looking for. Once the DAG parser finds a DAG object in the global scope, it uses it. It doesn't know what names the tasks were bound to in the global scope; it only knows that those tasks are listed on the DAG object and that they list each other as upstream or downstream.
I would have made this a comment on this answer, but I wanted to show the following code to explain what I mean a bit more obviously (in which I use with DAG to avoid assigning each task to the dag, the bitshift operators to set upstream/downstream dependencies without even needing to refer to the tasks by a reference, and Python 3's f-strings):
# bq_tables = "table_products,table_orders"
BQ_TABLES = Variable.get("bq_tables").split(',')
with DAG('…dag_id…', …) as dag:
    for table in BQ_TABLES:
        MySqlToGoogleCloudStorageOperator(
            task_id=f'import_{table}',
            sql=f'import_{table}.sql',
            …  # all params except notably there's no `dag=dag` in here.
        ) >> GoogleCloudStorageToBigQueryOperator(  # Yup, …
            task_id=f'load_{table}_to_BigQuery',
            …  # again, all but `dag=dag` in here.
        )
Sure, it could have been t1=…; t2=…; t1>>t2; … but why name references?

Related

Pull list xcoms in TaskGroups not working

My Airflow code has the below PythonOperator callable, where I am creating a list and pushing it to XComs:
keys = []
values = []

def attribute_count_check(e_run_id, **context):
    job_run_id = int(e_run_id)
    da = "select count (distinct row_num) from dds_metadata.dds_temp_att_table where run_id ={}".format(job_run_id)
    cursor.execute(da)
    res = cursor.fetchall()
    view_res = [x for res in res for x in res]
    count_of_sql = view_res[0]
    print(count_of_sql)
    if count_of_sql < 1:
        print("deleting of cluster")
        return 'delete_cluster'
    else:
        print("triggering attr_check")
        num_attributes_per_task = num_attr  # job_config
        diff = math.ceil(count_of_sql / num_attributes_per_task)
        instance = int(diff)
        n = num_attributes_per_task
        global values
        global keys
        for r in range(1, instance + 1):
            # a = r
            keys.append(r)
            lower_ranges = (n * (r - 1)) + 1
            upper_range = (n * (r - 1)) + n
            b = (lower_ranges, upper_range)
            values.append(b)
        task_instance = context['task_instance']
        task_instance.xcom_push(key="di_keys", value=keys)
        task_instance.xcom_push(key="di_values", value=values)
The XComs from the job are as shown in the screenshot below:
Now I am trying to fetch the values from XComs to create clusters dynamically with the code below:
with TaskGroup('dataproc_create_cluster', prefix_group_id=False) as dataproc_create_clusters:
    for i in zip('{{ ti.xcom_pull(key="di_keys")}}', '{{ ti.xcom_pull(key="di_values")}}'):
        dynamic_create_cluster = DataprocCreateClusterOperator(
            task_id="create_cluster_{}".format(list(eval(str(i)))[0]),
            project_id='{0}'.format(PROJECT),
            cluster_config=CLUSTER_GENERATOR_CONFIG,
            region='{0}'.format(REGION),
            cluster_name="dataproc-cluster-{}-sit".format(str(i[0])),
        )
But I am getting the below error:
Broken DAG: [/opt/airflow/dags/Cluster_config.py] Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/airflow/models/baseoperator.py", line 547, in __init__
    validate_key(task_id)
  File "/usr/local/lib/python3.6/site-packages/airflow/utils/helpers.py", line 56, in validate_key
    "dots and underscores exclusively".format(k=k)
airflow.exceptions.AirflowException: The key (create_cluster_{) has to be made of alphanumeric characters, dashes, dots and underscores exclusively
So I changed the task_id as below:
task_id="create_cluster_"+re.sub(r'\W+', '', str(list(eval(str(i)))[0])),
After which I got the below error:
airflow.exceptions.DuplicateTaskIdFound: Task id 'create_cluster_' has already been added to the DAG
This made me think that the value in XComs is being parsed one character at a time, so I used render_template_as_native_obj=True.
But I am still getting the duplicate task id error.
Regarding the jinja2 templating outside of templated fields
First, you can only use jinja2 templating in templated fields. Simply put, there are two processes: one is parsing the DAG (which happens first), the other is executing the tasks. At the moment your DAG is parsed, no tasks have run yet and there is no TaskInstance available, and thus also no XCom pull available. However, with templated fields, you can use jinja2 templating, for which the values of the fields are computed at the moment your task executes. At that point, the TaskInstance and the XCom pull are available.
For example, in a PythonOperator you can use the following templated fields:
template_fields: Sequence[str] = ('templates_dict', 'op_args', 'op_kwargs')
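For instance, here is a minimal sketch of how the XCom pull could be moved into templates_dict so it is rendered at execution time; the task id and what the callable does with the values are illustrative, only the XCom keys come from your code:
from airflow.operators.python import PythonOperator

def use_ranges(**context):
    # Rendered at execution time, when the TaskInstance and XComs exist.
    keys = context['templates_dict']['di_keys']
    values = context['templates_dict']['di_values']
    print(keys, values)

# inside your `with DAG(...)` (or TaskGroup) block:
use_ranges_op = PythonOperator(
    task_id='use_ranges',  # illustrative task id
    python_callable=use_ranges,
    templates_dict={
        'di_keys': '{{ ti.xcom_pull(key="di_keys") }}',
        'di_values': '{{ ti.xcom_pull(key="di_values") }}',
    },
)
Note that without render_template_as_native_obj=True on the DAG, the rendered values arrive in the callable as strings.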
Changing the number of tasks based on a result of a task.
Second, you cannot change the number of tasks a DAG contains based on the output of a task; Airflow simply does not support this. There is one exception: dynamic task mapping (mapped tasks). There is a nice example in the docs, which I copied here:
@task
def make_list():
    # This can also be from an API call, checking a database, -- almost anything you like, as long as the
    # resulting list/dictionary can be stored in the current XCom backend.
    return [1, 2, {"a": "b"}, "str"]

@task
def consumer(arg):
    print(list(arg))

with DAG(dag_id="dynamic-map", start_date=datetime(2022, 4, 2)) as dag:
    consumer.expand(arg=make_list())
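Applied to your case, here is a hedged sketch (Airflow 2.3+ only; the task that builds the list is simplified and the generated names are illustrative, while PROJECT, REGION and CLUSTER_GENERATOR_CONFIG are taken from your code). Mapping creates one cluster task per list entry, which also sidesteps the duplicate-task-id problem:
from airflow.decorators import task
from airflow.providers.google.cloud.operators.dataproc import DataprocCreateClusterOperator

@task
def cluster_names():
    # Run the attribute-count query here and return one name per cluster to create.
    return [f"dataproc-cluster-{i}-sit" for i in range(1, 4)]

# inside your `with DAG(...)` block:
DataprocCreateClusterOperator.partial(
    task_id="create_cluster",
    project_id=PROJECT,
    region=REGION,
    cluster_config=CLUSTER_GENERATOR_CONFIG,
).expand(cluster_name=cluster_names())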

Airflow Broken DAG error during dynamic task creation with variables

I am trying to create dynamic tasks depending on an Airflow variable.
My code is:
default_args = {
    'start_date': datetime(year=2021, month=6, day=20),
    'provide_context': True
}

with DAG(
    dag_id='Target_DIF',
    default_args=default_args,
    schedule_interval='@once',
    description='ETL pipeline for processing users'
) as dag:
    iterable_list = Variable.get("num_table")
    for index, table in enumerate(iterable_list):
        read_src1 = PythonOperator(
            task_id=f'read_src_{table}',
            python_callable=read_src,
        )
        upload_file_to_directory_bulk1 = PythonOperator(
            task_id=f'upload_file_to_directory_bulk_{table}',
            python_callable=upload_file_to_directory_bulk
        )
        write_Snowflake1 = PythonOperator(
            task_id=f'write_Snowflake_{table}',
            python_callable=write_Snowflake
        )
        # TaskGroup level dependencies
        # DAG level dependencies
        start >> read_src1 >> upload_file_to_directory_bulk1 >> write_Snowflake1 >> end
I am facing the below error:
Broken DAG: [/home/dif/airflow/dags/target_dag.py] Traceback (most recent call last):
airflow.exceptions.AirflowException: The key (read_src_[) has to be made of alphanumeric characters, dashes, dots and underscores exclusively
The code works perfectly with the following change:
#iterable_list = Variable.get("num_table")
iterable_list = ['inventories', 'products']
Start and End are dummy operators.
The Airflow variable has data as shown in the image.
My expected dynamic workflow:
I am able to achieve the above flow with a list but not with an Airflow variable.
Any leads to find the cause of the error are appreciated. Thanks.
Variable.get("num_table") returns a string,
thus your loop is actually iterating over the characters of the string "['inventories', 'products']", which is why in the first iteration of the loop task_id=f'read_src_{table}' becomes read_src_[, and [ is not a valid character for a task_id.
You should convert the string into a list.
Save your variable as "inventories,products" and then you can do:
iterable_string = Variable.get("num_table")
iterable_list = iterable_string.split(",")
for index, table in enumerate(iterable_list):
You should note that using Variable.get("num_table") as top-level code is a very bad practice!
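(The reason is that top-level code runs every time the scheduler parses the DAG file, so every parse hits the metadata database. When the value is only needed at run time, a hedged alternative is to reference the variable in a templated field instead; the task below is purely illustrative. For generating tasks dynamically, though, the value does have to be available at parse time.)
from airflow.operators.bash import BashOperator

# inside your `with DAG(...)` block:
print_tables = BashOperator(
    task_id='print_tables',  # illustrative task id
    # Rendered at run time, so parsing the DAG file does not query the variable:
    bash_command='echo "{{ var.value.num_table }}"',
)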
The problem is that by default, Airflow reads the variables as str. Try using this:
iterable_list = Variable.get("num_table", deserialize_json=True)
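For this to work the variable has to be stored as valid JSON; a minimal sketch (the stored value shown is an assumption):
# Variable "num_table" stored as: ["inventories", "products"]
iterable_list = Variable.get("num_table", deserialize_json=True)
for index, table in enumerate(iterable_list):
    print(index, table)  # 0 inventories, then 1 products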
I was able to arrive at the solution with the following modifications:
import ast
...
...
iterable_string = Variable.get("num_table",default_var="[]")
iterable_list = ast.literal_eval(iterable_string)
...
Airflow variables are stored as strings.
So my data was stored as "[tab1,tab2]".
So I used literal_eval to convert the string back to a list.
I also added an empty list as the default, so that if no values are present in the variable num_table, I will not process further.

Airflow: How to template or pass the output of a Python Callable function as arguments to other tasks?

I'm new to Airflow and working on making my ETL pipeline more reusable. Originally, I had a few lines of top-level code that would determine the job_start based on a few user input parameters, but I found through much searching that this would run at every heartbeat, which was causing some unwanted behavior in truncating the table.
Now I am investigating wrapping this top-level code in a Python callable so it is safe from the refresh, but I am unsure of the best way to pass the output to my other tasks. The gist of my code is below:
def get_job_dts():
    # Do something to determine the appropriate job_start_dt and job_end_dt
    # Package up as a list as inputs to other PythonCallables using op_args
    job_params = [job_start_dt, job_end_dt]
    return job_params

t0 = PythonOperator(
    task_id='get_dates',
    python_callable=get_job_dts,
    dag=dag
)

t1 = PythonOperator(
    task_id='task_1',
    python_callable=first_task,
    op_args=job_params,  # <-- How do I send job_params to op_args??
    dag=dag
)

t0 >> t1
I've searched around and heard mentions of Jinja templates, variables, or XComs, but I'm fuzzy on how to implement them. Does anyone have an example I could look at where I save that list into a variable that can be used by my other tasks?
The best way to do this is to push your value into XCom in get_job_dts, and pull the value back from XCom in first_task.
def get_job_dts(**kwargs):
    # Do something to determine the appropriate job_start_dt and job_end_dt
    # Package up as a list as inputs to other PythonCallables using op_args
    job_params = [job_start_dt, job_end_dt]
    # Push job_params into XCom
    kwargs['ti'].xcom_push(key='job_params', value=job_params)
    return job_params

def first_task(ti, **kwargs):
    # Pull job_params from XCom
    job_params = ti.xcom_pull(key='job_params')
    # And then do the rest

t0 = PythonOperator(
    task_id='get_dates',
    python_callable=get_job_dts,
    dag=dag
)

t1 = PythonOperator(
    task_id='task_1',
    provide_context=True,
    python_callable=first_task,
    dag=dag
)

t0 >> t1
As RyantheCoder mentioned, XCom is the way to go. My implementation follows the tutorial, where the return value of the Python callable is implicitly pushed to XCom.
I am still confused by the difference between passing (ti, **kwargs) and using (**context) in the function that is pulling. Also, where does "ti" come from?
Any clarifications appreciated.
def get_job_dts(**kwargs):
    # Do something to determine the appropriate job_start_dt and job_end_dt
    # Package up as a list as inputs to other PythonCallables using op_args
    job_params = [job_start_dt, job_end_dt]
    # The return value is automatically pushed to XCom; see the Airflow XCom docs:
    # https://airflow.apache.org/concepts.html?highlight=xcom#xcoms
    return job_params

def first_task(**context):
    # Change task_ids to whatever task pushed the XCom vars you need; the rest is standard notation
    job_params = context['task_instance'].xcom_pull(task_ids='get_dates')
    # And then do the rest

t0 = PythonOperator(
    task_id='get_dates',
    python_callable=get_job_dts,
    dag=dag
)

t1 = PythonOperator(
    task_id='task_1',
    provide_context=True,
    python_callable=first_task,
    dag=dag
)

t0 >> t1
Since you mentioned changing the task start time and end time dynamically, I suppose what you need is to create a dynamic DAG rather than just pass args to the DAG. In particular, changing the start time and interval without changing the DAG name will cause unexpected results, and it is highly suggested not to do so. You can refer to this link to see if that strategy can help.

How to trigger operator inside Python function using Airflow?

I have the following code:
def chunck_import(**kwargs):
    ...
    for i in range(1, num_pages + 1):
        start = lower + chunks * i
        end = start + chunks
        if i > 1:
            start = start + 1
        logging.info(start, end)
        if end > max_current:
            end = max_current
        where = 'where orders_id between {0} and {1}'.format(start, end)
        logging.info(where)
        import_orders_products_op = MySqlToGoogleCloudStorageOperator(
            task_id='import_orders_and_upload_to_storage_orders_products_{}'.format(i),
            mysql_conn_id='mysql_con',
            google_cloud_storage_conn_id='gcp_con',
            provide_context=True,
            approx_max_file_size_bytes=100000000,  # 100MB per file
            sql='import_orders.sql',
            params={'WHERE': where},
            bucket=GCS_BUCKET_ID,
            filename=file_name_orders_products,
            dag=dag)

start_task_op = DummyOperator(task_id='start_task', dag=dag)

chunck_import_op = PythonOperator(
    task_id='chunck_import',
    provide_context=True,
    python_callable=chunck_import,
    dag=dag)

start_task_op >> chunck_import_op
This code uses a PythonOperator to calculate how many runs I need of the MySqlToGoogleCloudStorageOperator and to create the WHERE clause of the SQL, and then it needs to execute it.
The problem is that the MySqlToGoogleCloudStorageOperator isn't being executed.
I can't actually do
chunck_import_op >> import_orders_products_op
How can I make the MySqlToGoogleCloudStorageOperator be executed inside the PythonOperator?
I think at the end of your for loop you'll want to call import_orders_products_op.execute(context=kwargs), possibly preceded by import_orders_products_op.pre_execute(context=kwargs). This is a bit complicated in that it skips the render_templates() call of the task instance. If you instead made a TaskInstance to put each of these tasks in, you could call run or _raw_run_task instead, but both of these require information from the DagRun (which you can get in the Python callable's context, e.g. kwargs['dag_run']).
Looking at what you've passed to the operators, it looks like, as is, you'll need the templating step to load the import_orders.sql file and fill in the WHERE parameter. Alternatively, it's okay within the callable itself to load the file into a string, replace the {{ params.WHERE }} part (and any others) manually without Jinja2 (or you could spend time figuring out the right jinja2 calls), and then set import_orders_products_op.sql = the_string_you_loaded before calling import_orders_products_op.pre_execute(context=kwargs) and import_orders_products_op.execute(context=kwargs).
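A rough sketch of that second approach, with the manual substitution done inside the callable; the SQL file path and the placeholder name are assumptions based on your params={'WHERE': where}, the rest mirrors your code:
def chunck_import(**kwargs):
    ...
    for i in range(1, num_pages + 1):
        ...
        where = 'where orders_id between {0} and {1}'.format(start, end)

        import_op = MySqlToGoogleCloudStorageOperator(
            task_id='import_orders_and_upload_to_storage_orders_products_{}'.format(i),
            mysql_conn_id='mysql_con',
            google_cloud_storage_conn_id='gcp_con',
            approx_max_file_size_bytes=100000000,
            sql='import_orders.sql',  # replaced below, so Jinja never runs on it
            bucket=GCS_BUCKET_ID,
            filename=file_name_orders_products,
            dag=dag)

        # Load the SQL file yourself and fill in the parameter manually, since
        # calling execute() directly skips the task instance's template rendering.
        with open('import_orders.sql') as f:  # path is an assumption
            import_op.sql = f.read().replace('{{ params.WHERE }}', where)

        import_op.pre_execute(context=kwargs)
        import_op.execute(context=kwargs)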

Airflow: How to push xcom value from BigQueryOperator?

This is my operator:
bigquery_check_op = BigQueryOperator(
    task_id='bigquery_check',
    bql=SQL_QUERY,
    use_legacy_sql=False,
    bigquery_conn_id=CONNECTION_ID,
    trigger_rule='all_success',
    xcom_push=True,
    dag=dag
)
When I check the Render page in the UI, nothing appears there.
When I run the SQL in the console it returns the value 1400, which is correct.
Why doesn't the operator push the XCom?
I can't use BigQueryValueCheckOperator: that operator is designed to FAIL when a value check does not pass. I don't want anything to fail; I simply want to branch the code based on the return value from the query.
Here is how you might be able to accomplish this with the BigQueryHook and the BranchPythonOperator:
from airflow.operators.python_operator import BranchPythonOperator
from airflow.contrib.hooks import BigQueryHook

def big_query_check(**context):
    sql = context['templates_dict']['sql']
    bq = BigQueryHook(bigquery_conn_id='default_gcp_connection_id',
                      use_legacy_sql=False)
    conn = bq.get_conn()
    cursor = conn.cursor()
    cursor.execute(sql)
    results = cursor.fetchone()[0]
    # Do something with results, return the task_id to branch to
    if results == 0:
        return "task_a"
    else:
        return "task_b"

sql = "SELECT COUNT(*) FROM sales"

branching = BranchPythonOperator(
    task_id='branching',
    python_callable=big_query_check,
    provide_context=True,
    templates_dict={"sql": sql},
    dag=dag,
)
First we create a Python callable that we can use to execute the query and select which task_id to branch to. Second, we create the BranchPythonOperator.
The simplest answer is that xcom_push is not one of the params of BigQueryOperator, BaseOperator, or LoggingMixin.
The BigQueryGetDataOperator does return (and thus push) some data but it works by table and column name. You could chain this behavior by making the query you run output to a uniquely named table (maybe use {{ds_nodash}} in the name), and then use the table as a source for this operator, and then you can branch with the BranchPythonOperator.
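A hedged sketch of that chaining idea; the dataset, table, and branch task names are illustrative, and the parameter names follow the contrib operators of that era:
check_query = BigQueryOperator(
    task_id='bigquery_check',
    bql=SQL_QUERY,
    use_legacy_sql=False,
    destination_dataset_table='my_dataset.check_result_{{ ds_nodash }}',  # illustrative
    write_disposition='WRITE_TRUNCATE',
    bigquery_conn_id=CONNECTION_ID,
    dag=dag)

get_result = BigQueryGetDataOperator(
    task_id='get_check_result',
    dataset_id='my_dataset',                  # illustrative
    table_id='check_result_{{ ds_nodash }}',
    max_results=1,
    bigquery_conn_id=CONNECTION_ID,
    dag=dag)

def branch_on_result(**context):
    # BigQueryGetDataOperator pushes its rows as the return_value XCom.
    rows = context['ti'].xcom_pull(task_ids='get_check_result')
    return 'task_a' if int(rows[0][0]) == 1400 else 'task_b'

branching = BranchPythonOperator(
    task_id='branching',
    python_callable=branch_on_result,
    provide_context=True,
    dag=dag)

check_query >> get_result >> branching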
You might instead try to use the BigQueryHook's get_conn().cursor() to run the query and work with some data inside the BranchPythonOperator.
Elsewhere we chatted and came up with something along the lines of this for putting in the callable of a BranchPythonOperator:
cursor = BigQueryHook(bigquery_conn_id='connection_name').get_conn().cursor()
# one of these two:
cursor.execute(SQL_QUERY)  # if non-legacy
cursor.job_id = cursor.run_query(bql=SQL_QUERY, use_legacy_sql=False)  # if legacy
result = cursor.fetchone()
return "task_one" if result[0] == 1400 else "task_two"  # depends on results format
