Currently there is no S3ToBigQuery operator.
My choices are:
Use the S3ToGoogleCloudStorageOperator and then use the GoogleCloudStorageToBigQueryOperator
This is not something I'm eager to do, since it means paying for storage twice. Even if I remove the file from one of the two stores, it still involves some payment.
Download the file from S3 to the local file system and load it into BigQuery from there. However, there is no S3DownloadOperator, which means writing the whole process from scratch without Airflow involvement; that misses the point of using Airflow.
Is there another option? What would you suggest to do?
This is what I ended up with.
This should be converted into an S3ToLocalFile operator; a sketch of such an operator follows the snippet.
import logging

from airflow.hooks.S3_hook import S3Hook
from airflow.models import Variable


def download_from_s3(**kwargs):
    hook = S3Hook(aws_conn_id='project-s3')
    result = hook.read_key(bucket_name='stage-project-metrics',
                           key='{}.csv'.format(kwargs['ds']))
    if not result:
        logging.info('no data found')
    else:
        outfile = '{}project{}.csv'.format(Variable.get("data_directory"), kwargs['ds'])
        with open(outfile, 'w') as f:
            f.write(result)
    return result
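A minimal sketch of what such an operator could look like, built on BaseOperator and the same S3Hook call; the class name S3ToLocalFileOperator and its parameters are invented here for illustration and are not an existing Airflow operator:

import logging

from airflow.hooks.S3_hook import S3Hook
from airflow.models import BaseOperator
from airflow.utils.decorators import apply_defaults


class S3ToLocalFileOperator(BaseOperator):
    """Hypothetical operator: download a key from S3 and write it to a local file."""

    # Allow the key and target path to be templated, e.g. with {{ ds }}.
    template_fields = ('key', 'local_path')

    @apply_defaults
    def __init__(self, bucket_name, key, local_path,
                 aws_conn_id='aws_default', *args, **kwargs):
        super(S3ToLocalFileOperator, self).__init__(*args, **kwargs)
        self.bucket_name = bucket_name
        self.key = key
        self.local_path = local_path
        self.aws_conn_id = aws_conn_id

    def execute(self, context):
        hook = S3Hook(aws_conn_id=self.aws_conn_id)
        data = hook.read_key(bucket_name=self.bucket_name, key=self.key)
        if not data:
            logging.info('no data found for %s', self.key)
            return None
        with open(self.local_path, 'w') as f:
            f.write(data)
        return self.local_path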
If the first option is cost restrictive, you could just use the S3Hook to download the file through the PythonOperator:
from datetime import timedelta, datetime

from airflow import DAG
from airflow.hooks.S3_hook import S3Hook
from airflow.operators.python_operator import PythonOperator

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': datetime(2018, 1, 1),
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 0
}


def download_from_s3(**kwargs):
    hook = S3Hook(aws_conn_id='s3_conn')
    return hook.read_key(bucket_name='workflows-dev',
                         key='test_data.csv')


dag = DAG('s3_download',
          schedule_interval='@daily',
          default_args=default_args,
          catchup=False)

with dag:
    download_data = PythonOperator(
        task_id='download_data',
        python_callable=download_from_s3,
        provide_context=True
    )
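Note that read_key returns the object's contents as a string; because the callable returns it, the PythonOperator pushes it to XCom, which is fine for small files. For anything larger, write it to a local file instead (as in the snippet above) and return only the path.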
What you can do instead is use the S3ToGoogleCloudStorageOperator and then the GoogleCloudStorageToBigQueryOperator with the external_table flag, i.e. pass external_table=True.
This creates an external table that points to the GCS location; it doesn't store your data in BigQuery, but you can still query it.
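For reference, here is a rough sketch of that combination; the bucket names, connection ids, and table name are placeholders, and the parameter names follow the Airflow 1.10 contrib operators, so check them against your version:

from airflow.contrib.operators.s3_to_gcs_operator import S3ToGoogleCloudStorageOperator
from airflow.contrib.operators.gcs_to_bq import GoogleCloudStorageToBigQueryOperator

# assumes an existing `dag` object
s3_to_gcs = S3ToGoogleCloudStorageOperator(
    task_id='s3_to_gcs',
    bucket='stage-project-metrics',            # source S3 bucket (placeholder)
    prefix='{{ ds }}.csv',
    dest_gcs_conn_id='google_cloud_default',
    dest_gcs='gs://my-gcs-bucket/metrics/',    # placeholder GCS target
    dag=dag)

gcs_to_bq = GoogleCloudStorageToBigQueryOperator(
    task_id='gcs_to_bq',
    bucket='my-gcs-bucket',                    # placeholder
    source_objects=['metrics/{{ ds }}.csv'],
    destination_project_dataset_table='my_project.my_dataset.metrics',  # placeholder
    external_table=True,  # query the files in place; depending on the format you may
                          # also need schema_fields or schema_object here
    dag=dag)

s3_to_gcs >> gcs_to_bq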
Related
I'm struggling to understand the difference between a task and a DAG and when to use one over the other. I know a task is more granular and called within a DAG, but so much of Airflow documentation mentions creating DAGs on the go or calling other DAGs instead of tasks. Is there any significant difference between using either of these two options?
A DAG is a collection of tasks together with schedule information. Each task can perform different work depending on our requirements. Consider the DAG code below as an example: it prints the current time and then sends an e-mail notification.
# importing operators and modules
from airflow import DAG
from airflow.operators.python_operator import PythonOperator  # to call a python callable
from airflow.operators.email_operator import EmailOperator    # to send email
from datetime import datetime, timedelta, timezone            # to play with date and time
import dateutil

# setting default arguments
default_args = {
    'owner': 'test dag',
    'depends_on_past': False,
    'start_date': datetime(2021, 1, 1),
    'email': ['myemailid@example.com'],
    'email_on_failure': True,
    'email_on_retry': False,
    'retries': 0
}


def print_time(**context):
    now_utc = datetime.now(timezone.utc)
    print("current time", now_utc)


with DAG('example_dag', schedule_interval='0 12 * * *', max_active_runs=1,
         catchup=False, default_args=default_args) as dag:  # dag name is 'example_dag'

    # task to call the print_time callable
    current_time = PythonOperator(task_id='current_time', python_callable=print_time,
                                  provide_context=True,
                                  dag=dag)

    # task to send the notification email
    send_email = EmailOperator(task_id='send_email', to='myemailid@example.com',
                               subject='DAG completed successfully',
                               html_content="<p>Hi,<br><br>example DAG completed successfully<br>",
                               dag=dag)

    current_time >> send_email  # defining task dependency
Here current_time and send_email are two different tasks performing different work. We want to send an email once the current time has been printed, so we establish that task dependency at the end. We have also given a schedule_interval to run the DAG every day at 12 PM. Together this forms a DAG.
I have an Airflow DAG which I use to submit a Spark job, and for that I use the SparkSubmitOperator. In the DAG, I have to specify the application JAR that needs to be run. At the moment it is hardcoded to spark-job-1.0.jar, as follows:
from airflow import DAG
from airflow.contrib.operators.spark_submit_operator import SparkSubmitOperator
from datetime import datetime, timedelta
args = {
    'owner': 'joe',
    'start_date': datetime(2020, 5, 23)
}

dag = DAG('spark_job', default_args=args)

operator = SparkSubmitOperator(
    task_id='spark_sm_job',
    conn_id='spark_submit',
    java_class='com.mypackage',
    application='/home/ubuntu/spark-job-1.0.jar',  # <---------
    total_executor_cores='1',
    executor_cores='1',
    executor_memory='500M',
    num_executors='1',
    name='airflow-spark-job',
    verbose=False,
    driver_memory='500M',
    application_args=["yarn", "10.11.21.12:9092"],
    dag=dag,
)
The problem is that the release version will increment, and I tried using a wildcard spark-job-*.jar but it didn't work. Is it possible to use a wildcard, or is there any other way to get around this?
Assuming that there will always be only one file matching the glob /home/ubuntu/spark-job-*.jar, you can do the following:
import glob
...
application=glob.glob('/home/ubuntu/spark-job-*.jar')[0],
...
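If more than one release of the jar can be lying around at the same time, a small variation (not part of the original answer) is to pick the most recently modified match instead of the first one:

import glob
import os

# Assumption: the newest file on disk is the release you want to submit.
application = max(glob.glob('/home/ubuntu/spark-job-*.jar'), key=os.path.getmtime)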
I am trying to use a Jinja template variable instead of Variable.get('sql_path'), so as to avoid hitting the DB on every scan of the DAG file.
Original code
import datetime
import os
from functools import partial
from datetime import timedelta
from airflow.models import DAG,Variable
from airflow.contrib.operators.snowflake_operator import SnowflakeOperator
from alerts.email_operator import dag_failure_email
SNOWFLAKE_CONN_ID = 'etl_conn'
tmpl_search_path = []
for subdir in ['business/', 'audit/', 'business/transform/']:
    tmpl_search_path.append(os.path.join(Variable.get('sql_path'), subdir))
def get_db_dag(
        *,
        dag_id,
        start_date,
        schedule_interval,
        max_taskrun,
        max_dagrun,
        proc_nm,
        load_sql
):
    default_args = {
        'owner': 'airflow',
        'start_date': start_date,
        'provide_context': True,
        'execution_timeout': timedelta(minutes=max_taskrun),
        'retries': 0,
        'retry_delay': timedelta(minutes=3),
        'retry_exponential_backoff': True,
        'email_on_retry': False,
    }

    dag = DAG(
        dag_id=dag_id,
        schedule_interval=schedule_interval,
        dagrun_timeout=timedelta(hours=max_dagrun),
        template_searchpath=tmpl_search_path,
        default_args=default_args,
        max_active_runs=1,
        catchup='{{var.value.dag_catchup}}',
        on_failure_callback=dag_failure_email,
    )

    load_table = SnowflakeOperator(
        task_id='load_table',
        sql=load_sql,
        snowflake_conn_id=SNOWFLAKE_CONN_ID,
        autocommit=True,
        dag=dag,
    )

    load_table

    return dag
# ======== DAG DEFINITIONS ========
edw_table_A = get_db_dag(
    dag_id='edw_table_A',
    start_date=datetime.datetime(2020, 5, 21),
    schedule_interval='0 5 * * *',
    max_taskrun=3,  # Minutes
    max_dagrun=1,   # Hours
    load_sql='recon/extract.sql',
)
When I replaced Variable.get('sql_path') with the Jinja template '{{var.value.sql_path}}' as below and ran the DAG, it threw the error below; it was not able to find the file in the subdirectory of the SQL folder.
tmpl_search_path = []
for subdir in ['bus/', 'audit/', 'business/snflk/']:
    tmpl_search_path.append(os.path.join('{{var.value.sql_path}}', subdir))
I got the following error:
jinja2.exceptions.TemplateNotFound: extract.sql
Templates are not rendered everywhere in a DAG script; usually they are rendered only in the templated parameters of operators. So, unless you pass the elements of tmpl_search_path to some templated parameter, {{var.value.sql_path}} will not be rendered.
The template_searchpath of DAG is not templated. That is why you cannot pass Jinja templates to it.
The options I can think of are:
Hardcode the value of Variable.get('sql_path') in the pipeline script.
Save the value of Variable.get('sql_path') in a configuration file and read it from there in the pipeline script.
Move the Variable.get() call out of the for-loop. This will result in three times fewer requests to the database (a sketch of this follows below).
More info about templating in Airflow.
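As a minimal sketch of the last option, assuming the sql_path Variable from the question, the Variable.get() call can be hoisted out of the loop so the DAG file hits the database once per parse instead of three times:

import os

from airflow.models import Variable

# One metadata-database hit per DAG-file parse instead of one per subdirectory.
sql_path = Variable.get('sql_path')

tmpl_search_path = [os.path.join(sql_path, subdir)
                    for subdir in ['business/', 'audit/', 'business/transform/']]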
I am trying to create dynamic DAGs and have the scheduler pick them up. I followed the reference at https://www.astronomer.io/guides/dynamically-generating-dags/ which works well. I changed it a bit as in the code below. I need help debugging the issue.
What I tried:
1. Test-ran the file. The DAG gets executed and globals() prints all the DAG objects, but somehow they are not listed in list_dags or in the UI.
from datetime import datetime, timedelta
import requests
import json
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.operators.http_operator import SimpleHttpOperator
def create_dag(dag_id,
               dag_number,
               default_args):

    def hello_world_py(*args):
        print('Hello World')
        print('This is DAG: {}'.format(str(dag_number)))

    dag = DAG(dag_id,
              schedule_interval="@hourly",
              default_args=default_args)

    with dag:
        t1 = PythonOperator(
            task_id='hello_world',
            python_callable=hello_world_py,
            dag_number=dag_number)

    return dag


def fetch_new_dags(**kwargs):
    for n in range(1, 10):
        print("=====================START=========\n")
        dag_id = "abcd_" + str(n)
        print(dag_id)
        print("\n")
        globals()[dag_id] = create_dag(dag_id, n, default_args)
    print(globals())


default_args = {
    'owner': 'diablo_admin',
    'depends_on_past': False,
    'start_date': datetime(2019, 8, 8),
    'email': ['airflow@example.com'],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'retry_delay': timedelta(minutes=1),
    'trigger_rule': 'none_skipped'
    # 'schedule_interval': '0 * * * *'
    # 'queue': 'bash_queue',
    # 'pool': 'backfill',
    # 'priority_weight': 10,
    # 'end_date': datetime(2016, 1, 1),
}

dag = DAG('testDynDags', default_args=default_args, schedule_interval='*/1 * * * *')

check_for_dags = PythonOperator(dag=dag,
                                task_id='tst_dyn_dag',
                                provide_context=True,
                                python_callable=fetch_new_dags
                                )

check_for_dags
I expected this to create 10 DAGs dynamically and add them to the scheduler.
I guess doing the following would fix it:
completely remove the global testDynDags dag and the tst_dyn_dag task (instantiation and invocation)
invoke your fetch_new_dags(..) method with the requisite arguments in global scope
Explanation
Dynamic dags / tasks merely means that you have well-defined logic at the time of writing the dag-definition file that can create tasks / dags with a known structure in a pre-defined fashion.
You can NOT determine the structure of your DAG at runtime (task execution). So, for instance, you cannot add n identical tasks to your DAG if the upstream task returned an integer value n. But you can iterate over a YAML file containing n segments and generate n tasks / dags.
So clearly, wrapping dag-generation code inside an Airflow task itself makes no sense.
UPDATE-1
From what is indicated in the comments, I infer that the requirement dictates that you revise the external source that feeds inputs (how many dags or tasks to create) to your DAG / task-generation script. While this is indeed a complex use-case, a simple way to achieve it is to create 2 separate DAGs.
One DAG runs once in a while and generates the inputs, which are stored in an external resource like an Airflow Variable (or any other external store like a file / S3 / database etc.)
The second DAG is constructed programmatically by reading that same datasource which was written by the first DAG
You can take inspiration from the Adding DAGs based on Variable value section
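A minimal sketch of the second DAG file under that pattern, assuming the first DAG has already written the number of DAGs to create into an Airflow Variable named dag_count (the Variable name is an assumption for illustration; create_dag and default_args are the ones from the question):

from airflow.models import Variable

# Parse-time read of the input produced by the first DAG; defaults to 0 until it exists.
dag_count = int(Variable.get('dag_count', default_var=0))

for n in range(1, dag_count + 1):
    dag_id = 'abcd_{}'.format(n)
    globals()[dag_id] = create_dag(dag_id, n, default_args)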
While executing the following Python script using Cloud Composer, I get *** Task instance did not exist in the DB under the gcs2bq task log in Airflow.
Code:
import datetime
import os
import csv
import pandas as pd
import pip
from airflow import models
#from airflow.contrib.operators import dataproc_operator
from airflow.operators.bash_operator import BashOperator
from airflow.operators.python_operator import PythonOperator
from airflow.utils import trigger_rule
from airflow.contrib.operators import gcs_to_bq
from airflow.contrib.operators import bigquery_operator
print('''/-------/--------/------/
-------/--------/------/''')
yesterday = datetime.datetime.combine(
    datetime.datetime.today() - datetime.timedelta(1),
    datetime.datetime.min.time())

default_dag_args = {
    # Setting start date as yesterday starts the DAG immediately when it is
    # detected in the Cloud Storage bucket.
    'start_date': yesterday,
    # To email on failure or retry set 'email' arg to your email and enable
    # emailing here.
    'email_on_failure': False,
    'email_on_retry': False,
    # If a task fails, retry it once after waiting at least 5 minutes
    'retries': 1,
    'retry_delay': datetime.timedelta(minutes=5),
    'project_id': 'data-rubrics'
    # models.Variable.get('gcp_project')
}

# [START composer_quickstart_schedule]
with models.DAG(
        'composer_agg_quickstart',
        # Continue to run DAG once per day
        schedule_interval=datetime.timedelta(days=1),
        default_args=default_dag_args) as dag:
    # [END composer_quickstart_schedule]

    op_start = BashOperator(task_id='Initializing', bash_command='echo Initialized')
    # op_readwrite = PythonOperator(task_id='ReadAggWriteFile', python_callable=read_data)

    op_load = gcs_to_bq.GoogleCloudStorageToBigQueryOperator(
        task_id='gcs2bq',
        bucket='dr-mockup-data',
        source_objects=['sample.csv'],
        destination_project_dataset_table='data-rubrics.sample_bqtable',
        schema_fields=[{'name': 'a', 'type': 'STRING', 'mode': 'NULLABLE'},
                       {'name': 'b', 'type': 'FLOAT', 'mode': 'NULLABLE'}],
        write_disposition='WRITE_TRUNCATE',
        dag=dag)

    # op_write = PythonOperator(task_id='AggregateAndWriteFile', python_callable=write_data)

    op_start >> op_load
UPDATE:
Can you remove dag=dag from the gcs2bq task, since you are already using with models.DAG, and run your DAG again?
It might be because you have a dynamic start date. Your start_date should never be dynamic. Read this FAQ: https://airflow.apache.org/faq.html#what-s-the-deal-with-start-date
We recommend against using dynamic values as start_date, especially datetime.now() as it can be quite confusing. The task is triggered once the period closes, and in theory an @hourly DAG would never get to an hour after now as now() moves along.
Make your start_date static or use Airflow utils/macros:
import airflow

args = {
    'owner': 'airflow',
    'start_date': airflow.utils.dates.days_ago(2),
}
Okay, this was a stupid question on my part and apologies to everyone who wasted time here. I had another DAG running, due to which the one I was firing off was always in the queue. Also, I did not write the correct value in destination_project_dataset_table. Thanks and apologies to all who spent time.