Apache Airflow Task Instance state is blank

I have the dag config like below
args = {
    'owner': 'XXX',
    'depends_on_past': False,
    'start_date': datetime(2018, 2, 26),
    'email': ['sample@sample.com'],
    'email_on_failure': False,
    'retries': 1,
    'retry_delay': timedelta(minutes=5)
}
dag = DAG(dag_id='Daily_Report',
          default_args=args,
          schedule_interval='0 11 * * *',
          dagrun_timeout=timedelta(seconds=30))
I have a bash operator and a Databricks operator:
run_this = BashOperator(task_id='run_report',
                        bash_command=templated_command,
                        dag=dag)

notebook_run = DatabricksSubmitRunOperator(
    task_id='notebook_run',
    notebook_task=notebook_task,
    existing_cluster_id='xxxx',
    dag=dag)
I'm wiring them up like this: run_this.set_downstream(notebook_run)
The bash operator runs fine, but the Databricks operator doesn't run; it just stays in a blank state, like below:
[screenshot: blank task status in Airflow]
Is there anything I'm missing? I'm using the Airflow version from Databricks here: https://github.com/databricks/incubator-airflow

Try highlighting the text in the white label. It will probably say "None". White on white is terrible UX, so I'm not sure why Airflow does it that way.

Related

Airflow does not run DAGs

Context: I successfully installed Airflow on EC2 and changed some settings: executor to LocalExecutor, sql_alchemy_conn to postgresql+psycopg2://postgres@localhost:5432/airflow, max_threads to 10.
My problem is that when I create a DAG which should run every day, everything is fine, but when I create a DAG that should run at, say, 10am on Monday and Wednesday, Airflow does not run it at all. Does anybody know what I could be doing wrong, and what I should do to fix this issue?
DAG for the script which runs fine:
import airflow
from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from datetime import timedelta

args = {
    'owner': 'arseniyy123',
    'start_date': airflow.utils.dates.days_ago(1),
    'depends_on_past': False,
    'email': ['exam@exam.com'],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'retry_delay': timedelta(minutes=1),
}

dag = DAG(
    'daily_script',
    default_args=args,
    description='daily_script',
    schedule_interval="0 10 * * *",
)

t1 = BashOperator(
    task_id='daily',
    bash_command='cd /root/ && python3 DAILY_WORK.py',
    dag=dag)

t1
DAG for the script which should run on Monday and Wednesday, but does not run at all:
import airflow
from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from datetime import timedelta

args = {
    'owner': 'arseniyy123',
    'start_date': airflow.utils.dates.days_ago(1),
    'depends_on_past': False,
    'email': ['exam@exam.com'],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'retry_delay': timedelta(minutes=1),
}

dag = DAG(
    'monday_wednesday',
    default_args=args,
    description='monday_wednesday',
    schedule_interval="0 10 * * 1,3",
)

t1 = BashOperator(
    task_id='monday_wednesday',
    bash_command='cd /root/ && python3 not_daily_work.py',
    dag=dag)

t1
I also have some problems with the scheduler: it tends to die after running for more than 10 hours. Does anybody know why that happens?
Thank you in advance!
Can you try changing the start_date to a static datetime, e.g. datetime(2020, 3, 20), instead of using airflow.utils.dates.days_ago(1)?
Maybe read through the scheduling examples here, to understand why your code didn't work. From that documentation:
Let’s Repeat That: The scheduler runs your job one schedule_interval AFTER the start date, at the END of the period.
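Putting that advice together, here is a minimal sketch of the Monday/Wednesday DAG with a static start_date (the exact date is illustrative; pick the point you want scheduling to begin from, remembering the first run fires one schedule_interval after it):

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

args = {
    'owner': 'arseniyy123',
    'start_date': datetime(2020, 3, 16),  # static, instead of days_ago(1)
    'depends_on_past': False,
    'retries': 1,
    'retry_delay': timedelta(minutes=1),
}

dag = DAG(
    'monday_wednesday',
    default_args=args,
    schedule_interval='0 10 * * 1,3',  # 10:00 on Monday and Wednesday
)

t1 = BashOperator(
    task_id='monday_wednesday',
    bash_command='cd /root/ && python3 not_daily_work.py',
    dag=dag)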

Airflow: DAG status is success even when no task ran

In 2 out of 10 runs, the DAG status is automatically set to success even when no tasks inside it ran. Following are the DAG args that were passed, and its tree view.
args = {
    'owner': 'xyz',
    'depends_on_past': False,
    'catchup': False,
    'start_date': datetime(2019, 7, 8),
    'email': ['a@b.c'],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 0,
    'provide_context': True,
    'retry_delay': timedelta(minutes=2)
}
And I am using the DAG as a context manager like this:
with DAG(PARENT_DAG_NAME, default_args=args, schedule_interval='30 * * * *') as main_dag:
    task1 = DummyOperator(
        task_id='Raw_Data_Ingestion_Started',
    )
    task2 = DummyOperator(
        task_id='Raw_Data_Ingestion_Completed',
    )
    task1 >> task2
Any idea what could be the issue? Is it something I need to change in the config file? And this behaviour is not periodic.
According to the official Airflow documentation on DummyOperator:
Operator that does literally nothing. It can be used to group tasks in a DAG.

Apache Airflow DAG schedules at midnight UTC

I created an Apache Airflow DAG with the following default args. I want this DAG to run every day at 10 PM UTC, but it always runs at 12 AM UTC and ignores the time I set in start_date. Is this not the right way? Thanks.
default_args = {
    'owner': config.OWNER,
    'depends_on_past': False,
    'start_date': datetime(2018, 10, 14, 22, 0, 0),
    'email': [config.ALERT_EMAIL],
    'email_on_failure': True,
    'email_on_retry': False,
    'retry_delay': timedelta(minutes=1),
    'retries': 2,
}
# DAG
dag = DAG('Test',
          default_args=default_args,
          description='Initial setup',
          schedule_interval='@daily')
You can also use cron format in your schedule_interval argument, like this:
# DAG
dag = DAG('Test',
          default_args=default_args,
          description='Initial setup',
          schedule_interval='0 22 * * *')
Regarding the schedule_interval you have at least three options:
datetime.timedelta
dateutil.relativedelta
cron style string
The schedule_interval defines how often the DAG runs. If it is a timedelta, it is added to your latest task instance's execution_date to figure out the next schedule. And keep in mind: the start_date for the task determines the execution_date of the first task instance.
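For illustration, here is how each of the three forms plugs into the same argument (the DAG ids and intervals are made up for the example):

from datetime import datetime, timedelta

from dateutil.relativedelta import relativedelta

from airflow import DAG

default_args = {'start_date': datetime(2018, 10, 14, 22, 0, 0)}

# 1. datetime.timedelta: next run = latest execution_date + 2 hours
dag1 = DAG('every_two_hours', default_args=default_args,
           schedule_interval=timedelta(hours=2))

# 2. dateutil.relativedelta: calendar-aware, e.g. once a month
dag2 = DAG('monthly', default_args=default_args,
           schedule_interval=relativedelta(months=1))

# 3. cron style string: 22:00 UTC every day, as in the answer above
dag3 = DAG('daily_at_22_utc', default_args=default_args,
           schedule_interval='0 22 * * *')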
All of the above is correct.
I have encountered an issue where, in Airflow 2.0, schedule_interval is ignored when put in default_args. When I removed it from there and put it in the DAG declaration instead, everything worked. I could verify this by looking at the DAG details in the UI.
Example:
default_args = {
    'owner': 'Hector Hoffman',
    'depends_on_past': False,
    'start_date': start_date,
    'schedule_interval': '0 5 * * *',
    'email': ['hector@email.com'],
    'email_on_failure': True,
    'email_on_retry': False,
    'retries': 0,
    'on_failure_callback': task_fail_slack_alert
}
Results in: [screenshot: schedule_interval not applied]
Whereas, when I put it in the DAG:
with models.DAG(
    "dealstampede_workflow",
    default_args=default_args,
    catchup=False,
    schedule_interval='0 5 * * *'
) as dag:
Results in: [screenshot: schedule_interval applied]
If anyone has any insight as to why schedule_interval doesn't work in default_args, I'd appreciate feedback. Thanks.

How to limit Airflow to run only one instance of a DAG run at a time?

I want the tasks in the DAG to all finish before the 1st task of the next run gets executed.
I have max_active_runs = 1, but this still happens.
default_args = {
    'depends_on_past': True,
    'wait_for_downstream': True,
    'max_active_runs': 1,
    'start_date': datetime(2018, 3, 4),
    'owner': 't.n',
    'email': ['t.n@example.com'],
    'email_on_failure': True,
    'email_on_retry': False,
    'retries': 3,
    'retry_delay': timedelta(minutes=4)
}

dag = DAG('example', default_args=default_args, schedule_interval=schedule_interval)
(All of my tasks are dependent on the previous task. Airflow version is 1.8.0)
Thank you
I changed it to pass max_active_runs as an argument of DAG() instead of putting it in default_args, and it worked.
Thanks SimonD for giving me the idea, even though your answer didn't point to it directly.
You've put the 'max_active_runs': 1 into the default_args parameter and not into the correct spot.
max_active_runs is a constructor argument for a DAG and should not be put into the default_args dictionary.
Here is an example DAG that shows where you need to move it to:
dag_args = {
    'owner': 'Owner',
    # 'max_active_runs': 1,  # <--- Here is where you had it.
    'depends_on_past': False,
    'start_date': datetime(2018, 1, 1, 12, 0),
    'email_on_failure': False
}

sched = timedelta(hours=1)

dag = DAG(
    job_id,
    default_args=dag_args,
    schedule_interval=sched,
    max_active_runs=1  # <---- Here is where it is supposed to be.
)
If the tasks that your DAG is running are actually sub-DAGs, then you may need to pass max_active_runs into the sub-DAGs too, but I'm not 100% sure about this.
You can use XComs to do it. First add two PythonOperators, 'start' and 'end', to the DAG. Set the flow as:
start ---> ALL TASKS ----> end
'end' will always push a variable
last_success = context['execution_date'] to XCom (xcom_push). (This requires provide_context=True in the PythonOperators.)
And 'start' will always check XCom (xcom_pull) to see whether there exists a last_success variable with a value equal to the previous DagRun's execution_date, or to the DAG's start_date (to let the process start). A rough sketch is shown below.
Followed this answer
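For what it's worth, a rough sketch of that start/end gating (the task ids, the XCom key, and raising to block the run are illustrative choices, and dag is assumed to be defined as in the question):

from airflow.operators.python_operator import PythonOperator

def push_last_success(**context):
    # 'end' records this run's execution_date so the next run can
    # verify that this run completed
    context['ti'].xcom_push(key='last_success',
                            value=str(context['execution_date']))

def gate_on_previous_run(**context):
    # 'start' pulls the marker left by the previous run's 'end' task
    last = context['ti'].xcom_pull(task_ids='end', key='last_success',
                                   include_prior_dates=True)
    first_run = context['execution_date'] == dag.default_args['start_date']
    # let the first run through; after that, require that 'end' ran for
    # the previous DagRun before this run may start
    if not first_run and last != str(context['prev_execution_date']):
        raise ValueError('previous DagRun has not completed yet')

start = PythonOperator(task_id='start', python_callable=gate_on_previous_run,
                       provide_context=True, dag=dag)
end = PythonOperator(task_id='end', python_callable=push_last_success,
                     provide_context=True, dag=dag)

# wire as: start >> <all other tasks> >> end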
Actually you should set dag_concurrency = 1, e.g. via the AIRFLOW__CORE__DAG_CONCURRENCY=1 environment variable. Worked for me.

unwanted DAG runs in Airflow

I configured my DAG like this:
default_args = {
    'owner': 'Aviv',
    'depends_on_past': False,
    'start_date': datetime(2017, 1, 1),
    'email': ['aviv@oron.com'],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 0,
    'retry_delay': timedelta(minutes=1)
}

dag = DAG(
    'MyDAG',
    schedule_interval=timedelta(minutes=3),
    default_args=default_args,
    catchup=False
)
and for some reason, when I un-pause the DAG, it is executed twice immediately.
Any idea why? And is there any rule I can apply to tell this DAG to never have more than one run active at the same time?
You can specify max_active_runs like this:
dag = airflow.DAG(
    'customer_staging',
    schedule_interval="@daily",
    dagrun_timeout=timedelta(minutes=60),
    template_searchpath=tmpl_search_path,
    default_args=args,
    max_active_runs=1)
I've never seen this happen. Are you sure those runs are not backfills? See: https://stackoverflow.com/a/47953439/9132848
I think it's because you have missed the scheduled time, and Airflow is backfilling it automatically when you turn the DAG on again. You can disable this by setting catchup_by_default = False in airflow.cfg.
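For reference, that flag lives in the [scheduler] section of airflow.cfg:

[scheduler]
catchup_by_default = False

Setting catchup=False on the individual DAG, as in the snippet above, achieves the same thing for a single DAG.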
