Scheduled DAG is not running. How do I diagnose the problem? - airflow

I am using Airflow 1.8.1. I have a DAG that I believe I have scheduled to run every 5 minutes, but it isn't doing so:
Ignore the 2 successful DAG runs; those were manually triggered.
When I look at the scheduler log for that DAG, I see:
[2019-04-26 22:03:35,601] {jobs.py:343} DagFileProcessor839 INFO - Started process (PID=5653) to work on /usr/local/airflow/dags/retrieve_airflow_artifacts.py
[2019-04-26 22:03:35,606] {jobs.py:1525} DagFileProcessor839 INFO - Processing file /usr/local/airflow/dags/retrieve_airflow_artifacts.py for tasks to queue
[2019-04-26 22:03:35,607] {models.py:168} DagFileProcessor839 INFO - Filling up the DagBag from /usr/local/airflow/dags/retrieve_airflow_artifacts.py
[2019-04-26 22:03:36,083] {jobs.py:1539} DagFileProcessor839 INFO - DAG(s) ['retrieve_airflow_artifacts'] retrieved from /usr/local/airflow/dags/retrieve_airflow_artifacts.py
[2019-04-26 22:03:36,112] {jobs.py:1172} DagFileProcessor839 INFO - Processing retrieve_airflow_artifacts
[2019-04-26 22:03:36,126] {jobs.py:566} DagFileProcessor839 INFO - Skipping SLA check for <DAG: retrieve_airflow_artifacts> because no tasks in DAG have SLAs
[2019-04-26 22:03:36,132] {models.py:323} DagFileProcessor839 INFO - Finding 'running' jobs without a recent heartbeat
[2019-04-26 22:03:36,132] {models.py:329} DagFileProcessor839 INFO - Failing jobs without heartbeat after 2019-04-26 21:58:36.132768
[2019-04-26 22:03:36,139] {jobs.py:351} DagFileProcessor839 INFO - Processing /usr/local/airflow/dags/retrieve_airflow_artifacts.py took 0.539 seconds
[2019-04-26 22:04:06,776] {jobs.py:343} DagFileProcessor845 INFO - Started process (PID=5678) to work on /usr/local/airflow/dags/retrieve_airflow_artifacts.py
[2019-04-26 22:04:06,780] {jobs.py:1525} DagFileProcessor845 INFO - Processing file /usr/local/airflow/dags/retrieve_airflow_artifacts.py for tasks to queue
[2019-04-26 22:04:06,780] {models.py:168} DagFileProcessor845 INFO - Filling up the DagBag from /usr/local/airflow/dags/retrieve_airflow_artifacts.py
[2019-04-26 22:04:07,258] {jobs.py:1539} DagFileProcessor845 INFO - DAG(s) ['retrieve_airflow_artifacts'] retrieved from /usr/local/airflow/dags/retrieve_airflow_artifacts.py
[2019-04-26 22:04:07,287] {jobs.py:1172} DagFileProcessor845 INFO - Processing retrieve_airflow_artifacts
[2019-04-26 22:04:07,301] {jobs.py:566} DagFileProcessor845 INFO - Skipping SLA check for <DAG: retrieve_airflow_artifacts> because no tasks in DAG have SLAs
[2019-04-26 22:04:07,307] {models.py:323} DagFileProcessor845 INFO - Finding 'running' jobs without a recent heartbeat
[2019-04-26 22:04:07,307] {models.py:329} DagFileProcessor845 INFO - Failing jobs without heartbeat after 2019-04-26 21:59:07.307607
[2019-04-26 22:04:07,314] {jobs.py:351} DagFileProcessor845 INFO - Processing /usr/local/airflow/dags/retrieve_airflow_artifacts.py took 0.538 seconds
over and over again. I've compared this to the log for a DAG on another server, and from that comparison I know there should be extra log records indicating that the DAG had been triggered by the scheduler; there are no such records in this log file.
Here's how the schedule of my DAG is defined:
args = {
    'owner': 'airflow',
    'start_date': (datetime.datetime.now() - datetime.timedelta(minutes=5))
}

dag = DAG(
    dag_id='retrieve_airflow_artifacts',
    default_args=args,
    schedule_interval="0,5,10,15,20,25,30,35,40,45,50,55 * * * *")
Could someone help me figure out why my DAG isn't running? I've looked high and low and cannot work it out.

If I had to guess, I would say your start_date is causing you some issues.
Change your args to have a static start_date and to prevent the DAG from running on past intervals:
args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': datetime(2019, 4, 27)  # year, month, day
}
Also, just to make it easier to read, change your DAG definition to the following (same functionality):
dag = DAG(
    dag_id='retrieve_airflow_artifacts',
    default_args=args,
    schedule_interval="*/5 * * * *"
)
That should allow the scheduler to pick it up!
It's generally recommended not to set your start_date dynamically.
Taken from the Airflow FAQ:
We recommend against using dynamic values as start_date, especially
datetime.now(), as it can be quite confusing. The task is triggered
once the period closes, and in theory an @hourly DAG would never get
to an hour after now, as now() moves along.
Another SO question on this: why dynamic start dates cause issues
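Putting the pieces together, here is a minimal sketch of the corrected DAG file (the dag_id and 5-minute schedule come from the question; the static start_date, the depends_on_past flag and the placeholder DummyOperator are illustrative assumptions):
from datetime import datetime

from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator

# Static start_date: the scheduler triggers a run once each 5-minute
# interval after this date has closed.
args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': datetime(2019, 4, 27),
}

dag = DAG(
    dag_id='retrieve_airflow_artifacts',
    default_args=args,
    schedule_interval='*/5 * * * *',  # equivalent to listing 0,5,10,...,55
)

# Placeholder task so the file parses on its own; the real tasks from the
# original DAG would go here instead.
placeholder = DummyOperator(task_id='placeholder', dag=dag)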

Related

About ExternalTaskMarker & ExternalTaskSensor in Apache Airflow

Suppose there are two DAGs, DAG1 and DAG2. Do I need to schedule the ExternalTaskMarker in DAG1 and the ExternalTaskSensor in DAG2 at the same time, or how should I schedule them?
There are two DAGs, DAG1 and DAG2. I define an ExternalTaskMarker in DAG1, which is scheduled at 15 * * * *, and an ExternalTaskSensor in DAG2, which is scheduled at 0 * * * *. I see that the ExternalTaskMarker runs fine in DAG1, but the ExternalTaskSensor in DAG2 fails because of the error below:
[2022-11-05 15:32:12,091] {{taskinstance.py:901}} INFO - Executing <Task(ExternalTaskSensor): external_task_sensor-plume_dev_download_dimensions_hourly-pan.device_master_dimension> on 2022-11-03T20:00:00+00:00
[2022-11-05 15:32:12,095] {{standard_task_runner.py:54}} INFO - Started process 48476 to run task
[2022-11-05 15:32:12,123] {{standard_task_runner.py:77}} INFO - Running: ['airflow', 'run', 'sd_plume_dev_incoming_data_quality_hourly-pan', 'external_task_sensor-plume_dev_download_dimensions_hourly-pan.device_master_dimension', '2022-11-03T20:00:00+00:00', '--job_id', '56965917', '--pool', 'default_pool', '--raw', '-sd', 'DAGS_FOLDER/sd_plume_dev_incoming_data_quality_hourly-pan.py', '--cfg_path', '/tmp/tmpy1p4tag5']
[2022-11-05 15:32:12,123] {{standard_task_runner.py:78}} INFO - Job 56965917: Subtask external_task_sensor-plume_dev_download_dimensions_hourly-pan.device_master_dimension
[2022-11-05 15:32:12,192] {{logging_mixin.py:112}} INFO - Running %s on host %s <TaskInstance: sd_plume_dev_incoming_data_quality_hourly-pan.external_task_sensor-plume_dev_download_dimensions_hourly-pan.device_master_dimension 2022-11-03T20:00:00+00:00 [running]> itclab21
[2022-11-05 15:32:12,278] {{external_task_sensor.py:117}} INFO - Poking for plume_dev_download_dimensions_hourly-pan.device_master_dimension on 2022-11-03T20:00:00+00:00 ...
[2022-11-05 15:32:12,301] {{taskinstance.py:1141}} INFO - Rescheduling task, marking task as UP_FOR_RESCHEDULE
[2022-11-05 15:32:17,075] {{local_task_job.py:102}} INFO - Task exited with return code 0
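No answer is shown above, but the log hints at the cause: by default an ExternalTaskSensor pokes the external DAG for the same execution_date as its own run (here 2022-11-03T20:00:00), and DAG1, scheduled at 15 * * * *, never has a run at minute 0. Below is a hedged sketch of one way to line the two schedules up using execution_delta; the task ids, start_date and the 45-minute offset are illustrative assumptions, and the import path matches Airflow 1.10.x, which the log appears to come from:
from datetime import datetime, timedelta

from airflow import DAG
from airflow.sensors.external_task_sensor import ExternalTaskSensor

# Illustrative stand-in for DAG2, scheduled at minute 0 of every hour.
dag2 = DAG(
    dag_id='DAG2',
    start_date=datetime(2022, 11, 1),
    schedule_interval='0 * * * *',
)

wait_for_dag1 = ExternalTaskSensor(
    task_id='wait_for_dag1_marker',
    external_dag_id='DAG1',                   # DAG that holds the ExternalTaskMarker
    external_task_id='external_task_marker',  # illustrative task id of that marker
    # DAG2's run for 20:00 should wait on DAG1's run for 19:15, i.e. the
    # run whose execution_date is 45 minutes earlier.
    execution_delta=timedelta(minutes=45),
    mode='reschedule',
    timeout=60 * 60,
    dag=dag2,
)
If the offset between the two schedules is not constant, execution_date_fn can be used instead of execution_delta.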

Airflow Execution Timeout not working well

I've set the 'execution_timeout': timedelta(seconds=300) parameter on many tasks. When the execution timeout is set on a task downloading data from Google Analytics, it works properly: after ~300 seconds the task is set to failed. That task downloads some data from an API (Python), then does some transformations (Python) and loads the data into PostgreSQL.
Then I have a task which executes only one PostgreSQL function. Its execution sometimes takes more than 300 seconds, but I get the following (and the task is marked as finished successfully):
*** Reading local file: /home/airflow/airflow/logs/bulk_replication_p2p_realtime/t1/2020-07-20T00:05:00+00:00/1.log
[2020-07-20 05:05:35,040] {__init__.py:1139} INFO - Dependencies all met for <TaskInstance: bulk_replication_p2p_realtime.t1 2020-07-20T00:05:00+00:00 [queued]>
[2020-07-20 05:05:35,051] {__init__.py:1139} INFO - Dependencies all met for <TaskInstance: bulk_replication_p2p_realtime.t1 2020-07-20T00:05:00+00:00 [queued]>
[2020-07-20 05:05:35,051] {__init__.py:1353} INFO -
--------------------------------------------------------------------------------
[2020-07-20 05:05:35,051] {__init__.py:1354} INFO - Starting attempt 1 of 1
[2020-07-20 05:05:35,051] {__init__.py:1355} INFO -
--------------------------------------------------------------------------------
[2020-07-20 05:05:35,098] {__init__.py:1374} INFO - Executing <Task(PostgresOperator): t1> on 2020-07-20T00:05:00+00:00
[2020-07-20 05:05:35,099] {base_task_runner.py:119} INFO - Running: ['airflow', 'run', 'bulk_replication_p2p_realtime', 't1', '2020-07-20T00:05:00+00:00', '--job_id', '958216', '--raw', '-sd', 'DAGS_FOLDER/bulk_replication_p2p_realtime.py', '--cfg_path', '/tmp/tmph11tn6fe']
[2020-07-20 05:05:37,348] {base_task_runner.py:101} INFO - Job 958216: Subtask t1 [2020-07-20 05:05:37,347] {settings.py:182} INFO - settings.configure_orm(): Using pool settings. pool_size=10, pool_recycle=1800, pid=26244
[2020-07-20 05:05:39,503] {base_task_runner.py:101} INFO - Job 958216: Subtask t1 [2020-07-20 05:05:39,501] {__init__.py:51} INFO - Using executor LocalExecutor
[2020-07-20 05:05:39,857] {base_task_runner.py:101} INFO - Job 958216: Subtask t1 [2020-07-20 05:05:39,856] {__init__.py:305} INFO - Filling up the DagBag from /home/airflow/airflow/dags/bulk_replication_p2p_realtime.py
[2020-07-20 05:05:39,894] {base_task_runner.py:101} INFO - Job 958216: Subtask t1 [2020-07-20 05:05:39,894] {cli.py:517} INFO - Running <TaskInstance: bulk_replication_p2p_realtime.t1 2020-07-20T00:05:00+00:00 [running]> on host dwh2-airflow-dev
[2020-07-20 05:05:39,938] {postgres_operator.py:62} INFO - Executing: CALL dw_system.bulk_replicate(p_graph_name=>'replication_p2p_realtime',p_group_size=>4 , p_group=>1, p_dag_id=>'bulk_replication_p2p_realtime', p_task_id=>'t1')
[2020-07-20 05:05:39,960] {logging_mixin.py:95} INFO - [2020-07-20 05:05:39,953] {base_hook.py:83} INFO - Using connection to: id: postgres_warehouse. Host: XXX Port: 5432, Schema: XXXX Login: XXX Password: XXXXXXXX, extra: {}
[2020-07-20 05:05:39,973] {logging_mixin.py:95} INFO - [2020-07-20 05:05:39,972] {dbapi_hook.py:171} INFO - CALL dw_system.bulk_replicate(p_graph_name=>'replication_p2p_realtime',p_group_size=>4 , p_group=>1, p_dag_id=>'bulk_replication_p2p_realtime', p_task_id=>'t1')
[2020-07-20 05:23:21,450] {logging_mixin.py:95} INFO - [2020-07-20 05:23:21,449] {timeout.py:42} ERROR - Process timed out, PID: 26244
[2020-07-20 05:23:36,453] {logging_mixin.py:95} INFO - [2020-07-20 05:23:36,452] {jobs.py:2562} INFO - Task exited with return code 0
Does anyone know how to enforce the execution timeout for such long-running functions? It seems that the execution timeout is only evaluated once the PG function finishes.
Airflow uses the signal module from the standard library to effect a timeout. Airflow hooks into these system signals to request that the calling process be notified in N seconds and, should the process still be inside the context (see the __enter__ and __exit__ methods on the timeout class), it raises an AirflowTaskTimeout exception.
Unfortunately for this situation, there are certain classes of system operations that cannot be interrupted. This is actually called out in the signal documentation:
A long-running calculation implemented purely in C (such as regular expression matching on a large body of text) may run uninterrupted for an arbitrary amount of time, regardless of any signals received. The Python signal handlers will be called when the calculation finishes.
To which we say, "But I'm not doing a long-running calculation in C!" -- yes, but for Airflow this is almost always due to uninterruptible I/O operations.
The last sentence of that quote nicely explains why the timeout handler is still triggered even after the task is (frustratingly!) allowed to finish, well beyond your requested timeout.
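A minimal sketch of the signal-based pattern the answer describes (this illustrates the general mechanism, not Airflow's actual implementation; POSIX only):
import signal


class timeout(object):
    """Raise an exception if the guarded block runs longer than `seconds`."""

    def __init__(self, seconds):
        self.seconds = seconds

    def handle_timeout(self, signum, frame):
        # Runs when SIGALRM fires while we are still inside the with-block,
        # but only once the interpreter regains control -- a blocking call
        # into C or the OS can delay this arbitrarily.
        raise TimeoutError('Timed out after %d seconds' % self.seconds)

    def __enter__(self):
        signal.signal(signal.SIGALRM, self.handle_timeout)
        signal.alarm(self.seconds)

    def __exit__(self, exc_type, exc_value, traceback):
        signal.alarm(0)  # cancel any pending alarm


# Pure-Python work is interrupted roughly 5 seconds in; an uninterruptible
# C-level or I/O call may run to completion before the handler fires.
with timeout(5):
    while True:  # placeholder for the long-running work
        pass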

How to fix "DAG seems to be missing"?

I want to run a simple DAG, "test_update_bq", but when I go to the web UI on localhost I see this: DAG "test_update_bq" seems to be missing.
There are no errors when I run airflow initdb. Also, when I run airflow test test_update_bq update_table_sql 2015-06-01, it completes successfully and the table is updated in BQ. The DAG:
from airflow import DAG
from airflow.contrib.operators.bigquery_operator import BigQueryOperator
from datetime import datetime, timedelta
default_args = {
    'owner': 'Anna',
    'depends_on_past': True,
    'start_date': datetime(2017, 6, 2),
    'email': ['airflow@airflow.com'],
    'email_on_failure': True,
    'email_on_retry': False,
    'retries': 5,
    'retry_delay': timedelta(minutes=5),
}

schedule_interval = "00 21 * * *"

# Define DAG: set ID and assign default args and schedule interval
dag = DAG('test_update_bq', default_args=default_args, schedule_interval=schedule_interval,
          template_searchpath=['/home/ubuntu/airflow/dags/sql_bq'])

update_task = BigQueryOperator(
    dag=dag,
    allow_large_results=True,
    task_id='update_table_sql',
    sql='update_bq.sql',
    use_legacy_sql=False,
    bigquery_conn_id='test'
)

update_task
I would be grateful for any help.
The scheduler log (/logs/scheduler):
[2019-10-10 11:28:53,308] {logging_mixin.py:95} INFO - [2019-10-10 11:28:53,308] {dagbag.py:90} INFO - Filling up the DagBag from /home/ubuntu/airflow/dags/update_bq.py
[2019-10-10 11:28:53,333] {scheduler_job.py:1532} INFO - DAG(s) dict_keys(['test_update_bq']) retrieved from /home/ubuntu/airflow/dags/update_bq.py
[2019-10-10 11:28:53,383] {scheduler_job.py:152} INFO - Processing /home/ubuntu/airflow/dags/update_bq.py took 0.082 seconds
[2019-10-10 11:28:56,315] {logging_mixin.py:95} INFO - [2019-10-10 11:28:56,315] {settings.py:213} INFO - settings.configure_orm(): Using pool settings. pool_size=5, max_overflow=10, pool_recycle=3600, pid=11761
[2019-10-10 11:28:56,318] {scheduler_job.py:146} INFO - Started process (PID=11761) to work on /home/ubuntu/airflow/dags/update_bq.py
[2019-10-10 11:28:56,324] {scheduler_job.py:1520} INFO - Processing file /home/ubuntu/airflow/dags/update_bq.py for tasks to queue
[2019-10-10 11:28:56,325] {logging_mixin.py:95} INFO - [2019-10-10 11:28:56,325] {dagbag.py:90} INFO - Filling up the DagBag from /home/ubuntu/airflow/dags/update_bq.py
[2019-10-10 11:28:56,350] {scheduler_job.py:1532} INFO - DAG(s) dict_keys(['test_update_bq']) retrieved from /home/ubuntu/airflow/dags/update_bq.py
[2019-10-10 11:28:56,399] {scheduler_job.py:152} INFO - Processing /home/ubuntu/airflow/dags/update_bq.py took 0.081 seconds
Restarting the Airflow webserver helped.
I killed the gunicorn process on Ubuntu and then restarted the Airflow webserver.
This error is usually due to an exception happening when Airflow tries to parse a DAG. The DAG gets registered in the metastore (and is therefore visible in the UI), but it wasn't parsed by Airflow. Take a look at the Airflow logs; you might see an exception causing this error.
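One quick way to surface such a parsing exception is to build a DagBag by hand and inspect its import_errors. A hedged sketch, using the dags folder path from the question:
from airflow.models import DagBag

# Parse the dags folder the same way the scheduler does and print any
# exception raised while importing a DAG file.
dag_bag = DagBag('/home/ubuntu/airflow/dags')
for filename, stacktrace in dag_bag.import_errors.items():
    print(filename)
    print(stacktrace)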
None of the responses helped me solve this issue.
However, after spending some time, I found out how to see the exact problem.
In my case I ran Airflow (v2.4.0) using the Helm chart (v1.6.0) inside Kubernetes, which created multiple Docker containers. I got into a running container using ssh and executed two commands with Airflow's CLI, and this helped me a lot to debug and understand the problem:
airflow dags report
airflow dags reserialize
In my case the problem was that the database schema didn't match the Airflow version.

DummyOperator marked upstream_failed yet all upstream tasks marked success

I have an Airflow pipeline that produces 12 staging tables from Google Cloud Storage files and then performs some downstream processing. I have a DummyOperator to collect these tasks before proceeding to the next stages.
I'm getting an error on the wait_stg_load operator saying it's in an upstream_failed state, yet all of the upstream tasks are marked as success. The DAG itself is now marked as failed. If I clear the status on wait_stg_load, everything proceeds fine. Any ideas on what I'm doing wrong?
I am using Google Cloud Composer, which runs Airflow v1.9 on Python 3.
with DAG('load_data',
         default_args=default_args,
         schedule_interval='0 9 * * *',
         concurrency=3
         ) as dag:

    t2 = DummyOperator(
        task_id='wait_stg_load',
        dag=dag
    )

    for t in tables:
        t1 = GoogleCloudStorageToBigQueryOperator(
            task_id='load_stg_{}'.format(t.replace('.', '_')),
            bucket='my-bucket',
            source_objects=['data/{}.json'.format(t)],
            destination_project_dataset_table='{}.stg_{}'.format(DATASET_NAME, t.replace('.', '_')),
            schema_object='data/schemas/{}.json'.format(t),
            source_format='NEWLINE_DELIMITED_JSON',
            write_disposition='WRITE_TRUNCATE',
            dag=dag
        )
        t1 >> t2
Update 1
I believe this is a concurrency issue within Airflow. I noticed that the task does indeed fail at some point, but later runs anyway. It gets marked complete, yet the DummyOperator doesn't see that.
[2019-02-14 09:00:14,734] {cli.py:374} INFO - Running on host airflow-worker
[2019-02-14 09:00:16,686] {models.py:1196} INFO - Dependencies all met for <TaskInstance: dag.task 2019-02-13 09:00:00 [queued]>
[2019-02-14 09:00:16,694] {models.py:1189} INFO - Dependencies not met for <TaskInstance: dag.task 2019-02-13 09:00:00 [queued]>, dependency 'Task Instance Slots Available' FAILED: The maximum number of running tasks (3) for this task's DAG 'dag' has been reached.
[2019-02-14 09:00:16,694] {models.py:1389} WARNING -
-------------------------------------------------------------------------------
FIXME: Rescheduling due to concurrency limits reached at task runtime. Attempt 1 of 1. State set to NONE
-------------------------------------------------------------------------------
[2019-02-14 09:00:16,694] {models.py:1392} INFO - Queuing into pool None
[2019-02-14 09:00:26,619] {cli.py:374} INFO - Running on host airflow-worker
[2019-02-14 09:00:28,563] {models.py:1196} INFO - Dependencies all met for <TaskInstance: dag.task 2019-02-13 09:00:00 [failed]>
[2019-02-14 09:00:28,570] {models.py:1196} INFO - Dependencies all met for <TaskInstance: dag.task 2019-02-13 09:00:00 [failed]>
[2019-02-14 09:00:28,570] {models.py:1406} INFO -
-------------------------------------------------------------------------------
Starting attempt 1 of
-------------------------------------------------------------------------------
[2019-02-14 09:00:28,607] {models.py:1427} INFO - Executing <Task(GoogleCloudStorageToBigQueryOperator): task> on 2019-02-13 09:00:00
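No accepted answer is shown above, but the "Attempt 1 of 1. State set to NONE" line suggests one possible mitigation: give the load tasks retries, so a run that is bounced by the DAG-level concurrency limit is not left in a state the DummyOperator treats as failed. A hedged sketch (the values are illustrative assumptions, not a confirmed fix):
from datetime import datetime, timedelta

# Illustrative default_args only. The idea is that a task rescheduled
# because of the concurrency limit gets another attempt instead of
# exhausting its single attempt.
default_args = {
    'owner': 'airflow',
    'start_date': datetime(2019, 2, 1),
    'retries': 2,
    'retry_delay': timedelta(minutes=5),
}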

Airflow DAG Run triggered, but never executed?

I've found myself in a situation where I manually trigger a DAG Run (via airflow trigger_dag datablocks_dag), and the DAG Run shows up in the interface, but it then stays "Running" forever without actually doing anything.
When I inspect this DAG Run in the UI, I see the following:
I've got start_date set to datetime(2016, 1, 1), and schedule_interval set to @once. My understanding from reading the docs is that since start_date < now, the DAG will be triggered. The @once makes sure it only happens a single time.
My log file says:
[2017-07-11 21:32:05,359] {jobs.py:343} DagFileProcessor0 INFO - Started process (PID=21217) to work on /home/alex/Desktop/datablocks/tests/.airflow/dags/datablocks_dag.py
[2017-07-11 21:32:05,359] {jobs.py:534} DagFileProcessor0 ERROR - Cannot use more than 1 thread when using sqlite. Setting max_threads to 1
[2017-07-11 21:32:05,365] {jobs.py:1525} DagFileProcessor0 INFO - Processing file /home/alex/Desktop/datablocks/tests/.airflow/dags/datablocks_dag.py for tasks to queue
[2017-07-11 21:32:05,365] {models.py:176} DagFileProcessor0 INFO - Filling up the DagBag from /home/alex/Desktop/datablocks/tests/.airflow/dags/datablocks_dag.py
[2017-07-11 21:32:05,703] {models.py:2048} DagFileProcessor0 WARNING - schedule_interval is used for <Task(BashOperator): foo>, though it has been deprecated as a task parameter, you need to specify it as a DAG parameter instead
[2017-07-11 21:32:05,703] {models.py:2048} DagFileProcessor0 WARNING - schedule_interval is used for <Task(BashOperator): foo2>, though it has been deprecated as a task parameter, you need to specify it as a DAG parameter instead
[2017-07-11 21:32:05,704] {jobs.py:1539} DagFileProcessor0 INFO - DAG(s) dict_keys(['example_branch_dop_operator_v3', 'latest_only', 'tutorial', 'example_http_operator', 'example_python_operator', 'example_bash_operator', 'example_branch_operator', 'example_trigger_target_dag', 'example_short_circuit_operator', 'example_passing_params_via_test_command', 'test_utils', 'example_subdag_operator', 'example_subdag_operator.section-1', 'example_subdag_operator.section-2', 'example_skip_dag', 'example_xcom', 'example_trigger_controller_dag', 'latest_only_with_trigger', 'datablocks_dag']) retrieved from /home/alex/Desktop/datablocks/tests/.airflow/dags/datablocks_dag.py
[2017-07-11 21:32:07,083] {models.py:3529} DagFileProcessor0 INFO - Creating ORM DAG for datablocks_dag
[2017-07-11 21:32:07,234] {models.py:331} DagFileProcessor0 INFO - Finding 'running' jobs without a recent heartbeat
[2017-07-11 21:32:07,234] {models.py:337} DagFileProcessor0 INFO - Failing jobs without heartbeat after 2017-07-11 21:27:07.234388
[2017-07-11 21:32:07,240] {jobs.py:351} DagFileProcessor0 INFO - Processing /home/alex/Desktop/datablocks/tests/.airflow/dags/datablocks_dag.py took 1.881 seconds
What could be causing the issue?
Am I misunderstanding how start_date operates?
Or are the worrisome-seeming schedule_interval WARNING lines in the log file possibly the source of the problem?
The problem is that the DAG is paused.
In the screenshot you have provided, in the top left corner, flip the toggle to On and that should do it.
This is a common "gotcha" when starting out with Airflow.
The accepted answer is correct; this issue can be handled through the UI.
Another way of handling it is through configuration.
By default, all DAGs are paused on creation.
You can check the default setting in airflow.cfg:
# Are DAGs paused by default at creation
dags_are_paused_at_creation = True
Flipping the DAG to On (or setting dags_are_paused_at_creation = False before the DAG is created) will start your DAG after the next scheduler heartbeat.
Relevant git issue
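For completeness, an individual DAG can also be unpaused from the command line; in Airflow 1.x (the version in the question) the command is:
airflow unpause datablocks_dag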
I had the same issue, but it had to do with depends_on_past or wait_for_downstream.
