I see that we have the job table in the Airflow metadata database. I'd like to understand what the rows in that table mean.
I ask because when a task fails, the corresponding task_instance rows are set to failed, but the jobs in the job table stay in the running state indefinitely.
So I'd like to know what those jobs represent and whether it's OK to leave those instances running forever. Don't they impact scheduler parallelism or similar limits?
Related
I want to extract statistics on our Airflow processes using its database. One of these statistics is how many DAG runs finished smoothly, without any failures or reruns. Using the try_number column doesn't help, since it also counts automatic retries. I want to count only the cases in which an engineer had to rerun or resume the DAG run.
Thank you.
If I understand correctly, you want to get all DAG runs that never had a failed task in them? You can do this by excluding all DAG runs that have an entry in the task_fail table:
SELECT *
FROM dag_run
WHERE (dag_id, run_id) NOT IN (
    -- match on dag_id + run_id, since run_id alone is not unique across DAGs
    SELECT dag_id, run_id
    FROM dag_run
    JOIN task_fail USING (dag_id, run_id)
);
This of course would not catch other interventions an engineer might make, such as marking a task as successful when it is stuck in a running or deferred state.
One note: as of Airflow 2.5.0 you can add notes to DAG runs and task instances in the UI when manually intervening. Those notes are stored in the dag_run_note and task_instance_note tables.
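For example, a rough sketch of how you could list the annotated DAG runs straight from the metadata DB (this assumes Airflow 2.5+ and that the dag_run_note columns are named dag_run_id and content, which you should verify against your own schema):

from sqlalchemy import text

from airflow.utils.session import create_session

# DAG runs that carry a note usually indicate a manual intervention.
with create_session() as session:
    rows = session.execute(
        text("SELECT dag_run_id, content FROM dag_run_note WHERE content IS NOT NULL")
    ).fetchall()

for dag_run_id, content in rows:
    print(dag_run_id, content)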
We have an ETL DAG which is executed daily. The DAG and its tasks have the following parameters:
catchup=False
max_active_runs=1
depends_on_past=True
When we add a new task, no new DAG runs get scheduled because of the depends_on_past property, as all previous states for the new task are missing.
We would like to avoid having to run a manual backfill or mark previous runs in the UI, since that is easily forgotten, and we also have some dynamic DAGs where tasks get added automatically and halt future DAG executions.
Is there a way to automatically set past executions for new tasks as skipped by default, or some other approach that will allow future DAG runs to execute without human intervention?
We also considered creating a maintenance DAG that would insert the missing task executions in a skipped state, but we would rather not go this route.
Are we missing something, as this flow looks like a common thing to do?
As defined in the Airflow documentation on BaseOperator:
depends_on_past (bool) – when set to true, task instances will run
sequentially and only if the previous instance has succeeded or has
been skipped. The task instance for the start_date is allowed to run.
As long as there exists a previous instance of the task, if that previous instance is not in the success state, the current instance of the task cannot run.
When adding a task to a DAG with existing DAG runs, Airflow will create the missing task instances in the None state for all DAG runs. Unfortunately, it is not possible to set the default state of those task instances.
I do not believe there is a way to allow future task instances of a DAG with existing DAG runs to run without human intervention. Personally, for depends_on_past-enabled tasks, I mark the previous task instance as success either through the CLI or the Airflow UI.
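If you want to script that intervention instead of clicking through the UI, a minimal sketch could look like the one below (it assumes Airflow 2.2+, where task_instance has a run_id column, and direct ORM access to the metadata DB; the function name and its arguments are placeholders, not an official API):

from airflow.models import TaskInstance
from airflow.utils.session import provide_session
from airflow.utils.state import State

@provide_session
def mark_previous_success(dag_id, task_id, run_id, session=None):
    # Flip a single task instance to SUCCESS so depends_on_past stops blocking the next run.
    ti = (
        session.query(TaskInstance)
        .filter_by(dag_id=dag_id, task_id=task_id, run_id=run_id)
        .one_or_none()
    )
    if ti is not None:
        ti.state = State.SUCCESS
        session.merge(ti)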
Looks like there is a GitHub issue describing exactly what you are experiencing! Feel free to bump it or take a stab at it if you would like.
A hacky solution is to set depends_on_past to False, since max_active_runs=1 will implicitly guarantee the same behavior. As of the current Airflow versions, the scheduler orders both DAG runs and task instances by execution date before running them (checked on 1.10.x and also 2.0).
One difference is that the next execution will be scheduled even if the previous one fails. We solved this by retrying an effectively unlimited number of times (setting a very large retries value) and alerting if the retry number grows larger than some threshold.
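A minimal sketch of that setup (assuming Airflow 2.3+; the DAG id, task ids, schedule, and retry numbers are illustrative placeholders):

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.empty import EmptyOperator

with DAG(
    dag_id="daily_etl",
    schedule_interval="@daily",
    start_date=datetime(2023, 1, 1),
    catchup=False,
    max_active_runs=1,  # keeps DAG runs strictly sequential
    default_args={
        "depends_on_past": False,  # newly added tasks are no longer blocked by missing history
        "retries": 1000,           # effectively unlimited retries
        "retry_delay": timedelta(minutes=30),
    },
) as dag:
    extract = EmptyOperator(task_id="extract")
    load = EmptyOperator(task_id="load")
    extract >> load

Alerting once the retry count grows large could be hooked into on_retry_callback, which is left out of this sketch.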
I have a bunch of DAG runs in the UI. If I clear some task across all the DAG runs, some of the tasks are correctly triggered, whereas others are stuck in a cleared state.
At the moment I am simply using airflow CLI to backfill these tasks. This works, but it is unfortunate that I need an unbroken CLI session to complete a clearing/reprocessing scenario.
The reason for this is the naming (and thus type) of your DAG runs.
If you go into your Airflow metadata DB and open the dag_run table, you will see the run_id column. The scheduler identifies the runs it creates with "scheduled__" followed by a datetime. If you backfill, the run_id will be named "backfill_" followed by a datetime.
The scheduler will only check and queue tasks for run_ids that start with "scheduled__", denoting a DagRunType of "scheduled".
If you rename the run_id from backfill_ to scheduled__, the scheduler will identify the DAG runs and schedule the cleared tasks underneath them.
This SQL query will change backfill_ to scheduled__:
UPDATE dag_run
SET run_id = REPLACE(run_id, 'backfill_', 'scheduled__')
WHERE id IN (
    SELECT id FROM dag_run WHERE run_id::TEXT LIKE '%backfill_%'
);
-- Note that backfill_ is a single underscore and scheduled__ is two underscores.
-- This is not a mistake in my case, but please review the values in your table.
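If you want to double-check the exact run_id format the scheduler expects before renaming rows, here is a small sketch (assuming Airflow 2.x; the date is a placeholder and the exact timestamp formatting can differ between versions):

import pendulum

from airflow.models.dagrun import DagRun
from airflow.utils.types import DagRunType

logical_date = pendulum.datetime(2023, 1, 1, tz="UTC")
# Prints something like: scheduled__2023-01-01T00:00:00+00:00
print(DagRun.generate_run_id(DagRunType.SCHEDULED, logical_date))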
I am trying to use airflow trigger_dag dag_id to trigger my DAG, but it just shows the running state and doesn't do anything more.
I have searched through many questions, but people just say the DAG is paused. The problem is that my DAG is unpaused, yet it still stays in the running state.
Note: I can use one DAG to trigger another in the web UI, but it doesn't work from the command line.
I had the same issue many times. The state of the task is not running, and it is not queued either; it is stuck after we 'clear'. Sometimes I found the task going into the shutdown state before getting stuck, and after a long time the instance fails while the task status stays white. I have solved it in several ways; I can't say the exact reason or solution, but try one of these:
Try the trigger dag command again with the same execution date and time instead of the clear option.
Try a backfill; it will run only the unsuccessful instances.
Or try a different time within the same interval; it will create another instance which is fresh and does not have the issue.
I have an Airflow DAG that runs once daily at a specific time. The DAG runs a bunch of SQL scripts to create and load tables in a database, and the very last task updates permissions so that users can access the tables. Currently the permissions task requires that all previous SQL tasks have completed successfully, so none of the tables' permissions are updated if any of the table tasks fail.
To fix this I'd like to create another permissions task (i.e., a backup task) that runs at a preset time regardless of the status of any of the previous tasks (doesn't hurt to update permissions multiple times). If I don't specify a time different from the DAG's time, then because the new task has no dependencies, the task will try updating permissions before any of the tables have been updated. Is there a setting for me to pass a cron string to a specific task? Or is there an option to pass a timedelta on top of the task's DAG time? I need to run the task some amount of time after the DAG time.
If your permissions task can run no matter what the result of the upstream tasks is, I think the best option is simply to change the trigger_rule of your permissions task to all_done (the default is all_success).
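For illustration, a minimal sketch inside your existing DAG (the operator, task id, and callable are placeholders):

from airflow.operators.python import PythonOperator
from airflow.utils.trigger_rule import TriggerRule

def grant_permissions():
    ...  # run the GRANT statements here

update_permissions = PythonOperator(
    task_id="update_permissions",
    python_callable=grant_permissions,
    # Run once every upstream task has finished, whether it succeeded or failed.
    trigger_rule=TriggerRule.ALL_DONE,
)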
If you need to do something specific when there is a failure, you could consider creating a secondary DAG whose first step is a sensor that waits for the main DAG to complete with State.FAILED, and then runs your permissions task.
Have a look at ExternalTaskSensor when you want to establish a dependency between DAGs.
I haven't checked, but you might also need to use soft_fail on the sensor to prevent the secondary DAG from showing up as failed when the main DAG completes successfully.
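Putting those pieces together, a sketch of the secondary DAG could look like this (DAG ids, schedule, and timeout are placeholders; it assumes the two DAGs share the same logical dates, otherwise you would need execution_delta or execution_date_fn on the sensor):

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.sensors.external_task import ExternalTaskSensor
from airflow.utils.state import State

with DAG(
    dag_id="permissions_backup",
    schedule_interval="@daily",
    start_date=datetime(2023, 1, 1),
    catchup=False,
) as dag:
    wait_for_main_dag = ExternalTaskSensor(
        task_id="wait_for_main_dag",
        external_dag_id="main_etl",
        external_task_id=None,          # watch the whole DAG run, not a single task
        allowed_states=[State.FAILED],  # only proceed when the main DAG run failed
        soft_fail=True,                 # time out as skipped rather than failed
        timeout=6 * 60 * 60,
        mode="reschedule",
    )

    update_permissions = PythonOperator(
        task_id="update_permissions",
        python_callable=lambda: None,   # placeholder for the real permissions logic
    )

    wait_for_main_dag >> update_permissions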