How to stop Airflow running tasks from 'off' dags

I created some DAGs, ran them and stopped them in the middle of their execution (with the OFF button).
The UI still shows 'Running tasks' for those stopped DAGs though.
I tried to 'clear' those tasks and now they show in blue, in the 'shutdown' state.
I am wondering if those tasks are counted in the total of running tasks, and whether they are blocking other tasks from starting (with my current configuration, only 32 tasks can run in parallel). Is there a way to completely clean up the DAGs that I don't need anymore, and to make sure their tasks are not blocking anything or slowing Airflow down?
Thanks!

You can delete all of the dag data from the dag_run and task_instance tables in the metadata database.
You can also do this through the Airflow Webserver UI by navigating to
Browse -> DAG Runs
& Browse -> Task Instances
And deleting all the records relevant to the dag id.
One note though is that the task and DAG status fields on the main page may take a while to reflect the changes.
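If you would rather script the metadata cleanup than click through the UI, a minimal sketch (assuming Airflow 2.x; the dag id is a placeholder) could look like this:
from airflow.models import DagRun, TaskInstance
from airflow.utils.session import create_session

DAG_ID = "my_old_dag"  # placeholder dag id

with create_session() as session:
    # remove the task instance and dag run records for the unwanted DAG
    session.query(TaskInstance).filter(TaskInstance.dag_id == DAG_ID).delete()
    session.query(DagRun).filter(DagRun.dag_id == DAG_ID).delete()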

Related

Automatically remove dag from airflow UI not present in dagbag

Scenario
I have a python file which creates multiple dags (dynamic DAGs). This file fetches some data from an API and, say, 100 dags are created based on 100 rows from the API response.
Issue
When the API response changes, say only 90 rows are now returned, then 10 dags are removed from the dagbag since the dynamic dag file no longer creates them; however, those dags are still present in the airflow UI. Also, I sometimes see certain tasks of these dags in the scheduled state (since the code of the dag is not present in the dagbag, they can't go to the running state), which I have to kill manually before pausing the dag.
Looking for?
I wanted to know if there is any way (config or otherwise) to make sure that if a dag is not present in the dagbag, it doesn't show up in the airflow UI until its row is added back to the API response, and its tasks don't mess up the stats in airflow. I am using airflow-2.3.2.
Every dag_dir_list_interval, the DagFileProcessorManager lists the scripts in the dags folder; then, if a script is new or was last processed more than min_file_process_interval ago, it creates a DagFileProcessorProcess for the file to process it and generate the dags.
At that moment, the DagFileProcessorProcess will call the API, get the dag ids, then update the dag bag.
But the dag records (runs, tasks, tags, ...) will stay in the Metastore, and they can be deleted by UI, API or CLI:
# API
curl -X DELETE <airflow url>/api/v1/dags/<dag id>
# CLI
airflow dags delete <dag id>
Why aren't the dags deleted automatically when they disappear from the dagbag?
Suppose you have some dags created dynamically based on a config file stored in S3, and there is a network problem, a bug in a new release, or a problem with the volume which contains the dags files. In this case, if the DagFileProcessorManager detected the difference between the Metastore and the local dagbag and deleted these dags, there would be a big problem: you would lose the history of your dags.
Instead, Airflow keeps the data, to let you decide if you want to delete them.
Can you delete the dags dynamically?
You can create an hourly dag with a task which fills a dagbag locally, loads the Metastore dagbag, then deletes the dags which appear in the Metastore dagbag but not in the local one.
But do these removed dags remain visible in the UI?
The answer is no: they are marked as deactivated after deactivate_stale_dags_interval, which is 1 min by default. This deactivated/activated notion also covers the first problem I mentioned above, since only the activated dags are visible in the UI; when the network/volume issue is solved, the DagFileProcessorManager will re-create the dags and mark them as activated in the Metastore.
So if your goal is just hiding the deleted dags from the UI, you can check the value you have for deactivate_stale_dags_interval and decrease it; but if you want to completely delete the dags, you need to do it manually or with a dag which runs the manual commands/API requests.
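A rough sketch of such a cleanup dag (assuming Airflow 2.x; names like cleanup_stale_dags are illustrative). It compares the local dagbag with the dags registered in the Metastore and removes the stale ones with the same CLI command shown above:
from datetime import datetime
import subprocess

from airflow import DAG
from airflow.models import DagBag, DagModel
from airflow.operators.python import PythonOperator
from airflow.utils.session import provide_session


@provide_session
def delete_stale_dags(session=None):
    # dags generated from the local dags folder
    local_dag_ids = set(DagBag().dags.keys())
    # dags known to the Metastore
    db_dag_ids = {row.dag_id for row in session.query(DagModel.dag_id)}
    for dag_id in db_dag_ids - local_dag_ids:
        # same effect as the CLI command from the answer above
        subprocess.run(["airflow", "dags", "delete", "--yes", dag_id], check=True)


with DAG(
    dag_id="cleanup_stale_dags",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
) as dag:
    PythonOperator(task_id="delete_stale_dags", python_callable=delete_stale_dags)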

Airflow 2.2.4 manually triggered DAG stuck in 'queued' status

I have two DAGs in my airflow scheduler, which were working in the past. After needing to rebuild the docker containers running airflow, they are now stuck in queued. DAGs in my case are triggered via the REST API, so no actual scheduling is involved.
Since there are quite a few similar posts, I ran through the checklist of this answer from a similar question:
Do you have the airflow scheduler running?
Yes!
Do you have the airflow webserver running?
Yes!
Have you checked that all DAGs you want to run are set to On in the web ui?
Yes, both DAGs are shown in the WebUI and no errors are displayed.
Do all the DAGs you want to run have a start date which is in the past?
Yes, the constructor of both DAGs looks as follows:
dag = DAG(
    dag_id='image_object_detection_dag',
    default_args=args,
    schedule_interval=None,
    start_date=days_ago(2),
    tags=['helloworld'],
)
Do all the DAGs you want to run have a proper schedule which is shown in the web ui?
No, I trigger my DAGs manually via the REST API.
If nothing else works, you can use the web ui to click on the dag, then on Graph View. Now select the first task and click on Task Instance. In the paragraph Task Instance Details you will see why a DAG is waiting or not running.
Here is the output of what this paragraph is showing me:
What is the best way to find the reason why the tasks won't exit the queued state and run?
EDIT:
Out of curiosity I tried to trigger the DAG from within the WebUI, and now both runs executed (the one triggered from the WebUI failed, but that was expected, since no config was set).

Run Airflow task at separate time from the rest of the DAG's tasks

I have an Airflow DAG that runs once daily at a specific time. The DAG runs a bunch of SQL scripts to create and load tables in a database, and the very last task updates permissions so that users can access the tables. Currently the permissions task requires that all previous SQL tasks have completed, so this means that none of the tables' permissions are updated if any of the table tasks fail.
To fix this I'd like to create another permissions task (i.e., a backup task) that runs at a preset time regardless of the status of any of the previous tasks (doesn't hurt to update permissions multiple times). If I don't specify a time different from the DAG's time, then because the new task has no dependencies, the task will try updating permissions before any of the tables have been updated. Is there a setting for me to pass a cron string to a specific task? Or is there an option to pass a timedelta on top of the task's DAG time? I need to run the task some amount of time after the DAG time.
If your permissions task can run no matter what the result of the upstream tasks, I think the best option is simply to change the trigger_rule of your permissions task to all_done (default is all_success).
If you need to do something specific when there is a failure, you could consider creating a secondary DAG whose first step is a sensor that waits for the main DAG to complete with State.FAILED, then runs your permissions task.
Have a look at ExternalTaskSensor when you want to establish a dependency between DAGs.
I haven't checked, but you might also need to use soft_fail on the sensor to prevent the secondary DAG from showing up as failed when the main DAG completes successfully.
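A minimal sketch of the trigger_rule approach (dag and task ids are illustrative, assuming Airflow 2.x): the permissions task runs once every upstream task has finished, whether it succeeded or failed.
from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator

with DAG(
    dag_id="load_tables",
    start_date=datetime(2022, 1, 1),
    schedule_interval="0 6 * * *",
    catchup=False,
) as dag:
    load_table_a = DummyOperator(task_id="load_table_a")
    load_table_b = DummyOperator(task_id="load_table_b")

    update_permissions = DummyOperator(
        task_id="update_permissions",
        trigger_rule="all_done",  # default is "all_success"
    )

    [load_table_a, load_table_b] >> update_permissions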

Determining if a DAG is executing

I am using Airflow 1.9.0 with a custom SFTPOperator. I have code in my DAGs that polls an SFTP site to find new files. If any are found, I create custom task ids for the dynamically created tasks and retrieve/delete the files.
directory_list = sftp_handler('sftp-site', None, '/', None, SFTPToS3Operation.LIST)

for file_path in directory_list:
    ...  # SFTP code that GETs the remote files
That part works fine. It seems both the airflow webserver and airflow scheduler are iterating through all the DAGs once a second and actually running the code that retrieves the directory_list. This means I'm hitting the SFTP site ~2 x a second to authenticate and pull a list of files. I'd like to have some conditional code that only executes if the DAG is actually being run.
When an SFTP site uses password authentication, the # of times I connect really isn't an issue. One site requires key authentication and if there are too many authentication failures in a short timespan, the account is locked. During my testing, this seems to happen occasionally for reasons I'm still trying to track down.
However, if I were authenticating only when the DAG was scheduled to execute, or executing manually, this would not be an issue. It also seems wasteful to spend so much time connecting to an SFTP site when it's not scheduled to do so.
I've seen a post that can check to see if a task is executing, but that's not ideal as I'd have to create a long-running task, using up resources I shouldn't require, just to perform that test. Any thoughts on how to accomplish this?
You have a very good use case for Airflow (SFTP to _____ batch jobs), but Airflow is not meant for dynamic DAGs as you are attempting to use them.
Top-Level DAG Code and the Scheduler Loop
As you noticed, any top-level code in a DAG is executed with each scheduler loop. Or put another way, every time the scheduler loop processes the files in your DAG directory it is interpreting all the code in your DAG files. Anything not in a task or operator is interpreted/executed immediately. This puts undue strain on the scheduler as well as any external systems you are making calls to.
Dynamic DAGs and the Airflow UI
Airflow does not handle dynamic DAGs well through the UI. This is mostly the result of Airflow DAG state not being stored in the database. DAG views and history are rendered based on what exists in the interpreted DAG file at any given moment. I personally hope to see this change in the future with some form of DAG versioning.
In a dynamic DAG you can both add and remove tasks from a DAG.
Adding Tasks Dynamically
Adding a task to a DAG will make it appear (in the UI) as though every earlier DAG run also contained that task, even though those runs never executed it. The task will show a None state in those runs, and each DAG run will be set to success or failed depending on the outcome of the run itself.
Removing Tasks Dynamically
If your dynamic DAG ever removes tasks you will lose the ability to review history of the DAG. For example, if you run a DAG with task_x in the first 20 DAG runs but remove it after that, it will fail to show up in the UI until it is added back into the DAG.
Idempotency and Airflow
Airflow works best when DAG runs are idempotent. This means that re-running any DAG run should have the same effect no matter when or how many times you run it. Dynamic DAGs in Airflow break idempotency by adding and removing tasks from previous DAG runs, so that the results of re-running are not the same.
Solution Options
You have at least two options moving forward
1.) Continue to build your SFTP DAG dynamically, but create another DAG that writes the available SFTP files to a local file (if not using a distributed executor) or to an Airflow Variable (this will result in more reads to the Airflow DB), and build your DAG dynamically from that.
2.) Overload the SFTPOperator to take a list of files so that every file that exists is processed within a single task run. This will make the DAG idempotent and you will maintain an accurate history through the logs (see the sketch below).
I apologize for the extended explanation, but you're touching on one of the rough spots of Airflow and I felt it was appropriate to give an overview of the problem at hand.
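For option 2, a rough sketch of what an overloaded operator could look like; the class name is hypothetical, and sftp_handler / SFTPToS3Operation are the asker's own helpers from the question:
from airflow.models import BaseOperator


class SFTPBatchToS3Operator(BaseOperator):
    """Hypothetical operator: processes every remote file inside one task run."""
    # On Airflow 1.x you may also want the @apply_defaults decorator on __init__.

    def __init__(self, sftp_conn_id, remote_dir, **kwargs):
        super(SFTPBatchToS3Operator, self).__init__(**kwargs)
        self.sftp_conn_id = sftp_conn_id
        self.remote_dir = remote_dir

    def execute(self, context):
        # list the directory at run time, not at DAG-parse time
        directory_list = sftp_handler(self.sftp_conn_id, None, self.remote_dir,
                                      None, SFTPToS3Operation.LIST)
        for file_path in directory_list:
            self.log.info("Processing %s", file_path)
            # GET / upload / DELETE each file here, reusing the existing helpers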

Airflow 1.9.0 is queuing but not launching tasks

Airflow is randomly not running queued tasks; some tasks don't even get the queued status. I keep seeing the below in the scheduler logs:
[2018-02-28 02:24:58,780] {jobs.py:1077} INFO - No tasks to consider for execution.
I do see tasks in the database that either have no status or have the queued status, but they never get started.
The airflow setup is running https://github.com/puckel/docker-airflow on ECS with Redis. There are 4 scheduler threads and 4 Celery worker tasks. The tasks that are not running show in the queued state (grey icon); when hovering over the task icon, the operator is null and the task details say:
All dependencies are met but the task instance is not running. In most cases this just means that the task will probably be scheduled soon unless:- The scheduler is down or under heavy load
Metrics on the scheduler do not show heavy load. The dag is very simple, with 2 independent tasks only dependent on the last run. There are also tasks in the same dag that are stuck with no status (white icon).
An interesting thing to notice is that when I restart the scheduler, the tasks change to the running state.
Airflow can be a bit tricky to set up.
Do you have the airflow scheduler running?
Do you have the airflow webserver running?
Have you checked that all DAGs you want to run are set to On in the web ui?
Do all the DAGs you want to run have a start date which is in the past?
Do all the DAGs you want to run have a proper schedule which is shown in the web ui?
If nothing else works, you can use the web ui to click on the dag, then on Graph View. Now select the first task and click on Task Instance. In the paragraph Task Instance Details you will see why a DAG is waiting or not running.
I've had, for instance, a DAG which was wrongly set to depends_on_past: True, which prevented the current instance from starting correctly.
Also a great resource directly in the docs, which has a few more hints: Why isn't my task getting scheduled?.
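For reference, that flag normally lives in default_args; a minimal illustrative dag (not the asker's) would be:
from datetime import datetime

from airflow import DAG

default_args = {
    "depends_on_past": False,  # True makes each run wait for the previous run to succeed
}

dag = DAG(
    dag_id="example_dag",
    default_args=default_args,
    schedule_interval="@daily",
    start_date=datetime(2018, 1, 1),
)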
I'm running a fork of the puckel/docker-airflow repo as well, mostly on Airflow 1.8 for about a year with 10M+ task instances. I think the issue persists in 1.9, but I'm not positive.
For whatever reason, there seems to be a long-standing issue with the Airflow scheduler where performance degrades over time. I've reviewed the scheduler code, but I'm still unclear on what exactly happens differently on a fresh start to kick it back into scheduling normally. One major difference is that scheduled and queued task states are rebuilt.
Scheduler Basics in the Airflow wiki provides a concise reference on how the scheduler works and its various states.
Most people solve the scheduler diminishing throughput problem by restarting the scheduler regularly. I've found success at a 1-hour interval personally, but have seen as frequently as every 5-10 minutes used too. Your task volume, task duration, and parallelism settings are worth considering when experimenting with a restart interval.
For more info see:
Airflow: Tips, Tricks, and Pitfalls (section "The scheduler should be restarted frequently")
Bug 1286825 - Airflow scheduler stopped working silently
Airflow at WePay (section "Restart everything when deploying DAG changes.")
This used to be addressed by restarting every X runs using the SCHEDULER_RUNS config setting, although that setting was recently removed from the default systemd scripts.
You might also consider posting to the Airflow dev mailing list. I know this has been discussed there a few times and one of the core contributors may be able to provide additional context.
Related Questions
Airflow tasks get stuck at "queued" status and never gets running (especially see Bolke's answer here)
Jobs not executing via Airflow that runs celery with RabbitMQ
Make sure you don't have datetime.now() as your start_date
It's intuitive to think that if you tell your DAG to start "now" that it'll execute "now." BUT, that doesn't take into account how Airflow itself actually reads datetime.now().
For a DAG to be executed, the start_date must be a time in the past, otherwise Airflow will assume that it's not yet ready to execute. When Airflow evaluates your DAG file, it interprets datetime.now() as the current timestamp (i.e. NOT a time in the past) and decides that it's not ready to run. Since this will happen every time Airflow heartbeats (evaluates your DAG) every 5-10 seconds, it'll never run.
To properly trigger your DAG to run, make sure to insert a fixed time in the past (e.g. datetime(2019,1,1)) and set catchup=False (unless you're looking to run a backfill).
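A minimal sketch of the fix (dag id and date are illustrative):
from datetime import datetime

from airflow import DAG

dag = DAG(
    dag_id="my_daily_dag",
    schedule_interval="@daily",
    start_date=datetime(2019, 1, 1),  # fixed time in the past, not datetime.now()
    catchup=False,                    # skip the backfill of missed intervals
)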
By design, an Airflow DAG will execute at the completion of its schedule_interval
That means one schedule_interval AFTER the start date. An hourly DAG, for example, will execute its 2pm run when the clock strikes 3pm. The reasoning here is that Airflow can't ensure that all data corresponding to the 2pm interval is present until the end of that hourly interval.
This is a peculiar aspect of Airflow, but an important one to remember, especially if you're using default variables and macros.
Time in Airflow is in UTC by default
This shouldn't come as a surprise given that the rest of your databases and APIs most likely also adhere to this format, but it's worth clarifying.
Full article and source here
I also had a similar issue, but it is mostly related to SubDagOperator with more than 3000 task instances in total (30 tasks * 44 subdag tasks).
What I found out is that the airflow scheduler is mainly responsible for putting your scheduled tasks into "Queued slots" (pool), while the airflow celery workers are the ones that pick up your queued tasks, put them into "Used slots" (pool) and run them.
Based on your description, your scheduler should work fine. I suggest you check your celery worker logs to see whether there is any error, or restart the workers to see whether it helps. I experienced some issues where celery workers would go on strike for a few minutes and then start working again (especially with SubDagOperator).
One of the very silly reasons could be that the DAG is "paused", which is the default state the first time. I lost around 2 hrs fighting it. If you are using the Airflow web interface, this shows up as a toggle next to your DAG in the list.
I was facing the issue today and found that bullet point 4 from tobi6's answer above worked and resolved the issue:
*'Do all the DAGs you want to run have a start date which is in the past?'*
I am using airflow version v1.10.3
My problem was one step further: in addition to my tasks being queued, I couldn't see any of my celery workers in the Flower UI. The solution was that, since I was running my celery worker as root, I had to make changes in my ~/.bashrc file.
The following steps made it work:
Add export C_FORCE_ROOT=true to your ~/.bashrc file
source ~/.bashrc
Run worker : nohup airflow worker $* >> ~/airflow/logs/worker.logs &
Check your Flower UI at http://{HOST}:5555
I think it's worth mentioning that there's an open issue that can cause tasks to fail to run with no obvious reason: https://issues.apache.org/jira/browse/AIRFLOW-5506
The problem seems to occur when using LocalScheduler connected to a PostgreSQL airflow db, and results in the scheduler logging a number of "Killing PID xxxx" lines. Check the scheduler logs after the DAGs have been stalled without starting any new tasks for a while.
You can try to stop the webserver and the scheduler:
ps -ef | grep airflow #show the process id
kill 1234 #kill the webserver
kill 5678 #kill the scheduler
Remove the files from the airflow folder if they exist (they will be created again):
airflow-scheduler.err
airflow-scheduler.pid
airflow-webserver.err
airflow-webserver.pid
Start the webserver and the scheduler again.
airflow webserver -D
airflow scheduler -D
-D will make the services run in the background.
I had a similar issue of a triggered DAG "running" indefinitely because its first task stuck in "queued" state.
I realized this was because of a "ghost" DAG that had actually changed name. It seems that since the DAG had run in the past (had data in the postgres DB) and was referenced as a child DAG in other DAGs, triggering the parent DAGs that referenced the old name would "resurrect" the old DAG name, but with the new code. Indeed, the old DAG name and the new DAG code did not match, producing an "infinite queued execution" bug.
Solution:
Delete all the previous DAG runs with the old name.
Restart everything (webserver, worker, executor, ...) OR delete the relevant DAGs (with the "delete DAG" button in the UI).
The interpretation of the bug can vary but this fix worked in my case.
One more thing to check is whether the concurrency parameter of your DAG has been reached.
I experienced the same situation when some tasks were shown as NO STATUS.
It turned out that my File_Sensor tasks were set up with a timeout of 1 week, while the DAG timeout was only 5 hours. This led to a situation where, when the files were missing, many sensor tasks were running at the same time, which overloaded the concurrency limit!
The dependent tasks couldn't start before the sensor tasks succeeded, so when the dag timed out they got NO STATUS.
My solution:
Carefully set task and DAG timeouts.
Increase dag_concurrency in the airflow.cfg file in your AIRFLOW_HOME folder.
Please refer to the docs.
https://airflow.apache.org/faq.html#why-isn-t-my-task-getting-scheduled
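A rough sketch of where these knobs live, with illustrative values and assuming Airflow 2.x import paths (the original answer used an older custom File_Sensor):
from datetime import datetime, timedelta

from airflow import DAG
from airflow.sensors.filesystem import FileSensor

with DAG(
    dag_id="file_ingest",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    concurrency=16,                     # max task instances running at once for this DAG
    dagrun_timeout=timedelta(hours=5),  # the whole run times out after 5 hours
    catchup=False,
) as dag:
    wait_for_file = FileSensor(
        task_id="wait_for_file",
        filepath="/data/incoming/file.csv",
        timeout=4 * 60 * 60,            # sensor gives up (seconds) before the DAG run does
    )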
I believe this is an issue with celery version 4.2.1 and redis 3.0.1 as described here:
https://github.com/celery/celery/issues/3808
We resolved the issue by downgrading to redis version 2.10.6:
redis==2.10.6
In my case, tasks were not being launched because I had a pool configured for all operators but hadn't created it; hence, tasks were not even scheduled. An operator looks like:
foo = DummyOperator(
    task_id='foo',
    dag=dag,
    pool='capser'
)
To create a pool, go to Admin > Pools > Create and set the number of slots, for example 128, which works for me. You can also configure it using the CLI.
Counter-intuitive UI message!
I have spent days on this, so I want to elaborate on my specific issue(s).
Each dag has a state. By default the state is either 'paused' or 'not paused'.
The first confusion is: what is the default state on startup? The attached UI message seems to indicate that the state is 'not paused' and that clicking the toggle pauses it.
In reality, the default state is 'paused'. This state can be controlled by settings, environment variables, parameters and the UI. I have detailed them below.
The second confusion arises because of the UI again. When we manually trigger a dag which is in the paused state, the UI shows the dag as running (green circle)! But the dag is actually still paused, and the tasks will not execute unless it is un-paused.
If we read the task instance details, the message is:
Task is in the 'None' state which is not a valid state for execution. The task must be cleared in order to be run.
What is the 'None' state!? And clear which task?!
The actual problem is that the dag is in the paused state. On toggling the dag state, the tasks start to execute.
The pause state of the dag can be changed by
clicking the button on the UI.
setting your particular dag to start un-paused, by adding the parameter below to your dag
DAG(dag_id='your-dag', is_paused_upon_creation=False)
setting the config variable in the airflow.cfg file (caution: this will leave all your dags un-paused, including the example ones)
dags_are_paused_at_creation = False
configuring an environment variable before starting up the scheduler/webserver (caution: this will leave all your dags un-paused, including the example ones)
AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION=False
Make sure that your task is assigned to the same queue that your workers are listening to. This means that in your DAG file you have to set 'queue': 'queue_name', and in your worker configuration you have to set either default_queue = 'queue_name' in the airflow.cfg or AIRFLOW__OPERATORS__DEFAULT_QUEUE: 'queue_name' in the docker-compose.yaml (in case you're using Docker).
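A minimal sketch of the DAG side, with an illustrative queue name and task, assuming Airflow 2.x import paths:
from airflow.operators.bash import BashOperator

run_report = BashOperator(
    task_id="run_report",
    bash_command="echo running",
    queue="queue_name",  # must match a queue the celery worker consumes
    dag=dag,             # assumes an existing DAG object named `dag`
)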
