Airflow: practical limit on the number of DAGs

What is the practical limit on the number of DAGs in an Airflow system?
We are seeing severe delays after a couple of hundred DAGs are created.
Is anyone running thousands of DAGs?

It depends on how you run Airflow and how many resources you give it. If you have a huge number of tasks, you can run Airflow in distributed mode with Celery, with the master and the workers provisioned with plenty of memory and vCPUs.

Related

Manage Airflow Workers' CPU in DAG concurrency peaks

I am using Composer 2.0.2 (Airflow 2.2.5) in GCP on a ‘small’ environment which uses the following resources:
[screenshot: environment resources]
The environment has 90 DAGs which are triggered by API calls, as our architecture is event based. The other day, I ran a worst-case-scenario test where the 90 DAGs were triggered nearly at the same time. As a result, many of the DAGs’ tasks ended up in the ‘skipped’ state even though their dependencies were in ‘success’, and the DAGs then failed due to timeout.
Based on what I saw on my monitoring dashboard and what I found in the documentation and online, these anomalies happened because the workers (which can scale up to 1.5) hit their CPU limit:
[screenshot: CPU limit reached]
In this context, I am looking for ways to reduce CPU usage, even if it means longer execution times. Multiple forums and blogs mention Airflow parameters that could be useful, such as AIRFLOW__SCHEDULER__MIN_FILE_PROCESS_INTERVAL, but I am still not sure which ones to tune.
Please note that scaling up the environment resources is not an option due to cost restrictions.

Maximum number of DAGs in Airflow and Cloud Composer

Is there a maximum number of DAGs that can be run in 1 Airflow or Cloud Composer environment?
If this is dependent on several factors (Airflow infrastructure config, Composer cluster specs, number of active runs per DAG, etc.), what are all the factors that affect this?
I found from Composer docs that Composer uses CeleryExecutor and runs it on Google Kubernetes Engine (GKE).
There is no hard limit on the number of DAGs in Airflow; it is a function of the resources available (nodes, CPU, memory). Assuming resources are available, the Airflow configuration options are just limit settings that can become a bottleneck and may have to be raised.
There is a helpful guide on how to do this in Cloud Composer here. Once you enable autoscaling in the underlying GKE cluster and raise the hard limits specified in the Airflow configuration, there should be no practical limit on the number of tasks.
For vanilla Airflow, it depends on the executor you are using; it is easier to scale up if you use the KubernetesExecutor and then handle the autoscaling in Kubernetes.
If you are using the LocalExecutor and facing slow performance, you can improve things by increasing the resources (CPU, memory) allocated to your Airflow installation.
It depends on the resources available to your Airflow installation and on the type of executor. There is also a maximum number of tasks and DAG runs allowed to run concurrently, defined in the [core] section of airflow.cfg:
# The amount of parallelism as a setting to the executor. This defines
# the max number of task instances that should run simultaneously
# on this airflow installation
parallelism = 124
# The number of task instances allowed to run concurrently by the scheduler
dag_concurrency = 124
# The maximum number of active DAG runs per DAG
max_active_runs_per_dag = 500
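For per-DAG control, these limits can also be overridden when a DAG is defined. A minimal sketch, assuming Airflow 2.x import paths and a hypothetical dag_id (on Airflow 2.2+ the per-DAG task cap is the max_active_tasks kwarg; older versions call it concurrency):
# Hypothetical DAG showing per-DAG overrides of the [core] limits above.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_limits",          # hypothetical name
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    max_active_runs=3,                # overrides max_active_runs_per_dag for this DAG
    max_active_tasks=16,              # per-DAG task cap (use concurrency=16 before Airflow 2.2)
    catchup=False,
) as dag:
    BashOperator(task_id="noop", bash_command="echo hello")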

Sharing large intermediate state between Airflow tasks

We have an Airflow deployment with Celery executors.
Many of our DAGs require a local processing step of some file in a BashOperator or PythonOperator.
However, in our understanding the tasks of a given DAG may not always be scheduled on the same machine.
The options I have gathered so far for sharing state between tasks:
Use the LocalExecutor - this may suffice for one team, depending on the load, but may not scale to the wider company
Use XCom - does this have a size limit? Probably unsuitable for large files
Write custom Operators for every combination of tasks that need local processing in between. This approach reduces modularity of tasks and requires replicating existing operators' code.
Use Celery queues to route DAGs to the same worker (docs) - This option seems attractive at first, but what would be an appropriate way to set it up in order to avoid routing everything to one worker, or crafting a million queues? (A sketch of this option follows this question.)
Use a shared network storage in all machines that run executors - Seems like an additional infrastructure burden, but is a possibility.
What is the recommended way to do sharing of large intermediate state, such as files, between tasks in Airflow?
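For reference, a minimal sketch of how the Celery-queue option above could be wired, assuming Airflow 2.x; the queue name and callables are hypothetical, and note that pinning tasks to a queue only guarantees they share a machine if a single worker listens on that queue:
# Start a dedicated worker that subscribes only to this queue, e.g.:
#   airflow celery worker -q file_processing
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def download():
    ...  # fetch the file onto this worker's local disk

def process():
    ...  # process the same local file

with DAG("example_queue_routing", start_date=datetime(2022, 1, 1), schedule_interval=None) as dag:
    # Both tasks carry the same queue, so they are only picked up by workers
    # subscribed to "file_processing".
    t1 = PythonOperator(task_id="download", python_callable=download, queue="file_processing")
    t2 = PythonOperator(task_id="process", python_callable=process, queue="file_processing")
    t1 >> t2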
To clarify something: no matter how you set up Airflow, there will only be one executor running.
The executor runs on the same machine as the scheduler.
Currently (Airflow 1.9.0 at the time of writing) there is no safe way to run multiple schedulers, so there will only ever be one executor running.
The LocalExecutor executes tasks on the same machine as the scheduler.
The CeleryExecutor just puts tasks in a queue to be worked on by the Celery workers.
However, the question you are asking does apply to Celery workers. If you use the CeleryExecutor you will probably have multiple Celery workers.
Using network shared storage solves multiple problems:
Each worker machine sees the same dags because they have the same dags folder
Results of operators can be stored on a shared file system
The scheduler and webserver can also share the dags folder and run on different machines
I would use network storage and write the output file name to XCom. Then, when you need the output from a previous task, you would read the file name from that task's XCom and process that file.
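A minimal sketch of that pattern, assuming Airflow 2.x and a hypothetical shared mount at /mnt/shared/airflow that every worker can see; only the file path travels through XCom, the data itself stays on the share:
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

SHARED_DIR = "/mnt/shared/airflow"    # hypothetical network mount present on all workers

def produce(**context):
    path = f"{SHARED_DIR}/intermediate_{context['ds_nodash']}.csv"
    with open(path, "w") as f:
        f.write("some,large,intermediate,result\n")
    return path                       # the returned path (not the data) is pushed to XCom

def consume(**context):
    # Pull only the file name from XCom; the large payload stays on shared storage.
    path = context["ti"].xcom_pull(task_ids="produce")
    with open(path) as f:
        data = f.read()               # placeholder for the real processing step

with DAG("example_shared_storage", start_date=datetime(2022, 1, 1), schedule_interval=None) as dag:
    # On Airflow 1.x, add provide_context=True so **context is passed to the callables.
    produce_task = PythonOperator(task_id="produce", python_callable=produce)
    consume_task = PythonOperator(task_id="consume", python_callable=consume)
    produce_task >> consume_task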
Another option: change the datatype of the value column in the xcom table of the Airflow metastore. Its default type is BLOB; changing it to LONGBLOB (on MySQL) lets you store up to 4 GB of intermediate state between tasks.

How to increase tasks queued per second?

I am trying to diagnose an under-performing airflow pipeline and am wondering what kind of performance I should expect out of the airflow scheduler in terms similar to "tasks scheduled per second".
I have few queued jobs, and many of my tasks finish in seconds, so I suspect the scheduler is the limiting component and that it is my fault for having many quick tasks. Still, I would rather not rewrite my DAGs if it can be avoided.
What can I do to increase the rate at which the scheduler queues tasks?
Pipeline Details
Here is what my current airflow.cfg looks like.
I only have two DAGs running. One is scheduled every 5 min and the other is rarely triggered by the first. I am currently trying to backfill several years at this frequency, but may need to change my approach:
As for worker nodes: I currently have 4 fairly powerful servers running at less than 10% resource usage in disk, network, CPU, RAM, swap. Toggling 3 of the workers off has no impact on my task throughput, and the server left on barely even registers the change in workload.
There are a number of config values in your airflow.cfg that could be related to this.
Under [core]:
parallelism: Total number of task instances that can run at once.
dag_concurrency: Limit of task instances that can run concurrently within a single DAG; may need to bump this if you have many parallel tasks. Can be overridden when defining a DAG.
non_pooled_task_slot_count: Limit of tasks without a pool configured that can run at once (see the pool sketch after this answer).
max_active_runs_per_dag: The maximum number of active DAG runs per DAG. Relevant if you're triggering runs manually or there's a backlog of DAG runs scheduled with a short interval. Can be overridden when defining a DAG.
Under [scheduler]:
scheduler_heartbeat_sec: Defines how often the scheduler runs; try it out with lower values.
min_file_process_interval: Process each file at most once every N seconds. Set to 0 to never limit how often you process a file.
Under [celery]:
celeryd_concurrency: The concurrency Celery will run with, essentially the number of task instances a worker can take on at once. Matching the number of CPUs is a popular starting point, but it can definitely go higher.
The last one applies only if you're using the CeleryExecutor, which I'd definitely recommend if you're looking to increase your task throughput.
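Related to non_pooled_task_slot_count above: a minimal sketch of putting a heavy task into an explicit pool so it stops competing for the non-pooled slots. The pool name is hypothetical and must be created first (Admin -> Pools in the UI, or the pools CLI); imports assume Airflow 2.x:
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG("example_pools", start_date=datetime(2022, 1, 1), schedule_interval="@hourly", catchup=False) as dag:
    BashOperator(
        task_id="heavy_step",
        bash_command="echo 'pretend this is a heavy job'",
        pool="etl_pool",   # hypothetical pool; at most <pool slots> of these run at once
    )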

Airflow parallelism

The LocalExecutor spawns new processes while scheduling tasks. Is there a limit to the number of processes it creates? I needed to change it. I also need to know the difference between the scheduler's "max_threads" and "parallelism" in airflow.cfg.
parallelism: not a very descriptive name. The description says it sets the maximum task instances for the airflow installation, which is a bit ambiguous — if I have two hosts running airflow workers, I'd have airflow installed on two hosts, so that should be two installations, but based on context 'per installation' here means 'per Airflow state database'. I'd name this max_active_tasks.
dag_concurrency: Despite the name, based on the comment this is actually the task concurrency, and it's per worker. I'd name this max_active_tasks_for_worker (per_worker would suggest that it's a global setting for workers, but I think you can have workers with different values set for this).
max_active_runs_per_dag: This one's kinda alright, but since it seems to be just a default value for the matching DAG kwarg, it might be nice to reflect that in the name, something like default_max_active_runs_for_dags
So let's move on to the DAG kwargs:
concurrency: Again, having a general name like this, coupled with the fact that concurrency is used for something different elsewhere makes this pretty confusing. I'd call this max_active_tasks.
max_active_runs: This one sounds alright to me.
source: https://issues.apache.org/jira/browse/AIRFLOW-57
max_threads gives the user some control over CPU usage. It specifies scheduler parallelism.
It's 2019 and more updated docs have come out. In short:
AIRFLOW__CORE__PARALLELISM is the max number of task instances that can run concurrently across ALL of Airflow (all tasks across all dags)
AIRFLOW__CORE__DAG_CONCURRENCY is the max number of task instances allowed to run concurrently FOR A SINGLE SPECIFIC DAG
These docs describe it in more detail:
According to https://www.astronomer.io/guides/airflow-scaling-workers/:
parallelism is the max number of task instances that can run concurrently on airflow. This means that across all running DAGs, no more than 32 tasks will run at one time.
And
dag_concurrency is the number of task instances allowed to run concurrently within a specific dag. In other words, you could have 2 DAGs running 16 tasks each in parallel, but a single DAG with 50 tasks would also only run 16 tasks - not 32.
And, according to https://airflow.apache.org/faq.html#how-to-reduce-airflow-dag-scheduling-latency-in-production:
max_threads: Scheduler will spawn multiple threads in parallel to schedule dags. This is controlled by max_threads with a default value of 2. Users should increase this value to a larger value (e.g. the number of CPUs on the machine where the scheduler runs, minus 1) in production.
But it seems like this last piece shouldn't take up too much time, because it's just the "scheduling" portion, not the actual running portion. Therefore we didn't see the need to tweak max_threads much, but AIRFLOW__CORE__PARALLELISM and AIRFLOW__CORE__DAG_CONCURRENCY did affect us.
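To make the dag_concurrency behaviour quoted above concrete: a hypothetical DAG with 50 independent tasks and a per-DAG cap of 16 will never have more than 16 of them running at once, no matter how high parallelism is set. A sketch assuming Airflow 2.x (on 2.2+ the DAG kwarg is max_active_tasks; older versions use concurrency):
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_fanout",
    start_date=datetime(2022, 1, 1),
    schedule_interval=None,
    max_active_tasks=16,    # the dag_concurrency equivalent; use concurrency=16 before 2.2
) as dag:
    # 50 independent tasks, but the scheduler keeps at most 16 of them running at a time.
    for i in range(50):
        BashOperator(task_id=f"task_{i}", bash_command="sleep 10")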
The scheduler's max_threads is the number of processes to parallelize the scheduler over. max_threads cannot exceed the CPU count. The LocalExecutor's parallelism is the number of concurrent tasks the LocalExecutor should run. Both the scheduler and the LocalExecutor use Python's multiprocessing library for parallelism.
