I have one dag that tells another dag what tasks to create in a specific order.
Dag 1 -> a file that has a task order
This runs every 5 minutes or so to keep this file fresh.
Dag 2 -> runs the tasks
This runs daily.
How can I pass this data between the two DAGs using Airflow?
Solutions and problems
The problem with using Airflow Variables is that I cannot set them at runtime.
The problem with using XComs is that they can only be set and read while tasks are running, and once the tasks in Dag 2 are created, they're fixed and cannot be changed, correct?
The problem with pushing the file to s3 is that the airflow instance doesn't have permission to pull from s3 due to security reasons decided by a team that I have no control over.
So what can I do? What are some choices I have?
What is the file format of the output from the 1st DAG? I would recommend the following workflow:
Dag 1 -> Update the task order and store it in a YAML or JSON file inside the Airflow environment.
Dag 2 -> Read the file to create the required tasks and run them daily.
Keep in mind that Airflow constantly re-parses your DAG files to pick up the latest configuration, so no extra step is required. A sketch of what the file-driven DAG 2 could look like is below.
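This is only a minimal sketch, assuming DAG 1 writes a JSON list of task names to a hypothetical path /opt/airflow/dags/config/task_order.json; the dag_id and the BashOperator commands are placeholders for your real work:

import json
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical path written by DAG 1; adjust to wherever your file lives.
TASK_ORDER_FILE = "/opt/airflow/dags/config/task_order.json"

def load_task_order():
    # Fall back to an empty list if DAG 1 has not produced the file yet.
    try:
        with open(TASK_ORDER_FILE) as f:
            return json.load(f)  # e.g. ["extract", "transform", "load"]
    except FileNotFoundError:
        return []

with DAG(
    dag_id="dag_2_daily_runner",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    previous = None
    for task_name in load_task_order():
        task = BashOperator(
            task_id=task_name,
            bash_command=f"echo running {task_name}",  # placeholder work
        )
        # Chain the tasks in the order listed in the file.
        if previous is not None:
            previous >> task
        previous = task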
I have had a similar issue in the past and it largely depends on your setup.
If you are running Airflow on Kubernetes this might work.
You create a PV (Persistent Volume) and a PVC (Persistent Volume Claim).
You start your application with a KubernetesPodOperator and mount the PVC to it.
You store the result on the PVC.
You mount the PVC to the other pod (see the sketch below).
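As a hedged sketch (assuming Airflow 2 with the cncf.kubernetes provider; the exact import path varies by provider version, and the PVC name, image and commands are placeholders), the two tasks could share the volume like this:

from datetime import datetime
from kubernetes.client import models as k8s
from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

# Placeholder PVC created beforehand; both pods mount it at /data.
shared_volume = k8s.V1Volume(
    name="shared-results",
    persistent_volume_claim=k8s.V1PersistentVolumeClaimVolumeSource(
        claim_name="shared-results-pvc"
    ),
)
shared_mount = k8s.V1VolumeMount(name="shared-results", mount_path="/data")

with DAG(
    dag_id="pvc_handoff_example",
    start_date=datetime(2023, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    produce = KubernetesPodOperator(
        task_id="produce_result",
        name="produce-result",
        image="python:3.10-slim",
        cmds=["python", "-c", "open('/data/result.txt', 'w').write('hello')"],
        volumes=[shared_volume],
        volume_mounts=[shared_mount],
    )
    consume = KubernetesPodOperator(
        task_id="consume_result",
        name="consume-result",
        image="python:3.10-slim",
        cmds=["python", "-c", "print(open('/data/result.txt').read())"],
        volumes=[shared_volume],
        volume_mounts=[shared_mount],
    )
    produce >> consume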
Related
Recently I have been developing an Airflow pipeline that will run for multiple tenants. This DAG will be triggered via API and separated into batches, which are controlled by a metadatabase in SQL following some business rules.
Each batch has a batch_id in order to control the batches, and it is passed to the DAG's conf via the API. The batch id combines the creation timestamp with the tenant and filetype. Example: tenant1_20221120123323 ... tenant2_20221120123323. These batches can contain two filetypes (for example purposes), and for each filetype a DAG is triggered (DAG1 for filetype 1 and DAG2 for filetype 2). Then, from the file perspective, the batch id is combined with the filetype in some stages: tenant1_20221120123323_filetype1, tenant1_20221120123323_filetype2 ...
To illustrate this, imagine that the first DAG has the following pipeline: process_data_on_spark >> check_new_files_on_statingstorage >> [filetype2_exists, write_new_data_to_warehouse]; filetype2_exists >> read_data_from_filetype2 >> merge_filetype2_filetype2 >> write_new_data_to_warehouse. Here filetype2_exists is a BranchPythonOperator that verifies whether DAG_2 was triggered, and if it was, it merges the resulting data from DAG2 with DAG1's before executing write_new_data_to_warehouse.
Based on this DAG model, there will be one DAG run for each tenant. So, the DAG can have multiple DAG runs running in parallel if we trigger more than one DAG run (one per tenant). Here is my first question:
Is it good practice to work with multiple DAG runs in the same DAG instead of working with dynamic DAGs? In that case, I would end up with process_data_on_spark_tenant1,
process_data_on_spark_tenant2, ... process_data_on_spark_tenantN. It is worth mentioning that the number of tenants can reach hundreds.
Now, consider that filetype2 may or may not be present in the batch, and that I would use the model mentioned above (one single DAG with multiple DAG runs running in parallel, one for each tenant). The only idea I have for checking whether DAG2 was triggered for the current batch (i.e., filetype2 was present in the batch) is to modify the dag_run_id to include the batch_id combined with the filetype:
The default dag_run_id: manual__2022-11-19T00:00:00+00:00
The new dag_run_id: manual__tenant1_20221120123323_filetype2__2022-11-19T00:00:00+00:00
From there, I would be able to query the Airflow metadatabase and check whether there is a dag_run_id containing the current batch_id and filetype2 running, and, with a sensor, wait for the DAG status to be success. Then I could run the read_data_from_filetype2 task. Otherwise, if there is no dag_run_id with the batch_id and filetype2 registered in the Airflow metadatabase, I can go to write_new_data_to_warehouse directly.
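For what it's worth, here is a hedged sketch of triggering DAG_2 through the stable REST API with the batch_id and filetype embedded in the dag_run_id; the URL, credentials and DAG id are placeholders, and this only illustrates the mechanism, not whether it is good practice:

import requests
from datetime import datetime, timezone

# All of these values are placeholders for illustration.
AIRFLOW_URL = "http://localhost:8080"
batch_id = "tenant1_20221120123323"
filetype = "filetype2"

run_id = f"manual__{batch_id}_{filetype}__{datetime.now(timezone.utc).isoformat()}"

resp = requests.post(
    f"{AIRFLOW_URL}/api/v1/dags/DAG_2/dagRuns",
    auth=("user", "password"),
    json={
        "dag_run_id": run_id,
        "conf": {"batch_id": batch_id, "filetype": filetype},
    },
)
resp.raise_for_status()
print(resp.json()["dag_run_id"])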
Here's the other question:
Is it good practice to modify the dag_run_id and use it combined with the Airflow metadatabase to control pipelines?
Considering this scenario, would it be better to create dynamic DAGs, even if that would result in hundreds of DAGs, or to work with the dag_run_id and the Airflow metadatabase and keep parallel DAG runs in one single DAG?
Or, there would be a better approach for this problem?
Thank You.
Scenario
I have a Python file which creates multiple DAGs (dynamic DAGs). The file fetches some data from an API and, say, 100 DAGs are created based on 100 rows from the API response.
Issue
When the API response changes, say only 90 rows now come back, 10 DAGs are removed from the dagbag since the dynamic DAG file no longer creates them; however, those DAGs are still present in the Airflow UI. Also, I sometimes see certain tasks of these DAGs stuck in the scheduled state (since the DAG's code is not present in the dagbag, they can't move to the running state), which I have to manually kill before pausing the DAG.
Looking for?
I want to know if there is any way (config or otherwise) to make sure that if a DAG is not present in the dagbag, it doesn't show up in the Airflow UI until its row is added back to the API response, and its tasks don't mess up the stats in Airflow. I am using airflow-2.3.2.
Every dag_dir_list_interval, the DagFileProcessorManager lists the scripts in the dags folder; then, if a script is new or was last processed more than min_file_process_interval ago, it creates a DagFileProcessorProcess for the file to process it and generate its DAGs.
At that moment, the DagFileProcessorProcess will call the API, get the DAG ids, and update the dagbag.
But the DAG records (runs, tasks, tags, ...) will stay in the metastore, and they can be deleted via the UI, API, or CLI:
# API
curl -X DELETE <airflow url>/api/v1/dags/<dag id>
# CLI
airflow dags delete <dag id>
Why are the DAGs not deleted automatically when they disappear from the dagbag?
Suppose you have some DAGs created dynamically based on a config file stored in S3, and there is a network problem, a bug in a new release, or a problem with the volume which contains the DAG files. In this case, if the DagFileProcessorManager detected the difference between the metastore and the local dagbag and then deleted these DAGs, there would be a big problem: you would lose the history of your DAGs.
Instead, Airflow keeps the data and lets you decide if you want to delete them.
Can you delete the dags dynamically?
You can create an hourly DAG with a task which fills a dagbag locally, loads the metastore dagbag, then deletes the DAGs which appear in the metastore dagbag but not in the local dagbag (a sketch follows).
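A rough sketch of such a cleanup task, assuming you delete through the REST API as in the curl example above; the URL and credentials are placeholders, and you would wrap this in a PythonOperator inside the hourly DAG:

import requests
from airflow import settings
from airflow.models import DagBag, DagModel

# Placeholders: point these at your own webserver and credentials.
AIRFLOW_URL = "http://localhost:8080"
AUTH = ("user", "password")

def delete_stale_dags():
    # DAG ids that the files in the dags folder currently produce.
    local_dag_ids = set(DagBag(include_examples=False).dag_ids)

    # DAG ids recorded in the metastore.
    session = settings.Session()
    try:
        db_dag_ids = {row[0] for row in session.query(DagModel.dag_id).all()}
    finally:
        session.close()

    # Anything in the metastore that is no longer generated locally.
    for dag_id in db_dag_ids - local_dag_ids:
        resp = requests.delete(f"{AIRFLOW_URL}/api/v1/dags/{dag_id}", auth=AUTH)
        resp.raise_for_status()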
But do these removed dags remain visible in the UI?
The answer is no: they are marked as deactivated after deactivate_stale_dags_interval, which is 1 minute by default. This deactivated/activated notion can solve the first problem I mentioned above, since only activated DAGs are visible in the UI. Then, when the network/volume issue is solved, the DagFileProcessorManager will recreate the DAGs and mark them as activated in the metastore.
So if your goal is just to hide the deleted DAGs from the UI, you can check what value you have for deactivate_stale_dags_interval and decrease it; but if you want to completely delete a DAG, you need to do it manually or use a DAG which runs the manual commands/API requests.
Hope you are all doing well.
I am using an airflow instance deployed on Kubernetes using Helm Chart.
I set up my DAG folder inside Rook NFS storage.
I need these dags to be processed instantly by the airflow scheduler.
Airflow provides a configuration setting, namely dag_dir_list_interval. In my configuration I set it to 1, which means that the scheduler checks every second whether there is a new DAG file inside the DAG folder.
It works, but as you can imagine it is very inefficient, since it costs a lot in terms of CPU usage.
I want to know if there is any alternative to this setting, for example an API call that would let me tell the scheduler "hey, there is a new DAG to be processed" without checking the NFS storage every second for new files.
Thank you for your suggestions.
I am using Airflow 1.9.0 with a custom SFTPOperator. I have code in my DAGs that polls an SFTP site to find new files. If any are found, I create custom task IDs for the dynamically created tasks and retrieve/delete the files.
directory_list = sftp_handler('sftp-site', None, '/', None, SFTPToS3Operation.LIST)
for file_path in directory_list:
    ...  # SFTP code that GETs the remote files
That part works fine. It seems both the Airflow webserver and the Airflow scheduler iterate through all the DAGs about once a second and actually run the code that retrieves directory_list. This means I'm hitting the SFTP site roughly twice a second to authenticate and pull a list of files. I'd like to have some conditional code that only executes if the DAG is actually being run.
When an SFTP site uses password authentication, the # of times I connect really isn't an issue. One site requires key authentication and if there are too many authentication failures in a short timespan, the account is locked. During my testing, this seems to happen occasionally for reasons I'm still trying to track down.
However, if I were authenticating only when the DAG was scheduled to execute, or executing manually, this would not be an issue. It also seems wasteful to spend so much time connecting to an SFTP site when it's not scheduled to do so.
I've seen a post that can check to see if a task is executing, but that's not ideal as I'd have to create a long-running task, using up resources I shouldn't require, just to perform that test. Any thoughts on how to accomplish this?
You have a very good use case for Airflow (SFTP to _____ batch jobs), but Airflow is not meant for dynamic DAGs as you are attempting to use them.
Top-Level DAG Code and the Scheduler Loop
As you noticed, any top-level code in a DAG is executed with each scheduler loop. Or put another way, every time the scheduler loop processes the files in your DAG directory it is interpreting all the code in your DAG files. Anything not in a task or operator is interpreted/executed immediately. This puts undue strain on the scheduler as well as any external systems you are making calls to.
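To make the distinction concrete, here is a hedged sketch (using Airflow 2 import paths and a hypothetical list_files helper): the commented-out top-level call would run on every parse, while the task version only runs when a DAG run executes:

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def list_files():
    # Hypothetical helper that authenticates against the SFTP site
    # and returns the list of remote files.
    return []

# Top-level (bad): this would run on every scheduler parse of this file.
# directory_list = list_files()

with DAG(
    dag_id="sftp_example",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Inside a task (good): runs only when this DAG run actually executes.
    PythonOperator(task_id="list_remote_files", python_callable=list_files)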
Dynamic DAGs and the Airflow UI
Airflow does not handle dynamic DAGs well through the UI. This is mostly the result of Airflow DAG state not being stored in the database; DAG views and history are rendered based on what exists in the interpreted DAG file at any given moment. I personally hope to see this change in the future with some form of DAG versioning.
In a dynamic DAG you can both add and remove tasks from a DAG.
Adding Tasks Dynamically
Adding a task to a DAG will make it appear (in the UI) in all earlier DAG runs, even though those runs never executed it. The task will show a None state there, while each DAG run stays set to success or failed depending on the outcome of that run.
Removing Tasks Dynamically
If your dynamic DAG ever removes tasks you will lose the ability to review history of the DAG. For example, if you run a DAG with task_x in the first 20 DAG runs but remove it after that, it will fail to show up in the UI until it is added back into the DAG.
Idempotency and Airflow
Airflow works best when the DAG runs are idempotent. This means that re-running any DAG run should have the same effect no matter when you run it or how many times you run it. Dynamic DAGs in Airflow break idempotency by adding and removing tasks from previous DAG runs, so the results of re-running are not the same.
Solution Options
You have at least two options moving forward:
1.) Continue to build your SFTP DAG dynamically, but create another DAG that writes the available SFTP files to a local file (if not using a distributed executor) or to an Airflow Variable (this will result in more reads to the Airflow DB), and build your DAG dynamically from that (see the sketch after this list).
2.) Overload the SFTPOperator to take a list of files so that every file that exists is processed within a single task run. This will make the DAGs idempotent and you will maintain accurate history through the logs.
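As an illustration of option 1, a hedged sketch that builds tasks from a hypothetical Airflow Variable named sftp_file_list, which a separate polling DAG would keep updated with a JSON list of file paths:

import json
from datetime import datetime
from airflow import DAG
from airflow.models import Variable
from airflow.operators.python import PythonOperator

def process_file(path):
    # Placeholder for the real SFTP GET / transfer logic.
    print(f"processing {path}")

# Reading the Variable here still happens at parse time, but it is a cheap
# metadata-DB read instead of an SFTP login on every scheduler loop.
file_list = json.loads(Variable.get("sftp_file_list", default_var="[]"))

with DAG(
    dag_id="sftp_dynamic_tasks",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    for i, path in enumerate(file_list):
        PythonOperator(
            task_id=f"process_file_{i}",
            python_callable=process_file,
            op_args=[path],
        )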
I apologize for the extended explanation, but you're touching on one of the rough spots of Airflow and I felt it was appropriate to give an overview of the problem at hand.
I have a dag that checks for files on an FTP server (airflow runs on separate server). If file(s) exist, the file(s) get moved to S3 (we archive here). From there, the filename is passed to a Spark submit job. The spark job will process the file via S3 (spark cluster on different server). I'm not sure if I need to have multiple dags but here's the flow. What I'm looking to do is to only run a Spark job if a file exist in the S3 bucket.
I tried using an S3 sensor, but it fails/times out after it meets the timeout criteria, and therefore the whole DAG is set to failed.
check_for_ftp_files -> move_files_to_s3 -> submit_job_to_spark -> archive_file_once_done
I only want to run everything downstream of the FTP-check script when a file or files were actually moved into S3.
You can have 2 different DAGs. One only has the S3 sensor and keeps running, let's say, every 5 minutes. If it finds the file, it triggers the second DAG. The second DAG submits the file to S3 and archives when done. You can use TriggerDagRunOperator in the first DAG for the triggering.
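A hedged sketch of the sensor-plus-trigger DAG (import paths depend on your Airflow and Amazon provider versions; the bucket, key and DAG ids are placeholders):

from datetime import datetime
from airflow import DAG
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
from airflow.providers.amazon.aws.sensors.s3 import S3KeySensor

# DAG 1: polls S3 every 5 minutes and triggers the processing DAG on success.
with DAG(
    dag_id="watch_for_s3_file",
    start_date=datetime(2023, 1, 1),
    schedule_interval="*/5 * * * *",
    catchup=False,
) as dag:
    wait_for_file = S3KeySensor(
        task_id="wait_for_file",
        bucket_name="my-bucket",        # placeholder
        bucket_key="incoming/*.csv",    # placeholder
        wildcard_match=True,
        poke_interval=60,
        timeout=240,
    )
    trigger_processing = TriggerDagRunOperator(
        task_id="trigger_processing",
        trigger_dag_id="process_s3_file",  # the second DAG
    )
    wait_for_file >> trigger_processing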
The answer Him gave will work.
Another option is using the soft_fail parameter that sensors have (it is a parameter from the BaseSensorOperator). If you set this parameter to True, then instead of failing, the task will be skipped, and all following tasks in the branch will also be skipped.
See the Airflow code for more info.
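For illustration, a hedged sketch of a sensor declared with soft_fail (placeholder bucket/key, to be placed inside your existing DAG): when no file arrives before the timeout, the sensor is skipped and its downstream tasks are skipped too, instead of the whole run failing:

from airflow.providers.amazon.aws.sensors.s3 import S3KeySensor

# Goes inside your existing DAG definition.
check_for_file = S3KeySensor(
    task_id="check_for_file",
    bucket_name="my-bucket",      # placeholder
    bucket_key="incoming/*.csv",  # placeholder
    wildcard_match=True,
    timeout=600,
    soft_fail=True,  # skip instead of fail when the timeout is reached
)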