We have an Airflow Python script which reads configuration files and then generates more than 100 DAGs dynamically. When running the script in Airflow 2.4.1, we noticed from the task run log that Airflow is trying to parse our Python script for every task run.
https://github.com/apache/airflow/blob/2.4.1/airflow/task/task_runner/standard_task_runner.py#L91-L97
Is there any way to make Airflow deserialize DAGs from the database instead?
Just found out that it is expected behavior:
https://medium.com/apache-airflow/airflows-magic-loop-ec424b05b629
https://medium.com/apache-airflow/magic-loop-in-airflow-reloaded-3e1bd8fb6671
but the Python script may use the parsing context to load only the respective DAG:
https://github.com/apache/airflow/pull/25161
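A minimal sketch of how the parsing context can be used, assuming Airflow 2.5+ (where get_parsing_context() from the PR above is available); the config list and the DAG body below are placeholders for our real generation logic:

import pendulum
from airflow.models.dag import DAG
from airflow.operators.empty import EmptyOperator
from airflow.utils.dag_parsing_context import get_parsing_context

configs = [{"name": "customer_a"}, {"name": "customer_b"}]  # normally read from the config files

# dag_id is only set when a task process re-parses this file; it is None during scheduler parsing
current_dag_id = get_parsing_context().dag_id

for config in configs:
    dag_id = f"dynamic_dag_{config['name']}"
    if current_dag_id is not None and current_dag_id != dag_id:
        continue  # during a task run, skip generating every DAG except the one being executed

    with DAG(dag_id=dag_id, start_date=pendulum.datetime(2023, 1, 1), schedule=None) as dag:
        EmptyOperator(task_id="start")

    globals()[dag_id] = dag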
I have a bash script spark_submit.sh that I want to use for scheduling my Airflow job with the BashOperator. The spark_submit.sh uses ivy to pull in the dependencies and then starts the job.
spark_submit.sh looks like:
spark-submit --conf "spark.jars.ivySettings=.ivy2/ivysettings.xml" --repositories https://artifactory,https://artifactory/artifactory/maven-daco-releases --packages io.delta:delta-core_2.12:0.8.0 --master yarn --name spark-job \
SparkJob.py
When I run it on the server without Airflow, spark_submit.sh works fine, but when I use the BashOperator in a DAG, the error I get is complete gibberish. Other BashOperator actions I have tried work fine, so I suspect that the Jinja templating is doing something to my spark_submit.sh which causes it to fail. Has anybody encountered this before?
I know there is a SparkSubmitOperator, but it is not installed on the server yet.
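If it is indeed the templating, one workaround I have seen is to add a trailing space after the script path so the .sh file is not picked up and rendered as a Jinja template. A minimal sketch of what I would try, assuming Airflow 2.4+ (the dag_id, schedule and script path are placeholders):

import pendulum
from airflow.models.dag import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="spark_job",
    start_date=pendulum.datetime(2023, 1, 1),
    schedule=None,
) as dag:
    submit_spark_job = BashOperator(
        task_id="spark_submit",
        # the trailing space stops the *.sh path from being treated as a Jinja template file
        bash_command="/path/to/spark_submit.sh ",
    )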
I am trying to run a single task within a DAG on a GCP Cloud Composer Airflow instance and mark all other tasks in the DAG, both upstream and downstream, as successful. However, the following Airflow command does not seem to be working for me on Cloud Composer.
Does anyone know what is wrong with the following gcloud CLI command?
dag_id: "airflow_monitoring"
task_id: "echo1"
execution_date: "2020-07-03"
gcloud composer environments run my-composer --location us-centra1 \
-- "airflow_monitoring" "echo1" "2020-07-03"
Thanks for your help.
If you aim just to correctly compose the above-mentioned gcloud command to trigger the specific DAG, then after fixing some typos and propagating the Airflow CLI sub-command parameters, I got this to work:
gcloud composer environments run my-composer --location=us-central1 \
--project=<project-id> trigger_dag -- airflow_monitoring --run_id=echo1 --exec_date="2020-07-03"
I would also encourage you to check out the full Airflow CLI sub-command list.
In case you expected a different functional result, feel free to expand the initial question with more essential details.
When there is a task running, Airflow pops up a notice saying the scheduler does not appear to be running, and it keeps showing until the task finishes:
The scheduler does not appear to be running. Last heartbeat was received 5 minutes ago.
The DAGs list may not update, and new tasks will not be scheduled.
Actually, the scheduler process is running, as I have checked the process. After the task finishes, the notice disappears and everything goes back to normal.
My task is kind of heavy and may run for a couple of hours.
I think it is expected for the SequentialExecutor. The SequentialExecutor runs one thing at a time, so it cannot run the heartbeat and the task at the same time.
Why do you need to use the SequentialExecutor / SQLite? The advice to switch to another database/executor makes perfect sense.
You have started the airflow webserver but you haven't started your airflow scheduler.
Run airflow scheduler in the background:
airflow scheduler > /console/scheduler_log.log &
I had the same issue.
I switched to postgresql by updating the airflow.cfg file: sql_alchemy_conn = postgresql+psycopg2://airflow@localhost:5432/airflow
and executor = LocalExecutor
This link may help you set this up locally:
https://medium.com/@taufiq_ibrahim/apache-airflow-installation-on-ubuntu-ddc087482c14
A quick fix could be to run the airflow scheduler separately. Perhaps not the best solution but it did work for me. To do so, run this command in the terminal:
airflow scheduler
I had a similar issue and have been trying to troubleshoot this for a while now.
I managed to fix it by setting this value in airflow.cfg (under the [scheduler] section):
scheduler_health_check_threshold = 240
PS: Based on a recent conversation in the Airflow Slack community, it could happen due to contention on the database side. So another workaround suggested was to scale up the database. In my case, this was not a viable solution.
EDIT:
This was last tested with Airflow Version 2.3.3
I have solved this issue by deleting the airflow-scheduler.pid file,
then
airflow scheduler -D
Check the airflow-scheduler.err and airflow-scheduler.log files.
I got an error like this:
Traceback (most recent call last):
File "/home/myVM/venv/py_env/lib/python3.8/site-packages/lockfile/pidlockfile.py", ine 77, in acquire
write_pid_to_pidfile(self.path)
File "/home/myVM/venv/py_env/lib/python3.8/site-packages/lockfile/pidlockfile.py", line 161, in write_pid_to_pidfile
pidfile_fd = os.open(pidfile_path, open_flags, open_mode)
FileExistsError: [Errno 17] File exists: '/home/myVM/venv/py_env/airflow-scheduler.pid'
I removed the existing airflow-scheduler.pid file and started the scheduler again with airflow scheduler -D. It worked fine after that.
Our problem was that the file "logs/scheduler.log" was too large (1 TB). After cleaning up this file, everything was fine.
I had the same issue while using SQLite. There was a special message in the Airflow logs: ERROR - Cannot use more than 1 thread when using sqlite. Setting max_threads to 1. If you use only 1 thread, the scheduler will be unavailable while executing a DAG.
So if you use SQLite, try switching to another database. If you don't, check the max_threads value in your airflow.cfg.
On the Composer page, click on your environment name; it will open the Environment details. Go to the PyPI Packages tab.
Click on the Edit button and increase any package's version.
For example:
I increased the version of the pymsql package, and this restarted the Airflow environment; it took a while for it to update. Once it was done, I no longer had this error.
You can also add a Python package; it will restart the Airflow environment as well.
I've had the same issue after changing the airflow timezone. I then restarted the airflow-scheduler and it works. You can also check if the airflow-scheduler and airflow-worker are on different servers.
In simple words, using LocalExecutor and postgresql could fix this error.
Running Airflow locally, following the instructions at https://airflow.apache.org/docs/apache-airflow/stable/start/local.html.
It has the default config
executor = SequentialExecutor
sql_alchemy_conn = sqlite:////Users/yourusername/airflow/airflow.db
It will use SequentialExecutor and sqlite by default, and it will have this "The scheduler does not appear to be running." error.
To fix it, I followed Jarek Potiuk's advice. I changed the following config:
executor = LocalExecutor
sql_alchemy_conn = postgresql://postgres:masterpasswordforyourlocalpostgresql@localhost:5432
And then I reran "airflow db init" and recreated the admin user:
airflow db init
airflow users create \
--username admin \
--firstname Peter \
--lastname Parker \
--role Admin \
--email spiderman@superhero.org
After the DB is initialized, run:
airflow webserver --port 8080
airflow scheduler
This fixed the airflow scheduler error.
This happened to me when AIRFLOW_HOME was not set.
Setting AIRFLOW_HOME to the correct path makes Airflow pick up the intended configuration, including the executor.
If it matters: somehow, the -D flag causes a lot of problems for me. The airflow webserver -D immediately crashes after starting, and airflow scheduler -D somehow does next to nothing for me.
Weirdly enough, it works without the detach flag. This means I can just run the program normally, and make it run in the background, with e.g. nohup airflow scheduler &.
After changing the executor from SequentialExecutor to LocalExecutor, it works!
in airflow.cfg:
executor = LocalExecutor
I have a running Airflow server and I am making a config change in airflow.cfg which requires running airflow initdb.
Will running the airflow initdb command a second time be destructive to existing tables, or will it only apply changes according to the new config?
The only destructive command related to the Airflow database is airflow resetdb.
initdb and upgradedb share the same behavior (except on the first run).
I think you can run both:
From the Airflow source code:
def initdb():
    from airflow import models
    from airflow.models import Connection
    upgradedb()
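    # upgradedb() applies the pending schema migrations, so re-running initdb
    # is not destructive to existing tables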