Airflow webserver not starting while using helm chart on minikube

I'm trying to run Airflow locally (to test it before deployment) using minikube and the stable/airflow Helm chart, but the airflow-webserver pod doesn't start due to a gunicorn issue.
Helm: v2.14.3
Kubernetes: v1.15.2
Minikube: v1.3.1
Helm chart image: puckel/docker-airflow
These are the steps:
minikube start
helm install --namespace "airflow" --name "airflow" stable/airflow
Logs are:
Thu Sep 12 07:29:54 UTC 2019 - waiting for Postgres... 1/20
Thu Sep 12 07:30:00 UTC 2019 - waiting for Postgres... 2/20
waiting 60s...
executing webserver...
[2019-09-12 07:31:05,745] {{settings.py:213}} INFO - settings.configure_orm(): Using pool settings. pool_size=5, max_overflow=10, pool_recycle=1800, pid=1
/usr/local/lib/python3.7/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
[2019-09-12 07:31:06,030] {{__init__.py:51}} INFO - Using executor CeleryExecutor
____________ _____________
____ |__( )_________ __/__ /________ __
____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
_/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
[2019-09-12 07:31:06,585] {{dagbag.py:90}} INFO - Filling up the DagBag from /usr/local/airflow/dags
Running the Gunicorn Server with:
Workers: 4 sync
Host: 0.0.0.0:8080
Timeout: 120
Logfiles: - -
=================================================================
[2019-09-12 07:31:07,676] {{settings.py:213}} INFO - settings.configure_orm(): Using pool settings. pool_size=5, max_overflow=10, pool_recycle=1800, pid=21
/usr/local/lib/python3.7/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
[2019-09-12 07:31:07 +0000] [21] [INFO] Starting gunicorn 19.9.0
[2019-09-12 07:31:07 +0000] [21] [INFO] Listening at: http://0.0.0.0:8080 (21)
[2019-09-12 07:31:07 +0000] [21] [INFO] Using worker: sync
[2019-09-12 07:31:07 +0000] [25] [INFO] Booting worker with pid: 25
[2019-09-12 07:31:07 +0000] [26] [INFO] Booting worker with pid: 26
[2019-09-12 07:31:07 +0000] [27] [INFO] Booting worker with pid: 27
[2019-09-12 07:31:07 +0000] [28] [INFO] Booting worker with pid: 28
[2019-09-12 07:31:08,444] {{__init__.py:51}} INFO - Using executor CeleryExecutor
[2019-09-12 07:31:08,446] {{__init__.py:51}} INFO - Using executor CeleryExecutor
[2019-09-12 07:31:08,545] {{__init__.py:51}} INFO - Using executor CeleryExecutor
[2019-09-12 07:31:08,669] {{__init__.py:51}} INFO - Using executor CeleryExecutor
[2019-09-12 07:31:10,047] {{dagbag.py:90}} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2019-09-12 07:31:20,932] {{cli.py:825}} ERROR - [0 / 0] some workers seem to have died and gunicorndid not restart them as expected
[2019-09-12 07:31:22,095] {{dagbag.py:90}} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2019-09-12 07:31:22 +0000] [25] [INFO] Parent changed, shutting down: <Worker 25>
[2019-09-12 07:31:22 +0000] [25] [INFO] Worker exiting (pid: 25)
[2019-09-12 07:31:32 +0000] [28] [INFO] Parent changed, shutting down: <Worker 28>
[2019-09-12 07:31:32 +0000] [28] [INFO] Worker exiting (pid: 28)
[2019-09-12 07:31:33,289] {{dagbag.py:90}} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2019-09-12 07:31:33,324] {{dagbag.py:90}} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2019-09-12 07:31:35 +0000] [26] [INFO] Parent changed, shutting down: <Worker 26>
[2019-09-12 07:31:35 +0000] [26] [INFO] Worker exiting (pid: 26)
[2019-09-12 07:31:35 +0000] [27] [INFO] Parent changed, shutting down: <Worker 27>
[2019-09-12 07:31:35 +0000] [27] [INFO] Worker exiting (pid: 27)
[2019-09-12 07:33:32,017] {{cli.py:832}} ERROR - No response from gunicorn master within 120 seconds
[2019-09-12 07:33:32,018] {{cli.py:833}} ERROR - Shutting down webserver
I can run that Docker image locally with docker-compose with no issues, but with Helm it fails and restarts constantly.

It turns out the issue was that the minikube configuration wasn't making the Postgres pod reachable; after editing the pod deployment to use the IP of the Postgres instance, it worked.
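For anyone debugging the same thing, here is a rough sketch of how to check whether the webserver pod can actually reach Postgres. The namespace matches the helm install command above, but the pod and service names are assumptions that depend on your release name, so adjust accordingly:
# list pods and services to find the Postgres service name / ClusterIP
kubectl get pods,svc -n airflow
# shell into the webserver pod (substitute the real pod name)
kubectl exec -it -n airflow <airflow-web-pod-name> -- bash
# inside the pod: does the Postgres hostname resolve, and does the port answer?
getent hosts airflow-postgresql
timeout 3 bash -c '</dev/tcp/airflow-postgresql/5432' && echo "postgres reachable"
If the hostname doesn't resolve or the port doesn't answer, pointing the deployment (or the chart's database settings) at the Postgres instance's actual IP, as described above, works around it.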

Related

SQLite Error disk I/O when running Airflow commands

Upon running:
airflow scheduler
I get the following error:
[2022-08-10 13:26:53,501] {scheduler_job.py:708} INFO - Starting the scheduler
[2022-08-10 13:26:53,502] {scheduler_job.py:713} INFO - Processing each file at most -1 times
[2022-08-10 13:26:53,509] {executor_loader.py:105} INFO - Loaded executor: SequentialExecutor
[2022-08-10 13:26:53 -0400] [1388] [INFO] Starting gunicorn 20.1.0
[2022-08-10 13:26:53,540] {manager.py:160} INFO - Launched DagFileProcessorManager with pid: 1389
[2022-08-10 13:26:53,545] {scheduler_job.py:1233} INFO - Resetting orphaned tasks for active dag runs
.
.
.
[2022-08-10 13:26:53 -0400] [1391] [INFO] Booting worker with pid: 1391
Process DagFileProcessor10-Process:
Traceback (most recent call last):
File "/home/dromo/anaconda3/envs/airflow_env_2/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 998, in _commit_impl
self.engine.dialect.do_commit(self.connection)
File "/home/dromo/anaconda3/envs/airflow_env_2/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 672, in do_commit
dbapi_connection.commit()
sqlite3.OperationalError: disk I/O error
I get this 'disk I/O error' as well when I run the airflow webserver --port 8080 command:
Workers: 4 sync
Host: 0.0.0.0:8080
Timeout: 120
Logfiles: - -
Access Logformat:
=================================================================
[2022-08-10 14:42:28 -0400] [2759] [INFO] Starting gunicorn 20.1.0
[2022-08-10 14:42:29 -0400] [2759] [INFO] Listening at: http://0.0.0.0:8080 (2759)
[2022-08-10 14:42:29 -0400] [2759] [INFO] Using worker: sync
.
.
.
[2022-08-10 14:42:55,149] {app.py:1455} ERROR - Exception on /static/appbuilder/datepicker/bootstrap-datepicker.css [GET]
Traceback (most recent call last):
File "/home/dromo/anaconda3/envs/airflow_env_2/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 998, in _commit_impl
self.engine.dialect.do_commit(self.connection)
File "/home/dromo/anaconda3/envs/airflow_env_2/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 672, in do_commit
dbapi_connection.commit()
sqlite3.OperationalError: disk I/O error
Any ideas as to what might be causing this and possible fixes?
It seems like Airflow can't find the database on disk; try initializing it:
airflow db init
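If the database already exists, it's also worth confirming which SQLite file Airflow is actually pointing at and that the file and its directory are writable. A minimal sketch, assuming the default ~/airflow home (check your own airflow.cfg):
# where does Airflow think the metadata DB lives?
airflow config get-value database sql_alchemy_conn   # on older 2.x versions the section is "core"
# make sure the file and its directory are writable by the user running Airflow
ls -ld ~/airflow ~/airflow/airflow.db
A read-only directory, or a database file sitting on a mount that doesn't support SQLite's locking, can also surface as sqlite3.OperationalError: disk I/O error.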

Airflow standalone sqlite3 Integrity Error

I'm trying to run airflow standalone after following these instructions https://airflow.apache.org/docs/apache-airflow/stable/start/local.html on "Ubuntu on Windows". I placed the AirflowHome folder inside C:/Users/my_user_name/, and that's essentially the only change I made. However, I'm getting an IntegrityError and the documentation seems very cryptic. Could you help me out?
standalone | Starting Airflow Standalone
standalone | Checking database is initialized
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
WARNI [airflow.models.crypto] empty cryptography key - values will not be stored encrypted.
standalone | Database ready
[2022-05-26 10:21:49,812] {manager.py:585} INFO - Removed Permission menu access on Permissions to role Admin
[2022-05-26 10:21:49,885] {manager.py:543} INFO - Removed Permission View: menu_access on Permissions
[2022-05-26 10:21:50,076] {manager.py:508} INFO - Created Permission View: menu access on Permissions
[2022-05-26 10:21:50,127] {manager.py:568} INFO - Added Permission menu access on Permissions to role Admin
triggerer | ____________ _____________
triggerer | ____ |__( )_________ __/__ /________ __
triggerer | ____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
triggerer | ___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
triggerer | _/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
triggerer | [2022-05-26 10:21:58,355] {triggerer_job.py:101} INFO - Starting the triggerer
scheduler | ____________ _____________
scheduler | ____ |__( )_________ __/__ /________ __
scheduler | ____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
scheduler | ___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
scheduler | _/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
scheduler | [2022-05-26 10:21:58 -0400] [233] [INFO] Starting gunicorn 20.1.0
scheduler | [2022-05-26 10:21:58 -0400] [233] [INFO] Listening at: http://0.0.0.0:8793 (233)
scheduler | [2022-05-26 10:21:58 -0400] [233] [INFO] Using worker: sync
scheduler | [2022-05-26 10:21:58 -0400] [234] [INFO] Booting worker with pid: 234
scheduler | [2022-05-26 10:21:58,614] {scheduler_job.py:693} INFO - Starting the scheduler
scheduler | [2022-05-26 10:21:58,614] {scheduler_job.py:698} INFO - Processing each file at most -1 times
scheduler | [2022-05-26 10:21:58,619] {executor_loader.py:106} INFO - Loaded executor: SequentialExecutor
scheduler | [2022-05-26 10:21:58,622] {manager.py:156} INFO - Launched DagFileProcessorManager with pid: 235
scheduler | [2022-05-26 10:21:58,624] {scheduler_job.py:1218} INFO - Resetting orphaned tasks for active dag runs
scheduler | [2022-05-26 10:21:58,639] {settings.py:55} INFO - Configured default timezone Timezone('UTC')
scheduler | [2022-05-26 10:21:58 -0400] [236] [INFO] Booting worker with pid: 236
scheduler | [2022-05-26 10:21:58,709] {manager.py:399} WARNING - Because we cannot use more than 1 thread (parsing_processes = 2) when using sqlite. So we set parallelism to 1.
webserver | [2022-05-26 10:21:59 -0400] [231] [INFO] Starting gunicorn 20.1.0
webserver | [2022-05-26 10:21:59 -0400] [231] [INFO] Listening at: http://0.0.0.0:8080 (231)
webserver | [2022-05-26 10:21:59 -0400] [231] [INFO] Using worker: sync
webserver | [2022-05-26 10:21:59 -0400] [239] [INFO] Booting worker with pid: 239
webserver | [2022-05-26 10:21:59 -0400] [240] [INFO] Booting worker with pid: 240
webserver | [2022-05-26 10:21:59 -0400] [241] [INFO] Booting worker with pid: 241
webserver | [2022-05-26 10:22:00 -0400] [242] [INFO] Booting worker with pid: 242
webserver | [2022-05-26 10:22:02,638] {manager.py:585} INFO - Removed Permission menu access on Permissions to role Admin
webserver | [2022-05-26 10:22:02,644] {manager.py:587} ERROR - Remove Permission to Role Error: DELETE statement on table 'ab_permission_view_role' expected to delete 1 row(s); Only 0 were matched.
webserver | [2022-05-26 10:22:02,776] {manager.py:543} INFO - Removed Permission View: menu_access on Permissions
webserver | /home/carlos/.local/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py:1461 SAWarning: DELETE statement on table 'ab_permission_view' expected to delete 1 row(s); 0 were matched. Please set confirm_deleted_rows=False within the mapper configuration to prevent this warning.
webserver | [2022-05-26 10:22:02,817] {manager.py:543} INFO - Removed Permission View: menu_access on Permissions
webserver | /home/carlos/.local/lib/python3.8/site-packages/sqlalchemy/orm/session.py:2459 SAWarning: Identity map already had an identity for (<class 'airflow.www.fab_security.sqla.models.Permission'>, (185,), None), replacing it with newly flushed object. Are there load operations occurring inside of an event handler within the flush?
webserver | [2022-05-26 10:22:03,097] {manager.py:508} INFO - Created Permission View: menu access on Permissions
webserver | [2022-05-26 10:22:03,168] {manager.py:568} INFO - Added Permission menu access on Permissions to role Admin
webserver | [2022-05-26 10:22:03,175] {manager.py:570} ERROR - Add Permission to Role Error: (sqlite3.IntegrityError) UNIQUE constraint failed: ab_permission_view_role.permission_view_id, ab_permission_view_role.role_id
webserver | [SQL: INSERT INTO ab_permission_view_role (permission_view_id, role_id) VALUES (?, ?)]
webserver | [parameters: (185, 1)]
webserver | (Background on this error at: http://sqlalche.me/e/14/gkpj)
webserver | [2022-05-26 10:22:03,188] {manager.py:570} ERROR - Add Permission to Role Error: (sqlite3.IntegrityError) UNIQUE constraint failed: ab_permission_view_role.permission_view_id, ab_permission_view_role.role_id
webserver | [SQL: INSERT INTO ab_permission_view_role (permission_view_id, role_id) VALUES (?, ?)]
webserver | [parameters: (185, 1)]
webserver | (Background on this error at: http://sqlalche.me/e/14/gkpj)
standalone |
standalone | Airflow is ready
standalone | Login with username: admin password: qhbDVvxz9ARPaWQt
standalone | Airflow Standalone is for development purposes only. Do not use this in production!
When trying to open airflow, I always get the following:
And the same for http://0.0.0.0:8793/
Almost exactly the same happens when I try to run airflow webserver.
Thanks!

Unable to start Apache Airflow webserver due to dagbag /dev/null error

I have installed Apache Airflow v2.1.0 on Ubuntu running on the Windows Subsystem for Linux (WSL).
After installation, I created an admin user and set the AIRFLOW_HOME variable in my ~/.bashrc file as below:
export AIRFLOW_HOME=~/airflow
However, when I try to start the Airflow webserver it doesn't work, and I get the output below:
____________ _____________
____ |__( )_________ __/__ /________ __
____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
_/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
[2021-06-12 03:29:19,807] {dagbag.py:487} INFO - Filling up the DagBag from /dev/null
Running the Gunicorn Server with:
Workers: 4 sync
Host: 0.0.0.0:8080
Timeout: 120
Logfiles: - -
Access Logformat:
=================================================================
[2021-06-12 03:29:23 +0530] [12464] [INFO] Starting gunicorn 20.1.0
[2021-06-12 03:29:23 +0530] [12464] [INFO] Listening at: http://0.0.0.0:8080 (12464)
[2021-06-12 03:29:23 +0530] [12464] [INFO] Using worker: sync
[2021-06-12 03:29:23 +0530] [12466] [INFO] Booting worker with pid: 12466
[2021-06-12 03:29:23 +0530] [12467] [INFO] Booting worker with pid: 12467
[2021-06-12 03:29:23 +0530] [12468] [INFO] Booting worker with pid: 12468
[2021-06-12 03:29:23 +0530] [12469] [INFO] Booting worker with pid: 12469
Can anyone please help me to get rid of this issue?
Check whether you have an airflow user. Basically, it looks like Airflow is trying to find the path /home/airflow and can't, or that mount path is missing. I am not sure how Ubuntu as a Windows subsystem is configured.
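A quick way to check that from the WSL shell, as a sketch (the paths are the defaults from the question):
# is there an airflow system user / home directory?
id airflow 2>/dev/null || echo "no airflow user"
ls -ld /home/airflow 2>/dev/null || echo "/home/airflow missing"
# is AIRFLOW_HOME actually exported in the shell that starts the webserver?
source ~/.bashrc
echo "$AIRFLOW_HOME"
ls -ld "$AIRFLOW_HOME"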

airflow webserver cpu usage high even when idle

I set up an Airflow instance using the Docker Compose file from the quickstart. I switched to the LocalExecutor and removed Celery and the worker instance. The only other change was increasing the healthcheck interval to 3600s; apart from that, all settings are default. The Airflow image version is 2.0.1.
This setup on an EC2 t3a.medium instance averages about 20% CPU utilization even when idle, which simply eats up CPU credits. Looking at CPU usage I see gunicorn processes popping up regularly. When I stop the webserver, utilization drops to 2%. Is there any configuration change that can lower the CPU usage, and what are the trade-offs involved?
The webserver logs look like this:
airflow-webserver_1 | [2021-04-12 14:21:09 +0000] [17] [INFO] Handling signal: ttou
airflow-webserver_1 | [2021-04-12 14:21:09 +0000] [17222] [INFO] Worker exiting (pid: 17222)
airflow-webserver_1 | [2021-04-12 14:21:28 +0000] [17] [INFO] Handling signal: ttin
airflow-webserver_1 | [2021-04-12 14:21:28 +0000] [17237] [INFO] Booting worker with pid: 17237
airflow-webserver_1 | [2021-04-12 14:21:40 +0000] [17] [INFO] Handling signal: ttou
airflow-webserver_1 | [2021-04-12 14:21:40 +0000] [17225] [INFO] Worker exiting (pid: 17225)
airflow-webserver_1 | [2021-04-12 14:21:59 +0000] [17] [INFO] Handling signal: ttin
airflow-webserver_1 | [2021-04-12 14:21:59 +0000] [17240] [INFO] Booting worker with pid: 17240
Thanks
I was able to reduce CPU usage by increasing the worker refresh and timeout intervals. I added these environment variables to the airflow-webserver service:
AIRFLOW__WEBSERVER__WORKER_REFRESH_INTERVAL: 600
AIRFLOW__WEBSERVER__WEB_SERVER_WORKER_TIMEOUT: 1200
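After putting these under the airflow-webserver service's environment section and recreating the container, a quick sanity check (a sketch; the service name comes from the quickstart compose file):
# confirm the variables are visible inside the webserver container
docker-compose exec airflow-webserver env | grep AIRFLOW__WEBSERVER__
With the 2.0.1 defaults (worker_refresh_interval = 30), gunicorn recycles a worker roughly every 30 seconds, which is exactly the ttin/ttou churn in the logs above; raising the interval to 600 makes that happen every 10 minutes instead.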

Airflow doesn't run: The application with bundle ID (null) is running setugid(), which is not allowed

I am very new to Airflow. My Airflow instance had been running fine for many days, and I managed to run some complex pipelines. Something went wrong recently and Airflow stopped working: the server keeps booting workers on new PIDs, and this recurs constantly.
Attached is the error. When I start Airflow, the output below keeps repeating and the webserver doesn't start. I may be doing something silly, but it's difficult for me to figure out the problem since I am new to Airflow. The Airflow version is 1.8.0. I would appreciate any help. Thanks.
Running the Gunicorn Server with:
Workers: 4 sync
Host: 0.0.0.0:8080
Timeout: 120
Logfiles: - -
=================================================================
[2018-04-30 02:41:53,282] {__init__.py:57} INFO - Using executor SequentialExecutor
[2018-04-30 02:41:53 -0500] [4461] [INFO] Starting gunicorn 19.3.0
[2018-04-30 02:41:53 -0500] [4461] [INFO] Listening at: http://0.0.0.0:8080 (4461)
[2018-04-30 02:41:53 -0500] [4461] [INFO] Using worker: sync
[2018-04-30 02:41:53 -0500] [4464] [INFO] Booting worker with pid: 4464
/Users/sam/App-Setup/anaconda/envs/anaconda35/lib/python3.5/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
[2018-04-30 02:41:53 -0500] [4465] [INFO] Booting worker with pid: 4465
[2018-04-30 02:41:53 -0500] [4466] [INFO] Booting worker with pid: 4466
/Users/sam/App-Setup/anaconda/envs/anaconda35/lib/python3.5/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
/Users/sam/App-Setup/anaconda/envs/anaconda35/lib/python3.5/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
[2018-04-30 02:41:53 -0500] [4467] [INFO] Booting worker with pid: 4467
/Users/sam/App-Setup/anaconda/envs/anaconda35/lib/python3.5/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
[2018-04-30 02:41:53,935] [4464] {models.py:167} INFO - Filling up the DagBag from /Users/sam/All-Program/App/PropertyClassification/dags
[2018-04-30 02:41:54,038] [4465] {models.py:167} INFO - Filling up the DagBag from /Users/sam/All-Program/App/PropertyClassification/dags
[2018-04-30 02:41:54,052] [4466] {models.py:167} INFO - Filling up the DagBag from /Users/sam/All-Program/App/PropertyClassification/dags
[2018-04-30 02:41:54,152] [4467] {models.py:167} INFO - Filling up the DagBag from /Users/sam/All-Program/App/PropertyClassification/dags
2018-04-30 02:41:56.080 python[4464:89245] The application with bundle ID (null) is running setugid(), which is not allowed.
[2018-04-30 02:41:56 -0500] [4480] [INFO] Booting worker with pid: 4480
/Users/sam/App-Setup/anaconda/envs/anaconda35/lib/python3.5/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
2018-04-30 02:41:56.217 python[4465:89249] The application with bundle ID (null) is running setugid(), which is not allowed.
2018-04-30 02:41:56.221 python[4466:89250] The application with bundle ID (null) is running setugid(), which is not allowed.
[2018-04-30 02:41:56 -0500] [4481] [INFO] Booting worker with pid: 4481
[2018-04-30 02:41:56 -0500] [4482] [INFO] Booting worker with pid: 4482
/Users/sam/App-Setup/anaconda/envs/anaconda35/lib/python3.5/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
2018-04-30 02:41:56.323 python[4467:89253] The application with bundle ID (null) is running setugid(), which is not allowed.
/Users/sam/App-Setup/anaconda/envs/anaconda35/lib/python3.5/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
[2018-04-30 02:41:56 -0500] [4483] [INFO] Booting worker with pid: 4483
/Users/sam/App-Setup/anaconda/envs/anaconda35/lib/python3.5/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
[2018-04-30 02:41:56,446] [4480] {models.py:167} INFO - Filling up the DagBag from /Users/sam/All-Program/App/PropertyClassification/dags
[2018-04-30 02:41:56,574] [4481] {models.py:167} INFO - Filling up the DagBag from /Users/sam/All-Program/App/PropertyClassification/dags
[2018-04-30 02:41:56,618] [4482] {models.py:167} INFO - Filling up the DagBag from /Users/sam/All-Program/App/PropertyClassification/dags
[2018-04-30 02:41:56,712] [4483] {models.py:167} INFO - Filling up the DagBag from /Users/sam/All-Program/App/PropertyClassification/dags
2018-04-30 02:41:58.580 python[4480:89485] The application with bundle ID (null) is running setugid(), which is not allowed.
[2018-04-30 02:41:58 -0500] [4498] [INFO] Booting worker with pid: 4498
/Users/sam/App-Setup/anaconda/envs/anaconda35/lib/python3.5/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
I tried to reset the database with "airflow resetdb", and I get the error below:
low/migrations/env.py", line 86, in <module>
run_migrations_online()
File "/Users/sam/App- Setup/anaconda/envs/anaconda35/lib/python3.5/site- packages/airflow/migrations/env.py", line 81, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/Users/sam/App-Setup/anaconda/envs/anaconda35/lib/python3.5/site-packages/alembic/runtime/environment.py", line 807, in run_migrations
self.get_context().run_migrations(**kw)
File "/Users/sam/App-Setup/anaconda/envs/anaconda35/lib/python3.5/site-packages/alembic/runtime/migration.py", line 321, in run_migrations
step.migration_fn(**kw)
File "/Users/sam/App-Setup/anaconda/envs/anaconda35/lib/python3.5/site-packages/airflow/migrations/versions/cc1e65623dc7_add_max_tries_column_to_task_instance.py", line 53, in upgrade
query = session.query(sa.func.count(TaskInstance.max_tries)).filter(
AttributeError: type object 'TaskInstance' has no attribute 'max_tries'
