I have recently migrated from Airflow 1.10 to Airflow 2 and see the error below:
Broken DAG: [/opt/airflow/dags/wh_braze_dag.py] Traceback (most recent call last):
File "<frozen importlib._bootstrap>", line 416, in parent
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/timeout.py", line 37, in handle_timeout
raise AirflowTaskTimeout(self.error_message)
airflow.exceptions.AirflowTaskTimeout: DagBag import timeout for /opt/airflow/dags/orders_dag.py after 30.0s, PID: 11866
How do I resolve this? It does not happen with Airflow 1.10 (everything works fine there), so why the timeout in 2.0? Has anyone faced this issue?
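For reference, the 30-second limit in the message is the DAG parsing timeout, which is configurable; below is a hedged illustration (not a confirmed fix) of raising it in airflow.cfg, assuming the standard [core] section:
[core]
# Default is 30.0 seconds in Airflow 2. Raising it only papers over a slow
# import; moving heavy work out of module-level DAG code is the longer-term fix.
dagbag_import_timeout = 120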
I have set up Airflow on Windows 10 WSL with Python 3.6.8. I started the Airflow scheduler using the airflow scheduler command, but it raised the following error:
[2020-04-28 19:24:06,500] {base_job.py:205} ERROR - SchedulerJob heartbeat got an exception
Traceback (most recent call last):
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 755, in _commit_impl
self.engine.dialect.do_commit(self.connection)
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 543, in do_commit
dbapi_connection.commit()
sqlite3.OperationalError: disk I/O error
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/akshay/.local/lib/python3.6/site-packages/airflow/jobs/base_job.py", line 173, in heartbeat
previous_heartbeat = self.latest_heartbeat
File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__
next(self.gen)
File "/home/akshay/.local/lib/python3.6/site-packages/airflow/utils/db.py", line 45, in create_session
session.commit()
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 1036, in commit
self.transaction.commit()
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 507, in commit
t[1].commit()
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1736, in commit
self._do_commit()
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1767, in _do_commit
self.connection._commit_impl()
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 757, in _commit_impl
self._handle_dbapi_exception(e, None, None, None, None)
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1482, in _handle_dbapi_exception
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 178, in raise_
raise exception
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 755, in _commit_impl
self.engine.dialect.do_commit(self.connection)
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 543, in do_commit
dbapi_connection.commit()
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) disk I/O error
(Background on this error at: http://sqlalche.me/e/e3q8)
What is the reason for this failure? What can be done to resolve it, given that my Airflow scheduler had been running fine for the last 10 days?
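Not a confirmed diagnosis, but since the "disk I/O error" comes straight from SQLite, the location and health of the metadata database file are worth checking. Below is a sketch of the relevant airflow.cfg setting; the path is an assumption based on the home directory in the traceback:
[core]
# Keep the SQLite file on the Linux filesystem (not a Windows mount), and verify
# the file and its directory are writable and the disk has free space.
sql_alchemy_conn = sqlite:////home/akshay/airflow/airflow.db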
I am trying to configure remote logging with Azure blob.
Airflow version: 1.10.2
Python: 3.6.5
Ubuntu: 18.04
Following are the steps I did:
In $AIRFLOW_HOME/config/log_config.py, I have put REMOTE_BASE_LOG_FOLDER = 'wasb-airflow-logs' (this is a folder inside the container named airflow-logs)
An empty __init__.py is in $AIRFLOW_HOME/config/
$AIRFLOW_HOME/config/ is added in $PYTHONPATH
Renamed DEFAULT_LOGGING_CONFIG to LOGGING_CONFIG everywhere in $AIRFLOW_HOME/config/log_config.py (a minimal sketch of this file is shown after these steps)
User defined in Airflow blob connection has read/write access to REMOTE_BASE_LOG_FOLDER
In $AIRFLOW_HOME/airflow.cfg:
remote_logging = True
logging_config_class = log_config.LOGGING_CONFIG
remote_log_conn_id =
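A minimal sketch of what the $AIRFLOW_HOME/config/log_config.py described above might look like, assuming it began as a copy of Airflow's default logging template (the actual file is a full copy with DEFAULT_LOGGING_CONFIG renamed; the import-and-copy below is just an equivalent shorthand, and the WASB handler entries themselves come from the copied template):
from copy import deepcopy

from airflow.config_templates.airflow_local_settings import DEFAULT_LOGGING_CONFIG

# Folder inside the blob container (container name: airflow-logs) for task logs.
REMOTE_BASE_LOG_FOLDER = 'wasb-airflow-logs'

# Renamed dict, referenced from airflow.cfg as log_config.LOGGING_CONFIG.
LOGGING_CONFIG = deepcopy(DEFAULT_LOGGING_CONFIG)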
Following is the error:
Unable to load the config, contains a configuration error.
Traceback (most recent call last):
File "/home/gsingh/anaconda3/lib/python3.6/logging/config.py", line 382, in resolve
found = getattr(found, frag)
AttributeError: module 'airflow.utils.log' has no attribute 'wasb_task_handler'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/gsingh/anaconda3/lib/python3.6/logging/config.py", line 384, in resolve
self.importer(used)
File "/home/gsingh/venv/lib/python3.6/site-packages/airflow/utils/log/wasb_task_handler.py", line 23, in <module>
from airflow.contrib.hooks.wasb_hook import WasbHook
File "/home/gsingh/venv/lib/python3.6/site-packages/airflow/contrib/hooks/wasb_hook.py", line 22, in <module>
from airflow.hooks.base_hook import BaseHook
File "/home/gsingh/venv/lib/python3.6/site-packages/airflow/hooks/base_hook.py", line 28, in <module>
from airflow.models import Connection
File "/home/gsingh/venv/lib/python3.6/site-packages/airflow/models.py", line 86, in <module>
from airflow.utils.dag_processing import list_py_file_paths
File "/home/gsingh/venv/lib/python3.6/site-packages/airflow/utils/dag_processing.py", line 49, in <module>
from airflow.settings import logging_class_path
ImportError: cannot import name 'logging_class_path'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/gsingh/anaconda3/lib/python3.6/logging/config.py", line 558, in configure
handler = self.configure_handler(handlers[name])
File "/home/gsingh/anaconda3/lib/python3.6/logging/config.py", line 708, in configure_handler
klass = self.resolve(cname)
File "/home/gsingh/anaconda3/lib/python3.6/logging/config.py", line 391, in resolve
raise v
File "/home/gsingh/anaconda3/lib/python3.6/logging/config.py", line 384, in resolve
self.importer(used)
File "/home/gsingh/venv/lib/python3.6/site-packages/airflow/utils/log/wasb_task_handler.py", line 23, in <module>
from airflow.contrib.hooks.wasb_hook import WasbHook
File "/home/gsingh/venv/lib/python3.6/site-packages/airflow/contrib/hooks/wasb_hook.py", line 22, in <module>
from airflow.hooks.base_hook import BaseHook
File "/home/gsingh/venv/lib/python3.6/site-packages/airflow/hooks/base_hook.py", line 28, in <module>
from airflow.models import Connection
File "/home/gsingh/venv/lib/python3.6/site-packages/airflow/models.py", line 86, in <module>
from airflow.utils.dag_processing import list_py_file_paths
File "/home/gsingh/venv/lib/python3.6/site-packages/airflow/utils/dag_processing.py", line 49, in <module>
from airflow.settings import logging_class_path
ValueError: Cannot resolve 'airflow.utils.log.wasb_task_handler.WasbTaskHandler': cannot import name 'logging_class_path'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/gsingh/venv/bin/airflow", line 21, in <module>
from airflow import configuration
File "/home/gsingh/venv/lib/python3.6/site-packages/airflow/__init__.py", line 36, in <module>
from airflow import settings, configuration as conf
File "/home/gsingh/venv/lib/python3.6/site-packages/airflow/settings.py", line 262, in <module>
logging_class_path = configure_logging()
File "/home/gsingh/venv/lib/python3.6/site-packages/airflow/logging_config.py", line 73, in configure_logging
raise e
File "/home/gsingh/venv/lib/python3.6/site-packages/airflow/logging_config.py", line 68, in configure_logging
dictConfig(logging_config)
File "/home/gsingh/anaconda3/lib/python3.6/logging/config.py", line 795, in dictConfig
dictConfigClass(config).configure()
File "/home/gsingh/anaconda3/lib/python3.6/logging/config.py", line 566, in configure
'%r: %s' % (name, e))
ValueError: Unable to configure handler 'processor': Cannot resolve 'airflow.utils.log.wasb_task_handler.WasbTaskHandler': cannot import name 'logging_class_path'
I am not sure which configuration I am missing. Has anyone faced the same issue?
You need to install the Azure package:
pip install 'apache-airflow[azure_blob_storage,azure_data_lake,azure_cosmos,azure_container_instances]'
As per UPDATING.md, this should now be installed with
pip install apache-airflow[azure]
but this didn't work for me.
sudo chown 50000:0 dags logs plugins fixed it in my case.
I tried to run the official docker-compose.yml with all its containers (which depend on these three volume mounts), and also to wrap airflow standalone into a single container for debugging. It turned out the volumes had been created with root ownership instead of the airflow user's.
I had the same error; however, when I scrolled up higher I could see that another exception was thrown before the ValueError: a PermissionError.
PermissionError: [Errno 13] Permission denied: '/usr/local/airflow/logs/scheduler'
The reason I got that error is that I didn't create the initial three folders (dags, logs, plugins) before running the Airflow Docker container, so Docker created them automatically, but with the wrong permissions.
Steps to fix (a consolidated command sketch follows these steps):
Stop the current containers:
docker-compose down --volumes --remove-orphans
Delete the dags, logs, and plugins folders.
Just in case, destroy the images and volumes already created (in Docker Desktop).
Create the folders again from the command line:
mkdir logs dags plugins
Run the Airflow Docker setup again:
docker-compose up airflow-init
docker-compose up
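Putting the two answers above together, a consolidated sketch of the commands, assuming the official docker-compose.yml is used from the project root (the 50000:0 owner matches the chown suggested above; skip the rm if anything important lives in those folders):
docker-compose down --volumes --remove-orphans
rm -rf dags logs plugins          # only if the folders hold nothing you need
mkdir -p dags logs plugins
sudo chown 50000:0 dags logs plugins
docker-compose up airflow-init
docker-compose up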
I'm running eventlet.monkey_patch() while trying to spin up a Flask server that uses Flask-SocketIO. This is the traceback:
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib64/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib64/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/home/alhasan/MeetupPoint/venv/lib/python3.6/site-packages/werkzeug/serving.py", line 777, in inner
srv.serve_forever()
File "/home/alhasan/MeetupPoint/venv/lib/python3.6/site-packages/werkzeug/serving.py", line 612, in serve_forever
HTTPServer.serve_forever(self)
File "/usr/lib64/python3.6/socketserver.py", line 232, in serve_forever
with _ServerSelector() as selector:
File "/usr/lib64/python3.6/selectors.py", line 348, in __init__
self._poll = select.poll()
AttributeError: module 'select' has no attribute 'poll'
I tried using monkey_patch() because I had previously encountered the following error:
RuntimeError: You need to use the eventlet server. See the Deployment section of the documentation for more information.
I have eventlet installed.
...
eventlet==0.23.0
Flask==0.12.2
Flask-Migrate==2.1.1
Flask-Script==2.0.6
Flask-SocketIO==3.0.1
...
Is there a fix for this?
My initial problem was that my server returned bad requests every time I tried to emit a message from the client, but the other direction works fine. I would really appreciate any kind of solution. :)
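For context, here is a minimal sketch (not the poster's code) of how an eventlet-based Flask-SocketIO server is usually wired up: monkey_patch() has to run before anything imports the standard-library networking modules, and the app is started through socketio.run() instead of Werkzeug's built-in server, which is the serve_forever() path in the traceback above. The app name, host, and port are illustrative assumptions.
import eventlet
eventlet.monkey_patch()  # must run before Flask/Werkzeug/select are imported

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, async_mode='eventlet')

if __name__ == '__main__':
    # socketio.run() uses the eventlet server when eventlet is installed,
    # avoiding the Werkzeug serve_forever() path from the traceback.
    socketio.run(app, host='0.0.0.0', port=5000)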
I'm taking advantage of the fact that Airflow v1.7.1.3 provides access to airflow.cfg to place some configuration values there rather than embedding them in the code. We added the following as the first lines of the airflow.cfg file:
[foo]
bar = foo
    bar
In foobarDAG.py, the module defining the DAG, I do the following:
from airflow.configuration import conf
…
def fooBar():
    pass

foobarList = conf['foo']['bar'].split('\n')

foobarOperator = PythonOperator(
    task_id='fooBar',
    provide_context=True,
    python_callable=fooBar,
    op_args=[foobarList],
    dag=dag)
Testing this manually from the Python prompt is easy:
>>> from foobarDAG import foobarList
…
>>> foobarList
['foo', 'bar']
That's just what I would expect from the information in airflow.cfg, above.
We've also performed a test on the DAG directly:
airflow test foobarDAG fooBar 10-19-2016
That doesn't report any problems.
The problem crops up when we try to use the scheduler to schedule that one DAG:
airflow scheduler -d foobarDAG >& foobar_log.txt
In the web UI, we see the following at the top of the "DAGS" section:
Broken DAG: [/path/to/…/foobarDAG.py] 'foo'
And in foobar_log.txt, here is the error message:
[2016-10-19 14:56:09,028] {models.py:250} ERROR - Failed to import: /path/to/foobarDAG.py
Traceback (most recent call last):
File "/path/to/airflow/models.py", line 247, in process_file
m = imp.load_source(mod_name, filepath)
File "/path/to/anaconda3/envs/foobarenv/lib/python3.5/imp.py", line 172, in load_source
module = _load(spec)
File "<frozen importlib._bootstrap>", line 693, in _load
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 662, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "/path/to/foobarDAG.py", line 67, in <module>
foobarList = conf['foo']['bar'].split('\n')
File "/path/to/anaconda3/envs/foobarenv/lib/python3.5/configparser.py", line 956, in __getitem__
raise KeyError(key)
KeyError: 'foo'
So oddly it appears that the scheduler isn't retrieving the ['foo'] section from airflow.cfg and providing it to the DAG. Any idea why?
It turns out that everything was working properly, but the scheduler hadn't been restarted. The scheduler was apparently still using the old airflow.cfg which did not have the added section.
Can anyone please explain what could be wrong here?
I have a fresh Plone 4.1.4 installation via buildout and a fresh out-of-the-box Plone site created (no work has been done on the site). After running the ./bin/test --all test suite (just out of curiosity), it gives lots of errors like the following:
Mik#S-linux:/Plone414/PLONE414/zinstance>
./bin/test --all
./bin/test:239: DeprecationWarning: zope.testing.testrunner is deprecated in favour of zope.testrunner.
/Plone414/PLONE414/buildout-cache/eggs/zope.testing-3.9.7-py2.6.egg/zope/testing/testrunner/formatter.py:28: DeprecationWarning: zope.testing.exceptions is deprecated in favour of zope.testrunner.exceptions
  from zope.testing.exceptions import DocTestFailureException
Running Testing.ZopeTestCase.layer.ZopeLite tests:
  Set up Testing.ZopeTestCase.layer.ZopeLite in 0.071 seconds.
  Running:
    8/44 (18.2%)

Failure in test testDateTime (Products.DocFinderTab.tests.testAnalyse.TestAnalyse)
Traceback (most recent call last):
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 279, in run
    testMethod()
  File "/Plone414/PLONE414/buildout-cache/eggs/Products.DocFinderTab-1.0.5-py2.6.egg/Products/DocFinderTab/tests/testAnalyse.py", line 198, in testDateTime
    self.assertEqual(self.ob.getdoc('_DateTime').Type(), 'DateTime')
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 350, in failUnlessEqual
    (msg or '%r != %r' % (first, second))
AssertionError: 'DateTime instance' != 'DateTime'

  Ran 44 tests with 1 failures and 0 errors in 1.376 seconds.
Running zope.testing.testrunner.layer.UnitTests tests:
  Tear down Testing.ZopeTestCase.layer.ZopeLite in 0.000 seconds.
  Set up zope.testing.testrunner.layer.UnitTests in 0.000 seconds.
  Running:
    2/47 (4.3%)

Failure in test test_search_modules (plone.reload.tests.test_code.TestSearch)
Traceback (most recent call last):
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 279, in run
    testMethod()
  File "/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/tests/test_code.py", line 33, in test_search_modules
    self.assertTrue(found)
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 325, in failUnless
    if not expr: raise self.failureException, msg
AssertionError
    5/47 (10.6%)

Error in test test_check_mod_times_change (plone.reload.tests.test_code.TestTimes)
Traceback (most recent call last):
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 279, in run
    testMethod()
  File "/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/tests/test_code.py", line 82, in test_check_mod_times_change
    our_entry = MOD_TIMES[our_package]
KeyError: '/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/__init__.pyc'
    8/47 (17.0%)

Failure in test test_get_mod_times (plone.reload.tests.test_code.TestTimes)
Traceback (most recent call last):
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 279, in run
    testMethod()
  File "/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/tests/test_code.py", line 70, in test_get_mod_times
    self.assertTrue(our_package in times)
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 325, in failUnless
    if not expr: raise self.failureException, msg
AssertionError
    10/47 (21.3%)

Error in test test_reload_code_change (plone.reload.tests.test_code.TestTimes)
Traceback (most recent call last):
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 279, in run
    testMethod()
  File "/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/tests/test_code.py", line 98, in test_reload_code_change
    our_entry = MOD_TIMES[our_package]
KeyError: '/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/__init__.pyc'

  Ran 47 tests with 2 failures and 2 errors in 0.102 seconds.
Tearing down left over layers:
  Tear down zope.testing.testrunner.layer.UnitTests in 0.000 seconds.
Total: 91 tests, 3 failures, 2 errors in 1.682 seconds.
This isn't a supported way to run the tests. Some of the tests for the components of Plone change global state and then do not clean up after themselves, causing failures in tests that run later which depended on that state being a certain way. The environment we use to develop Plone, buildout.coredev, uses the plone.recipe.alltests buildout recipe to set up a script that can run all the tests successfully by isolating some packages from others.
This is of course not ideal, but it's a pragmatic solution until someone does the work to find and solve the test isolation problems.
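For illustration only, a rough sketch of the kind of buildout part that plone.recipe.alltests sets up; the section name and egg list are assumptions patterned after buildout.coredev, not copied from it:
[buildout]
parts += alltests

[alltests]
recipe = plone.recipe.alltests
eggs = Plone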