I have set up Airflow on Windows 10 WSL with Python 3.6.8. I started the scheduler with the airflow scheduler command, but it failed with the following error:
[2020-04-28 19:24:06,500] {base_job.py:205} ERROR - SchedulerJob heartbeat got an exception
Traceback (most recent call last):
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 755, in _commit_impl
self.engine.dialect.do_commit(self.connection)
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 543, in do_commit
dbapi_connection.commit()
sqlite3.OperationalError: disk I/O error
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/akshay/.local/lib/python3.6/site-packages/airflow/jobs/base_job.py", line 173, in heartbeat
previous_heartbeat = self.latest_heartbeat
File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__
next(self.gen)
File "/home/akshay/.local/lib/python3.6/site-packages/airflow/utils/db.py", line 45, in create_session
session.commit()
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 1036, in commit
self.transaction.commit()
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 507, in commit
t[1].commit()
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1736, in commit
self._do_commit()
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1767, in _do_commit
self.connection._commit_impl()
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 757, in _commit_impl
self._handle_dbapi_exception(e, None, None, None, None)
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1482, in _handle_dbapi_exception
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 178, in raise_
raise exception
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 755, in _commit_impl
self.engine.dialect.do_commit(self.connection)
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 543, in do_commit
dbapi_connection.commit()
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) disk I/O error
(Background on this error at: http://sqlalche.me/e/e3q8)
What is the reason for this failure, and how can I resolve it? My Airflow scheduler had been running fine for the last 10 days.
I have a problem launching Jupyter Notebook on a remote Windows computer. Running Jupyter from the conda prompt returns WinError 1326, as shown below:
Traceback (most recent call last):
File "D:\Anaconda3\lib\site-packages\traitlets\traitlets.py", line 535, in get
value = obj._trait_values[self.name]
KeyError: 'runtime_dir'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\Anaconda3\Scripts\jupyter-notebook-script.py", line 10, in <module>
sys.exit(main())
File "D:\Anaconda3\lib\site-packages\jupyter_core\application.py", line 254, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "D:\Anaconda3\lib\site-packages\traitlets\config\application.py", line 844, in launch_instance
app.initialize(argv)
File "D:\Anaconda3\lib\site-packages\traitlets\config\application.py", line 87, in inner
return method(app, *args, **kwargs)
File "D:\Anaconda3\lib\site-packages\notebook\notebookapp.py", line 2127, in initialize
self.init_configurables()
File "D:\Anaconda3\lib\site-packages\notebook\notebookapp.py", line 1650, in init_configurables
connection_dir=self.runtime_dir,
File "D:\Anaconda3\lib\site-packages\traitlets\traitlets.py", line 575, in __get__
return self.get(obj, cls)
File "D:\Anaconda3\lib\site-packages\traitlets\traitlets.py", line 538, in get
default = obj.trait_defaults(self.name)
File "D:\Anaconda3\lib\site-packages\traitlets\traitlets.py", line 1578, in trait_defaults
return self._get_trait_default_generator(names[0])(self)
File "D:\Anaconda3\lib\site-packages\jupyter_core\application.py", line 85, in _runtime_dir_default
ensure_dir_exists(rd, mode=0o700)
File "D:\Anaconda3\lib\site-packages\jupyter_core\utils\__init__.py", line 11, in ensure_dir_exists
os.makedirs(path, mode=mode)
File "D:\Anaconda3\lib\os.py", line 213, in makedirs
makedirs(head, exist_ok=exist_ok)
File "D:\Anaconda3\lib\os.py", line 213, in makedirs
makedirs(head, exist_ok=exist_ok)
File "D:\Anaconda3\lib\os.py", line 213, in makedirs
makedirs(head, exist_ok=exist_ok)
[Previous line repeated 3 more times]
File "D:\Anaconda3\lib\os.py", line 223, in makedirs
mkdir(name, mode)
OSError: [WinError 1326] Username or password is incorrect: 'MY_SERVER_ADDRESS'
I tried changing some values in the config file, but no luck. Please help me fix this problem.
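For reference, here is a small diagnostic sketch (my assumption, using jupyter_core.paths, which is the module the traceback goes through) to see which runtime directory Jupyter is trying to create; pointing JUPYTER_RUNTIME_DIR at a local folder is a possible workaround:

# Sketch: show the runtime directory Jupyter will try to create on startup.
# jupyter_core's JupyterApp uses jupyter_runtime_dir() for its runtime_dir default,
# which is the _runtime_dir_default frame visible in the traceback above.
import os
from jupyter_core.paths import jupyter_runtime_dir

print(jupyter_runtime_dir())  # if this resolves to a network/UNC path, os.makedirs can fail

# Possible workaround (assumption): force a local runtime directory, e.g. by setting
# the JUPYTER_RUNTIME_DIR environment variable before launching the notebook server.
os.environ["JUPYTER_RUNTIME_DIR"] = r"D:\jupyter-runtime"  # hypothetical local path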
This might help. The steps are:
1. Open cmd.
2. Paste in the path of your Jupyter folder.
3. Type "jupyter notebook" and press Enter; you will be redirected to Jupyter.
4. If it still doesn't work, check whether your Python path is configured, then repeat steps 1-3.
Alternatively, you can create another Jupyter folder, configure Python and install Jupyter Notebook there, and then use the steps above.
So it looks like my installation of Apache Airflow on a Google Compute Engine instance broke down. Everything was working great, and then two days ago all the DAG runs started showing up stuck in a running state. I am using the LocalExecutor.
When I try to look at the log I get this error:
*** Log file isn't local.
*** Fetching here: http://:8793/log/collector/aa_main_combined_collector/2017-12-15T09:00:00
*** Failed to fetch log file from worker.
I didn't change any settings. I looked through all the config files and scanned the logs, and I see this error:
[2017-12-16 20:08:42,558] {jobs.py:355} DagFileProcessor0 ERROR - Got an exception! Propagating...
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/airflow/jobs.py", line 347, in helper
pickle_dags)
File "/usr/local/lib/python3.4/dist-packages/airflow/utils/db.py", line 53, in wrapper
result = func(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/airflow/jobs.py", line 1584, in process_file
self._process_dags(dagbag, dags, ti_keys_to_schedule)
File "/usr/local/lib/python3.4/dist-packages/airflow/jobs.py", line 1173, in _process_dags
dag_run = self.create_dag_run(dag)
File "/usr/local/lib/python3.4/dist-packages/airflow/utils/db.py", line 53, in wrapper
result = func(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/airflow/jobs.py", line 763, in create_dag_run
last_scheduled_run = qry.scalar()
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2843, in scalar
ret = self.one()
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2814, in one
ret = self.one_or_none()
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2784, in one_or_none
ret = list(self)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2855, in iter
return self._execute_and_instances(context)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2878, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 945, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/sql/elements.py", line 263, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 1053, in _execute_clauseelement
compiled_sql, distilled_params
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 1189, in _execute_context
context)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 1405, in _handle_dbapi_exception
util.reraise(*exc_info)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 187, in reraise
raise value
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/default.py", line 470, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.4/dist-packages/airflow/bin/cli.py", line 69, in sigint_handler
sys.exit(0)
SystemExit: 0
Any thoughts out there?
I solved this problem, though in doing so I discovered another one.
Long story short: as soon as I manually started the scheduler, everything worked again. It appears the problem was that the scheduler did not get restarted correctly after a system reboot.
I have the scheduler running through systemd. The webserver .service works fine, but I notice that the scheduler .service continually restarts. It appears there is an issue there that I still need to resolve, but this part of the problem is solved for now.
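For reference, a minimal sketch of what such a scheduler unit might look like (the paths and the airflow user here are assumptions, not taken from the setup above); the Restart/RestartSec lines are what bring the scheduler back up after a reboot or crash:

[Unit]
Description=Airflow scheduler daemon
After=network.target

[Service]
User=airflow
Group=airflow
Type=simple
# Adjust the path to wherever the airflow entry point is installed.
ExecStart=/usr/local/bin/airflow scheduler
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target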
Look at the log URL and verify whether it ends with a date containing special characters such as +:
&execution_date=2018-02-23T08:00:00+00:00
This was fixed here.
You can replace the + with -, or encode all of the special characters; in my case:
&execution_date=2018-02-23T08%3A00%3A00%2B00%3A00
This happens here.
The FileTaskHandler cannot load the log from the local disk, so it tries to fetch it from the worker.
Another thing that could be causing this error is the airflow/logs folder, or the subfolders inside it, having been excluded or removed.
I've set up a database connection using sql_alchemy_conn = ibm_db_sa://{USERNAME}:{PASSWORD}@{HOST}:50000/airflow in the airflow.cfg file.
When I run airflow initdb, it raises KeyError: 'ibm_db_sa'. How can I use a DB2 connection with Airflow?
===============
Here is the more specific error message:
airflow initdb
[2017-02-01 15:55:57,135] {__init__.py:36} INFO - Using executor SequentialExecutor
DB: ibm_db_sa://db2inst1:***@localhost:50000/airflow
[2017-02-01 15:55:58,151] {db.py:222} INFO - Creating tables
Traceback (most recent call last):
File "/opt/anaconda/bin/airflow", line 15, in <module>
args.func(args)
File "/opt/anaconda/lib/python2.7/site-packages/airflow/bin/cli.py", line 524, in initdb
db_utils.initdb()
File "/opt/anaconda/lib/python2.7/site-packages/airflow/utils/db.py", line 106, in initdb
upgradedb()
File "/opt/anaconda/lib/python2.7/site-packages/airflow/utils/db.py", line 230, in upgradedb
command.upgrade(config, 'heads')
File "/opt/anaconda/lib/python2.7/site-packages/alembic/command.py", line 174, in upgrade
script.run_env()
File "/opt/anaconda/lib/python2.7/site-packages/alembic/script/base.py", line 416, in run_env
util.load_python_file(self.dir, 'env.py')
File "/opt/anaconda/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 93, in load_python_file
module = load_module_py(module_id, path)
File "/opt/anaconda/lib/python2.7/site-packages/alembic/util/compat.py", line 79, in load_module_py
mod = imp.load_source(module_id, path, fp)
File "/opt/anaconda/lib/python2.7/site-packages/airflow/migrations/env.py", line 74, in <module>
run_migrations_online()
File "/opt/anaconda/lib/python2.7/site-packages/airflow/migrations/env.py", line 65, in run_migrations_online
compare_type=COMPARE_TYPE,
File "<string>", line 8, in configure
File "/opt/anaconda/lib/python2.7/site-packages/alembic/runtime/environment.py", line 773, in configure
opts=opts
File "/opt/anaconda/lib/python2.7/site-packages/alembic/runtime/migration.py", line 159, in configure
return MigrationContext(dialect, connection, opts, environment_context)
File "/opt/anaconda/lib/python2.7/site-packages/alembic/runtime/migration.py", line 103, in __init__
self.impl = ddl.DefaultImpl.get_by_dialect(dialect)(
File "/opt/anaconda/lib/python2.7/site-packages/alembic/ddl/impl.py", line 65, in get_by_dialect
return _impls[dialect.name]
KeyError: 'ibm_db_sa'
Did you install the required package for DB2, e.g. pip install ibm_db_sa? By the way, the PyPI page lists it as compatible with Python 3 only.
Please note that DB2 is not officially supported as a backend for Airflow.
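As a quick sanity check (a sketch, assuming the DSN from the question and that both the ibm_db driver and ibm_db_sa are installed), you can verify that SQLAlchemy resolves the DB2 dialect before running airflow initdb:

import sqlalchemy

# create_engine() resolves the dialect and DBAPI but does not connect yet,
# so this works even without a reachable DB2 server.
engine = sqlalchemy.create_engine("ibm_db_sa://db2inst1:PASSWORD@localhost:50000/airflow")
print(engine.dialect.name)  # expected: 'ibm_db_sa', the key in the KeyError above

Note that this only exercises the SQLAlchemy side; the KeyError in the traceback comes from alembic's own dialect registry, so alembic-side DB2 support also has to be present.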
Does anyone know how I can fix this error that I get when I do a buildout?
An internal error occurred due to a bug in either zc.buildout or in a
recipe being used:
Traceback (most recent call last):
File "/usr/local/Plone/buildout-cache/eggs/zc.buildout-1.7.1-py2.7.egg/zc/buildout/buildout.py", line 1866, in main
getattr(buildout, command)(args)
File "/usr/local/Plone/buildout-cache/eggs/zc.buildout-1.7.1-py2.7.egg/zc/buildout/buildout.py", line 625, in install
installed_files = self[part]._call(recipe.install)
File "/usr/local/Plone/buildout-cache/eggs/zc.buildout-1.7.1-py2.7.egg/zc/buildout/buildout.py", line 1345, in _call
return f()
File "/usr/local/Plone/buildout-cache/eggs/plone.recipe.precompiler-0.6-py2.7.egg/plone/recipe/precompiler/__init__.py", line 29, in install
return self._run()
File "/usr/local/Plone/buildout-cache/eggs/plone.recipe.precompiler-0.6-py2.7.egg/plone/recipe/precompiler/__init__.py", line 35, in _run
self._compile_eggs()
File "/usr/local/Plone/buildout-cache/eggs/plone.recipe.precompiler-0.6-py2.7.egg/plone/recipe/precompiler/__init__.py", line 67, in _compile_eggs
py_compile.compile(fn, None, None, True)
File "/usr/lib/python2.7/py_compile.py", line 123, in compile
with open(cfile, 'wb') as fc:
IOError: [Errno 13] Permission denied: '/usr/local/Plone/zeocluster/products/MyScriptModules/__init__.pyc'
Remove the file that causes the problem, which is:
/usr/local/Plone/zeocluster/products/MyScriptModules/__init__.pyc
Then re-run buildout as the user that removed the above file.
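In other words (a sketch, assuming the standard layout where bin/buildout lives under /usr/local/Plone/zeocluster):

# delete the byte-compiled file the recipe cannot overwrite
rm /usr/local/Plone/zeocluster/products/MyScriptModules/__init__.pyc
# then re-run buildout as the user that owns the buildout tree
cd /usr/local/Plone/zeocluster
bin/buildout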
After a Plone 3.3 -> 4.1 migration I get an exception on the resulting page:
File "/fast/buildout-cache/eggs/plone.app.viewletmanager-2.0.2-py2.6.egg/plone/app/viewletmanager/manager.py", line 85, in render
return u'\n'.join([viewlet.render() for viewlet in self.viewlets])
File "/fast/buildout-cache/eggs/plone.memoize-1.1.1-py2.6.egg/plone/memoize/volatile.py", line 281, in replacement
cached_value = cache.get(key, _marker)
File "/fast/buildout-cache/eggs/plone.memoize-1.1.1-py2.6.egg/plone/memoize/ram.py", line 124, in get
return self.__getitem__(key)
File "/fast/buildout-cache/eggs/plone.memoize-1.1.1-py2.6.egg/plone/memoize/ram.py", line 166, in __getitem__
MARKER)
File "/fast/buildout-cache/eggs/zope.ramcache-1.0-py2.6.egg/zope/ramcache/ram.py", line 107, in query
s = self._getStorage()
File "/fast/buildout-cache/eggs/zope.ramcache-1.0-py2.6.egg/zope/ramcache/ram.py", line 122, in _getStorage
cacheId = self._cacheId
AttributeError: 'RAMCache' object has no attribute '_cacheId'
It looks like the RAMCache object is in an invalid state.
Also, before this I see the following in the logs:
2012-06-21 16:42:54 INFO plone.app.upgrade Ran upgrade step: Miscellaneous
2012-06-21 16:42:54 INFO plone.app.upgrade End of upgrade path, migration has finished
2012-06-21 16:42:54 INFO plone.app.upgrade Your Plone instance is now up-to-date.
2012-06-21 16:43:02 ERROR txn.4553572352 Error in tpc_abort() on manager <Connection at 10be48490>
Traceback (most recent call last):
File "/fast/buildout-cache/eggs/transaction-1.1.1-py2.6.egg/transaction/_transaction.py", line 484, in _cleanup
rm.tpc_abort(self)
File "/fast/buildout-cache/eggs/ZODB3-3.10.5-py2.6-macosx-10.7-x86_64.egg/ZODB/Connection.py", line 730, in tpc_abort
self._storage.tpc_abort(transaction)
File "/fast/buildout-cache/eggs/ZODB3-3.10.5-py2.6-macosx-10.7-x86_64.egg/ZEO/ClientStorage.py", line 1157, in tpc_abort
self._server.tpc_abort(id(txn))
File "/fast/buildout-cache/eggs/ZODB3-3.10.5-py2.6-macosx-10.7-x86_64.egg/ZEO/ServerStub.py", line 255, in tpc_abort
self.rpc.call('tpc_abort', id)
File "/fast/buildout-cache/eggs/ZODB3-3.10.5-py2.6-macosx-10.7-x86_64.egg/ZEO/zrpc/connection.py", line 768, in call
raise inst # error raised by server
OSError: [Errno 2] No such file or directory: '/fast/xxx-2012/var/blobstorage/0x00/0x00/0x00/0x00/0x00/0x07/0xdc/0x55/0x00FWigqp.tmp-'
2012-06-21 16:43:03 ERROR Zope.SiteErrorLog 1340286183.10.000607291180815 http://localhost:9666/xxx/##plone-upgrade
Traceback (innermost last):
Module ZPublisher.Publish, line 134, in publish
Module Zope2.App.startup, line 301, in commit
Module transaction._manager, line 89, in commit
Module transaction._transaction, line 329, in commit
Module transaction._transaction, line 446, in _commitResources
Module ZODB.Connection, line 781, in tpc_vote
Module ZEO.ClientStorage, line 1098, in tpc_vote
Module ZEO.ClientStorage, line 929, in _check_serials
IOError: [Errno 2] No such file or directory: '/fast/xxx-2012/var/blobstorage/0x00/0x00/0x00/0x00/0x00/0x07/0xdd/0xca/0x009kWNYQ.tmp-'
Why might this happen?
Any pointers on how to reinitialize the RAMCache object?
The RAMCache is first referred to by FaviconViewlet, which uses the @memoize decorator, and that is what leads to this error.
Well, your migration obviously did not complete successfully, based on the traceback. So I would focus on figuring out why it failed, rather than working around things like the broken RAMCache, which are likely a result of the migration not having run to completion.
The traceback indicates that it broke while trying to abort the transaction, so you'll probably need to do some debugging to determine what caused it to try to abort, since that's not indicated in the logs you pasted.