Cinder Driver failed to start - openstack

I have written a Cinder driver, but it is failing to start. It gives me the following error:
2016-11-29 17:01:56.807 INFO cinder.volume.manager [req-4fe59a59-bb73-48e4-bfca-e8730e9a74c8 None None] Determined volume DB was empty at startup.
2016-11-29 17:01:56.808 DEBUG cinder.volume.manager [req-4fe59a59-bb73-48e4-bfca-e8730e9a74c8 None None] Cinder Volume DB check: vol_db_empty=True from (pid=25266) __init__ /opt/stack/cinder/cinder/volume/manager.py:193
2016-11-29 17:01:56.837 WARNING cinder.keymgr.conf_key_mgr [req-4fe59a59-bb73-48e4-bfca-e8730e9a74c8 None None] This key manager is insecure and is not recommended for production deployments
2016-11-29 17:01:56.916 ERROR cinder.cmd.volume [req-4fe59a59-bb73-48e4-bfca-e8730e9a74c8 None None] Volume service akdevstck@ixsystems-iscsi failed to start.
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume Traceback (most recent call last):
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume File "/opt/stack/cinder/cinder/cmd/volume.py", line 100, in main
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume cluster=cluster)
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume File "/opt/stack/cinder/cinder/service.py", line 387, in create
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume cluster=cluster)
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume File "/opt/stack/cinder/cinder/service.py", line 206, in __init__
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume *args, **kwargs)
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume File "/opt/stack/cinder/cinder/volume/manager.py", line 226, in __init__
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume active_backend_id=curr_active_backend_id)
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume File "/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 44, in import_object
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume return import_class(import_str)(*args, **kwargs)
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume File "/opt/stack/cinder/cinder/volume/drivers/ixsystems/iscsi.py", line 57, in __init__
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume self.configuration.ixsystems_iqn_prefix += ':'
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume File "/opt/stack/cinder/cinder/volume/configuration.py", line 80, in __getattr__
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume return getattr(local_conf, value)
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume File "/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3120, in __getattr__
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume return self._conf._get(name, self._group)
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume File "/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2731, in _get
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume value = self._do_get(name, group, namespace)
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume File "/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2761, in _do_get
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume if opt.mutable and namespace is None:
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume AttributeError: 'StrOpt' object has no attribute 'mutable'
2016-11-29 17:01:56.916 TRACE cinder.cmd.volume
2016-11-29 17:01:57.190 ERROR cinder.cmd.volume [req-4fe59a59-bb73-48e4-bfca-e8730e9a74c8 None None] No volume service(s) started successfully, terminating.
c-vol failed to start
stack@akdevstck:~/devstack$
Please help me sort out the problem.

Try filing a bug against Cinder and provide enough environment information there. The Cinder developers are willing to help, and you can also raise your issue directly on the IRC channel #openstack-cinder.


Airflow scheduler gets stuck

I have set up Airflow on Windows 10 WSL with Python 3.6.8. I started the scheduler with the airflow scheduler command, but it hit the following error:
[2020-04-28 19:24:06,500] {base_job.py:205} ERROR - SchedulerJob heartbeat got an exception
Traceback (most recent call last):
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 755, in _commit_impl
self.engine.dialect.do_commit(self.connection)
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 543, in do_commit
dbapi_connection.commit()
sqlite3.OperationalError: disk I/O error
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/akshay/.local/lib/python3.6/site-packages/airflow/jobs/base_job.py", line 173, in heartbeat
previous_heartbeat = self.latest_heartbeat
File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__
next(self.gen)
File "/home/akshay/.local/lib/python3.6/site-packages/airflow/utils/db.py", line 45, in create_session
session.commit()
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 1036, in commit
self.transaction.commit()
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 507, in commit
t[1].commit()
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1736, in commit
self._do_commit()
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1767, in _do_commit
self.connection._commit_impl()
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 757, in _commit_impl
self._handle_dbapi_exception(e, None, None, None, None)
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1482, in _handle_dbapi_exception
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 178, in raise_
raise exception
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 755, in _commit_impl
self.engine.dialect.do_commit(self.connection)
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 543, in do_commit
dbapi_connection.commit()
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) disk I/O error
(Background on this error at: http://sqlalche.me/e/e3q8)
What is the reason for this failure? What can be the solution, given that my Airflow scheduler had been running fine for the last 10 days?
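SQLite reports "disk I/O error" when the database file or its journal can no longer be read or written: a full disk, a dropped WSL mount, or a file handle invalidated across a Windows sleep/resume are typical culprits. One way to check whether the metadata database itself is damaged is SQLite's own integrity check (the default Airflow SQLite path shown in the usage comment is an assumption):

```python
import sqlite3

def integrity_check(db_path):
    """Ask SQLite whether the database file is intact.

    Returns the string 'ok' for a healthy file; anything else (or an
    exception) means the file or its journal is damaged or unreadable.
    """
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute("PRAGMA integrity_check;").fetchone()[0]
    finally:
        conn.close()

# Usage against the Airflow metadata DB (path is an assumption):
# print(integrity_check("/home/akshay/airflow/airflow.db"))
```

If the check fails, restoring or recreating the metadata DB (airflow initdb in this Airflow version) is likely unavoidable; if it passes, the problem is more likely transient I/O on the WSL filesystem.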

Airflow BashOperator UnicodeEncodeError

I'm using Airflow 1.10.0 on Python 3.5 and I'm encountering this encoding error in logging.
The operator uses the default output_encoding, which is already utf-8.
task_compile = BashOperator(
    task_id='task_compile',
    retries=1,
    retry_delay=timedelta(minutes=5),
    bash_command='/root/docker/tools/compile.sh',
    dag=dag
)
task_compile.set_downstream(task_last)
The shell script spins up a Docker container and runs composer install. I tested with another simple composer install task and nothing failed; the error only occurs with a certain set of dependencies. As shown in the stack trace, the module responsible for the exception is file_task_handler.py, when it emits the line to be logged into the log file.
[2018-09-19 20:42:18,708] {bash_operator.py:111} INFO - Package operations: 134 installs, 0 updates, 0 removals
[2018-09-19 20:42:18,790] {bash_operator.py:111} INFO - - Installing ocramius/package-versions (1.3.0): Downloading (100%)
[2018-09-19 20:42:18,850] {bash_operator.py:111} INFO - - Installing symfony/flex (v1.1.1): Downloading (100%)
[2018-09-19 20:42:18,897] {bash_operator.py:111} INFO -
[2018-09-19 20:42:18,898] {logging_mixin.py:95} WARNING - --- Logging error ---
[2018-09-19 16:12:51,554] {logging_mixin.py:95} WARNING - --- Logging error ---
[2018-09-19 16:12:51,555] {logging_mixin.py:95} WARNING - Traceback (most recent call last):
[2018-09-19 16:12:51,555] {logging_mixin.py:95} WARNING - File "/usr/lib/python3.5/logging/__init__.py", line 983, in emit
stream.write(msg)
[2018-09-19 16:12:51,555] {logging_mixin.py:95} WARNING - UnicodeEncodeError: 'ascii' codec can't encode character '\U0001f3b6' in position 81: ordinal not in range(128)
[2018-09-19 16:12:51,555] {logging_mixin.py:95} WARNING - Call stack:
[2018-09-19 16:12:51,557] {logging_mixin.py:95} WARNING - File "/usr/local/bin/airflow", line 32, in <module>
args.func(args)
[2018-09-19 16:12:51,557] {logging_mixin.py:95} WARNING - File "/usr/local/lib/python3.5/dist-packages/airflow/utils/cli.py", line 74, in wrapper
return f(*args, **kwargs)
[2018-09-19 16:12:51,557] {logging_mixin.py:95} WARNING - File "/usr/local/lib/python3.5/dist-packages/airflow/bin/cli.py", line 498, in run
_run(args, dag, ti)
[2018-09-19 16:12:51,558] {logging_mixin.py:95} WARNING - File "/usr/local/lib/python3.5/dist-packages/airflow/bin/cli.py", line 402, in _run
pool=args.pool,
[2018-09-19 16:12:51,558] {logging_mixin.py:95} WARNING - File "/usr/local/lib/python3.5/dist-packages/airflow/utils/db.py", line 74, in wrapper
return func(*args, **kwargs)
[2018-09-19 16:12:51,558] {logging_mixin.py:95} WARNING - File "/usr/local/lib/python3.5/dist-packages/airflow/models.py", line 1633, in _run_raw_task
result = task_copy.execute(context=context)
[2018-09-19 16:12:51,558] {logging_mixin.py:95} WARNING - File "/usr/local/lib/python3.5/dist-packages/airflow/operators/bash_operator.py", line 110, in execute
self.log.info(line)
[2018-09-19 16:12:51,558] {logging_mixin.py:95} WARNING - File "/usr/lib/python3.5/logging/__init__.py", line 1280, in info
self._log(INFO, msg, args, **kwargs)
[2018-09-19 16:12:51,558] {logging_mixin.py:95} WARNING - File "/usr/lib/python3.5/logging/__init__.py", line 1416, in _log
self.handle(record)
[2018-09-19 16:12:51,558] {logging_mixin.py:95} WARNING - File "/usr/lib/python3.5/logging/__init__.py", line 1426, in handle
self.callHandlers(record)
[2018-09-19 16:12:51,558] {logging_mixin.py:95} WARNING - File "/usr/lib/python3.5/logging/__init__.py", line 1488, in callHandlers
hdlr.handle(record)
[2018-09-19 16:12:51,558] {logging_mixin.py:95} WARNING - File "/usr/lib/python3.5/logging/__init__.py", line 856, in handle
self.emit(record)
[2018-09-19 16:12:51,558] {logging_mixin.py:95} WARNING - File "/usr/local/lib/python3.5/dist-packages/airflow/utils/log/file_task_handler.py", line 61, in emit
self.handler.emit(record)
The issue is that the locale in the container is not set to UTF-8.
I faced a similar issue and was able to resolve it by adding the environment variable LANG=en_US.UTF-8 to the supervisord configuration and restarting supervisord.
I use supervisor to start the Airflow scheduler, webserver and flower.
Note: this environment variable needs to be added on all the Airflow worker nodes as well.
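For reference, the supervisord change looks roughly like this (the section names and commands below are assumptions; use whatever program sections your setup already defines):

```ini
[program:airflow-scheduler]
command=airflow scheduler
environment=LANG="en_US.UTF-8"

[program:airflow-webserver]
command=airflow webserver
environment=LANG="en_US.UTF-8"
```

After editing, reload with supervisorctl reread followed by supervisorctl update (or restart supervisord) so the processes pick up the new environment.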

PyInstaller - OSError: [Errno 2] No such file or directory

I run into "OSError: [Errno 2] No such file or directory" while running PyInstaller. Could someone point out what needs to be installed or done to solve it?
Below is the error message.
root@mylinkit:/usr# pyinstaller t123.py
2999 INFO: PyInstaller: 3.2.1
3002 INFO: Python: 2.7.12
3013 INFO: Platform: Linux-3.18.44-mips-with-glibc2.0
3026 INFO: wrote /usr/t123.spec
3069 INFO: UPX is not available.
3089 INFO: Extending PYTHONPATH with paths
['/usr', '/usr']
3092 INFO: checking Analysis
3258 INFO: checking PYZ
3346 INFO: checking PKG
3356 INFO: Bootloader /usr/lib/python2.7/site-packages/PyInstaller/bootloader/Linux-32bit/run
3358 INFO: checking EXE
3360 INFO: Building EXE because out00-EXE.toc is non existent
3362 INFO: Building EXE from out00-EXE.toc
3575 INFO: Appending archive to ELF section in EXE /usr/build/t123/t123
Traceback (most recent call last):
File "/usr/bin/pyinstaller", line 9, in <module>
load_entry_point('PyInstaller==3.2.1', 'console_scripts', 'pyinstaller')()
File "/usr/lib/python2.7/site-packages/PyInstaller/__main__.py", line 90, in run
run_build(pyi_config, spec_file, **vars(args))
File "/usr/lib/python2.7/site-packages/PyInstaller/__main__.py", line 46, in run_build
PyInstaller.building.build_main.main(pyi_config, spec_file, **kwargs)
File "/usr/lib/python2.7/site-packages/PyInstaller/building/build_main.py", line 788, in main
build(specfile, kw.get('distpath'), kw.get('workpath'), kw.get('clean_build'))
File "/usr/lib/python2.7/site-packages/PyInstaller/building/build_main.py", line 734, in build
exec(text, spec_namespace)
File "<string>", line 26, in <module>
File "/usr/lib/python2.7/site-packages/PyInstaller/building/api.py", line 411, in __init__
self.__postinit__()
File "/usr/lib/python2.7/site-packages/PyInstaller/building/datastruct.py", line 161, in __postinit__
self.assemble()
File "/usr/lib/python2.7/site-packages/PyInstaller/building/api.py", line 563, in assemble
self.name)
File "/usr/lib/python2.7/site-packages/PyInstaller/compat.py", line 486, in exec_command_all
stdout=subprocess.PIPE, stderr=subprocess.PIPE, **kwargs)
File "/usr/lib/python2.7/subprocess.py", line 711, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1343, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
root@mylinkit:/usr#
I've done some investigation and I believe binutils may be the missing dependency:
apt-get install binutils
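That diagnosis fits the traceback: the failure is inside PyInstaller's exec_command_all, i.e. while shelling out to an external tool during the "Appending archive to ELF section" step (on Linux this involves objcopy from binutils; treat the exact tool as an assumption). When subprocess cannot find the executable at all, it raises exactly this bare OSError without ever naming the command:

```python
import subprocess

# Reproduce the failure mode: launching a program that does not exist on
# PATH raises OSError with errno 2 (ENOENT), and the traceback never
# says which command was missing -- just like the PyInstaller error above.
try:
    subprocess.Popen(["definitely-not-installed-tool-xyz"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except OSError as exc:
    print(exc.errno)  # 2
```

So if installing binutils makes objcopy (and strip) available, the subprocess call succeeds and the error goes away.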

Plone 4 buildout error

Does anyone know how I can fix this error that I get when I do a buildout?
An internal error occurred due to a bug in either zc.buildout or in a
recipe being used:
Traceback (most recent call last):
File "/usr/local/Plone/buildout-cache/eggs/zc.buildout-1.7.1-py2.7.egg/zc/buildout/buildout.py", line 1866, in main
getattr(buildout, command)(args)
File "/usr/local/Plone/buildout-cache/eggs/zc.buildout-1.7.1-py2.7.egg/zc/buildout/buildout.py", line 625, in install
installed_files = self[part]._call(recipe.install)
File "/usr/local/Plone/buildout-cache/eggs/zc.buildout-1.7.1-py2.7.egg/zc/buildout/buildout.py", line 1345, in _call
return f()
File "/usr/local/Plone/buildout-cache/eggs/plone.recipe.precompiler-0.6-py2.7.egg/plone/recipe/precompiler/__init__.py", line 29, in install
return self._run()
File "/usr/local/Plone/buildout-cache/eggs/plone.recipe.precompiler-0.6-py2.7.egg/plone/recipe/precompiler/__init__.py", line 35, in _run
self._compile_eggs()
File "/usr/local/Plone/buildout-cache/eggs/plone.recipe.precompiler-0.6-py2.7.egg/plone/recipe/precompiler/__init__.py", line 67, in _compile_eggs
py_compile.compile(fn, None, None, True)
File "/usr/lib/python2.7/py_compile.py", line 123, in compile
with open(cfile, 'wb') as fc:
IOError: [Errno 13] Permission denied: '/usr/local/Plone/zeocluster/products/MyScriptModules/__init__.pyc'
Remove the file that causes the problem, which is:
/usr/local/Plone/zeocluster/products/MyScriptModules/__init__.pyc
Then re-run buildout as the user that removed the above file.

3.3 -> 4.1 migration fails, busted RAMCache AttributeError: 'RAMCache' object has no attribute '_cacheId'

After a 3.3 -> 4.1 migration I get an exception on the resulting page:
File "/fast/buildout-cache/eggs/plone.app.viewletmanager-2.0.2-py2.6.egg/plone/app/viewletmanager/manager.py", line 85, in render
return u'\n'.join([viewlet.render() for viewlet in self.viewlets])
File "/fast/buildout-cache/eggs/plone.memoize-1.1.1-py2.6.egg/plone/memoize/volatile.py", line 281, in replacement
cached_value = cache.get(key, _marker)
File "/fast/buildout-cache/eggs/plone.memoize-1.1.1-py2.6.egg/plone/memoize/ram.py", line 124, in get
return self.__getitem__(key)
File "/fast/buildout-cache/eggs/plone.memoize-1.1.1-py2.6.egg/plone/memoize/ram.py", line 166, in __getitem__
MARKER)
File "/fast/buildout-cache/eggs/zope.ramcache-1.0-py2.6.egg/zope/ramcache/ram.py", line 107, in query
s = self._getStorage()
File "/fast/buildout-cache/eggs/zope.ramcache-1.0-py2.6.egg/zope/ramcache/ram.py", line 122, in _getStorage
cacheId = self._cacheId
AttributeError: 'RAMCache' object has no attribute '_cacheId'
It looks like the RAMCache object is in an invalid state.
Also, before this, I'm seeing the following in the logs:
2012-06-21 16:42:54 INFO plone.app.upgrade Ran upgrade step: Miscellaneous
2012-06-21 16:42:54 INFO plone.app.upgrade End of upgrade path, migration has finished
2012-06-21 16:42:54 INFO plone.app.upgrade Your Plone instance is now up-to-date.
2012-06-21 16:43:02 ERROR txn.4553572352 Error in tpc_abort() on manager <Connection at 10be48490>
Traceback (most recent call last):
File "/fast/buildout-cache/eggs/transaction-1.1.1-py2.6.egg/transaction/_transaction.py", line 484, in _cleanup
rm.tpc_abort(self)
File "/fast/buildout-cache/eggs/ZODB3-3.10.5-py2.6-macosx-10.7-x86_64.egg/ZODB/Connection.py", line 730, in tpc_abort
self._storage.tpc_abort(transaction)
File "/fast/buildout-cache/eggs/ZODB3-3.10.5-py2.6-macosx-10.7-x86_64.egg/ZEO/ClientStorage.py", line 1157, in tpc_abort
self._server.tpc_abort(id(txn))
File "/fast/buildout-cache/eggs/ZODB3-3.10.5-py2.6-macosx-10.7-x86_64.egg/ZEO/ServerStub.py", line 255, in tpc_abort
self.rpc.call('tpc_abort', id)
File "/fast/buildout-cache/eggs/ZODB3-3.10.5-py2.6-macosx-10.7-x86_64.egg/ZEO/zrpc/connection.py", line 768, in call
raise inst # error raised by server
OSError: [Errno 2] No such file or directory: '/fast/xxx-2012/var/blobstorage/0x00/0x00/0x00/0x00/0x00/0x07/0xdc/0x55/0x00FWigqp.tmp-'
2012-06-21 16:43:03 ERROR Zope.SiteErrorLog 1340286183.10.000607291180815 http://localhost:9666/xxx/@@plone-upgrade
Traceback (innermost last):
Module ZPublisher.Publish, line 134, in publish
Module Zope2.App.startup, line 301, in commit
Module transaction._manager, line 89, in commit
Module transaction._transaction, line 329, in commit
Module transaction._transaction, line 446, in _commitResources
Module ZODB.Connection, line 781, in tpc_vote
Module ZEO.ClientStorage, line 1098, in tpc_vote
Module ZEO.ClientStorage, line 929, in _check_serials
IOError: [Errno 2] No such file or directory: '/fast/xxx-2012/var/blobstorage/0x00/0x00/0x00/0x00/0x00/0x07/0xdd/0xca/0x009kWNYQ.tmp-'
Why might this happen?
Any pointers on how to reinitialize the RAMCache object?
RAMCache is first referred to by FaviconViewlet, which uses the @memoize decorator, and that leads to this error.
Well, your migration obviously did not complete successfully, based on the traceback. So I would focus on figuring out why it failed, rather than working around things like the broken RAMCache, which are likely a result of the migration not having run.
The traceback indicates that it broke while trying to abort the transaction, so you'll probably need to do some debugging to determine what caused it to try to abort, since that's not indicated in the logs you pasted.
