How to create a Percona instance or Percona XtraDB Cluster in OpenStack Trove?

When creating a Percona instance using:
netID=$(openstack network list | grep GREEN | awk '{ print $2 }') && echo $netID
openstack database instance create \
--flavor 2C_2G_20G \
--size 4 \
--nic net-id=$netID \
--databases test \
--users app:pass123 \
--datastore percona --datastore-version 5.7 \
--is-public \
percona_1
there is this error in the log on the guest VM:
2021-11-05 15:52:59.693 2097 CRITICAL root [-] Unhandled error: ModuleNotFoundError: No module named 'trove.guestagent.datastore.experimental'
2021-11-05 15:52:59.693 2097 ERROR root Traceback (most recent call last):
2021-11-05 15:52:59.693 2097 ERROR root File "/usr/local/bin/guest-agent", line 10, in <module>
2021-11-05 15:52:59.693 2097 ERROR root sys.exit(main())
2021-11-05 15:52:59.693 2097 ERROR root File "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/cmd/guest.py", line 94, in main
2021-11-05 15:52:59.693 2097 ERROR root rpc_api_version=guest_api.API.API_LATEST_VERSION)
2021-11-05 15:52:59.693 2097 ERROR root File "/opt/guest-agent-venv/lib/python3.6/site-packages/trove/common/rpc/service.py", line 48, in __init__
2021-11-05 15:52:59.693 2097 ERROR root _manager = importutils.import_object(manager)
2021-11-05 15:52:59.693 2097 ERROR root File "/opt/guest-agent-venv/lib/python3.6/site-packages/oslo_utils/importutils.py", line 44, in import_object
2021-11-05 15:52:59.693 2097 ERROR root return import_class(import_str)(*args, **kwargs)
2021-11-05 15:52:59.693 2097 ERROR root File "/opt/guest-agent-venv/lib/python3.6/site-packages/oslo_utils/importutils.py", line 30, in import_class
2021-11-05 15:52:59.693 2097 ERROR root __import__(mod_str)
2021-11-05 15:52:59.693 2097 ERROR root ModuleNotFoundError: No module named 'trove.guestagent.datastore.experimental'
How do I enable Trove to use Percona? Thanks in advance!

I have this same problem deploying Redis. I'm using the Wallaby release of Trove. It looks like the experimental datastore managers were removed in the Victoria release, not sure why yet. They are still there in the Ussuri release. See here: https://opendev.org/openstack/trove/src/branch/stable/ussuri/trove/guestagent/datastore/experimental
I wonder if the intention is for you to add them yourself via a custom guest image or something.
EDIT: Here's the review entry for the change: https://review.opendev.org/c/openstack/trove/+/728419
Looks like a bunch of functionality was clobbered. Again, unsure why.
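Before building a custom guest image, it's worth confirming what your cloud actually registered. A quick check using the Trove plugin for the openstack client (these commands come with python-troveclient):
# List the datastores registered with Trove in this deployment
openstack datastore list
# List the versions registered for a specific datastore, e.g. percona
openstack datastore version list percona
If percona (or your desired version) is missing from the output, the create call will fail regardless of the guest image, so this narrows down where the problem lies.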

Related

JupyterHub user home folder permissions mismatched on restarts

I have set up a single server with Google Auth JupyterHub using Docker. You can find the setup scripts here - https://github.com/deepakputhraya/jupyterhub. The setup works well, with multiple users able to log in with separate home directories.
The problem arises when I update the Dockerfile or requirements.txt file and restart the server. Users whose accounts were already created can log in but cannot access their home folders.
[ec2-user@ip-10-0-1-196 ~]$ sudo docker exec -it jupyterhub /bin/sh
# ls -lah /home
total 64K
drwxr-xr-x 16 root root 4.0K Oct 18 10:17 .
drwxr-xr-x 1 root root 4.0K Sep 4 15:46 ..
drwxr-xr-x 15 abhilash abhilash 4.0K Oct 31 10:51 abhilash
drwxr-xr-x 9 ajay ajay 4.0K Oct 10 09:11 ajay
drwxr-xr-x 8 abhilash abhilash 4.0K Sep 2 11:05 akshay
drwxr-xr-x 7 deepak deepak 4.0K Oct 4 12:20 deepak
Logs:
[I 2019-10-31 14:57:14.985 JupyterHub log:174] 200 GET /hub/spawn-pending/deepak (deepak@127.0.0.1) 38.35ms
[I 2019-10-31 14:57:17.321 JupyterHub spawner:1387] Spawning jupyterhub-singleuser --port=34405 --NotebookApp.default_url=/lab
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/traitlets/traitlets.py", line 528, in get
value = obj._trait_values[self.name]
KeyError: 'runtime_dir'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/bin/jupyterhub-singleuser", line 10, in <module>
sys.exit(main())
File "/opt/conda/lib/python3.6/site-packages/jupyterhub/singleuser.py", line 660, in main
return SingleUserNotebookApp.launch_instance(argv)
File "/opt/conda/lib/python3.6/site-packages/jupyter_core/application.py", line 268, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/traitlets/config/application.py", line 657, in launch_instance
app.initialize(argv)
File "/opt/conda/lib/python3.6/site-packages/jupyterhub/singleuser.py", line 558, in initialize
return super().initialize(argv)
File "</opt/conda/lib/python3.6/site-packages/decorator.py:decorator-gen-7>", line 2, in initialize
File "/opt/conda/lib/python3.6/site-packages/traitlets/config/application.py", line 87, in catch_config_error
return method(app, *args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/notebook/notebookapp.py", line 1676, in initialize
self.init_configurables()
File "/opt/conda/lib/python3.6/site-packages/notebook/notebookapp.py", line 1349, in init_configurables
connection_dir=self.runtime_dir,
File "/opt/conda/lib/python3.6/site-packages/traitlets/traitlets.py", line 556, in __get__
return self.get(obj, cls)
File "/opt/conda/lib/python3.6/site-packages/traitlets/traitlets.py", line 535, in get
value = self._validate(obj, dynamic_default())
File "/opt/conda/lib/python3.6/site-packages/jupyter_core/application.py", line 99, in _runtime_dir_default
ensure_dir_exists(rd, mode=0o700)
File "/opt/conda/lib/python3.6/site-packages/jupyter_core/utils/__init__.py", line 13, in ensure_dir_exists
os.makedirs(path, mode=mode)
File "/opt/conda/lib/python3.6/os.py", line 210, in makedirs
makedirs(head, mode, exist_ok)
File "/opt/conda/lib/python3.6/os.py", line 210, in makedirs
makedirs(head, mode, exist_ok)
File "/opt/conda/lib/python3.6/os.py", line 210, in makedirs
makedirs(head, mode, exist_ok)
[Previous line repeated 1 more time]
File "/opt/conda/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/home/deepak'
[I 2019-10-31 14:57:18.274 JupyterHub log:174] 302 GET /hub/spawn/deepak -> /hub/spawn-pending/deepak (deepak@127.0.0.1) 1014.98ms
[I 2019-10-31 14:57:18.332 JupyterHub pages:303] deepak is pending spawn
[I 2019-10-31 14:57:18.335 JupyterHub log:174] 200 GET /hub/spawn-pending/deepak (deepak@127.0.0.1) 17.16ms
ERROR:asyncio:Task exception was never retrieved
future: <Task finished coro=<BaseHandler.spawn_single_user() done, defined at /opt/conda/lib/python3.6/site-packages/jupyterhub/handlers/base.py:697> exception=HTTPError()>
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/jupyterhub/handlers/base.py", line 889, in spawn_single_user
timedelta(seconds=self.slow_spawn_timeout), finish_spawn_future
tornado.util.TimeoutError: Timeout
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/jupyterhub/handlers/base.py", line 922, in spawn_single_user
% (status, spawner._log_name),
tornado.web.HTTPError: HTTP 500: Internal Server Error (Spawner failed to start [status=1]. The logs for deepak may contain details.)
[W 2019-10-31 14:57:44.492 JupyterHub user:678] deepak's server never showed up at http://127.0.0.1:34405/user/deepak/ after 30 seconds. Giving up
[E 2019-10-31 14:57:44.530 JupyterHub gen:593] Exception in Future <Task finished coro=<BaseHandler.spawn_single_user.<locals>.finish_user_spawn() done, defined at /opt/conda/lib/python3.6/site-packages/jupyterhub/handlers/base.py:800> exception=TimeoutError("Server at http://127.0.0.1:34405/user/deepak/ didn't respond in 30 seconds",)> after timeout
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/tornado/gen.py", line 589, in error_callback
future.result()
File "/opt/conda/lib/python3.6/site-packages/jupyterhub/handlers/base.py", line 807, in finish_user_spawn
await spawn_future
File "/opt/conda/lib/python3.6/site-packages/jupyterhub/user.py", line 654, in spawn
await self._wait_up(spawner)
File "/opt/conda/lib/python3.6/site-packages/jupyterhub/user.py", line 701, in _wait_up
raise e
File "/opt/conda/lib/python3.6/site-packages/jupyterhub/user.py", line 669, in _wait_up
http=True, timeout=spawner.http_timeout, ssl_context=ssl_context
File "/opt/conda/lib/python3.6/site-packages/jupyterhub/utils.py", line 234, in wait_for_http_server
timeout=timeout,
File "/opt/conda/lib/python3.6/site-packages/jupyterhub/utils.py", line 177, in exponential_backoff
raise TimeoutError(fail_message)
TimeoutError: Server at http://127.0.0.1:34405/user/deepak/ didn't respond in 30 seconds
The folder ownership for user akshay is that of abhilash. New users who sign up for the first time do not have this issue. Again, this only happens when there is an update to the Docker image; restarting the server alone neither fixes the issue nor changes the ownership mapping for the other users.
Why are the permissions getting mismatched? How can this be fixed?
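One plausible cause (an assumption based on the ls output, not confirmed from the logs alone): when the image is rebuilt, the user accounts are re-created in a different order and receive different numeric UIDs, while the persisted /home still carries the old ones. Comparing numeric IDs inside the container would confirm or rule this out:
# Show numeric owner IDs rather than names; stale UIDs show up as mismatches
ls -lan /home
# Compare against the UIDs assigned in the rebuilt image
getent passwd deepak abhilash akshay ajay
# If the numbers drifted, re-own each home directory to its user (verify the names first)
chown -R akshay:akshay /home/akshay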

AWS SAM Local dotnetcore2.1 exception when running API Gateway

Setup
Windows 10
Docker for Windows v18.09.0
AWS SAM CLI v0.10.0
Python 3.7.0
AWS CLI v1.16.67
dotnet core sdk v2.1.403
Powershell v5.1.17134.407
Problem
I'm following the quickstart for AWS SAM Local (as well as the readme generated once the init command is executed below), using the dotnetcore2.1 runtime.
I've run the following command to initialise AWS SAM for use with dotnetcore2.1
sam init --runtime dotnetcore2.1
Then I created the package by running
build.ps1 --target=package
Finally I start the local API Gateway service by running
sam local start-api
I then open a browser and navigate to http://localhost:3000/hello where I'm presented with the following:
PS C:\Users\user_name\Documents\Workspace\messaround\aws-sam\sam-app> sam local start-api
2019-01-04 10:39:15 Found credentials in shared credentials file: ~/.aws/credentials
2019-01-04 10:39:15 Mounting HelloWorldFunction at http://127.0.0.1:3000/hello [GET]
2019-01-04 10:39:15 You can now browse to the above endpoints to invoke your functions. You do not need to restart/reload SAM CLI while working on your functions changes will be reflected instantly/automatically. You only need to restart SAM CLI if you update your AWS SAM template
2019-01-04 10:39:16 * Running on http://127.0.0.1:3000/ (Press CTRL+C to quit)
2019-01-04 10:40:10 Invoking HelloWorld::HelloWorld.Function::FunctionHandler (dotnetcore2.1)
2019-01-04 10:40:10 Decompressing C:\Users\user_name\Documents\Workspace\messaround\aws-sam\sam-app\artifacts\HelloWorld.zip
Fetching lambci/lambda:dotnetcore2.1 Docker container image......
2019-01-04 10:40:13 Mounting C:\Users\user_name\AppData\Local\Temp\tmpq0zka7a7 as /var/task:ro inside runtime container
2019-01-04 10:40:14 Exception on /hello [GET]
Traceback (most recent call last):
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\docker\api\client.py", line 246, in _raise_for_status
response.raise_for_status()
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\requests\models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localnpipe/v1.35/containers/102dda11417068e01873242be2383c78c7ad4e2739fd4f8b42c1e0ea494d2bbb/start
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\flask\app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\flask\app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\flask\app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\flask\_compat.py", line 35, in reraise
raise value
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\flask\app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\flask\app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\samcli\local\apigw\local_apigw_service.py", line 153, in _request_handler
self.lambda_runner.invoke(route.function_name, event, stdout=stdout_stream_writer, stderr=self.stderr)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\samcli\commands\local\lib\local_lambda.py", line 85, in invoke
self.local_runtime.invoke(config, event, debug_context=self.debug_context, stdout=stdout, stderr=stderr)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\samcli\local\lambdafn\runtime.py", line 86, in invoke
self._container_manager.run(container)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\samcli\local\docker\manager.py", line 98, in run
container.start(input_data=input_data)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\samcli\local\docker\container.py", line 187, in start
real_container.start()
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\docker\models\containers.py", line 390, in start
return self.client.api.start(self.id, **kwargs)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\docker\utils\decorators.py", line 19, in wrapped
return f(self, resource_id, *args, **kwargs)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\docker\api\container.py", line 1075, in start
self._raise_for_status(res)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\docker\api\client.py", line 248, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "C:\Users\user_name\AppData\Roaming\Python\Python37\site-packages\docker\errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 500 Server Error: Internal Server Error ("error while creating mount source path '/host_mnt/c/Users/user_name/AppData/Local/Temp/tmpq0zka7a7': mkdir /host_mnt/c/Users/user_name/AppData: permission denied")
2019-01-04 10:40:14 127.0.0.1 - - [04/Jan/2019 10:40:14] "GET /hello HTTP/1.1" 502 -
2019-01-04 10:40:14 127.0.0.1 - - [04/Jan/2019 10:40:14] "GET /favicon.ico HTTP/1.1" 403 -
What I've tried
Resetting the shared drive credentials
Initially I thought this was a permissions error between my Windows drive and the VM running Docker. After searching the Docker forums I found this article, which I've followed. However, this doesn't seem to have changed the error message.
Any suggestions would be gratefully received. Thanks!
Here's how I fixed my problem:
When the SAM CLI sees a zip, it unzips it into a temp directory (which looks to be C:/Users/user_name/AppData/Local/Temp/tmpq0zka7a7 in your case).
Docker must have access to that folder.
In my case, I've created a local user to give Docker access to shared drives and that local user didn't have access to C:/Users/user_name.
I gave it access and got my problem sorted. Maybe you can fix it the same way.
Try to run the following:
docker run --rm -v c:/Users/user_name:/data alpine ls /data
It should list the contents of c:/Users/user_name if all is fine.
Good luck!
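If your setup matches, granting that local account read access to the profile directory can be done from an elevated PowerShell prompt. A sketch, where dockeruser stands in for whatever local account you configured for shared drives (hypothetical name):
# Grant the shared-drive account recursive read/execute on the profile folder
icacls "C:\Users\user_name" /grant "dockeruser:(OI)(CI)RX" /T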

Cloudify manager bootstrapping - REST service failed

I followed the steps in http://docs.getcloudify.org/4.1.0/installation/bootstrapping/#option-2-bootstrapping-a-cloudify-manager to bootstrap the Cloudify manager using option 2, and I am getting the following error repeatedly:
Workflow failed: Task failed 'fabric_plugin.tasks.run_script' -> restservice error: http://127.0.0.1:8100: <urlopen error [Errno 111] Connection refused>
The command is able to install and verify a lot of things like RabbitMQ, PostgreSQL, etc., but always fails at the REST service. Creation and configuration of the REST service succeed, but verification fails. It looks like the service never starts.
2017-08-22 04:23:19.700 CFY <manager> [rest_service_cyd4of.start] Task started 'fabric_plugin.tasks.run_script'
2017-08-22 04:23:20.506 LOG <manager> [rest_service_cyd4of.start] INFO: Starting Cloudify REST Service...
2017-08-22 04:23:21.011 LOG <manager> [rest_service_cyd4of.start] INFO: Verifying Rest service is running...
2017-08-22 04:23:21.403 LOG <manager> [rest_service_cyd4of.start] INFO: Verifying Rest service is working as expected...
2017-08-22 04:23:21.575 LOG <manager> [rest_service_cyd4of.start] WARNING: <urlopen error [Errno 111] Connection refused>, Retrying in 3 seconds...
2017-08-22 04:23:24.691 LOG <manager> [rest_service_cyd4of.start] WARNING: <urlopen error [Errno 111] Connection refused>, Retrying in 6 seconds...
2017-08-22 04:23:30.815 LOG <manager> [rest_service_cyd4of.start] WARNING: <urlopen error [Errno 111] Connection refused>, Retrying in 12 seconds...
[10.0.2.15] out: restservice error: http://127.0.0.1:8100: <urlopen error [Errno 111] Connection refused>
[10.0.2.15] out: Traceback (most recent call last):
[10.0.2.15] out: File "/tmp/cloudify-ctx/scripts/tmp4BXh2m-start.py-VHYZP1K3", line 71, in <module>
[10.0.2.15] out: verify_restservice(restservice_url)
[10.0.2.15] out: File "/tmp/cloudify-ctx/scripts/tmp4BXh2m-start.py-VHYZP1K3", line 34, in verify_restservice
[10.0.2.15] out: utils.verify_service_http(SERVICE_NAME, url, headers=headers)
[10.0.2.15] out: File "/tmp/cloudify-ctx/scripts/utils.py", line 1734, in verify_service_http
[10.0.2.15] out: ctx.abort_operation('{0} error: {1}: {2}'.format(service_name, url, e))
[10.0.2.15] out: File "/tmp/cloudify-ctx/cloudify.py", line 233, in abort_operation
[10.0.2.15] out: subprocess.check_call(cmd)
[10.0.2.15] out: File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call
[10.0.2.15] out: raise CalledProcessError(retcode, cmd)
[10.0.2.15] out: subprocess.CalledProcessError: Command '['ctx', 'abort_operation', 'restservice error: http://127.0.0.1:8100: <urlopen error [Errno 111] Connection refused>']' returned non-zero exit status 1
[10.0.2.15] out:
Fatal error: run() received nonzero return code 1 while executing!
Requested: source /tmp/cloudify-ctx/scripts/env-tmp4BXh2m-start.py-VHYZP1K3 && /tmp/cloudify-ctx/scripts/tmp4BXh2m-start.py-VHYZP1K3
Executed: /bin/bash -l -c "cd /tmp/cloudify-ctx/work && source /tmp/cloudify-ctx/scripts/env-tmp4BXh2m-start.py-VHYZP1K3 && /tmp/cloudify-ctx/scripts/tmp4BXh2m-start.py-VHYZP1K3"
I am using CentOS 7.
Any suggestion to address or debug the issue would be appreciated.
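As a minimal first check on the manager VM, you can see whether the REST service ever bound the port the verifier polls (a debugging sketch):
# Is anything listening on the REST service port?
sudo ss -tlnp | grep 8100
# Does the endpoint the verifier polls respond at all?
curl -i http://127.0.0.1:8100/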
Can you please try the same bootstrap option using these instructions and let me know if it works for you?
Do you have the python-virtualenv package installed? If you do, try uninstalling it.
The version of virtualenv in CentOS repositories is too old and causes problems with the REST service installation. Cloudify will install its own version of virtualenv while bootstrapping, but only if one is not already present in the system.
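On CentOS 7 that boils down to something like:
# Check whether the distro-packaged virtualenv is installed
rpm -q python-virtualenv
# If it is, remove it so the bootstrap installs its own copy
sudo yum remove -y python-virtualenv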

Installing Bika LIMS on Plone 5.0

I am trying to install Plone with Bika on Ubuntu 14.04 LTS (a newly built server), following the procedure here:
https://github.com/bikalabs/Bika-LIMS/blob/0c606e0/INSTALL.rst
I can start the Plone server using the command:
sudo -u plone_daemon bin/plonectl zeoserver start
/usr/local/Plone/zeocluster# sudo -u plone_daemon bin/plonectl restart zeoserver
zeoserver: .
daemon process started, pid=3864
/usr/local/Plone/zeocluster# sudo -u plone_daemon bin/plonectl status zeoserver
zeoserver: program running; pid=3864
But when I start client1, it shows:
ERROR Application Could not import Products.ATExtensions
sudo -u plone_daemon bin/plonectl client1 fg
The client1 could not be started.
Could you please advise what the possible cause could be?
Here are the error messages while starting client1:
/usr/local/Plone/zeocluster# sudo -u plone_daemon bin/plonectl fg client1
client1: 2015-10-11 12:37:05 INFO ZServer HTTP server started at Sun Oct 11 12:37:05 2015
Hostname: 0.0.0.0
Port: 8080
2015-10-11 12:37:07 ERROR Application Could not import Products.ATExtensions
Traceback (most recent call last):
File "/usr/local/Plone/buildout-cache/eggs/Zope2-2.13.23-py2.7.egg/OFS/Application.py", line 606, in import_product
product=__import__(pname, global_dict, global_dict, silly)
File "/usr/local/Plone/buildout-cache/eggs/Products.ATExtensions-1.1-py2.7.egg/Products/ATExtensions/__init__.py", line 18, in module
validation.register(PartialUrlValidator('isPartialUrl'))
File "/usr/local/Plone/buildout-cache/eggs/Products.validation-2.0-py2.7.egg/Products/validation/service.py", line 33, in register
raise FalseValidatorError, validator
FalseValidatorError: <Products.ATExtensions.validator.isPartialUrl.PartialUrlValidator instance at 0x7fe90f0048c0>
Traceback (most recent call last):
File "/usr/local/Plone/zeocluster/parts/client1/bin/interpreter", line 302, in module
exec(compile(__file__f.read(), __file__, "exec"))
File "/usr/local/Plone/buildout-cache/eggs/Zope2-2.13.23-py2.7.egg/Zope2/Startup/run.py", line 76, in module
run()
File "/usr/local/Plone/buildout-cache/eggs/Zope2-2.13.23-py2.7.egg/Zope2/Startup/run.py", line 22, in run
starter.prepare()
File "/usr/local/Plone/buildout-cache/eggs/Zope2-2.13.23-py2.7.egg/Zope2/Startup/__init__.py", line 86, in prepare
self.startZope()
File "/usr/local/Plone/buildout-cache/eggs/Zope2-2.13.23-py2.7.egg/Zope2/Startup/__init__.py", line 262, in startZope
Zope2.startup()
File "/usr/local/Plone/buildout-cache/eggs/Zope2-2.13.23-py2.7.egg/Zope2/__init__.py", line 47, in startup
_startup()
File "/usr/local/Plone/buildout-cache/eggs/Zope2-2.13.23-py2.7.egg/Zope2/App/startup.py", line 67, in startup
OFS.Application.import_products()
File "/usr/local/Plone/buildout-cache/eggs/Zope2-2.13.23-py2.7.egg/OFS/Application.py", line 583, in import_products
import_product(product_dir, product_name, raise_exc=debug_mode)
File "/usr/local/Plone/buildout-cache/eggs/Zope2-2.13.23-py2.7.egg/OFS/Application.py", line 606, in import_product
product=__import__(pname, global_dict, global_dict, silly)
File "/usr/local/Plone/buildout-cache/eggs/Products.ATExtensions-1.1-py2.7.egg/Products/ATExtensions/__init__.py", line 18, in module
validation.register(PartialUrlValidator('isPartialUrl'))
File "/usr/local/Plone/buildout-cache/eggs/Products.validation-2.0-py2.7.egg/Products/validation/service.py", line 33, in register
raise FalseValidatorError, validator
Products.validation.exceptions.FalseValidatorError: <Products.ATExtensions.validator.isPartialUrl.PartialUrlValidator instance at 0x7fe90f0048c0>
Bika LIMS will not work out of the box in Plone 5, as it depends on Products.ATExtensions and this package seems not to be compatible with Plone 5.
Besides that, Archetypes is not installed by default on Plone 5.
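To confirm what your buildout actually pulled in, a quick check (assuming the standard unified-installer layout under /usr/local/Plone):
# Which version pins mention the package from the traceback?
grep -ri "atextensions" /usr/local/Plone/zeocluster/*.cfg
# Which Plone and ATExtensions eggs were actually installed?
ls /usr/local/Plone/buildout-cache/eggs | grep -i -e "plone" -e "atextensions"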

Error message in nova-scheduler

I tried to start 5 instances on my "little cloud". I have one controller node (tb22) with nova-api and nova-compute, and one compute node (tb23).
I get an error message from nova-scheduler:
2014-07-09 13:00:23.858 ERROR nova.scheduler.filter_scheduler
[req-f699a7d3-e3de-40e4-b291-9ae972c7d8f9 admin demo] [instance:
55febf3d-1d56-4381-a6ca-b4b3b37e92e0] Error from last host: tb23 (node
tb23): [u'Traceback (most recent call last):\n', u' File
"/opt/stack/nova/nova/compute/manager.py", line 1305, in
_build_instance\n set_access_ip=set_access_ip)\n', u' File "/opt/stack/nova/nova/compute/manager.py", line 393, in
decorated_function\n return function(self, context, *args,
**kwargs)\n', u' File "/opt/stack/nova/nova/compute/manager.py", line 1717, in spawn\n LOG.exception((\'Instance failed to spawn\'),
instance=instance)\n', u' File
"/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in
exit\n six.reraise(self.type_, self.value, self.tb)\n', u' File "/opt/stack/nova/nova/compute/manager.py", line 1714, in _spawn\n
block_device_info)\n', u' File
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2262, in spawn\n
write_to_disk=True)\n', u' File
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3447, in to_xml\n
disk_info, rescue, block_device_info)\n', u' File
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3263, in
get_guest_config\n flavor)\n', u' File
"/opt/stack/nova/nova/virt/libvirt/vif.py", line 384, in get_config\n
_("Unexpected vif_type=%s") % vif_type)\n', u'NovaException: Unexpected vif_type=binding_failed\n']
Does anybody have an idea what the fault is?
Thanks
This error is caused by a configuration mistake in /etc/neutron/plugins/ml2/ml2_conf.ini.
Fix
Edit /etc/neutron/plugins/ml2/ml2_conf.ini on both the compute and network nodes: change tunnel_type = gre to tunnel_types = gre (a one-liner for this edit is sketched below)
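A sketch of the edit to run on each node (it backs up the file first; adjust the sed pattern if your line is formatted differently):
# Back up the config, then rename the option
sudo cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
sudo sed -i 's/^tunnel_type = gre/tunnel_types = gre/' /etc/neutron/plugins/ml2/ml2_conf.ini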
Restart these services in network node
service openvswitch-switch restart
service neutron-plugin-openvswitch-agent restart
service neutron-l3-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
Restart these services in compute node
service openvswitch-switch restart
service nova-compute restart
service neutron-plugin-openvswitch-agent restart
