OpenStack, DevStack installation ERROR: openstackclient.shell Exception - openstack

I'm not sure if this is the right place for my question. I am trying to install DevStack, but I'm getting these errors:
openstackclient.shell Exception
2015-04-21 16:22:04.991 | ERROR: openstackclient.shell Exception raised: (six 1.7.3 (/usr/lib/python2.7/dist-packages), Requirement.parse('six>=1.9.0'), set(['oslo.i18n', 'oslo.utils', 'cliff']))
2015-04-21 16:22:05.014 | + ADMIN_TENANT=
2015-04-21 16:22:05.015 | ++ openstack user create admin --project '' --email admin@example.com --password openstack
2015-04-21 16:22:05.015 | ++ grep ' id '
2015-04-21 16:22:05.018 | ++ get_field 2
2015-04-21 16:22:05.018 | ++ read data
2015-04-21 16:22:05.447 | ERROR: openstackclient.shell Exception raised: (six 1.7.3 (/usr/lib/python2.7/dist-packages), Requirement.parse('six>=1.9.0'), set(['oslo.i18n', 'oslo.utils', 'cliff']))
2015-04-21 16:22:05.466 | + ADMIN_USER=
2015-04-21 16:22:05.467 | ++ openstack role create admin
2015-04-21 16:22:05.468 | ++ grep ' id '
2015-04-21 16:22:05.469 | ++ get_field 2
2015-04-21 16:22:05.469 | ++ read data
2015-04-21 16:22:05.897 | ERROR: openstackclient.shell Exception raised: (six 1.7.3 (/usr/lib/python2.7/dist-packages), Requirement.parse('six>=1.9.0'), set(['oslo.i18n', 'oslo.utils', 'cliff']))
2015-04-21 16:22:05.916 | + ADMIN_ROLE=
2015-04-21 16:22:05.916 | + openstack role add --project --user
2015-04-21 16:22:06.349 | ERROR: openstackclient.shell Exception raised: (six 1.7.3 (/usr/lib/python2.7/dist-packages), Requirement.parse('six>=1.9.0'), set(['oslo.i18n', 'oslo.utils', 'cliff']))
2015-04-21 16:22:06.368 | + exit_trap
2015-04-21 16:22:06.368 | + local r=1
2015-04-21 16:22:06.368 | ++ jobs -p
2015-04-21 16:22:06.368 | + jobs=
2015-04-21 16:22:06.369 | + [[ -n '' ]]
2015-04-21 16:22:06.369 | + exit 1
After some research I tried pip install --upgrade setuptools, but that doesn't work. I am using Ubuntu 14.10 with no virtual machine. Please help?

For the original question: you have version 1.7.3 of six, but 1.9.0 is needed. Uninstall six (pip uninstall six), then clean up DevStack (./clean.sh), then re-install DevStack (./stack.sh), but I'd recommend installing Kilo.
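In command form, a minimal sketch of that recovery sequence (assuming a standard DevStack checkout in ~/devstack; adjust the path to yours):

pip uninstall six   # remove the stale six 1.7.3 so six>=1.9.0 can be pulled in
cd ~/devstack
./clean.sh          # tear down the failed install
./stack.sh          # re-run DevStack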
As for the "worlddump.py" output, that is just a dump of the state when an error occurs. The actual error will be in the text above this.

Related

OpenStack Mistral workflow error while executing using GUI

I am getting an error while executing a simple OpenStack Mistral workflow in an OpenStack (Wallaby) DevStack environment. I can execute the workflow successfully from the CLI, but it fails if I try the same thing in the GUI:
root@openstack:~# openstack workflow definition show test_get
---
version: '2.0'

test_get:
  description: Test Get.
  tasks:
    my_task:
      action: std.http
      input:
        url: http://www.google.com
root@openstack:~# openstack workflow execution create test_get
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| ID                 | 482e3803-45ef-411e-a0f4-1427abfc8649 |
| Workflow ID        | 9dc0d4a4-8c5b-4288-8126-e1147da3bd02 |
| Workflow name      | test_get                             |
| Workflow namespace |                                      |
| Description        |                                      |
| Task Execution ID  | <none>                               |
| Root Execution ID  | <none>                               |
| State              | RUNNING                              |
| State info         | None                                 |
| Created at         | 2021-06-21 16:58:54                  |
| Updated at         | 2021-06-21 16:58:54                  |
| Duration           | ...                                  |
+--------------------+--------------------------------------+
But while executing in the GUI I get: **Execution is missing field "workflow_identifier"**
I faced the same issue in the Yoga release. I spent a few hours investigating it and found an interesting thing in:
/usr/local/lib/python3.8/dist-packages/mistralclient/api/v2/executions.py
class ExecutionManager(base.ResourceManager):
    resource_class = Execution

    def create(self, wf_identifier='', namespace='',
               workflow_input=None, description='', source_execution_id=None,
               **params):
        self._ensure_not_empty(
            workflow_identifier=wf_identifier or source_execution_id
        )
But in the web form, workflow_identifier is used instead of wf_identifier:
/usr/local/lib/python3.8/dist-packages/mistraldashboard/workflows/forms.py
def handle(self, request, data):
    try:
        data['workflow_identifier'] = data.pop('workflow_name')
        data['workflow_input'] = {}
        for param in self.workflow_parameters:
            value = data.pop(param)
            if value == "":
                value = None
            data['workflow_input'][param] = value
        ex = api.execution_create(request, **data)
The FIX is to rename workflow_identifier to wf_identifier in the form, like:
data['wf_identifier'] = data.pop('workflow_name')
After that, mistral-dashboard creates executions fine.

devstack error: The request you have made requires authentication. (HTTP 401)

I installed DevStack successfully only the first time.
Then, after running ./stack.sh again, I get this error:
2016-01-20 19:58:45.797 | + '[' -n /home/const/data/devstack/files/images/cirros-0.3.4-x86_64-uec/cirros-0.3.4-x86_64-vmlinuz ']'
2016-01-20 19:58:45.798 | ++ openstack --os-cloud=devstack-admin image create cirros-0.3.4-x86_64-uec-kernel --public --container-format aki --disk-format aki
2016-01-20 19:58:45.799 | ++ get_field 2
2016-01-20 19:58:45.800 | ++ local data field
2016-01-20 19:58:45.800 | ++ read data
2016-01-20 19:58:45.800 | ++ grep ' id '
2016-01-20 19:58:46.569 | The request you have made requires authentication. (HTTP 401) (Request-ID: req-ed5f0f38-5798-4a52-8d1c-0d185ca8bb80)
2016-01-20 19:58:46.610 | + kernel_id=
2016-01-20 19:58:46.610 | + '[' -n /home/const/data/devstack/files/images/cirros-0.3.4-x86_64-uec/cirros-0.3.4-x86_64-initrd ']'
2016-01-20 19:58:46.611 | ++ openstack --os-cloud=devstack-admin image create cirros-0.3.4-x86_64-uec-ramdisk --public --container-format ari --disk-format ari
Platform: Ubuntu 14.04.3 LTS # PC
I've tried to remove directories and databases. It doesn't help.
Try removing the following directories and running again:
~/.config/openstack
/etc/nova
/etc/keystone
/etc/cinder
/etc/glance
It may happen that the .conf files are not getting updated with the new install.
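A minimal sketch of that cleanup (the directories are the ones listed above; back up anything you still need first, and adjust the DevStack path to yours):

rm -rf ~/.config/openstack                                     # stale client config
sudo rm -rf /etc/nova /etc/keystone /etc/cinder /etc/glance    # stale service .conf files
cd ~/devstack
./stack.sh                                                     # re-run so fresh .conf files are generated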

Devstack installation error - Keystone

I managed to work through my errors with DevStack, and stack.sh generates about 6900+ log lines on the screen, so I am guessing I am very close :)
I am using Keystone v3. I manually exported OS_URL, OS_AUTH_URL (in an earlier run it complained about this one) and others. When the script exits, I tried the command manually with the OS_URL as noted in the logs below, but it complains that it doesn't know which plugin to load, and when I switch it to v3.0 it complains with a 404.
The following is a snippet of the error from the screen log; I'm wondering if anyone else has seen this:
2015-08-26 18:30:38.005 | :./stack.sh:575+echo 'Waiting for keystone to start...'
2015-08-26 18:30:38.005 | Waiting for keystone to start...
2015-08-26 18:30:38.005 | :./stack.sh:579+wait_for_service 60 http://172.16.11.14:5000/v2.0/
2015-08-26 18:30:38.005 | :./stack.sh:340+local timeout=60
2015-08-26 18:30:38.005 | :./stack.sh:341+local url=http://172.16.11.14:5000/v2.0/
2015-08-26 18:30:38.005 | :./stack.sh:342+timeout 60 sh -c 'while ! curl -g -k --noproxy '\''*'\'' -s http://172.16.11.14:5000/v2.0/ >/dev/null; do sleep 1; done'
2015-08-26 18:30:38.591 | :./stack.sh:584+is_service_enabled tls-proxy
2015-08-26 18:30:38.593 | :./stack.sh:1738+return 1
2015-08-26 18:30:38.593 | :./stack.sh:976+SERVICE_ENDPOINT=http://172.16.11.14:35357/v2.0
2015-08-26 18:30:38.593 | :./stack.sh:978+is_service_enabled tls-proxy
2015-08-26 18:30:38.596 | :./stack.sh:1738+return 1
2015-08-26 18:30:38.596 | :./stack.sh:985+export OS_TOKEN=password
2015-08-26 18:30:38.596 | :./stack.sh:985+OS_TOKEN=password
2015-08-26 18:30:38.597 | :./stack.sh:986+export OS_URL=http://172.16.11.14:35357/v2.0
2015-08-26 18:30:38.597 | :./stack.sh:986+OS_URL=http://172.16.11.14:35357/v2.0
2015-08-26 18:30:38.597 | :./stack.sh:988+create_keystone_accounts
2015-08-26 18:30:38.597 | ::./stack.sh:376+get_or_create_project admin
2015-08-26 18:30:38.597 | ::./stack.sh:729+local os_cmd=openstack
2015-08-26 18:30:38.597 | ::./stack.sh:730+local domain=
2015-08-26 18:30:38.597 | ::./stack.sh:731+[[ ! -z '' ]]
2015-08-26 18:30:38.597 | :::./stack.sh:740+openstack project create admin --or-show -f value -c id
2015-08-26 18:30:39.596 | ERROR: openstack The resource could not be found. (HTTP 404) (Request-ID: req-a5703c0a-bdb4-4ca0-8bf7-61ddbacbddf1)
2015-08-26 18:30:39.617 | ::./stack.sh:738+local project_id=
2015-08-26 18:30:39.617 | ::./stack.sh:739+echo
2015-08-26 18:30:39.617 | :./stack.sh:376+local admin_tenant=
2015-08-26 18:30:39.617 | ::./stack.sh:377+get_or_create_user admin password
2015-08-26 18:30:39.618 | ::./stack.sh:700+[[ ! -z '' ]]
2015-08-26 18:30:39.618 | ::./stack.sh:703+local email=
2015-08-26 18:30:39.618 | ::./stack.sh:705+local os_cmd=openstack
2015-08-26 18:30:39.618 | ::./stack.sh:706+local domain=
2015-08-26 18:30:39.618 | ::./stack.sh:707+[[ ! -z '' ]]
2015-08-26 18:30:39.618 | :::./stack.sh:723+openstack user create admin --password password --or-show -f value -c id
2015-08-26 18:30:40.853 | ERROR: openstack 'links'
On inspecting the logs across many files, I found that --os-url was getting a null value, hence the 404. To fix this I added OS_URL to the openrc file (export OS_URL=http://:5000.... )
I then also had to force use of the Identity v3 API ... for this you need to modify the get_or_add_project_role sub-routine in the functions-common file (should be around line 775...) to specify the --os-url=$KEYSTONE_SERVICE_URI_V3 and --os-identity-api-version=3 options, along the lines of the sketch below.
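A hedged sketch of what that change could look like (the real body of get_or_add_project_role differs between DevStack releases, and the argument order here is illustrative; only the two added options come from the fix described above):

function get_or_add_project_role {
    # Force the Identity v3 endpoint and API version so the client
    # does not 404 against the v2.0 URL
    openstack --os-url=$KEYSTONE_SERVICE_URI_V3 \
              --os-identity-api-version=3 \
              role add "$1" --project "$2" --user "$3"
}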
I was then able to get past this error.
Hope that helps.

Cygnus install on localhost

By following this guide
https://github.com/telefonicaid/fiware-connectors/blob/master/flume/doc/quick_start_guide.md
I tried to use
/usr/cygnus/bin/cygnus-flume-ng agent --conf /usr/cygnus/conf/ -f /usr/cygnus/conf/agent_1.conf -n cygnusagent -Dflume.root.logger=INFO,console
But I got this error:
time=2015-03-11T17:35:01.965CET | lvl=WARN | trans= | function=warn | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[76] : failed SocketConnector@0.0.0.0:8081: java.net.BindException: Address already in use
time=2015-03-11T17:35:01.965CET | lvl=WARN | trans= | function=warn | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[76] : failed Server@57c59fac: java.net.BindException: Address already in use
time=2015-03-11T17:35:01.965CET | lvl=FATAL | trans= | function=run | comp=Cygnus | msg=es.tid.fiware.fiwareconnectors.cygnus.http.JettyServer[63] : Fatal error running the Management Interface. Details=Address already in use
And besides this error (service cygnus status says Cygnus started correctly), I also see:
time=2015-03-11T17:46:52.337CET | lvl=ERROR | trans= | function=run | comp=Cygnus | msg=org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable[253] : Unable to start EventDrivenSourceRunner: { source:org.apache.flume.source.http.HTTPSource{name:http-source,state:IDLE} } - Exception follows.
java.lang.IllegalStateException: Running HTTP Server found in source: http-source before I started one.Will not attempt to start.
at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
at org.apache.flume.source.http.HTTPSource.start(HTTPSource.java:137)
at org.apache.flume.source.EventDrivenSourceRunner.start(EventDrivenSourceRunner.java:44)
at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:701)
I changed the port to 8085, 8084, 8083, ... and I can see that it reads the conf but ignores this setting ...
[root@alex alex]# /usr/cygnus/bin/cygnus-flume-ng agent --conf /usr/cygnus/conf -f /usr/cygnus/conf/cygnus_instance_1.conf -n cygnusagent -Dflume.root.logger=INFO,console [-p 8085]
+ exec /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.34.x86_64//bin/java -Xmx20m -Dflume.root.logger=INFO,console -cp '/usr/cygnus/conf:/usr/cygnus/lib/*:/usr/cygnus/plugins.d/cygnus/lib/*:/usr/cygnus/plugins.d/cygnus/libext/*' -Djava.library.path= es.tid.fiware.fiwareconnectors.cygnus.nodes.CygnusApplication -f /usr/cygnus/conf/cygnus_instance_1.conf -n cygnusagent '[-p' '8085]'
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/cygnus/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/cygnus/plugins.d/cygnus/lib/cygnus-0.7.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
time=2015-03-11T19:47:50.882CET | lvl=INFO | trans= | function=start | comp=Cygnus | msg=org.apache.flume.node.PollingPropertiesFileConfigurationProvider[61] : Configuration provider starting
time=2015-03-11T19:47:50.895CET | lvl=INFO | trans= | function=run | comp=Cygnus | msg=org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable[133] : Reloading configuration file:/usr/cygnus/conf/cygnus_instance_1.conf
time=2015-03-11T19:47:50.906CET | lvl=WARN | trans= | function=<init> | comp=Cygnus | msg=org.apache.flume.conf.FlumeConfiguration[101] : Configuration property ignored: CONFIG_FILE = /usr/cygnus/conf/agent_1.conf
time=2015-03-11T19:47:50.907CET | lvl=WARN | trans= | function=<init> | comp=Cygnus | msg=org.apache.flume.conf.FlumeConfiguration[101] : Configuration property ignored: CONFIG_FOLDER = /usr/cygnus/conf
time=2015-03-11T19:47:50.907CET | lvl=WARN | trans= | function=<init> | comp=Cygnus | msg=org.apache.flume.conf.FlumeConfiguration[101] : Configuration property ignored: AGENT_NAME = cygnusagent
time=2015-03-11T19:47:50.907CET | lvl=WARN | trans= | function=<init> | comp=Cygnus | msg=org.apache.flume.conf.FlumeConfiguration[101] : Configuration property ignored: CYGNUS_USER = root
time=2015-03-11T19:47:50.907CET | lvl=WARN | trans= | function=<init> | comp=Cygnus | msg=org.apache.flume.conf.FlumeConfiguration[101] : Configuration property ignored: LOGFILE_NAME = cygnus.log
time=2015-03-11T19:47:50.907CET | lvl=WARN | trans= | function=<init> | comp=Cygnus | msg=org.apache.flume.conf.FlumeConfiguration[101] : Configuration property ignored: ADMIN_PORT = 8085
time=2015-03-11T19:47:50.907CET | lvl=INFO | trans= | function=validateConfiguration | comp=Cygnus | msg=org.apache.flume.conf.FlumeConfiguration[140] : Post-validation flume configuration contains configuration for agents: []
time=2015-03-11T19:47:50.908CET | lvl=WARN | trans= | function=getConfiguration | comp=Cygnus | msg=org.apache.flume.node.AbstractConfigurationProvider[138] : No configuration found for this host:cygnusagent
time=2015-03-11T19:47:50.913CET | lvl=INFO | trans= | function=startAllComponents | comp=Cygnus | msg=org.apache.flume.node.Application[138] : Starting new configuration:{ sourceRunners:{} sinkRunners:{} channels:{} }
time=2015-03-11T19:47:50.925CET | lvl=INFO | trans= | function=startManagementInterface | comp=Cygnus | msg=es.tid.fiware.fiwareconnectors.cygnus.nodes.CygnusApplication[85] : Starting a Jetty server listening on port 8081 (Management Interface)
time=2015-03-11T19:47:50.942CET | lvl=INFO | trans= | function=info | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[67] : Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
time=2015-03-11T19:47:50.942CET | lvl=INFO | trans= | function=stopAllComponents | comp=Cygnus | msg=org.apache.flume.node.Application[101] : Shutting down configuration: { sourceRunners:{} sinkRunners:{} channels:{} }
time=2015-03-11T19:47:50.942CET | lvl=INFO | trans= | function=info | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[67] : jetty-6.1.26
time=2015-03-11T19:47:50.942CET | lvl=INFO | trans= | function=startAllComponents | comp=Cygnus | msg=org.apache.flume.node.Application[138] : Starting new configuration:{ sourceRunners:{} sinkRunners:{} channels:{} }
time=2015-03-11T19:47:50.949CET | lvl=INFO | trans= | function=startManagementInterface | comp=Cygnus | msg=es.tid.fiware.fiwareconnectors.cygnus.nodes.CygnusApplication[85] : Starting a Jetty server listening on port 8081 (Management Interface)
time=2015-03-11T19:47:50.958CET | lvl=INFO | trans= | function=info | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[67] : jetty-6.1.26
time=2015-03-11T19:47:50.978CET | lvl=WARN | trans= | function=warn | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[76] : failed SocketConnector@0.0.0.0:8081: java.net.SocketException: Address already in use
time=2015-03-11T19:47:50.980CET | lvl=INFO | trans= | function=info | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[67] : Started SocketConnector@0.0.0.0:8081
time=2015-03-11T19:47:50.982CET | lvl=WARN | trans= | function=warn | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[76] : failed Server@6e811049: java.net.SocketException: Address already in use
time=2015-03-11T19:47:50.982CET | lvl=FATAL | trans= | function=run | comp=Cygnus | msg=es.tid.fiware.fiwareconnectors.cygnus.http.JettyServer[63] : Fatal error running the Management Interface. Details=Address already in use
Alejandro, this is a well-known bug in Cygnus 0.7.0. A new 0.7.1 version was uploaded to the FIWARE repo at the beginning of this week. Anyway, that supposedly FATAL error (it is an error, but not FATAL :)) does not affect the behaviour of Cygnus, since it only affects the Management Interface (which currently has only one method, returning the version you are running). Thus, Cygnus should be working properly on the port you have configured for the HTTPSource in your /usr/cygnus/conf/agent_1.conf file:
cygnusagent.sources.http-source.port = 5050
Before installing the new version, I recommend you remove the previous one. That is, do not simply run yum install cygnus in order to update the existing installation, but explicitly yum remove cygnus and then yum install cygnus. The reason is another bug regarding the RPM deployment that was fixed within version 0.7.1 as well.
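In command form (assuming the FIWARE yum repository is already configured on the machine):

sudo yum remove cygnus    # fully remove the 0.7.0 RPM first
sudo yum install cygnus   # then install the fixed 0.7.1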

Storing output of a MySQL command in a variable in a Unix Bash shell

I am having an issue storing the output of a MySQL command in a variable in a Unix Bash shell:
db_out= $(mysql -u$MASTER_DB_USER -p$MASTER_DB_PASSWD -P$MASTER_DB_PORT -h$MASTER_DB_HOST -D$MASTER_DB_NAME<<Enf
show databases;
Enf)
echo $db_out
I am getting no output in the variable $db_out.
Can somebody suggest what is missing in the above query?
Tracing the script (here with a second variant using --execute) gives the output below:
db_out= mysql -u$MASTER_DB_USER -p$MASTER_DB_PASSWD -P$MASTER_DB_PORT -h$MASTER_DB_HOST -D$MASTER_DB_NAME --execute "show databases"
+ db_out=
+ mysql -uroot -paxway -P3306 -h10.151.14.248 -Dsentinel --execute 'show databases'
+--------------------+
| Database |
+--------------------+
+--------------------+
| information_schema |
| composer |
| mysql |
| sentinel |
| test |
+--------------------+
echo "$db_out"
+ echo ''
It seems that the output is written to STDERR.
Try:
db_out=$(mysql -u$MASTER_DB_USER -p$MASTER_DB_PASSWD -P$MASTER_DB_PORT -h$MASTER_DB_HOST -D$MASTER_DB_NAME 2>&1 <<Enf
show databases;
Enf
)
or:
db_out=$(echo "show databases;" | mysql -u$MASTER_DB_USER -p$MASTER_DB_PASSWD -P$MASTER_DB_PORT -h$MASTER_DB_HOST -D$MASTER_DB_NAME 2>&1)
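For reference, a minimal self-contained sketch combining both points (the MASTER_DB_* variables are placeholders from the question). Note there is no space after the = sign: writing db_out= $(...) runs the command with an empty db_out in its environment instead of assigning to the variable, which is exactly what the trace above shows.

#!/usr/bin/env bash
# Capture both stdout and stderr of the mysql client into a variable
db_out=$(mysql -u"$MASTER_DB_USER" -p"$MASTER_DB_PASSWD" \
               -P"$MASTER_DB_PORT" -h"$MASTER_DB_HOST" \
               -D"$MASTER_DB_NAME" --execute 'show databases;' 2>&1)
# Quote the expansion so the table's newlines are preserved
echo "$db_out"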
