I installed and configured Octavia for OpenStack load balancing, but when I try to create a new load balancer with openstack loadbalancer create --name lb1 --vip-subnet-id subnet-pub, the Octavia worker log says:
ERROR octavia.controller.worker.v1.controller_worker octavia.common.exceptions.ComputeBuildException: Failed to build compute instance due to: Failed to retrieve image with amphora tag.
Why does this happen? (I am using Ubuntu.)
Another question: I installed Octavia on the controller node. Do I need to install anything on the compute node(s)?
I had a similar problem, and adding --project service when uploading the image solved it:
$ openstack image create amphora-x64-haproxy.qcow2 --container-format bare --disk-format qcow2 --private --tag amphora --file amphora-x64-haproxy.qcow2 --property hw_architecture='x86_64' --property hw_rng_model=virtio --project service
Regarding the second question: nothing needs to be installed on the compute nodes; the only requirement is network access to the lb-mgmt-net from the controllers.
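If the error persists after re-uploading, it can help to double-check which images actually carry the tag and which project owns them (standard openstack CLI; the --long column set may vary slightly by release):
openstack image list --tag amphora --long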
This link helped me.
Set the image tag to "amphora":
openstack image set --tag "amphora" image_name
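To verify the tag was applied (plain openstack CLI, image_name as above):
openstack image show image_name -c tags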
Hi,
I'm trying to make a simple OpenStack load balancer based on cpu_util, memory, and disk usage, but I'm struggling with the aodh and gnocchi APIs.
I installed OpenStack with DevStack (I posted my local.conf file below). But whenever I call the gnocchi metric API (/v1/metric) with an admin token, it returns a 503 error. I thought DevStack configured everything needed to use gnocchi. Could you tell me what I need to configure to use the gnocchi API?
[[local|localrc]]
HOST_IP=xxx.xxx.xxx.xxx
ADMIN_PASSWORD=devstack
RABBIT_PASSWORD=devstack
SERVICE_PASSWORD=devstack
DATABASE_PASSWORD=devstack
GIT_BASE=https://git.openstack.org/
NOVA_BRANCH=stable/rocky
NOVACLIENT_BRANCH=stable/rocky
KEYSTONE_BRANCH=stable/rocky
KEYSTONECLIENT_BRANCH=stable/rocky
CINDER_BRANCH=stable/rocky
NEUTRON_BRANCH=stable/rocky
GLANCE_BRANCH=stable/rocky
enable_plugin heat https://git.openstack.org/openstack/heat stable/rocky
enable_plugin heat-dashboard https://git.openstack.org/openstack/heat-dashboard stable/rocky
enable_service h-eng h-api h-cfn h-api-cw heat-dashboard
enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas stable/rocky
enable_plugin neutron-lbaas-dashboard https://git.openstack.org/openstack/neutron-lbaas-dashboard stable/rocky
enable_plugin octavia https://git.openstack.org/openstack/octavia stable/rocky
enable_service q-svc q-agt q-dhcp q-l3 q-meta
enable_service q-lbaasv2 neutron-lbaas-dashboard
enable_service octavia o-cw o-hk o-hm o-api
enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer.git stable/rocky
CEILOMETER_BACKEND=gnocchi
enable_plugin aodh https://git.openstack.org/openstack/aodh stable/rocky
enable_plugin panko https://git.openstack.org/openstack/panko stable/rocky
enable_service c-bak
enable_service swift
What I wanted to do with the gnocchi API was check what kind of data it has, because the gnocchi documentation only talks about cpu_util. I want to know what data gnocchi can collect, and whether I need to configure anything to collect it.
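What I mean by checking the data is, roughly, being able to run gnocchi CLI calls such as the following (assuming python-gnocchiclient is installed; the metric ID is a placeholder):
gnocchi resource list --type instance
gnocchi metric list
gnocchi measures show <metric-id>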
With that information, I want to build a load balancer using aodh and heat. The aodh API documentation only shows an alarm on cpu_util. Can I create an alarm based on disk, memory, and CPU usage?
In the aodh API documentation, the alarm gets its cpu_util data from gnocchi, and when creating an alarm you pass a threshold number such as 0.8. I didn't think gnocchi returned a cpu_util value in that form, so where and how is the threshold calculated?
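To make the threshold question concrete, the kind of alarm I have in mind is something like this aodh CLI sketch (the resource ID is a placeholder, and the right threshold unit for cpu_util is exactly what I am unsure about):
aodh alarm create --name cpu-high --type gnocchi_resources_threshold \
  --metric cpu_util --threshold 80 --comparison-operator gt \
  --aggregation-method mean --granularity 300 \
  --resource-type instance --resource-id <instance-uuid>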
This is my first question on Stack Overflow, so I may have made lots of mistakes.
Thank you for reading it and I hope you have time to answer it.
[[local|localrc]]
ADMIN_PASSWORD=pass123
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=192.168.1.57
# Services
ENABLED_SERVICES=rabbit,mysql,key
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth
ENABLED_SERVICES+=,s-proxy,s-object,s-container,s-account
ENABLED_SERVICES+=,g-api,g-reg
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak
ENABLED_SERVICES+=,trove,tr-api,tr-tmgr,tr-cond
ENABLED_SERVICES+=,horizon
# Ceilometer
ENABLED_SERVICES+=,ceilometer-acompute,ceilometer-acentral,ceilometer-collector,ceilometer-api
ENABLED_SERVICES+=,ceilometer-alarm-notify,ceilometer-alarm-eval
# Heat
ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
# Neutron
DISABLED_SERVICES=n-net
ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-metering,neutron
# Neutron - Load Balancing
ENABLED_SERVICES+=,q-lbaas
# VLAN configuration
Q_PLUGIN=ml2
ENABLE_TENANT_VLANS=True
# GRE tunnel configuration
Q_PLUGIN=ml2
ENABLE_TENANT_TUNNELS=True
Q_ML2_TENANT_NETWORK_TYPE=gre
# Logging
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
LOGDAYS=2
# Swift
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
SWIFT_REPLICAS=1
SWIFT_DATA_DIR=$DEST/data
# Tempest
enable_service tempest
This is my local.conf file; I installed DevStack using this configuration.
I'm new to DevStack and I can't find the logs. I've searched the internet and can't find a clear path to the log files. I need the logs to troubleshoot an error that says no valid hosts were found.
As explained in this link, DevStack now runs all services as systemd unit files (I can't recall exactly since which version, but I believe since the Pike release).
Thus, to check the logs of the different services you can use the journalctl utility (along with its infinite options).
Some examples:
sudo journalctl -f -u devstack@n-cond.service
In order to check the nova-conductor logs.
Or:
sudo journalctl -f -u 'devstack@h*'
to check all heat-related services.
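If you are not sure of the exact unit names, you can list everything DevStack registered (assuming the systemd-based setup described above):
systemctl list-units 'devstack@*' --all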
In DevStack, the logs of OpenStack services are attached to Linux screen sessions. You can find them by running screen -x stack and then switching to each service's window.
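Once attached, the default GNU screen key bindings are enough to move between the per-service windows, for example:
screen -x stack        # attach to the shared "stack" session
# Ctrl-a "             list all windows and pick a service
# Ctrl-a n / Ctrl-a p  go to the next / previous window
# Ctrl-a d             detach again without stopping anything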
Troubleshooting the error "No valid hosts were found" (the commands below can help with these checks):
Check whether the tenant/project still has enough quota for vCPUs, disk, and RAM.
If you are using an availability zone, check the available resources of the compute hosts in that zone.
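A few commands for those checks (admin credentials assumed; demo-project is a placeholder project name):
openstack quota show demo-project        # per-project limits for instances, vCPUs, RAM
openstack hypervisor stats show          # free vCPUs, RAM and disk across all hypervisors
openstack availability zone list         # the configured availability zones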
I am unable to spin up instances with any flavor larger than m1.tiny on a server with 64 GB of RAM. The error reported by Horizon is "Volume 71bc03cf-6ab1-4511-8763-43647fd1ea2c did not finish being created even after we waited 0 seconds or 1 attempts." However, there are no errors with m1.tiny. I am using stable/ocata on Ubuntu 16.04. Any help would be really appreciated. Thanks.
Local.conf:
[[local|localrc]]
HOST_IP=10.0.0.150
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_TOKEN=password
SERVICE_PASSWORD=password
ADMIN_PASSWORD=password
enable_plugin zun https://git.openstack.org/openstack/zun
enable_plugin zun-tempest-plugin https://git.openstack.org/openstack/zun-tempest-plugin
RECLONE=yes
#This below plugin enables installation of container engine on Devstack.
#The default container engine is Docker
enable_plugin devstack-plugin-container https://git.openstack.org/openstack/devstack-plugin-container
SCREEN_LOGDIR=$DEST/logs/screen
LOG_COLOR=false
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
KURYR_CAPABILITY_SCOPE=local
KURYR_ETCD_PORT=2379
enable_plugin kuryr-libnetwork https://git.openstack.org/openstack/kuryr-libnetwork
ENABLE_IDENTITY_V2=False
LIBS_FROM_GIT="python-zunclient"
enable_plugin zun-ui https://github.com/openstack/zun-ui
enable_plugin heat https://git.openstack.org/openstack/heat
The issue was fixed by following this link.
While installing DevStack on a compute node in a multi-node DevStack lab environment, I encountered the error: Service n-net is not running.
The localrc section of the local.conf file is:
HOST_IP=192.168.42.12 # change this per compute node
FLAT_INTERFACE=eth0
FIXED_RANGE=10.4.128.0/20
FIXED_NETWORK_SIZE=4096
FLOATING_RANGE=192.168.42.128/25
MULTI_HOST=1
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=labstack
DATABASE_PASSWORD=supersecret
RABBIT_PASSWORD=supersecret
SERVICE_PASSWORD=supersecret
DATABASE_TYPE=mysql
SERVICE_HOST=192.168.42.11
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
ENABLED_SERVICES=n-cpu,n-net,n-api-meta,c-vol
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL="http://$SERVICE_HOST:6080/vnc_auto.html"
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN
Please help me resolve this error.
P.S.: I must use nova-network and not Neutron for interaction between the controller and the compute nodes.
For the Ocata release I found a solution (2-node setup). An important part since the Newton release (14.0.0) is the placement-api service, so first of all enable it on all your nodes:
local.conf:
enable_service placement-api
First run ./stack.sh on your controller node and after that installation run it on the other nodes.
Here, too, you will see the error Service n-net is not running...
Now edit your nova.conf file in /etc/nova/nova.conf, because the [database] and [api_database] sections will be missing:
[database]
connection=mysql+pymysql://root:DB_PASS@IP_OF_CONTROLLER_NODE/nova
[api_database]
connection=mysql+pymysql://root:DB_PASS@IP_OF_CONTROLLER_NODE/nova_api
After adding these, you can check whether it works with the following command:
stack@jerico-02:/devstack$ nova-manage --debug host list
host zone
0.0.0.0 internal
jerico-03 internal
jerico-02 nova
The new compute node (hypervisor) also shows up in the dashboard!
Hope it helps!
(Tested on Ubuntu Server 16.04 LTS with devstack and OpenStack Ocata)