How to remove a duplicate service with the nova-manage command? - openstack

I installed openstack. All services are running successfully.
[root@test ~]# nova-manage service list
Binary          Host                   Zone  Status   State  Updated_At
nova-cert       localhost.localdomain  nova  enabled  :-)    2012-11-06 04:25:36.396817
nova-scheduler  localhost.localdomain  nova  enabled  :-)    2012-11-06 04:25:41.735192
nova-network    compute                nova  enabled  :-)    2012-11-06 04:25:42.109157
nova-compute    compute                nova  enabled  :-)    2012-11-06 04:25:43.240902
After that I changed HOSTNAME in /etc/sysconfig/network to myhost.mydomain and restarted the services.
Now I get duplicate entries for the services.
[root@test ~]# nova-manage service list
Binary          Host                   Zone  Status   State  Updated_At
nova-cert       localhost.localdomain  nova  enabled  XXX    2012-11-06 04:25:36.396817
nova-cert       myhost.mydomain        nova  enabled  :-)    2012-11-06 05:25:36.396817
nova-scheduler  localhost.localdomain  nova  enabled  XXX    2012-11-06 04:25:41.735192
nova-scheduler  myhost.mydomain        nova  enabled  :-)    2012-11-06 05:25:41.735192
nova-network    compute                nova  enabled  :-)    2012-11-06 04:25:42.109157
nova-compute    compute                nova  enabled  :-)    2012-11-06 04:25:43.240902
Of these, the old services on localhost.localdomain are no longer running.
I want to remove the service entries for the host localhost.localdomain.
I checked nova-manage service --help, but there is no delete option :(.
[root@test ~]# nova-manage service --help
--help does not match any options:
describe_resource
disable
enable
list

Looking at your example above, I suspect you're seeing duplicates because you had two hosts whose hostnames were set identically. If that's the case, the following code/answer isn't likely to help you out too much: there's an implicit assumption in the whole setup that the hostnames of nodes running nova worker processes are unique.
In the latest branch there isn't a command explicitly for this, but the API exists underneath to do what you're after. Here's a snippet of code (untested!) that should do what you want, or at least point you at the relevant API if you're interested.
from nova import context
from nova import db

hostname = 'some_hostname'  # e.g. 'localhost.localdomain'
service_name = 'nova_service_you_want_to_destroy'  # e.g. 'nova-cert'

ctxt = context.get_admin_context()
# Look up the service record for this (host, binary) pair
service = db.service_get_by_args(ctxt, hostname, service_name)
# Delete the record; note the quoted 'id' key
db.service_destroy(ctxt, service['id'])
NOTE: this will remove the service from the database, or raise an exception if it doesn't exist (or something else goes wrong). If the service is still running, expect it to just "show up" again, as the service list is populated by the various nova worker processes reporting in.
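In the meantime, if you just want the stale rows out of the way, the disable subcommand that does appear in the help output above can at least mark them as disabled. A hedged example (the exact flag syntax varies by release, so check nova-manage service disable --help first):
nova-manage service disable --host=localhost.localdomain --service=nova-cert
nova-manage service disable --host=localhost.localdomain --service=nova-scheduler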

Related

Openstack-Ironic:Networking is not working properly in Ironic nodes

I'm trying to use the ML2 network of OpenStack for an Ironic node created with the IPMI driver, but it does not connect properly, and validating the created node produces the error below:
====
network | False | Unexpected exception, traceback saved into log by ironic conductor service that is running on dr: Unexpected exception for 10.0.2.20/v2.0/networks?fields=id&name=net1: Invalid URL '10.0.2.20/v2.0/networks?fields=id&name=net1': No schema supplied. Perhaps you meant http://10.0.2.20/v2.0/networks?fields=id&name=net1?
====
The error ("No schema supplied") indicates that the Neutron endpoint URL is missing its http:// scheme, i.e. the endpoint is misconfigured either in the Ironic configuration file or in the Keystone endpoint catalog.
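For example, here is a hedged sketch of the relevant setting in /etc/ironic/ironic.conf (the section and option name vary by release, and the host/port are assumptions based on the error above and Neutron's default port 9696):
[neutron]
url = http://10.0.2.20:9696
Note the explicit http:// scheme, which is exactly what the error message says is missing.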

Where can I find log files in devstack?

[[local|localrc]]
ADMIN_PASSWORD=pass123
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=192.168.1.57
# Services
ENABLED_SERVICES=rabbit,mysql,key
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth
ENABLED_SERVICES+=,s-proxy,s-object,s-container,s-account
ENABLED_SERVICES+=,g-api,g-reg
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak
ENABLED_SERVICES+=,trove,tr-api,tr-tmgr,tr-cond
ENABLED_SERVICES+=,horizon
# Ceilometer
ENABLED_SERVICES+=,ceilometer-acompute,ceilometer-acentral,ceilometer-collector,ceilometer-api
ENABLED_SERVICES+=,ceilometer-alarm-notify,ceilometer-alarm-eval
# Heat
ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
# Neutron
DISABLED_SERVICES=n-net
ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-metering,neutron
# Neutron - Load Balancing
ENABLED_SERVICES+=,q-lbaas
# VLAN configuration
Q_PLUGIN=ml2
ENABLE_TENANT_VLANS=True
# GRE tunnel configuration
Q_PLUGIN=ml2
ENABLE_TENANT_TUNNELS=True
Q_ML2_TENANT_NETWORK_TYPE=gre
# Logging
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
LOGDAYS=2
# Swift
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
SWIFT_REPLICAS=1
SWIFT_DATA_DIR=$DEST/data
# Tempest
enable_service tempest
This is my local.conf file; I installed devstack using this configuration.
I'm new to devstack and I can't find the logs. I've searched the internet without finding a clear path to the log files, and I need them to debug an error that says no hosts were found.
As explained in this link, devstack now runs all services as systemd units (I can't recall exactly since which version, though I believe since the Pike release).
Thus, to check the logs of the different services you can use the journalctl utility (along with its many options).
Some examples:
sudo journalctl -f -u devstack@n-cond.service
In order to check the nova-conductor logs.
Or:
sudo journalctl -f -u "devstack@h*"
to check all heat-related services.
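To see which devstack units exist in the first place, you can list them with standard systemctl globbing:
systemctl list-units "devstack@*" --all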
In older devstack setups (before the systemd units mentioned above), the logs of OpenStack services are attached to Linux screen sessions. You can find them by running screen -x stack and switching between the per-service windows (Ctrl-a " lists them).
Troubleshooting the error "No valid host was found":
Check whether the tenant/project has enough resources (vCPUs, disk, RAM).
If you are using an availability_zone, check the available resources of the compute hosts in that zone (see the example below).
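For example, a quick hedged way to check aggregate compute resources from the standard nova CLI (output fields vary by release):
nova hypervisor-stats
or, for a single host:
nova hypervisor-show <hypervisor-name>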

Error on configuring Devstack compute nodes: Service n-net is not running

While installing Devstack on a compute node in a multi-node devstack lab environment, I encountered the error: Service n-net is not running.
The local.conf file has localrc as:
HOST_IP=192.168.42.12 # change this per compute node
FLAT_INTERFACE=eth0
FIXED_RANGE=10.4.128.0/20
FIXED_NETWORK_SIZE=4096
FLOATING_RANGE=192.168.42.128/25
MULTI_HOST=1
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=labstack
DATABASE_PASSWORD=supersecret
RABBIT_PASSWORD=supersecret
SERVICE_PASSWORD=supersecret
DATABASE_TYPE=mysql
SERVICE_HOST=192.168.42.11
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
ENABLED_SERVICES=n-cpu,n-net,n-api-meta,c-vol
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL="http://$SERVICE_HOST:6080/vnc_auto.html"
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN
Please help me resolve this error.
P.S.: I must use nova-network and not neutron for the interaction between the controller and the compute nodes.
For the Ocata release I found a solution (2-node setup). An important part is the placement-api, required since the Newton release (14.0.0), so first of all enable it on all your nodes:
local.conf:
enable_service placement-api
First run ./stack.sh on your controller node and after that installation run it on the other nodes.
Here, too, you will see the error Service n-net is not running...
Now edit your nova.conf file in /etc/nova/nova.conf, because the [database] and [api_database] sections will be missing:
[database]
connection=mysql+pymysql://root:DB_PASS@IP_OF_CONTROLLER_NODE/nova
[api_database]
connection=mysql+pymysql://root:DB_PASS@IP_OF_CONTROLLER_NODE/nova_api
After adding these, you can check whether it works with the following command:
stack@jerico-02:/devstack$ nova-manage --debug host list
host zone
0.0.0.0 internal
jerico-03 internal
jerico-02 nova
The new compute node (hypervisor) also shows up in the dashboard!
Hope it helps!
(Tested on Ubuntu Server 16.04 LTS with devstack and OpenStack Ocata)

nova-scheduler doesn't rpc.cast to nova-compute, no errors, but VM stuck in 'scheduling' state

OpenStack Juno + OpenContrail, Ubuntu 14.04.2 LTS, 2-node setup: control + compute.
Everything worked well until I deleted and reinstalled the compute node.
Now, when starting a new VM, it gets stuck in the 'scheduling' state.
There are no errors in the logs.
With debug enabled I can see nova-scheduler doing its filtering, after which it should pass an rpc.cast to nova-compute.
nova-compute shows nothing in its debug log.
P.S.: RabbitMQ is OK; I see many control connections and 3 connections from the compute node.
If you run nova list, do you see the network interfaces of the new VM?
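One additional hedged check (a standard rabbitmqctl invocation; the grep pattern assumes nova's default queue naming): verify that nova-compute actually has a consumer on its queue:
rabbitmqctl list_queues name messages consumers | grep compute
If messages pile up on the compute queue with zero consumers, the cast is being sent but never picked up.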

failed to launch Openstack instance: 'authentication required' when trying to create port

I'm trying to deploy OpenStack Icehouse on Ubuntu Server 14.04 by following the official document. But after Keystone, Nova, Neutron, and Glance were deployed, launching a CirrOS instance with
nova boot --nic ... --image ... --flavor ...
failed.
The Nova client log shows that:
The Neutron client (yes, it's Neutron; I guess there are interactions between them during booting) tried to connect to the Neutron server to create a port on the tenant's network.
The Neutron client built its token request to the Keystone server with {username: neutron, password: REDACTED} and used that token in the port-creation request to the Neutron server.
Finally, the Neutron server decided that this was an authentication problem.
I'm sure I requested the instance creation with tenant 'demo''s info ($OS_TENANT_NAME, $OS_USERNAME, $OS_PASSWORD, and $OS_AUTH_URL were properly set to 'demo''s values) via
source demoopenrc.sh
with demo's credentials in that file.
Is something wrong in the Neutron client's configuration or in the booting process? I paste part of neutron.conf here.
The Keystone section:
[keystone_authtoken]
auth_uri = http://controller:5000
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = neutronpass
signing_dir = $state_path/keystone-signing
Since the Neutron client used the 'neutron' user's credentials to get the token, is there something wrong in this part?
The problem was solved after nearly a month. For anyone still interested in this problem, please visit here.
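For reference, a hedged sketch of the nova.conf options that Nova's embedded Neutron client reads in Icehouse (these are the Icehouse-era option names; the URLs and password are placeholders to adapt), since a mismatch between neutron_admin_password and the 'neutron' user's password in Keystone produces exactly this kind of authentication failure on port creation:
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://controller:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = neutronpass
neutron_admin_auth_url = http://controller:35357/v2.0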
