I'm using CentOS 6.5 x86_64 to set up OpenStack Havana, and all services were working well. But after I rebooted the operating system, I found that the nova service no longer works properly; the following error is triggered:
nova flavor-list
ERROR: [Errno 111] Connection refused
Reviewing the log files in /var/log/nova gives the following error:
2014-03-24 12:24:04.293 6275 INFO nova.osapi_compute.wsgi.server [-] (6275) wsgi starting up
2014-03-24 12:24:04.297 6267 CRITICAL nova [-] [Errno 98] Address already in use
2014-03-24 12:24:04.412 6275 INFO nova.openstack.common.service [-] Parent process has died unexpectedly, exiting
2014-03-24 12:24:04.412 6274 INFO nova.openstack.common.service [-] Parent process has died unexpectedly, exiting
2014-03-24 12:24:04.412 6275 INFO nova.wsgi [-] Stopping WSGI server.
2014-03-24 12:24:04.412 6274 INFO nova.wsgi [-] Stopping WSGI server.
The state of my OpenStack server
nova-manage service list
Binary Host Zone Status State Updated_At
nova-cert controller internal enabled :-) 2014-03-24 14:28:03
nova-consoleauth controller internal enabled :-) 2014-03-24 14:28:01
nova-scheduler controller internal enabled :-) 2014-03-24 14:28:00
nova-conductor controller internal enabled :-) 2014-03-24 14:27:59
nova-compute controller nova enabled :-) 2014-03-24 14:28:06
nova-network controller internal enabled :-) 2014-03-24 14:27:58
keystone service-list
+----------------------------------+----------+----------+---------------------------+
| id | name | type | description |
+----------------------------------+----------+----------+---------------------------+
| 7ce108d652ee48d7897127045a371795 | cinder | volume | Cinder Volume Service |
| 9452b875328f4763b7766eb533bd75c4 | cinderv2 | volumev2 | Cinder Volume Service V2 |
| e9607d1a308140298f8364fd2a0e62a8 | glance | image | Glance Image Service |
| b7ac07f69e2e41f684d6470c69db4781 | keystone | identity | Keystone Identity Service |
| cbdfa73329094d7d94c7464b9bf0ef7d | nova | compute | Nova Compute service |
+----------------------------------+----------+----------+---------------------------+
ps -ef | grep "nova-api"
nova 2522 1 0 11:22 ? 00:00:00 /usr/bin/python /usr/bin/nova-api-metadata --logfile /var/log/nova/metadata-api.log
root 11909 6217 0 15:11 pts/1 00:00:01 gedit nova-api.log
root 12644 3832 0 15:31 pts/0 00:00:00 grep nova-api
netstat -napo | grep 877
tcp 0 0 0.0.0.0:8775 0.0.0.0:* LISTEN 2522/python off (0.00/0/0)
Any pointers would be extremely helpful.
Thanks
First of all, I strongly recommend searching for or asking about this on ask.openstack.org.
From what you describe, the likely cause is that you have enabled both the nova-api-metadata service and the nova-api service at the same time.
From the default configuration we know that ['ec2', 'osapi_compute', 'metadata'] are enabled; see https://github.com/openstack/nova/blob/stable/havana/nova/service.py#L55
So nova-api starts each of these services one by one when it is launched; see https://github.com/openstack/nova/blob/stable/havana/nova/cmd/api.py#L45
Since the nova-api-metadata service is already running, port 8775 is in use, so the metadata service launched by nova-api dies; because this exception is not caught, the other two die as well, and you get what you see in the log.
If my assumption is right, please disable the nova-api-metadata service and use the nova-api service only, i.e. something like 'chkconfig openstack-nova-api-metadata off; chkconfig openstack-nova-api on'. I'm not sure about the exact service names on your system, but they should be something like that; correct me if I'm wrong.
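For example, on CentOS 6 the check and switchover could look roughly like this (the openstack-nova-* service names are assumptions, as noted above, and may differ on your system):
netstat -napo | grep 8775                     # confirm which process is holding the metadata port
chkconfig --list | grep nova                  # see which nova services start at boot
service openstack-nova-api-metadata stop      # assumed service name
chkconfig openstack-nova-api-metadata off
chkconfig openstack-nova-api on
service openstack-nova-api restart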
Connection refused is a commonly encountered error. One possible case is that Keystone is refusing the connection for the nova service.
Make sure the SERVICE_PASSWORD for nova and quantum is the same one used while creating the Keystone services. Go to the quantum and nova config files and verify that the SERVICE_PASSWORD values are the same.
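As a rough sketch of that check on a typical packaged Havana install (paths and option names are assumptions and may differ on your system):
grep admin_password /etc/nova/nova.conf /etc/nova/api-paste.ini
grep admin_password /etc/quantum/quantum.conf /etc/quantum/api-paste.ini
# both values should match the SERVICE_PASSWORD that was used when the
# Keystone service users were created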
Njoy!!
I'm running CentOS v7.9 with MariaDB v5.5.68. I'm trying to access the MariaDB databases from a Win10 machine using Visual Studio Code with SQLTools & MySQL/MariaDB extensions.
I have configured MariaDB for remote access per this link: Configuring MariaDB for Remote Client Access
[mysqld]
skip-networking=0
skip-bind-address
I created the users and added the privileges - tested by logging in locally with 'bob' and viewing permissions in mysql.user. (BTW, in case not readily apparent, the UID, host, and PWD aren't real.)
CREATE USER 'bob'@'1.2.3.%' IDENTIFIED BY 'myPWD';
GRANT ALL PRIVILEGES ON *.* TO 'bob'@'1.2.3.%' IDENTIFIED BY 'myPWD';
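(For reference, the local check mentioned below can be done along these lines; the user/host values mirror the example above:)
mysql -u root -p -e "SHOW GRANTS FOR 'bob'@'1.2.3.%'"
mysql -u root -p -e "SELECT user, host FROM mysql.user WHERE user = 'bob'"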
However, when I try to log in remotely (from another Linux box) using mysql -u userID -h hostIP -p, I get the error:
ERROR 2003 (HY000): Can't connect to MySQL server on '1.2.3.4' (110)
When I try to make the database connection using VS Code, SQLTools tells me I've connected, but it won't show any tables, I'm not able to make any queries, and I get this error: Request connection/GetChildrenForTreeItemRequest failed with message: Handshake inactivity timeout.
I have reviewed this SO page and others, but still can't get the connection to work.
UPDATED for clarity - provides mysql.user and netstat info:
MariaDB [(none)]> select user, host from mysql.user;
+------+-------------+
| user | host |
+------+-------------+
| bob | 10.0.2.15 | # Can't connect
| rob | 127.0.0.1 | # Logs in locally via command line
| root | 127.0.0.1 | # Logs in locally via command line
| bob | 192.168.0.% | # Can't connect
| root | 192.168.0.% | # Can't connect
| root | ::1 | # Logs in locally via command line
| rob | localhost | # Logs in locally via command line
| root | localhost | # Logs in locally via command line
+------+-------------+
8 rows in set (0.00 sec)
$ > netstat -tulpen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 27 33813 -
Any help is much appreciated as I've been working this problem for 2+ days and have not made any headway.
I am getting a connection timeout error while uploading a stemcell to the BOSH director. I am using BOSH CLI v2. The following is my error log.
> bosh -e sdp-bosh-env upload-stemcell https://bosh.io/d/stemcells/bosh-openstack-kvm-ubuntu-trusty-go_agent?v=3541.12 --fix
Using environment '10.82.73.8' as client 'admin'
Task 13
Task 13 | 05:02:40 | Update stemcell: Downloading remote stemcell (00:00:51)
Task 13 | 05:03:31 | Update stemcell: Extracting stemcell archive (00:00:03)
Task 13 | 05:03:34 | Update stemcell: Verifying stemcell manifest (00:00:00)
Task 13 | 05:03:35 | Update stemcell: Checking if this stemcell already exists (00:00:00)
Task 13 | 05:03:35 | Update stemcell: Uploading stemcell bosh-openstack-kvm-ubuntu-trusty-go_agent/3541.12 to the cloud (00:10:41)
L Error: CPI error 'Bosh::Clouds::CloudError' with message 'Unable to connect to the OpenStack Keystone API http://10.81.102.5:5000/v2.0/tokens
Connection timed out - connect(2) for 10.81.102.5:5000 (Errno::ETIMEDOUT)' in 'create_stemcell' CPI method
Task 13 | 05:14:16 | Error: CPI error 'Bosh::Clouds::CloudError' with message 'Unable to connect to the OpenStack Keystone API http://10.81.102.5:5000/v2.0/tokens
Connection timed out - connect(2) for 10.81.102.5:5000 (Errno::ETIMEDOUT)' in 'create_stemcell' CPI method
Task 13 Started Sat Apr 7 05:02:40 UTC 2018
Task 13 Finished Sat Apr 7 05:14:16 UTC 2018
Task 13 Duration 00:11:36
Task 13 error
Uploading remote stemcell 'https://bosh.io/d/stemcells/bosh-openstack-kvm-ubuntu-trusty-go_agent?v=3541.12':
Expected task '13' to succeed but state is 'error'
Exit code 1
Check the OpenStack security group for the BOSH director machine.
The SG should contain an 'ALLOW IPv4 to 0.0.0.0/0' egress rule; if it doesn't, add at least an egress TCP rule to 10.81.102.5 on port 5000.
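For example, a sketch with the openstack CLI (the security group name bosh-director-sg is a placeholder):
openstack security group rule create --egress --ethertype IPv4 \
    --protocol tcp --dst-port 5000 --remote-ip 10.81.102.5/32 bosh-director-sg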
Check connection using ssh:
bbl ssh --director
nc -tvn 10.81.102.5 5000
If that doesn't help, check your network/firewall configuration.
https://bosh.io/docs/uploading-stemcells/
https://github.com/cloudfoundry/bosh-bootloader/blob/master/terraform/openstack/templates/resources.tf
I've set up 2 servers (srv50/51);
one of them is the master and the other one is a slave.
Here is my configuration file /etc/maxscale.cnf:
[Read-Only Service]
type=service
router=readconnroute
servers=server50, server51
user=YYYYYYYYYYYYY
passwd=XXXXXXXXXXXXXX
router_options=slave
[Write-Only Service]
type=service
router=readconnroute
servers=server50, server51
user=YYYYYYYYYYYYY
passwd=XXXXXXXXXXXXXX
router_options=master
[Read-Only Listener]
type=listener
service=Read-Only Service
protocol=MySQLClient
port=4008
[Write-Only Listener]
type=listener
service=Write-Only Service
protocol=MySQLClient
port=4009
As I understand it, router_options determines which server is the master and sends write queries to it.
MaxScale (via maxadmin) seems to discover the 2 servers and understand which one is the master:
MaxScale> list servers
Servers.
-------------------+-----------------+-------+-------------+--------------------
Server | Address | Port | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
server51 | 192.168.0.51 | 3306 | 0 | Slave, Running
server50 | 192.168.0.50 | 3306 | 0 | Master, Running
-------------------+-----------------+-------+-------------+--------------------
But even when I connect to MySQL locally on my MaxScale Write-Only Listener port (4009), the listeners are in Stopped mode. Is that normal?
MaxScale> list listeners
Listeners.
---------------------+--------------------+-----------------+-------+--------
Service Name | Protocol Module | Address | Port | State
---------------------+--------------------+-----------------+-------+--------
Read-Only Service | MySQLClient | * | 4008 | Stopped
Write-Only Service | MySQLClient | * | 4009 | Stopped
MaxAdmin Service | maxscaled | * | 6603 | Running
---------------------+--------------------+-----------------+-------+--------
I tried to create a database on srv51 (the slave), and it was created only on srv51, not on srv50.
Is something wrong in my configuration? It's strange, because this is not my first cluster, and on the other clusters all writes go to the master (but there the listeners are Running). Am I misunderstanding the meaning of "router_options=master"? How do I start the listeners? I would prefer to keep srv51 in the write list to detect topology changes.
===== UPDATE =====
After watching the log file /var/log/maxscale/maxscale1.log,
I found that my monitor user didn't have the correct password:
[MySQL Monitor]
type=monitor
module=mysqlmon
servers=server50, server51
user=MONITOR
passwd=MONITOR_PASS
monitor_interval=10000
I corrected the password for the user and restarted MaxScale; now everything is running:
MaxScale> list listeners
Listeners.
---------------------+--------------------+-----------------+-------+--------
Service Name | Protocol Module | Address | Port | State
---------------------+--------------------+-----------------+-------+--------
Read-Only Service | MySQLClient | * | 4008 | Running
Write-Only Service | MySQLClient | * | 4009 | Running
MaxAdmin Service | maxscaled | * | 6603 | Running
---------------------+--------------------+-----------------+-------+--------
But write queries are still executed on the slave and not on the master.
Thanks to MariaDB support, I found the cause: I was trying to connect like this:
mysql -h localhost --port=4009 -u USER -p
But MaxScale and MySQL were installed on the same server. Even though MySQL binds port 3306, when you specify 'localhost' the MySQL client connects through the local Unix socket straight to MySQL and not to MaxScale on port 4009; the port option is ignored!
The solution is to connect like this :
mysql -h 127.0.0.1 --port=4009 -u USER -p
or like this :
mysql -h localhost --protocol=tcp --port=4009 -u USER -p
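To check which transport the client actually used, the STATUS output of the mysql client can be inspected (a sketch; the wording may vary by client version):
mysql -h localhost --port=4009 -u USER -p -e "STATUS" | grep "^Connection:"
# "Localhost via UNIX socket"  -> the port was ignored, you reached MariaDB directly
mysql -h 127.0.0.1 --port=4009 -u USER -p -e "STATUS" | grep "^Connection:"
# "127.0.0.1 via TCP/IP"       -> the connection really went through MaxScale on 4009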
I've tried both solutions and they work.
The solution for the listeners not being in the Running state is in the update to the question.
If writes are done on the slaves, the simplest explanation would be that you're executing writes on the wrong port or your configuration is wrong. To diagnose these problems, enable the info log level by adding log_info=true under the [maxscale] section.
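For example, assuming a global [maxscale] section already exists in /etc/maxscale.cnf (any other keys under it stay as they are):
[maxscale]
log_info=true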
If enabling the info log and inspecting the log files does not provide any clues, I'd suggest opening a bug report on the Maxscale Jira.
Can someone please help me figure out why this call is no longer working in Apache Karaf 3.0.2? I verified that it was working in version 3.0.1. All instances are up and running, but I am unable to connect to one of my instances directly from the command line.
su - karaf -c " client -h localhost -a 8101 -u karaf -r 50 -d 2 \" instance:connect -u karaf -p karaf test1 \\\" feature:repo-list \\\" \" "
Logging in as karaf
455 [sshd-SshClient[bea319b]-nio2-thread-1] WARN org.apache.sshd.client.keyverifier.AcceptAllServerKeyVerifier - Server at [localhost/127.0.0.1:8101, DSA, b6:f6:d6:3f:8b:2f:ad:a4:0f:3f:3d:c3:7b:96:fd:ae] presented unverified {} key: {}
Connecting to host localhost on port 8103
Connecting to unknown server. Automatically adding to known hosts.
Storing the server key in known_hosts.
Error executing command: Authentication failed
The call is part of an automated process, so I cannot connect to a specific instance directly. Is there any specific configuration required that was not necessary in 3.0.1?
UPDATE #1:
I have added the verbose option... Does it give you any hints about what to do?
client -v -h localhost -a 8101 -u karaf -r 50 -d 2 " instance:connect -u karaf test1 \" feature:repo-list \" "
39 [main] INFO org.apache.sshd.common.util.SecurityUtils - BouncyCastle not registered, using the default JCE provider
Logging in as karaf
367 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Client session created
380 [main] INFO org.apache.sshd.client.session.ClientSessionImpl - Start flagging packets as pending until key exchange is done
383 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Server version string: SSH-2.0-SSHD-CORE-0.12.0
384 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Kex: server->client [aes128-ctr, hmac-sha1, none] {} {}
384 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Kex: client->server [aes128-ctr, hmac-sha1, none] {} {}
444 [sshd-SshClient[bea319b]-nio2-thread-1] WARN org.apache.sshd.client.keyverifier.AcceptAllServerKeyVerifier - Server at [localhost/127.0.0.1:8101, DSA, 22:8b:f8:9d:bc:c6:40:d8:fe:52:aa:90:c0:f2:70:ec] presented unverified {} key: {}
457 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Dequeing pending packets
524 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientUserAuthServiceNew - Received SSH_MSG_USERAUTH_FAILURE
568 [sshd-SshClient[bea319b]-nio2-thread-2] INFO org.apache.sshd.client.session.ClientUserAuthServiceNew - Received SSH_MSG_USERAUTH_SUCCESS
Connecting to host localhost on port 8102
Error executing command: Authentication failed
UPDATE #2:
I switched the logger to DEBUG and I found this exception:
2015-01-15 11:28:48,920 | DEBUG | 5]-nio2-thread-1 | ClientSessionImpl | 28 - org.apache.sshd.core - 0.12.0 | Received SSH_MSG_SERVICE_ACCEPT
2015-01-15 11:28:48,920 | INFO | 5]-nio2-thread-1 | ClientUserAuthServiceNew | 28 - org.apache.sshd.core - 0.12.0 | Received SSH_MSG_USERAUTH_FAILURE
2015-01-15 11:28:48,920 | DEBUG | 5]-nio2-thread-1 | ClientUserAuthServiceNew | 28 - org.apache.sshd.core - 0.12.0 | Authentications that can continue: keyboard-interactive, password, publickey
2015-01-15 11:28:48,922 | DEBUG | 5]-nio2-thread-1 | Nio2Session | 28 - org.apache.sshd.core - 0.12.0 | Caught exception, now calling handler
2015-01-15 11:28:48,922 | WARN | 5]-nio2-thread-1 | ClientSessionImpl | 28 - org.apache.sshd.core - 0.12.0 | Exception caught
java.lang.IllegalStateException: No SSH_AUTH_SOCK environment variable set
at org.apache.karaf.shell.ssh.KarafAgentFactory.createClient(KarafAgentFactory.java:71)
at org.apache.sshd.client.auth.UserAuthPublicKey.init(UserAuthPublicKey.java:78)
at org.apache.sshd.client.session.ClientUserAuthServiceNew.tryNext(ClientUserAuthServiceNew.java:212)
at org.apache.sshd.client.session.ClientUserAuthServiceNew.processUserAuth(ClientUserAuthServiceNew.java:178)
at org.apache.sshd.client.session.ClientUserAuthServiceNew.process(ClientUserAuthServiceNew.java:131)
at org.apache.sshd.client.session.ClientUserAuthService.process(ClientUserAuthService.java:80)
at org.apache.sshd.common.session.AbstractSession.doHandleMessage(AbstractSession.java:399)
at org.apache.sshd.common.session.AbstractSession.handleMessage(AbstractSession.java:295)
at org.apache.sshd.client.session.ClientSessionImpl.handleMessage(ClientSessionImpl.java:256)
at org.apache.sshd.common.session.AbstractSession.decode(AbstractSession.java:731)
at org.apache.sshd.common.session.AbstractSession.messageReceived(AbstractSession.java:277)
at org.apache.sshd.common.AbstractSessionIoHandler.messageReceived(AbstractSessionIoHandler.java:54)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:187)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:173)
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[28:org.apache.sshd.core:0.12.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.7.0_65]
at sun.nio.ch.Invoker.invokeDirect(Invoker.java:157)[:1.7.0_65]
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.implRead(UnixAsynchronousSocketChannelImpl.java:553)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:275)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:296)[:1.7.0_65]
at java.nio.channels.AsynchronousSocketChannel.read(AsynchronousSocketChannel.java:407)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2Session.startReading(Nio2Session.java:173)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:189)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:173)
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[28:org.apache.sshd.core:0.12.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.7.0_65]
at sun.nio.ch.Invoker.invokeDirect(Invoker.java:157)[:1.7.0_65]
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.implRead(UnixAsynchronousSocketChannelImpl.java:553)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:275)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:296)[:1.7.0_65]
at java.nio.channels.AsynchronousSocketChannel.read(AsynchronousSocketChannel.java:407)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2Session.startReading(Nio2Session.java:173)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:189)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:173)
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[28:org.apache.sshd.core:0.12.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.7.0_65]
at sun.nio.ch.Invoker.invokeDirect(Invoker.java:157)[:1.7.0_65]
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.implRead(UnixAsynchronousSocketChannelImpl.java:553)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:275)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:296)[:1.7.0_65]
at java.nio.channels.AsynchronousSocketChannel.read(AsynchronousSocketChannel.java:407)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2Session.startReading(Nio2Session.java:173)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:189)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:173)
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[28:org.apache.sshd.core:0.12.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.7.0_65]
at sun.nio.ch.Invoker.invokeDirect(Invoker.java:157)[:1.7.0_65]
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.implRead(UnixAsynchronousSocketChannelImpl.java:553)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:275)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:296)[:1.7.0_65]
at java.nio.channels.AsynchronousSocketChannel.read(AsynchronousSocketChannel.java:407)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2Session.startReading(Nio2Session.java:173)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2Connector$1.onCompleted(Nio2Connector.java:53)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2Connector$1.onCompleted(Nio2Connector.java:46)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[28:org.apache.sshd.core:0.12.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.7.0_65]
at sun.nio.ch.Invoker$2.run(Invoker.java:218)[:1.7.0_65]
at sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112)[:1.7.0_65]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_65]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_65]
at java.lang.Thread.run(Thread.java:745)[:1.7.0_65]
I installed OpenStack. All services are running successfully.
[root@test ~]# nova-manage service list
Binary Host Zone Status State Updated_At
nova-cert localhost.localdomain nova enabled :-) 2012-11-06 04:25:36.396817
nova-scheduler localhost.localdomain nova enabled :-) 2012-11-06 04:25:41.735192
nova-network compute nova enabled :-) 2012-11-06 04:25:42.109157
nova-compute compute nova enabled :-) 2012-11-06 04:25:43.240902
After that I changed HOSTNAME in /etc/sysconfig/network to myhost.mydomain and restarted the services.
Now I get duplicate entries for the services.
[root@test ~]# nova-manage service list
Binary Host Zone Status State Updated_At
nova-cert localhost.localdomain nova enabled XXX 2012-11-06 04:25:36.396817
nova-cert myhost.mydomain nova enabled :-) 2012-11-06 05:25:36.396817
nova-scheduler localhost.localdomain nova enabled XXX 2012-11-06 04:25:41.735192
nova-scheduler myhost.mydomain nova enabled :-) 2012-11-06 05:25:41.735192
nova-network compute nova enabled :-) 2012-11-06 04:25:42.109157
nova-compute compute nova enabled :-) 2012-11-06 04:25:43.240902
Of these services, the old ones are no longer running.
I want to remove the services for host localhost.localdomain.
I checked nova-manage service --help but there is no option for delete :(.
[root@test ~]# nova-manage service --help
--help does not match any options:
describe_resource
disable
enable
list
Looking at your example above, I suspect you're seeing a duplicate because you have two hosts with their hostnames set identically. If this is the case, the following code/answer isn't likely to help you out too much. There's an implicit assumption in that whole setup that hostnames of nodes upon which nova worker processes run will be unique.
In the latest branch, there isn't a command explicitly enabled for this, but the API exists underneath to do what you're after. Here's a snippet of code (untested!) that should do what you want; or at least point you to the relevant API if you're interested.
from nova import context
from nova import db

hostname = 'some_hostname'                          # e.g. 'localhost.localdomain'
service_name = 'nova_service_you_want_to_destroy'   # e.g. 'nova-cert'

ctxt = context.get_admin_context()
# Look up the stale service row for this host/binary pair ...
service = db.service_get_by_args(ctxt, hostname, service_name)
# ... inspect it to make sure it is the entry you want to remove, then delete it.
db.service_destroy(ctxt, service['id'])
NOTE: this will remove the service from the database, or raise an exception if it doesn't exist (or if something else goes wrong). If the service is still running, expect it to just "show up" again, since the service list is populated by the various nova worker processes reporting in.
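Assuming the workers on the stale hostname are stopped, you can verify the result the same way you spotted the duplicates:
nova-manage service list    # the localhost.localdomain rows should be gone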