Where can I find log files in DevStack? - openstack

[[local|localrc]]
ADMIN_PASSWORD=pass123
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=192.168.1.57
# Services
ENABLED_SERVICES=rabbit,mysql,key
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth
ENABLED_SERVICES+=,s-proxy,s-object,s-container,s-account
ENABLED_SERVICES+=,g-api,g-reg
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak
ENABLED_SERVICES+=,trove,tr-api,tr-tmgr,tr-cond
ENABLED_SERVICES+=,horizon
# Ceilometer
ENABLED_SERVICES+=,ceilometer-acompute,ceilometer-acentral,ceilometer-collector,ceilometer-api
ENABLED_SERVICES+=,ceilometer-alarm-notify,ceilometer-alarm-eval
# Heat
ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
# Neutron
DISABLED_SERVICES=n-net
ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-metering,neutron
# Neutron - Load Balancing
ENABLED_SERVICES+=,q-lbaas
# VLAN configuration
Q_PLUGIN=ml2
ENABLE_TENANT_VLANS=True
# GRE tunnel configuration
Q_PLUGIN=ml2
ENABLE_TENANT_TUNNELS=True
Q_ML2_TENANT_NETWORK_TYPE=gre
# Logging
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
LOGDAYS=2
# Swift
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
SWIFT_REPLICAS=1
SWIFT_DATA_DIR=$DEST/data
# Tempest
enable_service tempest
This is my local.conf file. I installed DevStack using this configuration.
I'm new to DevStack and I can't find the logs now. I've searched the internet and can't find a clear path to the log files. I need the log files to troubleshoot an error that says no hosts were found.

As explained in this link, DevStack now runs all services as systemd unit files (I can't recall exactly since which version, but I would say since the Pike release).
Thus, to check the logs of the different services you can use the journalctl utility (along with its infinite options).
Some examples:
sudo journalctl -f -u devstack@n-cond.service
In order to check the nova-conductor logs.
Or:
sudo journalctl -f -u 'devstack@h*'
to check all heat-related services.
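For the "no hosts were found" error specifically, the nova-scheduler (n-sch) and nova-conductor logs are usually the most informative. A rough sketch (unit names assume the default devstack@ naming; adjust the time window as needed):
# List the DevStack-managed units to see what is actually running
systemctl list-units --type=service 'devstack@*'
# Search the recent nova-scheduler log for the reason no host qualified
sudo journalctl -u devstack@n-sch.service --since "1 hour ago" --no-pager | grep -i "no valid host"
# Dump a unit's recent log lines to a file for easier searching
sudo journalctl -u devstack@n-cpu.service -n 1000 > n-cpu.log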

In DevStack, the logs of OpenStack services are attached to Linux screen sessions. You can find them by running the command screen -x stack and then switching to each service's window to view its log.
Troubleshooting the error "No valid hosts were found":
Check whether there are enough resources (vCPU, disk, RAM) in the tenant/project.
If you are using an availability_zone, check the available resources of the compute hosts in that zone (see the sketch below).
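A rough sketch of both checks (paths assume the default DEST of /opt/stack and the SCREEN_LOGDIR from the question's local.conf; exact log file names can vary by release, and admin credentials need to be sourced first):
# With SCREEN_LOGDIR set, each screen window also logs to a file
ls /opt/stack/logs/screen/
grep -i "no valid host" /opt/stack/logs/screen/screen-n-sch.log
# Check overall free resources on the hypervisors
nova hypervisor-stats
nova availability-zone-list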

Related

openstack services not showing up

OpenStack (DevStack) was just installed on Ubuntu 20.04.3 (4 CPUs, 8 GB) by stack.sh, with this local.conf:
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=192.168.31.200
And it looks like it works fine, but "systemctl -t service" doesn't show the Cinder services (cinder-scheduler, cinder-volume) and obviously some others:
systemctl -t service output
cinder service-list output
So, what is going on?
All services named devstack@... are OpenStack services. Specifically, Cinder's services are named c-api, c-sch and c-vol.
See also https://docs.openstack.org/devstack/latest/systemd.html.
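For example, to confirm the Cinder units exist and read their logs (a sketch using the devstack@ unit naming described above):
systemctl list-units --type=service 'devstack@c-*'
sudo systemctl status devstack@c-vol.service
sudo journalctl -u devstack@c-sch.service -n 100 --no-pager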

Openstack All-In-One Single Machine Networking

I'm having a hard time configuring an Openstack environment based on the All-In-One Single Machine installer for bridged networking in my LAN.
My objective is to SSH into the instances created in Openstack from my LAN.
The server is an Ubuntu 16.04 LTS with minimal installation and OpenSSH. The network configuration of the server is:
auto enp3s0
iface enp3s0 inet static
address 10.4.4.1
netmask 255.255.255.0
gateway 10.4.4.254
broadcast 10.4.4.255
network 10.4.4.0
dns-nameservers 10.4.1.12 10.4.1.10
Basically my network details are the following:
LAN 10.4.4.0
MASK 255.255.255.0
Gateway/DHCP Server 10.4.4.254
The local.conf file I've used for deploying the devstack is the following:
# Sample ``local.conf`` for user-configurable variables in ``stack.sh``
# NOTE: Copy this file to the root DevStack directory for it to work properly.
# ``local.conf`` is a user-maintained settings file that is sourced from ``stackrc``.
# This gives it the ability to override any variables set in ``stackrc``.
# Also, most of the settings in ``stack.sh`` are written to only be set if no
# value has already been set; this lets ``local.conf`` effectively override the
# default values.
# This is a collection of some of the settings we have found to be useful
# in our DevStack development environments. Additional settings are described
# in https://docs.openstack.org/devstack/latest/configuration.html#local-conf
# These should be considered as samples and are unsupported DevStack code.
# The ``localrc`` section replaces the old ``localrc`` configuration file.
# Note that if ``localrc`` is present it will be used in favor of this section.
[[local|localrc]]
# Minimal Contents
# ----------------
# While ``stack.sh`` is happy to run without ``localrc``, devlife is better when
# there are a few minimal variables set:
# If the ``*_PASSWORD`` variables are not set here you will be prompted to enter
# values for them by ``stack.sh`` and they will be added to ``local.conf``.
FLOATING_RANGE=10.4.4.192/27
FIXED_RANGE=192.168.0.0/24
FIXED_NETWORK_SIZE=256
FLAT_INTERFACE=enp3s0
ADMIN_PASSWORD=nomoresecret
DATABASE_PASSWORD=stackdb
RABBIT_PASSWORD=stackqueue
SERVICE_PASSWORD=$ADMIN_PASSWORD
# ``HOST_IP`` and ``HOST_IPV6`` should be set manually for best results if
# the NIC configuration of the host is unusual, i.e. ``eth1`` has the default
# route but ``eth0`` is the public interface. They are auto-detected in
# ``stack.sh`` but often is indeterminate on later runs due to the IP moving
# from an Ethernet interface to a bridge on the host. Setting it here also
# makes it available for ``openrc`` to include when setting ``OS_AUTH_URL``.
# Neither is set by default.
HOST_IP=10.4.4.1
#HOST_IPV6=2001:db8::7
# Logging
# -------
# By default ``stack.sh`` output only goes to the terminal where it runs. It can
# be configured to additionally log to a file by setting ``LOGFILE`` to the full
# path of the destination log file. A timestamp will be appended to the given name.
LOGFILE=$DEST/logs/stack.sh.log
# Old log files are automatically removed after 7 days to keep things neat. Change
# the number of days by setting ``LOGDAYS``.
LOGDAYS=2
# Nova logs will be colorized if ``SYSLOG`` is not set; turn this off by setting
# ``LOG_COLOR`` false.
#LOG_COLOR=False
# Using milestone-proposed branches
# ---------------------------------
# Uncomment these to grab the milestone-proposed branches from the
# repos:
#CINDER_BRANCH=milestone-proposed
#GLANCE_BRANCH=milestone-proposed
#HORIZON_BRANCH=milestone-proposed
#KEYSTONE_BRANCH=milestone-proposed
#KEYSTONECLIENT_BRANCH=milestone-proposed
#NOVA_BRANCH=milestone-proposed
#NOVACLIENT_BRANCH=milestone-proposed
#NEUTRON_BRANCH=milestone-proposed
#SWIFT_BRANCH=milestone-proposed
# Using git versions of clients
# -----------------------------
# By default clients are installed from pip. See LIBS_FROM_GIT in
# stackrc for details on getting clients from specific branches or
# revisions. e.g.
# LIBS_FROM_GIT="python-ironicclient"
# IRONICCLIENT_BRANCH=refs/changes/44/2.../1
# Swift
# -----
# Swift is now used as the back-end for the S3-like object store. Setting the
# hash value is required and you will be prompted for it if Swift is enabled
# so just set it to something already:
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
# For development purposes the default of 3 replicas is usually not required.
# Set this to 1 to save some resources:
SWIFT_REPLICAS=1
# The data for Swift is stored by default in (``$DEST/data/swift``),
# or (``$DATA_DIR/swift``) if ``DATA_DIR`` has been set, and can be
# moved by setting ``SWIFT_DATA_DIR``. The directory will be created
# if it does not exist.
SWIFT_DATA_DIR=$DEST/data
At the end of the deployment I'm able to ping from the instance to my LAN and do an nslookup on google.com, for example, but I can't do it the other way around: ping/SSH/telnet into the instance in OpenStack.
The security group permits all traffic, all ICMP ingress/egress, SSH from everywhere.
I've tried telnetting to my local computer from the OpenStack instance, and it shows the IP of the OpenStack host, not the instance's. So I'm missing something in the network topology.
netstat -ant | grep 1716
tcp6 0 0 :::1716 :::* LISTEN
tcp6 0 0 10.4.3.34:1716 10.4.4.1:42992 ESTABLISHED
Is there any type of network deployment I'm missing?
Any advice would be much appreciated!
If you are trying to access your instances from the "outside", you will need to create a floating IP pool and assign a floating IP to one of your instances.
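A minimal sketch with the OpenStack CLI (in a default DevStack the external network is usually called "public"; "myserver" and the address from the 10.4.4.192/27 range are placeholders, and cirros is the default user of the DevStack test image):
# Allocate a floating IP from the external network and note the address it returns
openstack floating ip create public
# Attach it to the instance, then reach the instance from the LAN via that address
openstack server add floating ip myserver 10.4.4.195
ssh cirros@10.4.4.195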

Error on configuring Devstack compute nodes: Service n-net is not running

While installing Devstack on a compute node in a multi-node devstack lab environment error encountered: Service n-net is not running.
The local.conf file has localrc as:
HOST_IP=192.168.42.12 # change this per compute node
FLAT_INTERFACE=eth0
FIXED_RANGE=10.4.128.0/20
FIXED_NETWORK_SIZE=4096
FLOATING_RANGE=192.168.42.128/25
MULTI_HOST=1
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=labstack
DATABASE_PASSWORD=supersecret
RABBIT_PASSWORD=supersecret
SERVICE_PASSWORD=supersecret
DATABASE_TYPE=mysql
SERVICE_HOST=192.168.42.11
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
ENABLED_SERVICES=n-cpu,n-net,n-api-meta,c-vol
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL="http://$SERVICE_HOST:6080/vnc_auto.html"
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN
Please help me remove this error.
P.S.: I must use nova-net and not Neutron for interaction between the controller and the compute nodes.
For the Ocata release I found a solution (2-node setup). An important part is the placement-api since the Newton release (14.0.0), so first of all enable this on all your nodes:
local.conf:
enable_service placement-api
First run ./stack.sh on your controller node, and after that installation finishes, run it on the other nodes.
Here you will also see the error Service n-net is not running...
Now edit your nova.conf file in /etc/nova/nova.conf, because the [database] and [api_database] sections will be missing:
[database]
connection=mysql+pymysql://root:DB_PASS@IP_OF_CONTROLLER_NODE/nova
[api_database]
connection=mysql+pymysql://root:DB_PASS@IP_OF_CONTROLLER_NODE/nova_api
After adding these, you can check whether it works with the following command:
stack@jerico-02:/devstack$ nova-manage --debug host list
host zone
0.0.0.0 internal
jerico-03 internal
jerico-02 nova
Also in the dashboard the new compute (hypervisor) shows up!
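To rule out connectivity problems, it can also be worth verifying that the compute node can reach the controller's MySQL directly (the credentials and host name here are the same placeholders as in the nova.conf snippet above):
mysql -u root -pDB_PASS -h IP_OF_CONTROLLER_NODE -e 'SHOW DATABASES;' | grep -E 'nova|nova_api'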
Hope it helps!
(Tested on Ubuntu Server 16.04 LTS with devstack and OpenStack Ocata)

Accessing remote Fuse/Karaf console using SSH

I have a Fuse ESB standalone server running in a RHEL box. I want to connect to the Karaf console remotely to manage the bundles.
If I close my current session, how do I get back to my Karaf console again?
I have my Fuse ESB configured to use port 8101 for SSH. Will I be able to connect to it directly through my SSH client (PuTTY)?
Or do I need another Fuse ESB instance locally to access the remote Fuse instance?
Either way, I am not able to connect; it says access denied. Is there any other, easier way to connect to a remote Fuse/Karaf instance?
I even tried using Client.sh from the bin directory; it says authentication failure, although I have created a JAAS user with the Admin role.
By the way, is just a user enough to do this, or does it also need public/private key configuration?
What is the usual approach for managing a remote Fuse/Karaf instance?
You can find many details in the JBoss Fuse documentation (the successor to Fuse ESB) at
https://access.redhat.com/site/documentation/en-US/JBoss_Fuse/
And there is a chapter on remote connecting to containers here
https://access.redhat.com/site/documentation/en-US/JBoss_Fuse/6.0/html-single/Configuring_and_Running_JBoss_Fuse/index.html#ESBRuntimeRemote
You need to pass in credentials for a user on the container that is valid and is in the admin role.
The karaf shell also has a jaas command, which allows you to list the users and their roles, add new users, and so on. You can also do some user management from the FMC web console that is part of Fuse ESB.
You might also want to check your IPtables
http://ask.xmodulo.com/open-port-firewall-centos-rhel.html.
- $ sudo iptables -I INPUT -p tcp -m tcp --dport 8101 -j ACCEPT
- $ sudo service iptables save
- $ service iptables restart
From another karaf instance you can run this command
JBossFuse:karaf@root> ssh -l username -P password -p port hostname
e.g
- JBossFuse:karaf@root> ssh -l smx -P smx -p 8101 10.234.12.12
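A plain OpenSSH client (or PuTTY pointed at port 8101) should also be able to reach the Karaf console directly, assuming the JAAS user exists and carries the ssh/admin role, for example:
ssh -p 8101 smx@10.234.12.12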
You have to make sure that the ssh role name that is defined in etc/org.apache.karaf.shell.cfg
# sshRole defines the role required to access the console through ssh
#
sshRole = ssh
matches the one in etc/users.properties
#
# This file contains the users, groups, and roles.
# Each line has to be of the format:
#
# USER=PASSWORD,ROLE1,ROLE2,...
# USER=PASSWORD,_g_:GROUP,...
# _g_\:GROUP=ROLE1,ROLE2,...
#
# All users, groups, and roles entered in this file are available after Karaf startup
# and modifiable via the JAAS command group. These users reside in a JAAS domain
# with the name "karaf".
#
karaf = karaf,_g_:admingroup
_g_\:admingroup = group,admin,manager,viewer,ssh

Openstack-Keystone failing to start

I've tried almost everything in the past couple of days to get keystone running to no avail.
The setup is all on the same host: the virtualization, OpenStack, and Keystone all run on the same machine. I've tried setting up Keystone with 127.0.0.1, localhost, and the IP of the host, with no luck.
[DEFAULT] log_file = /var/log/keystone/keystone.log
admin_token = ***
bind_host = 192.168.33.11
public_port = 5000
admin_port = 35357
compute_port = 8774
# === Logging Options ===
# Print debugging output verbose = True
# Print more verbose output
# (includes plaintext request logging, potentially including passwords)
# debug = False
# Name of log file to output to. If not set, logging will go to stdout. log_file = keystone.log
# The directory to keep log files in (will be prepended to --logfile) log_dir = /var/log/keystone
# Use syslog for logging.
# use_syslog = False
# syslog facility to receive log lines
# syslog_log_facility = LOG_USER
# If this option is specified, the logging configuration file specified is
# used and overrides any other logging options specified. Please see the
# Python logging module documentation for details on logging configuration
# files. log_config = logging.conf
# A logging.Formatter log message format string which may use any of the
# available logging.LogRecord attributes.
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# Format string for %(asctime)s in log records.
# log_date_format = %Y-%m-%d %H:%M:%S
# onready allows you to send a notification when the process is ready to serve
# For example, to have it notify using systemd, one could set shell command:
# onready = systemd-notify --ready
# or a module with notify() method:
# onready = keystone.common.systemd
[sql] connection = mysql://keystone:***@localhost/keystone
# idle_timeout = 200
[identity] driver = keystone.identity.backends.sql.Identity
[catalog] template_file = /etc/keystone/default_catalog.templates driver = keystone.catalog.backends.sql.Catalog
# dynamic, sql-based backend (supports API/CLI-based management commands)
# driver = keystone.catalog.backends.sql.Catalog
# static, file-based backend (does *NOT* support any management commands)
# driver = keystone.catalog.backends.templated.TemplatedCatalog
# template_file = default_catalog.templates
[token] driver = keystone.token.backends.sql.Token
# driver = keystone.token.backends.kvs.Token
# Amount of time a token should remain valid (in seconds)
# expiration = 86400
I've enabled logging in the logging.conf file and set the level to DEBUG and INFO; however, there is nothing in the log files.
[root@* keystone]# service openstack-keystone restart
Stopping keystone: [FAILED]
Starting keystone: [ OK ]
[root@* keystone]# service openstack-keystone restart
Stopping keystone: [FAILED]
Starting keystone: [ OK ]
[root@* keystone]# ps aux | grep keystone
root 25580 0.0 0.0 103236 880 pts/1 S+ 09:41 0:00 grep keystone
[root@* keystone]#
Any ideas will be greatly appreciated. Thank you.
As I mentioned in the comment, I've never seen a config file with the section headings on the same line as a config option:
[DEFAULT] log_file = /var/log/keystone/keystone.log
I've always seen it like this instead:
[DEFAULT]
log_file = /var/log/keystone/keystone.log
However, I have no idea if this is related to your issue.
To enable debug-level logging, make sure you set the following in /etc/keystone/logging.conf:
[logger_root]
level=DEBUG
Then try running keystone manually instead of as a service:
$ sudo -u keystone bash
$ HOME=/var/lib/keystone keystone-all --debug
Hopefully you'll see a relevant error message on standard out.
(I believe it will still send the logging to /var/log/keystone/keystone.log, not sure how to actually get it to log to standard out when running manually like this).
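It can also help to first confirm whether anything is listening on the keystone ports at all (a quick check, assuming the default ports from the config above):
sudo netstat -lntp | grep -E ':(5000|35357)'
tail -n 50 /var/log/keystone/keystone.log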
Add a valid token for admin_token. It should not be "*".
Check the below line:
[sql] connection = mysql://keystone:*@localhost/keystone
It should be something like:
connection = mysql://keystone:keystone@localhost/keystone
Refer to this url for an example keystone.conf file
http://docs.openstack.org/trunk/openstack-compute/install/yum/content/keystone-conf-file.html
I ran into this issue as well. I am running Ubuntu 12.04 LTS. What I found was that the service start command in /etc/init/keystone.conf uses start-stop-daemon to run the service. It was written for a newer version than the one on my box: the --chdir variable is not accepted as an input. Once I removed that line, keystone started right up.
Try running:
start-stop-daemon --start --chuid keystone --name keystone --exec /usr/bin/keystone-all
/etc/init/keystone.conf after the change:
description "Keystone API server"
author "Soren Hansen <soren#linux2go.dk>"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec start-stop-daemon --start --chuid keystone \
--name keystone \
--exec /usr/bin/keystone-all
Check if your IP address is equal to HOST_IP=... in localrc.
This might be due to keystone not getting started properly, so port 35357 is not in listening mode.
This seems to be anomalous behavior of the keystone service.
I am mentioning steps which have worked on my system for a Havana installation on Ubuntu 12.04, kernel version 3.2.0-67-generic, after a day of headache around this issue. Try these steps, preferably in the same order.
1) Remove the keystone package:
apt-get remove keystone
2) Reboot your system
reboot
3) After the reboot, install keystone again:
apt-get install keystone
4) Check status of keystone service
service keystone status
It will show start/running
5) Now make the necessary changes in /etc/keystone/keystone.conf.
After making changes in the conf file, DO NOT RESTART THE KEYSTONE SERVICE.
Use the stop and start commands to get the effect of a restart, but do not use restart:
service keystone stop
service keystone start
For further help, here is a dump of my CLI:
http://pastebin.com/sduuFCL7
There are multiple problems with the Icehouse documentation and install. packstack is broken, so the only way to get started is to manually follow the upstream docs for your distro. It is very important to set up keystone correctly first before moving on, because other services rely on it.
The paste file /usr/share/keystone/keystone-dist-paste.ini should be copied to /etc/keystone/ so it is accessible to the config scripts, like this:
cp /usr/share/keystone/keystone-dist-paste.ini /etc/keystone/
chown keystone:keystone /etc/keystone/*
Make sure to update keystone.conf with the new config_file value.
The documentation is wrong about the mysql connection; it should go under [sql] and not [database], so:
openstack-config --set /etc/keystone/keystone.conf sql connection mysql://keystone:PASSWD@controller/keystone
The name controller should resolve to whatever mysql is bound to. I would add it to /etc/hosts like this, if bind-address under [mysqld] in /etc/my.cnf is 10.1.1.100:
10.1.1.100 controller
Make sure to uncomment log_file in keystone.conf to see what is happening.
I was facing a similar issue. I followed the steps below and the openstack-keystone service started.
Edit the /etc/keystone/keystone.conf file and complete the following actions:
In the [DEFAULT] section
[DEFAULT]
admin_token = ADMIN_TOKEN
In the [database] section
[database]
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
In the [token] section, configure the UUID token provider and SQL driver
[token]
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token
In the [revoke] section
[revoke]
driver = keystone.contrib.revoke.backends.sql.Revoke
After making the above changes, populate the Identity service database using this command:
su -s /bin/sh -c "keystone-manage db_sync" keystone
Start the openstack-keystone service using the command below:
systemctl start openstack-keystone
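To verify the service actually came up after these steps, something along these lines should do (a sketch; the ports assume the default keystone configuration):
systemctl status openstack-keystone
sudo netstat -lntp | grep -E ':(5000|35357)'
tail -n 20 /var/log/keystone/keystone.log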
