Keystone connection fails - OpenStack

I have installed Keystone following the guide for Ubuntu 14.04.
When I try to create a service entity:
openstack service create --type identity \
--description "Openstack Identity" keystone
I obtain:
INFO: urllib3.connectionpool Starting new HTTP connection (1): controller
ERROR: cliff.app Internal Server Error (HTTP 500)
I am sure that I have a connection to "controller", and MySQL is configured to accept connections from any host.
My Keystone configuration file is:
[DEFAULT]
admin_token = ADMIN
admin_port=35357
public_port=5000
[database]
connection = mysql://keystone:keystone@controller/keystone
[memcache]
servers = localhost:11211
[token]
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.memcache.Token
[revoke]
driver = keystone.contrib.revoke.backends.sql.Revoke
[DEFAULT]
verbose = True
And Apache is configured as shown in the guide.
Where am I failing?

I don't know if you found an answer already, but I also had this problem.
The reason was quite simple really: one of the instructions in the guide didn't work for me. This is the one:
# apt-get install ubuntu-cloud-keyring
# echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu" \
"trusty-updates/kilo main" > /etc/apt/sources.list.d/cloudarchive-kilo.list
So I was not using the Kilo packages but older ones (urllib3 being one of them). How to fix this? Just create this file manually:
nano /etc/apt/sources.list.d/cloudarchive-kilo.list
And just write this inside:
deb http://ubuntu-cloud.archive.canonical.com/ubuntu trusty-updates/kilo main
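If you'd rather write the file in one command, piping through sudo tee avoids the permission problem that plain shell redirection hits on root-owned paths (just a convenience sketch of the same step):
echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu trusty-updates/kilo main" | sudo tee /etc/apt/sources.list.d/cloudarchive-kilo.list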
Finish with this command to apply the update:
# apt-get update && apt-get dist-upgrade
You should have a lot of new updates now.
There you go, hope it helps, it fixed the problem for me at least.
Bruno

Related

Podman build command unable to pull image

I have configured subuid and subgid after installing Podman on RHEL 7.
I created a simple Dockerfile to print Hello World and was trying to build the image.
My Dockerfile
FROM alpine
CMD ["echo", "Hello World"]
To test, I am running the command below:
podman build -t imagename .
I receive the error below:
STEP 1: FROM alpine
Error: error creating build container: The following failures happened while trying to pull image specified by "alpine" based on search registries in /etc/containers/registries.conf:
* "localhost/alpine": Error initializing source docker://localhost/alpine:latest: error pinging docker registry localhost: Get https://localhost/v2/: dial tcp [::1]:443: connect: connection refused
* "registry.access.redhat.com/alpine": Error initializing source docker://registry.access.redhat.com/alpine:latest: error pinging docker registry registry.access.redhat.com: Get https://registry.access.redhat.com/v2/: read tcp 10.70.85.174:17758->23.54.147.129:443: read: connection reset by peer
* "registry.redhat.io/alpine": Error initializing source docker://registry.redhat.io/alpine:latest: error pinging docker registry registry.redhat.io: Get https://registry.redhat.io/v2/: read tcp 10.70.85.174:36028->104.79.150.216:443: read: connection reset by peer
* "docker.io/library/alpine": Error initializing source docker://alpine:latest: error pinging docker registry registry-1.docker.io: Get https://registry-1.docker.io/v2/: read tcp 10.70.85.174:53352->18.213.137.78:443: read: connection reset by peer
Am I missing any configuration?
Thanks
Do you still have the Docker daemon running and/or Docker installed?
First, stop the Docker daemon:
sudo systemctl stop docker
OR
sudo service docker stop
Then uninstall Docker.
This is for Ubuntu, but whatever your distro needs you can Google :D
sudo apt-get remove docker docker-engine docker.io containerd runc
Try again.
If it still fails, try a fresh reinstall of Podman:
sudo apt-get install --reinstall podman
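To sanity-check the fresh install, podman info is handy; among other things it prints the registry search list that the error message above was built from (a quick check I'd add, not part of the original steps):
podman --version
podman info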
Sources
https://www.cyberciti.biz/faq/debian-ubuntu-linux-reinstall-a-package-using-apt-get-command/
https://askubuntu.com/questions/935569/how-to-completely-uninstall-docker
https://intellipaat.com/community/43965/how-to-stop-docker
https://podman.io/getting-started/installation
I suggest that you first search for your image in the registries:
podman search alpine
You should get a list of available images. Choose the one you want (name, version, tag, etc.) and put that in the Dockerfile.
To be sure it is accessible, do the pull manually:
podman pull alpine:<tag>
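Once the manual pull works, you can also pin the fully qualified reference in the Dockerfile so Podman skips the registry search entirely (a sketch; the tag is illustrative):
FROM docker.io/library/alpine:latest
CMD ["echo", "Hello World"]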

Management tab in Kaa Sandbox URL

I created a Kaa Sandbox instance on an AWS Linux host and am running into some issues:
I am still not able to see the Management button on the Kaa Sandbox console.
I am not able to connect to AWS using SSH. I followed all the required steps to connect to the AWS Linux host, but had no luck connecting.
My problem is that I would like to change the host IP in the Sandbox settings to my AWS Linux host IP, so that my endpoint device gets connected to the host.
I am still struggling with the above points. Please advise.
Regards,
Prasad
That seems to be an issue with the Kaa 0.10.0 Sandbox for AWS. We created a bug for tracking this.
For now, you can use the following workaround:
echo "sudo sed -Ei 's/(gui_change_host_enabled=).*$/\1true/'" \
"/usr/lib/kaa-sandbox/conf/sandbox-server.properties;" \
"sudo service kaa-sandbox restart" | \
ssh -i <your-private-aws-instance-key.pem> ubuntu@<your-aws-instance-host>
Note: this is a multi-line single command that works correctly in bash (should also work in sh and others, but that is not tested).
Note 2: don't forget to replace
<your-private-aws-instance-key.pem>
<your-aws-instance-host>
with the respective key name and host name/IP address.
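For example, with a hypothetical key file and instance address, the filled-in command would look like:
echo "sudo sed -Ei 's/(gui_change_host_enabled=).*$/\1true/' /usr/lib/kaa-sandbox/conf/sandbox-server.properties; sudo service kaa-sandbox restart" | ssh -i my-sandbox-key.pem ubuntu@203.0.113.10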

Kaa node service fails to start MongoDB and ZooKeeper

We are trying to set up a single-node Kaa server (version 0.10.0) on an Ubuntu 16.04 machine.
We followed the documentation given here.
We were unable to connect to the admin UI after starting the kaa node service.
On investigating further, we could see that the MongoDB and ZooKeeper services were not started, so we started them manually. After that we were able to connect to the Kaa admin UI. Do we need any additional steps to get these services running on kaa-node start?
I set up Kaa with the guide on my Ubuntu 16.04.1 LTS VM, and ZooKeeper was not running by default on my server either, so I had to install the daemon (which also starts ZooKeeper on boot):
sudo apt-get install zookeeperd
Check if zookeeper is running:
netstat -ntlp | grep 2181
This should show a process listening on port 2181.
With MongoDB I had the problem that there was not enough space available for the journal files. I fixed this by increasing the available disk space and setting smallfiles=true in /etc/mongod.conf.
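For reference, this is the line I mean, in the legacy INI format that the MongoDB 2.x packages use for /etc/mongod.conf (if your mongod.conf is the newer YAML format, the equivalent option is storage.mmapv1.smallFiles):
smallfiles=true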
You probably have some trouble with the service configurations. Check whether auto-start is enabled for MongoDB / ZooKeeper with the following command:
$ systemctl is-enabled ${service-name}
If the output is:
disabled
then auto-start is disabled for that service, and you should enable it with:
$ systemctl enable ${service-name}
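For example, on Ubuntu 16.04 the service names are typically mongod (or mongodb, depending on the package) and zookeeper, so the check and fix would look something like this:
$ systemctl is-enabled mongod
disabled
$ systemctl enable mongod
$ systemctl enable zookeeper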

initctl: Unknown instance: error after RStudio conf change

I have a new version of R installed on an AWS machine (which always comes with an old version for some reason, and it's near impossible to get yum or apt-get to just work). I want RStudio to point to this new version, which I've built from source, without throwing the old version away. I therefore go to /etc/rstudio/rserver.conf (documentation) and change the contents to:
# Server Configuration File
rsession-which-r=/root/R-3.2.1/bin/R
I can confirm that at this location a new version of R is installed but then I get an error after rstudio-server restarts.
[root@ip-172-31-40-49 rstudio]$ rstudio-server restart
initctl: Unknown instance:
What am I to do?
The following worked for me:
1) Check which process is using port 8787:
sudo fuser 8787/tcp
2) Use the -k option to kill all such processes:
sudo fuser -k 8787/tcp
3) Start RStudio Server:
sudo rstudio-server start
The solution above is provided here by Leon Zhang.
The first thing to do is to check your configuration with:
rstudio-server verify-installation
A number of times when updating R or RStudio I have run into the same error as you and gotten the following error message:
-bash-4.1$ sudo rstudio-server verify-installation
29 Sep 2015 18:24:11 [rserver] ERROR system error 98 (Address already in use); OCCURRED AT: rstudio::core::Error rstudio::core::http::initTcpIpAcceptor(rstudio::core::http::SocketAcceptorService<boost::asio::ip::tcp>&, const std::string&, const std::string&) /root/rstudio/src/cpp/core/include/core/http/TcpIpSocketUtils.hpp:103; LOGGED FROM: int main(int, char* const*) /root/rstudio/src/cpp/server/ServerMain.cpp:436
rstudio-server start/running, process 48632
Although I have never been able to figure out the cause, I can suggest the following workaround:
1. Change the port in /etc/rstudio/rserver.conf, for example from 8787 to 8788 (see the example after this list).
2. Open the new port in your firewall settings (allow access to the new port in /etc/sysconfig/iptables).
3. Restart your firewall: sudo /sbin/service iptables restart
4. Restart RStudio Server: sudo rstudio-server restart
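For step 1, the port setting in /etc/rstudio/rserver.conf is www-port; after the change the file would look something like this:
# Server Configuration File
www-port=8788
rsession-which-r=/root/R-3.2.1/bin/R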
This has worked for me each of the ~4-5 times this has happened. Although I am not 100% sure this can help with your use case, it may. As an alternative, if you can use containers in your AWS setup, you may be interested in a great off-the-shelf Docker image with the latest R/RStudio.
This happened for me on my CentOS 7.x machine while I upgraded from an old RStudio Server version to the new one. Rebooting the machine seems to have fixed the problem.

OpenStack Keystone failing to start

I've tried almost everything in the past couple of days to get Keystone running, to no avail.
The setup is all on one host: the virtualization, OpenStack, and Keystone are all on the same machine. I've tried setting up Keystone with 127.0.0.1, localhost, and the IP of the host, with no luck.
[DEFAULT] log_file = /var/log/keystone/keystone.log
admin_token = ***
bind_host = 192.168.33.11
public_port = 5000
admin_port = 35357
compute_port = 8774
# === Logging Options ===
# Print debugging output verbose = True
# Print more verbose output
# (includes plaintext request logging, potentially including passwords)
# debug = False
# Name of log file to output to. If not set, logging will go to stdout. log_file = keystone.log
# The directory to keep log files in (will be prepended to --logfile) log_dir = /var/log/keystone
# Use syslog for logging.
# use_syslog = False
# syslog facility to receive log lines
# syslog_log_facility = LOG_USER
# If this option is specified, the logging configuration file specified is
# used and overrides any other logging options specified. Please see the
# Python logging module documentation for details on logging configuration
# files. log_config = logging.conf
# A logging.Formatter log message format string which may use any of the
# available logging.LogRecord attributes.
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# Format string for %(asctime)s in log records.
# log_date_format = %Y-%m-%d %H:%M:%S
# onready allows you to send a notification when the process is ready to serve
# For example, to have it notify using systemd, one could set shell command:
# onready = systemd-notify --ready
# or a module with notify() method:
# onready = keystone.common.systemd
[sql] connection = mysql://keystone:***@localhost/keystone
# idle_timeout = 200
[identity] driver = keystone.identity.backends.sql.Identity
[catalog] template_file = /etc/keystone/default_catalog.templates driver = keystone.catalog.backends.sql.Catalog
# dynamic, sql-based backend (supports API/CLI-based management commands)
# driver = keystone.catalog.backends.sql.Catalog
# static, file-based backend (does *NOT* support any management commands)
# driver = keystone.catalog.backends.templated.TemplatedCatalog
# template_file = default_catalog.templates
[token] driver = keystone.token.backends.sql.Token
# driver = keystone.token.backends.kvs.Token
# Amount of time a token should remain valid (in seconds)
# expiration = 86400
I've enabled logging in the logging.conf file and set the level to DEBUG and INFO; however, nothing appears in the log files.
[root@* keystone]# service openstack-keystone restart
Stopping keystone: [FAILED]
Starting keystone: [ OK ]
[root@* keystone]# service openstack-keystone restart
Stopping keystone: [FAILED]
Starting keystone: [ OK ]
[root@* keystone]# ps aux | grep keystone
root 25580 0.0 0.0 103236 880 pts/1 S+ 09:41 0:00 grep keystone
[root@* keystone]#
Any ideas will be greatly appreciated. Thank you.
As I mentioned in the comment, I've never seen a config file with the section heading on the same line as a config option:
[DEFAULT] log_file = /var/log/keystone/keystone.log
I've always seen it like this instead:
[DEFAULT]
log_file = /var/log/keystone/keystone.log
However, I have no idea if this is related to your issue.
To enable debug-level logging, make sure you set the following in /etc/keystone/logging.conf:
[logger_root]
level=DEBUG
Then try running keystone manually instead of as a service:
$ sudo -u keystone bash
$ HOME=/var/lib/keystone keystone-all --debug
Hopefully you'll see a relevant error message on standard out.
(I believe it will still send the logging to /var/log/keystone/keystone.log, not sure how to actually get it to log to standard out when running manually like this).
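If you do want it on standard out, one approach (untested here) is to point the root logger at a StreamHandler in /etc/keystone/logging.conf, using standard Python fileConfig syntax; this assumes the file already lists a console handler under its [handlers] keys:
[logger_root]
level=DEBUG
handlers=console
[handler_console]
class=StreamHandler
args=(sys.stdout,)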
Add a valid token for admin_token. It should not be "*".
Check the line below:
[sql] connection = mysql://keystone:*@localhost/keystone
It should be something like:
connection = mysql://keystone:keystone@localhost/keystone
Refer to this URL for an example keystone.conf file:
http://docs.openstack.org/trunk/openstack-compute/install/yum/content/keystone-conf-file.html
I ran into this issue as well. I am running Ubuntu 12.04 LTS. What I found was that the service start command in /etc/init/keystone.conf uses start-stop-daemon to run the service. It was written for a newer version than the one on my box: the --chdir variable is not accepted as an input. Once I removed that line, Keystone started right up.
Try running:
start-stop-daemon --start --chuid keystone --name keystone --exec /usr/bin/keystone-all
/etc/init/keystone.conf after the change:
description "Keystone API server"
author "Soren Hansen <soren#linux2go.dk>"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec start-stop-daemon --start --chuid keystone \
--name keystone \
--exec /usr/bin/keystone-all
Check that your IP address is equal to HOST_IP=... in localrc.
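For example, if the host's address is the 192.168.33.11 used as bind_host in the question, the localrc line should read:
HOST_IP=192.168.33.11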
This might be due to Keystone not starting properly, leaving port 35357 not in listening mode.
This seems to be anomalous behavior of the keystone service.
Below are the steps that worked on my system for a Havana installation on Ubuntu 12.04, kernel version 3.2.0-67-generic, after a day of headache around this issue. Try these steps, preferably in the same order.
1) Remove the keystone package:
apt-get remove keystone
2) Reboot your system
reboot
3) After the reboot, install Keystone again:
apt-get install keystone
4) Check the status of the keystone service:
service keystone status
It will show start/running.
5) Now make the changes you need in /etc/keystone/keystone.conf.
After making changes in the conf file, DO NOT RESTART THE KEYSTONE SERVICE.
Use the stop and start commands to get the effect of a restart, but don't restart:
service keystone stop
service keystone start
For further help, here is a dump of my CLI:
http://pastebin.com/sduuFCL7
There are multiple problems with the Icehouse documentation and install. Packstack is broken, so the only way to get started is to manually follow the upstream docs for your distro. It is very important to set up Keystone correctly first, before moving on, because other services rely on it.
The paste file /usr/share/keystone/keystone-dist-paste.ini should be copied to /etc/keystone/ to be accessible to the config scripts, like this:
cp /usr/share/keystone/keystone-dist-paste.ini /etc/keystone/
chown keystone:keystone /etc/keystone/*
Make sure to update keystone.conf with the new config_file value, as sketched below.
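That value lives in the [paste_deploy] section of keystone.conf, so the edit would look something like this (same path as copied above):
[paste_deploy]
config_file = /etc/keystone/keystone-dist-paste.ini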
The documentation is wrong about the MySQL connection; it should go under [sql] and not [database], so:
openstack-config --set /etc/keystone/keystone.conf sql connection mysql://keystone:PASSWD@controller/keystone
The name controller should resolve to whatever MySQL is bound to; I would add it to /etc/hosts like this, if [mysqld]/bind-address in /etc/my.cnf is 10.1.1.100:
10.1.1.100 controller
Make sure to uncomment log_file in keystone.conf to see what is happening.
I was facing a similar issue. I followed the steps below, and the openstack-keystone service started.
Edit the /etc/keystone/keystone.conf file and complete the following actions:
In the [DEFAULT] section
[DEFAULT]
admin_token = ADMIN_TOKEN
In the [database] section
[database]
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
In the [token] section, configure the UUID token provider and SQL driver
[token]
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token
In the [revoke] section
[revoke]
driver = keystone.contrib.revoke.backends.sql.Revoke
After making the above changes, populate the Identity service database with:
su -s /bin/sh -c "keystone-manage db_sync" keystone
Start the openstack-keystone service with the command below:
systemctl start openstack-keystone
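To confirm it came up, and to make it start on boot as well (standard systemd commands, not part of the original steps):
systemctl status openstack-keystone
systemctl enable openstack-keystone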
