Setting repmgr witness node on Debian - repmgr

I am trying to set up repmgr version 5 on Debian with PostgreSQL 11.
The documentation seems more oriented towards CentOS/RHEL.
When I try to set up the witness node and start the repmgr daemon, I get an error, with no idea where to look to see what is causing it.
This is my repmgr.conf file:
node_id=3
node_name='PG-Node-Witness'
conninfo='host=10.97.7.140 user=repmgr dbname=repmgr connect_timeout=2'
data_directory='/var/lib/postgresql/11/main'
failover='automatic'
promote_command='/usr/bin/repmgr standby promote -f /etc/repmgr.conf --log-to-file'
follow_command='/usr/bin/repmgr standby follow -f /etc/repmgr.conf --log-to-file --upstream-node-id=%n'
priority=60
monitor_interval_secs=2
connection_check_type='ping'
reconnect_attempts=6
reconnect_interval=8
primary_visibility_consensus=true
standby_disconnect_on_failover=true
repmgrd_service_start_command='sudo /etc/init.d/repmgrd start' #??????
repmgrd_service_stop_command='sudo /etc/init.d/repmgrd stop' #??????
service_start_command='sudo /usr/bin/systemctl start postgresql@11-main.service'
service_stop_command='sudo /usr/bin/systemctl stop postgresql@11-main.service'
service_restart_command='sudo /usr/bin/systemctl restart postgresql@11-main.service'
service_reload_command='sudo /usr/bin/systemctl reload postgresql@11-main.service'
monitoring_history=yes
log_status_interval=60
Registering the witness is OK:
repmgr -f /etc/repmgr.conf witness register -h 10.97.7.97
INFO: connecting to witness node "PG-Node-Witness" (ID: 3)
INFO: connecting to primary node
NOTICE: attempting to install extension "repmgr"
NOTICE: "repmgr" extension successfully installed
INFO: witness registration complete
NOTICE: witness node "PG-Node-Witness" (ID: 3) successfully registered
The repmgr daemon dry run is OK too:
$ repmgr -f /etc/repmgr.conf daemon start --dry-run
INFO: prerequisites for starting repmgrd met
DETAIL: following command would be executed:
sudo /usr/bin/systemctl start postg...@11-main.service
I set up /etc/default/repmgrd with:
REPMGRD_ENABLED=yes
and
REPMGRD_CONF="/etc/repmgr.conf"
But I still get an error when trying to run daemon start:
$ repmgr -f /etc/repmgr.conf daemon start
I get:
NOTICE: executing: "sudo /etc/init.d/repmgrd start"
ERROR: repmgrd does not appear to have started after 15 seconds
HINT: use "repmgr service status" to confirm that repmgrd was successfully started

It is recommended to run repmgrd as a systemd service.
According to the docs (for Debian), you may first need to configure /etc/default/repmgrd.
My configuration looks like this:
# default settings for repmgrd. This file is sourced by /bin/sh from
# /etc/init.d/repmgrd
# disable repmgrd by default so it won't get started upon installation
# valid values: yes/no
REPMGRD_ENABLED=yes
# configuration file (required)
REPMGRD_CONF="/etc/repmgr/12/repmgr.conf"
# additional options
REPMGRD_OPTS="--daemonize=false"
# user to run repmgrd as
REPMGRD_USER=postgres
# repmgrd binary
REPMGRD_BIN=/bin/repmgrd
# pid file
REPMGRD_PIDFILE=/var/run/repmgrd.pid
Secondly, I would revisit sudoers (visudo) to check whether the non-root user can execute sudo /etc/init.d/repmgrd start.
Further, depending on your configuration, the user who runs the repmgr commands must be able to write the log files.
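For example, a sudoers drop-in along these lines would cover the two commands in repmgr.conf above (a sketch only; the drop-in file name and the postgres user are assumptions to adapt):
# edit a dedicated drop-in instead of the main sudoers file
sudo visudo -f /etc/sudoers.d/repmgrd
# then add a line such as:
#   postgres ALL=(root) NOPASSWD: /etc/init.d/repmgrd start, /etc/init.d/repmgrd stop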

Apparently the correct command to start the repmgr daemon is:
repmgrd -f /etc/repmgr.conf
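If it comes up cleanly, the status commands repmgr itself suggests should confirm it, e.g.:
# verify the daemon and the cluster state from the witness
repmgr -f /etc/repmgr.conf service status
repmgr -f /etc/repmgr.conf cluster show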

Related

OpenVAS: OSPD scanner can't be used as scanner in new task

After understanding how to add an OSPD scanner, verify it, etc.,
I thought I could finally use it, but I got an error through the UI when adding it to a task.
In my case, I run OpenVAS 9 on Debian 9 and I'm trying to include a w3af scanner, but I get the same issue with every OSP scanner I add.
My pip freeze:
ospd==1.2.0
ospd-debsecan==1.2b1
ospd-nmap==1.0b1
ospd-w3af==1.0.0
Note that w3af is the example here, but the issue is the same for the debsecan and nmap scanners.
My openvas-check-setup:
Step 1: Checking OpenVAS Scanner ...
OK: OpenVAS Scanner is present in version 5.1.1.
OK: redis-server is present in version v=3.2.6.
OK: scanner (kb_location setting) is configured properly using the redis-server socket: /tmp/redis.sock
OK: redis-server is running and listening on socket: /tmp/redis.sock.
OK: redis-server configuration is OK and redis-server is running.
OK: NVT collection in /usr/local/var/lib/openvas/plugins contains 47727 NVTs.
WARNING: Signature checking of NVTs is not enabled in OpenVAS Scanner.
SUGGEST: Enable signature checking (see http://www.openvas.org/trusted-nvts.html).
OK: The NVT cache in /usr/local/var/cache/openvas contains 47727 files for 47727 NVTs.
Step 2: Checking OpenVAS Manager ...
OK: OpenVAS Manager is present in version 7.0.2.
OK: OpenVAS Manager database found in /usr/local/var/lib/openvas/mgr/tasks.db.
OK: Access rights for the OpenVAS Manager database are correct.
OK: sqlite3 found, extended checks of the OpenVAS Manager installation enabled.
OK: OpenVAS Manager database is at revision 184.
OK: OpenVAS Manager expects database at revision 184.
OK: Database schema is up to date.
OK: OpenVAS Manager database contains information about 47727 NVTs.
OK: At least one user exists.
OK: OpenVAS SCAP database found in /usr/local/var/lib/openvas/scap-data/scap.db.
OK: OpenVAS CERT database found in /usr/local/var/lib/openvas/cert-data/cert.db.
OK: xsltproc found.
Step 3: Checking user configuration ...
WARNING: Your password policy is empty.
SUGGEST: Edit the /usr/local/etc/openvas/pwpolicy.conf file to set a password policy.
Step 4: Checking Greenbone Security Assistant (GSA) ...
OK: Greenbone Security Assistant is present in version 7.0.2.
OK: Your OpenVAS certificate infrastructure passed validation.
Step 5: Checking OpenVAS CLI ...
OK: OpenVAS CLI version 1.4.5.
Step 6: Checking Greenbone Security Desktop (GSD) ...
SKIP: Skipping check for Greenbone Security Desktop.
Step 7: Checking if OpenVAS services are up and running ...
OK: netstat found, extended checks of the OpenVAS services enabled.
OK: OpenVAS Scanner is running and listening on a Unix domain socket.
OK: OpenVAS Manager is running and listening on a Unix domain socket.
OK: Greenbone Security Assistant is listening on port 443, which is the default port.
Step 8: Checking nmap installation ...
WARNING: Your version of nmap is not fully supported: 7.40
SUGGEST: You should install nmap 5.51 if you plan to use the nmap NSE NVTs.
Step 10: Checking presence of optional tools ...
OK: pdflatex found.
WARNING: PDF generation failed, most likely due to missing LaTeX packages. The PDF report format will not work.
SUGGEST: Install required LaTeX packages.
OK: ssh-keygen found, LSC credential generation for GNU/Linux targets is likely to work.
OK: rpm found, LSC credential package generation for RPM based targets is likely to work.
OK: alien found, LSC credential package generation for DEB based targets is likely to work.
OK: nsis found, LSC credential package generation for Microsoft Windows targets is likely to work.
To create the scanner in openvas, I use:
openvasmd --create-scanner="w3af" --scanner-host=127.0.0.1 --scanner-port=1235 --scanner-type="OSP" \
--scanner-ca-pub=/usr/local/var/lib/openvas/CA/cacert.pem \
--scanner-key-pub=/usr/local/var/lib/openvas/CA/clientcert.pem \
--scanner-key-priv=/usr/local/var/lib/openvas/private/CA/clientkey.pem
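As a side note, the UUID passed to --verify-scanner below can be listed with the manager itself; a quick sketch, assuming openvasmd is on the PATH:
# print registered scanners with their UUIDs
openvasmd --get-scanners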
To run ospd-w3af scanner, I use:
~# ospd-w3af -b 127.0.0.1 -p 1235 -k \
/usr/local/var/lib/openvas/private/CA/clientkey.pem -c \
/usr/local/var/lib/openvas/CA/clientcert.pem --ca-file \
/usr/local/var/lib/openvas/CA/cacert.pem -L DEBUG
When I verify the scanner with openvasmd --verify-scanner xxxxx I get:
Scanner version: 2018.8.22.
Note: the scanner logs show the following for every verify I do. I don't know whether it's related, and I haven't found a way to fix it:
2018-10-15 14:27:47,413 ospd.ospd: DEBUG: New connection from 127.0.0.1:60078
2018-10-15 14:27:49,430 ospd.ospd: DEBUG: Error: ('The read operation timed out',)
2018-10-15 14:27:49,433 ospd.ospd: DEBUG: 127.0.0.1:60078: Connection closed
So, with verification done, I want to create a task that uses this scanner, but I can't save it due to the error "Given scanner_type was invalid":
Screenshot: https://i.stack.imgur.com/fvIJd.png
I get 0 connections to the chosen scanner at this moment, and I can't find anything in the logs (maybe I'm not searching correctly). I suspect the gsad UI is responsible for this, but I can't pin it down.
I don't know what to do; if someone more expert than me (not very hard) could help, that'd be great :)
Thanks in advance.
I solved this issue by creating a scan configuration for the OSPD scanner (I thought it didn't need one, since it imports them).
I then faced another issue with the ospd-w3af configuration: I couldn't create one, because it needs ospd 1.0.0 installed; the dependencies were modified a few days ago, and it doesn't work with ospd 1.2.0.
Now I'm facing an issue where the scans don't start properly. They stop at 1%.
Getting OpenVAS 9 running on a new install of Ubuntu 18 was a pain. Once I got past all my errors by creating files and ln -s symlinks for the redis-server socket connections, my tasks crapped out at 1%. My fix was to run sudo apt install libopenvas-dev; after that, scans worked and check-setup worked. check-setup reported no scanner, but openvassd was running and openvasmd --verify-scanner (uuid) showed the scanner.
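The redis socket symlinking mentioned above usually amounts to something like this (a sketch; both paths are assumptions to check against kb_location in openvassd.conf and the unixsocket setting in redis.conf):
# point the socket path the scanner expects at the one redis actually creates
ln -s /var/run/redis/redis.sock /tmp/redis.sock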

ICp 2.1.0.1: Installation failed with error TASK [master: Waiting for MariaDB service to start]

I am installing ICp 2.1.0.1 and I received an error at the task
[master: Waiting for MariaDB service to start] msg: The MariaDB component failed to start.
After this message, the installation completed with failed status.
We are installing ICp with 3 masters, 3 proxies, and 2 workers. We have 1 IP for the master VIP and 1 for the proxy VIP.
I tried to install multiple times, and all installations got the same error.
In prior cases of that error, the correct DB admin password was not used, so check the DB user and password to resolve the issue.
Would you validate whether each master host was able to access port 3306 on the other hosts?
If you run with .. install -vv | tee -a install-log.txt, do you get additional details as well?
The error was solved by following the steps below.
Check whether kubelet is running:
Log in to your master node.
Run the following command to check kubelet status:
systemctl status kubelet
If kubelet is not running, run the following command to get the logs:
journalctl -u kubelet &> kubelet.log
We found the error in kubelet.log:
Error: failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false.
We found this troubleshooting guidance at the first link below, and the solution in ICP issue 4651:
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0/troubleshoot/etcd_fails.html
https://github.ibm.com/IBMPrivateCloud/roadmap/issues/4651
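Given the kubelet message above, the usual remediation is to disable swap on the node (a sketch; ICP-specific guidance may differ):
# turn swap off immediately...
sudo swapoff -a
# ...and keep it off across reboots by commenting out swap entries in fstab
sudo sed -i '/ swap / s/^/#/' /etc/fstab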

Prerender does not return a page response

Long story short:
The server OS got updated, which broke my virtualenvs. Before this, everything worked just fine.
I reinstalled the environment.
Then I tried to fire up the Prerender.io middleware with the following env variables (usually run by supervisor, now tried manually):
export PORT=35292
export PRERENDER_SERVICE_URL='http://localhost:35292/'
export PHANTOMJS_PORT=35294
export PHANTOM_CLUSTER_NUM_WORKERS=5
export PHANTOM_WORKER_ITERATIONS=10
export CACHE_ROOT_DIR="/home/users/jz/snapshot-env/prerender/filecache/"
export CACHE_LIVE_TIME=7200
PHANTOM_CLUSTER_MESSAGE_TIMEOUT=800
Set up the demo server demoserver.js:
#!/usr/bin/env node
var prerender = require('./lib');
var server = prerender({
workers: process.env.PRERENDER_NUM_WORKERS,
iterations: process.env.PRERENDER_NUM_ITERATIONS
});
//server.use(prerender.sendPrerenderHeader());
// server.use(prerender.basicAuth());
// server.use(prerender.whitelist());
server.use(prerender.blacklist());
// server.use(prerender.logger());
server.use(prerender.removeScriptTags());
server.use(prerender.httpHeaders());
// server.use(prerender.inMemoryHtmlCache());
// server.use(prerender.s3HtmlCache());
server.start();
Server starts:
$ node demoserver.js
2016-05-24T01:41:35.814Z starting worker thread #0
2016-05-24T01:41:35.832Z starting worker thread #1
2016-05-24T01:41:35.839Z starting worker thread #2
2016-05-24T01:41:35.842Z starting worker thread #3
2016-05-24T01:41:35.844Z starting worker thread #4
2016-05-24T01:41:36.120Z starting phantom...
2016-05-24T01:41:36.132Z Server running on port 35292
2016-05-24T01:41:36.135Z starting phantom...
2016-05-24T01:41:36.146Z starting phantom...
2016-05-24T01:41:36.152Z Server running on port 35292
2016-05-24T01:41:36.153Z starting phantom...
2016-05-24T01:41:36.160Z Server running on port 35292
2016-05-24T01:41:36.170Z Server running on port 35292
2016-05-24T01:41:36.176Z starting phantom...
2016-05-24T01:41:36.190Z Server running on port 35292
Fontconfig warning: ignoring UTF-8: not a valid region tag
Fontconfig warning: ignoring UTF-8: not a valid region tag
Fontconfig warning: ignoring UTF-8: not a valid region tag
Fontconfig warning: ignoring UTF-8: not a valid region tag
Fontconfig warning: ignoring UTF-8: not a valid region tag
Try to access the server locally:
$ lynx http://localhost:35292/http://google.com
I can see it tries to fetch the page, but there is no response:
HTTP request sent; waiting for response.
On the server log I see it has received the request:
2016-05-24T01:53:42.449Z getting http://google.com/
After that, there are no further entries and no action. I can see prerender has indeed spawned several phantomjs processes, but for some reason nothing happens.
Any ideas how to debug this further, to see why phantomjs is not processing or returning the request?
Edit: npm install output here - I don't see anything fishy.
(snapshot-env)jz@lakka:~/snapshot-env/prerender$ uname -a
Linux lakka 3.14.66-grbfs-kapsi #1 SMP Sat Apr 16 10:30:24 EEST 2016 x86_64 GNU/Linux
(snapshot-env)jz@lakka:~/snapshot-env/prerender$ cat /etc/*release
PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
NAME="Debian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=debian
HOME_URL="http://www.debian.org/"
SUPPORT_URL="http://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
Node version:
jz@lakka:~$ cd snapshot-env; source bin/activate
(snapshot-env)jz@lakka:~/snapshot-env$ node -v
v6.2.0
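One way to rule out phantomjs itself is to have it fetch a page directly, outside prerender (a hedged check, assuming phantomjs is on the PATH; the script path is arbitrary):
# write a minimal phantomjs script and run it; 'success' means phantom can fetch pages
cat > /tmp/check.js <<'EOF'
var page = require('webpage').create();
page.open('http://google.com', function (status) {
    console.log('page load status: ' + status); // expect: success
    phantom.exit();
});
EOF
phantomjs /tmp/check.js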
Hey, I had this problem before; I fixed it with these commands:
1. Downgrade to node 4.x:
sudo apt-get purge nodejs
curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
sudo apt-get install nodejs
2. Fix the "Fontconfig warning: ignoring UTF-8: not a valid region tag" issue:
export LANGUAGE=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
locale-gen en_US.UTF-8
That's it.
Hopefully it can help you.
I reinstalled the service and wrote a blog post showing the whole flow;
check it out: http://ccaloha.cc/blog/2016/06/22/how-to-install-preloader-dot-io-service-in-ubuntu-14-dot-04/

Error on starting the application Puppet in the Generic enablers Cosmos

Good afternoon,
I have installed the Cosmos Generic Enabler, following the manual BigData Analysis - Installation and Administration Guide. When I reached 'Step 7: applying Puppet' and executed the commands, the following errors appeared in the file puppet.err:
Error: Could not prefetch yumrepo provider 'inifile': Section 'openvz-utils' is already defined, cannot re-define in /etc/yum.repos.d/openvz.repo
Description: there is a conflict between the section names of the files /etc/yum.repos.d/cosmos-openvz.repo and /etc/yum.repos.d/openvz.repo:
cat /etc/yum.repos.d/cosmos-openvz.repo
[openvz-utils]
...
[openvz-kernel-rhel6]
...
cat /etc/yum.repos.d/openvz.repo
[openvz-utils]
...
[openvz-kernel-rhel6]
...
[openvz-kernel-rhel6-testing]
...
Solution: I changed the section names in the file /etc/yum.repos.d/openvz.repo, for example: [openvz-utils_1].
Error: Could not prefetch database_grant provider 'mysql': Execution of '/usr/bin/mysql mysql -Be describe user' returned 1: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
Description: the file mysql.sock was not found in the folder /var/lib/mysql/.
Solution: I installed mysql-server.x86_64:
yum install mysql-server.x86_64
At the end of the installation, I restarted the service:
/etc/init.d/mysqld stop
/etc/init.d/mysqld start
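To confirm the socket problem is gone, a quick check (mysqladmin ships with the server package; it may prompt for credentials depending on your setup):
# the socket should exist again and the server should answer
ls -l /var/lib/mysql/mysql.sock
mysqladmin status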
Error: Could not update: Execution of '/usr/bin/yum -d 0 -e 0 -y list vzstats' returned 1: Error: Cannot retrieve repository metadata (repomd.xml) for repository: ambari. Please verify its path and try again
Description: this error appears on the machine of the Master node. It is caused by the configuration of the file [COSMOS_TMP_PATH]/puppet/modules/cosmos/manifests/hieradata/my_environment/common.yaml, indicated in 'Step 6: Puppet configuration'; specifically, the URL that uses the IP 130.206.81.65.
Solution: in the file [COSMOS_TMP_PATH]/puppet/modules/cosmos/manifests/hieradata/my_environment/common.yaml, change the line:
ambari::params::repo_url: 'http://130.206.81.65/cosmos/ambari/'
to:
ambari::params::repo_url: 'http://public-repo-1.hortonworks.com/ambari/centos6/1.x/GA'
Error: Could not update: Execution of '/usr/bin/yum -d 0 -e 0 -y list vzstats' returned 1: Error: Cannot retrieve repository metadata (repomd.xml) for repository: cosmos-libvirt. Please verify its path and try again
Description: it is the same problem as the previous error. The difficulty here is that I cannot modify the file [COSMOS_TMP_PATH]/puppet/modules/cosmos/manifests/hieradata/my_environment/common.yaml on the line:
cosmos::params::cosmos_repo_deps_url: 'http://130.206.81.65/cosmos/rpms/cosmos-deps'
because this line is used in several files:
cat /etc/yum.repos.d/cosmos-libvirt.repo
[cosmos-libvirt]
name=Cosmos LibVirt with OpenVZ - v1.0.5 - NO PolKIT
baseurl=http://130.206.81.65/cosmos/rpms/cosmos-deps//libvirt
gpgcheck=0
priority=10
enabled=1
cat /etc/yum.repos.d/cosmos-openvz.repo
[openvz-utils]
name=OpenVZ utilities
baseurl=http://130.206.81.65/cosmos/rpms/cosmos-deps//OpenVZ/openvz-utils
enabled=1
gpgcheck=0
priority=1
[openvz-kernel-rhel6]
name=OpenVZ RHEL6-based kernel
baseurl=http://130.206.81.65/cosmos/rpms/cosmos-deps//OpenVZ/openvz-kernel-rhel6
enabled=1
gpgcheck=0
priority=1
Nor is it possible to modify the previous file at the moment of executing the command (indicated in 'Step 7: applying Puppet'):
puppet apply --debug --verbose \
--modulepath [COSMOS_TMP_PATH]/puppet/modules/:[COSMOS_TMP_PATH]/puppet/modules_third_party/ \
--environment my_environment --hiera_config [COSMOS_TMP_PATH]/puppet/modules/cosmos/manifests/hiera.yaml \
--manifestdir [COSMOS_TMP_PATH]/puppet/modules/cosmos/manifests/ [COSMOS_TMP_PATH]/puppet/modules/cosmos/manifests/site.pp \
> puppet.out 2> puppet.err
because running it erases the modifications.
Solution: https://github.com/telefonicaid/fiware-cosmos-platform/issues/4
I need help with the error:
Error: /Stage[main]/Ambari::Server::Config/Augeas[ambari-config-repoinfo]: Could not evaluate: Saving failed, see debug
Could someone give me a hand with this last error?
Thank you in advance.
PS: Apologies if it is badly written.

Openstack-Keystone failing to start

I've tried almost everything in the past couple of days to get keystone running, to no avail.
Everything is on the same host: the virtualization, OpenStack, and keystone. I've tried setting up keystone with 127.0.0.1, localhost, and the host's IP, all with no luck.
[DEFAULT] log_file = /var/log/keystone/keystone.log
admin_token = ***
bind_host = 192.168.33.11
public_port = 5000
admin_port = 35357
compute_port = 8774
# === Logging Options ===
# Print debugging output verbose = True
# Print more verbose output
# (includes plaintext request logging, potentially including passwords)
# debug = False
# Name of log file to output to. If not set, logging will go to stdout. log_file = keystone.log
# The directory to keep log files in (will be prepended to --logfile) log_dir = /var/log/keystone
# Use syslog for logging.
# use_syslog = False
# syslog facility to receive log lines
# syslog_log_facility = LOG_USER
# If this option is specified, the logging configuration file specified is
# used and overrides any other logging options specified. Please see the
# Python logging module documentation for details on logging configuration
# files. log_config = logging.conf
# A logging.Formatter log message format string which may use any of the
# available logging.LogRecord attributes.
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# Format string for %(asctime)s in log records.
# log_date_format = %Y-%m-%d %H:%M:%S
# onready allows you to send a notification when the process is ready to serve
# For example, to have it notify using systemd, one could set shell command:
# onready = systemd-notify --ready
# or a module with notify() method:
# onready = keystone.common.systemd
[sql] connection = mysql://keystone:***@localhost/keystone
# idle_timeout = 200
[identity] driver = keystone.identity.backends.sql.Identity
[catalog] template_file = /etc/keystone/default_catalog.templates driver = keystone.catalog.backends.sql.Catalog
# dynamic, sql-based backend (supports API/CLI-based management commands)
# driver = keystone.catalog.backends.sql.Catalog
# static, file-based backend (does *NOT* support any management commands)
# driver = keystone.catalog.backends.templated.TemplatedCatalog
# template_file = default_catalog.templates
[token] driver = keystone.token.backends.sql.Token
# driver = keystone.token.backends.kvs.Token
# Amount of time a token should remain valid (in seconds)
# expiration = 86400
I've enabled logging in the logging.conf file and set the level to DEBUG and INFO; however, nothing shows up in the log files.
[root@* keystone]# service openstack-keystone restart
Stopping keystone: [FAILED]
Starting keystone: [ OK ]
[root@* keystone]# service openstack-keystone restart
Stopping keystone: [FAILED]
Starting keystone: [ OK ]
[root@* keystone]# ps aux | grep keystone
root 25580 0.0 0.0 103236 880 pts/1 S+ 09:41 0:00 grep keystone
[root@* keystone]#
Any ideas will be greatly appreciated. Thank you.
As I mentioned in the comment, I've never seen a config file with the section headings on the same line as a config option:
[DEFAULT] log_file = /var/log/keystone/keystone.log
I've only ever seen it like this instead:
[DEFAULT]
log_file = /var/log/keystone/keystone.log
However, I have no idea if this is related to your issue.
To enable debug-level logging, make sure you set the following in /etc/keystone/logging.conf:
[logger_root]
level=DEBUG
Then try running keystone manually instead of as a service:
$ sudo -u keystone bash
$ HOME=/var/lib/keystone keystone-all --debug
Hopefully you'll see a relevant error message on standard out.
(I believe it will still send the logging to /var/log/keystone/keystone.log; I'm not sure how to actually get it to log to standard out when running manually like this.)
Add a valid token for admin_token. It should not be "*".
Check the below line:
[sql] connection = mysql://keystone:*@localhost/keystone
It should be something like:
connection = mysql://keystone:keystone#localhost/keystone
Refer to this URL for an example keystone.conf file:
http://docs.openstack.org/trunk/openstack-compute/install/yum/content/keystone-conf-file.html
I ran into this issue as well, running on Ubuntu 12.04 LTS. What I found was that the service start command in /etc/init/keystone.conf uses start-stop-daemon to run the service, and it was written for a newer version than the one on my box: the --chdir variable is not accepted as an input. Once I removed that line, keystone started right up.
Try running:
start-stop-daemon --start --chuid keystone --name keystone --exec /usr/bin/keystone-all
/etc/init/keystone.conf after the change:
description "Keystone API server"
author "Soren Hansen <soren#linux2go.dk>"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec start-stop-daemon --start --chuid keystone \
--name keystone \
--exec /usr/bin/keystone-all
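To check whether a given start-stop-daemon build understands --chdir, a quick sketch is to grep its help output:
# no output here means the version predates --chdir and the line must go
start-stop-daemon --help | grep -- --chdir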
Check if your IP address is equal to HOST_IP=... in localrc.
This might be due to keystone not getting started properly, leaving port 35357 not in listening mode.
This seems to be anomalous behavior of the keystone service.
After a day of headache around this issue, here are the steps which worked on my system for a Havana installation on Ubuntu 12.04, kernel version 3.2.0-67-generic. Try these steps, preferably in the same order.
1) Remove the keystone package:
apt-get remove keystone
2) Reboot your system
reboot
3) After the reboot, install keystone again:
apt-get install keystone
4) Check the status of the keystone service:
service keystone status
It should show start/running.
5) Now make the changes you need in /etc/keystone/keystone.conf.
After making changes in the conf file, DO NOT RESTART THE KEYSTONE SERVICE.
Use the stop and start commands to get the effect of a restart, but don't use restart:
service keystone stop
service keystone start
For further help, here is a dump of my CLI:
http://pastebin.com/sduuFCL7
There are multiple problems with the Icehouse documentation and install. packstack is broken, so the only way to get started is to manually follow the upstream docs for your distro. It is very important to set up keystone correctly first, before moving on, because other services rely on it.
The paste file /usr/share/keystone/keystone-dist-paste.ini should be copied to /etc/keystone/ to be accessible to the config scripts, like this:
cp /usr/share/keystone/keystone-dist-paste.ini /etc/keystone/
chown keystone:keystone /etc/keystone/*
Make sure to update keystone.conf with the new config_file value.
The documentation is wrong about the mysql connection: it should go under [sql] and not [database], so:
openstack-config --set /etc/keystone/keystone.conf sql connection mysql://keystone:PASSWD@controller/keystone
The name controller should resolve to whatever mysql is bound to. I would add it to /etc/hosts like this, if [mysqld]/bind-address in /etc/my.cnf is 10.1.1.100:
10.1.1.100 controller
Make sure to uncomment log_file in keystone.conf to see what is happening.
I was facing a similar issue. I followed the steps below, and the openstack-keystone service started.
Edit the /etc/keystone/keystone.conf file and complete the following actions:
In the [DEFAULT] section
[DEFAULT]
admin_token = ADMIN_TOKEN
In the [database] section
[database]
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
In the [token] section, configure the UUID token provider and SQL driver
[token]
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token
In the [revoke] section
[revoke]
driver = keystone.contrib.revoke.backends.sql.Revoke
After making the above changes, populate the Identity service database using the command:
su -s /bin/sh -c "keystone-manage db_sync" keystone
Start the openstack-keystone service using the command below:
systemctl start openstack-keystone
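Once started, it is worth confirming that keystone is actually listening on its two ports (a quick sketch; use ss instead if netstat is unavailable):
# keystone should be listening on 5000 (public) and 35357 (admin)
netstat -ltnp | grep -E ':(5000|35357)'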
