Salt state.apply fails when run multiple times - salt-stack

We are trying to set up a proxy using Salt state files.
Initially my proxy.sls file runs perfectly from the master without any errors. When I run the same command again, it fails.
I run the below command from the salt-master:
[root@omsstusaltmgmt ~]# salt "proxy" state.apply
As I said, initially it is successful, and when I run the same command again it fails, as below:
up_oms_st_proxy1:
ID: git
Function: pkg.installed
Result: False
Comment: Unable to run command '[u'rpm', u'-qa', u'--queryformat', u'%{NAME}_|-%{EPOCH}_|-%{VERSION}_|-%{RELEASE}_|-%{ARCH}_|-(none)_|-%{INSTALLTIME}\n']' with the context '{u'timeout': None, u'with_communicate': True, u'shell': False, u'bg': False, u'stderr': -2, u'env': {'LC_NUMERIC': 'C', 'HTTP_PROXY': '216.203.5.248:9090', 'LC_CTYPE': 'C', 'LESSOPEN': '||/usr/bin/lesspipe.sh %s', 'SSH_CLIENT': '10.139.65.217 49284 22', 'SELINUX_USE_CURRENT_RANGE': '', 'LOGNAME': 'root', 'USER'
ID: unzip
Function: pkg.installed
Result: False
Comment: Unable to run command '[u'rpm', u'-qa', u'--queryformat', u'%{NAME}_|-%{EPOCH}_|-%{VERSION}_|-%{RELEASE}_|-%{ARCH}_|-(none)_|-%{INSTALLTIME}\n']' with the context '{u'timeout': None, u'with_communicate': True, u'shell': False, u'bg': False, u'stderr': -2, u'env': {'LC_NUMERIC': 'C', 'HTTP_PROXY':
When I manually go into my salt-minion (the proxy server) and restart the salt-minion, this command (salt "proxy" state.apply) executes perfectly without failure.
I don't want to manually restart the minion whenever I need a deployment on the proxy box.
Can anyone help me out here?
Thanks in advance.
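One detail worth noticing in the failure output: HTTP_PROXY is present in the environment of the rpm command, which suggests the first run leaves proxy variables behind in the minion's own process environment. A minimal sketch of one way to keep the proxy scoped to a single command instead of the whole minion (the URL and state ID are placeholders, not taken from the original proxy.sls; the proxy address is the one from the log):

# hypothetical state - scope the proxy to one cmd.run instead of the minion process
download_through_proxy:
  cmd.run:
    - name: curl -O https://example.com/artifact.tar.gz
    - env:
      - HTTP_PROXY: 'http://216.203.5.248:9090'

With env set on the state itself, the package commands the minion runs later keep a clean environment, so a second state.apply does not inherit the proxy.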

Related

Ansible Ad-Hoc command with ssh keys

I would like to set up Ansible on my Mac. I've done something similar in GNS3 and it worked, but here there are more factors I need to take into account. So I have Ansible installed. I added hostnames in /etc/hosts and I can ping using the hostnames I provided there.
I have created ansible folder which I am going to use and put ansible.cfg inside:
[defaults]
hostfile = ./hosts
host_key_checking = false
timeout = 5
inventory = ./hosts
In the same folder I have hosts file:
[tp-lab]
lab-acc0
When I try to run the following command: ansible tx-edge-acc0 -m ping
I am getting the following errors:
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
[WARNING]: Unhandled error in Python interpreter discovery for host tx-edge-acc0: unexpected output from Python interpreter discovery
[WARNING]: sftp transfer mechanism failed on [tx-edge-acc0]. Use ANSIBLE_DEBUG=1 to see detailed information
[WARNING]: scp transfer mechanism failed on [tx-edge-acc0]. Use ANSIBLE_DEBUG=1 to see detailed information
[WARNING]: Platform unknown on host tx-edge-acc0 is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible/2.10/reference_appendices/interpreter_discovery.html for more information.
tx-edge-acc0 | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"module_stderr": "Shared connection to tx-edge-acc0 closed.\r\n",
"module_stdout": "\r\nerror: unknown command: /bin/sh\r\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 0
Any idea what might be the problem here? Much appreciated.
At first glance, it seems that your Ansible controller does not load the configuration files (especially ansible.cfg) when the playbook is fired.
(From documentation) Ansible searches for configuration files in the following order, processing the first file it finds and ignoring the rest:
$ANSIBLE_CONFIG if the environment variable is set.
ansible.cfg if it’s in the current directory.
~/.ansible.cfg if it’s in the user’s home directory.
/etc/ansible/ansible.cfg, the default config file.
Edit: For peace of mind, it is good to use full paths.
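A quick way to confirm which config file (if any) is actually being loaded: ansible --version prints the active file (the path shown here is illustrative):

$ ansible --version | grep 'config file'
  config file = /home/ansible/ansible.cfg

If it prints config file = None, none of the locations above contained a usable ansible.cfg.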
EDIT Based on comments
$ cat /home/ansible/ansible.cfg
[defaults]
host_key_checking = False
inventory = /home/ansible/hosts # <-- use full path to inventory file
$ cat /home/ansible/hosts
[servers]
server-a
server-b
Command & output:
# Supplying inventory host group!
$ ansible servers -m ping
server-a | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
server-b | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}

Minion cannot find file on master

On Minion:
ID: run_snmpv3_config
Function: file.managed
Name: /tmp/run_snmpv3_config_cmd.sh
Result: False
Comment: Source file salt://files/run_snmpv3_config_cmd.sh not found in saltenv 'base'
Started: 15:11:56.175325
Duration: 27.084 ms
Changes:
On master we confirm that the minion does in fact see the file:
master # salt minion cp.list_master | grep snmp
- files/run_snmpv3_config_cmd.sh
So why isn't it able to get it?
(In fact I wanted to use cmd.script but that errors out with Unable to cache script, so I tried to just copy the file, which doesn't work either as we see above.)
I called the state for debugging purposes on a client system using
salt-call --local state.apply teststate -l debug
Of course, in this case it will look for the file salt://x inside /srv/salt (or whatever the minion's config says) on the minion, and not on the master....
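In other words, with --local the minion never contacts the master's file server. A short sketch of the two ways around that (paths assume the default /srv/salt file_roots):

# Option 1: drop --local so salt:// resolves against the master
salt-call state.apply teststate -l debug

# Option 2: for masterless runs, mirror the file into the minion's own file_roots
mkdir -p /srv/salt/files
cp run_snmpv3_config_cmd.sh /srv/salt/files/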

[Atom][Remote-ftp][colfax] Unable to connect remote server

I've recently been trying to work on the Intel® AI DevCloud; please see Connecting from Linux or a Mac.
I can connect to the remote server colfax via SSH, but I'm not able to set the atom-remote-ftp .ftpconfig correctly for colfax.
Here is what I did:
download the linux access key and put it at key_path
add
Host colfax
User xxxxxx
IdentityFile key_path
ProxyCommand ssh -T -i key_path guest@cluster.colfaxresearch.com
log in using
ssh colfax
Would anyone please let me know what the host (?), user (xxxxxx) and pass ("") should be?
{
"protocol": "ftp",
"host": "***FTP_HOSTNAME_HERE***",
"port": 21,
"user": "***YOUR_USERNAME_HERE***",
"pass": "***YOUR_PASSWORD_HERE***",
"promptForPass": false,
"remote": "***REMOTE_PATH_HERE***",
"secure": true,
"secureOptions": {"rejectUnauthorized": false, "requestCert": true, "agent": false},
"connTimeout": 10000, // integer - How long (in milliseconds) to wait for the control connection to be established. Default: 10000
"pasvTimeout": 10000, // integer - How long (in milliseconds) to wait for a PASV data connection to be established. Default: 10000
"keepalive": 10000, // integer - How often (in milliseconds) to send a 'dummy' (NOOP) command to keep the connection alive. Default: 10000
"watch":[]
}
The config above refers to @Sanjay Verma's answer at [Atom][Remote-ftp] Unable to connect ftps/ftpes. Thank you!
Please find the procedure below
Host colfax
User uXXXX
IdentityFile ~/Downloads/colfax-access-key-xxxx
ProxyCommand ssh -T -i ~/Downloads/colfax-access-key-xxxx guest@cluster.colfaxresearch.com
Set the correct restrictive permissions on the private SSH key. To do this, run the following commands in a terminal:
chmod 600 ~/Downloads/colfax-access-key-xxxx
chmod 600 ~/.ssh/config
After the preparation steps above, you should be able to log in to your login node:
ssh colfax
Once your connection is set up, you can copy local files to your login node like this:
scp /path/to/local/file colfax:/path/to/remote/directory/
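Since access here is key-based SSH rather than plain FTP, the .ftpconfig would need the sftp protocol instead of ftp. A hedged sketch (remote-ftp does support sftp with a privatekey field, but whether it can honor the ProxyCommand hop that colfax requires is uncertain, so this assumes your login node is directly reachable; host, user and remote path are placeholders):

{
    "protocol": "sftp",
    "host": "cluster.colfaxresearch.com",
    "port": 22,
    "user": "uXXXX",
    "privatekey": "~/Downloads/colfax-access-key-xxxx",
    "remote": "/home/uXXXX",
    "agent": "",
    "connTimeout": 10000,
    "keepalive": 10000,
    "watch": []
}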

Openstack-Keystone failing to start

I've tried almost everything in the past couple of days to get keystone running, to no avail.
The setup is all on one machine: the virtualization, OpenStack and keystone are all on the same host. I've tried setting up keystone with 127.0.0.1, localhost and the IP of the host, with no luck.
[DEFAULT] log_file = /var/log/keystone/keystone.log
admin_token = ***
bind_host = 192.168.33.11
public_port = 5000
admin_port = 35357
compute_port = 8774
# === Logging Options ===
# Print debugging output verbose = True
# Print more verbose output
# (includes plaintext request logging, potentially including passwords)
# debug = False
# Name of log file to output to. If not set, logging will go to stdout. log_file = keystone.log
# The directory to keep log files in (will be prepended to --logfile) log_dir = /var/log/keystone
# Use syslog for logging.
# use_syslog = False
# syslog facility to receive log lines
# syslog_log_facility = LOG_USER
# If this option is specified, the logging configuration file specified is
# used and overrides any other logging options specified. Please see the
# Python logging module documentation for details on logging configuration
# files. log_config = logging.conf
# A logging.Formatter log message format string which may use any of the
# available logging.LogRecord attributes.
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# Format string for %(asctime)s in log records.
# log_date_format = %Y-%m-%d %H:%M:%S
# onready allows you to send a notification when the process is ready to serve
# For example, to have it notify using systemd, one could set shell command:
# onready = systemd-notify --ready
# or a module with notify() method:
# onready = keystone.common.systemd
[sql] connection = mysql://keystone:***@localhost/keystone
# idle_timeout = 200
[identity] driver = keystone.identity.backends.sql.Identity
[catalog] template_file = /etc/keystone/default_catalog.templates driver = keystone.catalog.backends.sql.Catalog
# dynamic, sql-based backend (supports API/CLI-based management commands)
# driver = keystone.catalog.backends.sql.Catalog
# static, file-based backend (does *NOT* support any management commands)
# driver = keystone.catalog.backends.templated.TemplatedCatalog
# template_file = default_catalog.templates
[token] driver = keystone.token.backends.sql.Token
# driver = keystone.token.backends.kvs.Token
# Amount of time a token should remain valid (in seconds)
# expiration = 86400
I've enabled logging in the logging.conf file and set the level to DEBUG and INFO; however, nothing shows up in the log files.
[root@* keystone]# service openstack-keystone restart
Stopping keystone: [FAILED]
Starting keystone: [ OK ]
[root@* keystone]# service openstack-keystone restart
Stopping keystone: [FAILED]
Starting keystone: [ OK ]
[root@* keystone]# ps aux | grep keystone
root 25580 0.0 0.0 103236 880 pts/1 S+ 09:41 0:00 grep keystone
[root@* keystone]#
Any ideas will be greatly appreciated. Thank you.
As I mentioned in the comment, I've never seen a config file with the section headings on the same line as a config option:
[DEFAULT] log_file = /var/log/keystone/keystone.log
I've always seen it like this instead:
[DEFAULT]
log_file = /var/log/keystone/keystone.log
However, I have no idea if this is related to your issue.
To enable debug-level logging, make sure you set the following in /etc/keystone/logging.conf:
[logger_root]
level=DEBUG
Then try running keystone manually instead of as a service:
$ sudo -u keystone bash
$ HOME=/var/lib/keystone keystone-all --debug
Hopefully you'll see a relevant error message on standard out.
(I believe it will still send the logging to /var/log/keystone/keystone.log, not sure how to actually get it to log to standard out when running manually like this).
Add a valid token for admin_token. It should not be "*".
Check the below line:
[sql] connection = mysql://keystone:*@localhost/keystone
It should be something like:
connection = mysql://keystone:keystone@localhost/keystone
Refer to this URL for an example keystone.conf file:
http://docs.openstack.org/trunk/openstack-compute/install/yum/content/keystone-conf-file.html
I ran into this issue as well. I am running on Ubuntu 12.04 LTS. What I found was that the service start command in /etc/init/keystone.conf uses start-stop-daemon to run the service, and it was written for a newer version than the one on my box: the --chdir variable is not accepted as an input. Once I removed that line, keystone started right up.
Try running:
start-stop-daemon --start --chuid keystone --name keystone --exec /usr/bin/keystone-all
/etc/init/keystone.conf after the change:
description "Keystone API server"
author "Soren Hansen <soren#linux2go.dk>"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec start-stop-daemon --start --chuid keystone \
--name keystone \
--exec /usr/bin/keystone-all
Check if your IP address is equal to HOST_IP=... in localrc.
This might be due to keystone not getting started properly, so that port 35357 is not in listening mode.
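A quick way to verify (35357 is the admin port from the config above):

# if keystone is up, something should be listening on 35357
netstat -tlnp | grep 35357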
This seems to be anomalous behavior of the keystone service.
I am listing steps which have worked on my system for a havana installation on Ubuntu 12.04, kernel version 3.2.0-67-generic, after a day of headache around this issue. Try these steps, preferably in the same order.
1) Remove the keystone package:
apt-get remove keystone
2) Reboot your system:
reboot
3) After the reboot, install keystone again:
apt-get install keystone
4) Check the status of the keystone service:
service keystone status
It will show start/running.
5) Now make the changes you want in /etc/keystone/keystone.conf.
After making changes in the conf file, DO NOT RESTART THE KEYSTONE SERVICE.
Use the stop and start commands to get the effect of a restart, but don't use restart:
service keystone stop
service keystone start
For further help, here is a dump of my CLI:
http://pastebin.com/sduuFCL7
There are multiple problems with the icehouse documentation and install. packstack is broken, so the only way to get started is to manually follow the upstream docs for your distro. It is very important to set up keystone first, correctly, before moving on, because other services rely on it.
The paste file /usr/share/keystone/keystone-dist-paste.ini should be copied to /etc/ to be accessible to the config scripts, like this:
cp /usr/share/keystone/keystone-dist-paste.ini /etc/keystone/
chown keystone:keystone /etc/keystone/*
Make sure to update keystone.conf with the new config_file value.
The documentation is wrong about the mysql connection; it should go under [sql] and not [database], so:
openstack-config --set /etc/keystone/keystone.conf sql connection mysql://keystone:PASSWD@controller/keystone
The name controller should resolve to whatever mysql is bound to; I would add it to /etc/hosts like this, if [mysqld]/bind-address in /etc/my.cnf is 10.1.1.100:
10.1.1.100 controller
Make sure to uncomment log_file in keystone.conf to see what is happening.
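To double-check that the connection value landed in the [sql] section, the same tool can read it back (same path as above):

openstack-config --get /etc/keystone/keystone.conf sql connection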
I was facing a similar issue. I followed the steps below and the openstack-keystone service started.
Edit the /etc/keystone/keystone.conf file and complete the following actions:
In the [DEFAULT] section
[DEFAULT]
admin_token = ADMIN_TOKEN
In the [database] section
[database]
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
In the [token] section, configure the UUID token provider and SQL driver
[token]
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token
In the [revoke] section
[revoke]
driver = keystone.contrib.revoke.backends.sql.Revoke
After making the above changes, populate the Identity service database with:
su -s /bin/sh -c "keystone-manage db_sync" keystone
Start the openstack-keystone service using the command below:
systemctl start openstack-keystone
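If the service still dies silently after these steps, the earlier suggestions in this thread can be checked quickly (5000 and 35357 are the default public and admin ports from the config above):

systemctl status openstack-keystone
ss -tlnp | egrep '5000|35357'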

Authentication failed with capifony

I'm trying to build a Symfony 2 project deployment web app based on capifony and Symfony2.
It uses Process to trigger my "cap deploy" task and display its output in a web browser.
When in a shell, if I run "cap deploy" as user www-data (the same user as used by Process), my deployment works fine, so there's nothing wrong with either my deploy task or my authentication keys.
However, when I call my task from my web app, capifony tells me it can't authenticate on the remote server.
triggering start callbacks for `deploy'
* executing `deploy:setdomain'
* executing `deploy'
* executing `deploy:update'
** transaction: start
* executing `deploy:update_code'
triggering before callbacks for `deploy:update_code'
--> Updating code base with checkout strategy
executing locally: "git ls-remote [ myrepo ]"
command finished in 2068ms
* executing "git clone -q -o [ remote server ] [ my repo ]
/var/www/spinfony/releases/20121211100449 && cd /var/www/spinfony/releases/20121211100449 && git checkout -q -b deploy be53233e51a4c542c3bc8603b424e57f988898a4 && (echo be53233e51a4c542c3bc8603b424e57f988898a4 > /var/www/spinfony/releases/20121211100449/REVISION)"
servers: ["[ remote server ]"]
Password: stty: standard input: Invalid argument
stty: standard input: Invalid argument
stty: standard input: Invalid argument
*** [deploy:update_code] rolling back
* executing "rm -rf /var/www/spinfony/releases/20121211100449; true"
servers: ["[ remote server ]"]
** [deploy:update_code] exception while rolling back: Capistrano::ConnectionError, connection failed for: [ remote server ] (Net::SSH::AuthenticationFailed: [ user ])
connection failed for: [ remote server ] (Net::SSH::AuthenticationFailed: [ user ])
I'm trying to figure out why capifony seems to expect a password I can't provide, since I'm not running it from a shell, whereas when I run it from a shell it works fine without asking me anything.
Once again, the same file is called by the same user.
This is a known "bug".
You need to tell capistrano which key to use.
Try adding this to your deploy.rb:
ssh_options[:keys] = %w(/what/ever/.ssh/id_rsa)
Source: http://adam.goucher.ca/?p=1253
When you call your task from a web app, you need to tell it which user to use:
set :user, "www-data"
set :domain, "webserverdomainname.com"
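Putting the two answers together, a minimal sketch of the relevant deploy.rb lines (the key path is a placeholder; it must be a key that the www-data user can actually read, since that is the user Process runs as):

# deploy.rb
set :user, "www-data"
set :domain, "webserverdomainname.com"
ssh_options[:keys] = %w(/var/www/.ssh/id_rsa)  # placeholder path, readable by www-data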
