How to run a cmd in Windows with Admin privileges in Salt - salt-stack

I have a simple system that is managed by Saltstack. The minions are Windows Server 2016 machines. I would like to run some command with Admin privileges. How would I achieve that in the .sls for the minion?
run-my-cmd:
  cmd.run:
    - cwd: C:\temp\
    - bg: True
    - name: 'some command'

Execute the command using runas:
C:\Users\abcxyz>Runas /profile /user:Administrator "your command"
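If you want to keep this inside the state file instead, cmd.run also accepts a runas argument, and on Windows minions runas must be paired with password. A minimal sketch, where Administrator and the admin_password pillar key are placeholders for your own account and secret store:
run-my-cmd:
  cmd.run:
    - name: 'some command'
    - cwd: C:\temp\
    - runas: Administrator
    # runas on Windows requires the account's password; pulling it from
    # pillar (the key name here is an example) avoids hard-coding it
    - password: {{ pillar['admin_password'] }}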

Related

Need Ansible playbook in order to count the number of users currently logged in to VPN

I am writing an Ansible playbook for "Count number of users currently logged in to VPN", using Junos modules as suggested by the network team. I have installed the software below on my RHEL 7 machine with Ansible 2.9.
Junos Ansible Requirements
===============================
--> Install dependencies
# pip install ncclient
# pip install junos-eznc
--> Install the Juniper.junos Galaxy role
# ansible-galaxy install juniper.junos
--> Have NETCONF enabled on Juniper devices over SSH
# set system services netconf ssh
--> (Optional) Python lib for the Juniper console
# pip install junos-netconify
Whenever I run a playbook, I get the error below.
Playbook:
---
- name: Get device uptime
  hosts:
    - dc1
  roles:
    - Juniper.junos
  connection: local
  gather_facts: no
  vars_prompt:
    - name: username
      prompt: Junos Username
      private: no
    - name: password
      prompt: Junos Password
      private: yes
  tasks:
    - name: get uptime using galaxy module
      junos_command:
        commands: show system uptime
      register: uptime
    - name: display uptimes
      debug: var=uptime
Error:
PLAY [Get device uptime] **************************************************************************************************************
TASK [get uptime using galaxy module] *************************************************************************************************
fatal: [172.16.130.1]: FAILED! => {"changed": false, "msg": "invalid rpc for running in check_mode"}
PLAY RECAP ****************************************************************************************************************************
172.16.130.1 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I was just exploring Ansible networking commands and got the above error. Please suggest what configuration is required to work with Junos.
Here is a playbook to check the number of users currently logged in to the VPN:
---
- name: Get system users currently logged in
  hosts: all
  connection: local
  gather_facts: no
  roles:
    - Juniper.junos
  tasks:
    - name: Retrieve facts from device running Junos OS
      juniper_junos_facts:
    - name: Print version
      debug:
        var: junos.fqdn
    - name: Run RPC Commands
      juniper_junos_command:
        commands: "show security dynamic-vpn users"
        format: text
        dest: "{{ junos.fqdn }}.output"
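If you want the actual count rather than a raw report, you could also register the output and count matching lines. A rough sketch, assuming the module exposes the text output as stdout_lines on the registered result and that each logged-in user shows up on a line matching 'User' (both are assumptions; check the module's return values and your device's output):
    - name: Run RPC command and keep the output
      juniper_junos_command:
        commands: "show security dynamic-vpn users"
        format: text
      register: vpn_output
    - name: Count users currently logged in to the VPN
      debug:
        # stdout_lines and the 'User' filter are assumptions; adjust to
        # the real return values and report layout
        msg: "{{ vpn_output.stdout_lines | select('search', 'User') | list | length }} VPN users logged in"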

Add OpenID users to Open Distro Kibana

I've configured opendistro_security for OpenID. When I attempt to authenticate a user, it fails, presumably because that user has no permissions. How do I give permissions to an OpenID user? I can't seem to find an obvious way to do so with internal_users.yml.
I solved it. For posterity, here's what I needed to do in addition to the OpenID settings in the kibana.yml file.
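For reference, those Kibana-side settings look roughly like this (a sketch; the client ID and secret are placeholders for the credentials from your identity provider):
opendistro_security.auth.type: "openid"
opendistro_security.openid.connect_url: "https://accounts.google.com/.well-known/openid-configuration"
# Placeholders: use your own OAuth client credentials
opendistro_security.openid.client_id: "<your-client-id>"
opendistro_security.openid.client_secret: "<your-client-secret>"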
1: In the config.yml file on each of my Elasticsearch nodes I needed to add the following:
authc:
  openid_auth_domain:
    http_enabled: true
    transport_enabled: true
    order: 0
    http_authenticator:
      type: openid
      challenge: false
      config:
        subject_key: email
        roles_key: roles
        openid_connect_url: https://accounts.google.com/.well-known/openid-configuration
    authentication_backend:
      type: noop
Since I'm using Google as my identity provider, I needed to make sure my subject_key was "email".
2: I needed to run the security config script on each node:
docker exec -it elasticsearch-node1 /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cacert /usr/share/elasticsearch/config/root-ca.pem -cert /usr/share/elasticsearch/config/kirk.pem -key /usr/share/elasticsearch/config/kirk-key.pem -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -icl
docker exec -it elasticsearch-node2 /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cacert /usr/share/elasticsearch/config/root-ca.pem -cert /usr/share/elasticsearch/config/kirk.pem -key /usr/share/elasticsearch/config/kirk-key.pem -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -icl
3: I needed to map the users that I want to have admin access to a role:
all_access:
  reserved: false
  backend_roles:
    - "admin"
  users:
    - "name@email.com"
  description: "Maps an openid user to all_access"
Now I can assign roles to other users from Kibana.

SSH connectivity issues with ntc-ansible modules

I am trying to use the ntc-ansible modules with Ansible running on Ubuntu (WSL). I have SSH connectivity to my remote device (a Cisco 2960X), and I can run Ansible playbooks against the same remote switch using the built-in Ansible networking modules (ios_command) and it works fine.
Issue:
When I try to run any of the ntc-ansible modules, it fails, unable to connect to the device. Probably something simple, but I have hit a wall; there is something I am missing about how to use the ntc-ansible modules. Ansible is seeing the modules, since I can view their documentation, which the README suggests as a test.
I have ntc-ansible module installed here: /home/melshman/.ansible/plugins/modules/ntc-ansible
I am running my playbooks from here: ~/projects/ansible/
The first time I ran the playbook with the ntc-ansible modules it failed, and based on the error message and some research I installed sshpass (sudo apt-get install sshpass). But I am still having SSH problems using ntc-ansible (playbook and traceback below).
I hear folks talking about an index file, but I can't find that file. Where does it live and what do I need to do with it?
What is my connection supposed to be setup to be? Local? SSH? Netmiko_ssh?
What should I be using for platform? Cisco_ios? cisco_ios_ssh?
Appreciate any help I can get. I have been running in circles for hours and hours.
Ansible Version Info:
VTMNB17024:~/projects/ansible $ ansible --version
ansible 2.5.3
config file = /home/melshman/projects/ansible/ansible.cfg
configured module search path = [u'/home/melshman/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
Working playbook (ios_command); note: ansible_ssh_pass and ansible_user are set in group vars:
- name: Test Net Automation
  hosts: ctil-ios-upgrade
  connection: local
  gather_facts: no
  tasks:
    - name: Grab run config
      ios_command:
        commands:
          - show run
      register: config
    - name: Create backup of running configuration
      copy:
        content: "{{ config.stdout[0] }}"
        dest: "backups/show_run_{{ inventory_hostname }}.txt"
Playbook (not working) using an ntc-ansible module (note: username and password are defined in group vars):
- name: Cisco IOS Automation
  hosts: ctil-ios-upgrade
  connection: local
  gather_facts: no
  tasks:
    - name: GET UPTIME
      ntc_show_command:
        connection: ssh
        platform: "cisco_ios"
        command: 'show version | inc uptime'
        host: "{{ inventory_hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        use_templates: True
        template_dir: /home/melshman/.ansible/plugins/modules/ntc-ansible/ntc-templates/templates
Here is the traceback I get when the error occurs:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: netmiko.ssh_exception.NetMikoTimeoutException: Connection to device timed-out: cisco_ios VTgroup_SW:22
fatal: [VTgroup_SW]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_RJRY9m/ansible_module_ntc_save_config.py\", line 279, in \n main()\n File \"/tmp/ansible_RJRY9m/ansible_module_ntc_save_config.py\", line 251, in main\n device = ntc_device(device_type, host, username, password, **kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/pyntc-0.0.6-py2.7.egg/pyntc/__init__.py\", line 35, in ntc_device\n return device_class(*args, **kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/pyntc-0.0.6-py2.7.egg/pyntc/devices/ios_device.py\", line 39, in __init__\n self.open()\n File \"/usr/local/lib/python2.7/dist-packages/pyntc-0.0.6-py2.7.egg/pyntc/devices/ios_device.py\", line 55, in open\n verbose=False)\n File \"build/bdist.linux-x86_64/egg/netmiko/ssh_dispatcher.py\", line 178, in ConnectHandler\n File \"build/bdist.linux-x86_64/egg/netmiko/base_connection.py\", line 207, in __init__\n File \"build/bdist.linux-x86_64/egg/netmiko/base_connection.py\", line 693, in establish_connection\nnetmiko.ssh_exception.NetMikoTimeoutException: Connection to device timed-out: cisco_ios VTgroup_SW:22\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}
Here is a working solution using ntc_show_command to a Cisco IOS device.
- name: Cisco IOS Automation
  hosts: pynet-rtr1
  connection: local
  gather_facts: no
  tasks:
    - name: GET UPTIME
      ntc_show_command:
        connection: ssh
        platform: "cisco_ios"
        command: 'show version'
        host: "{{ ansible_host }}"
        username: "{{ ansible_user }}"
        password: "{{ ansible_ssh_pass }}"
        use_templates: True
        template_dir: '/home/kbyers/ntc-templates/templates'
If you are going to use ntc-templates, I probably would not have the '| include uptime' in the 'show version'. In other words, let TextFSM convert the output to structured data first and then grab the uptime from that structured data.
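For example, something like this (a sketch: the response key and the lowercase uptime field are assumptions about what ntc_show_command returns with the cisco_ios_show_version template; check the template's Value definitions):
    - name: GET VERSION (let TextFSM structure the output)
      ntc_show_command:
        connection: ssh
        platform: "cisco_ios"
        command: 'show version'
        host: "{{ ansible_host }}"
        username: "{{ ansible_user }}"
        password: "{{ ansible_ssh_pass }}"
        use_templates: True
        template_dir: '/home/kbyers/ntc-templates/templates'
      register: version_facts
    - name: Grab uptime from the structured data
      debug:
        # response[0].uptime assumes the parsed records land under
        # 'response' with a lowercase 'uptime' field; adjust to your template
        var: version_facts.response[0].uptime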
I modified inventory_hostname to ansible_host to be consistent with my inventory format (my inventory_hostname doesn't actually resolve in DNS).
I modified username and password to 'ansible_user' and 'ansible_ssh_pass' to be consistent with my inventory and also to be more consistent with Ansible 2.5/2.6 variable naming.
On your above issue, your exception message does not match your playbook: the traceback references ntc_save_config, while the playbook uses ntc_show_command. Are you sure that is the exception you get for that playbook?
Here is my inventory file (I simplified this to remove some unnecessary devices and to hide confidential information):
[all:vars]
ansible_connection=local
ansible_python_interpreter=/home/kbyers/VENV/ansible/bin/python
ansible_user=user
ansible_ssh_pass=password
[local]
localhost ansible_connection=local
[cisco]
pynet-rtr1 ansible_host=cisco1.domain.com
pynet-rtr2 ansible_host=cisco2.domain.com

Run grunt watch on startup in Windows 10

Is there a way to open the command prompt and run grunt watch from my project directory each time my PC starts? Just trying to automate the automation; I have spent a good half hour googling with no luck.
There is a module that installs a Node script as a Windows service; it's called node-windows (npm, GitHub, documentation). I've used it before and it worked like a charm.
var Service = require('node-windows').Service;

// Create a new service object
var svc = new Service({
  name: 'Hello World',
  description: 'The nodejs.org example web server.',
  script: 'C:\\path\\to\\helloworld.js'
});

// Listen for the "install" event, which indicates the
// process is available as a service.
svc.on('install', function() {
  svc.start();
});

svc.install();
Alternatively, there is qckwinsvc, an interactive command-line tool that installs a Node script as a Windows service. Installing it:
npm install -g qckwinsvc
Installing your service:
> qckwinsvc
prompt: Service name: [name for your service]
prompt: Service description: [description for it]
prompt: Node script path: [path of your node script]
Service installed
Uninstalling your service:
> qckwinsvc --uninstall
prompt: Service name: [name of your service]
prompt: Node script path: [path of your node script]
Service stopped
Service uninstalled

salt-ssh permission denied when attempting to log into remote system

I am new to salt-ssh and I have gotten it to work successfully for setting up a remote system. However, I have a login issue that I don't know how to address: when I run salt-ssh commands, I have to fight with the initial login process before eventually it just works. I am looking to narrow down what is causing me to have to fight with the login process.
I am using OS X to run my salt-ssh commands against an ubuntu vagrant vm.
I have added my root user's SSH key to the root user's authorized_keys on the Vagrant VM. I have verified that I can log into the system using ssh without any issues:
sudo ssh root@192.168.33.10
Here is what my config files look like:
roster
managed:
  host: 192.168.33.10
  user: root
  sudo: true
Saltfile
salt-ssh:
  config_dir: /users/vmcilwain/projects/salt-ssh-rails
  roster_file: /users/vmcilwain/projects/salt-ssh-rails/roster
  log_file: /users/vmcilwain/projects/salt-ssh-rails/saltlog.txt
master
file_roots:
  base:
    - /users/vmcilwain/projects/salt-ssh-rails/states
pillar_roots:
  base:
    - /users/vmcilwain/projects/salt-ssh-rails/pillars
I run this command:
sudo salt-ssh -i '*' test.ping
I enter my local user's password and I get this output:
Permission denied for host 192.168.33.10, do you want to deploy the salt-ssh key? (password required):
[Y/n]
This is where my fight is. If the Vagrant VM has the SSH key for the user I am executing salt-ssh as, why am I being told that permission is denied, especially when I have verified I can ssh into the system without using salt-ssh?
Answering yes prompts me for the remote root user's password, which I didn't set and don't necessarily want to set, since an SSH key should have worked.
I'm hoping someone can tell me the best way to setup connections between both systems so that I don't have to have this fight every time.
I needed to set the priv in my roster to the rsa key that I am using to connect to the remote host:
priv: /Users/vmcilwain/.ssh/id_rsa
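With that added, the full roster entry from above becomes:
managed:
  host: 192.168.33.10
  user: root
  sudo: true
  priv: /Users/vmcilwain/.ssh/id_rsa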
