I have to set up a new Salt configuration.
For minion setup I want to devise an approach, and this is what I came up with:
Make an entry for the new minion in the /etc/salt/roster file so that I can use salt-ssh.
Run a salt formula to install salt-minion on this new minion.
Generate the minion fingerprint with salt-call key.finger --local on the minion, somehow (still figuring this out) get it to the master, and keep it in some file until the minion actually tries to connect.
When the minion actually tries to connect, the master verifies the minion's identity against the stored fingerprint and then accepts the key.
Once this is done, a Salt state can bring the minion up to its desired state.
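For reference, the fingerprint check described above would boil down to something like this (the minion ID is just a placeholder):
# on the new minion, before it connects for the first time:
salt-call --local key.finger

# on the master, once the minion's key shows up as pending:
salt-key -f new-minion-1    # print the fingerprint of the pending key
salt-key -a new-minion-1    # accept it only if it matches the stored fingerprint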
The manual chores associated with this:
I'll have to manually enter the minion ID, IP and user in the /etc/salt/roster file for every new minion that I want up.
Other than this I can't see any drawbacks.
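For illustration, such a roster entry would look something like this (ID, host and user are placeholders):
# /etc/salt/roster -- one entry per new minion
new-minion-1:
  host: 192.0.2.10
  user: root
  sudo: True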
My questions are:
Is this approach feasible?
Are there any security risks?
Is a better approach already out there ?
P.S. Master and minions may or may not be on public network.
There is salt-cloud for provisioning new nodes. Among other providers it includes saltify, which uses SSH for the provisioning; see here for the online documentation. It will do the following, all in one step:
create a new set of keys for the minion
register the minion's key with the master
connect to the minion using SSH and bootstrap the minion with salt and the minion's keys
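As a rough sketch (the provider, profile and host values below are made-up examples), the saltify setup consists of a provider and a profile, after which a single salt-cloud call bootstraps the machine:
# /etc/salt/cloud.providers.d/saltify.conf
my-saltify:
  driver: saltify

# /etc/salt/cloud.profiles.d/saltify.conf
make-salty:
  provider: my-saltify
  ssh_host: 192.0.2.10
  ssh_username: root
  password: very-secret   # or key-based auth, depending on your setup

# bootstrap the machine; the name given here becomes the minion ID
salt-cloud -p make-salty new-minion-1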
If you want the minions to verify the master's key once they connect, you can publish a certificate to the minions and sign the master's key with the certificate as described here. Please double-check whether saltify already supports this.
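If I remember correctly, the relevant options are roughly these (a minimal sketch, assuming the default signing key name):
# master config (/etc/salt/master)
master_sign_pubkey: True

# minion config (/etc/salt/minion)
verify_master_pubkey_sign: True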
Some time ago I prepared a salt-cloud setup on my GitHub account that works with both DigitalOcean and Vagrant. The Vagrant provisioning uses salt-cloud with saltify. Have a look at the included cheatsheet.adoc for the Vagrant commands.
I'm running a Salt master in a very constrained Kubernetes environment where the ingress controller only listens on a single port.
Can I configure my minion so that it uses a different SNI for publishing and returning?
e.g. publish https://salt-master.publish.com
ret https://salt-master.ret.com
Unfortunately it is not possible. Salt is set up to make sure that return data goes back to the same master the minion picked the job up from.
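To illustrate (the hostname below is just an example): the minion config takes a single master value, and the publish and return channels simply use different ports on that same host, so there is no per-channel hostname to attach a different SNI to:
# /etc/salt/minion
master: salt-master.example.com
publish_port: 4505   # PUB channel (jobs published by the master)
master_port: 4506    # return/request channel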
Connect to instance: i-38942195
To connect to your instance, be sure security group my-test-security-group has TCP port 22 open to inbound traffic and then perform the following steps (these instructions do not apply if you did not select a key pair when you launched this instance):
Open an SSH terminal window.
Change your directory to the one where you stored your key file my-test-keypair.pem.
Run the following command to set the correct permissions for your key file:
chmod 400 my-test-keypair.pem
Connect to your instance via its public IP address by running the following command:
ssh -i my-test-keypair.pem root@192.168.0.29
Eucalyptus no longer supports VMware, but to generally troubleshoot instance connectivity you would first check that you are using a known good image such as those available via:
# python <(curl -Ls https://eucalyptus.cloud/images)
and ensure that the instance booted correctly:
# euca-get-console-output i-38942195
If that looks good (check the console output for instance metadata access to the SSH key), then check that the security group rules are correct and that the instance is running with the expected security group and SSH key.
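For example (euca2ools commands against the instance and group from the question; adjust the source CIDR to your needs):
# confirm the instance state, key pair and security group
euca-describe-instances i-38942195

# open inbound SSH on the group if it is not already open
# (0.0.0.0/0 is fine for a quick test; narrow it down for real use)
euca-authorize -P tcp -p 22 -s 0.0.0.0/0 my-test-security-group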
VMWare deprecation notice from version 4.1:
Support for VMWare features in Eucalyptus has been deprecated and will be removed in a future release.
http://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#release-notes/4.1.0/4.1.0_rn_features.html
Euca2ools command:
http://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#euca2ools-guide/euca-get-console-output.html
For my use case, I am provisioning VMs using a predefined VM template in vCenter. The hostname in this template is already set, and the Salt minion is installed with no minion_id file. Once a VM is provisioned and the minion service starts, it automatically uses the hostname as the minion ID.
Now the same template is used for provisioning more machines, so all the machines get the same minion ID.
One way to solve the problem is to manually change the minion_id file inside the newly created VM, but for business reasons this is not possible.
The other way I can think of is to set a unique minion ID in a VM guest advanced option like guestinfo and read it when the VM is booting up, but this can only be set when the VM is in a powered-off state.
I need help setting a different minion ID for each VM. How can this be accomplished without going inside the provisioned VM?
In our case, hostname collisions are a possibility, so we set the minion ID to the UUID of the device. On Linux that's obtainable with dmidecode -s system-uuid; there's a similar command for Windows.
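A minimal sketch of how that can be wired up, assuming you can bake a one-shot first-boot script into the template (the script name and how you trigger it are up to you):
#!/bin/sh
# Write the minion ID from the hardware UUID so every clone of the
# template gets a unique, stable ID, then (re)start the minion.
uuid=$(dmidecode -s system-uuid | tr '[:upper:]' '[:lower:]')
echo "$uuid" > /etc/salt/minion_id
systemctl restart salt-minion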
I'm still new to AWS and this is my first attempt at working with MariaDB; I'm used to dealing with hosting providers that already have something like cPanel installed so please be nice. :)
I'm using Bitnami's WordPress Multi-Tier with Amazon RDS for MariaDB.
Bitnami's documentation is usually quite good, but in this particular case I'm not finding anything. I've reached out to their support, and the only reply I've received so far was something akin to "use a WordPress plugin to make database exports", which obviously isn't going to cut the mustard when it comes to importing.
What I want to accomplish:
Connect to my database
Export my database
Import (overwrite) a database
Essentially, I want to deploy my local WordPress to AWS... the files are all good, but I'm lost when it comes to the database.
(NOTE: I want to get out of the habit of relying on phpMyAdmin and, ideally, don't want to have to go through installing it, etc.)
I started here: Connecting to a DB Instance Running the MariaDB Database Engine
After SSH'ing in I've tried:
Command: mysql
Outputs: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/opt/bitnami/mariadb/tmp/mysql.sock' (2)
Command: mysql -h MY-DB-INSTANCE.us-east-1.rds.amazonaws.com -P 3306 -u bitnami
Outputs: Access denied for user 'bitnami'@'10.0.4.110' (using password: NO)
EDIT: I've split this thread into a separate one for other issues that I ran into.
Presumably your MySQL user bitnami actually has a password, so you may try this:
mysql -h MY-DB-INSTANCE.us-east-1.rds.amazonaws.com -P 3306 -u bitnami -p
(note the -p flag added at the end)
Your shell should then prompt you for the password.
Beyond this, you need to make sure that you have opened your RDS instance to the IP from which you are trying to connect. You could open it to all IPs, but it is better practice to open it only to your dev machine and the production machines that will be hitting the database. If you don't do this step, you won't be able to connect either.
Edit: If the user bitnami does not yet exist, you may have to log in as root and create it, or reset the password if it has been forgotten. You should always write down the admin credentials, as a last-resort means of accessing your RDS instance.
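Once you can connect, the export and import you ask about are just mysqldump and a redirected mysql call (a sketch; bitnami_wordpress is the usual Bitnami schema name, so adjust it to whatever your database is actually called):
# export the WordPress database from RDS to a local dump file
mysqldump -h MY-DB-INSTANCE.us-east-1.rds.amazonaws.com -P 3306 -u bitnami -p bitnami_wordpress > wordpress-backup.sql

# import (overwrite) the database from a dump file
mysql -h MY-DB-INSTANCE.us-east-1.rds.amazonaws.com -P 3306 -u bitnami -p bitnami_wordpress < wordpress-backup.sql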
I am a newbie to OpenStack. I am creating a stack using a HEAT template. In the YAML file I specified the key name as:
parameters:   # Common parameters
  key_name: my-key-pair
After the stack is created, I am able to SSH to all the VMs from my control node without a password, like this:
ssh -i /root/my-key-pair.pem user@instanceip
My requirement is to do the same between the VMs: just as between the control node and the VMs, I want to SSH without a password from VM1 to VM2.
If I copy the .pem file to VM1, then I can SSH without a password from VM1 to the other VMs like this:
ssh -i /VM1-home/my-key-pair.pem user@otherinstanceip
But is there a way this can be accomplished during stack creation itself, so that immediately after stack creation via the HEAT template I can SSH from any instance to the other instances?
Can someone help, please?
Thank You,
Subeesh
You can do this without HEAT.
You should be able to make use of ssh agent forwarding.
Steps:
Start the ssh-agent in the background.
eval "$(ssh-agent -s)"
Add your key to the agent:
ssh-add /root/my-key-pair.pem
Then SSH into the first host with agent forwarding enabled (ssh -A user@instanceip); you should be able to jump between servers from there.
The way to do it with HEAT would be to place that .pem file in the correct location on the created instances; this should be possible with the personality property:
personality: {"/root/my-key-pair.pem": {get_file: "pathtopemfilelocaly"}}
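In context that would look roughly like this (a sketch only; the resource name, parameters and local file path are placeholders, and you may still need to chmod 400 the injected key on the instance before SSH accepts it):
resources:
  vm1:
    type: OS::Nova::Server
    properties:
      key_name: { get_param: key_name }
      image: { get_param: image }
      flavor: { get_param: flavor }
      personality:
        /root/my-key-pair.pem: { get_file: my-key-pair.pem }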