For my use case, I am provisioning VMs from a predefined VM template in vCenter. The hostname in this template is already set, and the Salt minion is installed with no minion_id file. Once a VM is provisioned and the minion service starts, it automatically uses the hostname as the minion id.
Now the same template is used to provision more machines, so all of them end up with the same minion id.
One way to solve the problem is to manually change the minion_id file inside the newly created VM, but for business reasons this is not possible.
Another way I can think of is to set a unique minion id in a VM guest advanced option such as guestinfo and read it while the VM is booting, but this can only be set while the VM is powered off.
I need help setting a different minion id for each VM. How can this be accomplished without going inside the provisioned VM?
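If the guestinfo route were taken, the read side at boot could look roughly like the sketch below: a first-boot script baked into the template that queries VMware Tools and writes the minion_id file before the minion service starts. The key name guestinfo.minion_id is only an illustration, and VMware Tools must be installed in the guest.

#!/bin/sh
# Sketch: read a guestinfo value via VMware Tools and use it as the minion id.
# Assumes a key such as guestinfo.minion_id has been set on the VM (the name is illustrative).
MINION_ID=$(vmtoolsd --cmd "info-get guestinfo.minion_id" 2>/dev/null)
if [ -n "$MINION_ID" ]; then
    echo "$MINION_ID" > /etc/salt/minion_id
fi
systemctl restart salt-minion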
In our case, hostname collisions are a possibility, so we set the minion id to the UUID of the device. On Linux that's obtainable with dmidecode -s system-uuid; there's a similar command for Windows.
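A minimal sketch of that approach, assuming a one-shot script in the template that runs before the minion service is started for the first time (on Windows the equivalent lookup would be something like wmic csproduct get uuid):

#!/bin/sh
# Sketch: use the hardware UUID as the minion id so clones of the same template stay unique.
# printf avoids a trailing newline in the minion_id file.
printf '%s' "$(dmidecode -s system-uuid)" > /etc/salt/minion_id
systemctl start salt-minion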
I'm running a Salt master in a very constrained Kubernetes environment where the ingress controller only listens on a single port.
Can I configure my minion so that it uses a different SNI for publishing and returning?
e.g. publish https://salt-master.publish.com
ret https://salt-master.ret.com
Unfortunately, it is not possible. Salt is set up to make sure that return data goes back to the same master the minion picked the job up from.
Connect to instance: i-38942195
To connect to your instance, make sure the security group my-test-security-group has TCP port 22 open to inbound traffic, and then perform the following steps (these instructions do not apply if you did not select a key pair when you launched this instance):
Open an SSH terminal window.
Change your directory to the one where you stored your key file, my-test-keypair.pem.
Run the following command to set the correct permissions for your key file:
chmod 400 my-test-keypair.pem
Connect to your instance via its public IP address by running the following command:
ssh -i my-test-keypair.pem root@192.168.0.29
Eucalyptus no longer supports VMware, but to generally troubleshoot instance connectivity you would first check that you are using a known good image such as those available via:
# python <(curl -Ls https://eucalyptus.cloud/images)
and ensure that the instance booted correctly:
# euca-get-console-output i-38942195
If that looks good (in particular, check in the console output that the instance could reach the metadata service to fetch the SSH key), then check that the security group rules are correct and that the instance is running with the expected security group and SSH key.
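As a rough sketch of those checks with euca2ools (using the instance id and group name from the question; the CIDR is just an example):

# Confirm the instance state and which key pair / security group it was launched with
euca-describe-instances i-38942195
# Open TCP port 22 for inbound traffic on the group if it is not already open
euca-authorize -P tcp -p 22 -s 0.0.0.0/0 my-test-security-group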
VMWare deprecation notice from version 4.1:
Support for VMWare features in Eucalyptus has been deprecated and will be removed in a future release.
http://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#release-notes/4.1.0/4.1.0_rn_features.html
Euca2ools command:
http://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#euca2ools-guide/euca-get-console-output.html
I have to set up a new Salt configuration.
For the minion setup I want to devise an approach, and this is what I came up with:
Make an entry for the new minion in the /etc/salt/roster file so that I can use salt-ssh.
Run a salt formula to install salt-minion on this new minion.
Generate the minion fingerprint with salt-call key.finger --local on the minion and somehow (still figuring out how) get it to the master and keep it in some file until the minion actually tries to connect (see the sketch after this list).
When the minion actually tries to connect, the master verifies the minion's identity against the stored fingerprint and then accepts the key.
Once this is done, a Salt state can bring the minion up to its desired state.
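A rough sketch of steps 1 and 3-4 with placeholder values (the minion id, IP and user are illustrative):

# /etc/salt/roster -- one entry per new minion
web01:
  host: 192.0.2.10
  user: root

# On the minion: print the local key fingerprint
salt-call key.finger --local

# On the master: compare the pending key's fingerprint against the stored one, then accept it
salt-key -f web01
salt-key -a web01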
The manual chores associated with this:
I'll have to make manual entries (minion id, IP and user) in the /etc/salt/roster file for every new minion that I want up.
Other than this I can't see any drawbacks.
My questions are:
Is this approach feasible?
Are there any security risks?
Is a better approach already out there?
P.S. Master and minions may or may not be on public network.
There is salt-cloud to provision new nodes. Among others, it includes a saltify provider that uses SSH for the provisioning; see here for the online documentation. It will do the following, all in one step (a minimal provider/profile sketch follows the list):
create a new set of keys for the minion
register the minion's key with the master
connect to the minion using SSH and bootstrap the minion with salt and the minion's keys
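For illustration, a minimal saltify provider and profile could look like the following (file paths, names, host and key are placeholders; check the saltify documentation for the full set of options):

# /etc/salt/cloud.providers.d/saltify.conf
my-saltify:
  driver: saltify

# /etc/salt/cloud.profiles.d/saltify.conf
make-salty:
  provider: my-saltify
  ssh_host: 192.0.2.10
  ssh_username: root
  key_filename: /root/.ssh/id_rsa

# Provision and register the new minion in one step
salt-cloud -p make-salty new-minion-id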
If you want the minions to verify the master's key once they connect, you can publish a certificate to the minions and sign the master's key with the certificate as described here. Please double-check whether saltify already supports this.
Some time ago I prepared a salt-cloud setup on my GitHub account that works both with DigitalOcean and with Vagrant. The Vagrant provisioning uses salt-cloud with saltify. Have a look at the included cheatsheet.adoc for the Vagrant commands.
After editing /etc/network/interfaces on a GCloud VM instance I cannot access the machine at all through SSH. The GCloud SDK shell still shows the instance running, but the applications are no longer available. I have tried to SFTP to the machine as well, but without success. Is there any way to edit/repair the VM instance's interfaces file without having to revert to an earlier snapshot?
Many thanks!
You can create a new GCE instance and attach the disk of the old instance to the new instance. Then you can connect to the new instance and change anything you want in the old disk.
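A rough sketch of that with the gcloud CLI (instance, disk and zone names are placeholders; by default a boot disk shares its instance's name):

# Stop the broken instance and free its boot disk
gcloud compute instances stop broken-vm --zone us-central1-a
gcloud compute instances detach-disk broken-vm --disk broken-vm --zone us-central1-a
# Create a rescue instance in the same zone and attach the old disk as a secondary disk
gcloud compute instances create rescue-vm --zone us-central1-a
gcloud compute instances attach-disk rescue-vm --disk broken-vm --zone us-central1-a
# SSH in, mount the attached disk (e.g. /dev/sdb1) and fix etc/network/interfaces on it
gcloud compute ssh rescue-vm --zone us-central1-a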
I am trying to set up a CloudStack (v4.4 on CentOS 6.5) management instance to talk to one physical host with XenServer (6.2) on it.
I have got as far as setting up the zone/pod/cluster/host, and it can see the XenServer machine. Primary storage is also visible to it; I can see it in the dashboard. However, it can't see the secondary storage, and thus I can't download templates/ISOs. The dashboard says 0 KB of 0 KB in use for secondary storage.
I have tried having the secondary storage local to the CloudStack management instance (while setting the use.local global setting to true). I have also tried setting up a new host as the NFS share, and that did not work either.
I have checked in both cases that the shares I have made are mountable, and they are. I have also seeded them with the template VM by running the command outlined in the installation guide. Both places I set as secondary storage had ample space available: one greater than 200 GB, the other around 70 GB. I have also restarted the management machine a few times.
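The mount checks looked roughly like this (the server IP and export path are placeholders):

# List the exports offered by the NFS server and try a test mount
showmount -e 192.0.2.20
mount -t nfs 192.0.2.20:/export/secondary /mnt
ls /mnt && umount /mnt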
Any help would be much appreciated!
You need secondary storage enabled in order to supply templates to your hosts. The simplest way to achieve that is to create an NFS export that is available to the host. I usually do it on the host itself, which in your case would be the XenServer. Then, in the management server, add the secondary storage under: Infrastructure -> Secondary Storage -> Add Secondary Storage.
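A minimal sketch of such an export on the host (the path and options are placeholders; adjust them to your environment):

# /etc/exports on the host providing secondary storage
/export/secondary  *(rw,async,no_root_squash,no_subtree_check)

# Reload the export table and verify the share is visible
exportfs -a
showmount -e localhost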
Secondary storage is provided by a dedicated system VM. Once you add a secondary storage, CloudStack will create a system VM for that. Start by checking the status of the system VMs in: Infrastructure -> System VMs
The one you are looking for should be called Secondary Storage VM.
It should be running and the agent should be ready (two green circles). If the agent is not ready, first ssh to your XenServer host and then to the system VM using the link-local IP (you can see the IP in the details of the VM) with the following command:
ssh -i /root/.ssh/id_rsa.cloud -p 3922 LINK_LOCAL_IP_ADDRESS
Then in the system VM, run a diagnostic tool to check what could be wrong:
/usr/local/cloud/systemvm/ssvm-check.sh