Minion ID configuration/detection - salt-stack

I am really curious about when the minion ID is loaded.
http://docs.saltstack.com/en/latest/ref/configuration/minion.html#std:conf_minion-id
Here, it says that the minion ID defaults to the system's hostname.
When does this value get loaded? Every time the minion starts, or every time a change to the system hostname is detected?
What happens if someone comes along and changes the hostname by hand without informing other people who have access to that minion? Does the ID reload automatically, or what?

Here's what the documentation says about minion ID generation, and what I observed when I tried it myself on an Ubuntu EC2 instance.
The first time you run the minion, it uses the FQDN to set the ID of the instance, so whatever hostname --fqdn returned when the minion first started becomes the ID.
On subsequent restarts the ID does not change, even if you change the hostname.
If you want to change the ID, you need to change it manually in the minion config file.
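For reference, a minimal sketch of that manual override in the minion config (the ID value here is a placeholder):

# /etc/salt/minion
id: myminion.example.com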

You can also change the current minion ID in /etc/salt/minion_id.
When I use a Docker container, I usually add hostname > /etc/salt/minion_id to docker-entrypoint.sh or to the Dockerfile. If you do that, remember to start salt-minion after changing the minion_id (not before).
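A minimal sketch of such an entrypoint, assuming the image already has salt-minion installed (the layout is illustrative):

#!/bin/sh
# docker-entrypoint.sh: derive the minion ID from the container hostname,
# then start the minion in the foreground. Order matters: write minion_id first.
hostname > /etc/salt/minion_id
exec salt-minion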

Related

Can you FTP into an EC2 instance if you opted out of creating a key pair when you generated it?

I followed this AWS tutorial to get a WordPress site up and running, but it instructed me not to use the key pair option, so now I can't follow those instructions to FTP in and make simple CSS (etc.) changes.
Before I blow up the whole instance, am I missing an approach that can make FTP possible?
If you skipped creating a key pair during instance launch, you can't connect to it. The only way to connect to that instance with (S)FTP now is to put a working key on the disk:
1. Stop the instance.
2. Detach the EBS volume and attach it to an instance that you can connect to.
3. Mount the volume and put your public key in the .ssh folder in the home directory of the user named bitnami (i.e., append it to .ssh/authorized_keys).
4. Unmount the volume, detach it, and attach it back to the original instance.
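A rough sketch of step 3 on the helper instance, assuming the attached volume shows up as /dev/xvdf1 and your public key is in ~/mykey.pub (the device name and paths will vary):

# mount the detached volume and append the key for the bitnami user
sudo mkdir -p /mnt/rescue
sudo mount /dev/xvdf1 /mnt/rescue
cat ~/mykey.pub | sudo tee -a /mnt/rescue/home/bitnami/.ssh/authorized_keys
sudo umount /mnt/rescue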
It seems like it's easier to just recreate the instance, this time with a key pair.

Docker: unix "who" command doesn't work inside container

I have a Docker image that has one non-root user created named builder.
The application that is supposed to run inside the container uses the Unix who command.
For some reason it returns an empty string inside the container:
builder@2dc3831c558b:~$ who
builder@2dc3831c558b:~$
I cannot use whoami because of implementation details.
(I'm using Docker 1.6.2 on Debian Jessie)
EDIT (additional details regarding why I use "who"):
I use the command who with the arguments am i, that is, who am i. This is supposed to return the user who originally logged in. So, for example, sudo who am i returns builder, while sudo whoami returns root.
The command who includes options like -b (time of last system boot).
Since all commands from a container translate into system calls to the kernel, that would not return anything container-related, but docker-host-related (i.e., about the underlying host).
See also "Difference between who and whoami commands": whoami prints the effective username of the user running it, which is not the same as who (which prints information about users who are currently logged in).
The current workarounds listed in issue 18547 are:
The registry configuration is stored in the client, so something as simple as cat ~/.docker/config.json will give you the answer you're looking for.
docker info | grep Username should give you this information.
But that is not the same as running the command from within a container session. id -u might be closer.
By default, there is no login session when a container is started by the docker daemon.
As Auzias commented, only a direct ssh connection (initiating a login session) would allow who to return anything. But with docker this is generally not needed, since docker exec exists (for debugging purposes) and spares the image maintainer from including ssh unless it is really needed.
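To see the distinction from inside a running container (the container ID is the one from the question; whether id -un prints builder or root depends on the image's USER setting):

$ docker exec -it 2dc3831c558b who        # prints nothing: no login session, no utmp entry
$ docker exec -it 2dc3831c558b id -un     # the effective user is still resolvable
builder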

Deploying a Meteor app with Distelli

I've gotten pretty far into a deployment of my Meteor application on Distelli. Like, almost there. I've done everything as far as setting up the EC2 box, creating a user group [which didn't even seem necessary, as I was able to SSH into the box with full rights without specifying my machine's IP], creating an Elastic IP, a successful build, and deployment to that box. But I can't seem to check whether Meteor is actually running (note: when I SSH in, there are active instances of Mongo and Node, so SOMETHING is running).
The problem has something to do with associating the Elastic IP with my ROOT_URL and domain. I'm just not sure what to do at this step and can't seem to find any directions that are Meteor-specific. I've been using these guides:
https://www.distelli.com/docs/tutorials/how-to-set-up-aws-ec2
https://www.distelli.com/docs/tutorials/deploying-meteor-applications
http://gregblogs.com/tlt-associate-a-namecheap-domain-with-an-amazon-ec2-instance/
Recap: the Distelli deployment is a success, but I get the following error just before it finishes:
Error: $ROOT_URL, if specified, must be an URL
I've set my ROOT_URL to my domain and associated it according to the previous guide. I can run traceroute on the IP, but can't reach port 3000, so my inclination is that the Meteor build is silently failing.
My manifest: https://gist.github.com/newswim/c642bd9a1cf136da73c3
I've noticed that when I point the CNAME record to my ec2 public DNS, NameCheap (aptly named) adds a . to the end of the record. Beyond that, I'm pretty much stumped.
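For what it's worth, that error typically appears when ROOT_URL lacks a scheme; Meteor only accepts a full URL. A minimal example of a value that passes the check (the domain is a placeholder):

export ROOT_URL="http://example.com"   # bare "example.com" would fail the URL check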

When using salt-run virt.init, how can I specify initial login credentials for the new guest?

I'm deploying virtual guests this way:
salt-run virt.init vmtest 2 2048 salt://images/ubuntu-image.qcow2
It only partially works; vmtest is created and its key is added to the master, but the new minion never connects. So I pull up the vnc interface (which works fine) to see what's going on from the minion end, and...can't log in, because I don't know what credentials to use. Oops.
How do I specify initial login credentials when creating a VM with virt.init?
Well, this may not be exactly what you were looking for, but you can use libguestfs-tools in order to set a password on the image itself.
In salt, you can use cmd.run or pass it in a state to change the password after you install libguestfs-tools like so:
salt 'hypervisor' cmd.run "virt-sysprep --root-password password:'myrootpassword' -a /path/to/image.img"
or
update_pass:
  cmd.run:
    - name: virt-sysprep --root-password password:'myrootpassword' -a /path/to/image.img
Side note:
If you create or update the image you use to spawn new VMs so that it pre-installs salt, has /etc/salt/minion configured to point at your master, and starts the minion at your desired run level, you should be able to work out a solution where the minion connects on creation.
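A sketch of baking that in with the same libguestfs toolchain, assuming the image's package manager can reach a repository providing salt-minion and that the guest uses systemd (the master address is a placeholder):

virt-customize -a /path/to/image.img \
  --install salt-minion \
  --write /etc/salt/minion:'master: salt.example.com' \
  --run-command 'systemctl enable salt-minion'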
Good luck, I hope this helps.

SaltStack : Identify environment with DNS record

I have multiple isolated environments to set up with SaltStack. I have created some base states and custom states for each environment. For the moment, the only way I can identify an environment is by requesting a TXT record from the DNS server.
Is there a way I can select the right environment in SaltStack?
How can I put this information in a pillar or a grain?
Salt's dig module might help you here. You can use it to query information from DNS records. It needs the command line dig tool to be installed.
From the command line, run:
salt-call dig.TXT google.com
to produce output like this:
local:
    - "v=spf1 include:_spf.google.com ~all"
Use a salt state to put it into a grain:
# setupgrain.sls
mygrainname:
  grains.present:
    - value: {{ salt['dig.TXT']('google.com') }}
Once you have the information in a grain, you can target salt nodes on that grain using matchers.
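For example, a grain matcher from the command line, and the equivalent targeting in a top file (the grain name and value are placeholders carried over from the state above):

salt -G 'mygrainname:production' test.ping

# top.sls
base:
  'mygrainname:production':
    - match: grain
    - production_states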
