Apply single statefile using salt-call - salt-stack

Is there a way to apply a single statefile?
I would very much like to do a salt-call locally and apply a single file, the equivalent of puppet apply /tmp/some-manifest.pp.
I would very much like to keep it in a single file and not change salt roots or paths, etc.

In master mode, if you have a common/users.sls file in your salt root, you can use:
salt-call state.sls common.users
In standalone mode, you can use salt-call --local if the salt root is configured. See https://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html
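If you just want to apply one file without touching the configured roots, you can point the file root at the file's directory for a single run. A minimal sketch, assuming your state is saved as /tmp/some-manifest.sls (the path is an assumption):
salt-call --local --file-root=/tmp state.sls some-manifest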

How to run a package.sls file on a minion?

I have a package.sls file with packages I want to test on my minion.
How can I run this file on my salt minion using the command line?
If you have a file called mypackage.sls in the current directory, this should execute all of the states within the file:
sudo salt-call --local --file-root=. state.sls mypackage
--local means run without looking for a master
--file-root=. means look for state files in the current directory
state.sls means execute the states in an sls file
mypackage is the name of the sls file, without the .sls extension
Note that calling it this way will not have access to any sls includes or dependencies on the master.
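If the file does use include:, you can still resolve the includes locally by pointing --file-root at the directory tree that contains them. A sketch, assuming the states live under /srv/mystates (the path is an assumption):
sudo salt-call --local --file-root=/srv/mystates state.sls mypackage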

Dockerfile - Touch file at container path

docker-compose.yml file:
web:
  build: ./code
  ports:
    - "80:80"
  volumes:
    - ./mount:/var/www/html
Dockerfile in ./code:
FROM wordpress
WORKDIR /var/www/html
RUN touch test.txt
This is a production environment I'm using to set up a simple WordPress blog (omitted other services in docker-compose.yml & Dockerfile for simplicity).
Here's what I'm doing:
Bind mounting host directory at container destination /var/www/html
Creating a test.txt file at build time
What's NOT working:
When I inspect /var/www/html on the container, I don't find my test.txt file
What I DO understand:
Bind mounting happens at run-time
In this particular case the file gets created at build time, but when the host directory is bind mounted over that path, the results of the Dockerfile commands are hidden
When you use a named volume mount instead, it works
What I DON'T understand:
What are the ways in which you can get your latest code into the container which is using a bind mount to persist data?
How can one create a script that lets me achieve this at runtime?
How else can I achieve this considering I HAVE to use a bind mount (AWS ECS persists data only when you use a host directory path for a volume)
Your data will be persisted at runtime. Everything stored in /var/www/html at runtime will be persisted in the host ./mount directory.
At build time, everything happens in Docker layers inside the container image.
If you want to do things before anything else, you could create a script, ADD it to your image, and use CMD or ENTRYPOINT to run the script when the container starts.
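A minimal sketch of that approach, assuming the code is also baked into the image under /usr/src/wordpress (the paths, the script name, and the apache2-foreground command are assumptions):
Dockerfile additions:
COPY ./src /usr/src/wordpress
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["apache2-foreground"]
entrypoint.sh:
#!/bin/sh
# Copy the baked-in code into the bind-mounted directory at startup;
# -n avoids clobbering files that already exist in the mount
cp -Rn /usr/src/wordpress/. /var/www/html/
# Hand off to the image's main process
exec "$@"
This keeps the image immutable while still populating the bind mount the first time the container starts.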
In summary
What are the ways in which you can get your latest code into the container which is using a bind mount to persist data?
You add your latest code to the image (e.g. git clone, COPY, ADD, or whatever suits you). A container shouldn't be mutable, so you keep your code versioned and define which folders persist data (e.g. for uploads).
How can one create a script that lets me achieve this at runtime?
If you want to do it at runtime, you add your shell script to the image and run it when the container starts, as in the sketch above. That said, this is not the best approach for this use case.
How else can I achieve this considering I HAVE to use a bind mount (AWS ECS persists data only when you use a host directory path for a volume)
IMHO, you should treat your image as a build artifact of your code. Your image should not be mutable and should reflect a specific point in your code's lifecycle. Define the paths that hold data, and make those paths your mounts at the host level.

How to archive a directory in master and copy to minions using salt

I know I can use cp.get_dir to download a directory from the master to minions, but when the directory contains a lot of files, it's very slow. If I could tar up the directory and then download it to the minion, it would be much faster. But I can't find out how to archive a directory on the master prior to downloading it to minions. Any ideas?
What we do is tar the files manually, then extract them on the minion, as you said. We then either replace or modify any files that should be different from what is in the tar-file. This is a good approach for a configuration file that resides in the .tar file, for example.
To archive the file, we just ssh into the salt master and then use something like tar -cvzf files.tar.gz <yourfiles>.
You could also consider having the files on the machines from the start, with a rsync afterwards (via salt.states.rsync for example). This would just push over the changes in the files, not all the files.
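A minimal sketch of that state (the paths and the ssh user are assumptions; rsync runs on the minion, so the source must be reachable from there, e.g. over key-based ssh):
/path/on/the/minion:
  rsync.synced:
    - source: user@master:/path/on/the/master/
    - delete: True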
Adding to what Kai suggested, you could have a minion running on the salt master box and have it tar up the file before you send it down to all the minions.
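A sketch of that, assuming a minion with the id master runs on the salt master box and the archive should land in the master's file root (the id and paths are assumptions):
salt 'master' archive.tar czf /srv/salt/files.tar.gz /path/on/the/master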
You can use the archive.extracted state. The source argument uses the same syntax as its counterpart in the file.managed state. Example:
/path/on/the/minion:
  archive.extracted:
    - source: salt://path/on/the/master/archive.tar.gz

Is there a way to wrap arbitrary commands located under a subdirectory in a shell script

I have a bunch of customizations and would like to run my test program in a pristine environment.
Sure, I could use a tiny shell script to wrap it and pass on arguments, but it would be cool and useful if I could invoke a pre- and possibly post-script only for commands located under certain subdirectories. The shell I'm using is zsh.
I don't know what you include in your “pristine environment”.
If you want to isolate yourself from the whole system, then maybe chroot is what you're after. You can set up a complete new system, with its own /etc, /bin and so on, but sharing the kernel, networking and other non-filesystem stuff with your running system. Root's cooperation is required (the chroot system call is reserved to root).
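A minimal sketch (the path is an assumption, and the new root must already contain a usable system tree):
sudo chroot /srv/pristine /bin/sh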
If you want to isolate yourself from your dot files, run the program with a different value for the HOME environment variable:
HOME=~/test-environment /path/to/test-program
HOME=~/test-environment zsh
If this is specifically about zsh's configuration files, you can set the ZDOTDIR environment variable before starting it to tell zsh to run its own dot files from a directory other than $HOME (or zsh --no-rcs to not load any dot file).
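For example (the directory is an assumption):
ZDOTDIR=~/test-environment zsh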
If by pristine environment you mean a fully controlled set of environment variables, then the env program does this.
env -i PATH=$PATH HOME=$HOME program args
will run program args with only the environment variables you specified.

Which processes are using a shared library

I have a shared library (.so file) on UNIX.
I need to know which running processes are using it.
Does UNIX provide any such utility/command?
You can inspect the contents of /proc/<pid>/maps to see which files are mapped into each process. You'll have to inspect every process, but that's easier than it sounds:
$ grep -l /lib/libnss_files-2.11.1.so /proc/*/maps
/proc/15620/maps
/proc/22439/maps
/proc/22682/maps
/proc/32057/maps
This only works on the Linux /proc filesystem, AFAIK.
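If you also want the process names, a small sketch building on the same /proc trick (Linux-only):
for m in $(grep -l /lib/libnss_files-2.11.1.so /proc/*/maps 2>/dev/null); do
  pid=${m#/proc/}; pid=${pid%/maps}
  printf '%s\t%s\n' "$pid" "$(cat /proc/$pid/comm)"
done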
A quick solution would be to use the lsof command:
[root@host]# lsof /lib/libattr.so.1
COMMAND     PID USER  FD TYPE DEVICE  SIZE   NODE NAME
gdm-binar 11442 root mem  REG    8,6 30899 295010 /lib/libattr.so.1.1.0
gdm-binar 12195 root mem  REG    8,6 30899 295010 /lib/libattr.so.1.1.0
This should work not only for .so files but also for any other files, directories, mount points, etc.
N.B. lsof displays all processes that use a file, so there is a very remote possibility of a false positive if a process opens the .so file but does not actually use it. If this is an issue for you, then Marcelo's answer would be the way to go.
In each directory of interest, run:
ldd * > ldd_output
vi ldd_output
Then search for the library name, e.g. "aLib.so". This shows all binaries in that directory that are linked against it.
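A sketch that automates this over a directory tree (the library name and the search path are assumptions; remember that ldd shows link-time dependencies, not which processes are currently running):
find /usr/local/bin -type f -exec sh -c 'ldd "$1" 2>/dev/null | grep -q aLib.so && echo "$1"' _ {} \;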