How do I query nodegroups and file_roots on salt-master? - salt-stack

I have a few nodegroups set in a separate master.d/*.conf file and can correctly target them. I was wondering if there is a mechanism on the salt master to list nodegroups and their members without having to look through the master file or master.d/*.conf files. Is there a way to list file_roots too without having to look through files?

I'm not sure there is a way to list the nodegroups themselves, but there are some ways to inspect the file_roots content:
salt-run fileserver.file_list
should be run on the master.
There are some similar commands; I'm not sure exactly which one you need:
salt-run fileserver.dir_list
salt-run fileserver.envs
You can read more about salt-run in the documentation.
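The runner commands can be grouped into a quick cheat-sheet. Note that salt-run config.get is an assumption about your Salt version (the config runner is not present in older releases), and all of these need a running master, so they are shown as a sketch:

```shell
# Inspect what the master's fileserver is serving out of file_roots
salt-run fileserver.file_list   # every file available via salt://
salt-run fileserver.dir_list    # directories, per environment
salt-run fileserver.envs        # the configured environments (base, prod, ...)

# On newer Salt releases, the config runner can dump master options,
# including nodegroups (assumption: the config runner is available):
salt-run config.get nodegroups
```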

Related

Pointing SLS files to salt directory

I'm a little confused about using file_roots. Currently we set up our salt directory in the following way.
srv/salt/<folder-connected-to-git>: contains all the folders we want to use, like win (repo / repo-ng), scripts, states, etc. for our salt build. But it doesn't have our binaries folder (which holds the installers for programs).
The master config file uses the following:
file_roots:
  base:
    - /srv/salt/<folder-connected-to-git>
So when setting up SLS package installers we would use salt:// to point to the base folder. Since the binaries folder is outside that path (in /srv/salt), I gave the absolute path (i.e. /srv/salt/binaries). It seems that when running it, salt doesn't recognize this as an absolute path (maybe it's looking for it on the minion instead).
Is there a way to point to a directory outside of base? If not, I could change my file_roots to:
file_roots:
  base:
    - /srv/salt/
  prod:
    - /srv/salt/<git-folder>
But then, would salt look for the repo (to cache to the minion) inside /srv/salt/ instead of /srv/salt/<git-folder>? Could I change what salt:// points to without changing file_roots?
There is the built-in fileserver, which works together with salt '*' cp.get_file or salt '*' cp.get_dir. To use this with file_roots, you might want to create a separate environment for the binaries.
I am not sure if it is intended to be used like that, especially the file_roots environments. But I recently learned that environments are made as flexible as possible, so that you can use them for whatever you might need.
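For example, the separate environment could look like this in the master config (the environment name "binaries" here is illustrative):

```yaml
# /etc/salt/master - sketch: expose the binaries directory as its own
# fileserver environment so salt:// can reach it without touching base
file_roots:
  base:
    - /srv/salt/<folder-connected-to-git>
  binaries:
    - /srv/salt/binaries
```

A state in base could then reference an installer via a salt:// URL with saltenv=binaries appended (on recent Salt versions), or fetch it explicitly with cp.get_file and saltenv=binaries.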
You might want to have a look at gitfs - which allows you to mount git repositories into your state tree. This would make the environment unnecessary. We use this approach for formulas.
We currently solve this with a private network and a webserver that makes larger files available to all minions inside this network. This works quite well, as all our minions are connected to this private network. Running such a network forces you to keep an eye on securing the minions' and the master's communication inside this network. We use local firewalls on all connected minions to achieve this.

mkfifo command failing on clearcase vobs

I am trying to create named pipe in a directory which is created under clearcase's vobs tree (/vobs/something/something) but not checked-in. I am getting this error:
"mkfifo: No such device or address"
I am not able to understand why pipe creation is failing while other files are getting created.
I am using Solaris 10. Is there any way I can create named pipes in vobs?
/vobs/something/something means an MVFS path with a view set (as in cleartool setview).
First, try the same operation with the full view-extended path instead of setting a view. As I explain in "Python and ClearCase setview", setting a view creates a sub-shell, with all kinds of side effects for your processes (in terms of environment variables and other non-inherited attributes).
So try it in /views/MyView/vobs/something/something.
Second, regarding pipe, check if this thread applies to your case:
Just off the top of my head, if you're using a pipe and not a file, then it should be specified something like this:
destination my_pipe pipe("/data/pipes/net_pipe");
rather than
destination my_file file("/data/pipes/net_pipe");
Note that, for ClearCase up to 7.0.x:
ClearCase does not support adding special files such as named pipes, fifos or device files to source control. There are no type managers available to manage these special files.
Note: Attempts to execute these files in the MVFS are not supported.
WORKAROUNDS:
Keep multiple versions of directories with device files outside of a VOB, and versioned directories/symlinks in a VOB to point to the correct directory location outside the VOB.
Keep a tar or zip archive of the tree with device files in the VOB, and extract it to a temporary workspace when needed in the development process.
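The second workaround can be sketched outside ClearCase. The paths here are illustrative, and GNU tar is assumed (it can archive and restore fifos):

```shell
# Build a sample tree containing a named pipe on a normal filesystem
workdir=$(mktemp -d)
mkdir -p "$workdir/tree"
mkfifo "$workdir/tree/my_pipe"                    # works outside the MVFS

# Archive the tree; tar records the fifo as a special file
tar -C "$workdir" -cf "$workdir/tree.tar" tree

# Extract into a temporary workspace; the fifo is re-created
mkdir "$workdir/extract"
tar -C "$workdir/extract" -xf "$workdir/tree.tar"
[ -p "$workdir/extract/tree/my_pipe" ] && echo "fifo restored"
```

The tar archive itself is a plain file, so it can be checked into the VOB normally.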

How to use vhd-util to manage snapshots

I'm running several VMs within Xen, and now I'm trying to create/revert snapshots of my VMs.
Along with Xen and blktap2, another utility, vhd-util, is also delivered, and according to its description, I guess I can use it to create/revert VM snapshots.
To create a snapshot is actually easy, I just call:
vhd-util snapshot -n aSnapShot.vhd -p theVMtoBackup.vhd
But when it comes to reverting a snapshot, things get really annoying.
The "revert" command requires a mandatory argument "journal", like this:
vhd-util revert -n aSnapShot.vhd -j someThingCalledJournalOfWhichIHaveNoIdea
And vhd-util expects some info from the journal, which means it's not some empty file you can write logs into.
But I've gone through the code and the internet, and still have no idea where this journal comes from.
A similar question was asked at
http://xen.1045712.n5.nabble.com/snapshots-with-vhd-util-blktap2-td4639476.html but it was never answered.
Hope someone here could help me out.
Creating snapshots in VHD works by putting an overlay over the existing VHD image, so that any change gets written into the overlay file instead of overwriting existing data. On reads, the top-most data is returned: either the data from the overlay, if that sector/cluster has already been overwritten, or from the original VHD file, if it has not.
The vhd-util command creates such an overlay VHD file, which uses the existing VHD image as its so-called "backing file". It is important to remember that the backing file must never be changed while snapshots using it still exist. Otherwise the data would change in all those snapshots as well (unless the data was already overwritten there).
The process of using backing files can be repeated multiple times, which leads to a chain of VHD files. Only the top-most file should ever be written to; all other files should be treated as immutable.
Reverting to a snapshot is as easy as deleting the current top-most overlay file and creating a new empty overlay file, which again exposes the data from the backing file containing the snapshot. This is done by using the same command as above. It preserves your current snapshot and allows you to repeat the process multiple times.
(Renaming the file instead would be more like "revert to and delete the last snapshot".)
Warning: before re-creating the snapshot file, make sure that no other snapshots exist which use this (intermediate) VHD file as their backing file. Otherwise you would not only lose this snapshot, but all other snapshots depending on it.
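Put together, the "revert by re-creating the overlay" approach looks like this (a sketch only; it needs a real VHD chain, and the file names simply follow the question):

```shell
# 1. Create the snapshot: an overlay whose backing file is the VM image.
#    From now on the VM must run against aSnapShot.vhd, not the backing file.
vhd-util snapshot -n aSnapShot.vhd -p theVMtoBackup.vhd

# ... VM runs; all writes land in aSnapShot.vhd ...

# 2. "Revert": discard the writable overlay and create a fresh, empty one
#    on top of the same (untouched) backing file.
rm aSnapShot.vhd
vhd-util snapshot -n aSnapShot.vhd -p theVMtoBackup.vhd
```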
You don't need to use revert, all you need to do is shut down the VM, rename aSnapShot.vhd to theVMtoBackup.vhd and restart the VM.

exclude read-only files with rsync

While using rsync I would like to filter the files based on read/write attribute and potentially on timestamp. The manual does not mention that this would be possible. Well, is it?
In my shell I can do:
dir *.[CHch](w)
to list all writable C source files, so I hoped that:
rsync -avzL --filter="+ */" --filter='+ *.[CHch](w)' --filter='- *' remote_host:dev ~/
might work, but apparently it does not.
Any ideas?
As of version 3.0.8, rsync doesn't support filtering on anything other than filename.
Your best bet is probably using find to generate the list of files to sync, and then using rsync's --files-from option. find has almost all the options you could ever want for differentiating files.
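A sketch of that approach (the directory layout is illustrative; -perm -u+w is the GNU find spelling for "owner-writable"):

```shell
# Build a sample source tree with one writable and one read-only file
src=$(mktemp -d)
touch "$src/writable.c" "$src/readonly.c"
chmod u-w "$src/readonly.c"

# Generate the list of owner-writable C/H files, relative to $src
list=$(mktemp)
( cd "$src" && find . -type f -name '*.[CHch]' -perm -u+w ) > "$list"
cat "$list"                 # only ./writable.c is listed

# Then feed the list to rsync (not run here; remote_host is illustrative):
#   rsync -avzL --files-from="$list" remote_host:dev ~/
```

For the timestamp part of the question, find's -newer / -mtime tests can be added to the same invocation.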

Unix invoke script when file is moved

I have tons of files dumped into a few different folders. I've tried organizing them several times; unfortunately, there is no organizational structure that consistently makes sense for all of them.
I finally decided to write myself an application that I can add tags to files with, then the organization can be custom to the actual organizational structure.
I want to prevent orphaned data. If I move/rename a file, my tag application should be told about it so it can update the name in the database. I don't want it tagging files that no longer exist, or having to re-add tags for files that used to exist.
Is there a way I can write a callback that will hook into the mv command so that if I rename or move my files, they will invoke the script, which will notify my app, which can update its database?
My app is written in Ruby, but I am willing to play with C if necessary.
If you use Linux, you can use inotify (manpage) to monitor directories for file events. It seems there is a Ruby interface for inotify.
From Wikipedia:
Some of the events that can be monitored for are:
IN_ACCESS - read of the file
IN_MODIFY - last modification
IN_ATTRIB - attributes of file change
IN_OPEN and IN_CLOSE - open or close of file
IN_MOVED_FROM and IN_MOVED_TO - when the file is moved or renamed
IN_DELETE - a file/directory deleted
IN_CREATE - a file in a watched directory is created
IN_DELETE_SELF - file monitored is deleted
This does not work on Windows (and I think also not on other Unices besides Linux), as inotify does not exist there.
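From the shell, the same move/rename events can be watched with inotifywait from the inotify-tools package (an assumption: inotify-tools is installed; the watched path is illustrative, and the loop runs until interrupted):

```shell
# Print one line per move/rename in the watched directory;
# a tagging app could consume these lines and update its database
inotifywait -m -e moved_from -e moved_to /path/to/files |
while read -r dir event name; do
    echo "event=$event file=$dir$name"
done
```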
Can you control your users' PATH? Place a script or executable in a directory that appears in the PATH before the standard mv command. Have this script do what you require and then call the standard mv to perform the move.
Alternatively, add an alias to each user's profile and have the alias call your replacement mv command.
Or rename the existing mv command and place a replacement in the same directory; call it mv, and have it call the newly renamed mv after doing what you want.
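The PATH-shadowing idea might look like this (a sketch; the "notification" is just a log append here, standing in for whatever your tagging app listens to):

```shell
# Create a wrapper named `mv` in a directory that shadows /bin in PATH
bindir=$(mktemp -d)
log=$(mktemp)
cat > "$bindir/mv" <<EOF
#!/bin/sh
# hypothetical hook: record the invocation for the tagging app
echo "mv \$*" >> "$log"
exec /bin/mv "\$@"
EOF
chmod +x "$bindir/mv"
PATH="$bindir:$PATH"

# Any subsequent `mv` resolves to the wrapper, which logs then moves
cd "$(mktemp -d)"
touch a.txt
mv a.txt b.txt
cat "$log"            # shows: mv a.txt b.txt
```

The same wrapper works for the alias and rename variants; only how it gets onto the invocation path differs.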
