Salt GitFS and Pillar checkout location

For both GitFS file systems and Git Pillar, where are those files checked out? I don't seem to be able to locate them on my system.
My issue might be related to this: Salt External Pillar Issues

/var/cache/salt/master/gitfs and /var/cache/salt/master/git_pillar
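As far as I know, gitfs does not produce a normal working-tree checkout; the master keeps the repository data in its cache and serves files directly from it. A quick way to inspect it (a sketch, assuming the default cache_dir of /var/cache/salt/master):
# Inspect what the master has cached
ls /var/cache/salt/master/gitfs
ls /var/cache/salt/master/git_pillar
# Force an immediate fetch of all fileserver backends, including gitfs
salt-run fileserver.update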

Related

Is there a way to run "sys.list_modules" command without a minion?

As far as the documentation goes, if one would like to query for available execution modules, the following command should be used:
salt '<minion_name>' sys.list_modules
My question is whether it is possible to run the above without a minion (e.g. salt sys.list_modules; wouldn't that be cleaner?).
If you just want to know all the modules available to you, or the latest modules available for the current SaltStack version, run:
salt-call --local sys.list_modules
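To then see what a given module offers, sys.list_functions works the same way (the module name here is just an example):
salt-call --local sys.list_functions test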

How to install something using saltstack?

SaltStack documentation is very difficult and unclear for a beginner. If you could give a simple example of how to install something on a Vagrant machine using SaltStack, I'd be very grateful.
I believe some tutorials are out on the web. I could offer some of mine:
Setup Zabbix using Salt in a masterless setup, this installs among others a PHP stack needed for Zabbix.
Setup Consul in the cloud at DigitalOcean, with Saltstack. Includes a full script, but also works with Vagrant (see cheatsheet.adoc)
I think the biggest help for me when starting was the SaltStack tutorials.
https://docs.saltstack.com/en/getstarted/fundamentals/index.html
The states tutorial gives an example of installing rsync, lftp and curl:
https://docs.saltstack.com/en/getstarted/fundamentals/states.html
This tutorial shows how to set up what you need with Vagrant, with a master and a couple of minions; it covers the basics of targeting minions and of setting up state files (the files that tell Salt what to do on a minion).
There is a lot more to Salt than that, but it is a good start.
In SaltStack there are two types of machines:
Master: As the name suggests, this is the controlling machine. You can use it to run tasks on multiple minions.
Minion: Minions are the machines the master controls. Through the master you can run commands on minions, install packages, and run scripts on them. Basically, any command or task that you could run by logging into a minion machine, you should be able to accomplish through the master machine.
You can write all the tasks you want to perform on a minion in an sls file and run it.
SaltStack has functions which you call along with the desired arguments. Every function performs a specific task. SaltStack has execution modules and states modules.
Execution modules:
They are designed to perform tasks on a minion. For example, mysql.query will query a specified database. The execution module does not check whether the database needs to be queried; it just executes its task.
Have a look at the complete list of modules and you will see that they will just execute a task for you. https://docs.saltstack.com/en/latest/ref/modules/all/index.html
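For example, a couple of execution module calls run from the master (the targets here are just examples):
salt '*' test.ping
salt 'web*' pkg.install nginx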
States module:
The states module is a module too, but it's a special one. With the states module you can create states (the sls files under /srv/salt) for your Minions.
You could for example create a state that ensures the Minion has a web server configured for www.example.com.
After you have created your state you can apply it with the states module: salt '<minion_name>' state.apply example_webserver
The example_webserver state specifies what the Minion needs to have. If the Minion is already in the correct state, it does nothing. If the Minion is not in the correct state it will try to get there.
The states module can be found here: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.module.html
Sample sls file:
# This state makes sure git is installed. If it is, no action will be taken; if not, it will be installed.
git_install:
  pkg.installed:
    - name: git

# This state makes sure the folder with the specified name is not present. If it is present it will be deleted.
# Here "delete_appname_old" is the state ID and should not be duplicated in the same sls file.
delete_appname_old:
  file.absent:
    - name: /home/user/folder_name

# This state clones a git project.
clone_project:
  module.run:
    - name: git.clone
    - url: ssh://gitreposshclonelink
    - cwd: /home/user/folder_name
    - user: $username
    - identity: $pathofsshkey
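To apply the file, save it under /srv/salt (e.g. as git_setup.sls; the name is just an example) and run it against a minion:
salt '<minion_name>' state.apply git_setup
or directly on the minion:
salt-call state.apply git_setup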

Preserve files/directories for RPM upgrade in a .spec file (rpmbuild)

I wrote a .spec file on RHEL and I am building an RPM using rpmbuild. I need ideas on how to handle the situation below.
My RPM creates an empty log directory inside the installation folder the first time it installs, like below:
/opt/MyInstallation-1.0.0-1/ ---> executables
/opt/MyInstallation-1.0.0-1/lib/ ---> shared objects (.so files)
/opt/MyInstallation-1.0.0-1/config/ ---> XML and custom configuration files (.xml, etc.)
/opt/MyInstallation-1.0.0-1/log/ ---> where the application writes logs
When my RPM upgrades MyInstallation-1.0.0-1 to MyInstallation-1.0.0-2, for example, everything works as I want.
But my question is: how do I preserve the log files written under MyInstallation-1.0.0-1, or, more precisely, copy the log directory over to MyInstallation-1.0.0-2?
I believe if you tag the directory as %config, RPM expects the user to have files in there, so it will leave it alone.
I found a solution, or workaround, to this by trial and error :)
I am using rpmbuild version 4.8.0 on RHEL 6.3 x86_64. I believe it will work on other distros as well.
If you install under a single name like "MyInstallation", rather than "MyInstallation-<version>-<release>", and create the log directory as a standard directory (no additional flags on it; see the original question for the scenario), then an upgrade normally won't touch the log directory and RPM will leave its contents as they are. All you have to do is ensure that you keep the line below in the %install section:
%install
install --directory $RPM_BUILD_ROOT%{_prefix}/%{name}/log
Here, %{_prefix} and %{name} are macros; they have nothing to do with the underlying concept.
Regarding config files, the following is a very precise table that will help you guard your config files. Again, this rule doesn't apply to the logs our applications create:
http://www-uxsup.csx.cam.ac.uk/~jw35/docs/rpm_config.html
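For completeness, a minimal sketch of the corresponding %files entries, assuming the layout from the question (the config file name is a placeholder):
%files
# ship the log directory itself without claiming its contents
%dir %{_prefix}/%{name}/log
# keep a user-modified config in place on upgrade; the new version is written as .rpmnew
%config(noreplace) %{_prefix}/%{name}/config/settings.xml
%dir packages only the directory, and %config(noreplace) preserves an edited file on upgrade.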

How do I migrate an Artifactory repo to a shared filesystem repo?

I teach high school computer science using Scala and I've managed to set up an Artifactory repo so that when my students download dependencies, we're doing most of our downloading inside the lab, rather than over the internet.
However, all our home folders are on a network drive and the terminals the students use don't have their own hard disks, so it seems silly to have dozens of copies of the same dependencies. Unfortunately, even with an Artifactory repo, SBT/Ivy copies all the artifacts into each user's ~/.ivy2/cache directory.
I've heard that, if I set up a shared filesystem repo then the artifacts won't be copied. What I can't figure out is how to export all the artifacts that Artifactory has cached for me in a format that would be recognized as a filesystem repo. (Exporting normally puts each remote repo in a separate folder that I suppose I'd have to somehow unify, but I'm not exactly sure how to do that. If that's the easiest thing to do, please explain how carefully.)
What I think I'd like to export is the remote-repos virtual repository, but that's not available as a choice on the Export page.
The other tricky part of this is that the same build file should be usable at home, where there is no proxy repo, so I'm relying on the fact that I can use /etc/sbt/sbtopts to override the repository resolution within the lab environment.
Change Ivy home
Define your sbt script with ${SBT_OPTS}:
exec java -Xmx1512M -XX:MaxPermSize=512M ${SBT_OPTS} -jar /etc/sbt/sbt-launch-0.13.0.jar "$@"
Then only in your network environment set SBT_OPTS as:
$ export SBT_OPTS="-Dsbt.ivy.home=/etc/sbt/repository"
The students will probably need write access to the directory.
What you can also do is use davfs2 on Linux or "web folders" on Windows to simply mount Artifactory as a WebDAV resource (read-only). This avoids any indirection via a local file system and the need to keep such a copy up to date.
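A rough sketch of the davfs2 route (the host name, port, and repository path are assumptions, not from the question):
# Mount an Artifactory repository read-only via WebDAV (requires davfs2)
sudo mount -t davfs -o ro http://artifactory.example.com:8081/artifactory/libs-release /mnt/artifactory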
Unmanaged dependencies
How about a solution that will get you out of Ivy?
In your original build using managed dependencies, run
> show full-classpath
This should display something like the following:
[info] List(Attributed(/home/foo/helloworld/target/scala-2.10/classes), Attributed(/home/foo/.ivy2/cache/com.eed3si9n/treehugger_2.10/jars/treehugger_2.10-0.3.0.jar), Attributed(/foo/.sbt/0.13.0/boot/scala-2.10.2/lib/scala-library.jar), Attributed(/home/foo/.ivy2/cache/com.github.scopt/scopt_2.10/jars/scopt_2.10-3.0.0.jar))
Create a directory named /shared/project1/lib or something and copy all the above jars in there, except for scala-library.jar.
Next, make a copy of your build and replace libraryDependencies with the following:
unmanagedBase := file("/shared/project1/lib")
You should still be able to compile the code.
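If there are many jars, something along these lines can collect them in one go (a sketch; the cache layout matches the sample output above, and the glob grabs everything in the Ivy cache, so prune anything you don't want):
mkdir -p /shared/project1/lib
cp /home/foo/.ivy2/cache/*/*/jars/*.jar /shared/project1/lib/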

How to modify salt-minion settings on Windows without reinstalling

I have a question about modifying salt-minion settings on Windows.
When I try to modify the Salt master address or the minion name, I have to uninstall and reinstall to enter this information.
I tried to find the files or registry entries holding this information, but in vain.
Is there any solution?
Thanks in advance.
You just need to modify the Salt minion's config file, which by default is found here:
c:\salt\conf\minion
On newer versions it's now C:\ProgramData\Salt Project\Salt\conf\minion
If you don't see ProgramData, enable hidden items in Explorer: View -> Hidden items
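The settings in question are plain YAML keys in that file, and the minion service has to be restarted for changes to take effect (the values below are examples; salt-minion is the default service name):
# in the minion config file
master: 192.168.1.10
id: my_windows_minion
Then, from an elevated command prompt:
net stop salt-minion
net start salt-minion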
