SaltStack: Get logged-in user name

I'm trying to use Salt to set up my dev environment so that it can be identical to my staging environment. One of the things I need to do is add some environment variables to the current user's .bashrc file.
I currently have this in my .sls file:
/home/fred/.bashrc:
  file.append:
    - text:
      - export GOROOT=/usr/local/go
      - export GOPATH=/home/fred/dev/go
      - export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
I then use salt-call to run the state locally as root (since there are other things that I need root for). This isn't ideal, however, if your name isn't fred. How can I rewrite this so that it will work for the current user even when salt-call is run as root?
It seems like I can get to the name of the machine, so if the machine name is something like username-dev, would I be able to parse that somehow and replace all of the instances of fred with the new username? Is there a better way?

This method gives you the ability to pass in a different username each time. I'm assuming that you don't have a specific list of users:
/home/{{ pillar.newuser }}/.bashrc:
  file.append:
    - text:
      - export GOROOT=/usr/local/go
      - export GOPATH=/home/{{ pillar.newuser }}/dev/go
      - export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
The command line:
salt-call state.sls golang_env pillar='{"newuser": "fred"}'
But if you do have a specific list of users, then you can loop over it, as mentioned in the docs; a sketch of that approach follows.
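Here is a minimal sketch of that approach; the pillar key dev_users and its contents are hypothetical placeholders:
{% for user in pillar.get('dev_users', []) %}
/home/{{ user }}/.bashrc:
  file.append:
    - text:
      - export GOROOT=/usr/local/go
      - export GOPATH=/home/{{ user }}/dev/go
      - export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
{% endfor %}
with a pillar file along the lines of:
dev_users:
  - fred
  - alice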

Related

How to load Autosys profile in jil

I have an autosys job which has a profile value specified in the JIL. I am very new to autosys and I need to check the logs of the job. When I go to the host, it says that the log location specified in the JIL (like $AUTLOG/errorlogs) does not exist. Do I have to load some profile on the host? How can I do this so that I am able to access the variables in the profile?
While defining the JIL for the job, add the following attribute:
profile:/path/profile.ini
All the variables defined in /path/profile.ini file would be read.
Remember to export the variables inside the profile.ini file, e.g.:
export Var1="some_path"
export Var2="DB_USER"
Good luck!

Salt state to change permissions on a single existing file

I know this is probably something super easy that I am just overlooking. For the life of me, I don't see an existing salt state that one can use to simply change the permissions on an already existing file. There's a file.managed state that can be used to "create" a file based on a source, but what if you just want to ensure that the permissions on a file not created through salt are correct, and update them if not?
For example, I can create a state like the following:
base security tcpd host-allows:
  file.managed:
    - name: /etc/hosts.allow
    - create: False
    - user: root
    - group: root
    - mode: 644
However, when I apply this state, I get a warning:
[WARNING ] State for file: /etc/hosts.allow - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
Is there a better way to handle something like this?
Yes, file.managed is normally used to manage file content, but it also has a replace argument that lets it enforce only permissions and ownership.
See https://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html#salt.states.file.managed
replace: True
    If set to False and the file already exists, the file will not be modified even if changes would otherwise be made. Permissions and ownership will still be enforced, however.
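Applied to the state from the question, a minimal sketch that sets replace explicitly (same hypothetical state ID and file) and so avoids the warning:
base security tcpd host-allows:
  file.managed:
    - name: /etc/hosts.allow
    - create: False
    - replace: False
    - user: root
    - group: root
    - mode: 644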

What is the best means of securely delivering minion specific files using Salt?

I have a number of files that I need to transfer to specific minion hosts in a secure manner, and I would like to automate that process using Salt. However, I am having trouble figuring out the best means of implementing a host restricted transfer.
The salt fileserver works great for non-host-specific transfers. However, some of the files that I need to transfer are customer-specific, so I need to ensure that they are only accessible from specific hosts. Presumably Pillar would be the ideal candidate for minion-specific restrictions, but I am having trouble figuring out a means of specifying file transfers using pillar as the source.
As far as I can tell Pillar only supports SLS based dictionary data, not file transfers. I’ve tried various combinations of file.managed state specifications with paths constructed using various convolutions (including salt://_pillar/xxx), but thus far I have not been able to access anything other than token data defined within an SLS file.
Any suggestions for how to do this? I am assuming that secure file transfers should be a common enough need that there should be a standard means of doing it, as opposed to writing a custom function.
The answer depends on what exactly you're trying to secure. If only a part of the files involved are "sensitive" (for example, passwords in configuration files), you probably want to use a template that pulls the sensitive parts in from pillar:
# /srv/salt/app/files/app.conf.jinja
[global]
user = {{ salt['pillar.get']("app:user") }}
password = {{ salt['pillar.get']("app:password") }}
# ...and so on
For this case you don't need to care if the template itself is accessible to minions.
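A deployment state for that template might look like the following sketch; the destination path and file mode are assumptions for illustration:
/etc/app/app.conf:
  file.managed:
    - source: salt://app/files/app.conf.jinja
    - template: jinja
    - user: root
    - group: root
    - mode: 600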
If the entire file(s) involved are sensitive, then I think you want to set up the file_tree external pillar, and use file.managed with the contents_pillar option. That's not something I've worked with, so I don't have a good example.
Solution Synopsis: Using PILLAR.FILE_TREE
A: On your master, set up a directory from which you will serve the private files (e.g. /srv/salt/private).
B: Beneath that create a “hosts” subdirectory, and then beneath that create a directory for each of the hosts that will have private files.
/srv/salt/private/hosts/hostA
/srv/salt/private/hosts/hostB
… where hostA and hostB are the ids of the target minions.
See the docs if you want to use node-groups instead of host ids.
C: Beneath the host dirs, include any files you want to transfer via pillar.
echo 'I am Foo!' > /srv/salt/private/hosts/hostA/testme
D: In your master config file (e.g: /etc/salt/master), include the following stanza:
ext_pillar:
  - file_tree:
      root_dir: /srv/salt/private
      follow_dir_links: False
      keep_newline: True
      debug: True
E: Create a salt state file to handle the transfer.
cat > /srv/salt/files/base/foo.sls << END
/tmp/pt_test:
  file.managed:
    - contents_pillar: testme
END
F: Refresh the pillar data (for example with salt hostA saltutil.refresh_pillar), and then run your state command:
salt hostA state.apply foo
Following the last step, hostA should have a file named /tmp/pt_test that contains the text “I am Foo!”.

Database not stored on Docker image with shinyproxy

I have developed a solution and decided to use shinyproxy.
I have the following problem:
A user needs to capture data in the solution, which must be stored and updated in the database for all users accessing the solution. I used SQLite with R for this.
Now when I log in and capture data, it saves, but when I log in as a different user I can't find the captured data.
The problem is that the data doesn't seem to be saved to the Docker image. Why is this, and how can I go about remedying it?
For problem testing purposes:
Solution link: https://xxasdfqexx.com
Data Capturer user:
username: xxxxx
password: Fxxxx
Admin user:
username: inxxx
password: prupxxxxx
Testing:
Inside the solution, if you go to the data management tab, then data entry, right-click on a table, insert a new row, and click save changes, it should save the new changes to the Docker image, but it only does so temporarily; the other user can't see the changes made.
This is expected behavior. The SQLite DB is stored within the running container, not the image. It is therefore lost when the container is shut down. And shinyproxy starts a new container for every user. The general solution with docker is to use an external volume that is mounted into the running container and use that file/directory to store persistent data. Together with shinyproxy, you have to use docker-volumes, cf.
https://www.shinyproxy.io/configuration/#apps.
Putting this together, you can use something like this in the shinyproxy config:
apps:
  - name: ...
    docker-cmd: ...
    docker-image: ...
    docker-volumes: /some/local/path/:/mnt/persistent
And in the shiny app something like:
dbConnect(RSQLite::SQLite(), "/mnt/persistent/my-db.sqlite")
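For concreteness, a filled-in sketch of such an apps entry, reusing the keys shown above; the app name, command, image, and host path are hypothetical placeholders:
apps:
  - name: myapp
    docker-cmd: ["R", "-e", "shiny::runApp('/root/app')"]
    docker-image: myorg/myapp:latest
    docker-volumes: /srv/shinyproxy/myapp-data:/mnt/persistent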

How do I manage minion roles with salt-stack pillars?

I'm trying to understand how to use Salt with roles, the way Chef can be used, but I have some holes in my understanding that reading lots of docs has failed to fill at this point.
The principal issue is that I'm trying to manage roles with Salt, like Chef does, but I don't know how to appropriately set the pillar value. What I want to do is assign, either by script or by hand, a role to a vagrant box and then have the machine install the appropriate files on it.
What I don't understand is how I can set something that will tell salt-master what to install on the box given the particular role I want it to be. I tried setting a
salt.pillar('site' => 'my_site1')
in the Vagrantfile and then checking it in the salt state top.sls file with
{{ pillar.get('site') == 'my_site1'
-<do some stuff>
But this doesn't work. What's the correct way to do this?
So, it becomes easier when matching IDs in pillars. First, set the minion_id to be something identifiable, for example test-mysite1-db (and above all unique; hence the username initials at the end, as an example).
In the top.sls in /srv/pillar, do:
base:
  '<regex_matching_id1>':
    - webserver.mysite1
  '<regex_matching_id2>':
    - webserver.mysite2
And then in webserver.mysite1 put, for example:
role: mysiteid1
Then in /srv/state/top.sls you can match with Jinja, or just with:
base:
  'role:mysiteid1':
    - match: pillar
    - state
    - state2
Hence, roles derived from the ids. Works for me.
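For completeness, the Jinja-based alternative mentioned above might look roughly like this sketch (common, state, and state2 are placeholder state names):
base:
  '*':
    - common
{% if pillar.get('role') == 'mysiteid1' %}
    - state
    - state2
{% endif %}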
How to implement and utilize roles is intentionally vague in the salt documentation. Every permutation of how to implement, and then how to use, roles carries with it trade-offs -- so it is up to you to decide how to do it.
In your scenario I can assume that you want rather singular 'roles' or purposes associated with a virtualbox VM, and then have state.highstate run the states associated with that role.
If the above is correct, I would go with grains rather than pillars while learning salt for the sake of simplicity.
On each minion
Just add role: webserver to /etc/salt/grains and restart the salt-minion.
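A minimal sketch of that grains file, using the example role from above:
# /etc/salt/grains
role: webserver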
On the master
Update the /srv/state/top.sls file to associate state .sls files with that grain:
base:
  '*':
    - fail2ban
  'role:webserver':
    - match: grain
    - nginx
  'role:dbserver':
    - match: grain
    - mysql
