How do I retrieve the home directory of a user in SaltStack?

Let's say I have a service with a pillar-configured user associated with it. Now I want to fetch a tar.gz and put it in this user's home directory. How do I do that?
user.info returns a bunch of data, including the home directory, but how do I get only that?
In other words:
foo:
  archive:
    - extracted
    - name: {{ <get the user home directory here> }}
    ...

Trial and error got me there:
{% set my_user = ..get your pillar user or default to a sane value.. %}
{{ salt['user.info'](my_user).home }}
resolves to the user's home directory
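
Putting the pieces together, a minimal sketch of the whole state could look like this (the pillar key foo:user, the fallback user name, and the archive URL and hash are placeholders, not values from the question):

{# assumption: the user name lives under the pillar key 'foo:user' #}
{% set my_user = salt['pillar.get']('foo:user', 'someuser') %}
{% set my_home = salt['user.info'](my_user).home %}

foo:
  archive.extracted:
    - name: {{ my_home }}/foo                      {# extract into the user's home directory #}
    - source: https://example.com/foo.tar.gz       {# placeholder URL #}
    - source_hash: sha256=<hash of the tarball>    {# placeholder hash #}
    - user: {{ my_user }}

Jinja is rendered before the state is compiled, so my_home is already a plain string by the time archive.extracted runs.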

Related

I get no image in the viewer panel when I run blogdown's serve_site()

I used the following code to create the blogdown site with Hugo 0.88.1:
new_site(theme = 'puresyntax71/hugo-theme-chunky-poster',
         format = 'toml')
As per the instructions I am following, a site is supposed to load in the viewer panel, which is not happening.
I have tried the check_site() function, and within the results I get:
Checking .gitignore
| Checking for items to remove...
○ Nothing to see here - found no items to remove.
| Checking for items to change...
○ Nothing to see here - found no items to change.
| Checking for items you can safely ignore...
○ Found! You have safely ignored: .DS_Store, Thumbs.db
| Checking for items to ignore if you build the site on Netlify...
● [TODO] When Netlify builds your site, you can safely add to .gitignore: /public/, /resources/
| Checking for files required by blogdown but not committed...
● [TODO] Found 1 file that should be committed in GIT:
layouts/shortcodes/blogdown/postref.html
So I went to inspect the shortcode referred to above, which had the following content:
{{ if eq (getenv "BLOGDOWN_POST_RELREF") "true" }}{{ .Page.RelPermalink }}{{ else }}{{ .Page.Permalink }}{{ end }}
How can I commit that file to Git as instructed? And is that the reason I am not getting a preview of the site?
NOTE: Important information I had not mentioned before: the following is the error message I receive when I try serve_site():
Launching the server via the command:
C:/Users/User/AppData/Roaming/Hugo/0.88.1/hugo.exe server --bind 127.0.0.1 -p 4321 --themesDir themes -t hugo-theme-chunky-poster -D -F --navigateToChanged
Error: "C:\Users\User\Documents\20210913_blogdown\web-tea\config.toml:1:1": unmarshal failed: toml: table markup already exists
Update: This error message
Error: "C:\Users\User\Documents\20210913_blogdown\web-tea\config.toml:1:1": unmarshal failed: toml: table markup already exists
indicates that you have two [markup] tables in config.toml, like this:
[markup]
  [markup.highlight]
    codeFences = false

[markup]
  [markup.goldmark]
    [markup.goldmark.renderer]
      unsafe = true
You have to merge them like this:
[markup]
  [markup.highlight]
    codeFences = false
  [markup.goldmark]
    [markup.goldmark.renderer]
      unsafe = true
I cannot reproduce your problem. It seems this theme requires an access token for Instagram. I just configured an arbitrary string in accessToken like this in config.toml:
[services]
  [services.instagram]
    disableInlineCSS = true
    accessToken = "aaaa"
Then blogdown::serve_site() worked fine.

How to create/deploy multiple instances of Tomcat on a single server

I have an exercise where I have to deploy app WAR files onto multiple Tomcat instances available on the same server. I am using Salt as my configuration management tool here. I have also gone through some examples of the Salt orchestrate runner, but nothing seems to help. I am also confused about arranging the pillar variables for multiple instances in the pillar file.
I am able to deploy the app on a single instance without any trouble.
Pillar file:
appname:
  name: <location of the instance1 webapps folder>
  typer: war
State file:
archive.download:
  <download the war directly to the instance1 webapps folder>
cmd.run:
  <restart instance1>
I need help including the second instance's details and achieving the state deployment in the most optimized way possible. Thanks.
On the pillar side you might be able to use a list, and then a Jinja loop over it for the installation in the state file.
pillar:
applist:
  - location: salt://path_to_archive_wi1
    destination: /webapps_i1
    name: test1
    typer: war
  - location: salt://path_to_archive_wi2
    destination: /webapps_i2
    name: test2
    typer: war
  - location: salt://path_to_archive_wi3
    destination: /webapps_i3
    name: test3
    typer: war
state file:
{%- for app in salt['pillar.get']("applist", []) %}
copy {{ app['name'] }}:
  file.managed:
    - name: {{ app['destination'] }}
    - source: {{ app['location'] }}
{%- endfor %}
Something like this should do it.
Since each loop iteration installs one app into one instance, you can also restart that instance inside the loop, as sketched below.
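
For example, a rough sketch of the same loop with a restart added (the shutdown/startup script paths are placeholders for however your Tomcat instances are actually controlled, and the WAR is written into the destination folder under its own name):

{%- for app in salt['pillar.get']('applist', []) %}
copy {{ app['name'] }}:
  file.managed:
    {# place the WAR inside the instance's webapps folder #}
    - name: {{ app['destination'] }}/{{ app['name'] }}.{{ app['typer'] }}
    - source: {{ app['location'] }}

restart instance for {{ app['name'] }}:
  cmd.run:
    {# placeholder restart command - adjust to your Tomcat layout #}
    - name: /opt/tomcat_{{ app['name'] }}/bin/shutdown.sh; /opt/tomcat_{{ app['name'] }}/bin/startup.sh
    - onchanges:
      - file: copy {{ app['name'] }}
{%- endfor %}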

FOSUserBundle with Symfony 3.0

I'm trying to get FOSUserBundle up and running after upgrading Symfony from version 2.6 to 3.0. On 2.6 I basically had a working site: CSS and JS were handled by Assetic, FOSUserBundle was up and running, and I had configured SQLite via Doctrine.
So before picking this project up again I did the upgrade, because 2.6 was no longer supported.
Because I'm new to Symfony, everything I did is more or less copied from the documentation. FOSUserBundle is showing up as an enabled bundle in dev mode. All my code is in /src/AppBundle. I'm using the 2.0-dev version of FOSUserBundle (e770bfa to be precise).
Here's a part of my template:
<li>
  {% if is_granted("IS_AUTHENTICATED_REMEMBERED") %}
    <a href="{{ path('fos_user_security_logout') }}">
      {{ 'layout.logout'|trans({}, 'FOSUserBundle') }}
    </a>
  {% else %}
    <a href="{{ path('fos_user_security_login') }}">
      {{ 'layout.login'|trans({}, 'FOSUserBundle') }}
    </a>
  {% endif %}
</li>
On 2.6 I had
fos_user:
    resource: "#FOSUserBundle/Resources/config/routing/all.yml"
in my /src/AppBundle/config/routing.xml. If I keep it this way, I can't even load the site because of an exception:
"Unable to generate a URL for the named route "fos_user_security_login" as such route does not exist."
After putting it in /app/config/routing.yml, my site shows and the other routes defined in /src/AppBundle/config/routing.xml work, so there was nothing wrong with that file.
When the site shows up, the translation of the links is broken.
{{ 'layout.login'|trans({}, 'FOSUserBundle') }}
shows up as "layout.login" instead of "Login" as before with 2.6. And if I click the link to log in (the path is /login, as expected), Symfony tells me:
Unable to find template "AppBundle:Pages:login.html.twig".
I don't get why it's looking for it in the AppBundle folder. According to what I read, it should look for it in the FOSUserBundle folder or in /app/Resources/FOSUserBundle/..., which is where I put it to override the default template. I confirmed that this is still the way to override templates, even with version 3.0.
I also tried putting the template in the AppBundle/Pages/ folder. Then it finds the template, but it's still not working.
I cleared the cache multiple times (this solved another problem with Assetic).
It looks to me as if I'm missing a significant part of getting FOSUserBundle to work. Any suggestions on what I might have overlooked?
FOSUserBundle seems to use 'xml' files for routing, and you have included a 'yml' file in your routing.xml:
fos_user:
    resource: "#FOSUserBundle/Resources/config/routing/all.yml"

How can I store static data to be available for both Reactor and minions?

I wish to store my Slack API key so that it is accessible both from Reactor states and from states executed by minions (such as when running a highstate):
slack_api_key: xxx
If I add the data to a pillar, it is only accessible from minions executing states:
{{ salt['pillar.get']('slack_api_key') }}
If I add the data to the master config, it is only accessible from the Reactor:
{{ opts['slack_api_key'] }}
How can I store this data and be able to access it from both the Reactor and from states included in my highstate?
One solution is to set the following in the master configuration:
# The pillar_opts option adds the master configuration file data to a dict in
# the pillar called "master". This is used to set simple configurations in the
# master config file that can then be used on minions.
pillar_opts: True
# Slack API key
slack_api_key: 'xxx'
Then any data in the master configuration can be accessed like this...
From minions:
{{ salt['pillar.get']('master:slack_api_key') }}
- or -
{{ pillar['master']['slack_api_key'] }}
From Reactor:
{{ opts['slack_api_key'] }}
However, this is not a great answer, as any data in the master configuration is now exposed to minions.
You might try using sdb for this.
https://docs.saltstack.com/en/latest/topics/sdb/
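
As a rough sketch of the sdb route (the profile name, file path, and key below are placeholders; see the linked docs for the driver that suits your setup), you define an sdb profile in the master and/or minion configuration and then reference it by sdb:// URI wherever the value is needed:

# master and/or minion config: an sdb profile backed by a YAML file
my_secrets:
  driver: yaml
  files:
    - /srv/salt/secrets.yaml    # this file would contain: slack_api_key: xxx

A state rendered on a minion can then read the value with
{{ salt['sdb.get']('sdb://my_secrets/slack_api_key') }}
and on the master side the sdb runner (salt-run sdb.get sdb://my_secrets/slack_api_key) gives you the same access, so the key itself no longer has to live in the master config.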

Ensure that only required data stays and outdated data is removed

We use Salt to bootstrap our web server. We host multiple different domains: I create a file in /etc/apache2/sites-available for each of our domains and then symlink it into sites-enabled.
The problem is that if I move a domain to a different server, the link in sites-enabled is not removed. If I change the domain name and keep the data in place, I end up with both old.domain.com and new.domain.com vhost files. I expect to end up with only new.domain.com in sites-enabled, but both files are there, and which domain wins depends (I guess) on which of the vhosts is loaded first, alphabetically.
I have the domains stored in pillars and generate the vhosts like:
{%- for site in pillar.sites %}
/etc/apache2/sites-available/{{ site.name }}:
  file:
    - managed
    - source: salt://apache/conf/sites/site
    - template: jinja
    - require:
      - file: /etc/apache2/sites-available/default
      - cmd: apache_rewrite_enable
    - defaults:
        site_name: "{{ site.name }}"

/etc/apache2/sites-enabled/{{ site.name }}:
  file.symlink:
    - target: /etc/apache2/sites-available/{{ site.name }}
    - require:
      - file: /etc/apache2/sites-available/{{ site.name }}
{% endfor %}
I need to make sure that only the vhosts listed in the pillars remain after a highstate. I thought about emptying the folder first, but that feels dangerous, as the highstate may fail midway and I would be left without any vhosts, crippling all the other domains, just because I tried to add one.
Is there a way to enforce something like "remove everything that was not present in this highstate run"?
Yes, the problem is that Salt doesn't do anything you don't specify. It would be too hard (and quite dangerous) to try to automatically manage a whole server by default. So file.managed and file.symlink just make sure that their target files and symlinks are present and in the correct state -- they can't afford to worry about other files.
You have a couple of options. The first is to clean the directory at the beginning of each highstate. Like you mentioned, this is not ideal, because it's a bit dangerous (and if a highstate fails, none of your sites will work).
The better option would be to put all of your sites in each minion's pillar: some would go under the 'sites' key in pillar, and the rest would go under a 'disabled' key in pillar. Then you could use the file.absent state to make sure each of the 'disabled' site files is absent, as well as the symlinks for those files (see the sketch below).
Then, when you move a domain from host to host, rather than just removing that domain from the pillar of the previous minion, you would actually move it from the 'sites' key to the 'disabled' key. Then you'd be guaranteed that that site would be gone.
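
A minimal sketch of that 'disabled' loop, following the same pattern as the existing 'sites' loop (this assumes the 'disabled' entries have the same shape as the 'sites' entries, i.e. each has a name):

{%- for site in salt['pillar.get']('disabled', []) %}
remove vhost link {{ site.name }}:
  file.absent:
    - name: /etc/apache2/sites-enabled/{{ site.name }}

remove vhost {{ site.name }}:
  file.absent:
    - name: /etc/apache2/sites-available/{{ site.name }}
{% endfor %}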
Hope that helps!
