I am trying to work through the Salt Formulas documentation and seem to be having a fundamental misunderstanding of what a salt formula really is.
Understandably, this question may seem like a duplicate of these questions, but because I'm failing to grasp the basic concepts, I'm also struggling to make use of their answers.
I thought that a Salt formula is basically just a package that implements extra functions, much like
#include <string.h>
in C, or
import numpy as np
in Python. Thus, I thought, I could download the salt-formula-linux to /srv/formulas/salt-formula-linux/, add that to file_roots, restart the master (all as per the docs), and then write a file like swapoff.sls containing
disable_swap:
  linux:
    storage:
      swap:
        file:
          enabled: False
(the above is somewhat similar to the examples in the repo's root) in the hope that the formula would then handle removing the swap entry from /etc/fstab and running swapoff -a for me. Needless to say, this didn't work, clearly because I'm not understanding what a Salt formula is meant to be.
So, what is a salt formula and how do I use it? Can I make use of it as a library of functions too?
This answer might not be fully correct in all technicalities, but this is what solved my problem.
A salt formula is not a library of functions. It is, rather, a collection of state files. A state file can often be very simple, such as some of my user-defined ones:
--> top.sls <--
base:
  '*':
    - docker
--> docker.sls <--
install_docker_1703:
  pkgrepo.managed:
    # stuff
  pkg.installed:
    - name: docker-ce
creating a state file like
--> swapoff.sls <--
disable_swap:
  linux.storage.swap: # and so on
is perhaps not the way to go. At least, maybe not for a beginner.
Instead, add an item to top.sls:
- linux.storage.swap
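Together with the docker entry from before, the whole state top file would then look something like this (a minimal sketch):
--> top.sls <--
base:
  '*':
    - docker
    - linux.storage.swap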
This is not enough, however. Most formulas (or the state files within them, if you will) are highly parametrizable, i.e. they're full of placeholders with variable names, such as {{ swap.device }}. If there's nothing to fill this gap, the state file will not be able to do anything. These gaps are filled from pillars.
All that remains is to create a file like swap.sls in /srv/pillar/ containing something like (as per the examples of that formula)
linux:
  storage:
    enabled: true
    swap:
      file:
        enabled: true
        engine: file
        device: /swapfile
        size: 1024
and also /srv/pillar/top.sls with
base:
  '*':
    - swap
Perhaps /srv/pillar should also be included in pillar_roots in /etc/salt/master.
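If it is not there yet, that setting would look something like this (a sketch; base and the path shown are the stock defaults):
--> /etc/salt/master <--
pillar_roots:
  base:
    - /srv/pillar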
So now /srv/salt/top.sls runs /srv/formulas/salt-formula-linux/linux/storage/swap.sls which, using the guidance of /srv/pillar/top.sls, pulls some parameters from /srv/pillar/swap.sls and enables a swapfile.
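Roughly, these are the pieces involved and how they refer to each other (paths as used above):
/etc/salt/master                   file_roots and pillar_roots point at the directories below
/srv/salt/top.sls                  state top file; assigns linux.storage.swap to minions
/srv/formulas/salt-formula-linux/  the formula; its linux/storage/swap.sls does the actual work
/srv/pillar/top.sls                pillar top file; assigns the swap pillar to minions
/srv/pillar/swap.sls               the parameters that fill the formula's placeholders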
I have a main config file, let's say config.yaml:
num_layers: 4
embedding_size: 512
learning_rate: 0.2
max_steps: 200000
I'd like to be able to override this, on the command-line, with another file, like say big_model.yaml, which I'd use conceptually like:
python my_script.py --override big_model.yaml
and big_model.yaml might look like:
num_layers: 8
embedding_size: 1024
I'd like to be able to override with an arbitrary number of such files, each one taking priority over the last. Let's say I also have fast_learn.yaml
learning_rate: 2.0
And so I'd then want to conceptually do something like:
python my_script.py --override big_model.yaml --override fast_learn.yaml
What is the easiest/most standard way to do this in hydra (or perhaps in omegaconf)?
(note that I'd ideally like these override files to just be standard yaml files that override the earlier ones; though if I have to use an override DSL instead, I can do that, if that's the easiest/most standard way)
It sounds like a package override might be a good solution for you.
The documentation can be found here: https://hydra.cc/docs/next/advanced/overriding_packages
An example application can be found here:
https://github.com/facebookresearch/hydra/tree/master/examples/advanced/package_overrides
Using that example application, you can achieve the override by doing something like
$ python simple.py db=postgresql db.pass=helloworld
db:
  driver: postgresql
  user: postgre_user
  pass: helloworld
  timeout: 10
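The layout behind that command is roughly the following (these file names are my assumption from the usual Hydra pattern; see the linked repo for the real ones):
conf/config.yaml          # primary config; its defaults list selects a db option
conf/db/mysql.yaml        # one option of the db config group
conf/db/postgresql.yaml   # the option selected above via db=postgresql
db=postgresql picks the file, and db.pass=helloworld then overrides a single value inside it.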
Refer to the basic tutorial and read about config groups.
You can create arbitrary config groups and select one option from each (as of Hydra 1.0, config group options are mutually exclusive). You will need two config groups here:
one can be model, with normal, small and big options, and another can be trainer, with maybe normal and fast options.
Config groups can also override things in other config groups.
You can also always append to the defaults list from the command line - so you can also add additional config groups that are only used in the command line.
An example of that is an 'experiment' config group. You can use it as:
$ python train.py +experiment=exp1
In config groups that override things across the entire config, you should use the global package (read more about packages in the docs).
# @package _global_
num_layers: 8
embedding_size: 1024
learning_rate: 2.0
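For completeness, a sketch of how the pieces could fit together (these file names are assumptions, not from the question). The primary config composes the groups via its defaults list, and the snippet above would live in conf/experiment/exp1.yaml:
--> conf/config.yaml <--
defaults:
  - model: normal
  - trainer: normal
--> conf/model/big.yaml <--
num_layers: 8
embedding_size: 1024
A run combining everything would then be:
$ python train.py model=big trainer=fast +experiment=exp1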
I've got a few configuration values in my application that are tied to ip, mac addr, server name, etc and am trying to come up with a modular state that will match on the correct value from my pillar with a pillar.get. I'm not finding any info on how to use grain values in pillar.get.
pillar example:
account:
  qa:
    server1:
      appUsername: 'user2'
      appPassword: 'password2'
    server2:
      appUsername: 'user2'
      appPassword: 'password2'
  prod:
    server3:
      appUsername: 'user3'
      appPassword: 'password3'
    server4:
      appUsername: 'user4'
      appPassword: 'password4'
Lines from my template:
keyUser={{ salt['pillar.get']('account:grains['env']:grains['id']:appUsername', 'default_user') }}
keyPass={{ salt['pillar.get']('account:grains['env']:grains['id']:appPassword', 'default_pass') }}
This just seems so natural, but whatever I try errors out, or will escape actual grain lookup and give me the defaults. I also can't find anything on google. Anybody have a solution? Should I dynamically set the appUsername and appPassword values on the pillar instead? I like the layout in the pillar as is, because it's a great easy to read lookup table, without a ton of conditional jinja.
First, you can't just embed grains['env'] into the pillar lookup string; you'll need to concatenate. Second, your Jinja assignment looks wrong. Try this:
{% set keyUser = pillar.get('account:'~grains['env']~':'~grains['id']~':appUsername', 'default_user') %}
~ is the concatenate operator in Jinja.
Also, salt['pillar.get']('blah') is the same as pillar.get('blah').
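Applied inline to the two template lines from the question, the same fix reads:
keyUser={{ pillar.get('account:'~grains['env']~':'~grains['id']~':appUsername', 'default_user') }}
keyPass={{ pillar.get('account:'~grains['env']~':'~grains['id']~':appPassword', 'default_pass') }}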
However! It's difficult to be sure without the actual error and/or the full template.
For the AFP entry Dijkstra's Shortest Path Algorithm, both the proof outline and proof document were nonexistent *. Unfortunately, I did not find an IsaMakefile either to build those documents locally. What is the best way to get those documents?
Another question: as Dijkstra.thy depends on a lot of other theories, is there a way to load everything faster?
*) It is fixed now.
(There seems to be something broken at AFP right now, please tell the editors about it.)
In general, you can download the sources of AFP entries and produce the documents yourself like this:
Get and unpack all AFP sources -- downloading separate entries is offered as well, but then you have to disentangle dependencies manually.
Invoke isabelle build like this:
isabelle build -d afp-2013-03-02 -o document=pdf -v Dijkstra_Shortest_Path
Here afp-2013-03-02 is the directory that was obtained by unpacking the current AFP sources.
See also the Isabelle System manual about "Isabelle sessions and build management", which is all new in Isabelle2013.
See isabelle build -b there to make things load faster, by producing persistent heap images from sessions.
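For example, to build and store a persistent heap image for the entry once, so that later sessions merely load it (same directory as above):
isabelle build -b -d afp-2013-03-02 Dijkstra_Shortest_Path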
The links in the AFP entry were indeed broken and should now be fixed again, sorry about that.
As Makarius writes, the AFP now uses Isabelle's new build system, i.e. has a ROOT file for each entry that can be used to check the associated theories and build the document.
Makarius' answer is pretty much the official way to do it, although I would additionally recommend setting up the AFP as a component. This gives you the following steps:
Download the AFP to e.g. ~/afp
Set it up as component e.g. by adding ~/afp to ~/.isabelle/Isabelle2013/components (see also AFP as a component)
build the entry with
isabelle afp_build Dijkstra_Shortest_Path
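In shell terms, the steps look roughly like this (a sketch; it assumes the unpacked AFP sources already sit in ~/afp):
echo "$HOME/afp" >> ~/.isabelle/Isabelle2013/components
isabelle afp_build Dijkstra_Shortest_Path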
You can also have jEdit build the heap image for you. If the AFP is set up as a component (see the other answers for that), just start jEdit with
isabelle jedit -d '$AFP' -l Dijkstra_Shortest_Path
and jEdit will select Dijkstra_Shortest_Path as base logic and (re)build it if necessary.
If you make regular use of the AFP, it might be useful to add the AFP path by default. For this, create a file ROOTS in $ISABELLE_HOME_USER with the line $AFP in it (or add this line, if the file already exists).
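From a shell, that could be done like this (note the single quotes: the literal string $AFP must end up in the file, to be resolved by Isabelle, not by your shell):
echo '$AFP' >> "$(isabelle getenv -b ISABELLE_HOME_USER)/ROOTS"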
Now, I'm pretty sure of the limitation here. But let's step back.
The simple statement
READNULLCMD="less -R"
doesn't work, generating the following error:
$ <basic.tex
zsh: command not found: less -R
OK. Pretty sure this is because, by default, zsh doesn't split string variables at every space. Wherever zsh is using this variable, it's using $READNULLCMD where it should be using ${=READNULLCMD}, to ensure the option argument is properly separated from the command by a normal space. See this discussion from way back in 1996(!):
http://www.zsh.org/mla/users/1996/msg00299.html
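The splitting behaviour can be seen in isolation (a minimal sketch, independent of READNULLCMD):
cmd='less -R'
$cmd file.txt      # fails: zsh looks for a command literally named "less -R"
${=cmd} file.txt   # works: ${=...} forces word splitting on the expansion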
So, what's the best way around this, without setting SH_WORD_SPLIT (which I don't want 99% of the time)?
So far, my best idea is assigning READNULLCMD to a simple zsh script which just calls "less -R" on STDIN. e.g.
#!/opt/local/bin/zsh
less -R /dev/stdin
Unfortunately this seems to be a non-starter as less used in this fashion for some reason misses the first few lines on input from /dev/stdin.
Anybody have any better ideas?
The problem is not that less doesn't read its environment variables (LESS or LESSOPEN). The problem is that the READNULLCMD is not invoked as you might think.
<foo
does not translate into
less $LESS foo
but rather to something like
cat foo | less $LESS
or, perhaps
cat foo $LESSOPEN | less $LESS
I guess that you (like me) want to use -R to obtain syntax coloring (by using src-hilite-lesspipe.sh in LESSOPEN, which in turn uses the "source-highlight" utility). The problem with the latter two styles of invocation is that src-hilite-lesspipe.sh (embedded in $LESSOPEN) will not receive a filename, and hence it will not be able to deduce the file type (via the --infer-lang option to "source-highlight"). Without a filename suffix, "source-highlight" will revert to "no highlighting".
You can obtain syntax coloring in READNULLCMD, but only in a rather useless way: by specifying the language explicitly via the --lang-def option. However, you'll have as little to go on as "source-highlight" does, since there's no file name when the data is passed anonymously through the pipe. Maybe there's a way to write an on-the-fly heuristic parser that deduces the language from the contents, but that's well beyond this little exercise.
export LESS=… may be a good solution if you only care about less and want that behavior to be the default in all cases, but if you want a more generic solution you can use a function:
function _-readnullcmd()
{
    less -R
}
READNULLCMD=_-readnullcmd
(Neither _- nor readnullcmd has any special meaning; the former just never appears in any distributed zsh script, and the latter indicates the purpose of the function.)
Set the $LESS env var to the options you always want to have in less.
So don't touch READNULLCMD and put export LESS="-R" (plus whatever other options you want) in your zshrc.
I'd like to use alias to make some commands for myself when searching through directories for code files, but I'm a little nervous because they start with ".". Here's some examples:
$ alias .cpps="ls -a *.cpp"
$ alias .hs="ls -a *.h"
Should I be worried about encountering any difficulties? Has anyone else done this?
What is the advantage of putting the dot in the names? It seems like an unnecessary extra character. I'd just use the base names (hs and cpps) for the aliases.
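For example (the same aliases minus the dot; the -- merely guards against filenames that begin with a dash):
alias cpps='ls -a -- *.cpp'
alias hs='ls -a -- *.h'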
I suppose that it might be argued that the dot indicates that the command is an alias - but why is that distinction beneficial? One of the great things about Unix was that it removed the distinction between hallowed commands provided by the O/S and programs written by the user. They are all equal - just located in different places.
I don't see any real dangers with using aliases that start with a dot. It would never have occurred to me to try; I'm mildly surprised that they are allowed. But given that they are allowed, there is no real risk involved that I can see.
I wouldn't use '.' to begin your aliases because it's next to '/' and you could hit the two together by mistake and accidentally run an executable in your current directory (especially if you use tab completion).
I doubt that there's any technical problem though it's likely to be confusing to anyone who has used Unix for a long time. In my world commands don't have dots in them and file names don't have spaces or upper case letters!