I have a package.sls file with packages I want to test on my minion.
How can I run this file on my salt minion using the command line?
If you have a file called mypackage.sls in the current directory, this should execute all of the states within the file:
sudo salt-call --local --file-root=. state.sls mypackage
--local means run without looking for a master
--file-root=. means look for state files in the current directory
state.sls means execute the states in an sls file
mypackage is the name of the sls file, without the .sls extension
Note that calling it this way will not have access to any sls includes or dependencies on the master.
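For example, a minimal sketch (the state ID and package names below are placeholders, not from the question): create mypackage.sls in the current directory and run the masterless call:

cat > mypackage.sls <<'EOF'
install_test_packages:
  pkg.installed:
    - pkgs:
      - htop
      - tree
EOF

sudo salt-call --local --file-root=. state.sls mypackage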
I have a Gitlab repository containing a WordPress theme - php, js, and css files. My desired result is that when I push a change to the 'main' branch of the repo, the theme files are deployed, raw, without any build or test steps, to my remote server.
I have a .gitlab-ci.yml file set up with 'deploy' as its only step.
The script triggers on 'only: -main' and successfully accesses my remote server via ssh.
What I'm unsure of is how to send the entire raw repository to the remote.
Here is the 'script' portion of my yml:
- rsync -rav --delete project-name/ /var/opt/gitlab/git-data/repositories/project-name/ username@my.ip.add.ress:public_html/wp-site/wp-content/themes/
When the pipeline runs, I receive the following two errors:
rsync: [sender] change_dir "/builds/username/project-name/project-name" failed: No such file or directory (2)
rsync: [sender] change_dir "/var/opt/gitlab/git-data/repositories/project-name" failed: No such file or directory (2)
Is GitLab looking in /builds/ by default? I am not instructing it to do so in my yml.
Is there some other file path I should be using to access the working tree for 'main' in my repository?
OK, I misunderstood the rsync syntax. I thought the --delete flag took a directory argument, meaning 'delete any existing files in the following directory', when in fact --delete takes no argument: it removes files from the destination that are no longer present in the source. Once I removed the extra 'project-name/' argument and corrected the GitLab (origin) file path to '/builds/username/project-name/', the deployment occurs as intended.
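For reference, a sketch of the corrected script line, assuming GitLab's default checkout location of /builds/username/project-name/ (the user, host, and destination path are the placeholders from the question; GitLab also exposes the checkout path as the predefined $CI_PROJECT_DIR variable if you prefer not to hard-code it):

- rsync -rav --delete /builds/username/project-name/ username@my.ip.add.ress:public_html/wp-site/wp-content/themes/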
I want to run an R script using SLURM. I have created the R script, "test.R" as shown:
print("Running the test script")
write.csv(head(mtcars), "mtcars_data_test.csv")
I created a bash script, "submit.sh", to run this R script:
#!/bin/bash
#SBATCH --job-name=test.job
#SBATCH --output=.out/abc.out
Rscript /home/abc/job_sub_test/test.R
And I submitted the job on the cluster
sbatch submit.sh
I am not sure where my output is saved. I looked in my home directory, but there was no output file.
Edit
I also set my working directory in test.R, but nothing changed:
setwd("/home/abc")
print("Running the test script")
write.csv(head(mtcars), "mtcars_data_test.csv")
When I run the script without SLURM (Rscript test.R), it works fine and saves the output to the path I set.
Slurm will set the job working directory to the directory which was the working directory when the sbatch command was issued.
Assuming the /home directory is mounted on all compute nodes, you can explicitly change the working directory with cd in the submission script or setwd() in the R script, but that should not be necessary.
Three possibilities:
either the job did not start at all because of a configuration or hardware issue, which you can find out with the sacct command, looking at the State column;
or the file was indeed created, but on the compute node, on a filesystem that is not shared; in that case the best option is to SSH to the compute node (which you can find with sacct) and look for the file there;
or the script crashed and the file was not created at all; in that case look into the output file of the job (.out/abc.out). Beware that the .out directory must exist before the job starts, and that, since its name starts with a dot, it is hidden and will only show up in ls with the -a argument.
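A minimal sketch of the sacct check mentioned above (the job ID is a placeholder):

sacct -j 1234567 --format=JobID,JobName,State,ExitCode,NodeList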
The --output argument to sbatch is relative to the folder you submitted the job from. setwd inside the R script wouldn't affect it, because Slurm has already parsed that argument and started piping output to the file by the time the R script is running.
First, if you want the output to go to /home/abc/.out/, make sure you're in your home directory when you submit the script, or specify the full path in the --output argument.
Second, the .out folder has to exist; I tested this and Slurm does not create it if it doesn't.
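Putting both points together, a sketch of submit.sh with an absolute --output path (the paths are the ones from the question):

#!/bin/bash
#SBATCH --job-name=test.job
#SBATCH --output=/home/abc/.out/abc.out

Rscript /home/abc/job_sub_test/test.R

and, before submitting, create the output directory:

mkdir -p /home/abc/.out
sbatch submit.sh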
I know I can use cp.get_dir to download a directory from master to minions, but when the directory contains a lot of files, it's very slow. If I can tar up the directory and then download to minion, it will be much faster. But I can't find out how to archive a directory at master prior to downloading it to minions. Any ideas?
What we do is tar the files manually, then extract them on the minion, as you said. We then either replace or modify any files that should be different from what is in the tar-file. This is a good approach for a configuration file that resides in the .tar file, for example.
To archive the files, we just SSH to the salt master and then use something like tar -cvzf files.tar.gz <yourfiles>.
You could also consider having the files on the machines from the start, with an rsync afterwards (via salt.states.rsync, for example). This would push over only the changes in the files, not all the files.
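A sketch of the tar approach, assuming /srv/salt is the master's file_roots directory and the other paths are placeholders:

# on the salt master: pack the directory into the file root
tar -czf /srv/salt/files.tar.gz -C /path/on/the/master files

# on the minion: fetch the tarball and unpack it
salt-call cp.get_file salt://files.tar.gz /tmp/files.tar.gz
tar -xzf /tmp/files.tar.gz -C /path/on/the/minion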
Adding to what Kai suggested, you could have a minion running on the salt master box and have it tar up the file before you send it down to all the minions.
You can use the archive.extracted state. The source argument uses the same syntax as its counterpart in the file.managed state. Example:
/path/on/the/minion:
  archive.extracted:
    - source: salt://path/on/the/master/archive.tar.gz
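If that state is saved as, for example, extract_archive.sls in the master's file root (the name is just an illustration), it can be applied with:

salt 'minion-id' state.sls extract_archive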
Is there a way to apply a single statefile?
I would very much like to do a salt-call locally and apply a single file, equivalent of puppet-apply /tmp/some-manifest.pp
I would very much like to keep it in single file and not change salt roots or paths etc.
In master mode, if you have a common/users.sls file in your salt root, you can use:
salt-call state.sls common.users
In standalone mode, you can use salt-call --local if the salt root is configured. See https://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html
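If you want to apply one standalone file without touching the configured salt roots, the --file-root trick shown earlier also applies here; a sketch, assuming the file is /tmp/some-state.sls:

salt-call --local --file-root=/tmp state.sls some-state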
I added an alias (alias homedir='cd /export/home/file/myNmae') to .bashrc in my home directory and restarted the session. When I run the alias, it says homedir: command not found.
Please advise.
This is because .bashrc is not sourced every time; it is only sourced for interactive non-login shells.
From the bash man page:
When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.
When a login shell exits, bash reads and executes commands from the files ~/.bash_logout and /etc/bash.bash_logout, if the files exist.
When an interactive shell that is not a login shell is started, bash reads and executes commands from ~/.bashrc, if that file exists. This may be inhibited by using the --norc option. The --rcfile file option will force bash to read and execute commands from file instead of ~/.bashrc.
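If you want the alias to be available in login shells as well, a common approach is to source ~/.bashrc from ~/.bash_profile; a minimal sketch:

# ~/.bash_profile
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi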
I found the solution: I added it to the .profile file and restarted the session, and it worked.