SaltStack documentation is very difficult and unclear for a beginner. If you could give a simple example of how to install something on a Vagrant machine using SaltStack, I'd be very grateful.
I believe some tutorials are out on the web. I can offer some of mine:
Setup Zabbix using Salt in a masterless setup; among other things, this installs the PHP stack needed for Zabbix.
Setup Consul in the cloud at DigitalOcean with SaltStack. It includes a full script, but also works with Vagrant (see cheatsheet.adoc).
I think the biggest help for me when starting was the SaltStack tutorials.
https://docs.saltstack.com/en/getstarted/fundamentals/index.html
The states tutorial gives an example of installing rsync, lftp and curl:
https://docs.saltstack.com/en/getstarted/fundamentals/states.html
This tutorial shows how to set up what you need with Vagrant, with a master and a couple of minions, and covers the basics of targeting minions and setting up state files (the files that tell Salt what to do on a minion).
There is a lot more to Salt than that, but it is a good start.
In SaltStack there are two types of machines:
Master: As the name suggests, this is the controlling machine. You use it to run tasks on multiple minions.
Minion: Minions are the managed machines (the agents the master controls). Through the master you can run commands on minions, install packages, or run scripts on them. Basically, any command or task that you could run by logging into a minion machine, you should be able to accomplish through the master machine.
You can write all the tasks you want to perform on a minion in an .sls file and run it.
SaltStack provides functions which you call along with the desired arguments. Every function performs a specific task. SaltStack has execution modules and state modules.
Execution modules:
They are designed to perform tasks on a minion. For example, mysql.query will query a specified database. The execution module does not check whether the database needs to be queried or not; it just executes its task.
Have a look at the complete list of modules and you will see that they will just execute a task for you. https://docs.saltstack.com/en/latest/ref/modules/all/index.html
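For example, you can call an execution module directly from the command line (a minimal sketch; pkg.install and cmd.run are standard execution modules, and '*' simply targets every minion):
# install vim unconditionally through the minion's package manager
salt '*' pkg.install vim
# run an arbitrary shell command on every minion
salt '*' cmd.run 'uptime'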
State modules:
State modules are modules too, but they are special. With state modules you create states (the .sls files under /srv/salt) for your minions.
You could for example create a state that ensures the Minion has a web server configured for www.example.com.
After you have created your state you can apply it with the states module: salt '*' state.apply example_webserver
The example_webserver state specifies what the Minion needs to have. If the Minion is already in the correct state, it does nothing. If the Minion is not in the correct state it will try to get there.
The states module can be found here: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.module.html
Sample .sls file:
# This state makes sure git is installed. If it already is, no action is
# taken; otherwise it will be installed.
git_install:
  pkg.installed:
    - name: git

# This step makes sure the folder with the specified name is absent. If it is
# present it will be deleted. Here "delete_appname_old" is the state ID and
# must not be duplicated in the same .sls file.
delete_appname_old:
  file.absent:
    - name: /home/user/folder_name

# This step clones a git project by calling the git.clone execution module
# through the module.run state.
clone_project:
  module.run:
    - name: git.clone
    - url: ssh://gitreposshclonelink
    - cwd: /home/user/folder_name
    - user: $username
    - identity: $pathofsshkey
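To apply a state file like this on a single Vagrant box you don't even need a master. A minimal sketch, assuming you saved the file on the box as /srv/salt/setup.sls (the name setup.sls is my own choice here):
# masterless: have the minion apply its own states locally
sudo salt-call --local state.apply setup
# with a master/minion setup, target the minions instead
sudo salt '*' state.apply setup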
I have set up a Google Compute Engine (GCE) instance and I want to mount a Google Cloud Storage bucket on it. Basically, I have uploaded my data to Google Cloud and I want to make it available for use in the RStudio Server I have installed on my instance. It seems my mounting was successful, but I cannot see the data in R (or in the shell).
I want the bucket to be mounted at /home/roberto/remote. I ran chmod 777 /home/roberto/remote and then gcsfuse my-project /home/roberto/remote. I got the following output:
2023/01/28 22:49:01.004683 Start gcsfuse/0.41.12 (Go version go1.18.4) for app "" using mount point: /home/roberto/remote
2023/01/28 22:49:01.022553 Opening GCS connection...
2023/01/28 22:49:01.172583 Mounting file system "my-project"...
2023/01/28 22:49:01.176837 File system has been successfully mounted.
However, I can't see anything inside /home/roberto/remote when I run ls or when I look inside it from RStudio Server. What should I do?
UPDATE: I had uploaded my folders to Google Cloud, but when I uploaded an individual file, it suddenly showed up! This makes me think the issue has something to do with implicit directories. Supposedly, running the same command as before with the --implicit-dirs flag should be enough (something like gcsfuse --implicit-dirs my-project /home/roberto/remote). However, this returns an error message and I am not sure how to deal with it.
Error message:
2023/01/29 01:33:15.428752 Start gcsfuse/0.41.12 (Go version go1.18.4) for app "" using mount point: /home/roberto/remote
2023/01/29 01:33:15.446696 Opening GCS connection...
2023/01/29 01:33:15.548211 Mounting file system "my-project"...
daemonize.Run: readFromProcess: sub-process: mountWithArgs: mountWithConn: Mount: mount: running /usr/bin/fusermount3: exit status 1
Try editing the VM's Cloud API access scope for Storage to Full.
Follow the steps below:
Click/Select the VM instance
Stop the VM instance, then edit the VM instance.
Scroll down to Access scopes and select "Set access for each API"
Change the Storage from Read Only to Full.
Save and start your VM instance.
Then SSH into your VM instance and try ls /home/roberto/remote again.
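If you prefer the command line, the same scope change can be done with gcloud (a sketch only; <servername>, <zone> and the service account e-mail are placeholders, and the instance must be stopped before the scopes can be changed):
gcloud compute instances stop <servername> --zone <zone>
gcloud compute instances set-service-account <servername> --zone <zone> \
    --service-account <service-account-email> --scopes storage-full
gcloud compute instances start <servername> --zone <zone>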
Other answers here could be useful depending on your issue. In my case, what solved it was indeed running the command gcsfuse --implicit-dirs my-project /home/roberto/remote. The error I was getting in the edit to my question was due to the fact that I had previously mounted the bucket and was trying to mount it again without unmounting it first (here is the official documentation on how to unmount the bucket). For more details on the importance of the --implicit-dirs flag, take a look at the official documentation here. There are very similar solutions using, for instance, the /etc/fstab file; for that, take a look at this discussion on the official GitHub page of gcsfuse.
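In other words, the fix boils down to unmounting the stale mount and mounting again with the flag; a sketch of the commands (fusermount is the standard FUSE unmount tool on Linux):
# unmount the earlier, flag-less mount first
fusermount -u /home/roberto/remote
# mount again so that objects under "folder" prefixes show up as directories
gcsfuse --implicit-dirs my-project /home/roberto/remote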
Try running the gcsfuse mount command with debug flags, which will help in finding out why the mount failed.
E.g.: gcsfuse --implicit-dirs --debug_fuse --debug_gcs --debug_http my-project /home/roberto/remote
I use SaltStack to manage my servers, and I found an interesting thing:
When I run salt '*' pkg.installed httpd, I get the following message: pkg.installed is not available. But I can use the pkg.installed function in my .sls files and it works very well. So I am confused about that, and I think it is something about how SaltStack works.
Who can help me?
There are two related but different concepts here.
Salt Execution Modules
Salt State Modules.
Execution modules are where most of the work actually happens and are what you run on the command line, generally. For example:
salt '*' pkg.install vim
That will call out directly to your OS's package manager, such as yum or apt, and install vim.
State modules are stateful commands that kind of sit "above" the execution modules. A state module checks whether the desired result already exists and makes any changes necessary to reach the desired state. They are conjugated differently than the execution modules (pkg.install vs. pkg.installed). For example, in this Salt state file (.sls file):
cat /srv/salt/vim.sls

install_vim_please:
  pkg.installed:
    - name: vim
Then you could run the state.sls execution module to apply this sls file with the pkg.installed state.
salt '*' state.sls vim
Because we're using the pkg.installed state Salt will check with your OS's package manager and see if vim is already installed. Salt will only attempt to install vim if the package manager says that vim is not installed already.
Keeping your Salt States in sls files makes it easy to keep them in git or whatever vcs you use to track them.
You could skip the sls file and run the command statefully from the command line like this:
salt '*' state.single pkg.installed name=vim
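Either way, adding test=True gives you a dry run: Salt reports what the state would change without actually changing anything. A quick sketch:
salt '*' state.sls vim test=True
salt '*' state.single pkg.installed name=vim test=True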
As far as the documentation goes, if one would like to query for available execution modules, the following command should be used:
salt '<minion_name>' sys.list_modules
My question is:
- Is it possible to run the above without a minion?
(e.g. salt sys.list_modules, isn't that cleaner?)
If you just want to know all the modules available to you, or the latest modules available for the current SaltStack version, run:
salt-call --local sys.list_modules
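The same trick works for reading documentation locally; sys.doc is another built-in function (pkg and pkg.install below are just example targets):
# list every function in the pkg module along with its docstring
salt-call --local sys.doc pkg
# or narrow it down to a single function
salt-call --local sys.doc pkg.install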
I have a Google Compute Engine VM instance with an Asterisk server running on it. I get this message when I try to run sudo:
sudo: parse error in /etc/sudoers near line 21
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
Is there a password for root so I can try to change it there? Any suggestions on this?
It looks like you have manually edited the /etc/sudoers file, so while you would normally have sudo access, due to the parse error you won't be able to use it directly.
Here's how to fix this situation.
1. Save the current boot disk
go to the instance view in the Developers Console
find your VM instance and click on its name; you should now be looking at a URL such as
https://console.cloud.google.com/project/[PROJECT]/compute/instancesDetail/zones/[ZONE]/instances/[VM-NAME]
stop the instance
detach the boot disk from the instance
2. Fix the /etc/sudoers on the boot disk
create a new VM instance with its own boot disk; you should have sudo access here
attach the disk saved above as a separate persistent disk
mount the disk you just attached
fix the /etc/sudoers file on the disk (see the sketch after this list)
unmount the second disk
detach the second disk from the VM
delete the new VM instance (let it delete its boot disk, you won't need it)
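A minimal sketch of that fix on the new VM, assuming the old boot disk shows up as /dev/sdb1 and using /mnt/rescue as an arbitrary mount point:
sudo mkdir -p /mnt/rescue
sudo mount /dev/sdb1 /mnt/rescue
# visudo -f edits the file with syntax checking, so you can't break it again
sudo visudo -f /mnt/rescue/etc/sudoers
sudo umount /mnt/rescue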
3. Restore the original VM instance
re-attach the boot disk to the original VM
restart the original VM with its original boot disk, with fixed config
How to avoid this in the future
Always use the command visudo, rather than just any text editor, to edit the /etc/sudoers file; visudo validates the contents of the file prior to saving it.
I ran into this issue as well and hit the same problem Nakilon was reporting when trying the gcloud workaround.
What we ended up doing was configuring a startup script that removed the broken sudoers file.
So in your metadata put something like:
#!/bin/sh
rm "/etc/sudoers.d/broken-config-file"
echo "ok" > /tmp/ok.log
https://cloud.google.com/compute/docs/startupscript
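One way to attach such a script is via gcloud (a sketch; fix-sudoers.sh is a file name I made up, and <servername>/<zone> are placeholders):
gcloud compute instances add-metadata <servername> --zone <zone> \
    --metadata-from-file startup-script=fix-sudoers.sh
# reboot so the startup script runs
gcloud compute instances reset <servername> --zone <zone>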
As you probably figured out this requires the /etc/sudoers file to be fixed. As nobody has root access to the instance, you will not be able to do this from inside the instance.
The best way to solve this is to edit the disk from another instance. The basic steps to do this are:
Take a snapshot of your disk as a backup (!)
Shutdown your instance, taking care not to delete the boot disk.
Start a new "debugger" instance from one of the stock GCE images.
Attach the old boot disk to the new instance (gcloud equivalents are sketched after this list).
In the debugger instance, mount the disk.
In the debugger instance, fix the sudoers file on the mounted disk.
In the debugger instance, unmount the disk
Shutdown the debugger instance.
Create a new instance with the same specs as your original instance using the fixed disk as the boot disk.
The new disk will then have the fixed sudoers file.
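The attach/detach steps have gcloud equivalents, roughly like this (a sketch; instance and disk names are placeholders, and the original instance must be stopped before its boot disk can be detached):
gcloud compute instances detach-disk <original-vm> --disk <old-boot-disk> --zone <zone>
gcloud compute instances attach-disk <debugger-vm> --disk <old-boot-disk> --zone <zone>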
Since I bumped into this issue too: if you have another instance or any place where you can run with gcloud privileges, you can run:
gcloud compute --project "<project id>" ssh --zone "europe-west1-b" "<servername>"
I ran this on a server which had gcloud as root, so you log in to the other box as root too! Then fix your issue. (If you don't have a box, just spin up a micro instance with the correct gcloud privileges.) This saves the hassle of disk juggling etc.
As mentioned in the comments above, I am getting the same error as below in my GCP VM.
sudo: parse error in /etc/sudoers near line 21
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
To solve this:
I SSHed to another VM, became root, and then ran the gcloud ssh command to our main VM (the one where you are getting the sudo error):
gcloud compute --project "<project id>" ssh --zone "europe-west1-b" "<servername>"
And BOOM! Now you are logged in as root on the VM.
Now you can access/change the /etc/sudoers file accordingly.
I found this hack better than recreating the VM/disks.
Hope this helps someone!
It is possible to connect to a VM as root from the Google Cloud Shell in your Developers Console. Make sure the VM is running, start the shell and use this command:
gcloud compute ssh root@<instance-name> --zone <zone> [--project <project-id>]
where instance-name is found on the Compute Engine VM Instances screen. project-id is optional, but required if you are connecting to an instance in a different project from the one where you started the shell.
You can then fix this and other issues that may prevent you from using sudo.
I got a Permission denied error when trying to ssh to the problem instance via gcloud. Using a startup script as mentioned above by @Jorick works. Instructions for it are here. You will have to stop and restart the VM instance for the startup script to be executed. I modified the script slightly:
rm -f /etc/sudoers.d/google_sudoers >& /tmp/startup.log
After the restart, launch an SSH session from the Cloud Console and check that you can view the file contents (with sudo more /etc/sudoers.d/google_sudoers, for example). If that works, your problem has been solved.
I have several buckets mounted using the awesome riofs and they work great; however, I'm at a loss trying to get them to mount after a reboot. I have tried entering the following in my /etc/fstab with no luck:
riofs#bucket-name /mnt/bucket-name fuse _netdev,allow_other,nonempty,config=/path/to/riofs.conf.xml 0 0
I have also tried adding a startup script with the riofs commands to my rc.local file, but that too fails to mount them.
Any ideas or recommendations?
Currently RioFS does not support fstab. In order to mount a remote bucket at startup time, consider adding the corresponding command line to your startup script (rc.local, as you mentioned).
If for some reason RioFS fails to start from the startup script, please feel free to contact the developers and/or file an issue report.
If you enter your access key and secret access key in the riofs config XML file, then you should be able to mount this via fstab or an init.d or rc.local script.
See this thread
EDIT:
I tested this myself and this is what I found. Even with the AWS access details specified in the config file, there is no automatic mounting at boot. But to access the filesystem, all one needs to do is issue mount /mount/point/in-fstab, and the fstab directive works and persists like a standard fstab-mounted filesystem.
So it seems the riofs system is not ready at the stage of the boot process when filesystems are mounted. That's the only logical reason I can find so far. This can be solved with an rc.local or init.d script that just issues a mount command (at worst), as sketched below.
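A minimal rc.local sketch, assuming the fstab entry from the question is in place so that mount can resolve the mount point:
# add this line before the final 'exit 0' in /etc/rc.local
mount /mnt/bucket-name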
But riofs does work well, even if the documentation seems sparse. It is certainly more reliable and less buggy than s3fs.
Thanks all,
I was able to get them auto-mounting from rc.local with syntax similar to:
sudo riofs --uid=33 --gid=33 --fmode=0777 --dmode=0777 -o "allow_other" -c ~/.config/riofs/riofs.conf.xml Bucket-Name /mnt/mountpoint
Thanks again!