I'm a little confused about using file_roots. Currently we set up our salt directory in the following way.
/srv/salt/<folder-connected-to-git>: contains all the folders we want to use, like win (repo / repo-ng), scripts, states, etc., for our salt build. But it doesn't contain our binaries folder (which holds the installers for programs).
The master config file uses the following:
file_roots:
  base:
    - /srv/salt/<folder-connected-to-git>
So when setting up SLS package installers we would use salt:// to point to the base folder. Since the binaries folder is outside that path (in /srv/salt), I gave the absolute path (i.e. /srv/salt/binaries). It seems that when running it, salt doesn't recognize this as a path on the master (maybe it's looking for it on the minion instead).
Is there a way to point to a directory outside of base? If not, I could change my file_roots to:
file_roots:
  base:
    - /srv/salt/
  prod:
    - /srv/salt/<git-folder>
But then, would salt look for the repo (to cache to the minion) inside /srv/salt/ instead of /srv/salt/<git-folder>? Could I change what salt:// points to without changing file_roots?
There is the built-in fileserver, which works together with salt '*' cp.get_file or salt '*' cp.get_dir. To use this with file_roots, you might want to create a separate environment for the binaries.
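For example, the master config could declare a dedicated environment (an untested sketch; the environment name binaries and the file names are made up):

file_roots:
  base:
    - /srv/salt/<folder-connected-to-git>
  binaries:
    - /srv/salt/binaries

You could then fetch a file from it explicitly:

salt '*' cp.get_file salt://installer.bin /tmp/installer.bin saltenv=binaries

or reference it from a state:

/tmp/installer.bin:
  file.managed:
    - source: salt://installer.bin?saltenv=binaries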
I am not sure if it is intended to be used like that, especially the file_roots environments. But I recently learned that environments are made as flexible as possible, to enable you to use them for whatever you might need.
You might want to have a look at gitfs, which allows you to mount git repositories into your state tree. This would make the extra environment unnecessary. We use this approach for formulas.
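A minimal gitfs setup in the master config might look like this (the remote URL is a placeholder):

fileserver_backend:
  - roots
  - git

gitfs_remotes:
  - https://github.com/example/salt-states.git

With both backends enabled, salt:// URLs are resolved against file_roots first and then against the git repositories.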
We currently solve this with a private network and a webserver that makes larger files available to all minions inside this network. This works quite well, as all our minions are connected to this private network. Running such a network forces you to keep an eye on securing the minions' and the master's communication inside it. We use local firewalls on all connected minions to achieve this.
I'm a freshman, and I created a server with my roommates in order to practice maintaining a server.
We installed CentOS 7, and I would like to ask how I can install a tool for everyone to use.
More particularly, we want to install Cromwell. But since they don't have instructions on how to install it on Unix, I downloaded Linuxbrew and installed it that way.
The downside is that it's not visible to the other users connected to the server.
I know this is a noob question, but any response would be appreciated.
A standard Unix machine has programs (tools and so on) installed in predefined directories like /bin, /usr/bin, and perhaps /usr/local/bin. Which to choose is another matter; probably you want /usr/bin. The environment variable PATH also plays a role.
In the chosen directory there should be a file representing the "tool". You can put a copy of the executable file in that directory, and set (or check) its permissions. Execution permission can be granted to all users, or only some; it depends. In other words,
/home/me/.linuxbrew/Cellar/cromwell
is not a good place for a "system" tool or app; you should copy that executable into /usr/bin, set its ownership (perhaps to root?) with chown, and set the correct permissions with chmod.
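A minimal sketch of that sequence (assuming the Linuxbrew path above is, or resolves to, the actual executable):

you#machine$ sudo cp /home/me/.linuxbrew/Cellar/cromwell /usr/bin/cromwell
you#machine$ sudo chown root:root /usr/bin/cromwell
you#machine$ sudo chmod 755 /usr/bin/cromwell   # rwx for root, r-x for everyone else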
You can hard-link your executable into the directory instead; this saves space, but it also means that there is only one copy of the executable. Having two different copies (the "stable" one, and another one you can fiddle with) can be handy.
Once the executable is reachable and executable by the chosen users, it may still need some support files. To find them, it can rely on fixed locations, an environment variable, or a configuration file. But all these things are outside the scope of the question.
Try this command:
you#machine$ sudo chmod [who][op][permissions] filename
"who" refers to the users that have a particular permission: the user ("u"), the group ("g"), or other users ("o", also known as "world"). "op" determines whether to add ("+"), remove ("-") or explicitly set ("=") the particular permissions. "permissions" are whether the file should be readable ("r"), writable ("w"), or executable ("x"). As an example:
you#machine$ chmod o+x file
will add executable permission for others to file.
As a new user of Fossil, I'm curious whether there are any negative implications of using Fossil to store things like /etc/ and /usr/local/etc/ files from Unix-like systems such as FreeBSD & OpenBSD. If I'm doing this for multiple systems, I think I'd create a branch for each hostname to track those files.
Q1: Have you done this? Do you prefer a different VCS to handle the system files?
Q2: Lots of changes have happened in Fossil over the years, and I'm curious whether it's possible to restrict who can merge branches into trunk. From reading earlier threads it wasn't possible, but there are two workarounds:
a) tell people not to merge to trunk
b) have people clone and trunk maintainer pick up changes from their repo
System configuration files stored in /etc, /var or /usr/local/etc can generally only be edited by the root user. But since root has complete access to the whole system, a mistaken command there can have dire consequences.
For that reason I generally use another location to keep edited configuration files: a directory in my home directory that I call setup, which is under git control. Since I have multiple machines running FreeBSD, each machine gets its own subdirectory. There is a special subdirectory of setup called shared for those configuration files that are used on multiple machines, since maintaining multiple copies of identical files in separate repositories or even branches can be a lot of extra work.
My workflow is the following:
1. Edit a configuration file in my repository.
2. Copy it to its proper location.
3. Test the changes. If problems occur, go back to step 1.
4. Commit the changes to the revision control system.
5. Copy the committed files to their proper location.
Initially I had a shell script (basically a list of install commands) to install the files for me. But I also wanted to see the differences between the working tree and the installed files.
So for my convenience, I wrote a script called deploy to help me with this. It can tell me which files in the repo are different from the installed files and can show me the differences. It can also install files to their proper locations.
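Something like this minimal sketch captures the idea (the manifest format, group name and file modes are assumptions, not the actual script):

#!/bin/sh
# deploy -- compare the repo copies of config files with the installed ones,
# and optionally install them.
# Assumed manifest format, one pair per line: "repo-path absolute-destination",
# e.g.: etc/rc.conf /etc/rc.conf
cmd=${1:-check}
while read -r src dst; do
    case $cmd in
        check)   cmp -s "$src" "$dst" || echo "$dst differs" ;;
        diff)    diff -u "$dst" "$src" ;;
        install) install -o root -g wheel -m 644 "$src" "$dst" ;;
    esac
done < manifest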
Trying to evaluate CoreOS. It really looks like an interesting product, and I was trying to see about simply starting up networking. I got a static configuration to work by doing the following:
Create a static network file in the /etc/systemd/network/ folder.
It is my understanding that the important parts of the file name I drop into this directory are the number at the beginning (when I have multiple network files, it helps determine which file is applied first) and the ".network" suffix, which declares that this is a network configuration file.
The contents of /etc/systemd/network/10-static.network is as follows (yes, this is a very simple configuration):
[Network]
Address=192.168.1.102/24
Gateway=192.168.1.2
I then tried starting the service: sudo systemctl start systemd-networkd
This actually worked and assigned a static IP address that was visible when running ifconfig.
Here is my problem: I rebooted the CoreOS virtual machine and noticed that the networking was no longer set after the reboot. When I check the /etc/systemd/network/ folder, it is empty; my configuration file apparently disappeared on reboot.
Does anyone know why this would have happened?
Thanks in advance for any help on this!
You must remove the ISO image; CoreOS may be booting from the same ISO image again. If you remove the ISO image, the system can boot from the installed system instead.
I experienced the same situation before.
Files on disk shouldn't disappear on you like that. Did you happen to PXE-boot this VM or somehow use a file system in RAM?
A better way to do this config is with cloud-config, which CoreOS uses to configure machines at boot. It's intended to provide a repeatable way to set up networking, mount disks and things like that. The steps that you completed manually can be done with cloud-config like this: https://coreos.com/docs/cluster-management/setup/network-config-with-networkd/
More info about cloud-config in general: https://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/
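Roughly, following the first link above, the manual steps would translate into a cloud-config fragment like this (Name=eth0 is an assumption; match it to your actual interface):

#cloud-config

coreos:
  units:
    - name: 10-static.network
      content: |
        [Match]
        Name=eth0

        [Network]
        Address=192.168.1.102/24
        Gateway=192.168.1.2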
I've set up a Vagrant box that runs my webserver to host my Symfony2 application.
Everything works fine except the folder synchronization.
I tried 2 things:
config.vm.synced_folder LOCALFOLDER, HOSTFOLDER
config.vm.synced_folder LOCALFOLDER, HOSTFOLDER, type: "rsync"
Option 1: The first option works; I actually don't know how the files are shared, but it works.
Files are copied both ways, but the application is SUPER slow.
Symfony is generating cache files, which might be the issue, but I don't really know how to troubleshoot this and see what is happening.
Option 2: Sync is only done one way (from my local machine to the vagrant box), which covers most cases and is fast.
The issue is that when I use the Symfony command line on the vagrant box to generate some files, they are not copied over to my local machine.
My question is:
What is the best way to proceed with two-way syncing? With option 1, how can I exclude some files from syncing (as that might be the issue)?
With option 2, how can I make sure changes on the remote are copied to my local machine?
If the default synced folder strategy (VirtualBox shared folders, I imagine) is slow for your use case, you can choose a different one and, if you need to, keep the two-way sync:
If your host OS is Linux or Mac OS X, you can go with NFS.
If your host OS is Windows you can instead choose SMB.
Rsync is very fast but, as you've pointed out, is one-way only.
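For example, an NFS share in the Vagrantfile might look like this (the mount point is whatever you need):

# inside the Vagrant.configure block -- NFS requires a private network
config.vm.network "private_network", type: "dhcp"
config.vm.synced_folder ".", "/vagrant", type: "nfs"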
As it doesn't seem Vagrant offers a "built-in" way to do this here is what I did:
Configure the Vagrant rsync folder on the folders that will contain application-generated files (in Symfony2 it is your Bundle/Entity folder); see the sketch after these steps. Note that I didn't sync the root folder, because some folders don't have to be rsynced (cache/logs...) and also because it was taking way too much time for the rsync process to parse all the folders/subfolders when I know that only the Entity folder will be generated.
As the rsync has to be done from the Vagrant box to the host, I use the vagrant-rsync-back plugin and run it manually every time I use a command that generates code.
https://github.com/smerrill/vagrant-rsync-back#getting-started
Create a watcher on my local machine that will track any change in the code and rsync it to the vagrant box.
https://gist.github.com/laurentlemaire/e423b4994c7452cddbd2
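For reference, the first step with exclusions might look roughly like this in the Vagrantfile (paths and exclude patterns are assumptions based on a typical Symfony2 layout; the author synced only the Entity folder):

# inside the Vagrant.configure block; one-way rsync, skipping generated folders
config.vm.synced_folder ".", "/var/www/symfony", type: "rsync",
  rsync__exclude: ["app/cache/", "app/logs/", ".git/"]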
Vagrant mounts your project root as the /vagrant folder inside the box as a two-way share.
You can run your commands there to get the required files synced. Any I/O will be damn slow (as you already mentioned), but you will get your files. For other stuff use your one-way synced folder.
Like most *nix people, I tend to play with my tools and get them configured just the way that I like them. This was all well and good until recently. As I do more and more work, I tend to log onto more and more machines, and have more and more stuff that's configured great on my home machine, but not necessarily on my work machine, or my web server, or any of my work servers...
How do you keep these config files updated? Do you just manually copy them over? Do you have them stored somewhere public?
I've had pretty good luck keeping my files under a revision control system. It's not for everyone, but most programmers should be able to appreciate the benefits.
Read "Keeping Your Life in Subversion" for an excellent description, including how to handle non-dotfile configuration (like cron jobs via the svnfix script) on multiple machines.
I also use subversion to manage my dotfiles. When I login to a box my confs are automagically updated for me. I also use github to store my confs publicly. I use git-svn to keep the two in sync.
Getting up and running on a new server is just a matter of running a few commands. The create_links script just creates the symlinks from the .dotfiles folder items into my $HOME, and also touches some files that don't need to be checked in.
$ cd
# checkout the files
$ svn co https://path/to/my/dotfiles/trunk .dotfiles
# remove any files that might be in the way
$ .dotfiles/create_links.sh unlink
# create the symlinks and other random tasks needed for setup
$ .dotfiles/create_links.sh
It seems like everywhere I look these days I find a new thing that makes me say "Hey, that'd be a good thing to use DropBox for"
Rsync is about your best solution. Examples can be found here:
http://troy.jdmz.net/rsync/index.html
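A typical invocation, assuming ssh access and a couple of dotfiles to push:

$ rsync -avz -e ssh ~/.bashrc ~/.vimrc user@server:~/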
I use git for this.
There is a wiki/mailing list dedicated to the topic.
vcs-home
I would definitely recommend homesick. It uses git and automatically symlinks your files. homesick track tracks a new dotfile, while homesick symlink symlinks new dotfiles from the repository into your home folder. This way you can even have more than one repository.
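Getting started looks roughly like this (the castle name dotfiles and the repository URL are assumptions):

$ gem install homesick
$ homesick clone git://github.com/you/dotfiles.git   # fetch a "castle"
$ homesick symlink dotfiles                          # symlink its dotfiles into $HOME
$ homesick track .vimrc dotfiles                     # start tracking a new dotfile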
You could use rsync. It works over ssh, which I've found useful since I only set up new servers with ssh access.
Or, create a tar file that you move around everywhere and unpack.
I store them in my version control system.
I use svn ... having a public and a private repository ... so as soon as I get on a server I just
svn co http://my.rep/home/public
and have all my dot files ...
I store mine in a git repository, which allows me to easily merge everything beyond the system-dependent changes, yet share the changes that I want as well.
I keep master versions of the files under CM control on my main machine, and where I need to, arrange to copy the updates around. Fortunately, we have NFS mounts for home directories on most of our machines, so I actually don't have to copy all that often. My profile, on the other hand, is rather complex - and has provision for different PATH settings, etc, on different machines. Roughly, the machines I have administrative control over tend to have more open source software installed than machines I use occasionally without administrative control.
So, I have a random mix of manual and semi-automatic process.
There is netskel, where you put your common files on a web server, and then the client program maintains the dotfiles on any number of client machines. It's designed to run on any level of client machine, so the shell scripts are proper sh scripts and have a minimal number of dependencies.
Svn here, too. Rsync or unison would be a good idea, except that sometimes stuff stops working and I wonder what was in my .bashrc file last week. Svn is a lifesaver in that case.
Now I use Live Mesh which keeps all my files synchronized across multiple machines.
I put all my dotfiles into a folder on Dropbox and then symlink them to each machine. Changes made on one machine are available to all the others almost immediately. It just works.
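For example (the Dropbox folder layout is an assumption):

$ ln -s ~/Dropbox/dotfiles/.bashrc ~/.bashrc
$ ln -s ~/Dropbox/dotfiles/.vimrc ~/.vimrc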
Depending on your environment you can also use (fully backed-up) NFS shares ...
Speaking about storing dotfiles in public, there are http://www.dotfiles.com/ and http://dotfiles.org/.
But it would be really painful to manually update your files, as (AFAIK) neither of these services provides any API.
The latter is really minimalistic (no contact form, no information about who made/owns it, etc.).
briefcase is a tool to facilitate keeping dotfiles in git, including those with private information (such as .gitconfig).
By keeping your configuration files in a public git repository, you can share your settings with others. Any secret information is kept in a single file outside the repository (it's up to you to back up and transport this file).
-- http://jim.github.com/briefcase
mackup
https://github.com/lra/mackup
lra/mackup is a utility for Linux & Mac systems that will sync application preferences using almost any popular shared-storage provider (Dropbox, iCloud, Google Drive). It works by replacing the dotfiles with symlinks.
It also has a large library of hundreds of supported applications: https://github.com/lra/mackup/tree/master/mackup/applications
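Basic usage is along these lines (Dropbox is the default storage; other providers go in ~/.mackup.cfg):

$ pip install mackup      # or: brew install mackup
$ mackup backup           # moves configs to the storage provider, leaves symlinks behind
$ mackup restore          # on another machine, symlinks the stored configs back into place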