Chef-like version management in Salt Stack?

I really like the way that you can upload multiple versions of the same cookbook to a Chef server, and that you can specify the cookbook version in the metadata file, e.g.
depends 'base-config', '= 1.2.1'
I like Salt. However, I couldn't find any version management or versioned requisites for Salt states/formulas. I'm really surprised, since I think this is a fundamental requirement for a configuration management system. Did I miss anything? How does Salt handle state/formula file versions?

Some of this will be added in the Nitrogen release using SPM (the Salt Package Manager).
There are several more things that can be added on top of that; we are still working on getting an environment set up for uploading SPMs, so right now you would need to manage them yourself.
Instructions for building SPMs are here:
https://docs.saltstack.com/en/develop/topics/spm/spm_formula.html
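For reference, a minimal FORMULA file for an SPM package looks roughly like this (adapted from the linked docs; all values here are placeholders):
name: apache
os: RedHat, Debian, Ubuntu
os_family: RedHat, Debian
version: 201611
release: 1
summary: Formula for installing Apache
description: Formula for installing and configuring Apache
You would then build the package with something like:
$ spm build /path/to/my-formula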
Daniel

Related

putting Navicat files in git?

I'm looking at switching from MySQL Workbench to Navicat because we're using MariaDB and the incompatibilities are starting to annoy me.
I'm working through the issues of getting Navicat to run on CentOS under WINE, but assume I will succeed (edit: this failed. The "Linux" version requires WINE. Navicat will sort of run with a bit of hacking, but critical features rely on MS-Windows/WINE).
How do I get Navicat to work with git (or any other source code control)? Workbench is sufficiently primitive that file changes either get picked up automatically or completely ignored (almost always a dialog "file on disk has changed, reload?")
Specific problems:
when adding new query files, Navicat only seems to rescan the folder when I add a new query. Is there a smart way to do that? (edit: no. You can manually refresh one file at a time by right-clicking)
model and query files are buried deep in the WINE tree. Can I relocate them, or do symlinks work? I'd rather keep all the DB-related code in one repo rather than having a special Navicat repo. (edit: yes, but the explanation of how to do so is lengthy)
is there a way to merge a model file if more than one person has changed it? Workbench can't do this but I'd really like the feature. (edit: no, never. Merge the schema SQL files instead)
Also, bonus question: can we make multiple edits using Navicat other than repeated use of the GUI? If I want to change (say) a bunch of columns from VARCHAR(255) to CHAR(20) I'd normally script that in SQL but Navicat models don't do reverse engineering, only "delete the table from the model then re-import it" so there doesn't seem to be a non-tedious way to do that. (edit: no, but they might look at it in the future)
Final edit: I used the Navicat forums and the team were very helpful, but fundamentally Navicat is Windows software, and the 64-bit purists behind CentOS will never support WINE. For most Linux users this is not a problem, but I work with CentOS enthusiasts and have long since lost the argument about which distro to use.
To the 1st question: you can sync in different ways with a remote database/folder. When you are managing the database with Navicat, just right-click on your current connection and press "refresh", and you will be updated with the server changes. You can also do it with a scheduled task.
Another matter is: why would you want to run Navicat from WINE when it has a native Linux version? (I hope that answers the 2nd question.)
For the 3rd question, note that Navicat has an internal utility to sync data between servers, so you don't need git at all; or, at most, you can automate the structure export and then sync it with a git repository (in the form of a .sql file).
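For example, a rough sketch of that export-and-commit approach (connection details and paths are placeholders):
# dump structure only, so git tracks schema changes as plain SQL
$ mysqldump --no-data mydb > schema/mydb.sql
$ git add schema/mydb.sql
$ git commit -m "schema snapshot"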
IMHO you need to revisit your assumptions about MariaDB and Navicat. Both are quite flexible and offer several ways to do the things you propose, such as syncing the data, and they also allow you to bring git into the workflow; just review your strategy and try to apply a new perspective using the available features.

Railo on AWS OpsWorks

Does anyone have any information or experience deploying Railo (CFML) apps on AWS OpsWorks? It seems like it should be possible (similar to CloudBees or Heroku) since OpsWorks now supports Java apps. I'm just having a hard time getting started.
The official and active cookbook for this seems to be: https://github.com/ringgi/railo-cookbook. You're not specifying what specific issue you're having. You would need to modify any Chef community cookbook to implement it on OpsWorks: replace any mention of a role with the layer's short name. That is usually enough to get most simple cookbooks to behave with the Chef 11.10 version of the stack.
Most likely you would need to create a new cookbook wrapping the community cookbook specified above, plus the additional cookbooks required in its metadata.rb file.
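A wrapper cookbook's metadata.rb might look something like this (the 'railo' dependency name comes from the linked repo; the wrapper name and version are illustrative):
name    'railo-opsworks'
version '0.1.0'
depends 'railo'
Its default recipe would then just include_recipe the community cookbook after setting any OpsWorks-specific attributes.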

How to upgrade (merge) web.config with web deploy (msdeploy)?

I'm trying to set up a deployment chain for some of our ASP.NET applications. The tool of choice is Web Deploy (msdeploy) - for now. Unfortunately I'm stuck on a problem.
A high level overview of the chain is thus:
Web developer creates the code and checks it into SVN;
The build server sees the update and builds the msdeploy .zip package of the website;
The .zip package is automatically put inside our installer and sent to various clients;
The clients run the installer on their webserver(-s);
The installer uses msdeploy internally to deploy the .zip package and create a new website or upgrade an existing one.
Msdeploy makes it easy to deploy a new instance, but I'm stumped about how to perform an "upgrade" install. The main problem is the web.config file. Each client will most certainly have made some customizations there to suit their specific environment. The installer itself offers to set some more critical parameters at the first-time installation (achieved by msdeploy's parameter mechanism), but they can do others by hand.
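For reference, setting such a parameter at install time looks something like this (the package path and parameter name are made up):
msdeploy -verb:sync -source:package="app.zip" -dest:auto -setParam:name="ConnectionString",value="Server=db;Database=app"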
On the other hand, we developers also occasionally make changes to web.config, adding some new settings or removing obsolete ones. So I can't just tell msdeploy to ignore the file entirely. I need some kind of advanced XML modification mechanism. It could be a script that the developers maintain, but then it needs to be run ONLY at upgrades, not new installs.
I've no idea how to accomplish this.
Besides that, sometimes there's also some completely weird upgrade logic. For example, the application comes with our company logo, but some clients have replaced that .png file to show their own logo. Recently we needed to update the logo - but only for clients that hadn't replaced it with their own.
Similarly, there might be some cache folders that might need to be cleaned at SOME upgrades but not at others. Or folders with user content that may not be touched (but come with default content at the initial installation). Etc.
How do you normally achieve this dual behavior for msdeploy packages? Do I really need to create 2 distinct packages for every application?
Suggestion from personal experience:
Isolate customisations
Your customers should have the ability to customise their setup, and the best way is to provide them with something like an override file. That way you install the new package and then superimpose your customer's customisations on top of your standard setup. If it's a brand new install, there will be nothing to superimpose.
top-level
+-- standard files            <- never touched or changed by customer
|   +-- images
|   +-- settings.txt
+-- customer files            <- customer hacks this to their heart's content
    +-- images
    +-- settings.txt_override
Yes, this does mean that some kind of merging process needs to happen, and there needs to be some script that does it, but this approach has several advantages:
For settings that suddenly become redundant, just issue a warning to that effect.
If a customer has their own logo, provide the ability to specify this in the override file.
The message is clear to customers: stay off the standard files.
If customers request more customisable settings, then during upgrades write the default into the override file if it does not already exist.
Vilx, in answer to your question, the logic for knowing whether it is an upgrade or not must be contained in the script itself.
To run an upgrade script before installation:
msdeploy -verb:sync -source:contentPath="C:\Test1" -dest:contentPath="C:\Test2" -preSync:runcommand="c:\UpgradeScript.bat"
Or to run an upgrade script after installation:
msdeploy -verb:sync -source:contentPath="C:\Test1" -dest:contentPath="C:\Test2" -postSync:runcommand="c:\UpgradeScript.bat"
More info here
As to how you know it's an upgrade: your script could check for a text file called "version.txt", and if it exists, the upgrade bat script will run. The version number would be contained within the text file. A bit basic, but it should work.
This also has the added advantage of letting you more elegantly merge customers' custom settings between versions, as you know which properties could be overridden for that particular version.
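A rough sketch of such an upgrade script (Windows batch; everything here is illustrative):
@echo off
rem Run upgrade-only steps when a previous install left a version.txt behind
if not exist "%~dp0version.txt" goto :eof
set /p OLDVER=<"%~dp0version.txt"
echo Upgrading from version %OLDVER%
rem ... merge override settings, clear caches, etc. ...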
Here are some general suggestions (not specific to msdeploy); I hope they help:
I think you'll need to provide several installers anyway: for the initial setup and for each version-to-version upgrade.
I would suggest letting your clients merge the config files themselves. You could provide them with a detailed description of what was added/changed/removed, and/or include a utility that simplifies the merge. Maybe this and this links will give you some pointers.
As for merging the replaced logos and other client customizations, I think the best approach would be to support branding your application. I mean: move all branding details to a place where your new/upgrade installers won't touch them.
As for the rest of the adjustments made by your clients, they make those at their own risk, so the only help you can provide is a detailed list of changes (maybe even the list of changed files since the previous version) and a How-To article about merging the sources with tools like Araxis Merge or similar.
Or you could create a utility, included in the installer, which tries to do all the tricky merging on the client's machine. I would not recommend this way, as it requires a lot of effort/resources to maintain.
One more thing: you could focus on backing up the previous client copy before the upgrade. That way, even if a client has trouble upgrading, it will always be possible to roll back. The only other thing to provide is a good feedback channel your clients can use to report their troubles; that feedback will let you figure out what troubles your clients have and how to make their upgrade process more comfortable.
I would build on what the others have said, but I would do it with transformations and strict documentation about who configures what. The way you have it now relies on customer intervention in a config that is mission-critical to the app deploy process.
Create three config file areas. One for development, one for the "production generic" build, and one that is an empty template for the customer to edit.
The development instance should be self-explanatory: this is the transform that takes the production generic template and creates a web config for your development server (it sounds like you are shooting for a CI-type process here).
The "production generic" transform should set the app up for a hypothetically perfect instance of the app. This is what the install would look like if the architect had his way.
The customer transform is used by the customers to set up the web config as required to meet their own needs. Write some documentation and see what happens. Edit the docs as you help customers through the process.
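For illustration, such a customer transform could use Microsoft's XDT (web.config transform) syntax; the file name and key below are made up:
<!-- Web.Customer.config: overlays customer-specific values onto the generic web.config -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <add key="LogoPath" value="branding/customer-logo.png"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>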
Is that what you were looking for? Thoughts?

getting started with flyway for one with no java experience

I am interested in testing Flyway, and if I am not wrong, I read that it supports DB changes both through Java and SQL. I am a DBA, familiar with SQL but not Java.
I read through the “Getting Started” page and wanted to try out the sample application available under the “Downloads tab” link; however, I couldn’t find any readme document explaining the available downloads, which appeared to contain .jar files.
Q) Is there an instruction manual for newbies explaining how to put together this sample application?
Q) Can one use Flyway without knowing Java? If yes, please provide any how-to URLs/notes/documents available. If not, do you have any how-to for getting started with just enough Java to operate this tool?
Thanks, Bob
I think you might find the command line tool useful:
http://flywaydb.org/documentation/commandline/
As it says on the website:
The Flyway command-line tool is meant for users who
do not run their applications on the JVM
wish to migrate their database from the command-line without having to install Maven
You may need to browse the source code to figure out some more details:
https://github.com/flyway/flyway
Although I think you should be able to adapt the regular documentation to the CLI option.
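For example, driving migrations from the shell looks something like this (connection settings are placeholders; the .sql files just follow Flyway's V<version>__<description>.sql naming convention):
# plain SQL migrations live in ./sql, e.g. sql/V1__create_tables.sql
$ flyway -url=jdbc:mysql://localhost/mydb -user=dba -password=secret -locations=filesystem:sql migrate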
Try starting here:
http://flywaydb.org/getstarted/existingDatabaseSetup.html

Keeping dot files synched across machines?

Like most *nix people, I tend to play with my tools and get them configured just the way that I like them. This was all well and good until recently. As I do more and more work, I tend to log onto more and more machines, and have more and more stuff that's configured great on my home machine, but not necessarily on my work machine, or my web server, or any of my work servers...
How do you keep these config files updated? Do you just manually copy them over? Do you have them stored somewhere public?
I've had pretty good luck keeping my files under a revision control system. It's not for everyone, but most programmers should be able to appreciate the benefits.
Read "Keeping Your Life in Subversion" for an excellent description, including how to handle non-dotfile configuration (like cron jobs via the svnfix script) on multiple machines.
I also use subversion to manage my dotfiles. When I log in to a box, my confs are automagically updated for me. I also use github to store my confs publicly, and use git-svn to keep the two in sync.
Getting up and running on a new server is just a matter of running a few commands. The create_links script just creates the symlinks from the .dotfiles folder items into my $HOME, and also touches some files that don't need to be checked in.
$ cd
# checkout the files
$ svn co https://path/to/my/dotfiles/trunk .dotfiles
# remove any files that might be in the way
$ .dotfiles/create_links.sh unlink
# create the symlinks and other random tasks needed for setup
$ .dotfiles/create_links.sh
It seems like everywhere I look these days I find a new thing that makes me say, "Hey, that'd be a good thing to use Dropbox for".
Rsync is about your best solution. Examples can be found here:
http://troy.jdmz.net/rsync/index.html
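For instance (host and file list are illustrative):
# push selected dotfiles to a remote host over ssh
$ rsync -av ~/.bashrc ~/.vimrc user@server:~/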
I use git for this.
There is a wiki/mailing list dedicated to the topic.
vcs-home
I would definitely recommend homesick. It uses git and automatically symlinks your files. homesick track tracks a new dotfile, while homesick symlink symlinks new dotfiles from the repository into your home folder. This way you can even have more than one repository.
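A typical session might look like this (the castle/repo name is illustrative):
# clone a dotfiles repository ("castle"), track a file, and link everything home
$ homesick clone user/dotfiles
$ homesick track .vimrc dotfiles
$ homesick symlink dotfiles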
You could use rsync. It works through ssh, which I've found useful since I only set up new servers with ssh access.
Or, create a tar file that you move around everywhere and unpack.
I store them in my version control system.
I use svn, having a public and a private repository, so as soon as I get on a server I just
svn co http://my.rep/home/public
and have all my dot files.
I store mine in a git repository, which allows me to easily merge beyond system dependent changes, yet share changes that I want as well.
I keep master versions of the files under CM control on my main machine, and where I need to, arrange to copy the updates around. Fortunately, we have NFS mounts for home directories on most of our machines, so I actually don't have to copy all that often. My profile, on the other hand, is rather complex - and has provision for different PATH settings, etc, on different machines. Roughly, the machines I have administrative control over tend to have more open source software installed than machines I use occasionally without administrative control.
So, I have a random mix of manual and semi-automatic process.
There is netskel, where you put your common files on a web server and a client program maintains the dot-files on any number of client machines. It's designed to run on any level of client machine, so the shell scripts are proper sh scripts with a minimal number of dependencies.
Svn here, too. Rsync or unison would be a good idea, except that sometimes stuff stops working and I wonder what was in my .bashrc file last week. Svn is a life saver in that case.
Now I use Live Mesh which keeps all my files synchronized across multiple machines.
I put all my dotfiles in to a folder on Dropbox and then symlink them to each machine. Changes made on one machine are available to all the others almost immediately. It just works.
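Roughly (paths are illustrative):
# link every dotfile kept in Dropbox into $HOME
$ for f in ~/Dropbox/dotfiles/.[!.]*; do ln -sf "$f" ~/; done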
Depending on your environment you can also use (fully backed-up) NFS shares ...
Speaking of storing dot files publicly, there are
http://www.dotfiles.com/
and
http://dotfiles.org/
But it would be really painful to manually update your files as (AFAIK) none of these services provide any API.
The latter is really minimalistic (no contact form, no information about who made/owns it, etc.).
briefcase is a tool to facilitate keeping dotfiles in git, including those with private information (such as .gitconfig).
By keeping your configuration files in a public git repository, you can share your settings with others. Any secret information is kept in a single file outside the repository (it's up to you to back up and transport this file).
-- http://jim.github.com/briefcase
mackup
https://github.com/lra/mackup
lra/mackup is a utility for Linux & Mac systems that will sync application preferences using almost any popular shared-storage provider (Dropbox, iCloud, Google Drive). It works by replacing the dot files with symlinks.
It also has a large library of hundreds of supported applications: https://github.com/lra/mackup/tree/master/mackup/applications
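Typical usage is just (per the repo's README):
$ mackup backup    # move configs to your storage provider, leaving symlinks behind
$ mackup restore   # on a new machine, symlink the stored configs back into place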
