Getting varstack pillar data for different hosts - salt-stack

I'm using varstack (https://github.com/conversis/varstack) as an external pillar for Salt. The idea, much like Hiera, is to produce different pillar data for different hosts, and the setup works great for regular use.
Now I want to configure icinga2 to monitor all hosts present in Salt, and pull each host's information from varstack/pillar so I can use it in that host's icinga2 configuration files. For now I've set up Salt Mine, which at least lets me add all the hosts to icinga2, but I still want some information from varstack for each host.
Does anyone have any idea how to do this?
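One way to wire this up, sketched here under a few assumptions: publish the pillar data into the Salt Mine (for example by adding pillar.items to each minion's mine_functions) and read it back when rendering the icinga2 configuration. The minion id icinga2-server and the monitoring pillar key below are hypothetical.

    # Sketch: pull mine-published pillar data for every minion so it can be
    # fed into per-host icinga2 definitions. Assumes mine_functions on each
    # minion publishes pillar.items.
    import salt.client

    local = salt.client.LocalClient()

    # Run mine.get on the minion hosting icinga2, asking for the mine data
    # of all minions.
    mine_data = local.cmd("icinga2-server", "mine.get", ["*", "pillar.items"])

    for icinga_minion, per_host in mine_data.items():
        for minion_id, pillar in per_host.items():
            # "monitoring" is a hypothetical pillar key produced by varstack.
            print(minion_id, pillar.get("monitoring", {}))

Keep in mind that anything published to the mine is readable by every minion, so pushing full pillar.items may expose more than you want; a narrower custom mine function that returns only the monitoring-related keys is safer.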

Related

What was the tool to build a UUCP path database

This is truly a blast from the past, but there's a need again....
Does anyone remember the old code that let us use UUCP maps to build a path database from our host to somewhere? We don't use this today because we don't need it, but there is a BITNET-style network for hobbyist mainframes called HNet, and it still has the "hosts" file and IP addresses.
If we can find the old C code, we can easily modify it to map hostnames and IPs. Does anyone remember?

Collect data for Bosun from multiple endpoints

In the observability system we're building from scratch, we'd like to have a single scollector instance collect data from all the web servers and send it to Bosun, instead of having an instance of scollector on each server.
Do you know if there's a way to achieve that?
Scollector is implemented as an agent, similar to OpenTSDB's tcollector. It's lightweight and doesn't cause too much overhead on the hosts.
If you want all the data that scollector is capable of collecting forwarded to Bosun, there needs to be one agent on each host you monitor. Scollector makes use of procfs and similar interfaces, which are only accessible on the host itself.
You can also create your own external collectors that scollector will invoke for you.
With that, depending on your use case, you might be able to collect data from remote hosts, but scollector is really designed to run as an agent on every host and collect the data locally.
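If you do end up pulling a few metrics from remote hosts, the external collectors mentioned above are the usual escape hatch. A rough sketch, assuming a script placed in the directory scollector's ColDir setting points at and emitting the tcollector-style "metric timestamp value tag=value" lines; the hostnames, health endpoint, and metric names are invented for illustration:

    #!/usr/bin/env python3
    # Sketch of an external collector: poll a few remote web servers over
    # HTTP and emit one availability and one latency datapoint per host.
    import time
    import urllib.request

    HOSTS = ["web1.example.com", "web2.example.com"]  # hypothetical targets

    now = int(time.time())
    for host in HOSTS:
        start = time.time()
        try:
            urllib.request.urlopen("http://%s/healthz" % host, timeout=5).read()
        except Exception:
            print("web.health.up %d 0 host=%s" % (now, host))
        else:
            latency_ms = (time.time() - start) * 1000.0
            print("web.health.up %d 1 host=%s" % (now, host))
            print("web.health.latency_ms %d %.1f host=%s" % (now, latency_ms, host))

This only covers what the remote endpoints expose over the network; OS-level metrics (procfs and friends) still need an agent on the host itself.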

SaltStack File Server Access Control

I am trying to have different security levels for different minions. I already have different pillars, so a secret SSH key for one minion cannot be seen by another.
What I want to achieve is that an easy-to-attack minion, say an edge cloud server run by someone else, cannot download or even see the software packages in the file_roots that I am installing on high-security minions in my own data center.
It appears that the Salt file server, apart from overloaded filenames existing in multiple environments, will serve every file to every minion.
Is it really not possible in any way, using environments, pillars, or clever file_roots includes, to make certain files inaccessible to a particular minion?
By design the salt file server will serve every file to every minion.
There is something you could do to work around this.
Use a syndic. A minion can only see the file_roots of the master it is directly attached to, so you could have your easy-to-attack minions connect to a specific syndic, but you could still control them from the top level master that the rest of your minions connect directly to.

Defining Apigee Target Servers hosts (the right way) in an HA Architecture

I'm on 14.04 On-Prem
I have an Active and DR setup
see here: http://www.slideshare.net/michaelgeiser/apigee-dc-failover
When I fail over to the DR site, I update my DNS entry (at Akamai)
Virtual hosts work fine; Target Servers are giving me a headache
How can I set up and work with the Target Servers so I do not have to modify the Proxy API bundle but still have traffic flow to the right VIP based on the DC?
I prefer not to do something like MyService-Target-dc1 and MyService-Target-dc2 and use the deploy script to modify the target name in the bundle.
I do not want to have a JavaScript policy that modifies the target or anything else in the Proxy API; I need to define this in the environment setup.
I also cannot put the two DCs each into a separate Org; I need to use the same API Key when I move between the Active and DR sites; different Orgs mean different API Keys (right?).
TIA
One option is to modify the DNS lookup on each set of MPs per DC so that a name like 'myservice.target.dc' resolves to a different VIP. You'll of course want to document this well, especially since it is external to the Apigee product.
I know you weren't too keen on modifying the target, but if you were open to that option, you could try using the host header of an ELB in front (if you have one) or client IP address (e.g., in geo-based routing) to identify which DC a call is passing through. From there, you can modify the target URL.
And yes, different Orgs do mean different API keys.
You can try Named Target Servers. They are part of the load-balancing function, but you can set them up individually and have different targets for different environments. See:
Load Balancing Across Backend Servers
http://apigee.com/docs/api-services/content/load-balancing-across-backend-servers
Create a Named Target Server
http://apigee.com/docs/management/apis/post/organizations/%7Borg_name%7D/environments/%7Benv_name%7D/targetservers
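A sketch of how that could look against the management API referenced above, assuming each DC has its own management endpoint; the endpoints, org/env names, VIP hostnames, and credentials are placeholders. The same TargetServer name is created in each DC, pointing at that DC's VIP, so the proxy bundle references only the name and deploys unchanged to both sites.

    # Sketch: define the same named TargetServer in each DC, pointing at the
    # local VIP. Endpoints, org/env names, and credentials are placeholders.
    import requests

    DEPLOYMENTS = {
        "https://mgmt.dc1.example.com/v1": "vip.dc1.example.com",
        "https://mgmt.dc2.example.com/v1": "vip.dc2.example.com",
    }

    for mgmt, vip in DEPLOYMENTS.items():
        resp = requests.post(
            "%s/organizations/myorg/environments/prod/targetservers" % mgmt,
            json={"name": "MyService-Target", "host": vip, "port": 443, "isEnabled": True},
            auth=("admin@example.com", "secret"),
        )
        resp.raise_for_status()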

Managing authorized_keys on a large number of hosts

What is the easiest way to manage the authorized_keys file for OpenSSH across a large number of hosts? If I need to add or revoke a key for an account on, say, 10 hosts, I must log in and add the public key manually, or through a clumsy shell script, which is time consuming.
Ideally there would be a central database linking keys to accounts#machines with some sort of grouping support (i.e., add this key to username X on all servers in the web category). There's a fork of SSH with LDAP support, but I'd rather use the mainline SSH packages.
I'd checkout the Monkeysphere project. It uses OpenPGP's web of trust concepts to manage ssh's authorized_keys and known_hosts files, without requiring changes to the ssh client or server.
I use Puppet for lots of things, including this.
(using the ssh_authorized_key resource type)
I've always done this by maintaining a "master" tree of the different servers' keys, and using rsync to update the remote machines. This lets you edit things in one location, push the changes out efficiently, and keeps things "up to date" -- everyone edits the master files, no one edits the files on random hosts.
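A minimal sketch of that layout, assuming one file per user per host under a keys/ tree (keys/<hostname>/<user> holding the authorized_keys content for that account); the paths are just one way to organize it:

    # Push each per-host, per-user authorized_keys file from the master tree
    # out to the matching account. Layout is illustrative; -a preserves the
    # (ideally 0600) permissions of the files in the master tree.
    import os
    import subprocess

    KEYS_ROOT = "keys"  # e.g. keys/web01.example.com/deploy

    for hostname in sorted(os.listdir(KEYS_ROOT)):
        host_dir = os.path.join(KEYS_ROOT, hostname)
        for user in sorted(os.listdir(host_dir)):
            src = os.path.join(host_dir, user)
            dest = "%s@%s:.ssh/authorized_keys" % (user, hostname)
            subprocess.check_call(["rsync", "-a", src, dest])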
You may want to look at projects designed for running commands across groups of machines, such as Func at https://fedorahosted.org/func, or other server configuration management packages.
Have you considered using clusterssh (or similar) to automate the file transfer? Another option is one of the centralized configuration systems.
/Allan
