I am trying to have different security levels for different minions. I already have different pillars, so a secret SSH key for one minion cannot be seen from another.
What I want to achieve is that an easy-to-attack minion, say an edge cloud server run by someone else, cannot download or even see the software packages in the file_roots that I am installing on high-security minions in my own data center.
It appears that the Salt file server, apart from overloaded filenames existing in multiple environments, will serve every file to every minion.
Is it really not possible in any way, using environments, pillars, or clever file_roots includes, to make certain files inaccessible to a particular minion?
By design, the Salt file server will serve every file to every minion.
There is something you could do to work around this.
Use a syndic. A minion can only see the file_roots of the master it is directly attached to, so you could have your easy-to-attack minions connect to a specific syndic, while still controlling them from the top-level master that the rest of your minions connect to directly.
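A rough sketch of what that could look like (hostnames and paths are placeholders, and you would also run the salt-syndic process on the syndic node):

```yaml
# /etc/salt/master on the top-level master (in your data center)
order_masters: True          # expect syndics below this master

# /etc/salt/master on the syndic node (serving the easy-to-attack minions)
syndic_master: top-master.example.com
file_roots:
  base:
    - /srv/salt-edge         # only the files those minions are allowed to see

# /etc/salt/minion on the easy-to-attack minions
master: syndic.example.com
```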
I have more than 900 receive locations associated with the same host.
All receive locations are enabled, but sometimes some of them stop working (while still showing as enabled).
When I disable and re-enable one of them, that receive location works again, but another one then runs into trouble.
Are there any known limitations of the number of receive locations that can be associated with the same host in BizTalk 2016?
I don't know if there is a documented limit, but if you associate all the receive locations with the same Host, your problems are probably due to the throttling mechanism.
While there are no hard limits to Receive Locations or Send Ports, there are still practical limits based on available resources.
900 is a lot for a single Host. Even if everything was running perfectly, I would still break that up across ~3 Hosts.
If these are File Receive Locations, there are other techniques to reduce the number even further. Some options:
Use a Windows Task Scheduler task to move files from the various locations into fewer locations, or maybe just one. If 'source' information is necessary, you can add a tag to the file name which can be extracted in a custom Pipeline Component (see the sketch after this list).
Modify the sample File Adapter in the SDK to scan sub-folders as well. You can combine this with option 1 if you cannot modify the filename for some reason.
Similar to option 1, the script can write a metadata file containing any data you need to preserve before moving the file. The metadata can then be read in a Pipeline Component.
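For option 1, the move step could be as small as the following sketch (paths and the tag format are made up; on Windows this would typically be a PowerShell or batch script run from Task Scheduler, but the idea is the same):

```python
import shutil
from pathlib import Path

# Hypothetical source folders, keyed by the tag we want to preserve in the file name.
SOURCES = {
    "vendorA": Path(r"\\fileserver\inbound\vendorA"),
    "vendorB": Path(r"\\fileserver\inbound\vendorB"),
}
TARGET = Path(r"\\fileserver\inbound\consolidated")  # the single BizTalk receive folder

def consolidate():
    for tag, source in SOURCES.items():
        for f in source.glob("*"):
            if f.is_file():
                # Prefix the original file name with the source tag so a custom
                # pipeline component can later recover where the file came from.
                shutil.move(str(f), str(TARGET / f"{tag}__{f.name}"))

if __name__ == "__main__":
    consolidate()
```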
I am trying to run a multi-master setup in our dev environment.
The idea is that every dev team has their own salt master. However, all minions in the entire dev environment should be able to receive salt commands from all salt master servers.
Since not every team needs their salt master 24/7, most of them are turned off for several days during the week.
I'm running 2016.11.4 on the masters, as well as on the minions.
However, I run into the following problem: if one of the masters listed in the minion's config file is shut down, the minion will not always report back on a 'test.ping' command (not even with -t 60).
My experience is that the more masters are offline, the longer the minion takes to answer requests.
This is especially true if you execute a 'test.ping' on MasterX while the minion's log is at this point:
2017-05-19 08:31:44,819 [salt.minion ][DEBUG ][5336] Connecting to master. Attempt 4 (infinite attempts)
If I trigger a 'test.ping' at this point, chances are 50/50 that I will get a 'minion did not return' on my master.
Obviously though, I always want a return to my 'test.ping', regardless of which master I send it from.
Can anybody tell me if what I am trying to do is feasible with Salt? All the articles about Salt multi-master setups that I could find only say: 'put a list of master servers into the minion config and that's it!'
The comment from gtmanfred answered my question:
That is not really the way multi master is meant to work. It is supposed to be used more for failover and not for separating out teams.
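For reference, a failover-style minion configuration, which is the use case multi-master is designed around, would look roughly like this (hostnames are placeholders):

```yaml
# /etc/salt/minion
master:
  - salt-master-team1.example.com
  - salt-master-team2.example.com
master_type: failover         # talk to one master at a time, switch only when it goes away
master_alive_interval: 30     # seconds between checks that the current master is still reachable
```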
I'm using varstack (https://github.com/conversis/varstack) as an external pillar for Salt. The idea, much like Hiera, is to produce different pillar data for different hosts, and the setup works great for regular use.
Now I want to configure icinga2 to monitor all hosts present in Salt, and pull their respective information from varstack/pillar so that it can be used in the configuration files for each host in icinga2. For now I've set up Salt Mine, which enables me to at least add all the hosts to icinga2, but I still want some information from varstack for each host.
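Roughly, the mine-based part I have so far looks like this (the mine function and the template are only illustrative):

```yaml
# pillar (or minion config): publish each minion's addresses to the mine
mine_functions:
  network.ip_addrs: []
```

```jinja
# Jinja template for an icinga2 hosts.conf, rendered on the icinga2 server via file.managed
{% for minion, addrs in salt['mine.get']('*', 'network.ip_addrs') | dictsort %}
object Host "{{ minion }}" {
  address = "{{ addrs | first }}"
  check_command = "hostalive"
}
{% endfor %}
```

What I'm missing is a way to also pull the varstack/pillar data for each of those hosts into that template.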
Does anyone have any idea how to do this?
I have an application that processes files in a directory and moves them to another directory along with the processed output. Nothing special about that. An interesting requirement was introduced:
Implement fault tolerance and processing throughput by allowing multiple remote instances to work on the same file store.
Additional considerations are that we cannot assume a particular file system, as we support both Windows and NFS.
Of course the problem is: how do I make sure that the different instances do not try to process the same work, potentially corrupting it or reducing throughput? File locking can be problematic, especially across network shares. We could use a more sophisticated method, such as a simple database or a messaging framework (a la JMS or similar), but the entire cluster needs to be fault tolerant. We can't have a single database or messaging provider because of the single point of failure it introduces.
We've implemented a solution that uses multicast messages to self-discover processing instances and elect a supervisor who assigns work. There's a timeout in case the supervisor goes down, after which another election takes place. Our networking library, however, isn't very mature, and our implementation of the messaging is clunky.
My instincts, however, tell me that there is a simpler way.
Thoughts?
I think you can safely assume that rename operations are atomic on all the network file systems that you care about. So arrange for a unit of work to be a single file (or keyed to a single file), then have each server list the directory containing new work, pick a piece of work, and rename the file to include its own server name (say, machine name or IP address). Of the instances that concurrently attempt the same rename, exactly one will succeed, and that instance should then process the work. For the others the rename will fail, so they should pick a different file from the listing they got.
For creation of new work, assume that directory creation (mkdir) is atomic, but file creation is not (a second writer might silently overwrite an existing file). So if there are also multiple producers of work, create a new directory for each piece of work.
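A minimal sketch of both ideas, assuming one file (or one directory) per unit of work under a shared directory; all names and paths are illustrative:

```python
import os
import socket
from pathlib import Path

WORK_DIR = Path("/mnt/shared/work")              # shared directory, one file per unit of work
CLAIM_SUFFIX = ".claimed-" + socket.gethostname()

def claim_next_file():
    """Try to claim one unclaimed work file; return its new path, or None if nothing is left."""
    for entry in sorted(WORK_DIR.iterdir()):
        if not entry.is_file() or ".claimed-" in entry.name:
            continue
        claimed = entry.with_name(entry.name + CLAIM_SUFFIX)
        try:
            # rename is assumed atomic on the network file systems in use: of the
            # instances racing to rename the same source file, exactly one succeeds.
            os.rename(entry, claimed)
            return claimed
        except OSError:
            # Another instance renamed it first; try the next candidate.
            continue
    return None

def publish_work(work_id, payload):
    """Create new work as its own directory, since mkdir is atomic but file creation is not."""
    work_dir = WORK_DIR / work_id
    work_dir.mkdir()                             # fails if another producer already created it
    (work_dir / "input.dat").write_bytes(payload)
```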
What is the easiest way to manage the authorized_keys file for OpenSSH across a large number of hosts? If I need to add or revoke a key for an account on, say, 10 hosts, I must log in and edit the public keys manually, or use a clumsy shell script, which is time consuming.
Ideally there would be a central database linking keys to accounts#machines with some sort of grouping support (i.e. add this key to username X on all servers in the web category). There's a fork of SSH with LDAP support, but I'd rather use the mainline SSH packages.
I'd check out the Monkeysphere project. It uses OpenPGP's web-of-trust concepts to manage ssh's authorized_keys and known_hosts files, without requiring changes to the ssh client or server.
I use Puppet for lots of things, including this.
(using the ssh_authorized_key resource type)
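For reference, a resource of that type looks roughly like this (user and key are placeholders):

```puppet
# Ensure this public key is present in ~deploy/.ssh/authorized_keys
ssh_authorized_key { 'alice@laptop':
  ensure => present,
  user   => 'deploy',
  type   => 'ssh-rsa',
  key    => 'AAAAB3NzaC1yc2E...',   # key body only, without the type prefix or comment
}
```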
I've always done this by maintaining a "master" tree of the different servers' keys and using rsync to update the remote machines. This lets you edit things in one location, push the changes out efficiently, and keep things "up to date" -- everyone edits the master files, no one edits the files on random hosts.
You may want to look at projects which are made for running commands across groups of machines, such as Func at https://fedorahosted.org/func or other server configuration management packages.
Have you considered using clusterssh (or similar) to automate the file transfer? Another option is one of the centralized configuration systems.
/Allan