What is the easiest way to manage the authorized_keys file for OpenSSH across a large number of hosts? If I need to add or revoke a key for an account on, say, 10 hosts, I must log in and edit the public keys manually, or through a clumsy shell script, which is time-consuming.
Ideally there would be a central database linking keys to accounts on machines, with some sort of grouping support (i.e., add this key to username X on all servers in the web category). There's a fork of SSH with LDAP support, but I'd rather use the mainline SSH packages.
I'd check out the Monkeysphere project. It uses OpenPGP's web-of-trust concepts to manage ssh's authorized_keys and known_hosts files, without requiring changes to the ssh client or server.
I use Puppet for lots of things, including this (using the ssh_authorized_key resource type).
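A minimal sketch of what that can look like, assuming Puppet's stock ssh_authorized_key type (the class name, user, and key are placeholders):

    # Hypothetical manifest: authorize one key for user 'deploy' on every
    # node that applies this class; change ensure to 'absent' to revoke it.
    class ssh_keys {
      ssh_authorized_key { 'alice@workstation':
        ensure => present,
        user   => 'deploy',
        type   => 'ssh-rsa',
        key    => 'AAAAB3Nza...', # the key body only, no type or comment
      }
    }

Grouping then falls out of Puppet's normal node classification: apply the class (or a subset of keys) only to the nodes in, say, your web group.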
I've always done this by maintaining a "master" tree of the different servers' keys, and using rsync to update the remote machines. This lets you edit things in one location, push the changes out efficiently, and keeps things "up to date" -- everyone edits the master files, no one edits the files on random hosts.
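For example, the push can be as simple as this hypothetical sketch (host names and paths are placeholders):

    # Push the master copy of one user's authorized_keys file
    # to each host in a group.
    for host in web1 web2 web3; do
        rsync -az master/deploy/authorized_keys \
            "root@${host}:/home/deploy/.ssh/authorized_keys"
    done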
You may want to look at projects which are made for running commands across groups of machines, such as Func at https://fedorahosted.org/func or other server configuration management packages.
Have you considered using clusterssh (or similar) to automate the file transfer? Another option is one of the centralized configuration systems.
/Allan
I am trying to have different security levels for different minions. I already have different pillars, so a secret SSH key for one minion cannot be seen from another.
What I want to attain is this: an easy-to-attack minion, say an edge cloud server run by someone else, must not be able to download or even see the software packages in the file_roots that I am installing on high-security minions in my own data center.
It appears that the Salt file server, apart from overloaded filenames existing in multiple environments, will serve every file to every minion.
Is there really no way, using environments, pillars, or clever file_roots includes, to make certain files inaccessible to a particular minion?
By design the salt file server will serve every file to every minion.
There is something you could do to work around this.
Use a syndic. A minion can only see the file_roots of the master it is directly attached to, so you could have your easy-to-attack minions connect to a specific syndic, but you could still control them from the top level master that the rest of your minions connect directly to.
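A minimal sketch of the relevant settings, assuming a standard syndic setup (host names and paths are placeholders):

    # On the top-level master (/etc/salt/master):
    order_masters: True

    # On the syndic master that the low-trust minions connect to
    # (/etc/salt/master), with its own restricted file_roots:
    syndic_master: top-master.example.com
    file_roots:
      base:
        - /srv/salt-lowtrust   # only what these minions may see

You also run the salt-syndic process on that intermediate master, so the top-level master can still target its minions.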
I have a weird but interesting use case. I use CIFS to mount shares from a file server (NetApp, EMC, etc.) on an application server (the Windows/Linux server where my application runs). My application needs to process each of the files on the shares that I mount via CIFS. It also needs access to the metadata of these files, such as name, size, and ACLs.
I would like to see if I can achieve the same via NDMP. I have some very basic questions regarding this use case. It would be great if you could help me out here.
Is this even achievable?
Can I transfer only shares that are interesting to me instead of the entire volume?
NDMP is essentially an application protocol for controlling backup/restore operations. The protocol is flexible enough to do interesting things like data migration or tape cloning as well.
However, it is not a file access protocol, so "mounting" anything via NDMP isn't possible unless an NDMP server vendor writes an NDMP extension to do so, which would be rather silly given that there are specialized protocols that do just that.
Hope this helps.
NDMP is designed for data management (backup, recovery, etc.) and not as a file access protocol such as CIFS. If your application is a backup application, then yes, you can use NDMP to control and copy subsets of the data from your filer to the application server. Note that the format of the data will be whatever the filer provides (via NDMP) and is not in your control. Hope that helps!
I have a server application that will be running under a system account because at any given time, it will be processing requests on behalf of any user on the system. These requests consist of instructions for manipulating the filesystem.
Here's the catch: the program needs to keep that particular user's privileges in mind when performing the actions. For example, joe should not be able to modify /home/larry if its permissions are 755.
Currently my strategy is this:
Get the owner / group of the file
Compare it to the user ID / group ID of the user trying to perform the action
Depending on which of these matches (owner, group, or neither), use the corresponding part of the file's permission field to either allow or deny the action
Is this wise? Is there an easier way to do this?
At first, I was thinking of having multiple instances of the app running under the user's accounts - but this is not an option because then only one of the instances can listen on a given TCP port.
Take a look at Samba for an example of how this can be done. The Samba daemon runs as root but forks and assumes the credentials of a normal user as soon as possible.
Unix systems have two separate sets of credentials: the real user/group ids and the effective user/group ids. The real set identifies who you actually are, and the effective set defines what you can access. You can change the effective uid/gid as you please if you are root—including to an ordinary user and back again—as your real user/group ids remain root during the transition. So an alternative way to do this in a single process is to use seteuid/gid to apply the permissions of different users back and forth as needed. If your server daemon runs as root or has CAP_SETUID then this will be permitted.
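A minimal sketch of that back-and-forth, assuming the process starts with a real uid of root (error handling is abbreviated, and as_user/action are hypothetical names):

    /* Hypothetical sketch: temporarily adopt a user's credentials via the
     * effective ids, run an action, then return to root. Requires the
     * real uid to be root (or CAP_SETUID/CAP_SETGID). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    static void as_user(uid_t uid, gid_t gid, void (*action)(void))
    {
        /* Drop the group first, while we are still privileged enough
         * to do so, then the user. */
        if (setegid(gid) != 0 || seteuid(uid) != 0) {
            perror("dropping privileges");
            exit(EXIT_FAILURE);
        }
        action();   /* filesystem access is now checked against uid/gid */

        /* Restore root: raise the user id first, then the group. */
        if (seteuid(0) != 0 || setegid(0) != 0) {
            perror("restoring privileges");
            exit(EXIT_FAILURE);
        }
    }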
However, notice that if you have the ability to switch the effective uid/gid at whim and your application is subverted, then that subversion could, for example, switch the effective uid/gid back to 0, and you would have a serious security vulnerability. This is why it is prudent to drop all privileges permanently as soon as possible, including your real uid/gid.
For this reason it is normal and safer to have a single listening socket running as root, then fork off and change both the real and effective user ids by calling setuid. Then the child cannot change back. The forked process still holds the socket that was accept()ed, because it is a fork. Each process just closes the file descriptors it doesn't need; a socket stays alive as long as it is still referenced by a file descriptor in either process.
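A minimal sketch of that pattern, assuming a root-owned listener (handle_request is a hypothetical stand-in for the application logic):

    /* Hypothetical sketch: root parent accepts a connection, the child
     * drops to the target user permanently, then serves the request. */
    #include <grp.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    static void handle_request(int fd) { close(fd); } /* app logic here */

    static void serve_one(int listen_fd, uid_t uid, gid_t gid)
    {
        int conn = accept(listen_fd, NULL, NULL);
        if (conn < 0)
            return;

        if (fork() == 0) {            /* child */
            close(listen_fd);         /* keep only the connection */
            /* Drop privileges irrevocably: supplementary groups,
             * then the gid, then the uid, in that order. */
            if (setgroups(0, NULL) != 0 || setgid(gid) != 0 || setuid(uid) != 0)
                _exit(EXIT_FAILURE);
            handle_request(conn);     /* runs with the user's permissions */
            _exit(EXIT_SUCCESS);
        }
        close(conn);                  /* parent keeps only the listener */
    }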
You could also try to enforce the permissions by examining them individually yourself, but I hope it is obvious that this is potentially error-prone, has lots of edge cases, and is more likely to go wrong (e.g. it won't work with POSIX ACLs unless you specifically implement those too).
So, you have three options:
Fork and setgid()/setuid() to the user you want. If communication is required, use pipe(2) or socketpair(2) before you fork.
Don't fork and seteuid()/setegid() around as needed (less secure: more likely to compromise your server by accident).
Don't mess with system credentials; do permission enforcement manually (less secure: more likely to get authorisation wrong).
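For a sense of what the third option involves, here is a hypothetical sketch of the classic owner/group/other check for write access; note how much it already leaves out (supplementary groups, POSIX ACLs, capabilities), which is exactly why this approach is easy to get wrong:

    #include <stdbool.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Hypothetical, deliberately naive manual enforcement. */
    static bool may_write(const char *path, uid_t uid, gid_t gid)
    {
        struct stat st;
        if (stat(path, &st) != 0)
            return false;
        if (uid == 0)
            return true;                  /* root bypasses mode bits */
        if (st.st_uid == uid)
            return st.st_mode & S_IWUSR;  /* owner bits */
        if (st.st_gid == gid)             /* supplementary groups ignored */
            return st.st_mode & S_IWGRP;  /* group bits */
        return st.st_mode & S_IWOTH;      /* other bits */
    }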
If you need to communicate with the daemon, then although it might be harder to do it down a socket or a pipe, the first option really is the proper, secure way to go about it. See how ssh does privilege separation, for example. You might also consider whether you can change your architecture so that, instead of communicating, the processes just share some memory or disk space.
You mention that you considered having a separate process run for each user, but need a single listening TCP port. You can still do this: have a master daemon listen on the TCP port and dispatch requests to each user daemon, communicating as required (e.g. via Unix domain sockets). This would actually be almost the same as having a forking master daemon; I think the latter would turn out to be easier to implement.
Further reading: the credentials(7) manpage. Also note that Linux has filesystem uids/gids; these behave almost the same as the effective uid/gid, except that they apply only to filesystem access and not to other checks such as sending signals. If your users don't have shell access and cannot run arbitrary code, you don't need to worry about the difference.
I would have my server fork() and immediately setuid(uid) to give up root privileges. Then any file manipulation would be on behalf of the user you've become. Since you're a child of the server you'd still hold the accept()ed child socket that the request (and I assume response) would go on. This (obviously) requires root privilege on the daemon.
Passing file descriptors between processes seems unnecessarily complicated in this case, as the child already has the "request" descriptor.
Let one server run on the privileged server port, and spawn child processes for users that log into the system. The child processes should drop privileges and impersonate the user who logged in. Then the children can no longer do any harm.
I'm looking for a way to set up a server so that the static caches created by the Boost module can easily be mirrored to several other servers.
You COULD use rsync to do this, but it is brittle and liable to break. You would be better off using either:
a single shared network filesystem
or, my recommended solution, a clustered distributed filesystem such as GlusterFS. This is what is generally used on web server clusters for distributing web apps across nodes automagically.
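A minimal sketch of the GlusterFS route, assuming two web nodes with glusterfs-server installed (host names, volume name, and paths are placeholders):

    # Run on web1: create a two-node replicated volume for the cache.
    gluster peer probe web2
    gluster volume create boostcache replica 2 \
        web1:/bricks/boostcache web2:/bricks/boostcache
    gluster volume start boostcache

    # Mount it where the Boost cache lives (repeat on each node):
    mount -t glusterfs web1:/boostcache /var/www/html/cache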
Here are some ideas...
If you want to prevent getting stabbed in the back by your hosting provider, wouldn't it be better to use a solution that doesn't rely on the hosting provider?
My choice would be to use a third-party DNS provider that supports round robin ( http://en.wikipedia.org/wiki/Round_robin_DNS ), or your own DNS server configured to support round robin, which you can also use for crude load balancing.
Round robin lets you publish several A records for the same name, and resolvers rotate among them. Note that plain round robin does not by itself check whether servers are up or down; for failover you need a provider that health-checks the records and withdraws the addresses of servers that are down.
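In a BIND-style zone file, the round-robin part is just multiple A records for the same name (the addresses below are documentation placeholders):

    ; Hypothetical zone fragment: three A records for www.example.com.
    ; Resolvers rotate among them, but dead servers are not skipped
    ; unless your DNS provider health-checks and withdraws records.
    www    300    IN    A    192.0.2.10
    www    300    IN    A    192.0.2.11
    www    300    IN    A    192.0.2.12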
For the static caches I think you could use rsync, but that involves your hosting provider. Maybe a better (though not very resource-efficient) way would be to have clones of your Drupal installation on each server, then sync the databases using MySQL replication (and use cron to regenerate the Boost static cache)... then you would not depend on any single server, because all of them would have the whole site, and round robin would direct your domain to a working server.
What is the best way to change a user's password remotely on Unix?
This must be performed by the user, from a web app or Windows app, without using SSH or any direct connection between the user and the server (a direct command line is not allowed).
Thanks
Webmin seemed to be a good application for that, but I found it extremely hard to configure correctly. My Unix users are unable to log in to Webmin or Usermin.
Do you know any other alternatives to Webmin and Usermin?
Thanks
Use Webmin (more specifically, the Usermin module).
Webmin provides a mini web server, so you just need to install it and configure it slightly. You'll get a lot more than just password changing, and you can remove functionality you don't want the user to have.
@Rich Bradshaw:
Just make sure you don't introduce security issues. The solution should use HTTPS (the password should never be sent in clear text), and it should be protected against shell injection attacks (strip any newlines from the input, escape it properly, etc.). Further details depend on the chosen implementation.
I've done this in the past to change passwords on several servers at once using a script written in Expect. It's perfect for the job, but you will need the servers to be listening for SSH.
Once written, the script will execute on your local workstation, connect to the remote host, do the interaction you've scripted, and then you should be gold. All the while you're using the encryption you already trust, since you're running SSH. Just don't save the passwords in your script: have it prompt you for them (even taking them as a command-line argument is generally considered poor practice).
Expect is a great language too: lots of fun!
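A minimal sketch of such a script, assuming root SSH access and typical Linux passwd prompts (the host, user, and prompt strings are placeholders that vary by system):

    #!/usr/bin/expect -f
    # Hypothetical sketch: set a user's password on a remote host.
    # Usage: ./chpass.exp <host> <user>
    set host [lindex $argv 0]
    set user [lindex $argv 1]

    # Prompt locally rather than hard-coding the password.
    stty -echo
    send_user "New password for $user@$host: "
    expect_user -re "(.*)\n"
    set newpass $expect_out(1,string)
    stty echo
    send_user "\n"

    spawn ssh root@$host passwd $user
    expect "New password:"
    send "$newpass\r"
    expect "Retype new password:"
    send "$newpass\r"
    expect eof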
You could write a server-side script that runs passwd; you could do that in any language that allows shell commands to be run.
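For example, a hypothetical sketch using chpasswd, which reads "user:password" pairs on standard input and must run as root (the user name and password here are placeholders):

    # Set a password non-interactively.
    printf '%s:%s\n' "alice" "new-password-here" | chpasswd

The web app would invoke something like this on the server after authenticating the user; be careful to validate the username so a request cannot inject extra "user:password" lines.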