What should I do, chmod-wise, to secure my server?

I want to let my friend access my server so he can host his website. Let's call him Joris.
# useradd joris
Note that I'm on Debian. So now /home/joris has been created, which is cool and all. BUT he can
cd /
cd /etc/
cd /var/www/
He can cd practically everywhere; maybe he can't delete things, but he can see everything, which I don't want him to. Is that normal?

First, I would suggest reading the Debian Administrator's Handbook, which you can install with aptitude install debian-handbook or find online with a search engine. It covers many security topics that will be of use to you, especially when sharing a server with multiple users.
As far as being able to access various directories goes, Debian is VERY relaxed for my taste with its default permissions setup. Check the default UMASK setting (/etc/login.defs) so that you get a more secure setup when adding users.
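For instance, a minimal sketch of tightening those defaults (the exact modes are a judgement call, and which file applies depends on whether you create users with useradd or adduser):

# in /etc/login.defs -- used by useradd; tighter than the usual 022
UMASK           027
# in /etc/adduser.conf -- used by adduser; keep "other" out of new home directories
DIR_MODE=0750

Home directories that already exist keep their old mode, so fix those by hand, e.g. chmod 750 /home/joris.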
I remove o-rx (read/execute for "other") from directories like /var/www and grant access selectively using Access Control Lists (ACLs). If you are unfamiliar with ACLs, I highly recommend you familiarize yourself with them, as they are much more robust than the default permissions system.
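As an illustration of that approach (joris and /var/www come from the question above; the /var/www/joris vhost directory is just an example, and setfacl/getfacl come from the acl package):

# take read/execute for "other" away from the web root
chmod -R o-rx /var/www
# give joris access to just his own site via an ACL...
setfacl -R -m u:joris:rwX /var/www/joris
# ...and a default ACL so files created later inherit it
setfacl -R -d -m u:joris:rwX /var/www/joris
# check the result
getfacl /var/www/joris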
As for what you should protect, that will depend on your setup. For most things in /etc it will be self-explanatory whether or not you can remove read access for users outside of the owner/group (your web server's configuration directory, for example). You can also use permissions to limit access to specific binaries that users should never need, like mysql or gcc.
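For the binaries, plain group ownership is usually enough; a sketch (the developers group is made up, use whatever group fits your setup):

# only root and members of the developers group may run gcc
chown root:developers /usr/bin/gcc
chmod 0750 /usr/bin/gcc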
In the long run your setup will be unique to your specific needs. Reading the Debian Handbook will be immensely helpful in securing your box not only from the outside, but from the inside as well.
Hope this helps point you in the right direction.

Related

WordPress - Hardened permissions with automatic updates?

Is there a way to allow WordPress to automatically update while still using hardened permissions?
It seems the recommended security setup for WordPress is to use hardened permissions, which are mostly achieved using the permissions given in this answer. However, these permissions leave WordPress unable to update automatically, or to update through the administrator web interface, which results in an error:
Downloading update from https://downloads.wordpress.org/release/wordpress-x.x.x-partial-x.zip…
Unpacking the update…
The update cannot be installed because we will be unable to copy some files. This is usually due to inconsistent file permissions.: wp-admin/includes/update-core.php
Installation Failed
By allowing the web server to update update-core.php we violate the hardened permissions (as far as I can tell). Unfortunately, without automatic updates we also don't get automatic security updates, which is a security problem in itself. Is there a way to allow automatic updates without the need for weak permissions? What are the strongest permissions that can be used while still allowing automatic updates, and are they strong enough?
The Hardening WordPress guide describes what a secure setup looks like and recommends automatic updates, but conveniently omits that the former doesn't work with the latter.
To my knowledge, every admin just has a very unpleasant choice to make:
Keep the hardened permissions, which means keeping on top of every single minor update and changing permissions back and forth to apply it
Loosen permissions in a non-documented way and risk the associated increased insecurities
As somebody who primarily deals with automation, I just can't get behind the manual approach. It seems like less of a risk, but only if you never let an update go unattended for a week or two; at that point the risk from the unpatched vulnerabilities is arguably higher than it would have been with the looser permissions.
Here's the extract that I use to switch to "insecure" mode for the few seconds it takes to update (and that I'll be using until something better comes along or my patience with this manual approach ends):
sudo chown -R <wordpress_user> <wp_rootdir>; read; sudo chown -R <myuser> <wp_rootdir>
It changes the owner of everything to the user that runs WordPress, and uses the read command simply to pause until you press Enter, after which it restores the original owner.
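Spelled out as a small script (same idea as the one-liner above; the user names and path are placeholders for whatever applies on your box):

#!/bin/sh
# temporarily hand the tree to the web user, wait for the dashboard
# update to finish, then lock everything down again
WP_USER=www-data          # the user PHP/WordPress runs as (assumption)
MY_USER=deploy            # the normal owner of the files (assumption)
WP_ROOT=/var/www/wordpress

sudo chown -R "$WP_USER" "$WP_ROOT"
printf 'Run the update in wp-admin now, then press Enter to re-harden... '
read dummy
sudo chown -R "$MY_USER" "$WP_ROOT"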
TL;DR: No, there is only a choice between two extremes.

Vagrant shared/synced folders permissions

From my research I understand that VirtualBox synced folders have their permissions set up during the mounting process. I am unable to change them later, so the permissions MUST be the same for every single file/folder in the shared folder. When I try to change them, with or without superuser permissions, the changes are reverted straight away.
How can this work with, for example, the Symfony PHP framework, where different files/folders need different permissions? (e.g. app/console needs execute rights, but I don't want 7XX everywhere).
I found in a different but similar question (Vagrant and symfony2) that I could set the permissions to 777 for everything in the Vagrantfile; however, this is not desirable, as I keep my source code in Git and it is then deployed to the live environment. Running everything as 777 in production is, to put it nicely, not correct.
How do you cope with this? What are your permissions setups?
A possible solution could be using the rsync synced folder strategy, along with the vagrant rsync and vagrant rsync-auto commands.
This way you lose bidirectional sync, but you can manage file permissions and ownership.
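A minimal Vagrantfile sketch of that strategy (the guest path and the exclude list are just examples):

# Vagrantfile -- one-way rsync instead of the default VirtualBox share,
# so files inside the guest keep whatever permissions you give them there
Vagrant.configure("2") do |config|
  config.vm.synced_folder ".", "/var/www/project",
    type: "rsync",
    rsync__exclude: [".git/", "var/cache/"]
end

Then vagrant rsync pushes changes once and vagrant rsync-auto keeps watching for them.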
I am in a similar situation. I started using Vagrant mount options, and found out that as I upgraded parts of my tech stack (Kernel, Virtualbox, Vagrant, Guest Additions) I started getting different behavior while trying to set permissions in synced folders.
At some point, I was perfectly fine updating a few of the permissions in my shell provisioner. At first, the changes were being reflected in the guest and the host. At another point in time, it was being done the way I expected, with the changes being reflected only in the guest and not the host file-system. After updating the kernel and VB on my host, I noticed that permission changes in the guest are being reflected on the host only.
I was trying to use DKMS to compile VBOX against an older version of my Kernel. No luck yet.
Now that I have a little more experience, I can actually answer this question.
There are three solutions to this problem:
Use Git on your host system, because Vagrant's basic shared folder setup somehow forces 777 (at least on Windows hosts)
Use Vagrant's NFS shared folder option, shown in the sketch below (not available on Windows hosts out of the box)
Configure a more complex rsync setup as mentioned in Emyl's answer (slower sync speeds).
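For the NFS option, a rough Vagrantfile sketch (NFS requires a private network on the VM; the IP and mount options here are only examples):

# Vagrantfile -- NFS synced folder; the host exports the directory and
# the guest mounts it, so permissions behave like a normal NFS mount
Vagrant.configure("2") do |config|
  config.vm.network "private_network", ip: "192.168.56.10"
  config.vm.synced_folder ".", "/var/www/project",
    type: "nfs",
    mount_options: ["rw", "tcp"]
end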

Where to put a new ASP.NET website?

Where's the best place for a production ASP.NET application? I mean a place that needs the least permission manipulation on folders, and preferably the experts' choice.
Under C:\inetpub\wwwroot, C:\inetpub, or elsewhere?
In the development/test phases I usually put it under C:\inetpub\wwwroot and create a new web application without setting bindings. But for the production version with bindings, I'm not sure what the right place is.
You can put it anywhere you like; the key thing is to ensure that the app pool it runs under is set to run as a low-privileged user (like NT AUTHORITY\NETWORK SERVICE), then ensure that user has Read (and possibly Browse, if you want it) permissions on the folder you put your web app in. Very seldom (if ever) will the user need Write or Modify permissions on the folder.
And on a new system I had a lot of problems modifying batch files and setting permissions.
Setting permissions should not be a problem; you should set the same basic permissions I mentioned above for the user you want to run the app pool as. You can use PowerShell or WMI for this, and you should use the same permissions no matter what folder you install into.
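For example, from an elevated prompt, icacls can grant the app pool identity read & execute on the site folder, inherited by subfolders (CI) and files (OI). The folder and app pool name here are made up; substitute your own:

icacls "D:\WebSites\www.Foo.com" /grant "IIS AppPool\FooAppPool:(OI)(CI)RX"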
You could always wrap all this up into an installer, then it can be as simple as hitting Next.. Next... Finish... in an installer wizard to set up your website on any machine. Doing this in an installer also gives you some certainty that nothing has been missed.
Personally I have a 'Development' folder on my D: drive, which is then subdivided into different categories depending on the work. I generally don't use the inetpub directory, and any permission issues I come across I just fix directly on the relevant folder within my own development structure.
On production environments I've used in the past, we've generally done the same thing. Mainly to help backup scenarios really, but also because there's no strict need to use the default IIS directories - you're free to structure things how you like.
Personally, I always create a new folder (in the root of a drive) called WebSites. I then make sure it has the appropriate permissions for the website process(es) (aka App Pools).
eg.
C:\
|_ WebSites
   |_ www.Foo.com
   |_ www.Bah.com
It also makes it easier to manage because you don't have to hunt through the folder structures to find any/all websites.
But technically, it can be (more or less) anywhere - just needs to have the correct permissions set.
Bonus Answer
I also remove the Default Web Site from IIS, which in effect means I can also delete c:\inetpub\wwwroot.
You can put the website anywhere on the server's hard disk; just make sure it is a secure folder. I also recommend not putting it on the same drive as the OS, in case the OS fails and you need to format that drive.
C:\inetpub\wwwroot and C:\inetpub are just the default places, nothing more.
It really depends on how the production server is configured and how the operations team likes to run things over there. Typically we set up a second "data" drive on servers for a few reasons:
a) Back in the old days, there were a lot of canonicalization attacks where the attacker would try to navigate from c:\inetpub to c:\winnt\cmd.exe. Putting things on a different drive prevented this sort of thing.
b) Recovery -- if the OS gets hosed, you can pretty easily reinstall/reimage or move the data disks to another box and get things stood up fast.
c) It's typically a lot easier to do things like swap the non-OS disk in case you need more disk space or faster disks or whatever.
Basically, off the OS drive is a good idea. Though virtualization and modern deployment tools make lots of this matter less.

How can I edit/update the hosts file (/etc/hosts) using any programming language?

I want to dynamically edit/update the hosts file (/etc/hosts) to add a domain.
Editing the hosts file (/etc/hosts) requires admin privileges. On Linux I can do this with this command:
sudo gedit /etc/hosts
But I am trying to do this from a programming language.
How can I do it?
Your best bet is to use something like SSH to connect to the computer as root (or to run sudo in a system() call), modify the file, then disconnect. The added advantage of this is the convenience of prompting the user for the password.
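As a shell sketch of that approach (the address and hostname are placeholders; any language's system()/exec equivalent can run the same commands):

# append an entry over ssh as root
ssh root@server 'echo "192.0.2.10  myapp.example" >> /etc/hosts'
# or locally, letting sudo prompt for the password
echo "192.0.2.10  myapp.example" | sudo tee -a /etc/hosts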
To do this without prompting, the user would have to set up some means to accomplish it as root, e.g. setuid'ing a helper application, installing a password-less key, modifying an LDAP tree, or various other ways. That's a little 'icky', for lack of a better term.
There's no way to make this work for a user who does not normally have privilege escalation capabilities.
Your program will need to be running with appropriate privileges. One of the classic techniques is to make the file owned by root and set the setuid bit. When your program is run, it will become root and will be able to modify /etc/hosts.
That said, setuid code is risky. A bug in the code can cause the program to do something so bad that your system becomes unusable. Certain bugs can be used by malicious users to run arbitrary programs as root and take over your system.
You still must have the right permissions to edit the file.
To change the file, open it in append mode (i.e. mode "a" with fopen()) and write the new text to the end of the file.
I'm assuming you are at the command prompt, where you could issue that sudo command ..
Provided you have the access rights, as you claim you do, any programming language that can add a line of text to an existing text file (or create it if it doesn't exist, which is unlikely) will work. You might have to give that program some additional rights, but that's a different topic!
Summary: what language do you know? => use that!

Keeping dot files synched across machines?

Like most *nix people, I tend to play with my tools and get them configured just the way that I like them. This was all well and good until recently. As I do more and more work, I tend to log onto more and more machines, and have more and more stuff that's configured great on my home machine, but not necessarily on my work machine, or my web server, or any of my work servers...
How do you keep these config files updated? Do you just manually copy them over? Do you have them stored somewhere public?
I've had pretty good luck keeping my files under a revision control system. It's not for everyone, but most programmers should be able to appreciate the benefits.
Read Keeping Your Life in Subversion for an excellent description, including how to handle non-dotfile configuration (like cron jobs via the svnfix script) on multiple machines.
I also use subversion to manage my dotfiles. When I login to a box my confs are automagically updated for me. I also use github to store my confs publicly. I use git-svn to keep the two in sync.
Getting up and running on a new server is just a matter of running a few commands. The create_links script just creates the symlinks from the .dotfiles folder items into my $HOME, and also touches some files that don't need to be checked in.
$ cd
# checkout the files
$ svn co https://path/to/my/dotfiles/trunk .dotfiles
# remove any files that might be in the way
$ .dotfiles/create_links.sh unlink
# create the symlinks and other random tasks needed for setup
$ .dotfiles/create_links.sh
It seems like everywhere I look these days I find a new thing that makes me say "Hey, that'd be a good thing to use DropBox for"
Rsync is about your best solution. Examples can be found here:
http://troy.jdmz.net/rsync/index.html
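A typical invocation looks like this (the host and the list of files are just an example):

# push a few dot files to another machine over ssh
rsync -av ~/.bashrc ~/.vimrc ~/.gitconfig user@workserver:~/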
I use git for this.
There is a wiki/mailing list dedicated to the topic.
vcs-home
I would definitely recommend homesick. It uses git and automatically symlinks your files. homesick track tracks a new dotfile, while homesick symlink symlinks new dotfiles from the repository into your home folder. This way you can even have more than one repository.
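A rough workflow, assuming a dotfiles repository on GitHub (the repository name is illustrative):

$ gem install homesick
$ homesick clone yourname/dotfiles     # fetch your "castle" from GitHub
$ homesick track ~/.vimrc dotfiles     # move .vimrc into the castle and symlink it
$ homesick symlink dotfiles            # symlink everything in the castle into $HOME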
You could use rsync. It works through ssh which I've found useful since I only setup new servers with ssh access.
Or, create a tar file that you move around everywhere and unpack.
I store them in my version control system.
I use svn ... having a public and a private repository ... so as soon as I get on a server I just
svn co http://my.rep/home/public
and have all my dot files ...
I store mine in a git repository, which allows me to easily merge beyond system dependent changes, yet share changes that I want as well.
I keep master versions of the files under CM control on my main machine, and where I need to, arrange to copy the updates around. Fortunately, we have NFS mounts for home directories on most of our machines, so I actually don't have to copy all that often. My profile, on the other hand, is rather complex - and has provision for different PATH settings, etc, on different machines. Roughly, the machines I have administrative control over tend to have more open source software installed than machines I use occasionally without administrative control.
So, I have a random mix of manual and semi-automatic process.
There is netskel, where you put your common files on a web server and the client program maintains the dot files on any number of client machines. It's designed to run on any level of client machine, so the shell scripts are proper sh scripts and have a minimal number of dependencies.
Svn here, too. Rsync or unison would be a good idea, except that sometimes stuff stops working and I wonder what was in my .bashrc file last week. Svn is a life saver in that case.
Now I use Live Mesh which keeps all my files synchronized across multiple machines.
I put all my dotfiles in to a folder on Dropbox and then symlink them to each machine. Changes made on one machine are available to all the others almost immediately. It just works.
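For instance (the paths assume the default Dropbox location):

mkdir -p ~/Dropbox/dotfiles
mv ~/.vimrc ~/Dropbox/dotfiles/vimrc
ln -s ~/Dropbox/dotfiles/vimrc ~/.vimrc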
Depending on your environment you can also use (fully backupped) NFS shares ...
Speaking of storing dot files publicly, there are http://www.dotfiles.com/ and http://dotfiles.org/
But it would be really painful to manually update your files, as (AFAIK) neither of these services provides any API.
The latter is really minimalistic (no contact form, no information about who made/owns it, etc.)
briefcase is a tool to facilitate keeping dotfiles in git, including those with private information (such as .gitconfig).
By keeping your configuration files in a public git repository, you can share your settings with others. Any secret information is kept in a single file outside the repository (it's up to you to back up and transport this file).
-- http://jim.github.com/briefcase
mackup
https://github.com/lra/mackup
lra/mackup is a utility for Linux & Mac systems that will sync application preferences using almost any popular shared-storage provider (Dropbox, iCloud, Google Drive). It works by replacing the dot files with symlinks.
It also has a large library of hundreds of supported applications: https://github.com/lra/mackup/tree/master/mackup/applications
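Basic usage looks like this (Dropbox is the default storage backend):

$ pip install mackup
$ mackup backup     # move supported config files into the shared storage and symlink them
# later, on a new machine:
$ mackup restore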
