access(2) system call security issue - unix

The access(2) man page says:
CAVEAT
Access() is a potential security hole and should never be used.
But what is the security hole, and why should I not use it?

From my system's man pages:
Warning: Using access() to check if a user is authorized to, for example, open a file before actually doing so using open(2) creates a security hole, because the user might exploit the short time interval between checking and opening the file to manipulate it. For this reason, the use of this system call should be avoided. (In the example just described, a safer alternative would be to temporarily switch the process's effective user ID to the real ID and then call open(2).)
So, the problem is that it creates a race condition that can be exploited by the user to gain access to other files.
Imagine the following example scenario. I create a file /tmp/file that I am allowed to write to. Then, your uid-0 program calls access() to check whether I am allowed to open this file for writing, before providing me write access to it.
In the short window between the calls to access() and open(), I can remove /tmp/file and replace it with a symlink to /etc/crontab. I can now get the system to run any program I like, since the application will happily give me write access to /etc/crontab.
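As a rough illustration, here is a minimal C sketch of both patterns; the helper names are invented for this example, and the safer variant follows the man page's suggestion of temporarily switching the effective user ID to the real one:

    /* Sketch: TOCTOU race in a set-uid-root program, and the safer
       alternative suggested by the man page. Helper names are made up. */
    #include <fcntl.h>
    #include <unistd.h>

    /* VULNERABLE: the file can be replaced between access() and open(). */
    int open_checked_racy(const char *path)
    {
        if (access(path, W_OK) != 0)
            return -1;               /* real user may not write it...   */
        return open(path, O_WRONLY); /* ...but path may now be a symlink
                                        to /etc/crontab                 */
    }

    /* SAFER: become the real user, so the kernel performs one atomic
       permission check at open() time. */
    int open_as_real_user(const char *path)
    {
        uid_t euid = geteuid();
        if (seteuid(getuid()) != 0)  /* drop to the real user's UID */
            return -1;
        int fd = open(path, O_WRONLY);
        if (seteuid(euid) != 0) {    /* restore the saved privileges */
            if (fd >= 0)
                close(fd);
            return -1;
        }
        return fd;
    }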

The Linux man pages describe it clearly:
Warning: Using access() to check if a user is authorized to, for example, open a file before actually doing so using open(2) creates a security hole, because the user might exploit the short time interval between checking and opening the file to manipulate it. For this reason, the use of this system call should be avoided.
Note also that, for security reasons, details of exploits are often not readily available to the public.

Check out: http://www.kernel.org/doc/man-pages/online/pages/man2/access.2.html
Warning: Using access() to check if a user is authorized to, for example, open a file before actually doing so using open(2) creates a security hole, because the user might exploit the short time interval between checking and opening the file to manipulate it. For this reason, the use of this system call should be avoided. (In the example just described, a safer alternative would be to temporarily switch the process's effective user ID to the real ID and then call open(2).)
See also this paper on what is wrong with access(): http://www.csl.sri.com/~ddean/papers/usenix04.pdf

Qt: Catch external changes on an SQLite database

I'm developing a program using an SQLite database that I access via QSqlDatabase. I'd like to handle the (hopefully rare) case in which changes are made to the database that are not caused by the program while it's running (e.g. the user could remove write access, move or delete the file, or modify it manually).
I tried to use a QFileSystemWatcher. I let it watch the database file, and in all functions that write something to it, I blocked its signals, so that only "external" changes would trigger the fileChanged() signal.
The problem is that the QFileSystemWatcher's check and/or the actual writing to disk by QSqlDatabase::commit() do not seem to happen at the exact moment I call commit(). So in practice, the QFileSystemWatcher's signals are first blocked, then I change some data, then I unblock them, and only after that does it report the file as changed.
I then tried setting a bool member (m_writeInProgress) to true each time a function requests a change. The "changed" slot then checks whether a write action has been requested; if so, it sets m_writeInProgress to false again and exits. This way, it would only handle "external" changes.
The problem remains that if an external change happens at the exact moment my own write is in progress, it isn't caught.
So possibly, using a QFileSystemWatcher is the wrong way to implement this.
How could this be done in a safe way?
Thanks for all help!
Edit:
I found a way to solve part of the problem. Taking an exclusive lock on the database file prevents other connections from changing it. It's quite simple, I just have to execute

    PRAGMA locking_mode = EXCLUSIVE;
    BEGIN EXCLUSIVE;
    COMMIT;

and handle the error that occurs if another instance of my program tries to access the database.
What's left is to find out whether the user (accidentally) deleted the file at runtime ...
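For reference, here is roughly what the same locking sequence looks like through the plain SQLite C API (just a sketch; it assumes direct access to the sqlite3 handle, whereas with QSqlDatabase you would run the same statements through QSqlQuery):

    /* Sketch: take a persistent exclusive lock on an SQLite database.
       With locking_mode=EXCLUSIVE the lock is kept after COMMIT and is
       only released when the connection closes. Build with -lsqlite3. */
    #include <sqlite3.h>
    #include <stdio.h>

    int lock_database(sqlite3 *db)
    {
        const char *sql =
            "PRAGMA locking_mode = EXCLUSIVE;"
            "BEGIN EXCLUSIVE;"  /* acquires the exclusive lock right away */
            "COMMIT;";
        char *err = NULL;
        if (sqlite3_exec(db, sql, NULL, NULL, &err) != SQLITE_OK) {
            /* e.g. another instance of the program already holds the lock */
            fprintf(stderr, "could not lock database: %s\n", err);
            sqlite3_free(err);
            return -1;
        }
        return 0;
    }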
First of all, there's no SQLite support for this: SQLite only supports monitoring changes made over a database connection within your direct control. Whatever happens in a separate process concurrently with yours, or while your process is not running, is by design completely out of your control.
The canonical solution to this problem is to encrypt the database with a key specific to your application (and perhaps user, etc.). Then no third-party process can modify the database using SQLite. Of course, any process can still corrupt your database or get rid of it -- that's too bad. You can detect corruption trivially by using cryptographic signatures, perhaps even error-correcting codes so as to be able to restore the data should a certain amount of corruption happen. You don't need notifications of someone moving or deleting the database file: you will know when you attempt to open the database and the "file not found" error is given back to you.
Of course all of the above requires a custom VFS implementation. That's very much par for the course.
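As a rough sketch of the signature idea (this assumes OpenSSL's HMAC API and an application-specific secret key; a real design would hook the check into the custom VFS mentioned above), you could compare an HMAC of the database file against a previously stored value:

    /* Sketch: detect tampering by HMAC-ing the database file and comparing
       the result with a stored value. Assumes OpenSSL; build with -lcrypto. */
    #include <openssl/evp.h>
    #include <openssl/hmac.h>
    #include <stdio.h>

    /* Compute HMAC-SHA256 over the whole file into out[32]; 0 on success. */
    int hmac_file(const char *path, const unsigned char *key, int keylen,
                  unsigned char out[32])
    {
        FILE *f = fopen(path, "rb");
        if (!f)
            return -1; /* "file not found" already tells you it's gone */

        HMAC_CTX *ctx = HMAC_CTX_new();
        HMAC_Init_ex(ctx, key, keylen, EVP_sha256(), NULL);

        unsigned char buf[4096];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            HMAC_Update(ctx, buf, n);

        unsigned int outlen = 0;
        HMAC_Final(ctx, out, &outlen);
        HMAC_CTX_free(ctx);
        fclose(f);
        return 0;
    }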

In Firebase, how can I determine whether a user has write access to a node?

...without actually saving data to that node?
For example, I have a chat app. I'd like to check whether a user has write access to a node before showing the "Send message" button.
Define another node with the exact same set of security rules, one that exists for no reason other than to perform these checks, and attempt a write there first to see whether it finishes without error.
The most common approach is to replicate a similar, simplified version of the rules in your application code. You'd typically replicate only the benign checks, and leave the extra validation against malicious users solely to the server.
Although I must admit Doug's version also sounds interesting. :-)

Is it insecure to execute code via an HTTP URL?

I'm suspicious of the installation mechanism of Bioconductor. It looks like it just executes (via source()) an R script fetched from an HTTP URL. Isn't this an insecure approach, vulnerable to a man-in-the-middle attack? I would think they should be using HTTPS. If not, can someone explain why the current approach is acceptable?
Yes, you are correct.
Loading executable code over a cleartext connection is vulnerable to a MITM attack.
Unless the code is loaded over HTTPS, where SSL/TLS encrypts and authenticates the connection, or is signed and verified on the client, a MITM attacker could alter the input stream and cause arbitrary code to be executed on your system.
Allowing code to execute via an HTTP GET request essentially means you're allowing user input to be directly processed by the application, thus directly influencing its behavior. While this is often what the developer wants (say, to query specific information from a database), it may be exploited in the ways you have already mentioned (e.g. MITM). This is often (though I'm not referring to Bioconductor specifically) a bad idea, as it opens the system to possible XSS/(B)SQLi attacks, among others.
However, the URL http://bioconductor.org/biocLite.R is essentially just a file placed on the web server, and from what it seems, source() is being used to download it directly. There does not seem to be any user input anywhere in this example, so no, I wouldn't mark it as unsafe; however, your reasoning is indeed correct.
Note: this refers to GET requests such as http://example.com/artists/artist.php?id=1. Such insecurities can be exploited in many kinds of HTTP requests, e.g. Host header attacks, but the general concept is the same: no user input should ever be directly processed by the application without validation.

Asp.net Page access through IP address control

Is it possible to create a page in ASP.NET that allows access only to a user with a specific IP address? My goal is to add a page "test" (not linked from my website), and I want to define a rule so that only a specified IP address can access it.
How can I implement this through ASP.NET?
You could try putting the page(s) in a separate folder and password-protecting it; then give the password to your user so they can access the content. You could go as far as password-protecting each file. This helps if your website is password protected or has a login.
You could also create a sub-domain for that user specifically.
These are just a few. I'm sure you'll get better suggestions here on SO!
You could go for a programmatic solution, but I would use IIS features to block access: less code, easier to configure, and no hassle in your development/test environment.
Assumption: you are using IIS, since this is ASP.NET; other web servers should have similar solutions.
You can add IP restrictions to the directory (meaning you would have to put your page in a separate directory). Example here: http://www.therealtimeweb.com/index.cfm/2012/10/18/iis7-restrict-by-ip
Obviously there are a lot of other, and arguably better, ways to grant access to a page if what you really want is for a specific "user" or "group" to have access. But assuming that you really want the access control to be based on IP, the answer may still depend on peripheral concerns such as which web server you are using. IIS, for example, has some features for IP-based security that you could check out.
Assuming, though, that you really, really want to check IPs in code, you will find information about the calling environment in the Request of the current HttpContext, i.e. context.Request.UserHostAddress.
If you want to reject calls based on this information, you should probably do so as early as possible. In the HttpApplication.BeginRequest event, you could check whether the call targets the page in question and reject the request if the UserHostAddress is not to your liking.
If you prefer to perform this check in the actual page, do it in an early page event.
To manage the acceptable IP(s), rather than hard-coding them into your checking code, I suggest you work with a ConfigurationSection or similar. Your checking code could be something like:

    // authorizedIpConfiguration is read from config, e.g. "10.0.0.1, 10.0.0.2"
    var authorizedIps = authorizedIpConfiguration
        .Split(',')
        .Select(ipString => ipString.Trim())
        .ToList();
    bool isValid = authorizedIps.Any()
                && authorizedIps.Contains(context.Request.UserHostAddress);
If the check fails, you should alter the response accordingly, i.e. at least set its status code to 403 Forbidden (see http://en.wikipedia.org/wiki/List_of_HTTP_status_codes).
NB: There are a lot of things to consider when implementing security features, and the general recommendation would probably be "don't do it yourself" - it is so easy to falter. Try to use well-proven concepts and standard implementations where possible. The above example should not in itself be considered a "secure" solution, as there are, generally speaking, many ways restricted data can leak from your solution.
EDIT: From your comment on the answer given by nocturns2, it seems you want to restrict access to the local computer? If so, there is a much easier and cleaner solution: just check the Request.IsLocal property. It returns true only for requests originating from the local computer; see the HttpRequest.IsLocal property documentation.
(Also, you should really make sure that this "debug page" is not published at all when deploying your solution. If you manage that properly and securely, then perhaps you won't even need the access check any more. If you want debugging options in a "live" environment, you should probably look at HttpContext.Current.Trace or some other logging functionality.)

How do Unix capabilities work?

It seems that starting with kernel 2.2, Linux introduced the concept of capabilities. According to the Unix man page on capabilities, if you're not the root user, you can grant yourself capabilities by calling cap_set_proc() on a per-thread basis. So does this mean that if you're writing malware for Unix, you can just grant yourself a bunch of capabilities and compromise the system? If not, how does one grant the capabilities required to run a program?
It seems that Unix's security model is quite flawed and primitive. Am I getting this right?
To be more specific:
How do you (when running as a non-root user) send a signal to another process that is running under a different user? The signal man page says you need the CAP_KILL capability to do this. However, reading the capabilities man page, I'm not sure how I can grant a process that capability.
From man cap_set_proc:
Please note, by default, the only processes that have CAP_SETPCAP available to them are processes started as a kernel-thread. (Typically this includes init(8), kflushd and kswapd). You will need to recompile the kernel to modify this default.
Trust me, if it were that easy, someone would have exploited it by now. Unix's security model may be simple compared to those of other operating systems, but that doesn't mean it's "flawed".
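To see why self-granting doesn't work, here is a small sketch (assuming Linux with libcap; build with -lcap): an unprivileged process can only enable capabilities that are already in its permitted set, so asking for CAP_KILL simply fails.

    /* Sketch: an unprivileged process trying to grant itself CAP_KILL.
       cap_set_proc() may only enable capabilities already present in the
       process's permitted set, so this fails with EPERM for a normal user. */
    #include <stdio.h>
    #include <sys/capability.h>

    int main(void)
    {
        cap_t caps = cap_get_proc();          /* current capability sets */
        cap_value_t wanted[] = { CAP_KILL };

        /* Ask for CAP_KILL in the effective set... */
        cap_set_flag(caps, CAP_EFFECTIVE, 1, wanted, CAP_SET);

        /* ...but the kernel refuses: CAP_KILL is not in our permitted set. */
        if (cap_set_proc(caps) != 0)
            perror("cap_set_proc");           /* Operation not permitted */

        cap_free(caps);
        return 0;
    }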
It's impossible. Use a socket or a file instead.
