Unable to stop the creation of backups to root by Vim/Emacs

Problem 1: my Vim makes backups with the extension ~ in my root directory
I have the following line in my .vimrc:
set backup backupdir=~/tmp/vim//,~/tmp//,.//,/var/tmp//,/tmp//
However, I cannot see a root directory in that line.
Why does my Vim make backups of my shell scripts with the extension ~ in my root directory?
Problem 2: my Zsh runs shell scripts from my PATH at login. For instance, my "replaceUp" shell script ran in my root directory at login. I keep it in ~/bin/shells/apps by default.
Why does Zsh run shell scripts that are in my PATH at login?

The files ending with ~ are backup files created by Vim (swap files get a .swp extension; the 'directory' option controls those). Vim writes the backup to the first directory in 'backupdir' that exists and is writable, so if ~/tmp/vim and ~/tmp don't exist it falls through to . (the directory of the file being edited), which may be why backups appear in your root directory. You can try setting the backupdir and directory variables:
set backupdir=~/tmp/vim//,~/tmp//,.//,/var/tmp//,/tmp//
set directory=~/tmp/vim//,~/tmp//,.//,/var/tmp//,/tmp//
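Note that Vim uses the first directory in each of these lists where writing is possible, so a hedged quick fix, assuming you want backups kept under ~/tmp/vim, is simply to create that directory:
mkdir -p ~/tmp/vim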

Related

Laravel Homestead set up but does not sync hidden files

Hi, I've set up Homestead correctly, but when I ssh into my Vagrant instance I can see all my files, just not the hidden ones (.env, .git, .gitignore are missing). I'm trying to run webpack-dev-server within my instance and it needs my .env file to run. Is it normal that hidden files are not synced?
My .yaml file:
Hidden files don't show when using the ls command (which I guess is what you're doing). They are still there, however. You can try something like nano .env and you'll see that you can edit your env from the console :)
Another command you could use is ls -a, which should show all files regardless of whether they're hidden or not.
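For example, a quick check inside the box (the .env name here is just the file from the question):
ls -a           # lists every entry, including dotfiles such as .env and .git
ls -la .env     # confirm one specific hidden file was synced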

RStudio Server: run from a specific directory

I am spinning up an instance of rstudio server and I need the working directory of R to be a specific directory. I would also like the file pane in the bottom right corner to be pointing to the same directory. Is there a way to do this? Currently it runs from the home directory of whichever user is running the program. I have tried the --server-working-dir flag, and it does not seem to work. Here is the command I am using:
/usr/lib/rstudio-server/bin/rserver \
--server-daemonize=0 \
--server-user=user \
--server-working-dir=/some/path \
--auth-none=1 \
--auth-minimum-user-id=0
Any help would be useful here.
[edit] Just wanted to clarify that I would like the server to start in this directory. I am building a container that will be deployed multiple times, and I don't want the users to have to set their directories every time it is deployed.
If you want to modify the file pane on the right, you should edit /etc/rstudio/rsession.conf and add the two lines below:
session-default-working-dir=/some/path
session-default-new-project-dir=/some/path
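As a hedged sketch, assuming the stock RStudio Server layout, you could bake this into the container build like so:
echo 'session-default-working-dir=/some/path' | sudo tee -a /etc/rstudio/rsession.conf
echo 'session-default-new-project-dir=/some/path' | sudo tee -a /etc/rstudio/rsession.conf
sudo rstudio-server restart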
You can do this by editing the (global) R profile startup script. Here's a step-by-step guide:
1) Run Rscript -e "R.home()" -- this will tell you the location of your R home directory. In my case (Mac) it is /Library/Frameworks/R.framework/Resources
2) Go to /Library/Frameworks/R.framework/Resources/etc -- i.e., $R_HOME/etc
3) sudo touch Rprofile.site if it doesn't exist, then sudo nano Rprofile.site
4) Add the following lines and save:
cat("hi\n")
setwd("/some/path/")
You should avoid overwriting the user's home directory.
Among the [.Rprofile] files, you should edit Rprofile.site only as a last resort, since it acts globally.
Suggested solution:
R reads its "initialization files" at startup in the following order:
Rprofile.site (site-wide, no leading dot)
.Rprofile (located in the current directory)
.Rprofile (in the user's home directory; only the first user-level .Rprofile found is used)
In your case, since you log in to RStudio Server you will end up in the user's home directory, so I suggest you just edit the [.Rprofile] there. If [.Rprofile] is missing, you need to create it.
Add this line to your .Rprofile [in your home directory]:
setwd('/your/path/')
Log out of and back in to your RStudio Server session and you will notice that the file pane on the right has changed to the location you specified in your .Rprofile.
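A minimal sketch of creating that file from a shell (the path is illustrative):
cat >> ~/.Rprofile <<'EOF'
setwd('/your/path/')
EOF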

SUID exploit and patch

I am working on a SUID root binary 'app' that runs a system("ls -la /dir") command, and I managed to exploit it by writing a malicious ls and editing my user's PATH so that my directory is searched before the system directories, which got me root.
I noticed that executing it as my user returns a root shell, while executing it with sudo ./example uses root's PATH and simply lists the files in dir. As far as I know, setuid grants the owner's (in this case root's) privileges to the calling user, and sudo executes as root.
What are such vulnerabilities called? How would an app developer patch them? Is there any way I can force users to use sudo ./app to execute a program?
I recommend you change app to use an absolute path for the commands it runs. For example:
system("/bin/ls -la /dir");
Even if the users run it with the sudo command, there are sudo options they can use (--preserve-env) to keep their own PATH.
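For illustration, a hypothetical session showing both points (the evil directory is made up; note that a secure_path setting in sudoers can still override a preserved PATH):
PATH=/home/user/evil:$PATH ./app          # the caller controls PATH, so a relative "ls" resolves to the attacker's binary
sudo --preserve-env=PATH ./app            # recent sudo can be told to keep the caller's PATH as well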
If you want the users to run app using sudo, then there's no need for the binary to be SUID root.

Exec with PHP-FPM on nginx (under chroot) returns nothing

I've created a nginx server in a chroot at /srv/http with php-fpm. Both services use the http user and it works fine. The problem comes when I try to run an exec command such as
echo shell_exec('/usr/bin/ls');
There is no output at all on the web page or in the errors. I've also tried
error_log(shell_exec('/usr/bin/ls'));
and still nothing.
Things I've Tried or Know:
safe mode off
exec enabled
user is http (using phpinfo())
display_errors = on
error_reporting = E_ALL
sudo /usr/bin/chroot --userspec=http:http /srv/http ls works fine
Can create a file and read from it using file_put_contents and fopen/fread
tried shell_exec, exec, system, and passthru - nothing worked
tried appending 2>&1 to the end of the command and nothing
I've copied all the executables and libraries necessary over
all libraries, binaries, and everything under /srv/http/www (where the webpages are) have executable and read permissions
doc_root is www
As far as I know, everything works in the chroot, except shell commands through php-fpm. Anyone have any idea where I went wrong and how to fix it?
This may sound stupid, but you must just copy /bin/sh (not /bin/bash!) into your chroot.
For example see this question: How do I change the shell for php's exec()
If you chroot to some directory, then this directory becomes the root for all your PHP scripts. That means that if you execute /usr/bin/ls from within PHP, it will actually try to execute /srv/http/usr/bin/ls instead.
You can copy the executable to that directory - but be aware of the security implications. If you copy critical system executables into the chrooted directory you basically bypass the positive effects of chroot.
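A hedged sketch of populating the chroot minimally (library paths vary by distro, so list them with ldd rather than copying blindly):
mkdir -p /srv/http/bin
cp /bin/sh /srv/http/bin/
ldd /bin/sh        # lists the shared libraries sh needs
# then copy each listed .so into the matching path under /srv/http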
I get no output for
echo shell_exec('/usr/bin/ls');
either. Presumably because /usr/bin/ls is not where ls lives on my system (ls is a real binary rather than a shell built-in; on many systems it sits at /bin/ls). Running:
echo shell_exec('ls');
outputs:
css demos favicon.ico images js path.php robots.txt routing.php test
which is the list of files in my root directory for the site.

Fatal error: cannot mkdir R_TempDir

When attempting to run R, I get this error:
Fatal error: cannot mkdir R_TempDir
I found two possible fixes for this problem by googling around. The first was to ensure my tmp directory didn't contain a load of subdirectories - it doesn't and it's virtually empty. The second fix was to ensure that TMP, TMPDIR, and R_USER in my environment weren't set to non-existent paths - I didn't even have these set. Therefore, I created a tmp directory in my home directory and added its path to TMP in my environment. I was able to run R once, and then I got the fatal error again. Nothing was in the TMP directory that I had set in my environment. Does anyone know what else I can try? Thanks.
Dirk is right, but misses a point: If /tmp is full, you can't create subdirectories there. Try
df /tmp
I just hit this on a shared server, where /tmp is mounted on its own partition and is shared by many users. In this particular case you can't really see whose fault it is, because permissions restrict you from seeing who is filling up the tmp partition. You basically have to ask the sysadmins to figure it out.
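If you do have enough read access, a hedged way to spot the biggest consumers:
sudo du -sh /tmp/* 2>/dev/null | sort -rh | head    # largest entries first; look for runaway Rtmp* directories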
Your default temporary directory appears to have the wrong permissions. Here I have
$ ls -ld /tmp
drwxrwxrwt 22 root root 4096 2011-06-10 09:17 /tmp
The key part is that 'everybody' can read and write (the trailing t is the sticky bit). You need that too. It certainly can contain subdirectories.
Are you running something like AppArmor or SELinux?
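If your /tmp mode differs from the listing above, a hedged fix (needs root) is to restore the standard mode:
sudo chmod 1777 /tmp    # world-writable with the sticky bit set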
Edit 2011-07-21: As someone just deemed it necessary to downvote this answer -- help(tempfile) is very clear on what values tmpdir (the default directory for temporary files or directories) tries:
By default, 'tmpdir' will be the directory given by 'tempdir()'. This
will be a subdirectory of the temporary directory found by the
following rule. The environment variables 'TMPDIR', 'TMP' and 'TEMP'
are checked in turn and the first found which points to a writable
directory is used: if none succeeds '/tmp' is used.
So my money is on checking those three environment variables. But AppArmor and SELinux have been shown to be an issue too on some distributions.
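A quick hedged check of all three variables, and of what R actually picked:
echo "TMPDIR=$TMPDIR TMP=$TMP TEMP=$TEMP"
Rscript -e 'tempdir()'    # prints the session temp directory R chose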
Go to your user directory, create a file called .Renviron, add the following line, save it, and reopen RStudio, Rgui, or Rterm:
TMP = '<path to folder where Everyone has full control>'
This worked for me on Windows 7.
If you are running one of the rocker docker images (e.g., rocker/verse), you need to map a local directory to the /tmp directory in the container. For example,
docker run --rm -v ${PWD}/tmp:/tmp -p 8787:8787 -e PASSWORD=password rocker/verse:4.0.4
where ${PWD} for me is ~/devProjs/r, and I created a tmp directory inside it, so that the container's /tmp is mapped to my ~/devProjs/r/tmp directory.
Just had this issue and finally solved it. It was simply a Windows permission issue. Go to your environment variables and find the location of the temp folders. Then right-click the folder > Properties > Security > Advanced > change Everyone to Full control > tick "Replace all child object permission entries with inheritable permission entries from this object" > OK > OK.
This will also happen when your computer is completely, utterly out of space. Currently, my Mac has 0 kb free and it's causing this error. Freeing up some space solved the problem.
Check which user account you are launching RStudio with, then check the TMP system environment variable for its location. If the user launching RStudio has write access to those directories, you will not face this issue. Since you are facing it, all you have to do is change the permissions so that the user has write access to those directories.
Running R on a CentOS system, I had the same issue. I had to remove all R folders from the tmp directory; they are usually of the form /tmp/Rtmp*****.
So I deleted those folders from /tmp by running the following:
cd /tmp
rm -rf Rtmp*
The R shell worked for me afterwards.
I had this issue too, but the solution was slightly different. I run R on a Linux server, and it turned out R had created a whole load of tempdirs when running cron jobs that hung and were never cleaned up, clogging the root /tmp directory with ~300 RtmpXXXXXX folders.
Using terminal access, I navigated to the /tmp folder did a recursive find/rm - deleting all of them using this command:
find . -type d -name 'Rtmp*' -exec rm -r -v {} \;
After this, Rstudio took a while to load up, but was once again happy and my scripts began to run again.
You will need the appropriate admin rights for this solution. And always be careful when running rm -r, especially with a find command, as it's easy to remove things unexpectedly.
When it comes to deleting tmp files, first make sure you know whether the tmp files live on the server or on your local machine.
If they are on the remote server, first run df /tmp there to see what is using the most storage.
Then remove the files that are causing the blockage, e.g. rm /tmp/(file_name).
Moreover, you can also refer to https://support.rstudio.com/hc/en-us/articles/218730228-Resetting-a-user-s-state-on-RStudio-Server
