Exec with PHP-FPM on nginx (under chroot) returns nothing

I've created a nginx server in a chroot at /srv/http with php-fpm. Both services use the http user and it works fine. The problem comes when I try to run an exec command such as
echo shell_exec('/usr/bin/ls');
There is no output at all on the web page or in the error logs. I've also tried
error_log(shell_exec('/usr/bin/ls'));
and still nothing.
Things I've Tried or Know:
safe mode off
exec enabled
user is http (using phpinfo())
display_errors = on
error_reporting = E_ALL
sudo /usr/bin/chroot --userspec=http:http /srv/http ls works fine
Can create a file and read from it using file_put_contents and fopen/fread
tried shell_exec, exec, system, and passthru - nothing worked
tried appending 2>&1 to the end of the command and nothing
I've copied all the executables and libraries necessary over
all libraries, binaries, and everything under /srv/http/www (where the webpages are) have executable and read permissions
doc_root is www
As far as I know, everything works in the chroot, except shell commands through php-fpm. Anyone have any idea where I went wrong and how to fix it?

This may sound stupid, but you just have to copy /bin/sh (not /bin/bash!) into your chroot.
For example see this question: How do I change the shell for php's exec()
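For instance, a minimal sketch of copying the shell plus its shared-library dependencies into the chroot (assuming /srv/http is the chroot root as described; exact library paths vary by distribution):
# copy the shell that PHP's exec functions invoke
mkdir -p /srv/http/bin
cp /bin/sh /srv/http/bin/
# copy every shared library the shell needs, preserving the directory layout
ldd /bin/sh | grep -o '/[^ ]*' | while read -r lib; do
    mkdir -p "/srv/http$(dirname "$lib")"
    cp "$lib" "/srv/http$lib"
done
After copying, re-test with something like echo shell_exec('ls 2>&1'); to surface any remaining loader errors.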

If you chroot to some directory, then this directory becomes the root for all your PHP scripts. That means that if you execute /usr/bin/ls from within PHP, it will try to execute /srv/http/usr/bin/ls instead.
You can copy the executable to that directory - but be aware of the security implications. If you copy critical system executables into the chrooted directory you basically bypass the positive effects of chroot.

I get no output for
echo shell_exec('/usr/bin/ls');
either. Presumably that's because /usr/bin/ls isn't where ls lives in my environment; the shell finds it via PATH rather than at that absolute path. Running:
echo shell_exec('ls');
outputs:
css demos favicon.ico images js path.php robots.txt routing.php test
which is the list of files in my root directory for the site.

Related

nginx - Failing to load images only, loading css and js

So I set up nginx and uwsgi using this tutorial: http://uwsgi-docs.readthedocs.org/en/latest/tutorials/Django_and_nginx.html
I finished the tutorial completely but for some reason only my images are not being loaded on the page when I run the command...
uwsgi --ini exchange_uwsgi.ini
where exchange_uwsgi.ini is my initialization file for specifying which socket I run on, where my project is, where my virtualenv is, etc.
Just to reiterate, the only things not showing up are my images, and my images and CSS are all stored in one folder.
Any reason why this might happen?
Thanks
I fixed the problem.
Make sure to check the permissions on all of your static files. Only 2 images of mine were not loading and they were the only ones with incorrect permissions.
On Linux, first go to the folder with all your static files in a terminal and type "ls -l" (list items in long format) so you can view permissions.
I set the permissions on each file to -rw-rw-r--
Edit: In order to change permissions look into the command "chmod"
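If many files need fixing at once, a sketch for normalizing a whole static folder (the static/ path is an assumption; adjust to your project):
# files: owner/group read-write, world-readable (-rw-rw-r--)
find static/ -type f -exec chmod 664 {} \;
# directories additionally need the execute bit so the web server can traverse them
find static/ -type d -exec chmod 775 {} \;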

Symfony2 "assetic:dump --env=prod" Permission denied Exception

Before I ran an update (composer.phar update) as the root user, everything worked fine, but now when I try to run "assetic:dump --env=prod" I get a "Permission denied" error:
[Assetic\Exception\FilterException]
An error occurred while running:
'' '-jar' '/home/symfony/www/app/Resources/java/yuicompressor.jar' '--charset' 'UTF-8' '-o' '/tmp/YUI-OUT-vbRlyu' '--type' 'css' '/tmp/YUI-IN-OoRVHQ'
Error Output:
sh: 1: : Permission denied
Input:
meta.foundation-version{ ...
I tried all the solutions in this post Fontawesome fonts fail after assets:install and assetic:dump
Clearing the cache, chown, chgrp and chmod: nothing worked, always the same problem.
One way to deal with file permissions, when you are running a web-based application that requires either auto deployment or constant manual updates (like using bin/console from Symfony2), is to make sure that the files belong to the user under which your application runs.
As you did not provide environment settings, I will make a few assumptions and provide you with a generic setup scenario; hopefully this will help guide you to the best solution for your specific case.
Environment Assumptions:
OS: a Linux flavor;
Web server: nginx, running as www-data;
PHP: php-fpm, running as apptest and using a socket connection for this application;
Generic set-up steps:
In the /etc/nginx/nginx.conf file, make sure that the user/group are set to www-data;
In the /etc/php5/fpm/pool.d/apptest.conf file, make sure that the user & group are set to apptest;
TIP: The file above might need to be created; if that's the case, just copy the content of the www.conf file located in the same folder.
In the /etc/php5/fpm/pool.d/apptest.conf file, make sure listen.owner & listen.group are set to www-data;
Make sure that you have a line like the one below in this file /etc/php5/fpm/pool.d/apptest.conf:
listen = /var/run/php5-fpm.apptest.sock
NOTE: the php5-fpm.apptest.sock portion of the line above is the name of a file that does not exist yet but will be created when you restart PHP. The benefit is that you get an isolated PHP process for this application;
a) In the case of nginx, if you are using socket connections, make sure to add this line in your apptest conf file:
unix:/var/run/php5-fpm.apptest.sock;
b) If you are using Apache, add this line in that conf file:
-socket /var/run/php5-fpm.apptest.sock;
If you are on a Linux box, create a user with no password; it should be called apptest.
Note: apptest is the name of your application; it will also be the user under which PHP runs, and it should be the owner of the application files/folders.
Restart PHP and nginx/Apache.
Tip: to switch to a Linux user which has no password, you need root privileges; run:
sudo -u apptest -i
After this, you should perform all your commands as the apptest user created previously, including running the Symfony2 bin/console.
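Pulling steps 2-5 together, a minimal sketch of the pool file (names and values are the assumptions from above, not a drop-in config; a working pool also needs the pm.* process-manager settings, which you can copy from www.conf):
; /etc/php5/fpm/pool.d/apptest.conf
[apptest]
user = apptest
group = apptest
listen = /var/run/php5-fpm.apptest.sock
listen.owner = www-data
listen.group = www-data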
These are very generic steps, so if you need any clarification, let me know.
I do not recommend using root for updating. In my opinion the way to go is to have app/logs and app/cache writable for the server, and the src and vendor folders only readable for the server.
So let's say your user and group are coolman; then try this:
# everything is yours
chown coolman:coolman -R .
# all and group can access folders and read files, you as user can additionally write them
chmod ag=rX,u=rwX -R .
# full access to logs and cache for everyone (also the server)
chmod a+rwX -R app/logs app/cache
Run your composer update as the coolman user.
There is only one small problem: the logs might be www-data:www-data rw-r--r--, so you cannot delete them. So just add a line to your app.php and your app/console file:
\umask(0000);
I think this line is commented out by default. It means that if no explicit rights are set within PHP, every file that is created gets 0777 minus the mask; with a mask of 0000 that is 0777, so you can then delete logs and cache files.
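A quick shell illustration of that arithmetic (a sketch; directories are created with 0777 minus the active mask):
umask 0022; mkdir d1; ls -ld d1   # drwxr-xr-x  (0777 minus 0022)
umask 0000; mkdir d2; ls -ld d2   # drwxrwxrwx  (0777 minus 0000)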

Dokku .profile.d folder, scripts not executing

I cannot get bash scripts to run in the .profile.d folder. This seems like it should be very straight forward but I'm not having any luck.
I have a .profile.d folder in the root of my application. In it I have one script which builds ffmpeg from source called:
ffmpeg.sh
This script executes fine when I run it in the container and makes / installs ffmpeg. I've tried chmod +x and adding a #!/bin/bash to the top of the file.
I feel like I'm missing a basic understanding somewhere here, any tips?
OK, this was running correctly; it just didn't output the echo statements. I only found that out by attaching to the running Docker container after the deploy had finished.
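For anyone else hitting this, a hedged sketch that makes the script's progress visible after the deploy (the /app prefix and log location are assumptions based on Heroku-style containers; adjust for your setup):
#!/bin/bash
# .profile.d/ffmpeg.sh - send all output to a log file, since plain
# echo statements may not appear in the deploy output
exec >> /app/ffmpeg-build.log 2>&1
echo "ffmpeg build started at $(date)"
# ... build and install steps go here ...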

How to redirect rsyslog messages to some other path instead of /var/log

I am using the rsyslog facility for logging. Everything is working fine; I am able to log the messages to the /var/log/Mylog.log path.
But now my requirement is to log the message in some other path like /opt/log/Somepath.log instead of /var/log.
I tried modifying the path in the /etc/rsyslog.conf file, but it only works if I give a log path under /var/log/. Nothing else seems to work. I want the log path to be configurable, like /opt/log/somePath.log.
I have an entry like this in the file and it works fine:
local6.* /var/log/Mylog.log
Now if I change it like this:
local6.* /opt/log/Mylog.log
it does not generate the Mylog.log file in /opt/log. The directory /opt/log is present.
After modifying the configuration file /etc/rsyslog.conf, I restart the daemon:
/etc/init.d/rsyslog restart
And there should be no permission or security issue, since /var/log and /opt/log have the same permissions (I changed the /opt/log permissions to match /var/log).
I am using CentOS 6.3. It is my local VM and there is no chance of NFS being involved.
Is there any way or trick so that I can achieve this?
The problem is SELinux. SELinux prevents processes labeled syslogd_t from writing to files that are (probably) labeled default_t. So we need to label the files with a type syslogd_t can write to. Files in /var/log are labeled var_log_t, a type syslogd_t can certainly write to.
You can achieve this temporarily by changing the label of the /opt/log directory:
chcon -R -t var_log_t /opt/log
You can check the modified labeling using
ls -Z /opt/log
which will give output something like this:
drwxrwxrwx. root root unconfined_u:object_r:var_log_t:s0 log
After this you will be able to redirect syslog to other directories. For a permanent solution you need to write an SELinux policy.
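If writing a full policy is more than you need, a commonly used way to make the label survive a filesystem relabel is to record a file-context rule and reapply it (semanage comes from the policycoreutils-python package on CentOS 6):
semanage fcontext -a -t var_log_t "/opt/log(/.*)?"
restorecon -R -v /opt/log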

Fatal error: cannot mkdir R_TempDir

When attempting to run R, I get this error:
Fatal error: cannot mkdir R_TempDir
I found two possible fixes for this problem by googling around. The first was to ensure my tmp directory didn't contain a load of subdirectories - it doesn't, and it's virtually empty. The second fix was to ensure that TMP, TMPDIR, and R_USER in my environment weren't set to non-existent paths - I didn't even have these set. Therefore, I created a tmp directory in my home directory and added its path to TMP in my environment. I was able to run R once, and then I got the fatal error again. Nothing was in the TMP directory that I set in my environment. Does anyone know what else I can try? Thanks.
Dirk is right, but misses a point: If /tmp is full, you can't create subdirectories there. Try
df /tmp
I just hit this on a shared server, where /tmp is mounted on its own partition and shared by many users. In this particular case you can't really see whose fault it is, because permissions prevent you from seeing who is filling up the tmp partition. You basically have to ask the sysadmins to figure it out.
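If you do have read access, a quick sketch to confirm the partition is full and spot the biggest entries you can see (unreadable ones are silently skipped):
df -h /tmp
du -sh /tmp/* 2>/dev/null | sort -h | tail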
Your default temporary directory appears to have the wrong permissions. Here I have
$ ls -ld /tmp
drwxrwxrwt 22 root root 4096 2011-06-10 09:17 /tmp
The key part is 'everybody' can read or write. You need that too. It certainly can contain subdirectories.
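If your /tmp is missing those bits, the conventional mode can be restored in one command (mode 1777; the trailing t shown above is the sticky bit):
sudo chmod 1777 /tmp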
Are you running something like AppArmor or SELinux?
Edit 2011-07-21: As someone just deemed it necessary to downvote this answer -- help(tempfile) is very clear on what values tmpdir (the default directory for temporary files or directories) tries:
By default, 'tmpdir' will be the directory given by 'tempdir()'. This
will be a subdirectory of the temporary directory found by the
following rule. The environment variables 'TMPDIR', 'TMP' and 'TEMP'
are checked in turn and the first found which points to a writable
directory is used: if none succeeds '/tmp' is used.
So my money is on checking those three environment variables. But AppArmor and SELinux have shown to be an issue too on some distributions.
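A quick way to inspect those three variables and see which directory R actually resolves (a sketch; assumes Rscript is on your PATH):
env | grep -E '^(TMPDIR|TMP|TEMP)='
Rscript -e 'print(Sys.getenv(c("TMPDIR", "TMP", "TEMP"))); print(tempdir())'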
Go to your user directory, create a file called .Renviron, add the following line, save it, and reopen RStudio, Rgui, or Rterm:
TMP = '<path to folder where Everyone has full control>'
This worked for me on Windows 7.
If you are running one of the rocker docker images (e.g., rocker/verse), you need to map a local directory to the /tmp directory in the container. For example,
docker run --rm -v ${PWD}/tmp:/tmp -p 8787:8787 -e PASSWORD=password rocker/verse:4.0.4
where ${PWD} for me is ~/devProjs/r, and I created a tmp directory inside it, so that the container's /tmp is mapped to my ~/devProjs/r/tmp directory.
Just had this issue and finally solved it. It was simply a Windows permission issue. Go to the environment variables and find the location of the temp folders. Then right-click on the folder > Properties > Security > Advanced > change Everyone to full control > tick "Replace all child object permission entries with inheritable permission entries from this object" > OK > OK.
This will also happen when your computer is completely, utterly out of space. Currently, my Mac has 0 kb free and it's causing this error. Freeing up some space solved the problem.
Check the user account with which you are launching RStudio. Then check the TMP system environment variable for its location. If the user launching RStudio has write access to those directories, you will not face this issue. Since you are facing it, all you have to do is change the permissions so that user has write access to those directories.
Running R on a CentOS system, I had the same issue. I had to remove all R folders from the tmp directory; usually they are of the form /tmp/Rtmp*****,
so I tried to delete those folders from /tmp by running the below:
cd into the /tmp directory and run rm -rf Rtmp*
The R shell worked for me afterwards.
I had this issue, but the solution was slightly different. I run R on a Linux server; it turned out R had made a whole load of tempdirs when running cron jobs that had hung and not been cleaned up, clogging the root /tmp directory with ~300 RtmpXXXXXX folders.
Using terminal access, I navigated to the /tmp folder and did a recursive find/rm, deleting all of them using this command:
find . -type d -name 'Rtmp*' -exec rm -r -v {} \;
After this, RStudio took a while to load up, but was once again happy and my scripts began to run again.
You will need the appropriate admin rights for this solution. And always be careful when running rm -r, especially with a find command, as it's easy to remove things unexpectedly.
When it comes to deleting tmp files, first make sure you know whether the tmp files are on the server or local.
If they are on the server, first run df /tmp there to see what is using the most storage.
Then remove the files that cause the blockage, e.g. rm /tmp/<file_name>.
Moreover, you can also refer to https://support.rstudio.com/hc/en-us/articles/218730228-Resetting-a-user-s-state-on-RStudio-Server
