Random unix:/tmp/php5-fpm.sock Failed - unix

I am checking my error.log and found a few failed connections:
connect() to unix:/tmp/php5-fpm.sock failed
Permissions are fine as far as I know. What gives?
The socket is owned by nginx:nginx
with permissions 660
and I'm running nginx, obviously.
www.conf:
listen.owner = nginx
listen.group = nginx
listen.mode = 0660
default.conf (nginx):
fastcgi_pass unix:/tmp/php5-fpm.sock;
Running PHP 5.5.14

As of PHP 5.5.12, FPM socket permissions were changed to resolve a security-related bug; you can read more about that here -> https://bugs.php.net/bug.php?id=67060
Your listen.mode = 0660 should now be set to listen.mode = 0666
As for Nginx, here is a working example I am currently using:
# PHP-FPM Support
location ~ \.php$ {
fastcgi_pass unix:/usr/local/etc/php-fpm/nginx.sock;
include fastcgi.conf;
}
I was hoping you would have given a lot more configuration detail, as requested. The lack of it makes this more difficult than it needs to be, since I'm left guessing at your situation / configuration setup.
Make sure inside of your FPM Pool Configuration that the following settings are defined:
[nginx]
listen = /usr/local/etc/php-fpm/nginx.sock
user = nginx
group = nginx
listen.owner = nginx
listen.group = nginx
listen.mode = 0666
You'll notice my listen paths are using /usr/local/etc/php-fpm but you can replace those with your own path of choice.
I see you are currently using /tmp, and although there is no major problem with that, I'd advise against it: create a dedicated directory for holding your FPM sockets instead.
I checked the permissions on my /usr/local/etc/php-fpm directory and they are default as 755 and owned by root:root at the moment.
Give this a try; I'm sure it will work unless something else random is happening that isn't obvious from the information you've given.
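If it helps, here is a quick way to verify the change took effect (a sketch; the socket path follows the pool config above and the service names are assumptions, so adjust both to your system):
# restart PHP-FPM so the pool picks up the new listen.* settings
service php-fpm restart
# the socket should now exist with mode 0666 (srw-rw-rw-)
ls -l /usr/local/etc/php-fpm/nginx.sock
# reload nginx and request a PHP page again
service nginx reload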

How much damage is changing the user on fpm/pool.d/website-name.conf NGINX

I have changed the user in the file /etc/php/7.3/fpm/pool.d/website-name.conf:
user = ftplatinopeeyush
group = ftplatinopeeyush
Only those parameters were changed in that file. The following line I didn't touch:
listen = /var/run/user-name.sock
Why did I change these parameters?
I created an FTP user and changed the ownership of the web files to this user so I could upload files to the server, but then WordPress said that the files were not writable.
Now, after changing the user in the pool.d/website-name.conf file, I can upload files through FileZilla (with the FTP user) and also upload plugins via the WordPress dashboard.
Everything seems to be working just fine, but could this affect something else on my site or on the nginx server?
How can I create an FTP user that allows me to upload files to my server without file-permission issues in the future?
You basically did the right thing already: you created a separate Linux user and run the PHP-FPM pool as that user. You then manage the website files over SFTP with the same user.
If you follow through "NGINX and PHP-FPM. What my permissions should be?", there's one extra step. That is, ensuring that your NGINX web user is a member of your PHP user's group:
usermod -a -G ftplatinopeeyush www-data
What this achieves is that NGINX can read any files of your website which have the group read permission set. E.g. chmod 0750 on all directories and 0640 on all files will allow NGINX to read all your website files.
Further, you will be able to easily control which files are sensitive and should not be served by NGINX by simply removing the group read permission, e.g. by setting chmod 0600 on wp-config.php or a similar sensitive file.
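A sketch of applying that whole scheme in one go (the web-root path is the one used in the steps below; substitute your own):
# let the nginx worker user read group-readable site files
usermod -a -G ftplatinopeeyush www-data
# directories: owner full access, group may list/traverse, others nothing
find /home/ftplatinopeeyush/pathtowordpress -type d -exec chmod 0750 {} \;
# files: owner read/write, group read-only, others nothing
find /home/ftplatinopeeyush/pathtowordpress -type f -exec chmod 0640 {} \;
# keep secrets unreadable to the web-server group
chmod 0600 /home/ftplatinopeeyush/pathtowordpress/wp-config.php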
By the way, this looks like a user-permission issue (files not owned by the FPM user). Please take a look at the config files of nginx & php-fpm; in the example below the nginx user is www-data and the site user is ftplatinopeeyush.
CHECK
Check your files' / WordPress user permissions: make sure your WordPress files are owned by ftplatinopeeyush (you can check with ls -l), and make sure the files are placed in the user's home directory, /home/ftplatinopeeyush (or any home directory writable by the ftplatinopeeyush user).
Go to /etc/nginx/nginx.conf
and check the user parameter; please make sure the user is www-data.
example:
user www-data; # < this
worker_processes auto;
# ... next config
Then go to /etc/php/7.3/fpm/pool.d/website-name.conf
and check these settings:
listen.owner = www-data
listen.group = www-data
and the (fpm) user & group (the user who owns the WordPress files):
user = ftplatinopeeyush
group = ftplatinopeeyush
CASE
fpm: listen.owner & listen.group should match nginx's user.
fpm: user & group should match the WordPress file owner.
Permissions: the WordPress files must be placed in a directory writable by that user.
RESOLVE
Before starting, please make sure you have super-admin privileges (the root user).
Check your user's home directory:
command
cat /etc/passwd | grep ftplatinopeeyush
output (where /home/ftplatinopeeyush is the home directory)
ftplatinopeeyush:x:1234:1234:ftplatinopeeyush:/home/ftplatinopeeyush:/bin/bash
Place your WordPress document root inside that home directory (e.g. /home/ftplatinopeeyush/pathtowordpress).
Fix the owner
chown -R ftplatinopeeyush:ftplatinopeeyush /home/ftplatinopeeyush/pathtowordpress
Set the permissions (644 for files, 755 for directories).
set file permissions:
find /home/ftplatinopeeyush/pathtowordpress -type f -exec chmod 644 {} \;
set directory permissions:
find /home/ftplatinopeeyush/pathtowordpress -type d -exec chmod 755 {} \;
Change your nginx user to www-data:
nano /etc/nginx/nginx.conf
and change the user directive to www-data.
Change the configuration of /etc/php/7.3/fpm/pool.d/website-name.conf:
nano /etc/php/7.3/fpm/pool.d/website-name.conf
Make sure the user & group variables are ftplatinopeeyush,
and the listen.owner & listen.group variables are www-data.
Example configuration:
[ftplatinopeeyush#7.3]
; Owner
listen.owner = www-data
listen.group = www-data
listen.backlog = 1500
; User & group
user = ftplatinopeeyush
group = ftplatinopeeyush
; Listener: the socket will be placed at /run/php/ftplatinopeeyush#7.3.sock
listen = /run/php/$pool.sock
; Process Manager
pm = ondemand
; set max children
pm.max_children = 20
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10
pm.process_idle_timeout = 10s
; FLAGS
;php_flag[display_errors] = off
;php_admin_value[memory_limit] = 128M
Open the site's virtual-host config:
nano /path/to/virtualhost-of-wordpress.conf
Point the root directive to /home/ftplatinopeeyush/pathtowordpress
and find fastcgi_pass in the PHP location block:
location ~ \.php$ {
# split the path info from the script name
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# include nginx's default fastcgi_params
include fastcgi_params;
# pass the script filename and name to PHP
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_index index.php;
# return 404 if the requested script does not exist
try_files $fastcgi_script_name =404;
# pass the request to the pool's unix socket (matches the listen directive above)
fastcgi_pass unix:/run/php/ftplatinopeeyush#7.3.sock;
# handle error
fastcgi_intercept_errors off;
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
}
Reload the nginx and php-fpm services:
systemctl reload nginx.service
systemctl reload php7.3-fpm.service
done
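To double-check the handoff end to end, a small verification sketch (the socket path follows the pool config above):
# the socket should exist and be owned by www-data
ls -l '/run/php/ftplatinopeeyush#7.3.sock'
# confirm the configs parse and the services are up
nginx -t
systemctl status php7.3-fpm --no-pager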

nginx: create directory if it doesn't exist

I'm new to nginx and I have a given nginx config.
There is a mapping like:
map $http_host $my_customer {
default "default";
"~*cust1" "cust1";
"~*cust2" "cust2";
}
And there is the access_log line:
access_log /my/log/path/access.log;
Now I want to have separate log-directories and log-files for each customer, so I changed the access_log line into:
access_log /my/log/path/$my_customer/access.log;
This works fine if the $my_customer directory already exists. But if it doesn't exist, then nginx does not log. I know how I can check whether the directory exists:
if (!-d /my/log/path/$my_customer) {}
But how is it possible to create a directory inside the nginx config file?
All log directories have to be created in advance, before the nginx process starts; nginx will not create them for you.
The owner of the directory should be the user the worker processes run as, defined in the nginx configuration file (/etc/nginx/nginx.conf by default).
That user must have write permission on the directory.
As @Alexey Ten noticed, it is good practice to use the default logs location and vary only the file name:
/var/log/nginx/$my_customer.access.log
Otherwise, you have to do something like this:
mkdir -p -m 755 /my/log/path/$my_customer
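Since nginx cannot run mkdir itself, a sketch of pre-creating one directory per customer named in the map above (the customer names and the www-data worker user are assumptions; adjust both):
# create a log directory per customer and hand it to the worker user
for customer in default cust1 cust2; do
  mkdir -p -m 755 "/my/log/path/$customer"
  chown www-data: "/my/log/path/$customer"
done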

Nginx 403 forbidden for all files

I have nginx installed with PHP-FPM on a CentOS 5 box, but am struggling to get it to serve any of my files - whether PHP or not.
Nginx is running as www-data:www-data, and the default "Welcome to nginx on EPEL" site (owned by root:root with 644 permissions) loads fine.
The nginx configuration file has an include directive for /etc/nginx/sites-enabled/*.conf, and I have a configuration file example.com.conf, thus:
server {
listen 80;
# Virtual Host Name
server_name www.example.com example.com;
location / {
root /home/demo/sites/example.com/public_html;
index index.php index.htm index.html;
}
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param PATH_INFO $fastcgi_script_name;
fastcgi_param SCRIPT_FILENAME /home/demo/sites/example.com/public_html$fastcgi_script_name;
include fastcgi_params;
}
}
Despite public_html being owned by www-data:www-data with 2777 file permissions, this site fails to serve any content -
[error] 4167#0: *4 open() "/home/demo/sites/example.com/public_html/index.html" failed (13: Permission denied), client: XX.XXX.XXX.XX, server: www.example.com, request: "GET /index.html HTTP/1.1", host: "www.example.com"
I've found numerous other posts about users getting 403s from nginx, but most of those involve either more complex setups with Ruby/Passenger (which I've actually succeeded with in the past) or only produce errors when the upstream PHP-FPM is involved, so they are of little help.
Have I done something silly here?
One permission requirement that is often overlooked is that a user needs x (execute) permission on every parent directory of a file in order to access that file. Check the permissions on /, /home, /home/demo, etc. for www-data's x access. My guess is that /home is probably 770, so www-data can't chdir through it to reach any subdirectory. If so, try chmod o+x /home (or whichever directory is denying the request).
EDIT: To easily display all the permissions on a path, you can use namei -om /path/to/check
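A sketch of checking and fixing the traversal permissions for the docroot in the question:
# show owner, group, and mode of every component of the path
namei -om /home/demo/sites/example.com/public_html/index.html
# grant traverse (x) to others on each parent directory that lacks it
sudo chmod o+x /home /home/demo /home/demo/sites /home/demo/sites/example.com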
If you still see permission denied after verifying the permissions of the parent folders, it may be SELinux restricting access.
To check if SELinux is running:
# getenforce
To disable SELinux until next reboot:
# setenforce Permissive
Restart nginx and see if the problem persists. To allow nginx to serve your www directory (make sure you turn SELinux back on before testing this, i.e. setenforce Enforcing):
# chcon -Rt httpd_sys_content_t /path/to/www
See my answer here for more details
I solved this problem by adding a user setting in nginx.conf:
worker_processes 4;
user username;
Replace 'username' with your Linux user name.
I've got this error and I finally solved it with the command below.
restorecon -r /var/www/html
The issue is caused when you mv something from one place to another. It preserves the selinux context of the original when you move it, so if you untar something in /home or /tmp it gets given an selinux context that matches its location. Now you mv that to /var/www/html and it takes the context saying it belongs in /tmp or /home with it and httpd is not allowed by policy to access those files.
If you cp the files instead of mv them, the selinux context gets assigned according to the location you're copying to, not where it's coming from. Running restorecon puts the context back to its default and fixes it too.
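A sketch of inspecting and resetting the contexts (the path is the one from the command above):
# show the current SELinux context on each file
ls -Z /var/www/html
# recursively restore the default context for this location, verbosely
restorecon -Rv /var/www/html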
I tried different cases, and it only started working as expected once the owner was set to nginx (chown -R nginx:nginx "/var/www/myfolder").
If you're using SELinux, just type:
sudo chcon -v -R --type=httpd_sys_content_t /path/to/www/
This will fix the permission issue.
Old question, but I had the same issue. I tried every answer above and nothing worked. What fixed it for me, though, was removing the domain and adding it again. I'm using Plesk, and I installed nginx AFTER the domain was already there.
I did a local backup to /var/www/backups first, though, so I could easily copy the files back.
Strange problem...
We had the same issue, using Plesk Onyx 17. Instead of messing around with rights etc., the solution was to add the nginx user to the psacln group, which all the other domain owners (users) were in:
usermod -aG psacln nginx
Now nginx has the rights to access .htaccess or any other file necessary to properly serve the content.
On the other hand, also make sure that Apache is in the psaserv group, to serve static content:
usermod -aG psaserv apache
And don't forget to restart both Apache and nginx in Plesk afterwards (and reload the pages with Ctrl-F5)!
I was facing the same issue, but the above solutions did not help.
So, after a lot of struggle, I found out that SELinux was set to enforcing, which was blocking access, and setting it to permissive resolved all the issues:
sudo setenforce 0
Hope this helps someone like me.
I dug myself into a slight variant of this problem by mistakenly running the setfacl command. I ran:
sudo setfacl -m user:nginx:r /home/foo/bar
I abandoned this route in favor of adding nginx to the foo group, but that custom ACL was foiling nginx's attempts to access the file. I cleared it by running:
sudo setfacl -b /home/foo/bar
And then nginx was able to access the files.
If you are using PHP, make sure the index directive in the NGINX server block contains index.php:
index index.php index.html;
For more info, check out the index directive in the official documentation.

nginx - client_max_body_size has no effect

nginx keeps saying client intended to send too large body. Googling and RTM pointed me to client_max_body_size. I set it to 200m in nginx.conf as well as in the vhost conf and restarted nginx a couple of times, but I'm still getting the error message.
Did I overlook something? The backend is php-fpm (post_max_size and upload_max_filesize are set accordingly).
Following the nginx documentation, you can set client_max_body_size 20m (or any value you need) in the following contexts: http, server, location.
NGINX large uploads are finally working successfully on our hosted WordPress sites (per suggestions from nembleton & rjha94).
I thought it might be helpful for someone if I added a little clarification to their suggestions. For starters, please be certain you have included your increased upload directive in ALL THREE separate definition blocks (http, server & location). Each should have a separate line entry. The result will look something like this (where the ... reflects other lines in the definition block):
http {
...
client_max_body_size 200M;
}
(in my ISPconfig 3 setup, this block is in the /etc/nginx/nginx.conf file)
server {
...
client_max_body_size 200M;
}
location / {
...
client_max_body_size 200M;
}
(in my ISPconfig 3 setup, these blocks are in the /etc/nginx/conf.d/default.conf file)
Also, make certain that your server's php.ini file is consistent with these NGINX settings. In my case, I changed the setting in php.ini's File_Uploads section to read:
upload_max_filesize = 200M
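PHP enforces a second, related limit too; a sketch of the php.ini pair worth keeping in sync (the 200M values simply mirror the nginx directive above):
; maximum size of a single uploaded file
upload_max_filesize = 200M
; maximum size of the whole POST body; should be >= upload_max_filesize
post_max_size = 200M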
Note: if you are managing an ISPconfig 3 setup (my setup is on CentOS 6.3, as per The Perfect Server), you will need to manage these entries in several separate files. If your configuration is similar to one in the step-by-step setup, the NGINX conf files you need to modify are located here:
/etc/nginx/nginx.conf
/etc/nginx/conf.d/default.conf
My php.ini file was located here:
/etc/php.ini
I had continued to overlook the http {} block in the nginx.conf file. Apparently, overlooking it had the effect of limiting uploads to the 1M default. After making the associated changes, you will also want to be sure to restart your NGINX and PHP FastCGI Process Manager (PHP-FPM) services. On the above configuration, I use the following commands:
/etc/init.d/nginx restart
/etc/init.d/php-fpm restart
As of March 2016, I ran into this issue trying to POST json over https (from python requests, not that it matters).
The trick is to put client_max_body_size 200M; in at least two places, http {} and server {}:
1. the http block
Typically in /etc/nginx/nginx.conf
2. the server block in your vhost.
For Debian/Ubuntu users who installed via apt-get (and other distro package managers which install nginx with vhosts by default), that's /etc/nginx/sites-available/mysite.com; for those who do not have vhosts, it's probably your nginx.conf or in the same directory as it.
3. the location / block in the same place as 2.
You can be more specific than /, but if it's not working at all, I'd recommend applying this to / and then, once it's working, being more specific.
Remember: if you have SSL, you will need to set the above for the SSL server and location too, wherever that may be (ideally the same as 2). I found that if your client tries to upload over http, and you expect them to get 301'd to https, nginx will actually drop the connection before the redirect because the file is too large for the http server, so it has to be in both.
Recent comments suggest that there is an issue with this on SSL with newer nginx versions, but i'm on 1.4.6 and everything is good :)
You need to apply the following changes:
Update php.ini (find the right ini file from phpinfo();) and increase post_max_size and upload_max_filesize to the size you want:
sed -i "s/post_max_size =.*/post_max_size = 200M/g" /etc/php5/fpm/php.ini
sed -i "s/upload_max_filesize =.*/upload_max_filesize = 200M/g" /etc/php5/fpm/php.ini
Update NginX settings for your website and add client_max_body_size value in your location, http, or server context.
location / {
client_max_body_size 200m;
...
}
Restart NginX and PHP-FPM:
service nginx restart
service php5-fpm restart
NOTE: Sometimes (in my case, almost every time) you need to kill the php-fpm processes if they are not refreshed properly by the service command. To do that, you can get the list of processes (ps -elf | grep php-fpm) and kill them one by one (kill -9 12345), or use the following command to do it for you:
ps -elf | grep php-fpm | grep -v grep | awk '{ print $4 }' | xargs kill -9
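If pkill is available on your system, a one-line equivalent of that pipeline (it matches processes by name):
sudo pkill -9 php-fpm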
Please check whether you are setting the client_max_body_size directive inside the http {} block and not only inside the location {} block. I set it inside the http {} block and it works.
Someone correct me if this is bad, but I like to lock everything down as much as possible, and if you've only got one target for uploads (as is usually the case), then just target your changes at that one file. This works for me on the Ubuntu nginx-extras mainline 1.7+ package:
location = /upload.php {
client_max_body_size 102M;
fastcgi_param PHP_VALUE "upload_max_filesize=102M \n post_max_size=102M";
(...)
}
I had a similar problem recently and found out that client_max_body_size 0; can solve such an issue. It sets client_max_body_size to no limit. But the best practice is to improve your code, so there is no need to increase this limit.
I met the same problem, but I found it had nothing to do with nginx. I am using Node.js as the backend server with nginx as a reverse proxy, and the 413 code was triggered by the Node server. Node uses koa to parse the body, and koa limits the urlencoded length:
formLimit: limit of the urlencoded body. If the body ends up being larger than this limit, a 413 error code is returned. Default is 56kb.
Setting formLimit to a larger value can solve this problem.
Assuming you have already set client_max_body_size and the various PHP settings (upload_max_filesize / post_max_size, etc.) from the other answers, and have restarted or reloaded NGINX and PHP without any result, run this:
nginx -T
This will surface any unresolved errors in your NGINX configs. In my case, I struggled with the 413 error for a whole day before I realized there were some other unresolved SSL errors in the NGINX config (wrong paths for certs) that needed to be corrected. Once I fixed the unresolved issues reported by nginx -T and reloaded NGINX, eureka! That fixed it.
I was setting up a dev server to play with that mirrors our outdated live one; I used The Perfect Server - Ubuntu 14.04 (nginx, BIND, MySQL, PHP, Postfix, Dovecot and ISPConfig 3).
After experiencing the same issue, I came across this post and nothing was working. I changed the value in every recommended file (nginx.conf, ispconfig.vhost, /sites-available/default, etc.).
Finally, changing client_max_body_size in my /etc/nginx/sites-available/apps.vhost and restarting nginx is what did the trick. Hopefully it helps someone else.
In case you are using Kubernetes, add the following annotations to your Ingress:
annotations:
nginx.ingress.kubernetes.io/client-max-body-size: "5m"
nginx.ingress.kubernetes.io/client-body-buffer-size: "8k"
nginx.ingress.kubernetes.io/proxy-body-size: "5m"
nginx.ingress.kubernetes.io/proxy-buffer-size: "8k"
Confirm the changes were applied:
kubectl -n <namespace> describe ingress <ingress-name>
References:
Client Body Buffer Size
Custom max body size
I had the same issue: the client_max_body_size directive was being ignored.
My silly error was that I had put a file inside /etc/nginx/conf.d which did not end in .conf. Nginx does not load these by default.
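A sketch for spotting and fixing that (the "uploads" filename here is hypothetical; the include pattern is the stock one from nginx.conf):
# the stock include only matches *.conf: include /etc/nginx/conf.d/*.conf;
grep -n 'conf.d' /etc/nginx/nginx.conf
# list anything in conf.d that the glob will skip
ls /etc/nginx/conf.d | grep -v '\.conf$'
# rename the offender (hypothetical name) so it gets loaded, then reload
mv /etc/nginx/conf.d/uploads /etc/nginx/conf.d/uploads.conf
nginx -t && nginx -s reload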
If you have tried the above options with no success, and you're using IIS (iisnode) to host your Node app, putting this code in web.config resolved the problem for me:
Here is the reference: https://www.inflectra.com/support/knowledgebase/kb306.aspx
Also, you can change the allowed length; as set here it is 2 GB. Modify it to your needs:
<security>
<requestFiltering>
<requestLimits maxAllowedContentLength="2147483648" />
</requestFiltering>
</security>
The following config worked for me. Notice I only set client_max_body_size 50M; once, contrary to what others are saying...
File: /etc/nginx/conf.d/sites.conf
server {
listen 80 default_server;
server_name portal.myserver.com;
return 301 https://$host$request_uri;
}
server {
resolver 127.0.0.11 valid=30s;
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
ssl_certificate /secret/portal.myserver.com.crt;
ssl_certificate_key /secret/portal.myserver.com.pem;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
server_name portal.myserver.com;
client_max_body_size 50M;
location /fileserver/ {
set $upstream http://fileserver:6976;
proxy_pass $upstream;
}
}
If you are using the Windows version of nginx, you can try killing all the nginx processes and restarting to see.
I encountered the same issue in my environment, but resolved it with this solution.
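On Windows that looks something like this (a sketch from an elevated prompt; the install path C:\nginx is an assumption):
rem stop every running nginx.exe process
taskkill /F /IM nginx.exe
rem start nginx again from its install directory (hypothetical path)
cd C:\nginx
start nginx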

Nginx - 502 bad gateway from changing user in nginx.conf

If I change the user parameter in nginx.conf from:
user www-data;
to:
user www www;
(www is a user, and www is also a group that already exists)
it says 502 Bad Gateway.
How can I successfully run nginx as the www user?
Cheers
You will need to use the command:
chown -R www:www "Document Root"
(substituting your actual document root). This will ensure that all of your web files are owned by that user and group, meaning nginx can access them.
If you have any .php files in your document root, you will also have to go to your PHP-FPM config file and change the lines:
listen.owner = www
listen.group = www
If you are running CentOS, go to:
/etc/php-fpm.d
where you will find the www.conf file containing those settings.
Hope that helps.
You should also change the user in the /etc/php/7.0/fpm/pool.d/www.conf file (Ubuntu 16.10):
listen.owner = www
listen.group = www
This error appears when you change the nginx process user without changing these params in php-fpm.
After that, restart the php-fpm process:
service php7.0-fpm restart (for PHP 7.0)
The error may be caused when you are passing the request to FastCGI (PHP): to do this, nginx has to access the file /run/php/php7.4-fpm.sock (for PHP 7.4). I checked the logs and found that permission to access this file was being denied.
I ran this command:
sudo chown ubuntu /run/php/php7.4-fpm.sock
and then it worked correctly.
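Note that a chown on the socket is lost whenever PHP-FPM restarts and recreates it. A more durable sketch (the pool-file path assumes stock Ubuntu PHP 7.4, and www-data assumes the default nginx user) is to set the owner in the pool config itself, as earlier answers here do:
# point the socket's owner/group at the user nginx runs as
sudo sed -i 's/^listen.owner = .*/listen.owner = www-data/' /etc/php/7.4/fpm/pool.d/www.conf
sudo sed -i 's/^listen.group = .*/listen.group = www-data/' /etc/php/7.4/fpm/pool.d/www.conf
# restart so the socket is recreated with the new ownership
sudo systemctl restart php7.4-fpm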
