HHVM 502 Bad gateway - Fedora 21 - nginx

G'day.
I'm running Fedora 21 with HHVM 3.7. I can start the service and access my pages without issue. However, if I keep refreshing the page, HHVM crashes, and checking the service status shows an error.
The HHVM error log returns:
Unable to open pid file /var/run/hhvm/pid for write
Now I can restart the server and it works fine, but after only a handful of requests it crashes as above.
PHP-FPM is not running, and nothing except HHVM is listening on port 9000.
Here is some config info:
HHVM - server.ini
; php options
pid = /var/run/hhvm/pid
; hhvm specific
hhvm.server.port = 9000
hhvm.server.type = fastcgi
hhvm.server.source_root = /srv/www
hhvm.server.default_document = index.php
hhvm.log.level = Error
hhvm.log.use_log_file = true
hhvm.log.file = /var/log/hhvm/error.log
hhvm.repo.central.path = /var/run/hhvm/hhvm.hhbc
HHVM - service
[Unit]
Description=HipHop Virtual Machine (FCGI)
[Service]
ExecStart=/usr/bin/hhvm --config /etc/hhvm/server.ini --user hhvm --mode daemon -vServer.Type=fastcgi -vServer.Port=9000
PrivateTmp=true
[Install]
WantedBy=multi-user.target
NGINX - site file
##NGINX STUFF
location ~ \.php$ {
include /etc/nginx/fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index bootstrap.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
##MORE NGINX STUFF
So from the info provided, is there any hint as to what could be the issue?
Cheers guys.

This is a simple permission issue, just as your log says: the user HHVM runs as has no write access to the pid directory, so it cannot create the pid file.
sudo chmod -R 777 /var/run/hhvm
I had the same problem on Ubuntu.
HHVM Unable to read pid file /var/run/hhvm/pid for any meaningful pid after reboot
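On Fedora, /var/run is a tmpfs, so the directory also vanishes on every reboot, which is exactly what the linked question ran into. A narrower fix than chmod 777 is a tmpfiles.d entry that recreates the pid directory at boot, owned by the service user (assumed here to be hhvm:hhvm):

```
# /etc/tmpfiles.d/hhvm.conf
# d <path> <mode> <user> <group>: create the directory at boot
d /run/hhvm 0755 hhvm hhvm
```

You can apply it without rebooting via systemd-tmpfiles --create /etc/tmpfiles.d/hhvm.conf.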
Another problem when you serve a lot of requests can be the max open files limit. When you exceed that limit, HHVM crashes. Normally you should see the error in your log, and you can increase the limit.
https://serverfault.com/questions/679408/hhvm-exit-after-too-many-open-files
Here is my question on ServerFault.
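To see where the limit currently sits, and where to raise it for a systemd-managed HHVM, a quick sketch (the 65536 value is just an illustrative choice, not a recommendation from the thread):

```shell
# Print the soft and hard open-files limits for the current context;
# a busy FastCGI daemon can exhaust a low soft limit under load.
echo "soft limit: $(ulimit -Sn)"
echo "hard limit: $(ulimit -Hn)"

# For a systemd service, raise it in the unit file and reload:
#   [Service]
#   LimitNOFILE=65536
```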

Related

Basic clean wordpress install on Nginx returns 502 error

I’m learning about running a server on a Raspberry Pi and just want to run a simple default WordPress site served with Nginx. For some reason, loading the site locally in a browser returns a 502 error, despite my other basic non-WordPress sites loading correctly. A clean download of the default WordPress installation files is inside /var/www/wp.example.co.uk
I’ve made a wp.example.co.uk.conf file inside /etc/nginx/sites-available - also symlinked to /etc/nginx/sites-enabled - with the code:
upstream wp-php-handler {
server unix:/var/run/php/php7.3-fpm.sock;
}
server {
listen 5432;
server_name _;
root /var/www/wp.example.co.uk;
index index.php;
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass wp-php-handler;
}
}
Whenever I view it in a browser (http://mylocalip:4323) it returns a 502 error. Why does this happen?
Note: I’m following a YouTube tutorial (where the relevant part is ~6:43 of https://www.youtube.com/watch?v=8psimaAr1U8) that shows the same code working, which leads me to believe that my code should work as-is.
Thanks
It looks like the tutorial might be outdated after only six months. It tells you to install php-fpm, then simply assumes that version 7.3 will be installed. If you run apt show php-fpm | grep "Depends:", it will tell you which version is actually being installed. While you could just run apt install php7.3-fpm to follow along with the tutorial, I've included instructions below on how to use a more recent version of PHP.
1. Install the version you want, e.g. apt install php8.1-fpm, or just apt install php-fpm for the current default version.
2. Run ls -d /var/run/php/* | grep sock --color=never to view all of the PHP-FPM sockets available on your system. The version you just installed should be listed here.
3. In the line of your config file that says server unix:/var/run/php/php7.3-fpm.sock;, replace the file referenced with one of the files listed in step 2.
4. Don't forget to reload Nginx when done. On Ubuntu and Debian systems this is done with sudo systemctl reload nginx.
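Step 3 can also be scripted with sed; here is a sketch run against a throwaway copy of the config rather than the live file (the php8.1 socket path is an assumption; substitute whichever socket step 2 actually listed):

```shell
# Rewrite the upstream to point at the php-fpm socket that actually exists.
conf=$(mktemp)
cat > "$conf" <<'EOF'
upstream wp-php-handler {
    server unix:/var/run/php/php7.3-fpm.sock;
}
EOF

sock=/var/run/php/php8.1-fpm.sock   # assumed; use a path from step 2
sed -i "s|unix:/var/run/php/php[0-9.]*-fpm\.sock|unix:$sock|" "$conf"

grep fpm.sock "$conf"   # the upstream now references the new socket
rm -f "$conf"
```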

Random unix:/tmp/php5-fpm.sock Failed

I was checking my error.log and found a few failures like:
connect() to unix:/tmp/php5-fpm.sock failed
Permissions are fine AFAIK: the socket is owned by nginx:nginx with mode 660, and I'm obviously running nginx as the nginx user. What gives?
www.conf
listen.owner = nginx
listen.group = nginx
listen.mode = 0660
default.conf (nginx)
fastcgi_pass unix:/tmp/php5-fpm.sock;
Running PHP 5.5.14
As of PHP 5.5.12, FPM socket permissions were changed to resolve a security bug; you can read more about that here: https://bugs.php.net/bug.php?id=67060
Your listen.mode = 0660 should now be set to listen.mode = 0666
As for Nginx here is a working example I am currently using:
# PHP-FPM Support
location ~ \.php$ {
fastcgi_pass unix:/usr/local/etc/php-fpm/nginx.sock;
include fastcgi.conf;
}
I was hoping you would have given a lot more configuration details, as requested. The lack of them makes this more difficult than it needs to be, since I have to guess at your situation and configuration.
Make sure inside of your FPM Pool Configuration that the following settings are defined:
[nginx]
listen = /usr/local/etc/php-fpm/nginx.sock
user = nginx
group = nginx
listen.owner = nginx
listen.group = nginx
listen.mode = 0666
You'll notice my listen paths are using /usr/local/etc/php-fpm but you can replace those with your own path of choice.
I see you are currently using /tmp. Although that is not a major problem, I'd advise against it; create a dedicated directory for holding your FPM sockets instead.
I checked the permissions on my /usr/local/etc/php-fpm directory and they are default as 755 and owned by root:root at the moment.
Give this a try. I'm sure it will work unless you have something else random happening that isn't obvious from the information you've given.

Getting random errors when setting up Joomla with nginx instead of apache

I'm trying to set up a Joomla 3 instance on my server, where I am already using nginx together with ownCloud as well as the blogging platform Ghost.
My first attempt was actually quite successful, and it only failed at the last installation step (creating the configuration files). I thought this was due to wrong permissions preventing the file from being created, so I wrote a short test script to verify that php5-fpm had write permission in the folder, and it did.
After several failed retries and no log files, I decided to delete the directory and download Joomla again. Since then, nothing works. Every time I unpack the zip (freshly downloaded or the same file), I get one of the following arbitrary error scenarios:
I get redirected to installation/installation/index.php instead of installation/index.php
I had errors about missing php files
I had errors about missing php classes:
JApplicationBase
JApplicationWebClient
some view class
...
After every unzip and re-download the error changes, even though I change nothing in the nginx or php5-fpm config.
After downloading and extracting the files I use the following command to set up the Joomla-directory properly:
sudo chown -R joomla_user .
(optional, only if I downloaded and extracted the zip as another user; you can see I really tried every possible combination)
sudo chgrp -R www-data .
nginx runs as www-data, but joomla_user isn't in the www-data group.
The files and folders are only readable, not writable, by nginx. I assumed this isn't a problem, since the writes are done by PHP anyway.
sudo chmod -R g+s .
to make sure that all future uploaded files will be readable by nginx
my nginx config in sites-available (and sites-enabled) looks like this:
server {
listen 80;
server_name joomla.server_url;
root /home/joomla_user/www/joomla3;
index index.php index.html index.htm default.html default.htm;
# Support Clean (aka Search Engine Friendly) URLs
location / {
try_files $uri $uri/ /index.php?$args;
}
# deny running scripts inside writable directories
location ~* /(images|cache|media|logs|tmp)/.*\.(php|pl|py|jsp|asp|sh|cgi)$ {
return 403;
error_page 403 /403_error.html;
}
location ~ \.php$ {
fastcgi_pass unix:/var/run/php5-fpm-joomla_user.sock;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
# caching of files
location ~* \.(ico|pdf|flv)$ {
expires 1y;
}
location ~* \.(js|css|png|jpg|jpeg|gif|swf|xml|txt)$ {
expires 14d;
}
}
The php5-fpm pool config is basically a copy of the default config with a changed socket name and pool name.
In summary: PHP 5 execution works, and permissions allow creating and writing files (at least in the directories I checked). However, after the installation failed to finish that first time, I now get truly random error messages every time I unzip the Joomla 3 archive, even when I download it fresh (directly to the server via wget) from their website (http://www.joomla.org/download.html).
Does anyone have experience using Joomla on top of nginx? Any idea how I could get rid of those errors and make it run?
Update:
My PHP version is 5.4.4:
PHP 5.4.4-14+deb7u8 (cli) (built: Feb 17 2014 09:18:47)
Copyright (c) 1997-2012 The PHP Group
Zend Engine v2.4.0, Copyright (c) 1998-2012 Zend Technologies
Also, yesterday I discussed the problem with a Joomla developer. They suggested directory permission problems, but the issue persists even after running chmod -R u+rw . in the Joomla directory.
I didn't manage to get rid of the errors, but I got the suggestion to use the tuxlite script. Running ./domain.sh add joomla JOOMLA_SERVER_URL created a new config with all the necessary directories. The generated nginx config also adds an SSL section, which in my case referenced the wrong certificate files. After fixing that, Joomla was up and running again.
I still had the original problem: Joomla didn't finish the installation. This was due to a too-short fastcgi_read_timeout (the default 60 seconds). Raising it to a few minutes made it work.
The last configuration I changed was in joomla's nginx configuration:
location / {
try_files $uri $uri/ /index.php?$args;
}
was changed to
location / {
try_files $uri $uri/ /index.php?q=$request_uri;
}
as it is described in the Joomla documentation for nginx.

PHP-FPM serving blank pages after fatal php error

I've got a custom setup of nginx and php-fpm on arch linux. I'll post my configs below. I think I've read the documentation for these two programs front to back about 6 times by now, but I've reached a point where I simply can't squeeze any more information out of the system and thus have nothing left to google. Here's the skinny:
I compiled both nginx and php from scratch (I'm very familiar with this, so presumably no problems there). I've got nginx set up to serve things correctly, which it does consistently: php files get passed through the unix socket (which is both present and read-/write-accessible to the http user, which is the user that both nginx and php-fpm run as), while regular files that exist get served. Calls for folders and calls for files that don't exist are both sent to the /index.php file. All permissions are in order.
The Problem
My pages get served just fine until there's a php error. The error gets dumped to nginx's error log, and all further requests for pages from that specific child process of php-fpm return blank. They do appear to be processed, as evidenced by the fact that subsequent calls to the file with errors continue to dump error messages into the log file, but both flawed and clean files alike are returned completely blank with a 200 status code.
What's almost wilder is that I found if I then just sit on it for a few minutes, the offending php-fpm child process doesn't die, but a new one is spawned on the next request anyway, and the new process serves pages properly. From that point on, every second request is blank, while the other request comes back normal, presumably because the child processes take turns serving requests.
My test is the following:
// web directory listing:
mysite/
--index.php
--bad_file.php
--imgs/
----test.png
----test2.png
index.php:
<?php
die('all cool');
?>
bad_file.php*:
<?php
non_existent_function($called);
?>
* Note: I had previously posted bad_file.php to contain the line $forgetting_the_semicolon = true, but found that this doesn't actually produce the error I'm talking about (this was a simplified example that I've now implemented on my own system). The above code, however, does reproduce the error, as it produces a fatal error instead of a parse error.
test calls from terminal:
curl -i dev.mysite.com/ # "all cool"
curl -i dev.mysite.com/index.php # Redirected to / by nginx
curl -i dev.mysite.com/imgs # "all cool"
curl -i dev.mysite.com/imgs/test.png # returns test.png, printing gibberish
curl -i dev.mysite.com/nofile.php # "all cool"
curl -i dev.mysite.com/bad_file.php # blank, but error messages added to log
curl -i dev.mysite.com/ # blank! noooooooo!!
curl -i dev.mysite.com/ # still blank! noooooooo!!
#wait 5 or 6 minutes (not sure how many - probably corresponds to my php-fpm config)
curl -i dev.mysite.com/ # "all cool"
curl -i dev.mysite.com/ # blank!
curl -i dev.mysite.com/ # "all cool"
curl -i dev.mysite.com/ # blank!
#etc....
nginx.conf:
user http;
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type text/plain;
sendfile on;
keepalive_timeout 65;
index /index.php;
server {
listen 127.0.0.1:80;
server_name dev.mysite.net;
root /path/to/web/root;
try_files /maintenance.html $uri @php;
location = /index.php {
return 301 /;
}
location ~ \.php$ {
include fastcgi_params;
fastcgi_pass unix:/usr/local/php/var/run/php-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
location @php {
include fastcgi_params;
fastcgi_pass unix:/usr/local/php/var/run/php-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root/index.php;
}
}
}
php-fpm.conf:
[global]
pid = run/php-fpm.pid
error_log = log/php-fpm.log
log_level = warning
[www]
user = http
group = http
listen = var/run/php-fpm.sock
listen.owner = http
listen.group = http
listen.mode = 0660
pm = dynamic
pm.max_children = 5
pm.start_servers = 1
pm.min_spare_servers = 1
pm.max_spare_servers = 3
php.ini upon request
In Summary
All pages are served as expected until there's a php error, at which point all subsequent requests to that particular php-fpm child process are apparently processed, but returned as completely blank pages. Errors that occur are reported and continue to be reported in the nginx error log file.
If anyone's got any ideas, throw them at me; I'm dead in the water until I figure this out. Incidentally, if anyone knows of a source of legitimate documentation for php-fpm, that would also be helpful. php-fpm.org appears to be nearly useless, as does php.net's documentation for FPM.
Thanks!
I've been messing with this since yesterday and it looks like it's actually a bug with output buffering. After trying everything, reading everything, going crazy on it, I finally turned off output buffering and it worked fine. I've submitted a bug report here.
For those who don't know, output buffering is a setting in php.ini that prevents php from sending output across the line as soon as it receives it. Not a totally crucial feature. I switched it from 4096 to Off:
;php.ini:
...
;output_buffering = 4096
output_buffering = Off
...
Hope this helps someone else!
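If you'd rather script the change, a sed substitution does it; sketched here against a scratch file, since the live php.ini path varies by build (assumption: your php.ini carries the stock output_buffering = 4096 line):

```shell
# Flip output_buffering from the 4096 default to Off in a scratch php.ini.
ini=$(mktemp)
printf 'output_buffering = 4096\n' > "$ini"

sed -i 's/^output_buffering *=.*/output_buffering = Off/' "$ini"

grep output_buffering "$ini"   # output_buffering = Off
rm -f "$ini"
```

Remember to restart php-fpm afterwards so the new setting takes effect.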

Nginx 403 forbidden for all files

I have nginx installed with PHP-FPM on a CentOS 5 box, but am struggling to get it to serve any of my files - whether PHP or not.
Nginx is running as www-data:www-data, and the default "Welcome to nginx on EPEL" site (owned by root:root with 644 permissions) loads fine.
The nginx configuration file has an include directive for /etc/nginx/sites-enabled/*.conf, and I have a configuration file example.com.conf, thus:
server {
listen 80;
# Virtual Host Name
server_name www.example.com example.com;
location / {
root /home/demo/sites/example.com/public_html;
index index.php index.htm index.html;
}
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param PATH_INFO $fastcgi_script_name;
fastcgi_param SCRIPT_FILENAME /home/demo/sites/example.com/public_html$fastcgi_script_name;
include fastcgi_params;
}
}
Despite public_html being owned by www-data:www-data with 2777 permissions, this site fails to serve any content:
[error] 4167#0: *4 open() "/home/demo/sites/example.com/public_html/index.html" failed (13: Permission denied), client: XX.XXX.XXX.XX, server: www.example.com, request: "GET /index.html HTTP/1.1", host: "www.example.com"
I've found numerous other posts with users getting 403s from nginx, but most that I have seen involve either more complex setups with Ruby/Passenger (which in the past I've actually succeeded with) or are only receiving errors when the upstream PHP-FPM is involved, so they seem to be of little help.
Have I done something silly here?
One permission requirement that is often overlooked is a user needs x permissions in every parent directory of a file to access that file. Check the permissions on /, /home, /home/demo, etc. for www-data x access. My guess is that /home is probably 770 and www-data can't chdir through it to get to any subdir. If it is, try chmod o+x /home (or whatever dir is denying the request).
EDIT: To easily display all the permissions on a path, you can use namei -om /path/to/check
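If namei isn't available, the same walk can be scripted; a sketch that prints the mode and owner of every parent directory (the path is the one from the question and likely won't fully exist on your machine; missing components are silently skipped):

```shell
# Print mode and owner of every parent directory of a path; each parent
# needs the execute (x) bit for the web-server user to traverse it.
check_parents() {
    p=$1
    while [ "$p" != / ]; do
        p=$(dirname "$p")
        stat -c '%a %U:%G %n' "$p" 2>/dev/null
    done
}

check_parents /home/demo/sites/example.com/public_html/index.html
```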
If you still see permission denied after verifying the permissions of the parent folders, it may be SELinux restricting access.
To check if SELinux is running:
# getenforce
To disable SELinux until next reboot:
# setenforce Permissive
Restart Nginx and see if the problem persists. To allow nginx to serve your www directory (make sure you turn SELinux back on before testing this, i.e. setenforce Enforcing):
# chcon -Rt httpd_sys_content_t /path/to/www
See my answer here for more details
I solved this problem by adding a user directive in nginx.conf:
worker_processes 4;
user username;
Replace 'username' with your Linux user name.
I've got this error and I finally solved it with the command below.
restorecon -r /var/www/html
The issue is caused when you mv something from one place to another. It preserves the selinux context of the original when you move it, so if you untar something in /home or /tmp it gets given an selinux context that matches its location. Now you mv that to /var/www/html and it takes the context saying it belongs in /tmp or /home with it and httpd is not allowed by policy to access those files.
If you cp the files instead of mv them, the selinux context gets assigned according to the location you're copying to, not where it's coming from. Running restorecon puts the context back to its default and fixes it too.
I've tried different cases, and only when the owner was set to nginx (chown -R nginx:nginx "/var/www/myfolder") did it start to work as expected.
If you're using SELinux, just type:
sudo chcon -v -R --type=httpd_sys_content_t /path/to/www/
This will fix the permission issue.
Old question, but I had the same issue. I tried every answer above and nothing worked. What fixed it for me was removing the domain and adding it again. I'm using Plesk, and I installed Nginx after the domain was already there.
I did a local backup to /var/www/backups first, though, so I could easily copy the files back.
Strange problem...
We had the same issue, using Plesk Onyx 17. Instead of messing with rights etc., the solution was to add the nginx user to the psacln group, which all the other domain owners (users) were in:
usermod -aG psacln nginx
Now nginx has the rights to access .htaccess or any other file necessary to properly serve the content.
On the other hand, also make sure that Apache is in psaserv group, to serve static content:
usermod -aG psaserv apache
And don't forget to restart both Apache and Nginx in Plesk after! (and reload pages with Ctrl-F5)
I was facing the same issue, but the solutions above did not help. After a lot of struggle I found that SELinux was set to enforcing (see sestatus), which blocked the ports; setting it to permissive resolved all the issues.
sudo setenforce 0
Hope this helps someone like me.
I dug myself into a slight variant on this problem by mistakenly running the setfacl command. I ran:
sudo setfacl -m user:nginx:r /home/foo/bar
I abandoned this route in favor of adding nginx to the foo group, but that custom ACL was foiling nginx's attempts to access the file. I cleared it by running:
sudo setfacl -b /home/foo/bar
And then nginx was able to access the files.
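Before clearing ACLs, it can help to confirm one is actually present; a sketch on a scratch file (user nobody stands in for nginx here, since the exact web-server user is system-specific):

```shell
# Add, inspect, then strip an extended ACL entry on a scratch file.
f=$(mktemp)
setfacl -m user:nobody:r "$f"   # simulate the leftover custom ACL
getfacl --omit-header "$f"      # shows user:nobody:r-- plus a mask:: entry
setfacl -b "$f"                 # remove all extended ACL entries
getfacl --omit-header "$f"      # back to plain owner/group/other lines
rm -f "$f"
```

A quick tell that a file carries extended ACLs is the "+" suffix on its mode in ls -l output.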
If you are using PHP, make sure the index directive in the Nginx server block contains index.php:
index index.php index.html;
For more info, check out the index directive in the official documentation.
