504 Gateway Time-out - upstream timeout - nginx

Everything was running smoothly when suddenly my server stopped working.
I'm using Linode with Nginx and FastCGI.
This is my log file:
upstream timed out (110: Connection timed out) while reading response header from upstream, client: 76.66.174.147, server: iskacanada.com, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.iskacanada.com"
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_read_timeout 120;
    fastcgi_pass localhost:9000;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
When I try to restart MySQL, it says:
sudo service mysql restart
stop: Unknown instance:
start: Job failed to start
Any idea what is going on?

After a few hours of debugging, here is how I fixed it:
Using Ubuntu 12.04, Nginx and php5-fpm.
Please check your log files! Log files are your friends. A 504 Gateway Time-out means that Nginx is not getting a timely response from its upstream. In my case Nginx and php-fpm were handling the requests, so I had to check two log files:
/var/log/nginx/error.log and /var/log/php5-fpm.log
in error.log:
recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 76.66.174.147, server: xxxxxxx.com, request: "GET /wp-admin/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.xxxxxxx.com"
in php5-fpm.log:
unable to bind listening socket for address '127.0.0.1:9000': Address already in use (98)
So I figured out that I needed to check my php5-fpm process by typing
netstat | grep 9000
tcp 0 0 localhost.localdom:9000 localhost.localdo:58424 SYN_RECV
tcp 913 0 localhost.localdom:9000 localhost.localdo:57917 CLOSE_WAIT
tcp 857 0 localhost.localdom:9000 localhost.localdo:58032 CLOSE_WAIT
tcp 1633 0 localhost.localdom:9000 localhost.localdo:58395 CLOSE_WAIT
tcp 961 0 localhost.localdom:9000 localhost.localdo:58025 CLOSE_WAIT
tcp 857 0 localhost.localdom:9000 localhost.localdo:58040 CLOSE_WAIT
tcp 953 0 localhost.localdom:9000 localhost.localdo:58005 CLOSE_WAIT
tcp 761 0 localhost.localdom:9000 localhost.localdo:58016 CLOSE_WAIT
tcp 1137 0 localhost.localdom:9000 localhost.localdo:57960 CLOSE_WAIT
Lots of connections stuck in CLOSE_WAIT! That's abnormal, so I killed everything holding port 9000:
fuser -k 9000/tcp
I then edited
/etc/php5/fpm/pool.d/www.conf
and set:
request_terminate_timeout = 30s
Now the website works. I hope this fixed it for good, since the problem was intermittent.
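For reference, here is a rough sketch of those recovery steps as shell commands (the paths and the sed expression are assumptions based on the stock Ubuntu 12.04 php5-fpm layout, not the exact commands I typed):
# free port 9000 from the stuck php-fpm workers
sudo fuser -k 9000/tcp
# set the request timeout in the default www pool
sudo sed -i 's/^;*\s*request_terminate_timeout.*/request_terminate_timeout = 30s/' /etc/php5/fpm/pool.d/www.conf
# restart the services
sudo service php5-fpm restart
sudo service nginx restart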

Check if PHP is still running: sudo ps aux | grep php
If it is, restart it with sudo service php5-fpm restart; if not, start it with sudo service php5-fpm start.
If you need to restart your database, just pass start, stop or restart to the service command: sudo service mysql start, sudo service mysql restart or sudo service mysql stop.

I just installed winginx and had the 504 gateway problem. The error log pointed at the upstream server "fastcgi://127.0.0.1:9000". This is where nginx proxies to php.
I opened the php-cgi.conf and found that php was listening on port 9054. Changed the port to 9000 and all is well.
The gateway is the address and/or port that nginx uses to connect to a service. For instance, MongoDB is configured to listen on port 27017 out of the box. For security reasons, I tend to change the default ports of services such as PHP on production servers.
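In general, the port in nginx's fastcgi_pass and the port PHP is actually listening on have to match. On a Linux server they can be compared with something like the following (a sketch; the config paths are examples, adjust them to your install):
# what port is php-fpm / php-cgi actually bound to?
netstat -lntp | grep php
# what does nginx think the upstream is?
grep -R 'fastcgi_pass' /etc/nginx/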

Related

Verify if nginx is working correctly with Proxy Protocol locally

Environment
I have set up Proxy Protocol support on an AWS classic load balancer as shown here which redirects traffic to backend nginx (configured with ModSecurity) instances.
Everything works great and I can hit my websites from the open internet.
Now, since my nginx configuration is done in AWS User Data, I want to do some checks before the instance starts serving traffic which is achievable through AWS Lifecycle hooks.
Problem
Before enabling Proxy Protocol I used to check whether my nginx instance was healthy and ModSecurity was working by looking for a 403 response from this command:
$ curl -ks "https://localhost/foo?username=1'%20or%20'1'%20=%20'"
After enabling Proxy Protocol, I can't do this anymore; the command fails with the error below, which is expected as per this link.
# curl -k https://localhost -v
* About to connect() to localhost port 443 (#0)
* Trying ::1...
* Connected to localhost (::1) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* NSS error -5938 (PR_END_OF_FILE_ERROR)
* Encountered end of file
* Closing connection 0
curl: (35) Encountered end of file
# cat /var/log/nginx/error.log
2017/10/26 07:53:08 [error] 45#45: *5348 broken header: "���4"�U�8ۭ򫂱�u��%d�z��mRN�[e��<�,�
�+̩� �0��/̨��98k�̪32g�5=�/<
" while reading PROXY protocol, client: 172.17.0.1, server: 0.0.0.0:443
What other options do I have to programmatically check nginx apart from curl? Maybe something in some other language?
You can use the --haproxy-protocol curl option, which adds the extra proxy protocol info to the request.
curl --haproxy-protocol localhost
So:
curl --haproxy-protocol -ks "https://localhost/foo?username=1'%20or%20'1'%20=%20'"
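If you want something scriptable for a lifecycle hook, the same check can be reduced to an exit-code test on the status code (a sketch; --haproxy-protocol needs curl 7.60 or newer, and the expected 403 comes from the ModSecurity rule above):
status=$(curl --haproxy-protocol -ks -o /dev/null -w '%{http_code}' "https://localhost/foo?username=1'%20or%20'1'%20=%20'")
[ "$status" = "403" ] && echo "nginx + ModSecurity OK" || exit 1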
Proxy Protocol prepends a plain-text line before anything else is streamed, for example:
PROXY TCP4 127.0.0.1 127.0.0.1 0 8080
The above is just an example, but it is always the very first thing on the connection. If NGINX listens on both SSL and plain HTTP with proxy_protocol enabled, it expects to see this line first, before anything else.
So if I do
$ curl localhost:81
curl: (52) Empty reply from server
And in nginx logs
web_1 | 2017/10/27 06:35:15 [error] 5#5: *2 broken header: "GET / HTTP/1.1
If I do
$ printf "PROXY TCP4 127.0.0.1 127.0.0.1 0 80\r\nGET /test/abc\r\n\r\n" | nc localhost 81
You can reach API /test/abc and args_given = ,
It works: because the PROXY line is sent first, nginx accepts the request.
Now in the SSL case, if I use the following
printf "PROXY TCP4 127.0.0.1 127.0.0.1 0 8080\r\nGET /test/abc\r\n\r\n" | openssl s_client -connect localhost:8080
it still errors out:
web_1 | 2017/10/27 06:37:27 [error] 5#5: *1 broken header: ",(�� #_5���_'���/��ߗ
That is because the client tries to do the TLS handshake first instead of sending the PROXY line first and then the handshake.
So your possible solutions are:
Terminate SSL on the LB, handle plain HTTP on nginx with proxy_protocol, and use the nc command I posted.
Add a listen 127.0.0.1:<randomlargeport> and run your test against that. This is still safe, since you are listening on localhost only.
Add another SSL listener: listen 127.0.0.1:443 ssl alongside listen <private_ipv4>:443 ssl proxy_protocol.
The solutions are in my order of preference; make your own choice.
Thanks Tarun for the detailed explanation. I discussed it within the team and we ended up creating another nginx virtual host on port 80 and using that to check ModSecurity, as below.
curl "http://localhost/foo?username=1'%20or%20'1'%20=%20'"
Unfortunately the bash version didn't work in my case, so I wrote Python 3 code:
#!/usr/bin/env python3
import socket
import sys

def check_status(host, port):
    '''Check app status, return True if ok'''
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(3)
        s.connect((host, port))
        s.sendall(b'GET /status HTTP/1.1\r\nHost: api.example.com\r\nUser-Agent: curl7.0\r\nAccept: */*\r\n\r\n')
        data = s.recv(1024)
        if data.decode().endswith('OK'):
            return True
        else:
            return False

try:
    status = check_status('127.0.0.1', 80)
except:
    status = False

if status:
    sys.exit(0)
else:
    sys.exit(1)
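For example, the script can then be called from the instance's startup or lifecycle-hook script (the installed path and file name here are hypothetical):
python3 /usr/local/bin/check_status.py && echo "nginx OK" || exit 1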

php-fpm + nginx + upload results in 502 Bad Request recv() failed()

I'm struggling with a problem while uploading files to my server.
I use the following:
Ubuntu 14.04 64bit
nginx 1.4.6
php5-fpm 5.5.9
The application which should receive the files is built on Zend Framework 2.4.0.
Every time I try to upload a file, I get a 502 Bad Request response.
The error.log of nginx shows:
[error] 21217#0: *5 recv() failed (104: Connection reset by peer) while reading response header from upstream
I read so much about this error but nothing really helped.
I have already:
disabled opcache in php.ini
switched from Unix sockets to TCP for php-fpm
raised the file size limits for PHP (the settings checked in the sketch below)
raised the timeouts for nginx and php-fpm
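(The limits referred to above can be inspected with something like the following; the paths assume the stock Ubuntu 14.04 php5-fpm/nginx layout:)
grep -E 'upload_max_filesize|post_max_size' /etc/php5/fpm/php.ini
grep -R 'client_max_body_size' /etc/nginx/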
If anyone has an idea it would be very nice if you could help a little bit :)
Greetz
Nilson
Try to open your fastcgi pool config:
vim /etc/php5/fpm/pool.d/www.conf
Change listen to:
listen = 127.0.0.1:9000
Open your nginx site config:
vim /etc/nginx/sites-available/your-site.conf
Replace fastcgi_pass unix:/var/run/php5-fpm.sock; with:
fastcgi_pass 127.0.0.1:9000;
Restart nginx and php5-fpm.
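Something along these lines, assuming the stock Ubuntu service names:
sudo service php5-fpm restart
sudo service nginx restart
# confirm php-fpm is now listening on the TCP socket
netstat -lnt | grep ':9000'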

Docker: Nginx and php5-fpm dockers are not talking

I’d like to make a fully dockerized Drupal install. My first step is to get containers running with Nginx and php5-fpm, both Debian based. I’m on CoreOS alpha channel (using Digital Ocean.)
My Dockerfiles are the following:
Nginx:
FROM debian
MAINTAINER fvhemert
RUN apt-get update && apt-get install -y nginx && echo "\ndaemon off;" >> /etc/nginx/nginx.conf
CMD ["nginx"]
EXPOSE 80
This container builds and runs nicely. I see the default Nginx page on my server IP.
Php5-fpm:
FROM debian
MAINTAINER fvhemert
RUN apt-get update && apt-get install -y \
php5-fpm \
&& sed 's/;daemonize = yes/daemonize = no/' -i /etc/php5/fpm/php-fpm.conf
CMD ["php5-fpm"]
EXPOSE 9000
This container also builds with no problems and it keeps running when started.
I start the php5-fpm container first with:
docker run -d --name php5-fpm freek/php5-fpm:1
And then I start Nginx, linked to php5-fpm:
docker run -d -p 80:80 --link php5-fpm:phpserver --name nginx freek/nginx-php:1
The linking seems to work; there is an entry in /etc/hosts with the name phpserver. Both containers run:
core@dockertest ~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fd1a9ae0f1dd freek/nginx-php:4 "nginx" 38 minutes ago Up 38 minutes 0.0.0.0:80->80/tcp nginx
3bd12b3761b9 freek/php5-fpm:2 "php5-fpm" 38 minutes ago Up 38 minutes 9000/tcp php5-fpm
I have adjusted some of the config files. For the Nginx container I edited /etc/nginx/sites-enabled/default and changed:
server {
#listen 80; ## listen for ipv4; this line is default and implied
#listen [::]:80 default_server ipv6only=on; ## listen for ipv6
root /usr/share/nginx/www;
index index.html index.htm index.php;
(I added the index.php)
And further on:
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
#
# # With php5-cgi alone:
fastcgi_pass phpserver:9000;
# # With php5-fpm:
# fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
}
In the php5-fpm docker I changed /etc/php5/fpm/php.ini:
cgi.fix_pathinfo=0
php5-fpm runs:
[21-Nov-2014 06:15:29] NOTICE: fpm is running, pid 1
[21-Nov-2014 06:15:29] NOTICE: ready to handle connections
I also changed index.html to index.php, it looks like this (/usr/share/nginx/www/index.php):
<html>
<head>
<title>Welcome to nginx!</title>
</head>
<body bgcolor="white" text="black">
<center><h1>Welcome to nginx!</h1></center>
<?php
phpinfo();
?>
</body>
</html>
I have scanned port 9000 from the Nginx container; it appears as closed. Not a good sign, of course:
root@fd1a9ae0f1dd:/# nmap -p 9000 phpserver
Starting Nmap 6.00 ( http://nmap.org ) at 2014-11-21 06:49 UTC
Nmap scan report for phpserver (172.17.0.94)
Host is up (0.00022s latency).
PORT STATE SERVICE
9000/tcp closed cslistener
MAC Address: 02:42:AC:11:00:5E (Unknown)
Nmap done: 1 IP address (1 host up) scanned in 0.13 seconds
The Nginx logs:
root@fd1a9ae0f1dd:/# vim /var/log/nginx/error.log
2014/11/20 14:43:46 [error] 13#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 194.171.252.110, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "128.199.60.95"
2014/11/21 06:15:51 [error] 9#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 145.15.244.119, server: localhost, request: "GET / HTTP/1.0", upstream: "fastcgi://172.17.0.94:9000", host: "128.199.60.95"
Yes, that goes wrong and I keep getting a 502 bad gateway error when browsing to my Nginx instance.
My question is: What exactly goes wrong? My guess is that I’m missing some setting in the php config files.
EDIT FOR MORE DETAILS:
This is the result (from inside the php5-fpm container, after apt-get install net-tools):
root@3bd12b3761b9:/# netstat -tapen
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address  Foreign Address  State  User  Inode  PID/Program name
From inside the Nginx container:
root@fd1a9ae0f1dd:/# netstat -tapen
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address  Foreign Address  State   User  Inode    PID/Program name
tcp    0      0      0.0.0.0:80     0.0.0.0:*        LISTEN  0     1875387  -
EDIT2:
Progression!
In the php5-fpm container, in the file:
/etc/php5/fpm/pool.d/www.conf
I changed the listen argument from some socket name to:
listen = 9000
Now when I go to my webpage I get the error:
"No input file specified."
Probably I have a trailing slash wrong somewhere. I'll look into it more closely!
EDIT3:
So I have rebuilt the containers with the above-mentioned alterations and it seems that they are talking. However, my webpage tells me: "File not found."
I'm quite sure it has to do with the path that nginx sends to php-fpm, but I have no idea what it should look like. I used the defaults with the socket method, which always worked; now it doesn't work anymore. What should be in /etc/nginx/sites-enabled/default under location ~ \.php$ { ?
The reason it doesn't work is, as you have discovered yourself, that nginx only sends the path of the PHP file to PHP-FPM, not the file itself (which would be quite inefficient). The solution is to use a third, data-only VOLUME container to host the files, and then mount it on both docker instances.
FROM debian
VOLUME /var/www
CMD ["true"]
Build the above Dockerfile and create an instance (call it for example: storage-www), then run both the nginx and the PHP-FPM containers with the option:
--volumes-from storage-www
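A rough sketch of what that looks like on the command line, reusing the image and container names from the question (the storage image tag and the docker create step are my assumptions, not part of the original answer):
# build the data-only image and create a container from it; it never needs to run
docker build -t storage-www .
docker create --name storage-www storage-www
# start php-fpm and nginx with the shared volume mounted in both
docker run -d --volumes-from storage-www --name php5-fpm freek/php5-fpm:1
docker run -d -p 80:80 --volumes-from storage-www --link php5-fpm:phpserver --name nginx freek/nginx-php:1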
That will work if you run both containers on the same physical server.
But you still could use different servers, if you put that data-only container on a networked file-system, such as GlusterFS, which is quite efficient and can be distributed over a large-scale network.
Hope that helps.
Update:
As of 2015, the best way to make persistent links between containers is to use docker-compose.
So, I have tested all the settings and none worked between containers, while they did work with the same settings on one server (and probably also within a single container). Then I found out that php-fpm does not receive PHP files from nginx; it only receives the path, and if it can't find that file in its own container it returns "File not found". See here for more information: https://code.google.com/p/sna/wiki/NginxWithPHPFPM
So that solves the question but not the problem, sadly. This is quite annoying for people who want to do load balancing with multiple php-fpm servers; they'd have to rsync everything or something like that. I hope someday I'll find a better solution. Thanks for the replies.
EDIT: Perhaps I can mount the same volume in both containers and get it to work that way. That won't be a solution when using multiple servers though.
When you are in your container as
root@fd1a9ae0f1dd:/#
check the ports in use with
netstat -tapen | grep ":9000 "
or
netstat -lntpu | grep ":9000 "
or the same commands without the grep

nginx proxy: connect() to ip:80 failed (99: Cannot assign requested address)

An nginx/1.0.12 running as a proxy on Debian 6.0.1 starts throwing the following error after running for a short time:
connect() to upstreamip:80 failed (99: Cannot assign requested address)
while connecting to upstream, client: xxx.xxx.xxx.xxx, server: localhost,
request: "GET / HTTP/1.1", upstream: "http://upstreamip:80/",
host: "requesteddomain.com"
Not all requests produce this error, so I suspect it has to do with the load on the server and some kind of limit being hit.
I have tried raising ulimit -n to 50k and worker_rlimit_nofile to 50k as well, but that does not seem to help. lsof -n shows a total of 1200 lines for nginx.
Is there a system limit on outgoing connections that might prevent nginx from opening more connections to its upstream server?
Seems like I just found the solution to my own question: Allocating more outgoing ports via
echo "10240 65535" > /proc/sys/net/ipv4/ip_local_port_range
solved the problem.
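That echo only lasts until the next reboot; to make the wider ephemeral port range persistent, the same value can be set via sysctl (a sketch, assuming the standard /etc/sysctl.conf mechanism):
echo "net.ipv4.ip_local_port_range = 10240 65535" >> /etc/sysctl.conf
sysctl -p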
Each TCP connection has to have a unique quadruple source_ip:source_port:dest_ip:dest_port.
source_ip is hard to change and source_port is chosen from ip_local_port_range, but it can't be more than 16 bits. The only things left to adjust are dest_ip and/or dest_port, so add some IP aliases for your upstream server:
upstream foo {
server ip1:80;
server ip2:80;
server ip3:80;
}
Where ip1, ip2 and ip3 are different IP addresses for the same server.
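On the upstream host the extra addresses could be added like this (the addresses and interface name are hypothetical placeholders):
ip addr add 192.0.2.11/24 dev eth0
ip addr add 192.0.2.12/24 dev eth0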
Or it might be easier to have your upstream listen on more ports.
modify /etc/sysctl.conf:
net.ipv4.tcp_timestamps=1
net.ipv4.tcp_tw_recycle=0
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_max_tw_buckets=10000  # after this change the number of local ports in use dropped from 26000 to 6000 (netstat -tuwanp | awk '{print $4}' | sort | uniq -c | wc -l)
run:
sysctl -p

Nginx Bad Gateway

Hi, I'm trying to move my old dev environment to a new machine. However, I keep getting "bad gateway" errors from nginx. From nginx's error log:
*19 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: ~(?<app>[^.]+).gp2, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "backend.gp2:5555"
Does anyone know why this is?
Thanks!
Turned out that PHP-FPM was not running.
Looks like your upstream host at 127.0.0.1:9000 is not accepting connections. Is the upstream process working?
You seem to have nginx configured as a proxy that tries to pass its requests to localhost on port 9000, but cannot find anything listening there.
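One quick way to verify is to look for a listener on that port (a sketch using lsof, which exists on both Linux and macOS; adjust the port if your fastcgi_pass differs):
lsof -iTCP:9000 -sTCP:LISTEN || echo "nothing is listening on port 9000"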
On my workstation, starting PHP fixed it for me. Note that I'm using PHP 7.4 on my Mac; please adjust the PHP version to whatever is installed on your workstation.
Working command:
sudo brew services start php@7.4
Please start your varnish:
sudo varnishd -a 127.0.0.1:80 -T 127.0.0.1:6082 -f /usr/local/etc/varnish/default.vcl -s file,/tmp,500M
