How to run uWSGI and NGINX in different Docker containers

I've set up uWSGI and NGINX locally and my configurations have no issue serving web requests. However, when I containerize uWSGI and NGINX (in separate Docker containers) I can't seem to make a connection. My NGINX server configuration looks like this:
server {
    listen 80;
    server_name localhost;

    location / {
        include uwsgi_params;
        uwsgi_pass uwsgi.app:9090;
    }
}
My uWSGI ini file looks like this:
[uwsgi]
socket = localhost:9090
module = uwsgi:app
processes = 4
threads = 2
master = true
buffer-size = 32768
stats = localhost:9191
die-on-term = true
vacuum = true
I run both containers on the same user-defined bridge network. For uWSGI I use:
docker run -p 9090:9090 --network main --name uwsgi.app -d uwsgirepo
and for NGINX:
docker run -p 80:80 --network main --link uwsgi.app --name nginx.app -d nginxrepo
When I try to make a request on my local machine I get the following error message from the NGINX logs: '[error] 7#7: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: localhost, request: "GET /endpoint HTTP/1.1", upstream: "uwsgi://172.18.0.2:9090", host: "127.0.0.1"'
It doesn't look like it's ever connecting to uWSGI. Not sure where to go from here. Any thoughts?

Configure uWSGI to listen on 0.0.0.0 instead of localhost. That will make it listen on all of the container's interfaces. You are getting "connection refused" because uWSGI is only listening on its own container's localhost, not on the container's eth0.
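A minimal sketch of the change in the ini file (only the socket line needs to move off localhost; the stats socket can stay local unless you also want it reachable from other containers):
[uwsgi]
socket = 0.0.0.0:9090
stats = localhost:9191
Note that with both containers on the same user-defined bridge network, NGINX reaches uWSGI by container name over that network, so the -p 9090:9090 publish and the --link flag are not strictly needed.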

Related

Verify if nginx is working correctly with Proxy Protocol locally

Environment
I have set up Proxy Protocol support on an AWS Classic Load Balancer as shown here, which forwards traffic to backend nginx instances (configured with ModSecurity).
Everything works great and I can hit my websites from the open internet.
Now, since my nginx configuration is done in AWS User Data, I want to run some checks before the instance starts serving traffic, which is achievable through AWS Lifecycle Hooks.
Problem
Before enabling Proxy Protocol I used to check whether my nginx instance was healthy and ModSecurity was working by expecting a 403 response from this command:
$ curl -ks "https://localhost/foo?username=1'%20or%20'1'%20=%20'"
After enabling Proxy Protocol, I can't do this anymore, as the command fails with the error below, which is expected as per this link.
# curl -k https://localhost -v
* About to connect() to localhost port 443 (#0)
* Trying ::1...
* Connected to localhost (::1) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* NSS error -5938 (PR_END_OF_FILE_ERROR)
* Encountered end of file
* Closing connection 0
curl: (35) Encountered end of file
# cat /var/logs/nginx/error.log
2017/10/26 07:53:08 [error] 45#45: *5348 broken header: "���4"�U�8ۭ򫂱�u��%d�z��mRN�[e��<�,�
�+̩� �0��/̨��98k�̪32g�5=�/<
" while reading PROXY protocol, client: 172.17.0.1, server: 0.0.0.0:443
What other options do I have to programmatically check nginx apart from curl? Maybe something in some other language?
You can use curl's --haproxy-protocol option, which prepends the proxy protocol header to the request:
curl --haproxy-protocol localhost
So:
curl --haproxy-protocol -ks "https://localhost/foo?username=1'%20or%20'1'%20=%20'"
Proxy Protocol prepends a plain-text line before anything else is streamed, for example:
PROXY TCP4 127.0.0.1 127.0.0.1 0 8080
This happens as the very first thing on the connection. So if NGINX is listening on both SSL and plain HTTP with proxy_protocol, it expects to see this line first, before anything else.
So if I do
$ curl localhost:81
curl: (52) Empty reply from server
And in the nginx logs:
web_1 | 2017/10/27 06:35:15 [error] 5#5: *2 broken header: "GET / HTTP/1.1
But if I do
$ printf "PROXY TCP4 127.0.0.1 127.0.0.1 0 80\r\nGET /test/abc\r\n\r\n" | nc localhost 81
You can reach API /test/abc and args_given = ,
it works, because I send the proxy protocol line first, so nginx accepts the request.
Now, in the case of SSL, if I use the below:
printf "PROXY TCP4 127.0.0.1 127.0.0.1 0 8080\r\nGET /test/abc\r\n\r\n" | openssl s_client -connect localhost:8080
it would still error out:
web_1 | 2017/10/27 06:37:27 [error] 5#5: *1 broken header: ",(�� #_5���_'���/��ߗ
That is because the client tries to do the TLS handshake first instead of sending the proxy protocol line first and then the handshake.
So your possible solutions are:
Terminate SSL on the LB, handle plain HTTP on nginx with proxy_protocol, and use the nc command I posted above.
Add a listen 127.0.0.1:<randomlargeport> and run your test against it. This is still safe, as you are listening on localhost only.
Add another SSL listener and use listen 127.0.0.1:443 ssl alongside listen <private_ipv4>:443 ssl proxy_protocol (see the sketch below).
The solutions are in my order of preference; you can make your own choice.
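A rough sketch of the third option; the private IP, server name, and certificate paths are placeholders:
server {
    listen 127.0.0.1:443 ssl;                  # local health checks, no proxy protocol
    listen 10.0.0.10:443 ssl proxy_protocol;   # traffic arriving from the load balancer
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    location / {
        root /usr/share/nginx/html;
    }
}
With this in place, curl -k against https://localhost works again for lifecycle-hook checks, while the LB-facing listener still requires the proxy protocol header.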
Thanks Tarun for the detailed explanation. I discussed it within the team and ended up creating another nginx virtual host on port 80 and using that to check ModSecurity, as below:
curl "http://localhost/foo?username=1'%20or%20'1'%20=%20'"
Unfortunately the bash version didn't work in my case, so I wrote Python 3 code:
#!/usr/bin/env python3
import socket
import sys


def check_status(host, port):
    '''Check app status, return True if ok'''
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(3)
        s.connect((host, port))
        s.sendall(b'GET /status HTTP/1.1\r\nHost: api.example.com\r\nUser-Agent: curl7.0\r\nAccept: */*\r\n\r\n')
        data = s.recv(1024)
        if data.decode().endswith('OK'):
            return True
        else:
            return False


try:
    status = check_status('127.0.0.1', 80)
except:
    status = False

if status:
    sys.exit(0)
else:
    sys.exit(1)

502 Bad Gateway Nginx+Flask+Gunicorn (2: No such file or directory)

I am trying to connect my flask app to Nginx and Gunicorn, based on this tutorial: How To Serve Flask Applications with Gunicorn and Nginx on Ubuntu 14.04.
I am getting a 502 Bad Gateway.
/var/log/nginx/error.log:
2017/10/16 21:17:04 [crit] 11284#0: *8 connect() to unix:/home/myproject/myproject.sock failed (2: No such file or directory) while connecting to upstream, client: <myIP>, server: <myIP>, request: "GET / HTTP/1.1", upstream: "http://unix:/home/myproject/myproject.sock:/", host: "<myIP>"
It seems like Nginx can't find the myproject.sock file, and I don't know why my upstart script doesn't create one, as it should based on the tutorial. Any guidance is greatly appreciated.
Below are my files:
/home/myproject/myproject.py
from flask import Flask

application = Flask(__name__)


@application.route("/")
def hello():
    return "<h1 style='color:blue'>Hello There!</h1>"


if __name__ == "__main__":
    application.run(host='0.0.0.0')
/home/myproject/wsgi.py
from myproject import application

if __name__ == "__main__":
    application.run()
/etc/init/myproject.conf
Note: I ran the cd and exec commands from the file below by hand for testing purposes, and they work fine.
description "Gunicorn application server running myproject"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
setuid www-data
setgid www-data
script
cd /home/myproject
exec gunicorn --bind unix:myproject.sock -m 007 wsgi
end script
/etc/nginx/sites-available (this is symlinked into sites-enabled)
server {
    listen 80;
    server_name <myIPaddressHere>;

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/myproject/myproject.sock;
    }
}
Debugging Steps I took:
(1) I checked that the upstart script is running
$ sudo status myproject
myproject start/running, process 22476
(2) Nginx is running
(3) Weird, I don't see my myproject.sock
# netstat -lpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 11279/nginx
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1304/sshd
tcp6 0 0 :::80 :::* LISTEN 11279/nginx
tcp6 0 0 :::22 :::* LISTEN 1304/sshd
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node PID/Program name Path
unix 2 [ ACC ] SEQPACKET LISTENING 7190 386/systemd-udevd /run/udev/control
unix 2 [ ACC ] STREAM LISTENING 8774 1120/acpid /var/run/acpid.socket
unix 2 [ ACC ] STREAM LISTENING 6541 1/init #/com/ubuntu/upstart
unix 2 [ ACC ] STREAM LISTENING 8339 859/dbus-daemon /var/run/dbus/system_bus_socket
[Solved] A mentor of mine pointed this out:
www-data:www-data cannot write to /home/myproject/.
Either write the socket into /tmp or choose a user/group that has more permissions.
I chose to write to /tmp
description "Gunicorn application server running myproject"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
setuid www-data
setgid www-data
script
cd /home/myproject
exec gunicorn --bind unix:/tmp/myproject.sock -m 007 wsgi
end script
And the Nginx file looks like this:
server {
    listen 80;
    server_name <ip>;

    location / {
        include proxy_params;
        proxy_pass http://unix:/tmp/myproject.sock;
    }
}
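A quick way to confirm the fix, assuming the upstart job has been restarted (these are the standard upstart/nginx commands, not something from the tutorial):
sudo restart myproject                       # or: sudo stop myproject && sudo start myproject
ls -l /tmp/myproject.sock                    # the socket should now exist, owned by www-data
sudo nginx -t && sudo service nginx reload
curl -I http://localhost/                    # should return 200 instead of 502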

Nginx permission denied

I want to deploy my flask service on a server running CentOS 7, so I followed this tutorial - https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uwsgi-and-nginx-on-centos-7 .
After running the systemctl start nginx command, I got this error:
nginx: [emerg] bind() to 0.0.0.0:5000 failed (13: Permission denied)
My nginx.conf file:
server {
    listen 5000;
    server_name _;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/root/fiproxy/fiproxyproject/fiproxy.sock;
    }
}
Note: the flask service and uWSGI work fine, and I've tried to run nginx as the superuser but the error remains.
After searching a lot on the Internet, I found a solution to my problem.
I ran this command to list the ports known to SELinux on my machine: semanage port -l.
After that, I filtered the output with: semanage port -l | grep 5000.
I realized that port 5000 is assigned to commplex_main_port_t; I searched on speedguide and found: 5000 tcp,udp UPnP.
Conclusion: my problem was binding to a port that SELinux does not allow nginx to use.
To allow your desired port, add it to the http_port_t type with this command:
sudo semanage port -a -t http_port_t -p tcp [yourport]
Now run nginx with sudo:
sudo systemctl stop nginx
sudo systemctl start nginx
The Nginx master process needs root permission because it needs to bind the port.
You need to start Nginx as the root user.
Then you can define the user for the worker (child) processes in nginx.conf.
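For reference, a minimal sketch of the relevant top of nginx.conf (the user name is an assumption; on CentOS the packaged default is usually nginx):
user nginx;              # the worker (child) processes run as this user
worker_processes auto;   # the master process still runs as root so it can bind ports

events {
    worker_connections 1024;
}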

Docker: Nginx and php5-fpm dockers are not talking

I’d like to make a fully dockerized Drupal install. My first step is to get containers running with Nginx and php5-fpm, both Debian based. I’m on CoreOS alpha channel (using Digital Ocean.)
My Dockerfiles are the following:
Nginx:
FROM debian
MAINTAINER fvhemert
RUN apt-get update && apt-get install -y nginx && echo "\ndaemon off;" >> /etc/nginx/nginx.conf
CMD ["nginx"]
EXPOSE 80
This container builds and runs nicely. I see the default Nginx page on my server IP.
Php5-fpm:
FROM debian
MAINTAINER fvhemert
RUN apt-get update && apt-get install -y \
php5-fpm \
&& sed 's/;daemonize = yes/daemonize = no/' -i /etc/php5/fpm/php-fpm.conf
CMD ["php5-fpm"]
EXPOSE 9000
This container also builds with no problems and it keeps running when started.
I start the php5-fpm container first with:
docker run -d --name php5-fpm freek/php5-fpm:1
And then I start Nginx, linked to php5-fpm:
docker run -d -p 80:80 --link php5-fpm:phpserver --name nginx freek/nginx-php:1
The linking seems to work; there is an entry in /etc/hosts with the name phpserver. Both containers run:
core#dockertest ~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fd1a9ae0f1dd freek/nginx-php:4 "nginx" 38 minutes ago Up 38 minutes 0.0.0.0:80->80/tcp nginx
3bd12b3761b9 freek/php5-fpm:2 "php5-fpm" 38 minutes ago Up 38 minutes 9000/tcp php5-fpm
I have adjusted some of the config files. For the Nginx container I edited /etc/nginx/sites-enabled/default and changed:
server {
    #listen 80;   ## listen for ipv4; this line is default and implied
    #listen [::]:80 default_server ipv6only=on; ## listen for ipv6

    root /usr/share/nginx/www;
    index index.html index.htm index.php;
(I added the index.php)
And further on:
location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini

    # With php5-cgi alone:
    fastcgi_pass phpserver:9000;
    # With php5-fpm:
    # fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
}
In the php5-fpm docker I changed /etc/php5/fpm/php.ini:
cgi.fix_pathinfo=0
php5-fpm runs:
[21-Nov-2014 06:15:29] NOTICE: fpm is running, pid 1
[21-Nov-2014 06:15:29] NOTICE: ready to handle connections
I also changed index.html to index.php; it looks like this (/usr/share/nginx/www/index.php):
<html>
<head>
<title>Welcome to nginx!</title>
</head>
<body bgcolor="white" text="black">
<center><h1>Welcome to nginx!</h1></center>
<?php
phpinfo();
?>
</body>
</html>
I have scanned port 9000 from the Nginx container; it appears as closed. Not a good sign, of course:
root@fd1a9ae0f1dd:/# nmap -p 9000 phpserver
Starting Nmap 6.00 ( http://nmap.org ) at 2014-11-21 06:49 UTC
Nmap scan report for phpserver (172.17.0.94)
Host is up (0.00022s latency).
PORT STATE SERVICE
9000/tcp closed cslistener
MAC Address: 02:42:AC:11:00:5E (Unknown)
Nmap done: 1 IP address (1 host up) scanned in 0.13 seconds
The Nginx logs:
root@fd1a9ae0f1dd:/# vim /var/log/nginx/error.log
2014/11/20 14:43:46 [error] 13#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 194.171.252.110, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "128.199.60.95"
2014/11/21 06:15:51 [error] 9#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 145.15.244.119, server: localhost, request: "GET / HTTP/1.0", upstream: "fastcgi://172.17.0.94:9000", host: "128.199.60.95"
Yes, that goes wrong and I keep getting a 502 bad gateway error when browsing to my Nginx instance.
My question is: What exactly goes wrong? My guess is that I’m missing some setting in the php config files.
EDIT FOR MORE DETAILS:
This is the result (from inside the php5-fpm container, after apt-get install net-tools):
root@3bd12b3761b9:/# netstat -tapen
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address  Foreign Address  State  User  Inode  PID/Program name
From inside the Nginx container:
root@fd1a9ae0f1dd:/# netstat -tapen
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address  Foreign Address  State   User  Inode    PID/Program name
tcp        0      0 0.0.0.0:80     0.0.0.0:*        LISTEN  0     1875387  -
EDIT2:
Progression!
In the php5-fpm container, in the file:
/etc/php5/fpm/pool.d/www.conf
I changed the listen directive from a unix socket path to:
listen = 9000
Now when I go to my webpage I get the error:
"No input file specified."
Probably I have a trailing / wrong somewhere. I'll look into it more closely!
EDIT3:
So I have rebuilt the containers with the above-mentioned alterations and it seems that they are talking. However, my webpage tells me: "File not found."
I'm fairly sure it has to do with the document path that nginx sends to php-fpm, but I have no idea what it should look like. I used the defaults when using the socket method, which always worked; now it doesn't work anymore. What should be in /etc/nginx/sites-enabled/default under location ~ \.php$ { ?
The reason it doesn't work is, as you have discovered yourself, that nginx only sends the path of the PHP file to PHP-FPM, not the file itself (which would be quite inefficient). The solution is to use a third, data-only VOLUME container to host the files, and then mount it on both docker instances.
FROM debian
VOLUME /var/www
CMD ["true"]
Build the above Dockerfile and create an instance (call it for example: storage-www), then run both the nginx and the PHP-FPM containers with the option:
--volumes-from storage-www
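A rough sketch of the full commands, reusing the names from this question (the image tag storage-www-image is an assumption):
docker build -t storage-www-image .
docker run --name storage-www storage-www-image    # exits immediately; it only holds the /var/www volume
docker run -d --volumes-from storage-www --name php5-fpm freek/php5-fpm:1
docker run -d -p 80:80 --volumes-from storage-www --link php5-fpm:phpserver --name nginx freek/nginx-php:1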
That will work if you run both containers on the same physical server.
But you could still use different servers if you put that data-only container on a networked file system, such as GlusterFS, which is quite efficient and can be distributed over a large-scale network.
Hope that helps.
Update:
As of 2015, the best way to make persistent links between containers is to use docker-compose.
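A minimal sketch of such a setup in docker-compose (v1 syntax, as used at the time); the ./www host directory and the build paths are assumptions:
nginx:
  build: ./nginx
  ports:
    - "80:80"
  links:
    - php:phpserver
  volumes:
    - ./www:/usr/share/nginx/www
php:
  build: ./php5-fpm
  volumes:
    - ./www:/usr/share/nginx/www
Mounting the same directory at the same path in both containers avoids the "file not found" problem described below, because PHP-FPM can then resolve the script path that nginx passes to it.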
So, I have tested all the settings, and none worked between containers, while they did work with the same settings on one server (and probably also inside one container). Then I found out that php-fpm does not receive PHP files from nginx; it only receives the path, and if it can't find the same file in its own container it returns "file not found". See here for more information: https://code.google.com/p/sna/wiki/NginxWithPHPFPM
So that solves the question but not the problem, sadly. This is quite annoying for people who want to do load balancing with multiple php-fpm servers; they'd have to rsync everything or something like that. I hope someday I'll find a better solution. Thanks for the replies.
EDIT: Perhaps I can mount the same volume in both containers and get it to work that way. That won't be a solution when using multiple servers, though.
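For reference, what nginx hands to PHP-FPM is the SCRIPT_FILENAME FastCGI parameter, so that path has to resolve inside the php5-fpm container as well. A minimal sketch of the location block, assuming the code is mounted at the same path in both containers:
location ~ \.php$ {
    fastcgi_pass phpserver:9000;
    fastcgi_index index.php;
    include fastcgi_params;
    # PHP-FPM resolves this path inside its own container:
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}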
When you are in your container as root@fd1a9ae0f1dd:/#, check the ports in use with
netstat -tapen | grep ":9000 "
or
netstat -lntpu | grep ":9000 "
or the same commands without the grep.

Nginx Bad Gateway

Hi, I'm trying to move my old dev environment to a new machine. However, I keep getting "bad gateway" errors from nginx. From nginx's error log:
*19 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: ~(?<app>[^.]+).gp2, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "backend.gp2:5555"
Does anyone know why this is?
Thanks!
Turned out that PHP-FPM was not running.
Looks like your upstream host at 127.0.0.1:9000 is not accepting connections. Is the upstream process working?
You seem to have nginx configured as a proxy that tries to pass its requests to localhost on port 9000, but there is nothing listening on port 9000.
On my workstation, starting PHP works for me. Note that I'm using PHP 7.4 on my Mac; please adjust the PHP version according to what is installed on your workstation.
Working command:
sudo brew services start php@7.4
Please start your varnish:
sudo varnishd -a 127.0.0.1:80 -T 127.0.0.1:6082 -f /usr/local/etc/varnish/default.vcl -s file,/tmp,500M
