Multiple docker containers accessible by nginx reverse proxy - nginx

I'd like to run multiple Docker containers on one host VM, all accessible through a single domain, using the request URL to differentiate between containers.
To achieve this I'm trying to set up an nginx server as a reverse proxy, running in a container as well and listening on port 80.
Let's say I have two containers running on ports 3000 and 4000.
The routing would be as follows:
docker-host.example.com/3000 -> accesses the container exposing port 3000
docker-host.example.com/4000 -> accesses the container exposing port 4000
The thing is, I'm currently stuck even trying to define a static rule for such a reverse proxy.
It works fine without any location prefix:
upstream application {
    server <docker container>:3000;
}

server {
    listen 80;

    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_pass http://application/;
    }
}
But when I add a location for the port and try to access it at localhost:{nginx port}/3000/:
upstream application {
    server <docker container>:3000;
}

server {
    listen 80;

    location /3000/ {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_pass http://application/3000/;
    }
}
It seems the first resource (the main HTML) is requested correctly, but every dependent resource (for example the JS or CSS the page needs) is missing.
If I examine the requests for those resources, the logs show:
09:19:20 [error] 5#5: *1 open() "/etc/nginx/html/public/css/fonts.min.css" failed (2: No such file or directory), client: 172.17.0.1, server: , request: "GET /public/css/fonts.min.css HTTP/1.1", host: "localhost:8455", referrer:"http://localhost:8455/3000/"
So the request URL is http://localhost:8455/public/css/fonts.min.css
instead of http://localhost:8455/3000/public/css/fonts.min.css
Could I ask you for any suggestions? Is this scenario possible?

You can select a Docker container per port, as in your example:
http://example.com:4000/css/fonts.min.css
http://example.com:3000/css/fonts.min.css
But there is another approach that I like more, because I think it is clearer: access each Docker container by domain name, e.g.:
http://a.example.com/css/fonts.min.css
http://b.example.com/css/fonts.min.css
Whichever you choose, there is a project on GitHub that helps you implement a Docker multi-container reverse proxy: https://github.com/jwilder/nginx-proxy
I've written an example using docker-compose for a similar scenario at: http://carlosvin.github.io/posts/reverse-proxy-multidomain-docker/
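For reference, here is a minimal sketch of the domain-based approach without the nginx-proxy project. The container names app_a and app_b and the hostnames are assumptions; it also assumes the nginx container shares a user-defined Docker network with both application containers, so their names resolve through Docker's built-in DNS:

# Hypothetical name-based virtual hosts, one per container.
server {
    listen 80;
    server_name a.example.com;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://app_a:3000;   # "app_a" is a hypothetical container name
    }
}

server {
    listen 80;
    server_name b.example.com;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://app_b:4000;   # "app_b" is a hypothetical container name
    }
}

Because proxy_pass points at the root of each backend, asset paths like /css/fonts.min.css need no prefix rewriting, which avoids the missing-prefix problem from the question.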

Related

Nginx preserve $request_uri

I'm not sure if the behavior I want is actually possible natively with nginx but here goes.
I have a server running on port 81 with the following nginx config:
CONFIGURATION OF SERVER1 NGINX
server {
    listen 81;
    server_name SERVER_DNS_NAME;

    location /server1 {
        proxy_pass http://127.0.0.1:8084/;
        proxy_set_header Host $host;
    }

    location / {
        proxy_pass http://127.0.0.1:8084;
        proxy_set_header Host $host:$server_port;
    }
}
I have another server running on port 82 with a similar configuration. Now what I'd like to do is be able to visit them both on port 80, just with different URIs.
For example: URL/server1 would take me to the first server, and URL/server2 would take me to the second.
CONFIGURATION OF NGINX LISTENING ON PORT 80
server {
    listen SERVER_IP:80;

    location /server1 {
        proxy_set_header Host $host;
        proxy_pass http://SERVER_IP:81/;
    }

    location /server2 {
        proxy_pass http://SERVER_IP:82;
        proxy_set_header Host $host;
    }
}
This works fine when I go to URL/server1: I am successfully routed to the main page on server1. However, as soon as I click any of the links on the server1 page I get a 404. This is because the site tries to go to URL/some_subdir_of_server1 (for which there is no mapping) rather than URL/server1/some_subdir_of_server1. Is this behavior doable? If so, how?
Thanks!
Be careful with trailing slashes: in your example, you have
proxy_pass http://SERVER_IP:81/;, which sets the proxied URL to the root /.
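As a rough illustration of that rule (SERVER_IP kept as the placeholder from the question): with a URI part in proxy_pass, nginx replaces the matched location prefix; without one, it forwards the request URI unchanged:

server {
    listen 80;

    # Trailing slash: the matched "/server1/" prefix is replaced by "/",
    # so a request for /server1/foo reaches the backend as /foo.
    location /server1/ {
        proxy_pass http://SERVER_IP:81/;
    }

    # No URI part: the request URI is passed through unchanged,
    # so /server2/foo reaches the backend as /server2/foo.
    location /server2/ {
        proxy_pass http://SERVER_IP:82;
    }
}

In the first case the backend never sees the /server1 prefix, so any absolute links it emits will point at the root and 404, which matches the behaviour described above.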

docker nginx returns "502", upstream server temporarily disabled while connecting to upstream

I use nginx in Docker; this is my nginx configuration:
server {
    listen 80;
    server_name saber;

    location / {
        root /usr/share/nginx;
        index index.html;
    }

    location /saber {
        proxy_pass http://localhost:8080;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_connect_timeout 90;
    }
}
When I open "http://localhost/saber/blog/getBlog.do" in the browser, it gives me a "502" error,
and nginx's error.log has a new entry:
2017/07/09 05:16:18 [warn] 5#5: *1 upstream server temporarily disabled while connecting to upstream, client: 172.17.0.1, server: saber, request: "GET /saber/blog/getBlog.do HTTP/1.1", upstream: "http://127.0.0.1:8080/saber/blog/getBlog.do", host: "localhost"
I can confirm that "http://127.0.0.1:8080/saber/blog/getBlog.do" responds successfully in the browser.
I searched other questions and found an answer suggesting "/usr/sbin/setsebool httpd_can_network_connect true" (from the question "nginx proxy server localhost permission denied"), but I run Docker on Windows 10 and the nginx container doesn't have setsebool, because the container doesn't use SELinux.
That's all, thank you in advance.
Localhost inside each container (like the nginx container) is different from localhost on your host. Each container gets its own networking namespace by default. Instead of pointing to localhost, you need to place your containers on the same Docker network (not the default bridge network) and use the container or service name with Docker's built-in DNS to connect. The target port is also the container port, not the published port on your host.
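As a sketch of what that looks like in the nginx config, assuming the backend container is named saber-app (a hypothetical name) and both containers are attached to the same user-defined network (for example one created with docker network create):

location /saber {
    # "saber-app" is a hypothetical container/service name; Docker's DNS
    # resolves it only when both containers share a user-defined network.
    # 8080 is the container port, not a port published on the host.
    proxy_pass http://saber-app:8080;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}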

Spawning node.js server with nginx

So this is something new to me. I have a Node.js application running on my server on port 3000. I have an nginx proxy:
upstream websli_nodejs_app_2 {
    server 127.0.0.1:3000;
}

# this is the dynamic websli server
server {
    listen 80;
    server_name test.ch;
    #root /var/www/websli/public;

    # pass the request to the node.js server with the correct headers;
    # much more can be added, see nginx config options
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://websli_nodejs_app_2/;
        proxy_redirect off;
    }
}
This works like a charm. Now I'm spawning (at least I believe that's what it's called) more Node applications. That's where Wintersmith comes into play: I'm running wintersmith preview.
On my localhost that results in another node.js server on localhost:8000. When I then go to localhost:8000 in my browser I get the expected result, but on my localhost I don't have the nginx proxy set up.
The issue:
Now on my production setup with nginx, I'm a bit stuck because I obviously cannot access localhost:8000.
I have tried to add another upstream server, but this didn't really work out. I have also tried to spawn on something like dev.test.ch:8000, but that results in an error like listen EADDRNOTAVAIL.
What I'm looking for
The goal is to start another server from inside my main node.js server and make it accessible from a browser. Any input is highly welcomed.
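One possible sketch, not a definitive answer: keep the Wintersmith preview bound to localhost (which avoids the EADDRNOTAVAIL error from binding to a hostname) and add a second upstream and server block in nginx for it. The upstream name and the dev.test.ch hostname below are assumptions:

upstream websli_wintersmith_preview {
    server 127.0.0.1:8000;   # hypothetical: the "wintersmith preview" server
}

server {
    listen 80;
    server_name dev.test.ch;   # hypothetical subdomain

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://websli_wintersmith_preview/;
    }
}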

nginx reverse proxy to backend running on localhost

EDIT: It turns out that my setup below actually works. Previously, I was getting redirections to port 36000, but it was due to some configuration settings on my backend application that were causing it.
I am not entirely sure, but I believe I might be wanting to set up a reverse proxy using nginx.
I have an application running on a server at port 36000. By default, port 36000 is not publicly accessible and my intention is for nginx to listen to a public url, direct any request to the url to an application running on port 36000. During this entire process, the user should not know that his/her request is being sent to an application running on my server's port 36000.
To put it in more concrete terms, assume that my url is http://domain.somehost.com/
Upon visiting http://domain.somehost.com/ , nginx should pick up the request and redirect it to an application already running on the server on port 36000, the application does some processing, and passes the response back. Port 36000 is not publicly accessible and should not appear as part of any url.
I've tried a setup that looks like:
server {
    listen 80;
    server_name domain.somehost.com;

    location / {
        proxy_pass http://127.0.0.1:36000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
and including that inside my main nginx.conf
However, it requires me to make port 36000 publicly accessible, and I'm trying to avoid that. Port 36000 also shows up as part of the forwarded URL in the web browser.
Is there any way that I can do the same thing, but without making port 36000 accessible?
Thank you.
EDIT: The config below is from a working nginx config, with the hostname and port changed.
You may be able to set the server listening on port 36000 as an upstream server (see http://nginx.org/en/docs/http/ngx_http_upstream_module.html):
server {
    listen 80;
    server_name domain.somehost.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://localhost:36000/;
        proxy_redirect http://localhost:36000/ https://$server_name/;
    }
}
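For completeness, an equivalent sketch that actually uses an upstream block, as the linked module documentation describes (backend_app is a hypothetical name):

upstream backend_app {
    server 127.0.0.1:36000;
}

server {
    listen 80;
    server_name domain.somehost.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend_app/;
    }
}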

How does supervisord and nginx handle what tornado port is used?

I am using supervisord to spool 2 instances of tornado on different ports and I use nginx as a reverse proxy to these ports. I have noticed that all traffic is directing to only one port. How does supervisord or nginx decide which instance of tornado is used when a user makes a request from the web service?
nginx config:
http {
    upstream frontends {
        server xx.xxx.x.xxx:8001;
        server xx.xxx.x.xxx:8002;
    }

    server {
        listen 80;
        server_name xx.xxx.x.xxx;

        location / {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_pass http://frontends;
        }
    }
}
From the nginx docs:
Requests are distributed according to the servers in round-robin manner with respect of the server weight.
By default, servers are given equal weight. Are you sure all requests are going to one port?
Also note that supervisord's role is simply process management - only nginx decides how to distribute traffic to the ports you've configured.
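If you do want an uneven split, weights can be added to the existing upstream block; a minimal sketch (the 2:1 ratio is just an example):

upstream frontends {
    # roughly two of every three requests go to :8001, the rest to :8002
    server xx.xxx.x.xxx:8001 weight=2;
    server xx.xxx.x.xxx:8002 weight=1;
}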