Docker network issues with nginx proxy container - nginx

I am currently trying to set up a Docker-based JIRA and Confluence platform proxied by nginx, and I am running into some kind of routing/network problem.
The basic setup consists of three Docker containers - the nginx container handles the HTTPS requests for specific domain names (e.g. jira.mydomain.com, confluence.mydomain.com) and forwards (proxy_pass) the requests to the respective containers for JIRA and Confluence.
This setup is generally working - I can access the JIRA instance by opening https://jira.mydomain.com and the Confluence instance by opening https://confluence.mydomain.com in my browser.
The problem I am running into becomes visible when logging into JIRA:
And following the "Find out more" link to:
The suggested resolutions from the provided JIRA health check link unfortunately did not help me identify and solve the problem. Instead, some exceptions in the log file led to more hints about the problem:
2017-06-07 15:04:26,980 http-nio-8080-exec-17 ERROR christian.schlaefcke 904x1078x1 eqafq3 84.141.114.234,172.17.0.7 /rest/applinks/3.0/applicationlinkForm/manifest.json [c.a.a.c.rest.ui.CreateApplicationLinkUIResource] ManifestNotFoundException thrown while retrieving manifest
ManifestNotFoundException thrown while retrieving manifest
com.atlassian.applinks.spi.manifest.ManifestNotFoundException: java.net.NoRouteToHostException: No route to host (Host unreachable)
...
Caused by: java.net.NoRouteToHostException: No route to host (Host unreachable)
And when I follow the hint from this Atlassian knowledge base article and run this curl statement from inside the JIRA container:
curl -H "Accept: application/json" https://jira.mydomain.com/rest/applinks/1.0/manifest -v
I finally get this error:
* Trying <PUBLIC_IP>...
* connect to <PUBLIC_IP> port 443 failed: No route to host
* Failed to connect to jira.mydomain.com port 443: No route to host
* Closing connection 0
curl: (7) Failed to connect to jira.mydomain.com port 443: No route to host
EDIT:
The external URL jira.mydomain.com can be pinged from inside of the container:
root@c9233dc17588:/# ping jira.mydomain.com
PING jira.mydomain.com (<PUBLIC_IP>) 56(84) bytes of data.
64 bytes from rs226736.mydomain.com (<PUBLIC_IP>): icmp_seq=1 ttl=64 time=0.082 ms
64 bytes from rs226736.mydomain.com (<PUBLIC_IP>): icmp_seq=2 ttl=64 time=0.138 ms
64 bytes from rs226736.mydomain.com (<PUBLIC_IP>): icmp_seq=3 ttl=64 time=0.181 ms
From outside the JIRA container (e.g. from the Docker host or another machine) the curl statement works fine!
I have quite good general Linux experience, but my knowledge about networks, routing and iptables is rather limited. Docker is running the current 17.03.1-ce version in combination with docker-compose on a CentOS 7 system:
~]# uname -a
Linux rs226736 3.10.0-514.21.1.el7.x86_64 #1 SMP Thu May 25 17:04:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
At the moment I don't even understand what kind of problem (iptables? routing? Docker?) this actually is and how to debug it :-(
I played around with some iptables- and nginx-related hints found via Google - all without success. Any hint pointing me in the right direction would be very much appreciated.
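For anyone debugging a similar situation, a few generic commands can help narrow down whether the problem lies in Docker networking, routing, or the host firewall. This is only a diagnostic sketch; the container and network names are the ones used in this setup and the tools may need to be installed inside the container first.

# On the Docker host: list networks and see which containers are attached
docker network ls
docker network inspect bridge

# Show the firewall rules Docker manages on the host
iptables -L DOCKER -n -v
iptables -t nat -L -n -v

# From inside the JIRA container: check name resolution and the route to the public IP
docker exec -it my_domain-jira getent hosts jira.mydomain.com
docker exec -it my_domain-jira ip route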
REQUESTED CONFIGS:
NGINX docker-compose.yml
nginx:
  image: nginx
  container_name: nginx
  ports:
    - 80:80
    - 443:443
  external_links:
    - my_domain-jira
    - my_domain-confluence
  volumes:
    - /opt/docker/logs/nginx:/var/log/nginx
    - ./nginx.conf:/etc/nginx/nginx.conf
    - ./certs/jira.mydomain.com.crt:/etc/ssl/certs/jira.mydomain.com.crt
    - ./certs/jira.mydomain.com.key:/etc/ssl/private/jira.mydomain.com.key
    - ./certs/confluence.mydomain.com.crt:/etc/ssl/certs/confluence.mydomain.com.crt
    - ./certs/confluence.mydomain.com.key:/etc/ssl/private/confluence.mydomain.com.key
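As a quick sanity check (not part of the original post), one can verify from inside the nginx container that the linked JIRA container name actually resolves and that its Tomcat port is reachable; container names are taken from the compose files in this question.

# Resolve the linked container name from inside the nginx container
docker exec -it nginx getent hosts my_domain-jira

# Check that JIRA's Tomcat port answers over plain HTTP, bypassing the proxy
# (curl may need to be installed in the nginx image first)
docker exec -it nginx curl -sS -o /dev/null -w "%{http_code}\n" http://my_domain-jira:8080/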
JIRA docker-compose.yml (Confluence similar):
jira:
  container_name: my_domain-jira
  build: .
  external_links:
    - postgres
  volumes:
    - ./inst/conf/server.xml:/opt/jira/conf/server.xml
    - ./inst/bin/setenv.sh:/opt/jira/bin/setenv.sh
    - /home/jira:/opt/atlassian-home
    - /opt/docker/logs/jira:/opt/jira/logs
    - /etc/localtime:/etc/localtime:ro
NGINX - nginx.conf
upstream jira {
    server my_domain-jira:8080;
}

# begin jira configuration
server {
    listen 80;
    server_name jira.mydomain.com;

    client_max_body_size 500M;
    rewrite ^ https://$server_name$request_uri? permanent;
}

server {
    listen 443 ssl;
    server_name jira.mydomain.com;

    ssl on;
    ssl_certificate /etc/ssl/certs/jira.mydomain.com.crt;
    ssl_certificate_key /etc/ssl/private/jira.mydomain.com.key;
    ssl_session_timeout 5m;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK';

    server_tokens off;
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    client_max_body_size 500M;

    location / {
        proxy_pass http://jira/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_cache_bypass $http_upgrade;
    }
}
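If it helps, the configuration can be syntax-checked and reloaded inside the running container, and the full proxy path tested from the host. A small sketch, assuming the container is named nginx as in the compose file above:

# Validate and reload the nginx configuration inside the container
docker exec nginx nginx -t
docker exec nginx nginx -s reload

# Test the full HTTPS proxy path from the Docker host
curl -kIv https://jira.mydomain.com/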
Ideas (nginx / proxy_pass / upstream) mostly picked up from:
https://www.digitalocean.com/community/tutorials/docker-explained-how-to-containerize-and-use-nginx-as-a-proxy
http://blog.nbellocam.me/2016/03/01/nginx-serving-multiple-sites-docker/
https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+behind+an+NGinX+reverse+proxy

After some discussion with the provider of the virtual server, it turned out that conflicting firewall rules between the Plesk firewall and iptables caused this problem. After the conflict had been fixed by the provider, the container could be accessed.
This problem is solved now - thanks to everyone who participated!
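For readers hitting the same symptom: a "No route to host" from inside a container usually means a host firewall rule rejects the hairpin connection back to the public IP. A hedged way to look for such conflicts (rule names and chains depend on the firewall product in use):

# Dump all filter rules and look for REJECT/DROP entries matching the public IP or port 443
iptables -S
iptables -L INPUT -n -v --line-numbers
iptables -L FORWARD -n -v --line-numbers

# Docker inserts its own chains; another firewall tool must not overwrite them
iptables -L DOCKER -n -v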

Related

Nginx has conflict with an application running on 443

On macOS, I usually run my project on localhost with sudo PORT=443 HTTPS=true ./node_modules/.bin/react-scripts start. As a result, https://localhost/#/start works in a browser.
Now, to run third-party authentications on localhost, I need to run nginx. Here is my /usr/local/etc/nginx/nginx.conf:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    upstream funfun {
        server 178.62.87.72:443;
    }

    server {
        listen 443 ssl;
        server_name localhost;

        ssl_certificate /etc/ssl/localhost/localhost.crt;
        ssl_certificate_key /etc/ssl/localhost/localhost.key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_session_timeout 1d;
        ssl_stapling off;
        ssl_stapling_verify off;

        add_header Strict-Transport-Security max-age=15768000;
        add_header X-Frame-Options "";

        proxy_ssl_name "www.funfun.io";
        proxy_ssl_server_name on;

        location ~ /socialLoginSuccess {
            rewrite ^ '/#/socialLoginSuccess' redirect;
        }

        location ~ /auth/(.*) {
            proxy_pass https://funfun/10studio/auth/$1?$query_string;
            proxy_set_header Host localhost;
        }

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Accept-Encoding "";
            proxy_set_header Proxy "";
            proxy_pass https://localhost/;
            # These three lines added as per https://github.com/socketio/socket.io/issues/1942 to remove socketio error
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }

    include servers/*;
}
However, launching nginx gives me the following errors:
$ sudo nginx
nginx: [emerg] bind() to 0.0.0.0:443 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (48: Address already in use)
nginx: [emerg] still could not bind()
It seems that nginx conflicts with the app running on 443. Does anyone know why?
Additionally, could anyone tell me the purpose of the block location / { ... } in my nginx configuration file?
Only one application can bind/listen on a given port at a time.
You started your app running on port 443:
sudo PORT=443 HTTPS=true ./node_modules/.bin/react-scripts start
Then, when you tried to start nginx on port 443 as well, it failed because your app was already using 443.
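To confirm which process is actually holding the port before changing anything, you can check what is listening on 443 (a macOS example, since the question mentions macOS; on Linux, ss -tlnp works similarly):

# Show the process currently listening on TCP port 443
sudo lsof -nP -iTCP:443 -sTCP:LISTEN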
To fix this:
stop nginx
stop your app and restart it running on a different port (e.g. 3000):
sudo PORT=3000 HTTPS=true ./node_modules/.bin/react-scripts start
edit your nginx config to tell nginx that your app ("upstream") is running on port 3000 now.
proxy_pass https://localhost:3000;
start nginx
Additionally, I would suggest that you do SSL (HTTPS) termination on nginx and let nginx connect to your app over localhost insecurely, to reduce other problems. Currently it looks like you are doing SSL termination on nginx and then another SSL connection/termination to your app/upstream. This really isn't necessary when connecting over localhost or a secure/private network (e.g. within an AWS VPC).
stop nginx
stop your app and restart it running on a different port (e.g. 3000):
remove HTTPS=true from sudo PORT=3000 HTTPS=true ./node_modules/.bin/react-scripts start
...and any other changes needed in your react app to disable ssl/https.
edit your nginx config to tell nginx that your app ("upstream") is running on port 3000 now and insecure (change https to http).
proxy_pass http://localhost:3000;
start nginx
For production you should really always run nginx in front of your apps. This allows you to easily do ssl termination, load balancing (multiple apps/upstreams) as well as serving static files (jpg, css, etc) without running through nodejs or other application server. It will scale better. Right tool for the right job.
For local development purposes you can just work against the local insecure http://localhost:3000. If you really hate using port 3000 for some reason then you can of course change that using NODE_ENV in tandem with dotenv or similar in order to switch the port your app uses when in development mode vs production. There really isn't any reason you need to use https/443 on localhost during development. You won't be able to get a trusted SSL cert for localhost so there really isn't any point...it just makes your life more difficult.
I have no issues testing oauth login flows against http://localhost:3000 with google for instance.
A port can be bound only once per interface. Now, if you run your React application server and it already binds port 443 on interface 0.0.0.0 - which in this case acts as a kind of wildcard meaning "listen on port 443 on all interfaces of my computer" - then no other application can use this port, because it is already taken. In your nginx configuration you can see the line which says that it also wants to use port 443:
server {
    listen 443 ssl; #<--- this is port config
    server_name localhost;
You have (at least) two choices to fix that error:
change PORT=443 in your local application to another port
change the port number in the nginx configuration to any other port that is not occupied
Next - location / { ... } means that all requests starting with /, which are virtually all requests except those caught by the two previous location blocks, will be forwarded to another web server located at https://localhost/ with some additional headers. This is called a reverse proxy.

Nginx in docker throws 502 Bad Gateway

I am trying to run a service called Grafana behind an nginx web server, where both services are run from a docker-compose file.
docker-compose.yml:
version: '3.1'

services:
  nginx:
    image: nginx
    ports: ['443:443', "80:80"]
    restart: always
    volumes:
      - ./etc/nginx.conf:/etc/nginx/nginx.conf:ro
      - /home/ec2-user/certs:/etc/ssl
  grafana:
    image: grafana/grafana
    restart: always
    ports: ["3000:3000"]
nginx.conf:
events {
    worker_connections 1024;
}

http {
    ssl_certificate /etc/ssl/cert.pem;
    ssl_certificate_key /etc/ssl/key.pem;

    server {
        listen 443 ssl;
        server_tokens off;

        location /grafana/ {
            rewrite /grafana/(.*) /$1 break;
            proxy_pass http://127.0.0.1:3000/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_bind $server_addr;
        }
    }
}
The grafana service is running on port 3000.
My goal is to access this nginx server from outside (let's assume its public IP address is 1.1.1.1) at the address https://1.1.1.1/grafana. With the current configuration I get 502 Bad Gateway and this error on the nginx side:
(111: Connection refused) while connecting to upstream, client: <<my-public-ip-here>>,
Your containers are running on two separate IP addresses in the Docker network, usually in the 172.17.0.0/16 range by default.
By using a proxy_pass like this in the nginx container:
proxy_pass http://127.0.0.1:3000/
you are essentially telling it to look for a process on port 3000 local to itself, because of the 127.0.0.1, right?
You need to point it at the Grafana container instead; try doing:
docker inspect <grafana ID> | grep IPAddress
Then set the proxy pass to that IP:
proxy_pass http://172.0.0.?:3000/
I've solved the same issue using something like @james suggested:
docker inspect <your inaccessible container is> | grep Gateway
Then use this IP address:
proxy_pass http://172.xx.0.1:3000/
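A note worth adding: because both services are defined in the same docker-compose.yml, Compose puts them on a shared default network with built-in DNS, so the service name grafana can be used in place of a hard-coded container IP (i.e. pointing proxy_pass at http://grafana:3000/). A quick, hedged way to confirm the name resolves from the nginx container (the network and container names below are placeholders; adjust to your actual project):

# List the networks Compose created and confirm both containers are attached
docker network ls
docker network inspect <compose_project>_default

# Check that the service name resolves from inside the nginx container
docker exec -it <nginx-container> getent hosts grafana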

Block access to rocket.chat via http port in reverse proxy mode

I have installed rocket.chat version 0.72.3 on CentOS 7.6 as a private local team chat.
Then, to configure a reverse proxy that forces rocket.chat to use SSL, I installed nginx version 1.12.2 and followed this link https://rocket.chat/docs/developer-guides/mobile-apps/supporting-ssl/ to configure nginx as a reverse proxy.
After the configuration was successful, I have two URLs both pointing to my rocket.chat application (http://localhost:3000 and https://localhost:443). I mean rocket.chat is accessible under both of these links, which makes the plain-HTTP access redundant.
How can I disable access to rocket.chat via http://localhost:3000?
You need to 1) bind the rocketchat service only to the localhost interface and 2) let nginx listen on the public interface and act as the proxy (which you probably already did).
So, first open your rocketchat.service file (possibly in /lib/systemd/system/rocketchat.service, but this depends on how you configured the rocketchat service) and add this line to the [Service] section:
[Service]
Environment=BIND_IP=127.0.0.1
Don't worry if you already have one (or several) Environment entries; these are aggregated (in my case I have a single Environment entry for each variable).
Then open your nginx config (possibly /etc/nginx/sites-enabled/default, but this may differ) and make sure that the server block listens only on port 443 and does its proxy job. My relevant nginx entries look like this:
# Upstreams
upstream backend {
    server 127.0.0.1:3000;
}

server {
    listen 443;
    server_name mydomain.com;

    error_log /var/log/nginx/rocketchat.access.log;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    location / {
        proxy_pass http://127.0.0.1:3000/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forward-Proto http;
        proxy_set_header X-Nginx-Proxy true;
        proxy_redirect off;
    }
}
You probably need to reload/restart the nginx and rocketchat services and reload the unit configuration by issuing the
$ sudo systemctl daemon-reload
command.
For me it works flawlessly.
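To double-check that rocket.chat really only binds to the loopback interface after the change, something like the following can be run on the server (assuming ss from iproute2 is available):

# Restart the service so the new BIND_IP takes effect
sudo systemctl daemon-reload
sudo systemctl restart rocketchat

# The listener should now show 127.0.0.1:3000, not 0.0.0.0:3000 or *:3000
ss -tlnp | grep 3000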
I resolved this issue by blocking external connections to port 3000 and allowing only connections from localhost, using iptables:
iptables -A INPUT -p tcp --dport 3000 -s 127.0.0.1 -j ACCEPT
iptables -A INPUT -p tcp --dport 3000 -j DROP
But I'm still wondering: isn't there any nginx-related config to sort this issue out?

nginx proxy forwarder for a wordpress container fails with upstream timed out

I have nginx running in a Docker container acting as an HTTPS proxy. I have a lot of other services running in other Docker containers, like GitLab, and nginx seems to work fine as a web proxy for them.
Today I set up a WordPress Docker container and used the config below in nginx:
#
# A virtual host using mix of IP-, name-, and port-based configuration
#
server {
    listen 80;
    listen 443 ssl;
    server_name x.example.com;

    ssl on;
    ssl_certificate /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    if ($scheme = http) {
        return 301 https://$server_name$request_uri;
    }

    location / {
        proxy_pass http://172.19.0.3;
        proxy_set_header Accept-Encoding "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # these two lines here
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
WordPress is running on host port 8080 and guest port 80, i.e. I can access the site perfectly with the URL http://x.example.com:8080. But when I try to access it using HTTPS, i.e. https://x.example.com, nginx gives me 504 Gateway Time-out.
docker logs -f nginx-proxy
shows the below log line.
2018/04/23 21:52:21 [error] 28#28: *3202 upstream timed out (110: Connection timed out) while connecting to upstream, client:
37.20.24.26, server: x.example.com, request: "GET / HTTP/1.1", u 0/", host: "x.example.com"
37.201.224.236 - - [23/Apr/2018:21:52:21 +0000] "GET / HTTP/1.1" 504 585 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36" "-"
Can someone please help me fix this issue? WordPress is running under a different Docker network, as the container was created using its own docker-compose.yml. Is that the reason nginx is not able to proxy through?
I had a similar problem with a local upstream. It was pointing to localhost, which resolved to both IPv4 and IPv6, while Docker created bindings only on IPv4. When a request from the nginx proxy used IPv6, it timed out (after the connection timeout, 60s by default), but the retry succeeded (because it used IPv4).
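If you suspect the same IPv4/IPv6 split, a quick check (a sketch, assuming a Linux host and the port mapping from this question) is to see what localhost resolves to and which addresses Docker actually published the port on:

# localhost typically resolves to both ::1 and 127.0.0.1
getent ahosts localhost

# Show which address/port Docker published for the container (IPv4 only in this case)
docker port <wordpress-container> 80

# Force curl over IPv4 and IPv6 to compare behaviour
curl -4 -sI http://localhost:8080/ | head -1
curl -6 -sI http://localhost:8080/ | head -1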
My nginx container was not able to communicate with the Docker network created for WordPress. I resolved the issue using the
docker network connect
command.
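For completeness, a sketch of how that fix can look (network and container names below are examples; the network created by Compose is usually named <project>_default):

# Find the network the WordPress stack runs on
docker network ls

# Attach the nginx proxy container to that network
docker network connect <wordpress_project>_default nginx-proxy

# Verify both containers now share the network
docker network inspect <wordpress_project>_default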

nginx conditionally reverse proxy or serve directly

The question asks whether it is possible to make nginx conditionally redirect requests to another server (by reverse proxy) or process the request by itself.
Here's the details.
I have a Raspberry Pi (RPi) running nginx + WordPress 24/7 at home. I also have a laptop running Ubuntu for about 5 hours every night.
The WordPress on the RPi works great but it is slow (especially when executing PHP). So I would like to let the laptop help:
If the laptop is on, the RPi's nginx redirects all requests to Ubuntu by reverse proxy;
If the laptop is off, the RPi's nginx processes the requests as usual.
Is it possible to achieve this? If yes, how should the RPi and Ubuntu be configured?
The basic solution is: make nginx a reverse proxy with a fail_timeout. When it receives a request, it dispatches it to the upstreams, where Ubuntu has higher priority; if Ubuntu is offline, the RPi handles the request itself.
This requires that:
MySQL can be accessed by two clients with different IPs, which is already supported;
the WordPress files are the same for the RPi and Ubuntu, which can be done with an NFS share;
nginx is correctly configured.
Below are the details of the configuration.
Note, in my configuration:
RPi's IP is 192.168.1.100, Ubuntu's IP is 192.168.1.101;
The wordpress only allows https, all http requests are redirected to https;
Server listens at port 80 and 443, upstreams listen on port 8000;
Mysql
Set bind-address = 192.168.1.100 in /etc/mysql/my.cnf, and make sure skip-networking is not defined;
Grant permissions to the RPi and Ubuntu in the MySQL console:
grant all on minewpdb.* to 'mineblog'@'192.168.1.100' identified by 'xxx';
grant all on minewpdb.* to 'mineblog'@'192.168.1.101' identified by 'xxx';
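A simple way to confirm the remote grant works is to connect from the Ubuntu machine before touching WordPress (assuming the mysql client is installed there):

# Run on the Ubuntu laptop: this should open a MySQL prompt against the RPi's database
mysql -h 192.168.1.100 -u mineblog -p minewpdb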
Wordpress
Set DB_HOST correctly:
define('DB_NAME', 'minewpdb');
define('DB_USER', 'mineblog');
define('DB_PASSWORD', 'xxx');
define('DB_HOST', '192.168.1.100');
NFS
On the RPi, install nfs-kernel-server and export the WordPress directory in /etc/exports:
/path/to/wordpress 192.168.1.101(rw,no_root_squash,insecure,sync,no_subtree_check)
To enable nfs server on RPi, rpcbind is also required:
sudo service rpcbind start
sudo update-rc.d rpcbind enable
sudo service nfs-kernel-server start
On Ubuntu, mount the NFS share (it should also be added to /etc/fstab so it mounts automatically):
sudo mount -t nfs 192.168.1.100:/path/to/wordpress /path/to/wordpress
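Before adding the fstab entry, the export can be verified from the Ubuntu side (showmount ships with the NFS client utilities):

# List the exports offered by the RPi
showmount -e 192.168.1.100

# After mounting, confirm the share is actually mounted
df -h /path/to/wordpress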
Nginx
On the RPi, create a new config file /etc/nginx/sites-available/wordpress-load-balance with the parameters below:
upstream php {
    server unix:/var/run/php5-fpm.sock;
}

upstream mineservers {
    # upstreams, Ubuntu has much higher priority
    server 192.168.1.101:8000 weight=999 fail_timeout=5s max_fails=1;
    server 192.168.1.100:8000;
}

server {
    listen 80;
    server_name mine260309.me;
    rewrite ^ https://$server_name$request_uri? permanent;
}

server {
    listen 443 ssl;
    server_name mine260309.me;

    ssl_certificate /path/to/cert/cert_file;
    ssl_certificate_key /path/to/cert/cert_key_file;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    access_log /path/to/wordpress/logs/proxy.log;
    error_log /path/to/wordpress/logs/proxy_error.log;

    location / {
        # reverse-proxy to upstreams
        proxy_pass http://mineservers;

        ### force timeouts if one of backend is died ##
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;

        ### Set headers ####
        proxy_set_header Accept-Encoding "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        ### Most PHP, Python, Rails, Java App can use this header ###
        #proxy_set_header X-Forwarded-Proto https;##
        #This is better##
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header Front-End-Https on;

        ### By default we don't want to redirect it ####
        proxy_redirect off;
    }
}

server {
    root /path/to/wordpress;
    listen 8000;
    server_name mine260309.me;
    ... # normal wordpress configurations
}
On Ubuntu, it can use the same config file.
Now any request received by the RPi's nginx server on port 443 is dispatched to either Ubuntu's or the RPi's port 8000, where Ubuntu has much higher priority. If Ubuntu is offline, the RPi handles the request itself.
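The failover behaviour can be checked with a couple of requests while toggling the Ubuntu upstream; a rough sketch, run from any client on the LAN:

# With the laptop online, responses should be served by 192.168.1.101:8000
curl -kI https://mine260309.me/

# Stop nginx on the laptop (or shut it down), then repeat; after fail_timeout
# the RPi's own port 8000 should answer instead
curl -kI https://mine260309.me/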
Any comments are welcome!
