I'm trying to get Nginx to reverse proxy connections within a LAN to several web applications, including ones inside Docker containers.
Both webapps are reachable at their proxy_pass URLs.
I'm using the following Dockerfile:
# Set the base image to Ubuntu
FROM ubuntu
RUN apt-get update
RUN apt-get install -y nginx
RUN rm -v /etc/nginx/nginx.conf
RUN echo "daemon off; \n\
\n\
worker_processes 1; \n\
events { worker_connections 1024; } \n\
\n\
http { \n\
\n\
server { \n\
listen 99; \n\
\n\
server_name dashboard; \n\
location / { \n\
proxy_pass http://dashboard:80; \n\
} \n\
location /app1 { \n\
proxy_pass http://otherhostname:9000/app1; \n\
} \n\
} \n\
} \n\
" >> /etc/nginx/nginx.conf
EXPOSE 99
CMD service nginx start
When running this as a service (container) I can reach app1, but not the dashboard.
The weird thing is that I had this working before, and I'm pretty sure I did not change anything fundamental to the dockerfile. Am I missing something?
EDIT: (I have currently exposed the dashboard on port 80, and am testing on 99 with nginx)
I run the nginx container with:
docker service create \
--replicas 1 \
--name nginx \
-p 99:99 \
nginx_image
The dashboard also has the correct port exposed:
docker service create \
--replicas 1 \
--name dashboard \
-p 80:8080 \
dashboard_image
Looking in the nginx error.log I found:
2016/11/08 08:46:41 [error] 25#25: *42 upstream timed out (110: Connection timed out) while connecting to upstream, client: 10.255.0.3, server: dashboard, request: "GET / HTTP/1.1", upstream: "http://dockerhostip:80/", host: "dashboard:99"
Nginx itself is working as intended: when I change the proxy_pass target to example.com it works fine. It must be something that changed in the dashboard service that messes things up.
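One common cause in a swarm setup like this is that the two services are not attached to the same overlay network, so the name dashboard does not resolve to the dashboard service from inside the nginx container. A minimal sketch of that setup (the network name proxy_net is just an example, and it assumes the dashboard container itself listens on 8080, as the -p 80:8080 mapping suggests):
docker network create --driver overlay proxy_net
docker service create \
  --replicas 1 \
  --name dashboard \
  --network proxy_net \
  dashboard_image
docker service create \
  --replicas 1 \
  --name nginx \
  --network proxy_net \
  -p 99:99 \
  nginx_image
With both services on proxy_net, nginx can reach the dashboard at its container port (proxy_pass http://dashboard:8080;) without that port being published on the host.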
I am trying to confirm that my logs are in JSON format, but I cannot see even a single log entry. I am using docker-compose:
version: '3'
services:
  nginx:
    image: test_site
    volumes:
      - /Users/mikeJ/Desktop/test-logs/access:/tmp/logs/access
      - /Users/mikeJ/Desktop/test-logs/error:/tmp/logs/error
    build:
      context: .
    restart: unless-stopped
    ports:
      - "8040:8040"
nginx.conf
worker_processes 1;
events { worker_connections 1024; }
http {
    include mime.types;
    sendfile on;
    access_log on;
    log_format json_combined escape=json
        '{'
        '"time_local":"$time_local",'
        '"remote_addr":"$remote_addr",'
        '"remote_user":"$remote_user",'
        '"request":"$request",'
        '"status": "$status",'
        '"body_bytes_sent":"$body_bytes_sent",'
        '"request_time":"$request_time",'
        '"http_referrer":"$http_referer",'
        '"http_user_agent":"$http_user_agent"'
        '}';
    server {
        listen 8040;
        error_log /tmp/logs/error/error.log warn;
        access_log /tmp/logs/access/access.log;
        server_name localhost;
        location / {
            root /usr/share/nginx/html/;
            index index.html;
        }
        location ~ ^/test/footer {
            root /usr/share/nginx/html/;
            expires 5m;
            access_log on;
        }
    }
}
Dockerfile
FROM nginx:1.15.0-alpine
RUN rm -v /etc/nginx/nginx.conf
# Copying nginx configuration file
ADD nginx.conf /etc/nginx/
# setup nginx caching
RUN mkdir -p /tmp/nginx/cache
#create directory for logs
RUN mkdir -p /tmp/logs/error
RUN mkdir -p /tmp/logs/access
#adding footer file
ADD footer /usr/share/nginx/html/footer
# Expose ports
EXPOSE 8040
I even ssh'd into the container, and nothing is there.
from inside the container
# ps aux | grep nginx
1 root 0:00 nginx: master process nginx -g daemon off;
7 nginx 0:00 nginx: worker process
Could you confirm if the nginx.conf is correct?
It seems that the nginx worker process does not have permission to write to the directories you created.
ps -eo "%U %G %a" | grep nginx
Run the command above to find out which user the worker processes run as; in your case it is nginx.
Change the owner and group for the log directory and reload the nginx service.
#create directory for logs
RUN mkdir -p /tmp/logs/error
RUN mkdir -p /tmp/logs/access && \
chown -R nginx:nginx /tmp/logs/
#adding footer file
ADD footer /usr/share/nginx/html/footer
Check the logs folder after accessing one of your URLs.
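To verify the fix, a quick check against the running container might look like this (the container name below is a placeholder for whatever docker-compose assigned, and the access log is only created after the first request):
docker exec -it <nginx_container> ls -ld /tmp/logs/access /tmp/logs/error
curl http://localhost:8040/
docker exec -it <nginx_container> cat /tmp/logs/access/access.log
Both directories should now show nginx:nginx as owner and group.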
I am currently trying to use nginx as a proxy for an Elasticsearch engine, all with Docker.
My run command for Elasticsearch is the following:
docker run --name elasticsearch_5.2.1 \
-d \
elasticsearch:5.2.1
The one for nginx:
docker run --name nginx_1.11.10 \
-p 8200:80 \
-l elasticsearch_5.2.1:elasticsearch \
-v /my.conf:/etc/nginx/nginx.conf:ro \
-d \
nginx:1.11.10
And my nginx config is:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    upstream elasticsearch_proxy {
        server elasticsearch:9200;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://elasticsearch_proxy;
            proxy_http_version 1.1;
            proxy_set_header Connection "Keep-Alive";
            proxy_set_header Proxy-Connection "Keep-Alive";
        }
    }
}
But when nginx starts, I get this error:
2017/03/01 23:45:47 [emerg] 1#1: host not found in upstream "elasticsearch:9200" in /etc/nginx/nginx.conf:15
nginx: [emerg] host not found in upstream "elasticsearch:9200" in /etc/nginx/nginx.conf:15
I understand that nginx can't find elasticsearch by its alias, but I can't see where the problem is.
Has anyone run into this problem before?
Thank you.
You need to create a user-defined network.
docker network create my_app
And then run both containers on that network.
docker run --name elasticsearch_5.2.1 \
-d --network my_app \
elasticsearch:5.2.1
docker run --name nginx_1.11.10 \
-p 8200:80 \
--link elasticsearch_5.2.1:elasticsearch \
--network my_app \
-v /my.conf:/etc/nginx/nginx.conf:ro \
-d \
nginx:1.11.10
Then you should be able to resolve container names properly; Docker's embedded DNS serves them on user-defined networks.
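Note that on a user-defined network a container is resolvable by its name (here elasticsearch_5.2.1), while the upstream block uses the short name elasticsearch. If you want to keep server elasticsearch:9200; in the config without the legacy --link flag, one option (a sketch, assuming a Docker version that supports network aliases) is to give the container an explicit alias on that network:
docker run --name elasticsearch_5.2.1 \
  -d --network my_app \
  --network-alias elasticsearch \
  elasticsearch:5.2.1
Alternatively, change the upstream to server elasticsearch_5.2.1:9200;.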
I have a service listening on port 8080. It is not a container.
Then, I've created a nginx container using official image:
docker run --name nginx -d -v /root/nginx/conf:/etc/nginx/conf.d -p 443:443 -p 80:80 nginx
After that:
# netstat -tupln | grep 443
tcp6 0 0 :::443 :::* LISTEN 3482/docker-proxy
# netstat -tupln | grep 80
tcp6 0 0 :::80 :::* LISTEN 3489/docker-proxy
tcp6 0 0 :::8080 :::* LISTEN 1009/java
Nginx configuration is:
upstream eighty {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name eighty.domain.com;
    location / {
        proxy_pass http://eighty;
    }
}
I've checked that I'm able to connect to this server with # curl http://127.0.0.1:8080:
<html><head><meta http-equiv='refresh'
content='1;url=/login?from=%2F'/><script>window.location.replace('/login?from=%2F');</script></head><body
style='background-color:white; color:white;'>
...
It seems to be running well; however, when I try to access it from my browser, nginx returns a 502 Bad Gateway response.
I suspect it is a visibility problem between a port opened by a non-containerized process and a container. Can a container establish a connection to a port opened by another, non-container process?
EDIT
Logs with upstream { server 127.0.0.1:8080; }:
2016/07/13 09:06:53 [error] 5#5: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 62.57.217.25, server: eighty.domain.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "eighty.domain.com"
62.57.217.25 - - [13/Jul/2016:09:06:53 +0000] "GET / HTTP/1.1" 502 173 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" "-"
Logs with upstream { server 0.0.0.0:8080; }:
62.57.217.25 - - [13/Jul/2016:09:00:30 +0000] "GET / HTTP/1.1" 502 173 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" "-" 2016/07/13 09:00:30 [error] 5#5: *1 connect() failed (111: Connection refused) while connecting to upstream, client:
62.57.217.25, server: eighty.domain.com, request: "GET / HTTP/1.1", upstream: "http://0.0.0.0:8080/", host: "eighty.domain.com" 2016/07/13 09:00:32 [error] 5#5: *3 connect() failed (111: Connection refused) while connecting to upstream, client: 62.57.217.25, server: eighty.domain.com, request: "GET / HTTP/1.1", upstream: "http://0.0.0.0:8080/", host: "eighty.domain.com"
62.57.217.25 - - [13/Jul/2016:09:00:32 +0000] "GET / HTTP/1.1" 502 173 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" "-"
Any ideas?
The Problem
Localhost is a bit tricky when it comes to containers. Within a docker container, localhost points to the container itself.
This means, with an upstream like this:
upstream foo {
    server 127.0.0.1:8080;
}
or
upstream foo {
    server 0.0.0.0:8080;
}
you are telling nginx to pass your request to the local host.
But in the context of a Docker container, localhost (and the corresponding IP addresses) points to the container itself:
by addressing 127.0.0.1 you will never reach your host machine unless your container is on the host network.
Solutions
Host Networking
You can choose to run nginx on the same network as your host:
docker run --name nginx -d -v /root/nginx/conf:/etc/nginx/conf.d --net=host nginx
Note that you do not need to expose any ports in this case.
This works, though you lose the benefits of Docker networking. If you have multiple containers that should communicate over the Docker network, this approach can be a problem. If you just want to deploy nginx with Docker and do not need any advanced Docker network features, it is fine.
Access the host's IP address
Another approach is to reconfigure your nginx upstream directive to connect directly to your host machine by using its IP address:
upstream foo {
    # insert your host's IP here
    server 192.168.99.100:8080;
}
The container will now go through the network stack and reach your host correctly.
You can also use your DNS name if you have one; make sure Docker knows about your DNS server.
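If you are not sure which IP address to use, one common trick (a sketch, assuming the container sits on the default bridge network and the image ships the iproute2 tools) is to look at the container's default gateway, which is the host's docker0 address:
# from inside the nginx container: the default gateway is the host's docker0 address
ip route | awk '/default/ {print $3}'
# on the host, the same address is shown by:
ip -4 addr show docker0
Note that the host service must listen on that interface (or on 0.0.0.0); a service bound only to 127.0.0.1 on the host will still be unreachable from the container.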
For me, this line helped: proxy_set_header Host $http_host;
server {
    listen 80;
    server_name localhost;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_redirect off;
        proxy_pass http://myserver;
    }
}
Just to complete the other answers: I'm using a Mac for development, and using host.docker.internal directly in the upstream worked for me, with no need to pass the host's IP address. Here is the config of the nginx proxy:
events { worker_connections 1024; }
http {
    upstream app1 {
        server host.docker.internal:81;
    }
    upstream app2 {
        server host.docker.internal:82;
    }
    server {
        listen 80;
        server_name app1.com;
        location / {
            proxy_pass http://app1;
        }
    }
    server {
        listen 80;
        server_name app2.com;
        location / {
            proxy_pass http://app2;
        }
    }
}
As you can see, I used different ports for the different apps behind the nginx proxy: port 81 for app1 and port 82 for app2, and both app1 and app2 have their own nginx containers:
For app1:
docker run --name nginx-app1 -d -p 81:80 nginx
For app2:
docker run --name nginx-app2 -d -p 82:80 nginx
Also, please refer to this link for more details:
docker doc for mac
What you can do is configure proxy_pass so that, from the container's perspective, the address points to your real host.
To get the host address from the container's perspective, you can do the following on Windows with Docker 18.03 (or more recent):
Run a shell in the container from the host, where the image name is nginx (ash works on the Alpine-based image):
docker run -it nginx /bin/ash
Then run inside container
/ # nslookup host.docker.internal
Name: host.docker.internal
Address 1: 192.168.65.2
192.168.65.2 is the host's IP, not the bridge IP as in spinus's accepted answer.
I am using host.docker.internal here:
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker for Windows.
Then you can change nginx config to:
proxy_pass http://192.168.65.2:{your_app_port};
and it should work fine.
Remember to use the same port that your local application runs on.
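Since that internal IP can change, it is also possible (a sketch, assuming Docker Desktop 18.03 or newer and an example app port of 8080) to put the DNS name straight into the config instead of the numeric address:
location / {
    # host.docker.internal resolves to the host from inside Docker Desktop containers
    proxy_pass http://host.docker.internal:8080;
}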
# the upstream component nginx needs to connect to
upstream django {
    # server unix:///path/to/your/mysite/mysite.sock; # for a file socket
    server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}

location / {
    uwsgi_pass django;
    include /path/to/your/mysite/uwsgi_params; # the uwsgi_params file you installed
}
complete reference: https://uwsgi-docs.readthedocs.io/en/latest/tutorials/Django_and_nginx.html
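Note that if nginx itself runs in a container, 127.0.0.1:8001 refers to the nginx container, not to the host or to a separate Django container. A sketch, assuming a hypothetical uWSGI container named django-app on the same user-defined network as nginx:
upstream django {
    # "django-app" is a hypothetical container name on a shared user-defined network
    server django-app:8001;
}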
nginx.sh
ip=$(ifconfig | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1' | head -n 1)
docker run --name nginx --add-host="host:${ip}" -p 80:80 -d nginx
nginx.conf
location / {
...
proxy_pass http://host:8080/;
}
It works for me.
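To check that the --add-host entry actually made it into the container, you can look it up from inside (a quick sanity check, assuming the container is named nginx as above):
docker exec nginx grep -w host /etc/hosts
docker exec nginx getent hosts host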
I had this issue, and it turned out the Docker container was not starting up because of a permissions problem.
In my case running
docker-compose ps
showed that the container had not started and had exited with status 1. It turned out the permissions had been lost when migrating to a new machine. Adjusting the permissions on the parent directory to a known staff user fixed the problem for me, and I was then able to start the Docker service, whereas previously I was getting
nginx_1_c18a7f6f7d6d | chown: /var/www/html: Operation not permitted
I'm configuring Nginx on my CentOS 7 machine. I can run nginx from the command line but not through the service. I appreciate any help.
Run Nginx through command
When I start nginx with
$ sudo nginx
I can see the port is listening, and I've connected to nginx with lynx successfully.
$ netstat -nap | grep 8000
(No info could be read for "-p": geteuid()=1000 but you should be root.)
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN -
No issue with wget either:
$ wget http://127.0.0.1:8000
--2016-04-05 13:33:01-- http://127.0.0.1:8000/
Connecting to 127.0.0.1:8000... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html.2’
[ <=> ] 11 --.-K/s in 0s
2016-04-05 13:33:01 (1.53 MB/s) - ‘index.html.2’ saved [11]
Run Nginx through Systemd
However, when I start nginx through systemd:
$ sudo systemctl start nginx
Nothing is listening on port 8000.
$ netstat -nap | grep 8000
(No info could be read for "-p": geteuid()=1000 but you should be root.)
This is the result of wget
$ wget http://127.0.0.1:8000
--2016-04-05 13:34:52-- http://127.0.0.1:8000/
Connecting to 127.0.0.1:8000... failed: Connection refused.
I've checked the error log (/var/log/nginx/error.log):
Apr 5 12:57:24 localhost systemd: Starting The NGINX HTTP and reverse proxy server...
Apr 5 12:57:24 localhost nginx: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Apr 5 12:57:24 localhost nginx: nginx: configuration file /etc/nginx/nginx.conf test is successful
Apr 5 12:57:24 localhost systemd: Failed to read PID from file /var/run/nginx.pid: Invalid argument
Apr 5 12:57:24 localhost systemd: Started The NGINX HTTP and reverse proxy server.
The config file passes the test:
$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
This is the main config file /etc/nginx/nginx.conf
$ cat /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
This is the nginx config file /etc/nginx/conf.d/test_nginx.conf
# the upstream component nginx needs to connect to
upstream django {
    server 0.0.0.0:8001;
}

# configuration of the server
server {
    # the port your site will be served on
    listen 8000;
    # the domain name it will serve for
    server_name 0.0.0.0; # substitute your machine's IP address or FQDN
    charset utf-8;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    location /static {
        alias /src/frontend/DjangoServer/static;
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include /src/frontend/DjangoServer/uwsgi_params; # the uwsgi_params file you installed
    }
}
This is the nginx systemd config file
$ cat /etc/systemd/system/nginx.service
[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network.target remote-fs.target nss-lookup.target
[Service]
Type=forking
PIDFile=/var/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true
[Install]
WantedBy=multi-user.target
Probably SELinux is not allowing nginx to read the configs under /etc/nginx/sites-enabled/; I had the same problem after copying the configuration from another site.
chcon -R -t httpd_config_t /etc/nginx
should fix it. If not, check /var/log/audit to see if there is any other SELinux-related problem.
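A quick way to confirm whether SELinux is the culprit (a sketch, assuming the audit tools are installed) is to look for recent AVC denials involving nginx:
# show recent SELinux denials involving nginx
ausearch -m avc -ts recent | grep nginx
# or, without ausearch:
grep denied /var/log/audit/audit.log | grep nginx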
This answer is specific to Docker.
I experienced the same issue (nginx runs from the command line but not through the service) on Docker (Debian).
The cause is that daemon tools (init.d, service, systemd) don't work in Docker by default. On Linux, the init process has to have PID 1, but Docker doesn't run those tools as PID 1. PID 1 is occupied by dumb-init -- sh -c ...... which executes the CMD statement in your Dockerfile. That was why my nginx didn't start as a service.
You can either 'hack' Docker to use systemd, which I don't think is a recommended practice (at least according to what I've read on SO), or you can chain the nginx start command into your CMD, as in:
CMD ["sh", "-c", "service nginx start && <your original command>"]
I have noticed an issue with the nginx Docker image that does not occur when nginx runs on the host machine (installed via apt-get). Here is how to reproduce my issue:
solution A: 'nc' on container 1, 'nginx' on container 2, 'curl' on host
docker stop $(docker ps -aq)
docker rm $(docker ps -aq)
'nc' on container 1
docker run -ti --name agitated_stallman ubuntu:14.04 bash
nc -l 4545
'nginx' on container 2
LOLPATH=$HOME/testdocker
echo $LOLPATH
mkdir -p $LOLPATH
cd $LOLPATH
subl mple.conf
.
server {
    listen 80;
    root /var/www/html;
    location /roz {
        proxy_pass http://neocontainer:4545;
        proxy_set_header Host $host;
    }
}
.
docker run --link agitated_stallman:neocontainer -v $LOLPATH/mple.conf:/etc/nginx/sites-available/default -p 12345:80 nginx:1.9
'curl' on host
sudo apt-get install curl
curl http://localhost:12345/roz
ERROR response from 'nginx':
2016/03/04 19:59:18 [error] 8#8: *3 open() "/usr/share/nginx/html/roz" failed (2: No such file or directory), client: 172.17.0.1, server: localhost, request: "GET /roz HTTP/1.1", host: "localhost:12345"
172.17.0.1 - - [04/Mar/2016:19:59:18 +0000] "GET /roz HTTP/1.1" 404 169 "-" "curl/7.45.0" "-"
solution B: 'nginx' on host, 'nc' on host, 'curl' on host
'nginx' on host
sudo apt-get install nginx
sudo subl /etc/nginx/sites-available/default
.
server {
    listen 80;
    root /var/www/html;
    location /roz {
        proxy_pass http://localhost:4646;
        proxy_set_header Host $host;
    }
}
.
sudo service nginx restart
'nc' on host
nc -l 4646
'curl' on host
sudo apt-get install curl
curl http://localhost:80/roz
SUCCESS response from 'nc':
GET /roz HTTP/1.0
Host: localhost
Connection: close
User-Agent: curl/7.45.0
Accept: */*
In short: run the nginx container with -v $LOLPATH/mple.conf:/etc/nginx/conf.d/default.conf
The nginx:1.9 Docker image currently uses the nginx package from nginx's own repository, not from the official Debian repository. If you examine that package, you'll find that its /etc/nginx/nginx.conf only includes /etc/nginx/conf.d/*.conf, and that the package ships with a pre-installed /etc/nginx/conf.d/default.conf:
server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    # other not important stuff
    # ...
}
So your config is not used at all, which explains the open() "/usr/share/nginx/html/roz" failed error.
When you install nginx directly on the host, you are probably using the official Debian repository, whose main config file is different: it includes /etc/nginx/sites-enabled/*, where default is a symlink to sites-available/default, so your config is actually used.
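So the fix from the answer above, applied to the original scenario A command, would look something like this (same names and paths as in the question):
docker run --link agitated_stallman:neocontainer \
  -v $LOLPATH/mple.conf:/etc/nginx/conf.d/default.conf \
  -p 12345:80 nginx:1.9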