I want to set up nginx as a forward proxy - much like Squid might work.
This is my server block:
server {
    listen 3128;
    server_name localhost;

    location / {
        resolver 8.8.8.8;
        proxy_pass http://$http_host$uri$is_args$args;
    }
}
This is the curl command I use to test, and it works the first time, maybe even the second time.
curl -s -D - -o /dev/null -x "http://localhost:3128" http://storage.googleapis.com/my.appspot.com/test.jpeg
The corresponding nginx access log is
172.23.0.1 - - [26/Feb/2021:12:38:59 +0000] "GET http://storage.googleapis.com/my.appspot.com/test.jpeg HTTP/1.1" 200 2296040 "-" "curl/7.64.1" "-"
However, on repeated requests I start getting these errors in my nginx logs (after, say, the 2nd or 3rd attempt):
2021/02/26 12:39:49 [crit] 31#31: *4 connect() to [2c0f:fb50:4002:804::2010]:80 failed (99: Address not available) while connecting to upstream, client: 172.23.0.1, server: localhost, request: "GET http://storage.googleapis.com/omgimg.appspot.com/test.jpeg HTTP/1.1", upstream: "http://[2c0f:fb50:4002:804::2010]:80/my.appspot.com/test.jpeg", host: "storage.googleapis.com"
2021/02/26 12:39:49 [warn] 31#31: *4 upstream server temporarily disabled while connecting to upstream, client: 172.23.0.1, server: localhost, request: "GET http://storage.googleapis.com/my.appspot.com/test.jpeg HTTP/1.1", upstream: "http://[2c0f:fb50:4002:804::2010]:80/my.appspot.com/test.jpeg", host: "storage.googleapis.com"
What might be causing these issues after just a handful of requests? (curl still fetches the URL fine)
The DNS resolver was returning both IPv4 and IPv6 addresses, and the IPv6 addresses were causing the connection failures to the upstream servers.
Switching IPv6 lookups off made those errors disappear:
resolver 8.8.8.8 ipv6=off;
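For completeness, this is the forward-proxy server block from the question with the fix applied; only the resolver line changes:

server {
    listen 3128;
    server_name localhost;

    location / {
        # return only A records so nginx never tries an unreachable IPv6 upstream
        resolver 8.8.8.8 ipv6=off;
        proxy_pass http://$http_host$uri$is_args$args;
    }
}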
I'm trying to use nginx as a load balancer / proxy server in front of a series of Tomcat servers. This is my current nginx configuration.
server {
    listen 80;
    server_name _;
    rewrite ^ https://$http_host$request_uri? permanent;
}

server {
    listen 443;
    resolver 127.0.0.11 valid=5s;

    ssl on;
    ssl_certificate /etc/nginx/certs/default.crt;     # path to your cacert.pem
    ssl_certificate_key /etc/nginx/certs/default.key; # path to your privkey.pem
    ssl_verify_client off;

    server_name localhost;

    fastcgi_param HTTPS on;
    fastcgi_param HTTP_SCHEME https;

    charset utf-8;
    client_max_body_size 200M;

    set $app https://app:8443;
    set $auth https://auth:8443/authentication/;
    set $discovery https://discovery:8443/discovery/;

    location / {
        proxy_pass $app;
    }

    location /authentication {
        proxy_pass $auth;
    }

    location /discovery {
        proxy_pass $discovery;
        proxy_set_header Host $http_host;
        proxy_set_header X_FORWARDED_PROTO https;
    }
}
This is dockerized, if it makes any difference, but provisioning fails to resolve correctly while docs works fine. The only difference between docs and provisioning is that 'docs' serves pure HTML files via Tomcat (the standard Tomcat 7 /docs/), while provisioning is an actual Java servlet (JAX-RS/Spring, etc.).
If I hit the container directly it works as expected, while if I hit the same endpoint via nginx it fails.
My docker-compose configuration for reference.
version: '2'
services:
  db:
    image: db:nodata
    expose:
      - 5433
  zk:
    image: zookeeper
    ports:
      - 2181:2181
  discovery:
    image: services_discovery:latest
    env_file: docker_environment
    expose:
      - 8443
    ports:
      - 8443:8443
    links:
      - db
      - zk
  app:
    image: tomcat-jsse-ssl:7-jdk8
    volumes:
      - ./app/www/:/usr/local/tomcat7/webapps/ROOT/
    expose:
      - 8443
    ports:
      - 8444:8443
  auth:
    image: tomcat-jsse-ssl:7-jdk8
    volumes:
      - ./authentication/www/authentication/:/usr/local/tomcat7/webapps/authentication/
    expose:
      - 8443
  proxy:
    build: ./proxy/
    depends_on:
      - 'auth'
      - 'app'
      - 'discovery'
    ports:
      - 443:443
    restart: always
With the images running I can resolve the following URLs just fine.
https://localhost:8443/discovery/ready
https://localhost:8444/
i.e. both containers are running fine:
https://localhost/ loaded via nginx works fine.
https://localhost/authentication/ loaded via nginx works fine.
https://localhost/discovery/ready ==> 404.
Server Logs:
proxy_1 | 172.20.0.1 - - [24/Apr/2017:00:04:28 +0000] "GET /discovery/ready HTTP/1.1" 404 400 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36"
proxy_1 | 172.20.0.1 - - [24/Apr/2017:00:04:43 +0000] "GET /discovery/api/swagger.json HTTP/1.1" 404 400 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36"
proxy_1 | 172.20.0.1 - - [24/Apr/2017:00:04:57 +0000] "GET /discovery/ready HTTP/1.1" 404 400 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36"
Discovery Tomcat access log (when accessed directly):
172.20.0.1 - - [23/Apr/2017:00:02:38 +0000] "GET /discovery/ready HTTP/1.1" 200 70
172.20.0.2 - - [24/Apr/2017:00:04:28 +0000] "GET /discovery/ HTTP/1.0" 404 949
172.20.0.2 - - [24/Apr/2017:00:04:43 +0000] "GET /discovery/ HTTP/1.0" 404 949
172.20.0.2 - - [24/Apr/2017:00:04:57 +0000] "GET /discovery/ HTTP/1.0" 404 949
The first entry is from when I hit the server directly via https://localhost:8443/discovery/ready; everything else is from when nginx forwards the request to the server. For some reason it's not translating the request correctly.
Any thoughts/suggestions would be appreciated.
Note: I simplified my example/config for the purposes of this question and any references to "provisioning" are now "discovery".
UPDATE: I figured out why it's breaking for the 'servlet'. It's actually breaking consistently: it strips away all of the URL except the base.
For example,
https://localhost/authentication?q=dummy
becomes
172.20.0.3 - - [24/Apr/2017:03:22:06 +0000] "GET /authentication/ HTTP/1.0" 200 28
Note that the query parameters are stripped away.
The nginx documentation says that you are responsible for rebuilding the URL yourself:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass
So you can capture the rest of the URI with a regex and append it in the proxy_pass directive:
location ~* ^/discovery/(.*) {
    proxy_pass $discovery$1$is_args$args;
    .... other configs ....
}
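Assuming you want the same behaviour for the authentication service, a sketch of the equivalent location (based on the $auth variable defined in the question, not a verified config) would be:

# replaces the plain "location /authentication" prefix block;
# $auth already ends in /authentication/, so only the captured remainder is appended
location ~* ^/authentication/?(.*) {
    proxy_pass $auth$1$is_args$args;
}

The location / block can stay as it is: $app contains no URI part, and when a variable-based proxy_pass has no URI, nginx forwards the original request URI unchanged, which matches the behaviour described on the documentation page linked above.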
In nginx, to drop a connection I can return 444. However, there is a problem with that, in my opinion: 444 doesn't silently drop the connection but closes it gracefully, so the tools these spammers use rapidly retry the request:
149.56.28.239 - - [22/Sep/2016:20:33:18 +0200] "PROPFIND /webdav/ HTTP/1.1" 444 0 "-" "WEBDAV Client"
149.56.28.239 - - [22/Sep/2016:20:33:18 +0200] "PROPFIND /webdav/ HTTP/1.1" 444 0 "-" "WEBDAV Client"
Is there a way to abort the TCP connection ungracefully (as if my server had suddenly been unplugged from the net) so that the requester keeps waiting? Are there any drawbacks/problems with that, and is it possible with nginx?
To drop requests without a Host header in nginx, you use the following config:
server {
    listen 80;
    return 444;
}
Is there a way to handle some of these requests the same way, for example if the requested URL matches some regex?
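To illustrate what I mean, something like this (just a sketch; the /webdav pattern is only an example taken from the log lines above):

server {
    listen 80;

    # close the connection without sending a response for matching paths
    location ~* ^/webdav {
        return 444;
    }
}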
I have a service listening on port 8080. It is not a container.
Then I created an nginx container using the official image:
docker run --name nginx -d -v /root/nginx/conf:/etc/nginx/conf.d -p 443:443 -p 80:80 nginx
After that:
# netstat -tupln | grep 443
tcp6 0 0 :::443 :::* LISTEN 3482/docker-proxy
# netstat -tupln | grep 80
tcp6 0 0 :::80 :::* LISTEN 3489/docker-proxy
tcp6 0 0 :::8080 :::* LISTEN 1009/java
Nginx configuration is:
upstream eighty {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name eighty.domain.com;

    location / {
        proxy_pass http://eighty;
    }
}
I've checked that I'm able to connect to this server with # curl http://127.0.0.1:8080
<html><head><meta http-equiv='refresh'
content='1;url=/login?from=%2F'/><script>window.location.replace('/login?from=%2F');</script></head><body
style='background-color:white; color:white;'>
...
It seems to be running well; however, when I try to access it from my browser, nginx returns a 502 Bad Gateway response.
I figure it may be a visibility problem between a port opened by a non-containerized process and a container. Can a container establish a connection to a port opened by a non-container process?
EDIT
Logs with upstream { server 127.0.0.1:8080; }:
2016/07/13 09:06:53 [error] 5#5: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 62.57.217.25, server: eighty.domain.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "eighty.domain.com"
62.57.217.25 - - [13/Jul/2016:09:06:53 +0000] "GET / HTTP/1.1" 502 173 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" "-"
Logs with upstream { server 0.0.0.0:8080; }:
62.57.217.25 - - [13/Jul/2016:09:00:30 +0000] "GET / HTTP/1.1" 502 173 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" "-"
2016/07/13 09:00:30 [error] 5#5: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 62.57.217.25, server: eighty.domain.com, request: "GET / HTTP/1.1", upstream: "http://0.0.0.0:8080/", host: "eighty.domain.com"
2016/07/13 09:00:32 [error] 5#5: *3 connect() failed (111: Connection refused) while connecting to upstream, client: 62.57.217.25, server: eighty.domain.com, request: "GET / HTTP/1.1", upstream: "http://0.0.0.0:8080/", host: "eighty.domain.com"
62.57.217.25 - - [13/Jul/2016:09:00:32 +0000] "GET / HTTP/1.1" 502 173 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" "-"
Any ideas?
The Problem
Localhost is a bit tricky when it comes to containers. Within a docker container, localhost points to the container itself.
This means, with an upstream like this:
upstream foo {
    server 127.0.0.1:8080;
}
or
upstream foo {
    server 0.0.0.0:8080;
}
you are telling nginx to pass your request to the local host.
But in the context of a Docker container, localhost (and the corresponding IP addresses) points to the container itself:
by addressing 127.0.0.1 you will never reach your host machine unless your container is on the host network.
Solutions
Host Networking
You can choose to run nginx on the same network as your host:
docker run --name nginx -d -v /root/nginx/conf:/etc/nginx/conf.d --net=host nginx
Note that you do not need to expose any ports in this case.
This works, though you lose the benefit of Docker networking. If you have multiple containers that should communicate through the Docker network, this approach can be a problem. If you just want to deploy nginx with Docker and do not need any advanced Docker networking features, this approach is fine.
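With host networking, the original upstream pointing at 127.0.0.1:8080 works unchanged, since the container now shares the host's network stack. If you deploy with docker-compose instead of docker run, a rough equivalent would be (a sketch, not taken from the question):

version: '2'
services:
  nginx:
    image: nginx
    volumes:
      - /root/nginx/conf:/etc/nginx/conf.d
    # share the host's network stack; port mappings are not needed here
    network_mode: host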
Access the host's IP address
Another approach is to reconfigure your nginx upstream directive to connect directly to your host machine by using its IP address:
upstream foo {
    # insert your host's IP here
    server 192.168.99.100:8080;
}
The container will now go through the network stack and reach your host correctly.
You can also use your host's DNS name if you have one; make sure Docker knows about your DNS server.
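For example (myhost.example.com is just a placeholder; the name must be resolvable from inside the container, since nginx resolves upstream server names when it loads the configuration):

upstream foo {
    # placeholder DNS name for the Docker host
    server myhost.example.com:8080;
}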
For me, this line helped: proxy_set_header Host $http_host;
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_redirect off;
        proxy_pass http://myserver;
    }
}
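This snippet assumes an upstream (or resolvable host) called myserver; a matching definition might look like the following sketch, with the backend address as a placeholder:

upstream myserver {
    # placeholder backend address and port
    server 192.168.99.100:8080;
}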
To complete the other answers: I'm using a Mac for development, and using host.docker.internal directly in the upstream worked for me, with no need to pass the host's IP address. Here is the config of the nginx proxy:
events { worker_connections 1024; }

http {
    upstream app1 {
        server host.docker.internal:81;
    }

    upstream app2 {
        server host.docker.internal:82;
    }

    server {
        listen 80;
        server_name app1.com;

        location / {
            proxy_pass http://app1;
        }
    }

    server {
        listen 80;
        server_name app2.com;

        location / {
            proxy_pass http://app2;
        }
    }
}
As you can see, I used different ports for the different apps behind the nginx proxy: port 81 for app1 and port 82 for app2. Both app1 and app2 have their own nginx containers (note that the containers need distinct names):
For app1:
docker run --name nginx-app1 -d -p 81:80 nginx
For app2:
docker run --name nginx-app2 -d -p 82:80 nginx
Also, please refer to this link for more details:
docker doc for mac
What you can do is configure proxy_pass so that, from the container's perspective, the address points to your real host.
To get the host's address from the container's perspective, you can do the following on Windows with Docker 18.03 (or more recent):
Run a shell in a container from the host, where the image name is nginx (ash works on an Alpine-based image):
docker run -it nginx /bin/ash
Then run inside the container:
/ # nslookup host.docker.internal
Name: host.docker.internal
Address 1: 192.168.65.2
192.168.65.2 is the host's IP, not the bridge IP as in spinus's accepted answer.
I am using host.docker.internal here:
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker for Windows.
Then you can change the nginx config to:
proxy_pass http://192.168.65.2:{your_app_port};
and it should work fine.
Remember to use the same port that your local application runs on.
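Alternatively, you can point proxy_pass at the DNS name instead of the hard-coded IP. A sketch, assuming the local application listens on port 8080 (replace with your actual port):

location / {
    # host.docker.internal resolves to the host from inside the container
    proxy_pass http://host.docker.internal:8080;
}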
# the upstream component nginx needs to connect to
upstream django {
    # server unix:///path/to/your/mysite/mysite.sock; # for a file socket
    server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}

location / {
    uwsgi_pass django;
    include /path/to/your/mysite/uwsgi_params; # the uwsgi_params file you installed
}
complete reference: https://uwsgi-docs.readthedocs.io/en/latest/tutorials/Django_and_nginx.html
nginx.sh
ip=$(ifconfig | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1' | head -n 1)
docker run --name nginx --add-host="host:${ip}" -p 80:80 -d nginx
nginx.conf
location / {
    ...
    proxy_pass http://host:8080/;
}
It works for me.
I had this issue, and it turned out to be a problem with the Docker container not starting up due to a permissions issue.
In my case running
docker-compose ps
showed that the container had not started and had exited with status 1. It turns out the permissions had been lost when migrating to a new machine. Adjusting the permissions on the parent directory to a known staff user fixed the problem for me, and I was then able to start the Docker service, whereas previously I was getting
nginx_1_c18a7f6f7d6d | chown: /var/www/html: Operation not permitted
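For reference, the kind of fix described above looks roughly like this (the path and the user/group are placeholders for whatever directory is bind-mounted into the container):

# give the bind-mounted parent directory to a user/group the container can work with
sudo chown -R youruser:staff /path/to/parent-directory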
I have a live HTTPS WordPress site running. Recently I tried to set it up locally with WAMP to test out some other themes, Git, and other things, but the site wouldn't load, returning ERR_CONNECTION_REFUSED errors.
The live site is hosted by Siteground, running with SSL (HTTPS).
I have set up the database correctly. (I'm sure, because it used to return a database connection error and I then fixed it.)
I used to get Invalid command 'AddOutputFilterByType', 'Header' errors in the Apache error logs. I then enabled mod_filter and mod_headers in WAMP, and now I still get the ERR_CONNECTION_REFUSED error in the browser, but the error log shows no new messages.
This is the only local site that won't run. My other WAMP sites run with no issues.
The only clue I have is the Apache access log, pasted below:
127.0.0.1 - - [17/May/2016:16:04:41 +1000] "POST /wp-cron.php?doing_wp_cron=1463465081.0992779731750488281250 HTTP/1.0" 200 -
127.0.0.1 - - [17/May/2016:16:04:28 +1000] "GET / HTTP/1.1" 301 -
127.0.0.1 - - [17/May/2016:16:04:27 +1000] "GET / HTTP/1.1" 301 -
127.0.0.1 - - [17/May/2016:16:04:41 +1000] "POST /wp-cron.php?doing_wp_cron=1463465081.0982780456542968750000 HTTP/1.0" 200 25
127.0.0.1 - - [17/May/2016:16:30:11 +1000] "GET / HTTP/1.1" 200 435
127.0.0.1 - - [17/May/2016:16:30:13 +1000] "GET / HTTP/1.1" 200 435
I have tried disabling wp-cron, but it didn't fix the problem.
Thanks all in advance.