I have two Amazon EC2 instances. Let me call them X and Y. I have nginx installed on both of them. Y has resque running on port 3000. Only X has a public IP and domain example.com. Suppose private IP of Y is 15.0.0.10
What I want is for all requests to come to X, and only if the request URL matches the pattern /resque should the request be handled by Y at localhost:3000/overview, which is the resque web interface. It seems this can be done using proxy_pass in the nginx config.
So, in nginx.conf, I have added the following:
location /resque {
    real_ip_header X-Forwarded-For;
    set_real_ip_from 0.0.0.0/0;
    allow all;
    proxy_pass http://15.0.0.10:3000/overview;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
But now, when I visit http://example.com/resque from my web browser, it shows 502 Bad Gateway.
In /var/log/nginx/error.log on X:
2014/02/27 10:27:16 [error] 12559#0: *2588 connect() failed (111: Connection
refused) while connecting to upstream, client: 123.201.181.82, server: _,
request: "GET /resque HTTP/1.1", upstream: "http://15.0.0.10:3000/overview",
host: "example.com"
Any suggestions on what could be wrong and how to fix it?
Turns out that the server should be listening on 0.0.0.0 if it needs to be reachable via the instance's IP.
So to solve my problem, I stopped the resque server that was listening on 127.0.0.1:3000 and restarted it bound to 0.0.0.0:3000. Everything else remains the same as above, and it works. Thanks.
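A quick way to verify which address a service is bound to, and to test reachability from X before blaming nginx (a diagnostic sketch; the exact flag for changing resque's bind address depends on how you start it):

```shell
# On Y: see which address port 3000 is bound to
ss -tlnp | grep :3000        # or: netstat -tlnp | grep :3000
# 127.0.0.1:3000 -> loopback only; connect() from X is refused
# 0.0.0.0:3000   -> all interfaces; reachable from X at 15.0.0.10

# From X: verify connectivity to Y directly
curl -I http://15.0.0.10:3000/overview
```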
For reference: Curl amazon EC2 instance
Issue: I have an nginx reverse proxy installed on an Ubuntu server with a private IP only. The purpose of this reverse proxy is to route incoming requests to various third-party WebSockets and REST APIs. Furthermore, to distribute the load, I have an HTTP load balancer sitting in front of the nginx proxy server.
So this is how it looks technically:
IncomingRequest --> InternalLoadBalancer(Port:80) --> NginxReverseProxyServer(80) --> ThirdParyAPIs(Port:443) & WebSockets(443)
The problem I have is that nginx does not reverse-proxy correctly to the REST APIs and gives a 502 error, but it works successfully for WebSockets.
Below is my /etc/nginx/sites-available/default config file (no changes made elsewhere):
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 80;

    location /binance-ws/ {
        # Web Socket Connection
        ####################### THIS WORKS FINE
        proxy_pass https://stream.binance.com:9443/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }

    location /binance-api/ {
        # Rest API Connection
        ##################### THIS FAILS WITH 502 ERROR
        proxy_pass https://api.binance.com/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }
}
I have even tried https://api.binance.com:443/, but no luck.
The websocket connection works fine:
wscat -c ws://LOADBALANCER-DNS/binance-ws/ws/btcusdt#aggTrade
However, the one below fails:
curl http://LOADBALANCER-DNS/binance-api/api/v3/time
When I check the nginx logs for the 502 error, this is what I see:
[error] 14339#14339: *20 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 10.5.2.187, server: , request: "GET /binance-api/api/v3/time HTTP/1.1", upstream: "https://52.84.150.34:443/api/v3/time", host: "internal-prod-nginx-proxy-xxxxxx.xxxxx.elb.amazonaws.com"
This is the actual REST API call I am trying to replicate through nginx:
curl https://api.binance.com/api/v3/time
I have gone through many similar posts but am unable to find where I am going wrong. I'd appreciate your help!
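One thing worth checking with a TLS upstream like this is SNI and the Host header: by default nginx does not send the upstream's hostname in the TLS handshake, and forwarding the load balancer's $host upstream can also confuse the endpoint. A hedged sketch of the directives involved (not a confirmed fix for this exact error, just the SNI-related knobs nginx provides):

```nginx
location /binance-api/ {
    proxy_pass https://api.binance.com/;
    proxy_http_version 1.1;
    # Send SNI so the upstream's TLS endpoint selects the right certificate
    proxy_ssl_server_name on;
    proxy_ssl_name api.binance.com;
    # The upstream expects its own hostname, not $host from the load balancer
    proxy_set_header Host api.binance.com;
}
```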
I have nginx running in Docker, acting as an HTTPS proxy. I have a lot of other services running in other Docker containers, like GitLab, and nginx seems to work fine as a web proxy for them.
Today I set up a WordPress container and used the config below in nginx:
#
# A virtual host using mix of IP-, name-, and port-based configuration
#
server {
    listen 80;
    listen 443 ssl;
    server_name x.example.com;

    ssl on;
    ssl_certificate /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    if ($scheme = http) {
        return 301 https://$server_name$request_uri;
    }

    location / {
        proxy_pass http://172.19.0.3;
        proxy_set_header Accept-Encoding "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # these two lines here
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
WordPress is running with host port 8080 mapped to container port 80, i.e. I can access the site perfectly via http://x.example.com:8080. But when I try to access it over HTTPS, i.e. https://x.example.com, nginx gives me a 504 Gateway Time-out.
docker logs -f nginx-proxy
shows the below log line.
2018/04/23 21:52:21 [error] 28#28: *3202 upstream timed out (110: Connection timed out) while connecting to upstream, client: 37.201.224.236, server: x.example.com, request: "GET / HTTP/1.1", upstream: "http://172.19.0.3:80/", host: "x.example.com"
37.201.224.236 - - [23/Apr/2018:21:52:21 +0000] "GET / HTTP/1.1" 504 585 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36" "-"
Can someone please help me fix this issue? WordPress is running on a different Docker network, since the container was created with docker-compose.yml. Is that the reason nginx is not able to proxy to it?
I had a similar problem with a local upstream. It was pointing to localhost, which resolved to both IPv4 and IPv6, while Docker made its port bindings only on IPv4. When a request from the nginx proxy used IPv6, it timed out (after the connect timeout, default 60s), but the retry succeeded (because it used IPv4).
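If that is the cause, one sketch of a workaround is to point the upstream at the IPv4 loopback literal instead of the name localhost, so no AAAA lookup is involved (the upstream name here is hypothetical):

```nginx
upstream app {
    # 127.0.0.1 instead of localhost: never resolves to ::1,
    # which Docker's port binding may not cover
    server 127.0.0.1:8080;
}
```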
My nginx container was not able to communicate with the Docker network created for WordPress. I resolved the issue using the
docker network connect
command.
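For reference, the commands look roughly like this (the container and network names are hypothetical; docker-compose typically names the network <project>_default):

```shell
# Find the network docker-compose created for wordpress
docker network ls

# Attach the running nginx container to that network
docker network connect wordpress_default nginx-proxy

# Confirm both containers now appear on the network
docker network inspect wordpress_default
```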
I use nginx in Docker; this is my nginx config:
server {
    listen 80;
    server_name saber;

    location / {
        root /usr/share/nginx;
        index index.html;
    }

    location /saber {
        proxy_pass http://localhost:8080;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_connect_timeout 90;
    }
}
When I open "http://localhost/saber/blog/getBlog.do" in the browser, it gives me a 502 error, and nginx's error.log has a new entry:
2017/07/09 05:16:18 [warn] 5#5: *1 upstream server temporarily disabled while connecting to upstream, client: 172.17.0.1, server: saber, request: "GET /saber/blog/getBlog.do HTTP/1.1", upstream: "http://127.0.0.1:8080/saber/blog/getBlog.do", host: "localhost"
I can confirm that "http://127.0.0.1:8080/saber/blog/getBlog.do" responds successfully in the browser.
I searched other questions and found the answer "/usr/sbin/setsebool httpd_can_network_connect true" (from the question "nginx proxy server localhost permission denied"), but I use Docker on Windows 10 and the nginx container doesn't have setsebool, because the container doesn't have SELinux.
That's all; thank you in advance.
Localhost inside a container (like the nginx container) is different from localhost outside the container. Each container gets its own networking namespace by default. Instead of pointing to localhost, you need to place your containers on the same Docker network (not the default bridge network) and use the container or service name with Docker's built-in DNS to connect. The target port will also be the container port, not the published port on your host.
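As a sketch (assuming a user-defined network and a backend container named saber-app listening on container port 8080; all names and the image are hypothetical):

```shell
# Create a user-defined network and run both containers on it
docker network create saber-net
docker run -d --name saber-app --network saber-net my-app-image
docker run -d --name nginx --network saber-net -p 80:80 nginx
```

Then in the nginx config, proxy_pass http://saber-app:8080; replaces proxy_pass http://localhost:8080;.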
I'd like to run multiple Docker containers on one host VM, accessible through only one domain. I want to use the request URL to differentiate between containers.
To achieve this, I'm trying to set up an nginx server as a reverse proxy, running in a container and also listening on port 80.
Let's say I have two containers running on port 3000 and 4000.
The routing would be following:
docker-host.example.com/3000 -> this will access container exposing port 3000
docker-host.example.com/4000 -> this will access container exposing port 4000
The thing is, I'm currently stuck even trying to define a static rule for such a reverse proxy.
It works fine without any location prefix:
upstream application {
    server <docker container>:3000;
}

server {
    listen 80;

    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_pass http://application/;
    }
}
But when I add a port-based location and try to access it using localhost:{nginx port}/3000/:
upstream application {
    server <docker container>:3000;
}

server {
    listen 80;

    location /3000/ {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_pass http://application/3000/;
    }
}
It seems that the first resource (the main HTML) is requested correctly, but every dependent resource (for example, the JS or CSS the page needs) is missing.
If I examine the requests for those resources, the logs show:
09:19:20 [error] 5#5: *1 open() "/etc/nginx/html/public/css/fonts.min.css" failed (2: No such file or directory), client: 172.17.0.1, server: , request: "GET /public/css/fonts.min.css HTTP/1.1", host: "localhost:8455", referrer: "http://localhost:8455/3000/"
So the request URL is http://localhost:8455/public/css/fonts.min.css
instead of http://localhost:8455/3000/public/css/fonts.min.css.
Could I ask for any suggestions? Is this scenario possible?
You can select a Docker container per port, as in your example:
http://example.com:4000/css/fonts.min.css
http://example.com:3000/css/fonts.min.css
But there is another approach that I like more, because I think it is clearer: accessing a Docker container by domain name, e.g.:
http://a.example.com/css/fonts.min.css
http://b.example.com/css/fonts.min.css
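A minimal sketch of the name-based variant (the upstream hosts container_a and container_b are hypothetical container names reachable on the same Docker network as nginx):

```nginx
server {
    listen 80;
    server_name a.example.com;
    location / {
        proxy_pass http://container_a:3000;
    }
}

server {
    listen 80;
    server_name b.example.com;
    location / {
        proxy_pass http://container_b:4000;
    }
}
```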
Whichever you choose, there is a project on GitHub that helps you implement a Docker multi-container reverse proxy: https://github.com/jwilder/nginx-proxy
I've written an example using docker-compose for a similar scenario at: http://carlosvin.github.io/posts/reverse-proxy-multidomain-docker/
I have a really weird issue with NGINX.
I have the following upstream.conf file, with the following upstream:
upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;
    server mymachine:6006;
}
In locations.conf:
location ~ "^/files(?<command>.+)/[0123]" {
    rewrite ^ $command break;
    proxy_pass https://files_1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
In /etc/hosts:
127.0.0.1 localhost mymachine
When I do wget https://mymachine:6006/alive --no-check-certificate, I get HTTP request sent, awaiting response... 200 OK. I also verified with netstat that port 6006 is listening, and it is OK.
But when I send to the NGINX file server a request, I get the following error:
no live upstreams while connecting to upstream, client: .., request: "POST /files/save/2 HTTP/1.1", upstream: "https://files_1/save"
But the upstream is OK. What is the problem?
When you define an upstream, Nginx treats the destination server as something that can be up or down. Nginx decides whether your upstream is down based on fail_timeout (default 10s) and max_fails (default 1).
So if you have a few slow requests that time out, Nginx can decide that the server in your upstream is down; and because you only have one, the whole upstream is effectively down, and Nginx reports no live upstreams. It is better explained here:
https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/
I had a similar problem, and you can prevent this by overriding those settings.
For example:
upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;
    # max_fails is a parameter of the server directive, not of check
    server mymachine:6006 max_fails=0;
}
I had the same error, no live upstreams while connecting to upstream.
Mine was SSL-related: adding proxy_ssl_server_name on; solved it.
location / {
    proxy_ssl_server_name on;
    proxy_pass https://my_upstream;
}