Upstream location changed to HTTPS after redirect - nginx

I use Nginx as a reverse proxy to serve an internal HTTP application over an HTTPS connection.
But I have some problems with redirection.
This is my setup:
location / {
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://172.16.88.208;
    proxy_intercept_errors on;
    error_page 302 = @handle_redirects;
}

location @handle_redirects {
    resolver 8.8.8.8;
    set $orig_loc $upstream_http_location;
    proxy_pass $orig_loc;
}
At first, I tried it without setting the resolver to 8.8.8.8 but encountered this error (while a 502 was displayed to the client):
*1 no resolver defined to resolve xxx.xxxx.com while sending to client, client: xxx.xxx.xxx.xxx
After setting the resolver I'm getting closer, but not quite. Still a 502 and this error:
*6 connect() failed (111: Connection refused) while connecting to upstream,
upstream: "https://172.16.88.208:443/login"
Well, that makes perfect sense, because the application is hosted at http://, not https://
I specify this in the code above:
set $orig_loc $upstream_http_location;
Why does $upstream_http_location point to an https page?
Thanks for any help.
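One possible explanation is that the backend sees the X-Forwarded-Proto: https header (set from $scheme above) and therefore builds its redirect URLs with the https scheme. If that is the case, a rough workaround, shown here only as an untested sketch with an illustrative variable name, is to rewrite the scheme back to http before re-proxying:

location @handle_redirects {
    resolver 8.8.8.8;
    set $orig_loc $upstream_http_location;

    # Untested sketch: if the upstream Location header comes back as https://...,
    # rewrite it to http:// so nginx re-proxies over plain HTTP.
    if ($orig_loc ~* ^https://(?<loc_no_scheme>.+)$) {
        set $orig_loc http://$loc_no_scheme;
    }

    proxy_pass $orig_loc;
}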

Related

NGINX proxy to anycable websocket server causing "111: Connection refused"

This is my NGINX config:
upstream app {
    server 127.0.0.1:3000;
}

upstream websockets {
    server 127.0.0.1:3001;
}

server {
    listen 80 default_server deferred;
    root /home/malcom/dev/scrutiny/public;
    server_name localhost 127.0.0.1;

    try_files $uri @app;

    location @app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app;
    }

    location /cable {
        proxy_pass http://websockets/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
"app" is a puma server serving a Rails app, and "websockets" points to an anycable-go process as the backend for CableReady.
The Rails app is working fine, apart from the websockets.
The browser says:
WebSocket connection to 'ws://127.0.0.1/cable' failed:
And the NGINX error_log the following:
2021/07/14 13:47:59 [error] 16057#16057: *14 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /cable HTTP/1.1", upstream: "http://127.0.0.1:3001/", host: "127.0.0.1"
The websocket setup per se is working, since everything's fine if I point the ActionCable config directly to 127.0.0.1:3001. It's trying to pass it through NGINX that's giving me headaches.
All the documentation and advice I've found so far makes me believe that this config should do the trick, but it really doesn't.
Thanks in advance!
So the problem seemed to be the trailing slash in
proxy_pass http://websockets/;
Looks like it's working now.
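For reference, the working location block is the one above with the trailing slash removed:

location /cable {
    proxy_pass http://websockets;   # no trailing slash, so the original URI (/cable) is passed through
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}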

NGINX Rewriting subdomains as URL in a proxy_pass

I would like to redirect specific subdomains of my domain to my backend as prefixes of the URL that is passed to the backend. This is because I have a single server and I do not want to have to handle the multiple domains in the backend due to increased complexity.
Hence if I have:
sub1.domain.com => domain.com/sub1/
sub1.domain.com/pathname => domain.com/sub1/pathname
sub1.domain.com/pathname?searchquery => domain.com/sub1/pathname?searchquery
and so forth.
So far what I have come up with is the following:
server {
    charset utf8;
    listen 80;
    server_name
        domain.com
        sub1.domain.com
        sub2.domain.com
        sub3.domain.com
        sub4.domain.com
        sub5.domain.com;

    # Default
    if ($host ~ ^domain\.com) {
        set $proxy_uri $request_uri;
    }

    # Rewrites
    if ($host ~ (.*)\.domain\.com) {
        set $proxy_uri $1$request_uri;
    }

    location / {
        expires 1s;
        proxy_pass http://node:8080$proxy_uri; # node is an internal host (docker container)
        proxy_set_header Host domain.com;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_cache_valid 200 1s;
    }
}
But unfortunately all I get is a 502: Bad Gateway with the following log:
2017/06/11 12:49:18 [error] 6#6: *2 no resolver defined to resolve node, client: 136.0.0.110, server: domain.com:8888,, request: "GET /favicon.ico HTTP/1.1", host: "sub1.domain.com:8888", referrer: "http://sub1.domain.com:8888/"
Any idea how I can achieve my goal? Any help would be greatly appreciated :)
Cheers!
It seems I wasn't so far from the answer - adding an upstream block before the server block was enough to get the desired effect.
upstream backend {
    server node:8080;
    keepalive 8;
}
I also had to slightly modify the proxy_pass line to the following:
proxy_pass http://backend$proxy_uri;
The problem most likely relates to how NGINX resolves host names in proxy_pass URLs - if anyone reading this can provide insight into the reason, please edit this answer!
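The likely reason: when the proxy_pass target contains variables, nginx does not resolve the host name at startup but at request time, which requires a resolver directive; if the name refers to an upstream block instead, it is resolved once when the configuration is loaded and no resolver is needed. Put together, a rough sketch of the final config (abridged; the if blocks that set $proxy_uri stay exactly as in the question) looks like this:

upstream backend {
    server node:8080;   # resolved when the config is loaded, so no resolver directive is needed
    keepalive 8;
}

server {
    listen 80;
    server_name domain.com sub1.domain.com sub2.domain.com;

    # ... the if blocks that set $proxy_uri, as in the question ...

    location / {
        expires 1s;
        proxy_pass http://backend$proxy_uri;
        proxy_set_header Host domain.com;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_cache_valid 200 1s;
    }
}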

No live upstreams while connecting to upstream, but upstream is OK

I have a really weird issue with NGINX.
I have the following upstream defined in my upstream.conf file:
upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;
    server mymachine:6006;
}
In locations.conf:
location ~ "^/files(?<command>.+)/[0123]" {
rewrite ^ $command break;
proxy_pass https://files_1 ;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
In /etc/hosts:
127.0.0.1 localhost mymachine
When I do wget https://mymachine:6006/alive --no-check-certificate, I get HTTP request sent, awaiting response... 200 OK. I also verified with netstat that port 6006 is listening, and it's OK.
But when I send a request to the NGINX file server, I get the following error:
no live upstreams while connecting to upstream, client: .., request: "POST /files/save/2 HTTP/1.1, upstream: "https://files_1/save"
But the upstream is OK. What is the problem?
When defining an upstream, Nginx treats the destination server as something that can be up or down. Nginx decides if your upstream is down or not based on fail_timeout (default 10s) and max_fails (default 1).
So if you have a few slow requests that time out, Nginx can decide that the server in your upstream is down, and because you only have one, the whole upstream is effectively down, and Nginx reports no live upstreams. It is better explained here:
https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/
I had a similar problem, and you can prevent this by overriding those settings.
For example:
upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;
    server mymachine:6006 max_fails=0;
}
I had the same error no live upstreams while connecting to upstream
Mine was SSL related: adding proxy_ssl_server_name on solved it.
location / {
    proxy_ssl_server_name on;
    proxy_pass https://my_upstream;
}

Thumbor/NGINX 502 Bad Gateway for larger images

I'm not sure if this is an issue with nginx or thumbor. I followed the instructions located here for setting up thumbor with nginx, and everything has been running smoothly for the last month. Then recently we tried to use thumbor after uploading images with larger dimensions (above 2500x2500), but I'm only greeted with a broken image icon.
If I go to my thumbor URL and pass the image location itself into the browser, I get one of two responses:
1) 500: Internal Server Error
or
2) 502: Bad Gateway
For example, if I try to pass this image:
http://www.newscenter.philips.com/pwc_nc/main/shared/assets/newscenter/2008_pressreleases/Simplicity_event_2008/hires/Red_Square1_hires.jpg
I get 502: Bad Gateway, and checking my nginx error logs shows:
2015/05/12 10:59:16 [error] 32020#0: *32089 upstream prematurely closed connection while reading response header from upstream, client: <my-ip>, server: <my-server>, request: "GET /unsafe/450x450/smart/http://www.newscenter.philips.com/pwc_nc/main/shared/assets/newscenter/2008_pressreleases/Simplicity_event_2008/hires/Red_Square1_hires.jpg HTTP/1.1" upstream: "http://127.0.0.1:8003/unsafe/450x450/smart/http://www.newscenter.philips.com/pwc_nc/main/shared/assets/newscenter/2008_pressreleases/Simplicity_event_2008/hires/Red_Square1_hires.jpg", host: "<my-host>"
If needed, here's my thumbor.conf file for nginx:
#
# A virtual host using mix of IP-, name-, and port-based configuration
#
upstream thumbor {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
}

server {
    listen 80;
    server_name <my-server>;
    client_max_body_size 10M;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header HOST $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://thumbor;
        proxy_redirect off;
    }
}
For images below this size it works fine, but users will be uploading images from their phones. How can I fix this?
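One thing that may be worth ruling out on the nginx side is the proxy timeouts, since resizing and smart-cropping a very large source image can take longer than nginx's 60-second defaults; the values below are illustrative assumptions, not a confirmed fix. If the thumbor worker itself dies on large images (for example by running out of memory), nginx tuning will not help and the thumbor logs are the place to look.

location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header HOST $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://thumbor;
    proxy_redirect off;

    # Illustrative values: give slow thumbor workers more time before
    # nginx drops the upstream connection.
    proxy_connect_timeout 10s;
    proxy_send_timeout 120s;
    proxy_read_timeout 120s;
}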

proxy_pass in nginx to private IP of EC2 instance

I have two Amazon EC2 instances. Let me call them X and Y. I have nginx installed on both of them. Y has resque running on port 3000. Only X has a public IP and domain example.com. Suppose private IP of Y is 15.0.0.10
What I want is that all the requests come to X. And only if the request url matches the pattern /resque, then it should be handled by Y at localhost:3000/overview, which is the resque web interface. It seems like this can be done using proxy_pass in nginx config.
So, in nginx.conf, I have added the following:
location /resque {
    real_ip_header X-Forwarded-For;
    set_real_ip_from 0.0.0.0/0;
    allow all;
    proxy_pass http://15.0.0.10:3000/overview;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
But now, when I visit http://example.com/resque from my web browser, it shows 502 Bad Gateway.
In /var/log/nginx/error.log on X,
2014/02/27 10:27:16 [error] 12559#0: *2588 connect() failed (111: Connection
refused) while connecting to upstream, client: 123.201.181.82, server: _,
request: "GET /resque HTTP/1.1", upstream: "http://15.0.0.10:3000/overview",
host: "example.com"
Any suggestions on what could be wrong and how to fix this?
Turns out that the server should be listening on 0.0.0.0 if it needs to be reachable by addressing the IP of the instance.
So to solve my problem, I stopped the server running resque on 127.0.0.1:3000 and restarted it bound to 0.0.0.0:3000. Everything else remains the same as above, and it works. Thanks.
For reference: Curl amazon EC2 instance
