Thumbor/NGINX 502 Bad Gateway for larger images - nginx

I'm not sure if this is an issue with nginx or thumbor. I followed the instructions located here for setting up thumbor with nginx, and everything has been running smoothly for the last month. Then recently we tried to use thumbor after uploading images with larger dimensions (above 2500x2500), but I'm only greeted with a broken image icon.
If I go to my thumbor URL and pass the image location itself into the browser, I get one of two responses:
1) 500: Internal Server Error
or
2) 502: Bad Gateway
For example, if I try to pass this image:
http://www.newscenter.philips.com/pwc_nc/main/shared/assets/newscenter/2008_pressreleases/Simplicity_event_2008/hires/Red_Square1_hires.jpg
I get 502: Bad Gateway, and checking my nginx error logs shows:
2015/05/12 10:59:16 [error] 32020#0: *32089 upstream prematurely closed connection while reading response header from upstream, client: <my-ip>, server: <my-server>, request: "GET /unsafe/450x450/smart/http://www.newscenter.philips.com/pwc_nc/main/shared/assets/newscenter/2008_pressreleases/Simplicity_event_2008/hires/Red_Square1_hires.jpg HTTP/1.1" upstream: "http://127.0.0.1:8003/unsafe/450x450/smart/http://www.newscenter.philips.com/pwc_nc/main/shared/assets/newscenter/2008_pressreleases/Simplicity_event_2008/hires/Red_Square1_hires.jpg", host: "<my-host>"
If needed, here's my thumbor.conf file for nginx:
#
# A virtual host using a mix of IP-, name-, and port-based configuration
#
upstream thumbor {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
}

server {
    listen 80;
    server_name <my-server>;
    client_max_body_size 10M;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header HOST $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://thumbor;
        proxy_redirect off;
    }
}
It works fine for images below this size, but users will be uploading images from their phones. How can I fix this?
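For what it's worth, "upstream prematurely closed connection" means the thumbor worker hung up mid-request, not that nginx itself misbehaved, so it's worth watching thumbor's own logs and the machine's memory while one of the large originals is being resized. On the nginx side, one hedged thing to try is giving slow resizes more headroom. This sketch is the same location block as in the config above, with timeout directives added; the directive names are standard nginx, but the values are illustrative guesses, not tuned numbers:

```nginx
location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header HOST $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://thumbor;
    proxy_redirect off;

    # Extra headroom for large-image resizes (illustrative values):
    proxy_connect_timeout 60s;
    proxy_send_timeout    120s;
    proxy_read_timeout    120s;
}
```

If the workers are being killed (e.g. out of memory) rather than timing out, longer timeouts won't help and the fix has to happen on the thumbor side.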

Related

NGINX proxy to anycable websocket server causing "111: Connection refused"

This is my NGINX config:
upstream app {
    server 127.0.0.1:3000;
}
upstream websockets {
    server 127.0.0.1:3001;
}
server {
    listen 80 default_server deferred;
    root /home/malcom/dev/scrutiny/public;
    server_name localhost 127.0.0.1;
    try_files $uri @app;

    location @app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app;
    }

    location /cable {
        proxy_pass http://websockets/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
"app" is a puma server serving a Rails app, and "websockets" points to an anycable-go process as the backend for CableReady.
The Rails app is working fine, apart from the websockets.
The browser says:
WebSocket connection to 'ws://127.0.0.1/cable' failed:
And the NGINX error_log the following:
2021/07/14 13:47:59 [error] 16057#16057: *14 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /cable HTTP/1.1", upstream: "http://127.0.0.1:3001/", host: "127.0.0.1"
The websocket setup per se is working, since everything's fine if I point the ActionCable config directly to 127.0.0.1:3001. It's trying to pass it through NGINX that's giving me headaches.
All the documentation and advice I've found so far makes me believe that this config should do the trick, but it's really not.
Thanks in advance!
So the problem seemed to be the trailing slash in
proxy_pass http://websockets/;
Looks like it's working now.

Nginx Proxy Pass to External APIs- 502 Bad Gateway

Issue: I have an nginx reverse proxy installed on an Ubuntu server with a private IP only. The purpose of this reverse proxy is to route incoming requests to various third-party WebSockets and REST APIs. Furthermore, to distribute the load, I have an HTTP load balancer sitting in front of the nginx proxy server.
So this is how it looks technically:
IncomingRequest --> InternalLoadBalancer(Port:80) --> NginxReverseProxyServer(80) --> ThirdParyAPIs(Port:443) & WebSockets(443)
The problem is that nginx does not proxy correctly to the REST APIs and gives a 502 error, but it does work successfully for the WebSockets.
Below is my /etc/nginx/sites-available/default config file: (No changes done elsewhere)
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
server {
    listen 80;

    location /binance-ws/ {
        # WebSocket connection
        ####################### THIS WORKS FINE
        proxy_pass https://stream.binance.com:9443/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }

    location /binance-api/ {
        # REST API connection
        ##################### THIS FAILS WITH 502 ERROR
        proxy_pass https://api.binance.com/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }
}
I have even tried adding https://api.binance.com:443/ but no luck.
The websocket connection works fine:
wscat -c ws://LOADBALANCER-DNS/binance-ws/ws/btcusdt#aggTrade
However, the below one fails:
curl http://LOADBALANCER-DNS/binance-api/api/v3/time
When I see the nginx logs for 502 error, this is what I see:
[error] 14339#14339: *20 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 10.5.2.187, server: , request: "GET /binance-api/api/v3/time HTTP/1.1", upstream: "https://52.84.150.34:443/api/v3/time", host: "internal-prod-nginx-proxy-xxxxxx.xxxxx.elb.amazonaws.com"
This is the actual RestAPI call which I am trying to simulate from nginx:
curl https://api.binance.com/api/v3/time
I have gone through many near-identical posts but am unable to find where I'm going wrong. Appreciate your help!
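Since it's the TLS handshake to the upstream that fails, one hedged thing to try is making nginx send SNI, and forwarding the API's own hostname instead of the internal load balancer's. This is an untested sketch against the config above (all directives are standard nginx; `api.binance.com` is taken from the question):

```nginx
location /binance-api/ {
    proxy_pass https://api.binance.com/;
    proxy_http_version 1.1;

    # Send SNI in the TLS handshake to the upstream; CDN-fronted APIs
    # commonly reject handshakes without it.
    proxy_ssl_server_name on;
    proxy_ssl_name api.binance.com;

    # The upstream expects its own hostname, not the ELB's ($host):
    proxy_set_header Host api.binance.com;
}
```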

Docker nginx 502: "upstream server temporarily disabled while connecting to upstream"

I use nginx in Docker; this is my nginx configuration:
server {
    listen 80;
    server_name saber;

    location / {
        root /usr/share/nginx;
        index index.html;
    }

    location /saber {
        proxy_pass http://localhost:8080;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_connect_timeout 90;
    }
}
When I open "http://localhost/saber/blog/getBlog.do" in the browser, the browser gives me a 502 error,
and nginx's error.log gets a new entry:
2017/07/09 05:16:18 [warn] 5#5: *1 upstream server temporarily disabled while connecting to upstream, client: 172.17.0.1, server: saber, request: "GET /saber/blog/getBlog.do HTTP/1.1", upstream: "http://127.0.0.1:8080/saber/blog/getBlog.do", host: "localhost"
I can confirm that "http://127.0.0.1:8080/saber/blog/getBlog.do" responds successfully in the browser.
I searched other questions and found the answer "/usr/sbin/setsebool httpd_can_network_connect true" (from the question "nginx proxy server localhost permission denied"), but I run Docker on Windows 10 and the nginx container doesn't have setsebool, because the container doesn't have SELinux.
That's all, thank you in advance.
Localhost inside a container (like the nginx container) is different from localhost outside on your host. Each container gets its own networking namespace by default. Instead of pointing to localhost, you need to place your containers on the same docker network (not the default bridge network) and use the container or service name with Docker's built-in DNS to connect. The target port will also be the container port, not the published port on your host.
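Applied to the config above, that answer might look like this sketch. It assumes both containers are attached to the same user-defined network (e.g. created with `docker network create saber-net` and joined with `--network saber-net`), and that the backend container is named `saber-app` and listens on 8080 inside the container; those names are hypothetical:

```nginx
location /saber {
    # Container name resolves via Docker's embedded DNS on a user-defined
    # network; 8080 is the container port, not a host-published port.
    proxy_pass http://saber-app:8080;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```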

Multiple docker containers accessible by nginx reverse proxy

I'd like to run multiple docker containers on one host VM, all accessible through a single domain. I want to use the request URL to differentiate between containers.
To achieve this, I'm trying to set up an nginx reverse proxy, also running in a container and listening on port 80.
Let's say I have two containers running on ports 3000 and 4000.
The routing would be following:
docker-host.example.com/3000 -> this will access container exposing port 3000
docker-host.example.com/4000 -> this will access container exposing port 4000
The thing is, I'm currently stuck even trying to define a static rule for such a reverse proxy.
It works fine without any location:
upstream application {
    server <docker container>:3000;
}
server {
    listen 80;

    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_pass http://application/;
    }
}
But when I add port location and try to access it using localhost:{nginx port}/3000/
upstream application {
    server <docker container>:3000;
}
server {
    listen 80;

    location /3000/ {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_pass http://application/3000/;
    }
}
It seems that the first resource (the main HTML) is requested correctly, but every dependent resource (for example the JS or CSS the page needs) is missing.
If I examine the requests for those resources, I see this in the logs:
09:19:20 [error] 5#5: *1 open() "/etc/nginx/html/public/css/fonts.min.css" failed (2: No such file or directory), client: 172.17.0.1, server: , request: "GET /public/css/fonts.min.css HTTP/1.1", host: "localhost:8455", referrer:"http://localhost:8455/3000/"
So request url is http://localhost:8455/public/css/fonts.min.css
Instead of http://localhost:8455/3000/public/css/fonts.min.css
Could I ask you for any suggestions? Is this scenario possible?
You can select a docker container per port, as in your example:
http://example.com:4000/css/fonts.min.css
http://example.com:3000/css/fonts.min.css
But there is another approach that I like more, because I think it is clearer: access each docker container by domain name, e.g.:
http://a.example.com/css/fonts.min.css
http://b.example.com/css/fonts.min.css
Whichever you choose, there is a project on GitHub that helps you implement a docker multi-container reverse proxy: https://github.com/jwilder/nginx-proxy
I've written an example using docker-compose for a similar scenario at: http://carlosvin.github.io/posts/reverse-proxy-multidomain-docker/
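The name-based variant the answer prefers could be sketched as one server block per subdomain, each proxying to a different container; the subdomains and container names below are hypothetical, and the containers are assumed to share a docker network with nginx:

```nginx
server {
    listen 80;
    server_name a.example.com;
    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://container_a:3000;  # container name via Docker DNS
    }
}
server {
    listen 80;
    server_name b.example.com;
    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://container_b:4000;
    }
}
```

Because each app is served from its domain's root, asset paths like /public/css/fonts.min.css resolve correctly without rewriting, which sidesteps the missing-prefix problem from the question.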

No live upstreams while connecting to upstream, but upstream is OK

I have a really weird issue with NGINX.
I have the following upstream.conf file, with the following upstream:
upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;
    server mymachine:6006;
}
In locations.conf:
location ~ "^/files(?<command>.+)/[0123]" {
    rewrite ^ $command break;
    proxy_pass https://files_1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
In /etc/hosts:
127.0.0.1 localhost mymachine
When I do wget https://mymachine:6006/alive --no-check-certificate, I get HTTP request sent, awaiting response... 200 OK. I also verified with netstat that port 6006 is listening, and it's OK.
But when I send to the NGINX file server a request, I get the following error:
no live upstreams while connecting to upstream, client: .., request: "POST /files/save/2 HTTP/1.1", upstream: "https://files_1/save"
But the upstream is OK. What is the problem?
When defining an upstream, nginx treats the destination server as something that can be up or down. Nginx decides whether your upstream is down based on fail_timeout (default 10s) and max_fails (default 1).
So if you have a few slow requests that time out, nginx can decide that the server in your upstream is down; and because you only have one, the whole upstream is effectively down, and nginx reports no live upstreams. Better explained here:
https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/
I had a similar problem, and you can prevent this by overriding those settings (max_fails is a parameter of the server directive).
For example:
upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;
    server mymachine:6006 max_fails=0;
}
I had the same error, no live upstreams while connecting to upstream.
Mine was SSL-related: adding proxy_ssl_server_name on solved it.
location / {
    proxy_ssl_server_name on;
    proxy_pass https://my_upstream;
}
