NGINX proxy to anycable websocket server causing "111: Connection refused"

This is my NGINX config:
upstream app {
    server 127.0.0.1:3000;
}

upstream websockets {
    server 127.0.0.1:3001;
}

server {
    listen 80 default_server deferred;
    root /home/malcom/dev/scrutiny/public;
    server_name localhost 127.0.0.1;

    try_files $uri @app;

    location @app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app;
    }

    location /cable {
        proxy_pass http://websockets/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
"app" is a puma server serving a Rails app, and "websockets" points to an anycable-go process as the backend for CableReady.
The Rails app is working fine, apart from the websockets.
The browser says:
WebSocket connection to 'ws://127.0.0.1/cable' failed:
And the NGINX error_log the following:
2021/07/14 13:47:59 [error] 16057#16057: *14 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /cable HTTP/1.1", upstream: "http://127.0.0.1:3001/", host: "127.0.0.1"
The websocket setup per se is working, since everything's fine if I point the ActionCable config directly to 127.0.0.1:3001. It's trying to pass it through NGINX that's giving me headaches.
All the documentation and advice I've found so far makes me believe that this config should do the trick, but it really doesn't.
Thanks in advance!

The problem turned out to be the trailing slash in
proxy_pass http://websockets/;
Removing it fixed the issue; everything is working now.
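For reference, a sketch of the corrected location block, identical to the one above except that the trailing slash is dropped from proxy_pass:

location /cable {
    # with no trailing slash (and no URI part), nginx passes the original
    # request URI through unchanged, so anycable-go receives /cable
    proxy_pass http://websockets;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}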

Related

Nginx Proxy Pass to External APIs - 502 Bad Gateway

Issue: I have an nginx reverse proxy installed on an Ubuntu server with a private IP only. The purpose of this reverse proxy is to route incoming requests to various third-party WebSockets and REST APIs. Furthermore, to distribute the load, I have an HTTP load balancer sitting in front of the nginx proxy server.
So this is how it looks technically:
IncomingRequest --> InternalLoadBalancer(Port:80) --> NginxReverseProxyServer(80) --> ThirdPartyAPIs(Port:443) & WebSockets(443)
The problem I have is that nginx does not proxy correctly to the REST APIs and gives a 502 error, but it does work successfully for the WebSockets.
Below is my /etc/nginx/sites-available/default config file: (No changes done elsewhere)
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;

    location /binance-ws/ {
        # Web Socket Connection
        ####################### THIS WORKS FINE
        proxy_pass https://stream.binance.com:9443/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }

    location /binance-api/ {
        # Rest API Connection
        ##################### THIS FAILS WITH 502 ERROR
        proxy_pass https://api.binance.com/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }
}
I have even tried https://api.binance.com:443/, but no luck.
The websocket connection works fine:
wscat -c ws://LOADBALANCER-DNS/binance-ws/ws/btcusdt#aggTrade
However, the below one fails:
curl http://LOADBALANCER-DNS/binance-api/api/v3/time
When I check the nginx logs for the 502 error, this is what I see:
[error] 14339#14339: *20 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 10.5.2.187, server: , request: "GET /binance-api/api/v3/time HTTP/1.1", upstream: "https://52.84.150.34:443/api/v3/time", host: "internal-prod-nginx-proxy-xxxxxx.xxxxx.elb.amazonaws.com"
This is the actual REST API call that I am trying to simulate through nginx:
curl https://api.binance.com/api/v3/time
I have gone through many similar posts but am unable to figure out where I am going wrong. I appreciate your help!

docker nginx gives "502" and "upstream server temporarily disabled while connecting to upstream"

I run nginx in Docker; this is my nginx config:
server {
    listen 80;
    server_name saber;

    location / {
        root /usr/share/nginx;
        index index.html;
    }

    location /saber {
        proxy_pass http://localhost:8080;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_connect_timeout 90;
    }
}
when I use "http://localhost/saber/blog/getBlog.do" in browser ,browser give me a error with "502".
and nginx`s error.log have new.
2017/07/09 05:16:18 [warn] 5#5: *1 upstream server temporarily disabled while connecting to upstream, client: 172.17.0.1, server: saber, request: "GET /saber/blog/getBlog.do HTTP/1.1", upstream: "http://127.0.0.1:8080/saber/blog/getBlog.do", host: "localhost"
I can promise the "http://127.0.0.1:8080/saber/blog/getBlog.do" have response success in browser.
I try search answer in other question,i find a answer is "/usr/sbin/setsebool httpd_can_network_connect true",this is question url "nginx proxy server localhost permission denied",but I use the docker in win10,the nginx container dont hava setsebool,because the container dont find SELinux.
This all,Thank you in advance.
Localhost inside each container (like the nginx container) is different from localhost outside of the container, on your host. Each container gets its own networking namespace by default. Instead of pointing to localhost, you need to place your containers on the same Docker network (not the default bridge network) and use the container or service name with Docker's built-in DNS to connect. The target port will also be the container port, not the published port on your host.
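A minimal sketch of what the proxy block could look like after that change, assuming both containers sit on the same user-defined network and the backend container is named saber-app (the name is hypothetical):

location /saber {
    # resolve the backend by container name via Docker's built-in DNS;
    # 8080 is the port inside the container, not a published host port
    proxy_pass http://saber-app:8080;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}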

No live upstreams while connecting to upstream, but upstream is OK

I have a really weird issue with NGINX.
My upstream.conf file contains the following upstream:
upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;
    server mymachine:6006;
}
In locations.conf:
location ~ "^/files(?<command>.+)/[0123]" {
rewrite ^ $command break;
proxy_pass https://files_1 ;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
In /etc/hosts:
127.0.0.1 localhost mymachine
When I do wget https://mymachine:6006/alive --no-check-certificate, I get HTTP request sent, awaiting response... 200 OK. I also verified with netstat that port 6006 is listening, and it's OK.
But when I send to the NGINX file server a request, I get the following error:
no live upstreams while connecting to upstream, client: .., request: "POST /files/save/2 HTTP/1.1", upstream: "https://files_1/save"
But the upstream is OK. What is the problem?
When you define an upstream, Nginx treats the destination server as something that can be up or down. Nginx decides whether your upstream is down based on fail_timeout (default 10s) and max_fails (default 1).
So if you have a few slow requests that time out, Nginx can decide that the server in your upstream is down, and because you only have one, the whole upstream is effectively down, and Nginx reports "no live upstreams". It's better explained here:
https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/
I had a similar problem, and you can prevent this by overriding those settings.
For example:
upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;
    server mymachine:6006 max_fails=0;
}
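With max_fails=0, nginx disables the accounting of failed attempts for that server entirely, so it will never be marked unavailable no matter how many requests time out.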
I had the same "no live upstreams while connecting to upstream" error.
Mine was SSL-related: adding proxy_ssl_server_name on solved it.
location / {
    proxy_ssl_server_name on;
    proxy_pass https://my_upstream;
}
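For context: proxy_ssl_server_name on makes nginx pass the upstream's hostname through the TLS Server Name Indication (SNI) extension when it establishes the connection to the proxied HTTPS server. Upstreams that serve multiple certificates behind one IP often reject handshakes without SNI, and those failed handshakes can then surface as "no live upstreams".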

Gitlab 8.2.1 '110: Connection timed out' attempting to view merge_request

We recently upgraded from Gitlab 7.14.0 to Gitlab 8.2.1 on a 16GB/8cpu VM at DigitalOcean. We have one Merge Request with 55 comments that simply won't load in the browser. All other merge_requests load fine. We get the following error from NGINX:
2015/12/02 14:49:02 [error] 9094#0: *62 upstream timed out (110: Connection timed out) while reading response header from upstream, client: x.x.x.x, server: gitlab.domain.com, request: "GET /group/project/merge_requests/854 HTTP/1.1", upstream: "http://unix:/home/git/gitlab/tmp/sockets/gitlab.socket/group/project/merge_requests/900", host: "gitlab.domain.com"
In config/unicorn.rb we have set timeout 1200 (up from the original 30, after also trying 300 and 600; 600 worked well with Gitlab 7.14.0). We also have worker_processes 12 set in config/unicorn.rb.
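For reference, the relevant lines in config/unicorn.rb, as described above:

# config/unicorn.rb
worker_processes 12
timeout 1200   # raised from the original 30, after trying 300 and 600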
Gitlab 8.2.1 uses gitlab-workhorse, but I'm not familiar enough with gitlab-workhorse to know if there are settings for it.
Our workhorse settings in Nginx:
upstream gitlab {
    server unix:/home/git/gitlab/tmp/sockets/gitlab.socket fail_timeout=0;
}

upstream gitlab-workhorse {
    server unix:/home/git/gitlab/tmp/sockets/gitlab-workhorse.socket fail_timeout=0;
}

location @gitlab-workhorse {
    client_max_body_size 0;
    gzip off;
    proxy_read_timeout 600;
    proxy_connect_timeout 600;
    proxy_redirect off;
    proxy_buffering off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Ssl on;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://gitlab-workhorse;
}
Output from free:
             total       used       free     shared    buffers     cached
Mem:      16433928   11946992    4486936     119052     236972    8130620
-/+ buffers/cache:    3579400   12854528
We've restarted Gitlab with service gitlab restart in the hopes there was a hungry resource, but we saw no difference.
Any suggestions on how we figure out what is going on and fix this issue?

Thumbor/NGINX 502 Bad Gateway for larger images

I'm not sure if this is an issue with nginx or thumbor. I followed the instructions located here for setting up thumbor with nginx, and everything has been running smoothly for the last month. Then recently we tried to use thumbor after uploading images with larger dimensions (above 2500x2500), but I'm only greeted with a broken image icon.
If I go to my thumbor URL and pass the image location itself into the browser, I get one of two responses:
1) 500: Internal Server Error
or
2) 502: Bad Gateway
For example, if I try to pass this image:
http://www.newscenter.philips.com/pwc_nc/main/shared/assets/newscenter/2008_pressreleases/Simplicity_event_2008/hires/Red_Square1_hires.jpg
I get 502: Bad Gateway and checking my nginx error logs results in
2015/05/12 10:59:16 [error] 32020#0: *32089 upstream prematurely closed connection while reading response header from upstream, client: <my-ip>, server: <my-server>, request: "GET /unsafe/450x450/smart/http://www.newscenter.philips.com/pwc_nc/main/shared/assets/newscenter/2008_pressreleases/Simplicity_event_2008/hires/Red_Square1_hires.jpg HTTP/1.1" upstream: "http://127.0.0.1:8003/unsafe/450x450/smart/http://www.newscenter.philips.com/pwc_nc/main/shared/assets/newscenter/2008_pressreleases/Simplicity_event_2008/hires/Red_Square1_hires.jpg", host: "<my-host>"
If needed, here's my thumbor.conf file for nginx:
#
# A virtual host using mix of IP-, name-, and port-based configuration
#
upstream thumbor {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
}

server {
    listen 80;
    server_name <my-server>;
    client_max_body_size 10M;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header HOST $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://thumbor;
        proxy_redirect off;
    }
}
For images below this size it works fine, but users will be uploading images from their phones. How can I fix this?
