I get a "5#5: *23 upstream timed out (110: Connection timed out) while connecting to upstream, client: ..." error with nginx.
I have read and applied the "Nginx reverse proxy causing 504 Gateway Timeout" question. However, my case is slightly different, because I have three endpoints to proxy.
My nginx.conf:
worker_processes 1;
events { worker_connections 1024; }
http {
    server {
        listen 80;
        listen [::]:80;
        server_name rollcall;
        location /api {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_pass "http://[hostip]:8080/api";
        }
        location /api/attendance {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_pass "http://[hostip]:8000/api";
        }
        location / {
            include /etc/nginx/mime.types;
            root /usr/share/nginx/html;
            index index.html index.htm;
            try_files $uri $uri/ /index.html =404;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_read_timeout 3600s;
            proxy_send_timeout 3600s;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
My "/" location and the application on port 8080 are proxied as expected; however, the application on port 8000 is not proxied, and I get the above-mentioned timeout error. If I request the application on port 8000 directly, it works as expected.
What could cause the described timeout, and how should I change my conf file?
The problem was not an Nginx or Docker related problem: port 8000 was not open on the application droplet.
I created an actix-web app with a WebSocket inside a single application, and it works fine on localhost.
Basically, after passing a login page, it opens a dashboard and a plain JavaScript WebSocket:
new WebSocket(`ws://server:8181/client?token=${TokenString}`);
And it works fine.
I don't want to expose port 8181 on my production server, so my plan is to use the sub-path /ws and map it to port 8181.
So my /etc/nginx/sites-enabled/default config is:
server {
    server_name my_domain.com; # managed by Certbot
    ....
    #WebSocket part is here, under /ws path and mapped to 8181 port
    location /ws {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-NginX-Proxy false;
        proxy_pass http://127.0.0.1:8181;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    #Here is my web app, / mapped to 8080 port
    location / {
        client_max_body_size 50m;
        client_body_buffer_size 50m;
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header X-Real-Ip $remote_addr;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
    location ^~ /\. {
        deny all;
    }
    #configs generated by Certbot
    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl;
    #...
}
#redirect http to https
server {
    if ($host = my_domain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80;
    listen [::]:80;
    server_name my_domain.com;
    return 404; # managed by Certbot
}
My web page https://my_domain.com works fine, but my mapped WebSocket connection doesn't:
new WebSocket(`wss://my_domain.com/ws/client?token=${TokenString}`);
All I get is a "WebSocket connection to ... failed" message, and /var/log/nginx/error.log shows nothing.
Is something wrong with my nginx config?
Edit: it turns out a 404 shows up in /var/log/nginx/access.log 😪
It turns out the /ws path has to be rewritten, since my WebSocket server doesn't map /ws to anything.
The idea was from here
So my configuration is:
location ~* ^/ws/ {
    rewrite ^/ws/(.*) /$1 break;
    ....
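Put together with the proxy settings from the original /ws block, the full location would look roughly like this (the location match and the rewrite are the new parts; the rest is the same proxy/Upgrade setup as above):
location ~* ^/ws/ {
    # strip the /ws prefix, since the backend routes don't include it
    rewrite ^/ws/(.*) /$1 break;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_pass http://127.0.0.1:8181;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}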
I have a page with a .../chat/ URL, and the whole thing works on localhost. I'm trying to deploy on Ubuntu and am having a hard time.
I guess getting to the point means posting what I've got:
/etc/nginx/sites-enabled/mysite:
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=1r/s;
upstream channels-backend {
    server localhost:8001;
}
server {
    listen 80;
    server_name foo.com;
    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/ubuntu/mysite/mysite/;
    }
    location / {
        include proxy_params;
        limit_req zone=mylimit;
        proxy_pass http://unix:/run/gunicorn.sock;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
I tried changing my server block to contain:
...
location / {
    try_files $uri @proxy_to_app;
    include proxy_params;
    limit_req zone=mylimit;
    proxy_pass http://unix:/run/gunicorn.sock;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
location @proxy_to_app {
    proxy_pass http://channels-backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;
}
...
but this caused my webpage to return a 503.
My Python and JS seem fine, as everything works on localhost, and running daphne -b 0.0.0.0 -p 8001 myproject.asgi:application results in a "Listening..." message, so that seems fine too.
My /etc/supervisor/conf.d/mysite.conf is:
[program:mysite_asgi]
directory=/home/ubuntu/mysite/mysite
command=/home/ubuntu/mysite/mysite/venv/bin/daphne -b 0.0.0.0 -p 8001 mysite.asgi:application
autostart=true
autorestart=true
stopasgroup=true
user=ubuntu
stdout_logfile=/home/ubuntu/mysite/daphnelog/asgi.log
redirect_stderr=true
The browser console shows WebSocket connection to 'ws://foo.com/chat/' failed: Error during WebSocket handshake: Unexpected response code: 503. I'm unsure whether I've posted a complete picture of what's needed to help you help me; please let me know if there's more information I can provide. Thank you!
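For what it's worth, a common pattern with Channels is to send only the WebSocket path straight to the ASGI server and leave everything else on gunicorn. A minimal sketch against the channels-backend upstream defined above, assuming daphne actually handles the /chat/ routes (a sketch, not a verified fix):
location /chat/ {
    # hand WebSocket traffic to daphne on port 8001 via the upstream above
    proxy_pass http://channels-backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}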
I have an issue with a freshly configured Nginx setup on Debian 9.
My site loads fine using https, but I get a 404 Not Found when I access it using http.
I tried removing the SSL certificate and it works; however, I need the location /webex/receive over https, and /ping and /mailgun over http.
See my edited down server block:
server {
    listen 80;
    listen 443 ssl;
    include snippets/self-signed.conf;
    include snippets/ssl-params.conf;
    location /ping {
        proxy_pass http://xx.xx.xx.xxx:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
    location /mailgun {
        proxy_pass http://xx.xx.xx.xxx:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
    location /webex/receive {
        proxy_pass http://xx.xx.xx.xxx:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
All locations (/ping, /mailgun and /webex/receive) work over https, but I only want /webex/receive over https and the other locations, /mailgun and /ping, over http.
I found the solution:
server {
    listen 80;
    listen [::]:80;
    root /var/www/xx.xx.xx.xxx/html;
    index index.html index.htm index.nginx-debian.html;
    server_name xx.xx.xx.xxx www.xx.xx.xx.xxx;
    location / {
        try_files $uri $uri/ =404;
    }
    location /ping {
        proxy_pass http://xx.xx.xx.xxx:3000;
    }
    location /mailgun {
        proxy_pass http://xx.xx.xx.xxx:3000;
    }
}
server {
    listen 443 ssl;
    include snippets/self-signed.conf;
    include snippets/ssl-params.conf;
    location /webex/receive {
        proxy_pass http://xx.xx.xx.xxx:8080;
    }
}
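As an optional refinement, plain-http requests to /webex/receive could be redirected to https instead of falling through to the 404; a minimal sketch for the port-80 server block:
location /webex/receive {
    # send http callers of this path to the https server block instead of a 404
    return 301 https://$host$request_uri;
}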
In my use case, I am trying to set up a reverse proxy with nginx. I have two applications running on two ports: the app server is running on port 9090 and the api server on port 8081.
I will be running the nginx server on port 8080. If a request comes in for /api, nginx should route it to the api server; other requests should go to the app server.
I have the following nginx.conf,
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    upstream app {
        server 127.0.0.1:9090;
        keepalive 64;
    }
    upstream api {
        server 127.0.0.1:8081;
        keepalive 64;
    }
    #
    # The default server
    #
    server {
        listen 8080;
        server_name howti;
        location /api {
            rewrite /api/(.*) /$1 break;
            proxy_pass http://api;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_set_header X-NginX-Proxy true;
            real_ip_header X-Forwarded-For;
            real_ip_recursive on;
            #proxy_set_header Connection "";
            #proxy_http_version 1.1;
        }
        location / {
            rewrite /(.*) /$1 break;
            proxy_pass http://app;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_set_header X-NginX-Proxy true;
            real_ip_header X-Forwarded-For;
            real_ip_recursive on;
            #proxy_set_header Connection "";
            #proxy_http_version 1.1;
        }
        # redirect not found pages to the static page /404.html
        error_page 404 /404.html;
        location = /404.html {
            root /usr/share/nginx/html;
        }
        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
But it is not working, and I am not able to debug the request in nginx. Could someone help me with this? Thanks.
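On the "not able to debug" part: nginx can log which upstream each request was sent to. A minimal sketch for the http block, assuming the usual /var/log/nginx paths (the log_format name and fields are only illustrative):
http {
    # "info" already records connect/proxy failures;
    # the "debug" level additionally needs an nginx built with --with-debug
    error_log /var/log/nginx/error.log info;
    # access log that shows which upstream handled each request
    log_format upstreamlog '$remote_addr "$request" -> $upstream_addr '
                           'status=$status upstream_status=$upstream_status';
    access_log /var/log/nginx/access.log upstreamlog;
    # ... existing upstream and server blocks from above ...
}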
I am using the following configuration
upstream site {
    server 127.0.0.1:3000;
    keepalive 64;
}
server {
    listen 80;
    error_page 400 404 500 502 503 504 /50x.html;
    location /50x.html {
        internal;
        root /usr/share/nginx/www;
    }
    location /static {
        root /opt/site/static;
        access_log off;
        expires max;
    }
    location / {
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
        proxy_pass http://site;
        proxy_intercept_errors on;
    }
}
I have saved it to /etc/nginx/sites-available/site.conf and symlinked it to /etc/nginx/sites-enabled/site.conf, and when I restart nginx it gives me the following error:
Restarting nginx: [emerg]: unknown directive "keepalive" in /etc/nginx/sites-enabled/site.conf:3
There is no keepalive directive; use keepalive_timeout instead. And you can't put it inside upstream; use it inside http, server or location.
The "keepalive" option is provided by the keepalive module, and since nginx 1.1.4 the keepalive functionality is included in the main code.
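Concretely, on nginx 1.1.4 or newer the upstream block from the question is valid as-is; the keepalive directive just needs HTTP/1.1 and a cleared Connection header on the proxied requests, which the configuration above already sets. A minimal sketch:
upstream site {
    server 127.0.0.1:3000;
    keepalive 64;              # supported inside upstream since nginx 1.1.4
}
server {
    listen 80;
    location / {
        proxy_http_version 1.1;         # keepalive to upstreams needs HTTP/1.1
        proxy_set_header Connection "";  # and the Connection header cleared
        proxy_pass http://site;
    }
}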