nginx and proxying WebSockets - http

I'm trying to proxy WebSocket + HTTP traffic with nginx.
I have read this: http://nginx.org/en/docs/http/websocket.html
My config looks like:
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    gzip on;
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }
    server {
        listen 80;
        server_name ourapp.com;
        location / {
            proxy_pass http://127.0.0.1:100;
            proxy_http_version 1.1;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}
I have 2 problems:
1) The connection closes once a minute.
2) I want to run both HTTP and WS on the same port. The application works fine locally, but if I try to put HTTP and WS on the same port and set this nginx proxy, I get this:
WebSocket connection to 'ws://ourapp.com/ws' failed: Unexpected response code: 200
Loading the app (HTTP) seems to work fine, but WebSocket connection fails.

Problem 1: As for the connection dying once a minute, I realized that it's the nginx timeout variable. I can either make our app ping once in a while or increase the timeout. I'm not sure if I should set it to 0, so I decided to just ping once a minute and set the timeout to 90 seconds. (keepalive_timeout)
Problem 2: Connectivity issues arose when I used CloudFlare CDN. Disabling CloudFlare acceleration solved the problem.
Alternatively I could create a subdomain and set it as "unaccelerated" and use that for WS.

Related

No messages are being sent in Blazor Server behind Nginx

I have a blazor-server application, which works correctly in all cases other than running behind reverse proxy (I've only tested with NGINX).
The browser is able to connect to the /_blazor?id=xyz endpoint and successfully send/receive heartbeat messages. But events such as button clicks do not work at all. There are no errors or warnings in the console or the application's logs.
The Nginx config is written according to the .NET docs.
Here is my setup:
map $http_connection $connection_upgrade {
    "~*Upgrade" $http_connection;
    default keep-alive;
}
server {
    listen 80;
    server_name service.example.com;
    location / {
        # App server url
        proxy_pass http://localhost:9000;
        # Configuration for WebSockets
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_cache off;
        # WebSockets were implemented after http/1.0
        proxy_http_version 1.1;
        # Configuration for ServerSentEvents
        # proxy_buffering off; # Removed according to docs
        # Configuration for LongPolling or if your KeepAliveInterval is longer than 60 seconds
        proxy_read_timeout 100s;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
I think the problem is because of nginx, as app works fine on my local machine.
If you need any more data, please comment below, I will provide.
Thanks in advance.

Apparently, Cloudflare was also configured to proxy my request, and this somehow affected it.
Disabling the Cloudflare proxy and configuring HTTPS with Let's Encrypt solved the issue.

Experiencing random timeouts for nginx proxy pass

I have been battling this issue for some days now. I found a temporary solution but just can't wrap my head around what exactly is happening.
So what happens is that one request is handled immediately. And if I send the same request right after, it hangs on 'waiting' for 60 seconds. If I cancel the request and send a new one, it is handled correctly again. If I send a request after this one, it hangs again. This cycle repeats.
It sounds like a load-balancing issue but I didn't set it up. Does nginx have some sort of default load balancing for connection to the upstream server?
The error received is upstream timed out (110: Connection timed out).
I found out that with these proxy parameters changed, it only hangs for 3 seconds and every subsequent request is now handled fine (after the one that waited), because of a working keep-alive connection, I suppose.
proxy_connect_timeout 3s;
It looks like setting up a connection to the upstream is timing out, and then after the timeout it tries again and succeeds. Also, in the "(cancelled)request - ok request - (cancelled)request" cycle described above there is no keep-alive being set up. Only if I wait for the request to complete, which takes 60 seconds without the above settings and is unacceptable.
It happens for both domains.
NGINX conf:
worker_processes 1;
events
{
    worker_connections 1024;
}
http
{
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    gzip on;
    # Timeouts
    client_body_timeout 12;
    client_header_timeout 12;
    send_timeout 10;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    include /etc/nginx/conf.d/*.conf;
    server
    {
        server_name domain.com www.domain.com;
        root /usr/share/nginx/html;
        index index.html index.htm;
        location /api/
        {
            proxy_redirect off;
            proxy_pass http://localhost:3001/;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
            #TEMP fix
            proxy_connect_timeout 3s;
        }
    }
}
DOMAIN2 conf:
server {
    server_name domain2.com www.domain2.com;
    location /api/
    {
        proxy_redirect off;
        proxy_pass http://localhost:5000/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        #TEMP fix
        proxy_connect_timeout 3s;
    }
}

I found the answer. However, I still don't fully understand why and how. I suspect setting up the keep-alive wasn't working as it should. I read the documentation and found the answer there: https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
For both the configuration files I added an 'upstream' block.
i.e.
DOMAIN2.CONF:
upstream backend
{
    server 127.0.0.1:5000;
    keepalive 16;
}
location /api/
{
    proxy_redirect off;
    proxy_pass http://backend/;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    ...
    # REMOVED THE TEMP FIX
}
Make sure to:
Clear the Connection header
Use 127.0.0.1 instead of localhost in upstream block
Set http version to 1.1
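
The three conditions listed in the answer above can be checked mechanically. The following Python check is an added example, not code from the original posts; its function names and the sample config are constructed for illustration. The nginx documentation states that a name resolving to several addresses is used round-robin, which is why a hostname in the upstream block is flagged.

```python
import re

# Constructed sample, modelled on the DOMAIN2 snippets above: an upstream
# block with keepalive has been added, but the rest is unchanged.
BEFORE = """
upstream backend { server localhost:5000; keepalive 16; }
location /api/ {
    proxy_pass http://backend/;
    proxy_http_version 1.1;
    proxy_set_header Connection keep-alive;
}
"""
AFTER = BEFORE.replace("localhost", "127.0.0.1").replace("keep-alive", '""')


def directives(body):
    """Split a brace-free block body into [name, arg, ...] lists.
    Simplification: assumes no ';' or spaces inside quoted arguments."""
    body = re.sub(r"#[^\n]*", "", body)
    return [d.split() for d in body.split(";") if d.strip()]


def check_upstream_keepalive(conf):
    """Findings for locations that proxy to an upstream using keepalive."""
    findings, pools = [], set()
    for name, body in re.findall(r"upstream\s+(\S+)\s*\{([^{}]*)\}", conf):
        d = directives(body)
        if not any(x[0] == "keepalive" for x in d):
            continue
        pools.add(name)
        for x in d:
            # Anything starting with a letter (except unix: sockets) is a
            # name that may resolve to several addresses, e.g. ::1 as well.
            if x[0] == "server" and re.match(r"(?!unix:)[A-Za-z]", x[1]):
                findings.append(f"upstream {name}: server {x[1]} is a hostname")
    for path, body in re.findall(r"location\s+(\S+)\s*\{([^{}]*)\}", conf):
        d = directives(body)
        target = next((x[1] for x in d if x[0] == "proxy_pass"), "")
        m = re.match(r"https?://([^/:]+)", target)
        if not m or m.group(1) not in pools:
            continue
        if ["proxy_http_version", "1.1"] not in d:
            findings.append(f"location {path}: proxy_http_version is not 1.1")
        if not any(x[:2] == ["proxy_set_header", "Connection"]
                   and x[2:] in (['""'], ["''"]) for x in d):
            findings.append(f"location {path}: Connection header is not cleared")
    return findings
```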

Meteor app using NGINX as load balancer

I have a Meteor app deployed on DigitalOcean (Ubuntu 14.04). I was able to set up nginx and deployed my app successfully using mup. However, the problem is, this app will be used by our company and almost 95% of the total population of users have the same IP. We tested the ip_hash directive but it only directs us to one of our servers.
I tried different options, but I can't seem to figure out what was wrong with our configuration. With this setup, load balancing doesn't make any sense because all users will always be directed to just 1 server.
What do you think is the best nginx configuration for this?
Please see code below:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
upstream unifyhub {
    ip_hash;
    server 111.222.333.44:3000; # server 1
    server 555.666.777.88:3000; # server 2
}
server {
    listen 80;
    #listen [::]:80 ipv6only=on;
    server_name www.unifyhub.com;
    access_log /var/log/nginx/unify.access.log;
    error_log /var/log/nginx/unify.error.log;
    location / {
        proxy_pass http://unifyhub;
        #proxy_set_header X-Real-IP $remote_addr; # http://wiki.nginx.org/HttpProxyModule
        #proxy_set_header Host $host; # pass the host header - http://wiki.nginx.org/HttpProxyModule#proxy_pass
        #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade; # allow websockets
        proxy_set_header Connection $connection_upgrade;
        add_header Cache-Control no-cache;
    }
}
TIA!

WebSocket opening handshake timed out

I'm working on a Google Cloud Compute Engine instance. Ubuntu 12.04.
I have a Tornado app installed on the server working on port 8888 and I have nginx configuration as shown below:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
upstream chat_servers {
    server 127.0.0.1:8888;
}
server {
    listen 80;
    server_name chat.myapp.com;
    access_log /home/ubuntu/logs/nginx_access.log;
    error_log /home/ubuntu/logs/nginx_error.log;
    location /talk/ {
        proxy_set_header X-Real-IP $remote_addr; # http://wiki.nginx.org/HttpProxyModule
        proxy_set_header Host $host; # pass the host header - http://wiki.nginx.org/HttpProxyModule#proxy_pass
        proxy_http_version 1.1; # recommended with keepalive connections - http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version
        # WebSocket proxying - from http://nginx.org/en/docs/http/websocket.html
        proxy_connect_timeout 7d;
        proxy_send_timeout 7d;
        proxy_read_timeout 7d;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_pass http://chat_servers;
    }
}
When I try to connect to ws://chat.myapp.com/talk/etc/ via JavaScript, the Tornado app's open() method on WebSocketHandler gets called and I print the log on the server successfully, but on the client side, the code never enters onopen() and after some time I get the 1006 error code, "WebSocket opening handshake timed out".
This app was working fine on Amazon (AWS) EC2 server with the same configuration but after I moved to Google Cloud, somehow the handshake cannot be done.
Is there any configuration specific to Google Cloud? Or any nginx update on the file?
I am confused and I spent two days on this but couldn't solve the problem.

The default nginx version on Ubuntu was nginx/1.1.19. I updated it to nginx/1.8.0. The problem is solved.

private_pub/faye and nginx tcp -- 502 Bad Gateway

So I got the tcp module for nginx all set up and am trying to use this with private_pub (faye) for websockets. As of now I'm getting very slow loading from faye and 502 Bad Gateway errors. Everyone points towards configuring it like so:
I have this in my nginx.conf:
tcp {
    timeout 1d;
    websocket_read_timeout 1d;
    websocket_send_timeout 1d;
    upstream websockets {
        server 199.36.105.34:9292;
        check interval=300 rise=2 fall=5 timeout=1000;
    }
    server {
        listen 9200;
        server_name 2u.fm;
        timeout 43200000;
        websocket_connect_timeout 43200000;
        proxy_connect_timeout 43200000;
        so_keepalive on;
        tcp_nodelay on;
        websocket_pass websockets;
    }
}
I've tried every variation of that on the web. I want to be able to hit it from my domain "2u.fm/faye" but the only way I can get that to work is to do a proxy inside my http block:
location /faye {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://127.0.0.1:9200;
    break;
}
Adding that makes it work at 2u.fm/faye but now I'm back at square one, still getting super slow responses and 502 Bad Gateway errors. Which I think makes sense as it's still routing through http and not directly to tcp. I've tried hitting 199.36.105.34:9200 directly but I get no response.
