Nginx Websockets and keepalive_timeout - nginx

I am using Nginx (nginx/1.10.2) as a reverse proxy to backend servers. I have websockets that need a long-lived connection. I have the following lines in the http part of the config:
keepalive_timeout 0;
proxy_read_timeout 5d;
proxy_send_timeout 5d;
I understand the proxy_read_timeout and proxy_send_timeout lines as per the documentation. However, how does keepalive_timeout come into this? Should I set keepalive_timeout to 0 to basically have no timeout, or should I set it to a high value?
What does this directive actually do? I didn't find the documentation very clear on this parameter: http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout
Also, how will setting or disabling keepalive_timeout affect the other static pages I'm loading? Is it possible to set these timeout values for just the websocket? The documentation lists them under the http module, so I wasn't sure whether I can set them within specific locations:
location /websock {
# limit connections to 10
limit_conn addr 10;
proxy_set_header Host $host;
proxy_pass http://backends;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
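For what it's worth, nginx documents keepalive_timeout, proxy_read_timeout and proxy_send_timeout as valid in http, server and location context, so a per-location sketch (reusing the values from above; treat it as illustrative rather than a verified answer) would look like:
location /websock {
limit_conn addr 10;
proxy_set_header Host $host;
proxy_pass http://backends;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# long proxy timeouts scoped to the websocket location only,
# so static pages served elsewhere keep the defaults
proxy_read_timeout 5d;
proxy_send_timeout 5d;
}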

Related

Experiencing random timeouts for nginx proxy pass

I have been battling this issue for some days now. I found a temporary solution but just can't wrap my head around what exactly is happening.
So what happens is that one request is handled immediately, and if I send the same request right after, it hangs on 'waiting' for 60 seconds. If I cancel the request and send a new one, it is handled correctly again. If I send a request after this one, it hangs again. This cycle repeats.
It sounds like a load-balancing issue, but I didn't set one up. Does nginx have some sort of default load balancing for connections to the upstream server?
The error received is upstream timed out (110: Connection timed out).
I found that after changing this proxy parameter, it only hangs for 3 seconds, and every subsequent request (after the one that waited) is handled fine, because of a working keep-alive connection I suppose:
proxy_connect_timeout 3s;
It looks like setting up a connection to the upstream times out, and after the timeout it tries again and succeeds. Also, in the "(cancelled) request - ok request - (cancelled) request" cycle described above, no keep-alive is set up; that only happens if I wait for the request to complete, which takes 60 seconds without the above setting and is unacceptable.
It happens for both domains.
NGINX conf:
worker_processes 1;
events
{
worker_connections 1024;
}
http
{
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
gzip on;
# Timeouts
client_body_timeout 12;
client_header_timeout 12;
send_timeout 10;
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
server
{
server_name domain.com www.domain.com;
root /usr/share/nginx/html;
index index.html index.htm;
location /api/
{
proxy_redirect off;
proxy_pass http://localhost:3001/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
#TEMP fix
proxy_connect_timeout 3s;
}
}
}
DOMAIN2 conf:
server {
server_name domain2.com www.domain2.com;
location /api/
{
proxy_redirect off;
proxy_pass http://localhost:5000/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
#TEMP fix
proxy_connect_timeout 3s;
}
}
I found the answer, though I still don't fully understand why and how. I suspect the keep-alive connection wasn't being set up as it should. I read the documentation and found the answer there: https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
For both configuration files I added an 'upstream' block, e.g.:
DOMAIN2.CONF:
upstream backend
{
server 127.0.0.1:5000;
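# keep up to 16 idle connections to this upstream open in each worker's cache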
keepalive 16;
}
location /api/
{
proxy_redirect off;
proxy_pass http://backend/;
proxy_http_version 1.1;
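# sending an empty Connection header (instead of the default "close") is required for upstream keepalive to work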
proxy_set_header Connection "";
...
# REMOVED THE TEMP FIX
}
Make sure to:
Clear the Connection header
Use 127.0.0.1 instead of localhost in the upstream block
Set the HTTP version to 1.1
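Applied to the first domain, the same pattern would presumably look like this (a sketch; the upstream name is illustrative, the port is taken from the domain.com config above):
upstream backend_api
{
server 127.0.0.1:3001;
keepalive 16;
}
location /api/
{
proxy_redirect off;
proxy_pass http://backend_api/;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host $host;
}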

Nginx $args manipulation for proxy_pass

I am trying to work out how to modify $args before using them in a set variable command.
I am moving from a service that uses addresses formatted like this:
http://server/proxy/account?mp=/stream
and need them to be sent to the new service like this:
http://server:2000/system/proxy.php?unique_id=account&mounturl=stream
So far I have been able to use this location block to get the stream account name:
location ~ ^/proxy//?([^/]+)/?([^/]+)? {
set $proxy_url https://127.0.0.1:2000/system/proxy.php?unique_id=$1&mounturl=$2;
proxy_buffering off;
proxy_ignore_client_abort off;
proxy_intercept_errors off;
proxy_redirect off;
proxy_next_upstream error timeout invalid_header;
proxy_pass_request_headers on;
proxy_set_header Cache-Control no-cache;
proxy_set_header User-Agent "$http_user_agent [ip:$remote_addr]";
proxy_set_header X-Forwarded-For $remote_addr;
proxy_connect_timeout 5;
proxy_send_timeout 15;
proxy_read_timeout 15;
proxy_max_temp_file_size 0;
proxy_pass $proxy_url;
expires off;
client_max_body_size 1M;
tcp_nodelay on;
}
However, I cannot work out what I need to use to change the mount point from /stream to just stream.
I'm aware I can use $arg_mp as a variable directly, but I need to present it without the leading slash, and frankly I don't have any hair left to pull out. Can anybody point me in the right direction, please?
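One way to do this, offered as a sketch rather than a verified fix: a map block (which must sit in the http context, outside the server) can strip the leading slash from $arg_mp into a new variable. The variable name $mount_no_slash is illustrative:
map $arg_mp $mount_no_slash {
default $arg_mp;
~^/(?<rest>.*)$ $rest;
}
Inside the location block, the set line would then use the mapped variable instead of the second path capture:
set $proxy_url https://127.0.0.1:2000/system/proxy.php?unique_id=$1&mounturl=$mount_no_slash;
Using a named capture (rest) rather than $1 avoids clashing with the captures from the location regex.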

nginx and proxying WebSockets

I'm trying to proxy WebSocket + HTTP traffic with nginx.
I have read this: http://nginx.org/en/docs/http/websocket.html
My config looks like:
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
gzip on;
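# map the client's Upgrade header to the Connection header sent upstream:
# "upgrade" when the client asks for a WebSocket upgrade, "close" otherwise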
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 80;
server_name ourapp.com;
location / {
proxy_pass http://127.0.0.1:100;
proxy_http_version 1.1;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
}
}
I have 2 problems:
1) The connection closes once a minute.
2) I want to run both HTTP and WS on the same port. The application works fine locally, but if I try to put HTTP and WS on the same port and set this nginx proxy, I get this:
WebSocket connection to 'ws://ourapp.com/ws' failed: Unexpected response code: 200
Loading the app (HTTP) seems to work fine, but WebSocket connection fails.
Problem 1: As for the connection dying once a minute, I realized that it's an nginx timeout variable. I can either make our app ping once in a while or increase the timeout. I'm not sure if I should set it to 0, so I decided to just ping once a minute and set the timeout to 90 seconds (keepalive_timeout).
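It may also be worth noting that keepalive_timeout governs idle client keep-alive connections, while for a proxied WebSocket the 60-second default of proxy_read_timeout is usually what closes an idle connection; raising it in the proxy location is a common alternative to pinging. A sketch with illustrative values:
location / {
# existing proxy_pass and header settings from the config above go here
proxy_read_timeout 90s;
proxy_send_timeout 90s;
}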
Problem 2: Connectivity issues arose when I used the CloudFlare CDN. Disabling CloudFlare acceleration solved the problem.
Alternatively, I could create a subdomain, set it as "unaccelerated", and use that for WS.

private_pub/faye and nginx tcp -- 502 Bad Gateway

So I got the tcp module for nginx all set up and am trying to use it with private_pub (faye) for websockets. As of now I'm getting very slow loading from faye and 502 Bad Gateway errors. Everyone points towards configuring it like so:
I have this in my nginx.conf:
tcp {
timeout 1d;
websocket_read_timeout 1d;
websocket_send_timeout 1d;
upstream websockets {
server 199.36.105.34:9292;
check interval=300 rise=2 fall=5 timeout=1000;
}
server {
listen 9200;
server_name 2u.fm;
timeout 43200000;
websocket_connect_timeout 43200000;
proxy_connect_timeout 43200000;
so_keepalive on;
tcp_nodelay on;
websocket_pass websockets;
}
}
I've tried every variation of that on the web. I want to be able to hit it from my domain "2u.fm/faye" but the only way I can get that to work is to do a proxy inside my http block:
location /faye {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://127.0.0.1:9200;
break;
}
Adding that makes it work at 2u.fm/faye, but now I'm back at square one, still getting super slow responses and 502 Bad Gateways, which I think makes sense as it's still routing through http and not directly to tcp. I've tried hitting 199.36.105.34:9200 directly but I get no response.
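One observation, hedged rather than verified: the http-level location above proxies without proxy_http_version 1.1 or the Upgrade/Connection headers that nginx (1.3.13 and later) needs to pass WebSocket upgrades, which could explain the failures when going through the http block. A sketch of that variant, pointing straight at the faye backend from the upstream block above:
location /faye {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://199.36.105.34:9292;
}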

nginx proxy to comet

I need some help from some linux gurus. I am working on a webapp that includes a comet server. The comet server runs on localhost:8080 and exposes the url localhost:8080/long_polling for clients to connect to. My webapp runs on localhost:80.
I've used nginx to proxy requests to the comet server (localhost:80/long_polling proxied to localhost:8080/long_polling); however, I have two gripes with this solution:
nginx gives me a 504 Gateway Time-out after a minute, even though I changed EVERY single timeout setting to 600 seconds
I don't really want nginx to have to proxy to the comet server anyway - the nginx proxy is not built for long-lasting connections (up to half an hour possibly). I would rather allow the clients to directly connect to the comet server, and let the comet server deal with it.
So my question is: is there any linux trick that allows me to expose localhost:8080/long_polling to localhost:80/long_polling without using the nginx proxy? There must be something. That's why I think this question can probably be best answered by a linux guru.
The reason I need /long_polling to be exposed on port 80 is so I can use AJAX to connect to it (AJAX same-origin policy).
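For reference, the location block doing that proxying is not shown in the post; under the setup described it would presumably be something along these lines, with proxy.conf (below) supplying the timeout and buffering settings:
location /long_polling {
proxy_pass http://localhost:8080/long_polling;
}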
This is my nginx proxy.conf for reference:
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
send_timeout 600;
proxy_buffering off;
Here's my nginx.conf and my proxy.conf. Note however that the proxy.conf is way overkill - I was just setting all these settings while trying to debug my program.
/etc/nginx/nginx.conf
worker_processes 1;
user www-data;
error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/proxy.conf;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
sendfile on;
tcp_nopush on;
keepalive_timeout 600;
tcp_nodelay on;
gzip on;
gzip_comp_level 2;
gzip_proxied any;
gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
/etc/nginx/proxy.conf
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 6000;
proxy_send_timeout 6000;
proxy_read_timeout 6000;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
send_timeout 6000;
proxy_buffering off;
proxy_next_upstream error;
I actually managed to get this working now. Thank you all. The reason nginx was 504 timing out was a silly one: I hadn't included proxy.conf in my nginx.conf like so:
include /etc/nginx/proxy.conf;
So, I'm keeping nginx as a frontend proxy to the COMET server.
I don't think that is possible...
localhost:8080/long_polling is a URI... more exactly, it would be http://localhost:8080/long_polling... in HTTP that URI would be resolved as requesting /long_polling from the server at the domain 'localhost'... that is, opening a TCP connection to 127.0.0.1:80 and sending
GET /long_polling HTTP/1.1
Host: localhost:8080
plus some additional HTTP headers... I haven't yet heard that ports can be bound across processes...
Actually, if I understand correctly, nginx was designed to be a scalable proxy... also, they claim they need 2.5 MB for 10000 idling HTTP connections... so that really shouldn't be a problem...
What comet server are you using? Could you maybe let the comet server proxy a webserver? Normal HTTP requests should be handled quickly...
greetz
back2dos
Try
proxy_next_upstream error;
The default is
proxy_next_upstream error timeout;
The connect timeout cannot be more than 75 seconds.
http://wiki.nginx.org/NginxHttpProxyModule#proxy_next_upstream
http://wiki.nginx.org/NginxHttpProxyModule#proxy_connect_timeout
There is now a Comet plugin for Nginx. It will probably solve your issues quite nicely.
http://www.igvita.com/2009/10/21/nginx-comet-low-latency-server-push/
Without doing some serious TCP/IP munging, you can't expose two applications on the same TCP port on the same IP address. Once nginx has started to service the connection, it can't pass it to another application; it can only proxy it.
So either use another port, another IP address (could be on the same physical machine), or live with the proxy.
Edit: I guess nginx is timing out because it doesn't see any activity for a long time. Maybe adding a null message every few minutes could keep the connection from failing.
You might want to try listen(80) on the node.js server instead of 8080 (I presume you are using it as an async server?) and potentially leave out Nginx altogether. I use Connect middleware and Express to serve static files and deal with caching that would normally be handled by Nginx. If you want to have multiple instances of node running (which I would advise), you might want to look into node.js itself as a proxy / load balancer to other node instances rather than Nginx as your gateway. I ran into a problem with this when I was serving too many static image files at once, but after I put the images on S3 it stabilized. Nginx MAY be overkill for what you are doing. Try it and see. Best of luck.
