I have been battling this issue for a few days now. I found a temporary solution but just can't wrap my head around what exactly is happening.
What happens is that one request is handled immediately, but if I send the same request right after it, it hangs in 'waiting' for 60 seconds. If I cancel the request and send a new one, it is handled correctly again; the request after that hangs again. This cycle repeats.
It sounds like a load-balancing issue, but I didn't set one up. Does nginx have some sort of default load balancing for connections to the upstream server?
The error received is upstream timed out (110: Connection timed out).
I found that with the proxy parameter below, it only hangs for 3 seconds, and every subsequent request (after the delayed one) is handled fine, presumably because of a working keep-alive connection:
proxy_connect_timeout 3s;
It looks like setting up a connection to the upstream times out, and after the timeout nginx tries again and succeeds. Also, in the "(cancelled) request, OK request, (cancelled) request" cycle described above, no keep-alive connection is set up; that only happens if I wait for the request to complete, which takes 60 seconds without the setting above and is unacceptable.
It happens for both domains.
NGINX conf:
worker_processes 1;

events
{
    worker_connections 1024;
}

http
{
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    gzip on;

    # Timeouts
    client_body_timeout 12;
    client_header_timeout 12;
    send_timeout 10;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    include /etc/nginx/conf.d/*.conf;

    server
    {
        server_name domain.com www.domain.com;
        root /usr/share/nginx/html;
        index index.html index.htm;

        location /api/
        {
            proxy_redirect off;
            proxy_pass http://localhost:3001/;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
            # TEMP fix
            proxy_connect_timeout 3s;
        }
    }
}
DOMAIN2 conf:
server {
    server_name domain2.com www.domain2.com;

    location /api/ {
        proxy_redirect off;
        proxy_pass http://localhost:5000/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        # TEMP fix
        proxy_connect_timeout 3s;
    }
}
I found the answer. However, I still don't fully understand why and how. I suspect the keep-alive connection wasn't being set up as it should. I read the documentation and found the answer there: https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
I added an 'upstream' block to both configuration files, e.g.:
DOMAIN2.CONF:
upstream backend {
    server 127.0.0.1:5000;
    keepalive 16;
}

location /api/ {
    proxy_redirect off;
    proxy_pass http://backend/;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    ...
    # REMOVED THE TEMP FIX
}
Make sure to:
Clear the Connection header
Use 127.0.0.1 instead of localhost in the upstream block (localhost can resolve to both 127.0.0.1 and ::1; if nginx alternates between the two and one of them is unreachable, that would match the every-other-request hang)
Set the HTTP version to 1.1
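For reference, a minimal sketch of the complete working setup, assembled only from the snippets above:

upstream backend {
    server 127.0.0.1:5000;
    keepalive 16;
}

server {
    server_name domain2.com www.domain2.com;

    location /api/ {
        proxy_pass http://backend/;
        proxy_http_version 1.1;
        # an empty Connection header lets nginx reuse the upstream
        # connections held open by the keepalive directive
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}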
We have a UI application deployed using Nginx.
When any service goes down, the response changes to 504 Gateway Timeout, which is fine. However, when the service comes back up this is not detected, and 504 Gateway Timeout is still returned to the UI.
I have tried a bunch of Nginx configs but none seems to work.
Some of the configs tried:
location ~ ^/(hub/note|rest/api/note) {
    proxy_pass http://localhost:8011;
    proxy_buffering off;
    proxy_http_version 1.1;
    proxy_set_header Host $http_host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_cache_bypass $http_pragma;
    proxy_no_cache $http_pragma;
    proxy_cache off;
    proxy_cache_use_stale off;
    proxy_connect_timeout 180s;
    proxy_read_timeout 180s;
    proxy_send_timeout 180s;
}
Did anyone face a similar issue?
UPDATE: If I reload Nginx using nginx -s reload, connections are successful.
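One pattern that matches the works-after-reload symptom is nginx resolving the backend's hostname only once, at startup: if the backend's address changes when it restarts (common with Docker or other dynamic environments), nginx keeps proxying to the stale address until a reload. A sketch of the usual workaround, forcing per-request resolution with a variable; the resolver address and the backend hostname here are assumptions, not from the thread:

location ~ ^/(hub/note|rest/api/note) {
    # 127.0.0.11 is Docker's embedded DNS resolver; use your own otherwise
    resolver 127.0.0.11 valid=10s;
    # proxying to a variable forces nginx to re-resolve the hostname at
    # request time instead of caching the address from startup
    set $note_backend http://notes-service:8011;   # hypothetical hostname
    proxy_pass $note_backend;
    proxy_http_version 1.1;
    proxy_set_header Host $http_host;
}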
Okay, so I've set up an nginx server that proxies to another 2 servers with load balancing. The only thing missing now is the cookies.
I've been searching numerous forums and questions regarding the rewriting of cookies. Can anyone please give me insight as to how to fix this issue?
The web application deployed to the 2 servers is written with Vaadin.
The 2 servers are running TomEE, on ports 8080 and 8081 for example.
I'm rewriting through nginx from easy.io to server1:8080 and server2:8080.
Refer to the images below: when navigating to server1:8080/myapplication, all my cookies are available.
https://ibb.co/X86pvCq
https://ibb.co/0M0GjCt
Refer to the image below: when navigating to http://worksvdnui.io/, my cookies are not available.
https://ibb.co/qBkBRqb
I've tried using proxy_cookie_path and proxy_set_header Cookie $http_cookie, but to no avail.
Here's the code:
upstream worksvdnuiio {
    # ip_hash; sticky sessions!
    ip_hash;
    # server localhost:8080;
    server hades:9090;
    server loki:9090;
}

server {
    listen 80;
    listen [::]:80;
    server_name worksvdnui.io;

    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    location /PUSH {
        proxy_pass "http://worksvdnuiio/test.qa.gen/PUSH";
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_buffering off;
        proxy_ignore_client_abort off;
        proxy_read_timeout 84600s;
        proxy_send_timeout 84600s;
        break;
    }

    location / {
        proxy_pass "http://worksvdnuiio/test.qa.gen/";
        proxy_cookie_path /test.qa.gen/ /;
        proxy_set_header Cookie $http_cookie;
        proxy_pass_request_headers on;
    }
}
Any insight would be VALUABLE!
Thanks in advance.
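For what it's worth (a guess, not something this thread confirms): proxy_cookie_path does a prefix match on the cookie's Path attribute, and TomEE typically issues JSESSIONID with Path=/test.qa.gen, without a trailing slash, so the rule /test.qa.gen/ above would never fire. A sketch of the variant worth trying:

location / {
    proxy_pass "http://worksvdnuiio/test.qa.gen/";
    # match the Path attribute without a trailing slash, so that
    # Path=/test.qa.gen (TomEE's usual form) is rewritten too
    proxy_cookie_path /test.qa.gen /;
    # only relevant if the backend also sets a Domain attribute (assumption)
    proxy_cookie_domain hades worksvdnui.io;
}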
I am using Nginx (nginx/1.10.2) as a reverse proxy to back-end servers. I have websockets on which I need to ensure a long-lived connection. I have the following lines in the http part of the config:
keepalive_timeout 0;
proxy_read_timeout 5d;
proxy_send_timeout 5d;
I understand the proxy_read_timeout and proxy_send_timeout lines as per the documentation. However, how does keepalive_timeout come into this? Should I set keepalive_timeout to 0 to basically have no timeout, or should I set it to a high value?
What does this actually do? I didn't really find the documentation that clear on this parameter: http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout
Also, how will setting or disabling keepalive_timeout affect the other static pages that I'm loading? Is it possible to set these timeout values for just the websocket? The documentation has them under the http module, so I wasn't sure if I can set them within specific locations:
location /websock {
    # limit connections to 10
    limit_conn addr 10;
    proxy_set_header Host $host;
    proxy_pass http://backends;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
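On the location-scoping part of the question: per the nginx documentation, proxy_read_timeout, proxy_send_timeout, and keepalive_timeout are all valid in location context, so a sketch like the following (timeout values copied from above) should confine the long timeouts to the websocket endpoint only:

location /websock {
    limit_conn addr 10;
    proxy_set_header Host $host;
    proxy_pass http://backends;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    # these override the http-level defaults for this location only,
    # leaving timeouts for static pages untouched
    proxy_read_timeout 5d;
    proxy_send_timeout 5d;
}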
I have the following configuration on an NGINX server which acts as a reverse proxy to my Docker machine, located at 192.168.99.100:3150.
Basically, I need to hit http://localhost:8150 and have the displayed content be the content from inside Docker.
The configuration below is doing its job.
The point here is that when hitting localhost:8150 I'm getting HTTP status code 302, and I would like to get status code 200.
Does anyone know if that can be done in Nginx, or any other way to do it?
server {
    listen 8150;

    location / {
        proxy_pass http://192.168.99.100:3150;
    }
}
Response from a request to http://localhost:8150/products
HTTP Requests
-------------
GET /projects 302 Found
I have found the solution.
It looks like a simple proxy_pass doesn't work quite right with ngrok.
I'm using proxy_pass with an upstream block and it's working fine.
Below is my configuration.
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    upstream rorweb {
        server 192.168.99.100:3150 fail_timeout=0;
    }

    server {
        listen 8150;
        server_name git.example.com;
        server_tokens off;
        root /dev/null;
        client_max_body_size 20m;

        location / {
            proxy_read_timeout 300;
            proxy_connect_timeout 300;
            proxy_redirect off;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Frame-Options SAMEORIGIN;
            proxy_pass http://rorweb;
        }
    }

    include servers/*;
}
My environment is like this:
Docker (running a Rails project on port 3150)
Nginx (as a reverse proxy exposing port 8150)
Ngrok (exposing my localhost/nginx)
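A guess at why the upstream version behaves differently (an assumption on my part, not something this answer verifies): it may be less the upstream block itself than the headers added alongside it. Without proxy_set_header Host, the backend sees a host it doesn't expect and may redirect to its canonical address, producing the 302. A minimal sketch of just that change applied to the original config:

server {
    listen 8150;

    location / {
        proxy_pass http://192.168.99.100:3150;
        # forward the browser's Host header so the backend does not
        # redirect to the host it considers canonical
        proxy_set_header Host $http_host;
    }
}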
I'm trying to proxy WebSocket + HTTP traffic with nginx.
I have read this: http://nginx.org/en/docs/http/websocket.html
My config looks like:
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    gzip on;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    server {
        listen 80;
        server_name ourapp.com;

        location / {
            proxy_pass http://127.0.0.1:100;
            proxy_http_version 1.1;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}
I have 2 problems:
1) The connection closes once a minute.
2) I want to run both HTTP and WS on the same port. The application works fine locally, but if I put HTTP and WS on the same port behind this nginx proxy, I get:
WebSocket connection to 'ws://ourapp.com/ws' failed: Unexpected response code: 200
Loading the app over HTTP seems to work fine, but the WebSocket connection fails.
Problem 1: As for the connection dying once a minute, I realized it's an nginx timeout value. I can either make our app ping once in a while or increase the timeout. I wasn't sure I should set it to 0, so I decided to just ping once a minute and set the timeout to 90 seconds (keepalive_timeout; note that the WebSocket documentation linked above attributes the 60-second default for idle proxied connections to proxy_read_timeout).
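A sketch of the timeout-based alternative, scoped to the proxied location (the 90s values are illustrative):

location / {
    proxy_pass http://127.0.0.1:100;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    # the nginx WebSocket docs attribute the 60-second idle close to
    # proxy_read_timeout; raising it is the alternative to pinging
    proxy_read_timeout 90s;
    proxy_send_timeout 90s;
}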
Problem 2: The connectivity issues arose because I was using the CloudFlare CDN. Disabling CloudFlare acceleration solved the problem.
Alternatively, I could create a subdomain, mark it as "unaccelerated", and use that for WS.