Nginx Request Time Latency Spikes

I am using Nginx as a reverse proxy in front of my backend (a Java app with Spring Boot). Overall (avg, p50, p90, p95, p99 latencies) it performs well, but from time to time I see latency spikes of around 100-200 milliseconds. When I enabled the access logs, I saw that the upstream response time ($upstream_response_time) is very low even though the request time ($request_time) is high. For example:
[25/Apr/2020:18:28:17 +0000] "XXX" XXX - request="POST /v1/composite-monitoring-data HTTP/1.1" status=429 request_time=0.081 trace_id="Root=1-5ea48141-2f8e07a4c7c71a1360d9c5f5" request_length=9864 bytes_sent=979 body_bytes_sent=623 upstream_addr=127.0.0.1:5000 upstream_status=429 upstream_response_time=0.004 upstream_connect_time=0.000 upstream_header_time=0.004 user_agent="okhttp/3.10.0" current_time_msec=1587839297.256
...
[25/Apr/2020:18:28:17 +0000] "XXX" XXX - request="POST /v1/composite-monitoring-data HTTP/1.1" status=429 request_time=0.084 trace_id="Root=1-5ea48141-51f0d12a6f7c4b0651f6ef42" request_length=20534 bytes_sent=979 body_bytes_sent=623 upstream_addr=127.0.0.1:5000 upstream_status=429 upstream_response_time=0.000 upstream_connect_time=0.000 upstream_header_time=0.000 user_agent="okhttp/3.10.0" current_time_msec=1587839297.278
Also here is my nginx.conf file:
user nginx;
pid /var/run/nginx.pid;
error_log /var/log/nginx/error.log;
worker_processes auto;
worker_rlimit_nofile 32768;

events {
    worker_connections 4096;
    use epoll;
    multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    include /etc/nginx/conf.d/*.conf;
    default_type application/json;

    sendfile on;
    tcp_nopush off;
    tcp_nodelay on;

    keepalive_timeout 300;
    keepalive_requests 10000;

    client_body_timeout 15;
    client_header_timeout 15;
    client_body_buffer_size 4m;
    client_max_body_size 4m;

    log_format main '[$time_local] "$http_x_forwarded_for" $remote_addr - '
                    'request="$request" status=$status request_time=$request_time trace_id="$http_x_amzn_trace_id" '
                    'request_length=$request_length bytes_sent=$bytes_sent body_bytes_sent=$body_bytes_sent '
                    'upstream_addr=$upstream_addr '
                    'upstream_status=$upstream_status '
                    'upstream_response_time=$upstream_response_time '
                    'upstream_connect_time=$upstream_connect_time '
                    'upstream_header_time=$upstream_header_time '
                    'user_agent="$http_user_agent" '
                    'current_time_msec=$msec';
    access_log /var/log/nginx/access.log main;

    upstream http_backend {
        server 127.0.0.1:5000;
        keepalive 1024;
    }

    server {
        listen 80;
        listen [::]:80;
        server_name _ localhost;

        location /v1 {
            proxy_pass http://http_backend/v1;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Request-Start $msec;
            proxy_set_header Connection "";
            proxy_http_version 1.1;
            keepalive_timeout 300;
            keepalive_requests 10000;
        }

        location /ping {
            proxy_pass http://http_backend/ping;
        }
    }
}
What might cause this big difference between the request time and the upstream response time? Is there anything I need to configure that I haven't configured properly?
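(One detail that often explains this pattern: $request_time is measured from the first byte nginx reads from the client until the last byte of the response is written back, while $upstream_response_time covers only the nginx-to-backend leg. With the default proxy_request_buffering on, nginx reads the entire client body before it ever talks to the upstream, so a slow client body upload inflates $request_time but leaves the upstream timers untouched. A minimal diagnostic sketch, assuming the spikes come from the body-read phase, that streams the body upstream so the time shows up in the upstream counters as well:)

    location /v1 {
        # Stream the request body to the upstream instead of buffering it
        # first; if the spikes were client-body reads, they should now
        # appear in $upstream_response_time too. A diagnostic sketch, not
        # a recommended production default.
        proxy_request_buffering off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://http_backend/v1;
    }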

Related

What is the relationship between the server settings in nginx.conf and proxy.conf?

I am very new to NGINX.
In my project, a server is defined in both etc/nginx/nginx.conf and etc/nginx/conf.d/proxy.conf, and etc/nginx/conf.d/proxy.conf is included from nginx.conf.
I don't understand the relationship between the server settings in these two files. For example, in nginx.conf the server setting is listen 80; listen [::]:80;, while in proxy.conf it is listen 80 proxy_protocol;.
In the above example, which setting will be used in the actual communication?
Does the server setting in proxy.conf overwrite the server setting in nginx.conf,
or will the server setting in proxy.conf be merged into the one in nginx.conf?
Please find the full conf files below:
etc/nginx/conf.d/proxy.conf
client_max_body_size 500M;
server_names_hash_bucket_size 128;

upstream backend {
    server unix:///var/run/puma/my_app.sock;
}

server {
    listen 80 proxy_protocol;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    large_client_header_buffers 8 32k;
    set_real_ip_from 10.0.0.0/8;
    real_ip_header proxy_protocol;

    location / {
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $proxy_protocol_addr;
        proxy_set_header X-Forwarded-For $proxy_protocol_addr;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_buffers 8 32k;
        proxy_buffer_size 64k;
        proxy_pass http://backend;
        proxy_redirect off;

        # Enables WebSocket support
        location /v1/cable {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Upgrade "websocket";
            proxy_set_header Connection "Upgrade";
            proxy_set_header X-Real-IP $proxy_protocol_addr;
            proxy_set_header X-Forwarded-For $proxy_protocol_addr;
        }
    }
}
etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        listen [::]:80;
        server_name localhost;
        root /usr/share/nginx/html;

        location / {
        }
    }
}
Nginx selects a server block to process a request based on the values of the listen and server_name directives.
If a matching server name cannot be found, the default server for that port is used.
In the configuration in your question, the server block in proxy.conf is encountered first, so it becomes the de facto default server for port 80.
The server block in nginx.conf will only match requests that use the correct host name, i.e. http://localhost.
See this document for details.
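As an illustration of that selection order, a sketch that makes the default explicit instead of relying on include order (the server blocks are simplified from the question):

    # Marking a block default_server pins which one catches unmatched Hosts.
    server {
        listen 80 default_server;   # requests whose Host matches no server_name
        server_name _;
        return 444;                 # or proxy_pass, as in proxy.conf
    }

    server {
        listen 80;
        server_name localhost;      # only requests with "Host: localhost" land here
        root /usr/share/nginx/html;
    }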

NGINX: How do I remove a port when performing a reverse proxy?

I have an Nginx reverse proxy set up which is being used as an SSL offload for several servers such as confluence. I've got it successfully working for taking http://confluence and https://confluence but when I try to redirect http://confluence:8090, it tries to go to https://confluence:8090 and fails.
How can I remove the port from the URL?
The config below is a bit trimmed but maybe helpful? Is the $server_port bit in the headers causing the problem?
server {
    listen 8090;
    server_name confluence;
    return 301 https://confluence$request_uri;
}

server {
    listen 443 ssl http2;
    server_name confluence;

    location / {
        proxy_http_version 1.1;
        proxy_pass http://confbackend:8091;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $server_name:$server_port;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Upgrade $http_upgrade;          # WebSocket support
        proxy_set_header Connection $connection_upgrade; # WebSocket support
    }
}
Seems like a lot of answers here involve http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect but I find no solace in that confusing mess.
I also would have thought you'd have a single server but I was trying the advice from https://serverfault.com/questions/815797/nginx-rewrite-to-new-protocol-and-port
I tried messing with the port_in_redirect off; option but maybe I was using it wrong?
EDIT 1: Add conf files
The files below are modifications from the Artifactory nginx setup. I used their setup initially and added additional conf files (in ./conf.d/) for other RP endpoints.
Confluence.conf
server {
    listen 8090 ssl http2;
    server_name confluence.domain.com confluence;
    ## return 301 https://confluence.domain.com$request_uri;
    proxy_redirect https://confluence.domain.com:8090 https://confluence.domain.com;
}

server {
    ## add ssl entries when https has been set in config
    ssl_certificate /data/rpssl/confluence.pem;
    ssl_certificate_key /data/rpssl/confluence_unencrypted.key;

    ## server configuration
    listen 443 ssl http2;
    server_name confluence.domain.com confluence;
    add_header Strict-Transport-Security max-age=31536000;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }

    ## Application specific logs
    access_log /var/log/nginx/confluence-access.log timing;
    error_log /var/log/nginx/confluence-error.log;

    client_max_body_size 0;
    proxy_read_timeout 1200;
    proxy_connect_timeout 240;

    location / {
        proxy_http_version 1.1;
        proxy_pass http://backendconfluence.domain.com:8091;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $server_name:$server_port;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Upgrade $http_upgrade;          # WebSocket support
        proxy_set_header Connection $connection_upgrade; # WebSocket support
    }
}
nginx.conf
# Main Nginx configuration file
worker_processes 4;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
worker_rlimit_nofile 4096;

events {
    worker_connections 2048;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    variables_hash_max_size 1024;
    variables_hash_bucket_size 64;
    server_names_hash_max_size 4096;
    server_names_hash_bucket_size 128;
    types_hash_max_size 2048;
    types_hash_bucket_size 64;

    proxy_read_timeout 2400s;
    client_header_timeout 2400s;
    client_body_timeout 2400s;
    proxy_connect_timeout 75s;
    proxy_send_timeout 2400s;
    proxy_buffer_size 32k;
    proxy_buffers 40 32k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 250m;
    proxy_http_version 1.1;
    client_body_buffer_size 128k;

    map $http_upgrade $connection_upgrade { # WebSocket support
        default upgrade;
        '' '';
    }

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    log_format timing 'ip = $remote_addr '
                      'user = \"$remote_user\" '
                      'local_time = \"$time_local\" '
                      'host = $host '
                      'request = \"$request\" '
                      'status = $status '
                      'bytes = $body_bytes_sent '
                      'upstream = \"$upstream_addr\" '
                      'upstream_time = $upstream_response_time '
                      'request_time = $request_time '
                      'referer = \"$http_referer\" '
                      'UA = \"$http_user_agent\"';
    access_log /var/log/nginx/access.log timing;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
Your problem is the STS header:
add_header Strict-Transport-Security max-age=31536000;
When you add the STS header, the first request to http://example.com:8090 generates a redirect to https://example.com.
That https://example.com response then carries the STS header, and the browser remembers that example.com must always be served over https, no matter what; the port doesn't make a difference.
Now when you make another request to http://example.com:8090, STS kicks in and converts it to https://example.com:8090, which is your problem here.
Because a port can only serve http or https, you can't use 8090 both to redirect http to https AND to redirect https on 8090 to https on 443.
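A sketch of the usual way out, assuming clients only ever reach 8090 over plain http: serve 8090 without ssl and bounce straight to the canonical https origin on 443, keeping the STS header only in the 443 server block (host names taken from the question's config):

    server {
        listen 8090;   # plain http; no ssl and no STS header on this port
        server_name confluence.domain.com confluence;
        return 301 https://confluence.domain.com$request_uri;  # port dropped in the redirect
    }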

Preserve response headers in nginx

I have a reverse-proxy setup (I think) for gunicorn running a Falcon app. I was also able to set up SSL on the nginx server. The /etc/nginx/nginx.conf:
worker_processes 1;
user nobody nogroup;
pid /tmp/nginx.pid;
error_log /tmp/nginx.error.log;

events {
    worker_connections 1024; # increase if you have lots of clients
    accept_mutex off;        # set to 'on' if nginx worker_processes > 1
}

http {
    include mime.types;
    # fallback in case we can't determine a type
    default_type application/json;
    access_log /tmp/nginx.access.log combined;
    sendfile on;

    gzip on;
    gzip_http_version 1.0;
    gzip_proxied any;
    gzip_min_length 500;
    gzip_disable "MSIE [1-6]\.";
    gzip_types application/json;

    upstream app_server {
        # fail_timeout=0 means we always retry an upstream even if it failed
        # to return a good HTTP response
        server 127.0.0.1:6789 fail_timeout=0;
    }

    server {
        # if no Host match, close the connection to prevent host spoofing
        listen 80 default_server;
        return 444;
    }

    server {
        listen 443 ssl;
        client_max_body_size 4G;
        # set the correct host(s) for your site
        server_name 0.0.0.0;
        ssl_certificate /etc/nginx/ssl/nginx.crt;
        ssl_certificate_key /etc/nginx/ssl/nginx.key;
        keepalive_timeout 2;

        location / {
            proxy_bind $server_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Host $server_name;
            proxy_redirect off;
            proxy_pass http://app_server;
        }
    }
}
What do I need to change so that the response headers from gunicorn are preserved? Also, I am completely new to this, so is there anything else I should change?
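For what it's worth, nginx already forwards almost all upstream response headers unchanged; by default it hides only Date, Server, X-Pad, and X-Accel-* from the proxied response. If specific headers are disappearing, a sketch of the two directives that control this (the X-Internal-Secret name is hypothetical):

    location / {
        proxy_pass http://app_server;
        # Re-enable one of the few headers nginx hides by default:
        proxy_pass_header Server;
        # Or hide an upstream header you don't want clients to see:
        proxy_hide_header X-Internal-Secret;
    }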

Nginx cache hit with long $request_time

$upstream_cache_status is HIT, but $request_time sometimes lasts for 5s. What's the problem?
My nginx.conf
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    gzip on;
    client_max_body_size 30M;

    proxy_temp_path /tmp/proxy_temp_dir;
    proxy_cache_path /tmp/proxy_cache_dir levels=1:2 keys_zone=cache:500m inactive=1d max_size=500m;

    log_format cache_log '$remote_addr - [$request_time] $status $upstream_cache_status "$request"';

    server {
        access_log logs/access.log cache_log;
        error_log logs/error.log error;
        proxy_cache cache;
        proxy_cache_valid 10m;

        location / {
            proxy_next_upstream http_502 error timeout;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://xxxxxx;
        }
    }

    include /usr/local/openresty/nginx/conf/vhosts/*.conf;
}
And access.log:
x.x.x.x - [5.076] 200 HIT "GET /xxx"
x.x.x.x - [0.092] 200 HIT "GET /xxx"
Same request URL, and both hit the cache; why does $request_time last 5s or more?
Thanks.
It was a disk I/O problem: I moved the proxy_cache_path to another SSD and the problem was solved.
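For anyone wanting to confirm that diagnosis before swapping disks, pointing the cache at memory-backed storage is a quick test (the /dev/shm paths below assume a tmpfs mount is available):

    # If the multi-second outliers disappear with the cache on tmpfs,
    # the original cache disk was the bottleneck.
    proxy_temp_path  /dev/shm/proxy_temp_dir;
    proxy_cache_path /dev/shm/proxy_cache_dir levels=1:2 keys_zone=cache:500m inactive=1d max_size=500m;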

Numerous 499 status codes in nginx access log after 75 seconds

We are using nginx in a long-polling scenario. We have a client that the user installs, which then communicates with our server. An nginx process on that server passes each request to the backends, which are Python processes. A Python process holds the request for up to 650 seconds.
In the nginx access log there are a lot of 499 entries. Logging $request_time shows that the client times out after 75 seconds, yet none of the nginx timeouts are set to 75 seconds.
Some research suggests the backend processes might be too slow, but there isn't much activity on the servers hosting them. Adding more servers/processes didn't help, and neither did upgrading the instance where nginx runs.
Here are the nginx configuration files.
nginx.conf
user nobody nogroup;
worker_processes 1;
worker_rlimit_nofile 131072;
pid /run/nginx.pid;

events {
    worker_connections 76800;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    types_hash_max_size 2048;
    keepalive_timeout 65;
    server_names_hash_bucket_size 64;

    include /usr/local/openresty/nginx/conf/mime.types;
    default_type application/octet-stream;

    log_format combined_edit '$remote_addr - $remote_user [$time_local] '
                             '"$request" $status $body_bytes_sent '
                             '"$http_referer" "$http_user_agent" "$request_time"';
    access_log /var/log/nginx/access.log combined_edit;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";

    include /usr/local/openresty/nginx/conf.d/*.conf;
    include /usr/local/openresty/nginx/sites-enabled/*;
}
backend.conf
upstream backend {
    server xxx.xxx.xxx.xxx:xxx max_fails=12 fail_timeout=12;
    server xxx.xxx.xxx.xxx:xxx max_fails=12 fail_timeout=12;
}

server {
    listen 0.0.0.0:80;
    server_name host;
    rewrite ^(.*) https://$host$1 permanent;
}

server {
    listen 0.0.0.0:443;
    ssl_certificate /etc/ssl/certs/ssl.pem;
    ssl_certificate_key /etc/ssl/certs/ssl.pem;
    ssl on;
    server_name host;

    location / {
        proxy_connect_timeout 700;
        proxy_buffering off;
        proxy_redirect off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 10000; # something really large
        proxy_pass http://backend;
    }
}
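(A 499 means the client closed the connection before nginx finished the response, so a consistent 75-second cutoff that matches none of these nginx timeouts points at the client application, or an intermediary, closing its side of the socket. For completeness, a sketch of the server-side timeouts that must all exceed the 650-second hold time; the 700s value is illustrative:)

    location / {
        proxy_pass http://backend;
        proxy_buffering off;
        proxy_connect_timeout 75s;  # connection setup can stay short
        proxy_read_timeout 700s;    # > 650s backend hold time
        proxy_send_timeout 700s;    # writing the request to the upstream
        send_timeout 700s;          # writing the response back to the client
    }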
