Evaluating Nginx Plus R7 Performance

I am evaluating the commercial Nginx Plus R7 release. It shows significant performance improvements over previous versions, but some Java-based runtimes still outperform it in simple proxy scenarios.
The configuration I am using is below; I have also enabled thread pools and socket sharding.
user nginx;
worker_processes auto;
events {
worker_connections 100000;
use epoll;
multi_accept on;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 6000;
keepalive_requests 100000;
access_log off;
tcp_nopush on;
tcp_nodelay on;
open_file_cache max=9000000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
reset_timedout_connection on;
client_body_timeout 10;
send_timeout 2;
gzip on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
gzip_disable "MSIE [1-6]\.";
server {
listen 9090 reuseport backlog=8192;
server_name localhost;
location / {
aio threads;
proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
proxy_set_header Connection "";
proxy_http_version 1.1;
if ( $route_id = r1 ) {
proxy_pass http://10.100.5.98:9000/service;
}
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
error_log /var/log/nginx/error.log notice;
}
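On the thread pools mentioned at the top: the aio threads directive in the config uses nginx's built-in "default" pool (32 threads). If the goal is to size or isolate that pool, an explicit thread_pool can be declared; a minimal sketch with a hypothetical pool name io_pool:
# Hedged sketch (hypothetical pool name "io_pool"): an explicit pool can be sized,
# whereas a bare "aio threads" uses the built-in 32-thread default pool.
thread_pool io_pool threads=32 max_queue=65536;   # main context, next to worker_processes
http {
    server {
        location / {
            aio threads=io_pool;                  # replaces the bare "aio threads" above
        }
    }
}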
Are there any other parameters that need to be enabled at the Nginx level, and are there kernel-level parameters that should also be set up?
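For reference, a sketch of the kernel-level settings that usually come up for this kind of proxy benchmark on Linux, in sysctl.conf form; the values are illustrative placeholders, not recommendations, and need to be validated against the actual hardware and traffic:
# Illustrative sysctl.conf sketch (assumes a Linux host); values are examples only.
net.core.somaxconn = 8192                   # listen queue limit; should be >= the nginx backlog= value
net.core.netdev_max_backlog = 65536         # packets queued per interface before the kernel drops them
net.ipv4.tcp_max_syn_backlog = 8192         # pending SYN queue for bursts of new connections
net.ipv4.ip_local_port_range = 1024 65535   # more ephemeral ports for connections to the upstream
net.ipv4.tcp_tw_reuse = 1                   # reuse TIME_WAIT sockets for new outgoing connections
fs.file-max = 2097152                       # system-wide open file descriptor limit
On the nginx side, worker_rlimit_nofile (not present in the config above) and the process ulimit need to be at least as large as worker_connections. The proxy_http_version 1.1 and empty Connection header are already in place for upstream keepalive, but they only take effect together with an upstream { server 10.100.5.98:9000; keepalive N; } block, which avoids opening a new TCP connection to the backend for every proxied request.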

Related

Nginx very slow and sometimes not reachable

We are running an Nginx server, but it is very slow and sometimes not even reachable. Sometimes a page takes 20 seconds to load.
This is our config:
#Example Config
user www-data;
worker_rlimit_nofile 100000;
events {
worker_connections 4000;
use epoll;
multi_accept on;
}
http {
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
access_log off;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
gzip on;
gzip_min_length 10240;
gzip_comp_level 1;
gzip_vary on;
gzip_disable msie6;
gzip_proxied expired no-cache no-store private auth;
gzip_types
# text/html is always compressed by HttpGzipModule
text/css
text/javascript
text/xml
text/plain
text/x-component
application/javascript
application/x-javascript
application/json
application/xml
application/rss+xml
application/atom+xml
font/truetype
font/opentype
application/vnd.ms-fontobject
image/svg+xml;
# allow the server to close the connection on a non-responding client; this will free up memory
reset_timedout_connection on;
# request timed out -- default 60
client_body_timeout 10;
client_max_body_size 10M;
# if the client stops responding, free up memory -- default 60
send_timeout 2;
# server will close connection after this time -- default 75
keepalive_timeout 30;
# number of requests client can make over keep-alive -- for testing environment
keepalive_requests 100000;
include mime.types;
default_type application/octet-stream;
include fastcgi.conf;
server {
listen 443 ssl http2;
#listen 80 http2;
server_name ***;
root /**/**/**/;
# SSL certificate
ssl_certificate /**/**/**;
ssl_certificate_key /**/**/**;
index index.php index.html;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
if (!-f $request_filename){
set $rule_0 1$rule_0;
}
if (!-d $request_filename){
set $rule_0 2$rule_0;
}
if ($rule_0 = "21"){
rewrite ^/(.*)$ /index.php?$1 last;
}
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
}
location ~ /\.ht {
deny all;
}
}
server_tokens off;
}
We also have multiple errors in the nginx/error.log:
2021/07/01 10:53:28 [error] 25453#25453: *81 upstream timed out (110: Connection timed out) while reading response header from upstream,
The errors occur on both POST and GET requests.
We don't know whether that is related to the slowness.
Is there anything we can do?
Thanks in advance!
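For what it is worth, that error means nginx waited longer than its read timeout for the upstream (presumably PHP-FPM, given the fastcgi_pass above) to start sending a response, so the delay is most likely on the PHP side rather than in the directives shown. A minimal sketch of the timeouts nginx applies to that upstream, with placeholder values; raising them only hides a slow backend rather than fixing it:
# Hedged sketch: the FastCGI timeouts nginx enforces (each defaults to 60s).
location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
    fastcgi_connect_timeout 10s;   # time allowed to establish the connection to PHP-FPM
    fastcgi_send_timeout 60s;      # max gap between two successive writes of the request
    fastcgi_read_timeout 120s;     # max gap between two successive reads of the response
}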

Nginx syslog post request

Right now my nginx logs are saved to a file, but is it possible to send the logs to a custom URL (http://myapi.com/save-logs)? I need to save all my nginx logs in my database.
Currently my config file looks like this:
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;
worker_rlimit_nofile 4096;
events {
multi_accept on;
use epoll;
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
error_log /var/log/nginx/error.log warn;
access_log /var/log/nginx/access.log;
open_file_cache max=5000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
server_tokens off;
types_hash_max_size 2048;
keepalive_requests 1000;
keepalive_timeout 5;
server_names_hash_max_size 512;
server_names_hash_bucket_size 64;
client_max_body_size 100m;
client_body_buffer_size 256k;
reset_timedout_connection on;
client_body_timeout 10;
send_timeout 2;
gzip on;
gzip_static on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_http_version 1.1;
gzip_proxied any;
gzip_vary on;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;
gzip_disable "msie6";
proxy_max_temp_file_size 0;
upstream proj {
server clickhouse:8123;
}
upstream grafana {
server grafana:3000;
}
server {
listen 8888;
server_name 127.0.0.1;
root /var/www;
proxy_set_header Host $host;
location / {
proxy_pass http://proj;
proxy_set_header Host $host;
add_header Cache-Control "no-cache" always;
}
}
server {
listen 9999;
server_name 127.0.0.1;
root /var/www;
proxy_set_header Host $host;
location / {
proxy_pass http://grafana;
proxy_set_header Host $host;
add_header Cache-Control "no-cache" always;
}
}
}
I think this is possible. According to http://nginx.org/en/docs/syslog.html, the server= parameter of the syslog: prefix (available for both error_log and access_log) lets you specify where the log messages should be sent.
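A minimal sketch of what that looks like, with a placeholder address and tag; note that the stock access_log/error_log directives write to files or syslog, not to HTTP endpoints, so posting to a URL such as http://myapi.com/save-logs would need an intermediary (a syslog daemon or log shipper that forwards entries to the API or database):
# Hedged sketch: ship access and error logs to a syslog server (placeholder address/tag).
error_log  syslog:server=192.168.1.10:514,tag=nginx warn;
access_log syslog:server=192.168.1.10:514,tag=nginx,severity=info combined;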

Nginx not saving cached 404s to disk

Here's my nginx config (using nginx 1.16.1):
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 100000;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format upstream_time '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"'
'rt=$request_time uct="$upstream_connect_time" uht="$upstream_header_time" urt="$upstream_response_time"';
##
# Logging Settings
##
error_log /var/log/nginx/error.log warn;
access_log /var/log/nginx/access.log upstream_time;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_min_length 256;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;
#Nginx cache
proxy_cache_path /nginx_cache/product levels=1:2 keys_zone=product_cache:100m max_size=20g inactive=2d use_temp_path=off;
#Serve HTML, JS, CSS & Go requests
server {
client_max_body_size 102M;
listen 443 ssl http2;
server_name example.com;
root /html;
index /;
error_page 404 /404.html;
error_page 500 /500.html;
error_page 502 =503 /maintenance.html;
location = /404.html {
add_header x-nginx-cache-status $upstream_cache_status always;
}
location ~^/([a-zA-Z0-9/]+)$ {
set $product_id $1;
rewrite ^ /product?id=$product_id break;
proxy_cache product_cache;
proxy_http_version 1.1;
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
proxy_cache_background_update on;
proxy_cache_lock on;
proxy_cache_key product-$product_id;
add_header x-nginx-cache-key product-$product_id always;
add_header x-nginx-cache-status $upstream_cache_status always;
proxy_cache_valid 200 404 1d;
proxy_cache_bypass $nocache;
proxy_ignore_headers Cache-Control; #force cache
proxy_ignore_headers Set-Cookie;
proxy_intercept_errors on;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_set_header REQUEST_URI $request_uri;
proxy_pass http://go:2053;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
error_log /var/log/nginx/error.log;
access_log /var/log/nginx/access.log;
}
}
}
With this configuration, I can see that 404s register as a cache HIT after the first request; however, the entry is not saved in the nginx_cache folder as configured.
Requests that return 200 responses are cached correctly and saved to disk as expected.
I have also tried adding the same caching config to the location = /404.html block, but that had no effect on whether the file was saved to disk.
My guess is that this has to do with overriding the error page via proxy_intercept_errors and error_page, so that nginx no longer caches the response using the parameters I set.
Is there a way to achieve this?
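One way to test that guess, as a hedged diagnostic sketch rather than a fix: temporarily let the upstream 404 body through without error_page interception and watch /nginx_cache/product for a new entry after the first MISS (the x-nginx-cache-status header already in the config shows the state transitions). A trimmed-down version of the product location for that test:
# Diagnostic sketch only: buffers, extra headers and logging from the original location can stay;
# the point is to observe caching with interception out of the picture.
location ~ ^/([a-zA-Z0-9/]+)$ {
    set $product_id $1;
    rewrite ^ /product?id=$product_id break;
    proxy_cache product_cache;
    proxy_cache_key product-$product_id;
    proxy_cache_valid 200 404 1d;
    proxy_ignore_headers Cache-Control Set-Cookie;   # force caching despite upstream headers
    proxy_intercept_errors off;                      # for the test only: serve the upstream 404 body
    add_header x-nginx-cache-status $upstream_cache_status always;
    proxy_http_version 1.1;
    proxy_pass http://go:2053;
}
For reference, an entry for a key like product-abc123 (a hypothetical id) lives under /nginx_cache/product/ in a one-character and then a two-character subdirectory taken from the end of the MD5 of that key, per the levels=1:2 layout, which makes it easy to confirm whether a specific 404 was ever written to disk.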

Nginx gzip_static_module configuration not working

I am using nginx with Heroku and I want to enable http_gzip_static_module to serve precompressed files. I compress my files manually, so I have, for example:
bundle.js
bundle.js.gz
I cannot make this work. If I enable gzip, dynamic compression works. I am not really familiar with nginx; I am using configs I found on the internet for use with Heroku, or rather a Heroku buildpack that says this module is supported.
For now only compression is important to me; I would remove the extra noise if I knew what was not important. Is there something I should change? This is my config file:
daemon off;
#Heroku dynos have at least 4 cores.
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;
events {
use epoll;
accept_mutex on;
multi_accept on;
worker_connections 1024;
}
error_log logs/nginx/error.log;
error_log logs/nginx/error_extreme.log emerg;
error_log logs/nginx/error_debug.log debug;
error_log logs/nginx/error_critical.log crit;
http {
charset utf-8;
include mime.types;
# # - Add extra mime types
types{
application/x-httpd-php .html;
}
default_type application/octet-stream;
log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
access_log logs/nginx/access.log l2met;
# # - Basic Settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
types_hash_max_size 2048;
# # - Enable open file cache
open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
# # - Configure buffer sizes
client_body_buffer_size 16k;
client_header_buffer_size 1k;
# # - Respond with 413 (Request Entity Too Large) if the request body exceeds this value
client_max_body_size 8m;
large_client_header_buffers 2 1k;
# # - Configure Timeouts
client_body_timeout 12;
client_header_timeout 12;
# # - Use a higher keepalive timeout to reduce the need for repeated handshake
keepalive_timeout 300;
# # - if the request is not completed within 10 seconds, then abort the connection and send the timeout error
send_timeout 10;
# # - Hide nginx version information
server_tokens off;
# # - Static gzip compression (serve precompressed .gz files)
gzip_static on;
#gzip off;
gzip_http_version 1.0;
gzip_disable "msie6";
gzip_vary on;
#gzip_min_length 20;
#gzip_buffers 4 16k;
#gzip_comp_level 9;
gzip_proxied any;
#Turn on gzip for all content types that should benefit from it.
gzip_types application/ecmascript;
gzip_types application/javascript;
gzip_types application/json;
gzip_types application/pdf;
gzip_types application/postscript ;
gzip_types application/x-javascript;
gzip_types image/svg+xml;
gzip_types text/css;
gzip_types text/csv;
gzip_types text/javascript ;
gzip_types text/plain;
gzip_types text/xml;
gzip_types text/html;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
upstream nodebeats {
server unix:/tmp/nginx.socket fail_timeout=0;
keepalive 32;
}
server {
listen <%= ENV['PORT'] %>;
server_name _;
root "/app/";
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://nodebeats;
}
location /api {
proxy_pass http://nodebeats;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
location /dist {
alias "/app/app-dist";
# # - 1 month expiration time
expires 1M;
access_log off;
add_header Pragma public;
add_header Cache-Control public;
add_header Vary Accept-Encoding;
}
location /offline {
alias "/app/public/offline";
# # - 1 month expiration time
expires 1M;
access_log off;
add_header Pragma public;
add_header Cache-Control public;
add_header Vary Accept-Encoding;
}
location /scripts {
alias "/app/node_modules";
# # - 1 month expiration time
expires 1M;
access_log off;
add_header Pragma public;
add_header Cache-Control public;
add_header Vary Accept-Encoding;
}
}
}
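One thing worth checking here, stated as an assumption rather than a definitive diagnosis: ngx_http_gzip_static_module only serves a precompressed .gz file when nginx itself serves the corresponding file from disk, so any request that falls into location / above is proxied to the app and never consults bundle.js.gz. A minimal sketch of a location where it can take effect, reusing the paths from the config above:
# Hedged sketch: serve precompressed assets from disk; bundle.js and bundle.js.gz
# must sit side by side in the aliased directory.
location /dist {
    alias "/app/app-dist";
    gzip_static on;                 # redundant if already set at http level, but explicit here
    expires 1M;
    add_header Cache-Control public;
    add_header Vary Accept-Encoding;
}
Whether the precompressed file is actually used can be checked by requesting it with an Accept-Encoding: gzip header and looking for Content-Encoding: gzip in the response while dynamic gzip stays off.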

Nginx is very slow

I configured nginx but it is very slow. Sometimes when I hit reload, assets stay pending for a while before they start to download. I noticed that after a few consecutive reloads of the page it starts to hang: assets stay pending and everything slows down. Is there something wrong with my configuration? I deploy my app to Heroku and use nginx in front.
daemon off;
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;
worker_rlimit_nofile 10000;
events {
# optimized to serve many clients with each thread
use epoll;
# if accept_mutex is enabled, worker processes will accept new connections by turn. Otherwise, all worker processes will be notified about new connections, and if volume of new connections is low, some of the worker processes may just waste system resources.
accept_mutex on;
multi_accept on;
worker_connections 1024;
}
# error logs
error_log logs/nginx/error.log;
error_log logs/nginx/error_extreme.log emerg;
error_log logs/nginx/error_debug.log debug;
error_log logs/nginx/error_critical.log crit;
http {
charset utf-8;
include mime.types;
default_type application/octet-stream;
log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
access_log logs/nginx/access.log l2met;
# # - Basic Settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
types_hash_max_size 2048;
# # - Enable open file cache
open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
# # - Configure buffer sizes
client_body_buffer_size 16k;
client_header_buffer_size 1k;
# # - Respond with 413 (Request Entity Too Large) if the request body exceeds this value
client_max_body_size 8m;
large_client_header_buffers 2 1k;
# # - Configure Timeouts
client_body_timeout 12;
client_header_timeout 12;
# # - Use a higher keepalive timeout to reduce the need for repeated handshake
keepalive_timeout 300;
# # - if the request is not completed within 10 seconds, then abort the connection and send the timeout error
send_timeout 10;
# # - Hide nginx version information
server_tokens off;
# # - Dynamic gzip compression
gzip on;
gzip_http_version 1.0;
gzip_disable "msie6";
gzip_vary on;
gzip_min_length 20;
gzip_buffers 4 16k;
gzip_comp_level 3;
gzip_proxied any;
#Turn on gzip for all content types that should benefit from it.
gzip_types application/ecmascript;
gzip_types application/javascript;
gzip_types application/json;
gzip_types application/pdf;
gzip_types application/postscript;
gzip_types application/x-javascript;
gzip_types image/svg+xml;
gzip_types text/css;
gzip_types text/csv;
gzip_types text/javascript;
gzip_types text/plain;
gzip_types text/xml;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
#proxying requests to other servers
upstream nodebeats {
server unix:/tmp/nginx.socket max_fails=3 fail_timeout=30s;
keepalive 32;
}
server {
listen <%= ENV['PORT'] %>;
server_name _;
root "/app/";
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://nodebeats;
}
location ~* \.(js|css|jpg)$ {
root "/app/src/dist";
add_header Pragma public;
add_header Cache-Control public;
expires 1y;
gzip_static on;
gzip off;
gzip_http_version 1.0;
gzip_disable "msie6";
gzip_vary on;
gzip_min_length 20;
gzip_proxied any;
}
}
}
EDIT
OK, I found out which setting is causing this: proxy_read_timeout, which defaults to 60 seconds. If I set it to 1 second, I can reload the page as many times as I want and it always refreshes quickly. But why?
That is supposed to be the time nginx waits for the server to respond. If I get the response back and then reload the page, why does it stall? Isn't the timeout supposed to restart and wait for the response again?
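For what it is worth, the documented meaning of proxy_read_timeout is narrower than "how long nginx waits for the server to respond": it is the maximum gap allowed between two successive reads from the upstream, and the timer restarts after every read. An annotated sketch of the three upstream timeouts with illustrative values (lowering proxy_read_timeout to 1 second, as in the edit, will also cut off any legitimately slow response):
# Hedged sketch: the three upstream timeouts (each defaults to 60s); values are illustrative.
location / {
    proxy_pass http://nodebeats;
    proxy_connect_timeout 5s;    # time allowed to establish the connection to the upstream
    proxy_send_timeout 30s;      # max gap between two successive writes of the request
    proxy_read_timeout 30s;      # max gap between two successive reads of the response;
                                 # one slow chunk anywhere in the response holds the request open
}
A separate, purely speculative observation: the map $http_upgrade $connection_upgrade block defined above is never used, because location / hardcodes proxy_set_header Connection "upgrade"; the usual pattern is proxy_set_header Connection $connection_upgrade so that plain HTTP requests are not marked as upgradable, though whether that relates to the stalls here is only a guess.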
