I configured nginx but it is very slow. Sometimes when I hit reload, assets stay pending until it starts to download them. I noticed that after a few consecutive reloads of the page it starts to hang, with pending assets, and slows down. Is there something wrong with my configuration? I deploy my app to Heroku and use nginx in front.
daemon off;
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;
worker_rlimit_nofile 10000;
events {
# optimized to serve many clients with each thread
use epoll;
# if accept_mutex is enabled, worker processes will accept new connections by turn. Otherwise, all worker processes will be notified about new connections, and if the volume of new connections is low, some of the worker processes may just waste system resources.
accept_mutex on;
multi_accept on;
worker_connections 1024;
}
# error logs
error_log logs/nginx/error.log;
error_log logs/nginx/error_extreme.log emerg;
error_log logs/nginx/error_debug.log debug;
error_log logs/nginx/error_critical.log crit;
http {
charset utf-8;
include mime.types;
default_type application/octet-stream;
log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
access_log logs/nginx/access.log l2met;
# # - Basic Settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
types_hash_max_size 2048;
# # - Enable open file cache
open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
# # - Configure buffer sizes
client_body_buffer_size 16k;
client_header_buffer_size 1k;
# # - Responds with a 413 (Request Entity Too Large) error if the request body exceeds this value
client_max_body_size 8m;
large_client_header_buffers 2 1k;
# # - Configure Timeouts
client_body_timeout 12;
client_header_timeout 12;
# # - Use a higher keepalive timeout to reduce the need for repeated handshake
keepalive_timeout 300;
# # - if the request is not completed within 10 seconds, abort the connection and send a timeout error
send_timeout 10;
# # - Hide nginx version information
server_tokens off;
# # - Dynamic gzip compression
gzip on;
gzip_http_version 1.0;
gzip_disable "msie6";
gzip_vary on;
gzip_min_length 20;
gzip_buffers 4 16k;
gzip_comp_level 3;
gzip_proxied any;
#Turn on gzip for all content types that should benefit from it.
gzip_types application/ecmascript;
gzip_types application/javascript;
gzip_types application/json;
gzip_types application/pdf;
gzip_types application/postscript;
gzip_types application/x-javascript;
gzip_types image/svg+xml;
gzip_types text/css;
gzip_types text/csv;
gzip_types text/javascript;
gzip_types text/plain;
gzip_types text/xml;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
#proxying requests to other servers
upstream nodebeats {
server unix:/tmp/nginx.socket max_fails=3 fail_timeout=30s;
keepalive 32;
}
server {
listen <%= ENV['PORT'] %>;
server_name _;
root "/app/";
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://nodebeats;
}
location ~* \.(js|css|jpg)$ {
root "/app/src/dist";
add_header Pragma public;
add_header Cache-Control public;
expires 1y;
gzip_static on;
gzip off;
gzip_http_version 1.0;
gzip_disable "msie6";
gzip_vary on;
gzip_min_length 20;
gzip_proxied any;
}
}
}
EDIT
Ok, I found out which setting is causing this. It is proxy_read_timeout, which is 60 seconds by default. If I set it to 1 second, I can reload the page any number of times I want and it always refreshes quickly. But why?
That is supposed to be the time that nginx waits for the server to respond. If I get back a response and reload the page, why does it stall? Isn't the timeout supposed to restart and wait for the response again?
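For reference, here is where the directive sits in the location block above, together with its documented meaning (the value shown is the default, not a recommendation):
location / {
    proxy_pass http://nodebeats;
    # Per the nginx docs, this timeout applies between two successive
    # read operations from the upstream, not to the whole response, so
    # it is reset every time the upstream sends data. Default: 60s.
    proxy_read_timeout 60s;
}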
Related
I made a webpage using R (Shiny) and deployed it on shiny-server, and tried to use NGINX to achieve multi-threaded sort of behavior. I found in some posts that NGINX can also help achieve concurrency, but I don't know how to do it. Could you please help me do that?
In case I misunderstand the definition of concurrency, my desired result is that when different users access the webpage and use some function at the same time, they don't need to wait in a queue and my server can handle those requests at the same time.
Below is the configuration:
user www-data;
worker_processes 4;
worker_rlimit_nofile 20960;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
use epoll;
worker_connections 1024;
accept_mutex on;
accept_mutex_delay 500ms;
multi_accept on;
}
http {
underscores_in_headers on;
aio threads;
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
upstream shiny-server {
ip_hash;
server 127.0.0.1:3838;
}
map $http_app_version $app1_url {
"1.0" http://35.78.39.174:3838;
}
server {
aio threads;
listen 80;
listen [::]:80;
server_name 35.78.39.174:3838;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
if ($http_user_agent !~* "MicroMessenger"){
set $app1_url http://35.78.39.174:3838;
}
aio threads;
proxy_pass http://localhost:3838;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real_IP $remote_addr;
proxy_set_header User-Agent $http_user_agent;
proxy_set_header Accept-Encoding '';
proxy_buffering off;
}
location ^~ /mathjax/ {
alias /usr/share/mathjax2/;
}
}
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*.*;
server_names_hash_bucket_size 128;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
}
I have also edited the shiny-server configuration:
# Instruct Shiny Server to run applications as the user "shiny"
run_as shiny;
sanitize_errors false;
preserve_logs true;
# Define a server that listens on port 3838
server {
listen 3838;
# Define a location at the base URL
location / {
# Host the directory of Shiny Apps stored in this directory
site_dir /home/rstudio/;
# Log all Shiny output to files in this directory
log_dir /var/log/shiny-server/port_3838;
# When a user visits the base URL rather than a particular application,
# an index of the applications available in this directory will be shown.
directory_index on;
app_init_timeout 1800;
app_idle_timeout 1800;
}
}
Really appreciate your help. Thanks a lot.
Could you please explain how to set up the configuration to achieve that?
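As far as I know, open-source Shiny Server runs a single R process per application, so nginx by itself cannot make one app handle requests in parallel; what nginx can do is balance users across several Shiny Server instances. A sketch of the upstream block from the config above, extended that way (ports 3839 and 3840 are an assumption — they would be extra Shiny Server instances started separately):
upstream shiny-server {
    ip_hash;                 # pins a user's session (and its WebSocket) to one instance
    server 127.0.0.1:3838;
    server 127.0.0.1:3839;   # hypothetical second instance
    server 127.0.0.1:3840;   # hypothetical third instance
}
Note that the location / block above passes to http://localhost:3838 directly, so the upstream shiny-server block it defines is never used; proxy_pass http://shiny-server; would be needed for any balancing to take effect.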
Right now my nginx logs are saved to a file. But is it possible to send the logs to a custom URL (http://myapi.com/save-logs)? I need to save all my nginx logs in my database.
Currently my config file looks like this:
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;
worker_rlimit_nofile 4096;
events {
multi_accept on;
use epoll;
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
error_log /var/log/nginx/error.log warn;
access_log /var/log/nginx/access.log;
open_file_cache max=5000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
server_tokens off;
types_hash_max_size 2048;
keepalive_requests 1000;
keepalive_timeout 5;
server_names_hash_max_size 512;
server_names_hash_bucket_size 64;
client_max_body_size 100m;
client_body_buffer_size 256k;
reset_timedout_connection on;
client_body_timeout 10;
send_timeout 2;
gzip on;
gzip_static on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_http_version 1.1;
gzip_proxied any;
gzip_vary on;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;
gzip_disable "msie6";
proxy_max_temp_file_size 0;
upstream proj {
server clickhouse:8123;
}
upstream grafana {
server grafana:3000;
}
server {
listen 8888;
server_name 127.0.0.1;
root /var/www;
proxy_set_header Host $host;
location / {
proxy_pass http://proj;
proxy_set_header Host $host;
add_header Cache-Control "no-cache" always;
}
}
server {
listen 9999;
server_name 127.0.0.1;
root /var/www;
proxy_set_header Host $host;
location / {
proxy_pass http://grafana;
proxy_set_header Host $host;
add_header Cache-Control "no-cache" always;
}
}
}
I think this is possible. According to http://nginx.org/en/docs/syslog.html, the server= parameter of the syslog: prefix lets you specify where to send your log entries.
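nginx cannot POST log lines to an HTTP URL directly, but it can ship them over syslog to a collector that forwards them on. A minimal sketch (the listener on 127.0.0.1:514 is an assumption — it would be a collector you run, e.g. rsyslog or a small script, which then relays each entry to http://myapi.com/save-logs or writes it straight to the database):
# Ship logs over UDP syslog in addition to, or instead of, the files above.
access_log syslog:server=127.0.0.1:514,facility=local7,tag=nginx,severity=info;
error_log  syslog:server=127.0.0.1:514,tag=nginx warn;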
I am using nginx with Heroku and I want to enable http_gzip_static_module
to serve pre-compressed files. I compress my files manually, so I have, for example:
bundle.js
bundle.js.gz
I cannot make this work. If I enable gzip on, dynamic compression works. I am not really familiar with nginx and I am using configs that I found on the internet for use with Heroku, or rather, I am using a Heroku buildpack that says gzip_static is supported.
For now only compression is important to me. I would remove the extra noise if I knew what was not important. Is there something I should change? This is my config file.
daemon off;
#Heroku dynos have at least 4 cores.
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;
events {
use epoll;
accept_mutex on;
multi_accept on;
worker_connections 1024;
}
error_log logs/nginx/error.log;
error_log logs/nginx/error_extreme.log emerg;
error_log logs/nginx/error_debug.log debug;
error_log logs/nginx/error_critical.log crit;
http {
charset utf-8;
include mime.types;
# # - Add extra mime types
types{
application/x-httpd-php .html;
}
default_type application/octet-stream;
log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
access_log logs/nginx/access.log l2met;
# # - Basic Settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
types_hash_max_size 2048;
# # - Enable open file cache
open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
# # - Configure buffer sizes
client_body_buffer_size 16k;
client_header_buffer_size 1k;
# # - Responds with a 413 (Request Entity Too Large) error if the request body exceeds this value
client_max_body_size 8m;
large_client_header_buffers 2 1k;
# # - Configure Timeouts
client_body_timeout 12;
client_header_timeout 12;
# # - Use a higher keepalive timeout to reduce the need for repeated handshake
keepalive_timeout 300;
# # - if the request is not completed within 10 seconds, abort the connection and send a timeout error
send_timeout 10;
# # - Hide nginx version information
server_tokens off;
# # - Static gzip compression (serve pre-compressed .gz files)
gzip_static on;
#gzip off;
gzip_http_version 1.0;
gzip_disable "msie6";
gzip_vary on;
#gzip_min_length 20;
#gzip_buffers 4 16k;
#gzip_comp_level 9;
gzip_proxied any;
#Turn on gzip for all content types that should benefit from it.
gzip_types application/ecmascript;
gzip_types application/javascript;
gzip_types application/json;
gzip_types application/pdf;
gzip_types application/postscript ;
gzip_types application/x-javascript;
gzip_types image/svg+xml;
gzip_types text/css;
gzip_types text/csv;
gzip_types text/javascript ;
gzip_types text/plain;
gzip_types text/xml;
gzip_types text/html;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
upstream nodebeats {
server unix:/tmp/nginx.socket fail_timeout=0;
keepalive 32;
}
server {
listen <%= ENV['PORT'] %>;
server_name _;
root "/app/";
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://nodebeats;
}
location /api {
proxy_pass http://nodebeats;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
location /dist {
alias "/app/app-dist";
# # - 1 month expiration time
expires 1M;
access_log off;
add_header Pragma public;
add_header Cache-Control public;
add_header Vary Accept-Encoding;
}
location /offline {
alias "/app/public/offline";
# # - 1 month expiration time
expires 1M;
access_log off;
add_header Pragma public;
add_header Cache-Control public;
add_header Vary Accept-Encoding;
}
location /scripts {
alias "/app/node_modules";
# # - 1 month expiration time
expires 1M;
access_log off;
add_header Pragma public;
add_header Cache-Control public;
add_header Vary Accept-Encoding;
}
}
}
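A detail that often trips this up: gzip_static makes nginx look for a sibling file with a .gz suffix on disk, and the module has to be compiled in (--with-http_gzip_static_module; nginx -V shows whether a build has it, which is worth checking for a buildpack binary). A minimal sketch for one of the static locations above:
location /dist {
    alias "/app/app-dist";
    # A request for /dist/bundle.js makes nginx check for
    # /app/app-dist/bundle.js.gz and, if present, serve it with
    # "Content-Encoding: gzip" to clients that send Accept-Encoding: gzip.
    gzip_static on;
    add_header Vary Accept-Encoding;
    expires 1M;
}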
I have a Node.js app server sitting behind an Nginx configuration that has been working well. I'm anticipating some load increase and figured I'd get ahead of it by setting up another Nginx to serve the static files on the Node.js app server. So, essentially, I have set up an Nginx reverse proxy in front of Nginx and Node.js.
When I reload Nginx and let it start serving requests (Nginx<->Nginx) on the routes /publicfile/, I notice a SIGNIFICANT decrease in speed. Something that took Nginx<->Node.js around 3 seconds now takes Nginx<->Nginx ~15 seconds!
I'm new to Nginx and have spent the better part of the day on this, and finally decided to post for some community help. Thanks!
The web facing Nginx nginx.conf:
http {
# Main settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
client_header_timeout 1m;
client_body_timeout 1m;
client_header_buffer_size 2k;
client_body_buffer_size 256k;
client_max_body_size 256m;
large_client_header_buffers 4 8k;
send_timeout 30;
keepalive_timeout 60 60;
reset_timedout_connection on;
server_tokens off;
server_name_in_redirect off;
server_names_hash_max_size 512;
server_names_hash_bucket_size 512;
# Log format
log_format main '$remote_addr - $remote_user [$time_local] $request '
'"$status" $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
log_format bytes '$body_bytes_sent';
access_log /var/log/nginx/access.log main;
# Mime settings
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Compression
gzip on;
gzip_comp_level 9;
gzip_min_length 512;
gzip_buffers 8 64k;
gzip_types text/plain text/css text/javascript
application/x-javascript application/javascript;
gzip_proxied any;
# Proxy settings
#proxy_redirect of;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass_header Set-Cookie;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;
real_ip_header CF-Connecting-IP;
# SSL PCI Compliance
# - removed for brevity
# Error pages
# - removed for brevity
# Cache
proxy_cache_path /var/cache/nginx levels=2 keys_zone=cache:10m inactive=60m max_size=512m;
proxy_cache_key "$host$request_uri $cookie_user";
proxy_temp_path /var/cache/nginx/temp;
proxy_ignore_headers Expires Cache-Control;
proxy_cache_use_stale error timeout invalid_header http_502;
proxy_cache_valid any 3d;
proxy_http_version 1.1; # recommended with keepalive connections
# WebSocket proxying - from http://nginx.org/en/docs/http/websocket.html
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
map $http_cookie $no_cache {
default 0;
~SESS 1;
~wordpress_logged_in 1;
}
upstream backend {
# my 'backend' server IP address (local network)
server xx.xxx.xxx.xx:80;
}
# Wildcard include
include /etc/nginx/conf.d/*.conf;
}
The web facing Nginx Server block that forwards the static files to the Nginx behind it (on another box):
server {
listen 80 default;
access_log /var/log/nginx/nginx.log main;
# pass static assets on to the app server nginx on port 80
location ~* (/min/|/audio/|/fonts/|/images/|/js/|/styles/|/templates/|/test/|/publicfile/) {
proxy_pass http://backend;
}
}
And finally the "backend" server:
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
sendfile_max_chunk 32;
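# NB: without a unit suffix this value is 32 bytes per sendfile() call
# (see the answer below)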
# server_tokens off;
# server_names_hash_bucket_size 64;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
server {
root /home/admin/app/.tmp/public;
listen 80 default;
access_log /var/log/nginx/app-static-assets.log;
location /publicfile {
alias /home/admin/APP-UPLOADS;
}
}
}
@keenanLawrence mentioned the sendfile_max_chunk directive in the comments above.
After setting sendfile_max_chunk to 512k, I saw a significant speed improvement in my static file (from disk) delivery from Nginx.
I experimented with 8k, 32k, 128k, and finally 512k. The optimal chunk size seems to vary per server, depending on the content being delivered, the threads available, and the server request load.
I also noticed another significant bump in performance when I changed worker_processes auto; to worker_processes 2;, which went from running a worker process on every CPU to using only two. In my case this was more efficient, since the Node.js app servers running on the same machine were also competing for the CPUs.
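Putting the two changes together, a sketch of the relevant lines (the values are the ones from my experiments above, not universal recommendations):
# main context: leave the remaining cores to the co-located Node.js processes
worker_processes 2;
http {
    sendfile on;
    # Cap the bytes sent per sendfile() call; the original unit-less
    # value of 32 meant 32 bytes, forcing thousands of tiny syscalls
    # per file served.
    sendfile_max_chunk 512k;
}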
I am evaluating the commercial Nginx Plus R7, and it seems to have significant performance improvements over its previous versions, but there are still some Java runtime libraries that give higher performance than Nginx for simple proxy scenarios.
The following is the configuration I added; I have enabled thread pools and socket sharding (reuseport) as well.
user nginx;
worker_processes auto;
events {
worker_connections 100000;
use epoll;
multi_accept on;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 6000;
keepalive_requests 100000;
access_log off;
tcp_nopush on;
tcp_nodelay on;
open_file_cache max=9000000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
reset_timedout_connection on;
client_body_timeout 10;
send_timeout 2;
gzip on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
gzip_disable "MSIE [1-6]\.";
server {
listen 9090 reuseport backlog=8192;
server_name localhost;
location / {
aio threads;
proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
proxy_set_header Connection "";
proxy_http_version 1.1;
if ( $route_id = r1 ) {
proxy_pass http://10.100.5.98:9000/service;
}
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
error_log /var/log/nginx/error.log notice;
}
Are there any other parameters that need to be enabled at the Nginx level, and are there kernel-level parameters that should also be set up?
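One nginx-level item worth double-checking is the thread pool behind aio threads;: when no pool is named, nginx uses the built-in "default" pool, and defining it explicitly in the main context makes the sizing visible and tunable (the values below are the documented defaults, not a tuning suggestion):
# main context
thread_pool default threads=32 max_queue=65536;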