We are using nginx in a long-polling scenario. We have a client that the user installs, which then communicates with our server. An nginx process on that server passes each request to backends, which are Python processes. The Python process holds the request for up to 650 seconds.
In the nginx access log there are a lot of 499 entries. Logging $request_time shows that the client times out after 75 seconds, yet none of the nginx timeouts are set to 75 seconds.
Some research suggests that the backend processes might be too slow, but there isn't much activity on the servers running those processes. Adding more servers/processes didn't help, nor did upgrading the instance where nginx runs.
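For reference, logging $upstream_response_time alongside $request_time would show whether the 75 seconds is spent waiting on the backend or whether the client simply disconnects; a minimal sketch (the log_format name "timing" is invented here):
log_format timing '$remote_addr [$time_local] "$request" $status '
                  'request_time=$request_time upstream_time=$upstream_response_time';
access_log /var/log/nginx/timing.log timing;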
Here are the nginx configuration files.
nginx.conf
user nobody nogroup;
worker_processes 1;
worker_rlimit_nofile 131072;
pid /run/nginx.pid;
events {
worker_connections 76800;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
types_hash_max_size 2048;
keepalive_timeout 65;
server_names_hash_bucket_size 64;
include /usr/local/openresty/nginx/conf/mime.types;
default_type application/octet-stream;
log_format combined_edit '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" "$request_time"';
access_log /var/log/nginx/access.log combined_edit;
error_log /var/log/nginx/error.log;
gzip on;
gzip_disable "msie6";
include /usr/local/openresty/nginx/conf.d/*.conf;
include /usr/local/openresty/nginx/sites-enabled/*;
}
backend.conf
upstream backend {
server xxx.xxx.xxx.xxx:xxx max_fails=12 fail_timeout=12;
server xxx.xxx.xxx.xxx:xxx max_fails=12 fail_timeout=12;
}
server {
listen 0.0.0.0:80;
server_name host;
rewrite ^(.*) https://$host$1 permanent;
}
server {
listen 0.0.0.0:443;
ssl_certificate /etc/ssl/certs/ssl.pem;
ssl_certificate_key /etc/ssl/certs/ssl.pem;
ssl on;
server_name host;
location / {
proxy_connect_timeout 700;
proxy_buffering off;
proxy_redirect off;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 10000; # something really large
proxy_pass http://backend;
}
}
I have OpenResty running on my AWS instance (Instance A), and the server's IP address is already tied to the domain name myapp.john.com.
My app is running on another AWS instance (Instance B) within the same private network. It has a private IP address of 192.42.56.87, and the app is running on port 80.
I want to set up my OpenResty/nginx such that visiting prod.myapp.john.com directs me to 192.42.56.87:80, and visiting test.myapp.john.com directs me to another instance (Instance C) running the test version of my app, say on 192.xx.xx.xx:80.
Below is the configuration on Instance A:
Main config file /usr/local/openresty/nginx/conf/nginx.conf is defined as:
# Main NGINX Config File
#user www-data;
worker_processes auto;
pid logs/nginx.pid;
error_log logs/error.log info;
error_log logs/error.log notice;
error_log logs/error.log debug;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
keepalive_requests 100000;
resolver 8.8.8.8 valid=30s ipv6=off;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
gzip on;
# Include all the sites for the domain
include /usr/local/openresty/nginx/sites/*;
}
/usr/local/openresty/nginx/sites/prod.myapp.john.com is defined as:
server {
listen 80;
listen [::]:80;
server_name prod.myapp.john.com; # this does not work, but "myapp.john.com" works
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name prod.myapp.john.com; # this does not work, but "myapp.john.com" works
ssl_certificate /etc/letsencrypt/live/myapp.john.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/myapp.john.com/privkey.pem;
location / {
proxy_pass http://192.42.56.87:80/;
expires 0;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
}
}
Now, in the Chrome browser, when I visit prod.myapp.john.com there is no response at all, since the request never gets to my Instance A;
However, if I change
server_name prod.myapp.john.com
to
server_name myapp.john.com
it works and the web page gets rendered.
Why?
How can I include more site files in /usr/local/openresty/nginx/sites/ and set up the server blocks correctly to serve more subdomains on my site?
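For reference, one possible shape for an additional site file, assuming the subdomain actually resolves to Instance A in DNS and the certificate covers it (e.g. a wildcard or multi-SAN certificate), is sketched below; the Instance C address is left as the placeholder from the question:
# /usr/local/openresty/nginx/sites/test.myapp.john.com (hypothetical file)
server {
    listen 80;
    listen [::]:80;
    server_name test.myapp.john.com;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    server_name test.myapp.john.com;
    ssl_certificate     /etc/letsencrypt/live/myapp.john.com/fullchain.pem;   # assumes this cert also covers the subdomain
    ssl_certificate_key /etc/letsencrypt/live/myapp.john.com/privkey.pem;
    location / {
        proxy_pass http://192.xx.xx.xx:80;   # Instance C, placeholder IP from the question
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}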
Jenkins is running behind an Nginx server on a CentOS virtual machine, and I am able to access Jenkins via the web interface in a browser. Since I want to trigger automatic builds when code is pushed to the GitHub repository, I have defined a GitHub repository webhook.
Then I edited the NGINX config file
/etc/nginx/nginx.conf
by adding the location with:
location /github-webhook {
proxy_pass http://localhost:8080/github-webhook;
proxy_method POST;
proxy_connect_timeout 150;
proxy_send_timeout 100;
proxy_read_timeout 100;
proxy_buffers 4 32k;
client_max_body_size 8m;
client_body_buffer_size 128k;
}
But when GitHub sends a POST request, Jenkins responds with "400 Hook should contain payload". Is there anything I can do to solve this issue?
Below is the complete Nginx config file (the domain name has been changed to xyz.com):
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
upstream jenkins {
server 127.0.0.1:8080;
keepalive 16;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name xyz.com;
ssl_certificate /etc/letsencrypt/live/xyz.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/xyz.com/privkey.pem;
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
ssl_session_tickets off;
# intermediate configuration
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
# HSTS (ngx_http_headers_module is required) (63072000 seconds)
add_header Strict-Transport-Security "max-age=63072000" always;
# OCSP stapling
ssl_stapling on;
ssl_stapling_verify on;
# replace with the IP address of your resolver
resolver 127.0.0.1;
ignore_invalid_headers off;
location /github-webhook {
proxy_pass http://localhost:8080/github-webhook;
proxy_method POST;
proxy_connect_timeout 150;
proxy_send_timeout 100;
proxy_read_timeout 100;
proxy_buffers 4 32k;
client_max_body_size 8m;
client_body_buffer_size 128k;
}
location / {
proxy_pass http://jenkins;
# we want to connect to Jenkins via HTTP 1.1 with keep-alive connections
proxy_http_version 1.1;
# has to be copied from server block,
# since we are defining per-location headers, and in
# this case server headers are ignored
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# no Connection header means keep-alive
proxy_set_header Connection "";
# Jenkins will use this header to tell if the connection
# was made via http or https
proxy_set_header X-Forwarded-Proto $scheme;
# increase body size (default is 1mb)
client_max_body_size 10m;
# increase buffer size, not sure how this impacts Jenkins, but it is recommended
# by official guide
client_body_buffer_size 128k;
# block below is for HTTP CLI commands in Jenkins
# increase timeouts for long-running CLI commands (default is 60s)
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
# disable buffering
proxy_buffering off;
proxy_request_buffering off;
}
}
}
(Screenshots of the GitHub webhook settings and of the GitHub configuration in the Jenkins project omitted.)
The problem was solved by setting the Jenkins URL field to http://localhost:8080/ instead of xyz.com:8080/. You can access this field by going to Jenkins > Manage Jenkins > Configure System.
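As an aside, the proxy_method POST override in the /github-webhook location should not be needed, since GitHub already delivers webhooks as POST requests; a minimal sketch of that location, reusing the jenkins upstream defined above, could look like this:
location /github-webhook {
    proxy_pass http://jenkins;          # no URI part, so /github-webhook is passed through unchanged
    proxy_http_version 1.1;
    proxy_set_header Connection "";     # keep-alive to the upstream
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    client_max_body_size 8m;            # matches the value used in the question
}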
I am trying to make nginx perform two functions, like Fiddler does:
1. Redirect requests from data.abc.com to 127.0.0.1:9000
2. Pass all other requests to their original servers
my nginx.conf is:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
server {
listen 8008;
server_name data.abc.com;
root /usr/share/nginx/html;
include /etc/nginx/default.d/*.conf;
location / {
proxy_pass https://127.0.0.1:9000/;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
}
However, right now every request via port 8008 is redirected; it seems like server_name doesn't work. How can I make the other requests go to their original servers?
Your configuration redirects all requests to https://127.0.0.1:9000.
Add two different server blocks as follows.
1) Redirect data.abc.com to https://127.0.0.1:9000:
server {
listen 8008;
server_name data.abc.com;
return 301 https://127.0.0.1:9000$request_uri;
}
2) Serve requests for another website:
server {
listen 8008 default_server;
root /usr/share/nginx/html;
include /etc/nginx/default.d/*.conf;
}
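If the second requirement really means forwarding everything else to whatever host the client originally asked for (Fiddler-style) rather than serving local files, an alternative version of the second block is sketched below; it is a hedged sketch, relies on a resolver because $host is only known at request time, and only works for plain HTTP:
server {
    listen 8008 default_server;
    resolver 8.8.8.8;                          # required: the target host is resolved per request
    location / {
        proxy_pass http://$host$request_uri;   # forward to the host the client originally requested
        proxy_set_header Host $host;
    }
}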
I have a scenario where nginx sits in front of an Artifactory server.
Recently, while trying to pull a big number of Docker images in a for loop, all at the same time (the first test was with 200 images, the second with 120), access to Artifactory got blocked: nginx was busy processing all the requests and users were not able to reach it.
My nginx server is running with 4 CPU cores and 8192 MB of RAM.
I have tried to improve the handling of files on the server by adding the below:
sendfile on;
sendfile_max_chunk 512k;
tcp_nopush on;
This made it a bit better (though, of course, pulls of images of 1 GB+ take much more time due to the chunk size); still, access to the UI would cause a lot of timeouts.
Is there something else I can do to improve nginx performance whenever a bigger load is pushed through it?
I think my last option is to increase the size of the machine (more CPUs) as well as the number of nginx worker processes (from 8 to 16).
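One more knob, untried in this setup, would be to stream large transfers instead of buffering them; a rough sketch of the directives, which would go inside the existing location / block:
proxy_buffering off;             # stream responses (image pulls) to clients instead of spooling them
proxy_request_buffering off;     # stream uploads (docker push) straight through to Artifactory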
The full nginx.conf file follows below:
user www-data;
worker_processes 8;
pid /var/run/nginx.pid;
events {
worker_connections 19000;
}
http {
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
include /etc/nginx/mime.types;
default_type application/octet-stream;
gzip on;
gzip_disable "msie6";
sendfile on;
sendfile_max_chunk 512k;
tcp_nopush on;
set_real_ip_from 138.190.190.168;
real_ip_header X-Forwarded-For;
log_format custome '$remote_addr - $realip_remote_addr - $remote_user [$time_local] $request_time'
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
server {
listen 80 default;
listen [::]:80 default;
server_name _;
return 301 https://$server_name$request_uri;
}
###########################################################
## this configuration was generated by JFrog Artifactory ##
###########################################################
## add ssl entries when https has been set in config
ssl_certificate /etc/ssl/certs/{{ hostname }}.cer;
ssl_certificate_key /etc/ssl/private/{{ hostname }}.key;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;
## server configuration
server {
listen 443 ssl;
server_name ~(?<repo>.+)\.{{ hostname }} {{ hostname }} _;
if ($http_x_forwarded_proto = '') {
set $http_x_forwarded_proto $scheme;
}
## Application specific logs
access_log /var/log/nginx/{{ hostname }}-access.log custome;
error_log /var/log/nginx/{{ hostname }}-error.log warn;
rewrite ^/$ /webapp/ redirect;
rewrite ^//?(/webapp)?$ /webapp/ redirect;
rewrite ^/(v1|v2)/(.*) /api/docker/$repo/$1/$2;
chunked_transfer_encoding on;
client_max_body_size 0;
location / {
proxy_read_timeout 900;
proxy_max_temp_file_size 10240m;
proxy_pass_header Server;
proxy_cookie_path ~*^/.* /;
proxy_pass http://{{ appserver }}:8081/artifactory/;
proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
}
Thanks for the tips.
Cheers,
Ricardo
I have a Node.js app server sitting behind an Nginx configuration that has been working well. I'm anticipating some load increase and figured I'd get ahead of it by setting up another Nginx to serve the static files on the Node.js app server. So, essentially, I have set up an Nginx reverse proxy in front of Nginx & Node.js.
When I reload Nginx and let it start serving the requests (Nginx<->Nginx) on the /publicfile/ routes, I notice a SIGNIFICANT decrease in speed. Something that took Nginx<->Node.js around 3 seconds now takes Nginx<->Nginx ~15 seconds!
I'm new to Nginx and have spent the better part of the day on this, and finally decided to post for some community help. Thanks!
The web-facing Nginx nginx.conf:
http {
# Main settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
client_header_timeout 1m;
client_body_timeout 1m;
client_header_buffer_size 2k;
client_body_buffer_size 256k;
client_max_body_size 256m;
large_client_header_buffers 4 8k;
send_timeout 30;
keepalive_timeout 60 60;
reset_timedout_connection on;
server_tokens off;
server_name_in_redirect off;
server_names_hash_max_size 512;
server_names_hash_bucket_size 512;
# Log format
log_format main '$remote_addr - $remote_user [$time_local] $request '
'"$status" $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
log_format bytes '$body_bytes_sent';
access_log /var/log/nginx/access.log main;
# Mime settings
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Compression
gzip on;
gzip_comp_level 9;
gzip_min_length 512;
gzip_buffers 8 64k;
gzip_types text/plain text/css text/javascript
application/x-javascript application/javascript;
gzip_proxied any;
# Proxy settings
#proxy_redirect of;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass_header Set-Cookie;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;
real_ip_header CF-Connecting-IP;
# SSL PCI Compliance
# - removed for brevity
# Error pages
# - removed for brevity
# Cache
proxy_cache_path /var/cache/nginx levels=2 keys_zone=cache:10m inactive=60m max_size=512m;
proxy_cache_key "$host$request_uri $cookie_user";
proxy_temp_path /var/cache/nginx/temp;
proxy_ignore_headers Expires Cache-Control;
proxy_cache_use_stale error timeout invalid_header http_502;
proxy_cache_valid any 3d;
proxy_http_version 1.1; # recommended with keepalive connections
# WebSocket proxying - from http://nginx.org/en/docs/http/websocket.html
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
map $http_cookie $no_cache {
default 0;
~SESS 1;
~wordpress_logged_in 1;
}
upstream backend {
# my 'backend' server IP address (local network)
server xx.xxx.xxx.xx:80;
}
# Wildcard include
include /etc/nginx/conf.d/*.conf;
}
The web-facing Nginx server block that forwards the static file requests to the Nginx behind it (on another box):
server {
listen 80 default;
access_log /var/log/nginx/nginx.log main;
# pass static assets on to the app server nginx on port 80
location ~* (/min/|/audio/|/fonts/|/images/|/js/|/styles/|/templates/|/test/|/publicfile/) {
proxy_pass http://backend;
}
}
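Incidentally, the cache zone declared in nginx.conf above is never referenced in this server block; a hedged sketch of wiring it up for the static routes (whether it helps this particular slowdown is a separate question):
location ~* (/min/|/audio/|/fonts/|/images/|/js/|/styles/|/templates/|/test/|/publicfile/) {
    proxy_pass http://backend;
    proxy_cache cache;                 # keys_zone defined in the http block
    proxy_cache_bypass $no_cache;      # honours the map defined in the http block
    proxy_no_cache $no_cache;
}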
And finally the "backend" server:
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
sendfile_max_chunk 32;
# server_tokens off;
# server_names_hash_bucket_size 64;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
server {
root /home/admin/app/.tmp/public;
listen 80 default;
access_log /var/log/nginx/app-static-assets.log;
location /publicfile {
alias /home/admin/APP-UPLOADS;
}
}
}
keenanLawrence mentioned the sendfile_max_chunk directive in the comments above.
After setting sendfile_max_chunk to 512k, I saw a significant speed improvement in static file delivery (from disk) by Nginx.
I experimented with values of 8k, 32k, 128k, and finally 512k. The optimal chunk size seems to vary per server, depending on the content being delivered, the threads available, and the server request load.
I also noticed another significant bump in performance when I changed worker_processes auto; to worker_processes 2;, which went from running a worker process on every CPU to using only two. In my case this was more efficient, since the Node.js app servers also run on the same machine and they also perform operations on the CPUs.
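Putting those two changes together, the relevant part of the backend box's configuration would end up along these lines (a sketch reconstructed from the description above, not the full file):
worker_processes 2;              # pinned instead of "auto", leaving CPUs free for the Node.js processes
http {
    sendfile on;
    sendfile_max_chunk 512k;     # was 32 (bytes), far too small for serving static files
    tcp_nopush on;
    tcp_nodelay on;
    # remaining http/server settings unchanged from the backend config shown earlier
}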