Turning on gzip cuts off the response from upstream - nginx

My upstream server returns extremely large JSON responses (5~8GB).
I'm trying to shrink those responses by enabling gzip in nginx. This is my config:
server {
    listen 0.0.0.0:8080;

    location / {
        gzip on;
        gzip_comp_level 1;
        gzip_types *;
        gzip_proxied any;

        proxy_pass http://localhost:8081;
    }
}
This config technically works. At least, it works for smaller responses (~150MB before compression). But when I try to download a large response (~7.5GB before compression) with
curl -v --compressed --output /path_to_file -X POST http://localhost:8080 --data '{data}'
the transfer gets cut off partway through, i.e. I see this message from curl:
curl: (18) transfer closed with outstanding read data remaining
and the response itself is incomplete (on average only ~5.3GB of the ~7.5GB arrives).
I also see this log from nginx:
2022/04/20 01:18:45 [error] 37#37: *135 upstream prematurely closed connection while reading upstream, client: 127.0.0.1, server: , request: "POST / HTTP/1.1", upstream: "http://127.0.0.1:8081/", host: "localhost:8080"
I tried increasing proxy_max_temp_file_size, and I tried disabling buffering. Nothing works.
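For reference, a sketch of what those attempts looked like in the location block (the values here are illustrative, not the exact ones I used):
location / {
    # Attempt 1: raise the cap on the buffered response temp file (default 1024m)
    proxy_max_temp_file_size 10240m;

    # Attempt 2: skip buffering entirely and stream straight to the client
    proxy_buffering off;

    proxy_pass http://localhost:8081;
}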
Any ideas?
Edit: this is the nginx.conf that's built into the Docker image I'm using:
worker_processes auto;
error_log "/opt/bitnami/nginx/logs/error.log";
pid "/opt/bitnami/nginx/tmp/nginx.pid";

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log "/opt/bitnami/nginx/logs/access.log" main;

    add_header X-Frame-Options SAMEORIGIN;

    client_body_temp_path "/opt/bitnami/nginx/tmp/client_body" 1 2;
    proxy_temp_path       "/opt/bitnami/nginx/tmp/proxy" 1 2;
    fastcgi_temp_path     "/opt/bitnami/nginx/tmp/fastcgi" 1 2;
    scgi_temp_path        "/opt/bitnami/nginx/tmp/scgi" 1 2;
    uwsgi_temp_path       "/opt/bitnami/nginx/tmp/uwsgi" 1 2;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay off;

    gzip on;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_proxied any;
    gzip_types text/plain text/css application/javascript text/xml application/xml+rss;

    keepalive_timeout 65;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    client_max_body_size 80M;
    server_tokens off;
    absolute_redirect off;
    port_in_redirect off;

    include "/opt/bitnami/nginx/conf/server_blocks/*.conf";

    # HTTP Server
    server {
        # Port to listen on, can also be set in IP:PORT format
        listen 8080;

        include "/opt/bitnami/nginx/conf/bitnami/*.conf";

        location /status {
            stub_status on;
            access_log off;
            allow 127.0.0.1;
            deny all;
        }
    }
}

In a similar case, adding this line to the location block solved the problem:
proxy_http_version 1.1;
nginx speaks HTTP/1.0 to upstreams by default; it seems large gzipped responses don't survive that combination, and forcing HTTP/1.1 on the proxy connection fixes it.
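For reference, a minimal sketch of the question's location block with that fix applied:
location / {
    gzip on;
    gzip_comp_level 1;
    gzip_types *;
    gzip_proxied any;

    # nginx proxies over HTTP/1.0 unless told otherwise; force 1.1 so the
    # long gzipped transfer isn't cut off mid-stream
    proxy_http_version 1.1;
    proxy_pass http://localhost:8081;
}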

Related

Proxy Pass to the same domain but different endpoint using TLS

I'm trying to add two endpoints to my nginx.conf file. The main purpose is to be able to send a POST request to the first endpoint so I can log the body to the console (stdout). The second endpoint is needed because I have to proxy_pass to another endpoint in order to get the body onto stdout (I'm following this tutorial: https://matthias-kainer.de/blog/posts/logging-client-console-errors-with-nginx/).
The problem I'm facing is that the server name can be anything, so I have the server_name directive set to _. I have tried a lot of things, but I always get some error: 502 Bad Gateway, 400 Bad Request, or 400 No required SSL certificate was sent.
My nginx.conf file (and my current attempt) is this:
worker_processes 1;
error_log /dev/stdout warn;

events {
    worker_connections 1024;
}

http {
    resolver 127.0.0.11 valid=30s;
    resolver_timeout 20s;

    access_log stdout;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    log_format error_trace '$remote_addr - $remote_user $request_time $upstream_response_time '
                           '[$time_local] "$request" $status $body_bytes_sent "Client Error: $request_body" "$http_referer" '
                           '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /dev/stdout main;
    error_log /dev/stdout debug;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    default_type application/octet-stream;

    gzip on;
    gzip_http_version 1.1;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_proxied any;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;
    gzip_buffers 16 8k;
    gzip_disable "MSIE [1-6].(?!.*SV1)";

    server {
        listen 60000 ssl http2 default_server;
        root /var/www/html;
        server_name _;
        server_tokens off;

        client_body_buffer_size 1k;
        client_header_buffer_size 1k;
        client_max_body_size 1k;
        large_client_header_buffers 4 16k;

        ssl_certificate /some/path/some-file.pem;
        ssl_certificate_key /some/path/some-file-key.key;
        ssl_trusted_certificate /some/path/some-certificate.pem;
        ssl_session_cache shared:SSL:50m;
        ssl_session_timeout 180m;
        ssl_session_tickets off;
        ssl_protocols TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_verify_client on;
        ssl_verify_depth 10;
        ssl_client_certificate /some/path/some-certificate.pem;

        add_header Strict-Transport-Security "max-age=31536000" always;
        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";

        location = / {
            try_files $uri$args $uri$args/ /index.html;
        }

        location = /client_error_trace {
            access_log /dev/stdout error_trace;
            proxy_pass https://127.0.0.1:60000/client_error_trace_proxy;
            proxy_redirect off;
            proxy_set_header Host $host;
        }

        location = /client_error_trace_proxy {
            access_log off;
            return 200 'Error logged';
        }

        error_page 404 /;
    }
}
With this file, I'm getting the error 400 Bad Request - No required SSL certificate was sent. Any hint or help would be very appreciated >_<
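One thing worth checking, as a hedged guess: with ssl_verify_client on, the internal proxy_pass back to https://127.0.0.1:60000 is itself a TLS client, and by default nginx presents no client certificate on that hop, which would produce exactly "400 No required SSL certificate was sent". A minimal sketch of presenting one on the loopback hop (the certificate paths are placeholders, assuming you have a client certificate the server trusts):
location = /client_error_trace {
    access_log /dev/stdout error_trace;

    # Present a client certificate so the internal hop passes
    # ssl_verify_client on the same server block (paths are placeholders).
    proxy_ssl_certificate     /some/path/internal-client.pem;
    proxy_ssl_certificate_key /some/path/internal-client.key;

    proxy_pass https://127.0.0.1:60000/client_error_trace_proxy;
    proxy_redirect off;
    proxy_set_header Host $host;
}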

Nginx https reverse proxy is too slow

I have implemented an Nginx cache with an https reverse proxy on CentOS. The response time is more than 1.5 seconds per request. The nginx server has 4 cores and 8 GB of RAM.
My configuration looks like this (nginx.conf):
user nginx;
worker_processes auto;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 80000;
    use epoll;
    multi_accept on;
}

http {
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    log_format rt_cache '$remote_addr - $upstream_cache_status [$time_local] '
                        '"$request" $status $body_bytes_sent '
                        '"$http_referer" "$http_user_agent"';

    # Below pattern will print:
    # Time stamp | Client IP | Client dev app name | Request | Status returned | Time taken in ms | Size returned in bytes | Referer | Hit or miss | User agent
    log_format bf_log_format '[$time_local]|'
                             '$remote_addr|'
                             '$http_x_developer_username|$http_x_forwarded_for|'
                             '"$request"|'
                             '$status|$upstream_response_time|$body_bytes_sent|'
                             '"$http_referer"|'
                             '"$upstream_cache_status"|'
                             '"$http_user_agent"';
    log_format json_log_format escape=json '{'
        '"time": "$time_iso8601",'
        '"trace_id": "$request_id",'
        '"http": {'
            '"body_bytes_sent": "$body_bytes_sent",'
            '"x_developer_username": "$http_x_developer_username",'
            '"remote_addr": "$remote_addr",'
            '"method": "$request_method",'
            '"request": "$request_uri",'
            '"schema": "$scheme",'
            '"request_time": "$request_time",'
            '"host": "$host",'
            '"uri": "$uri",'
            '"user_agent": "$http_user_agent",'
            '"status": "$status"'
        '},'
        '"proxy": {'
            '"host": "$proxy_host"'
        '},'
        '"upstream": {'
            '"response_time": "$upstream_response_time sec",'
            '"cache_status": "$upstream_cache_status"'
        '}'
    '}';

    # access_log /var/log/nginx/access.log main;
    # access_log /var/log/nginx/access.log json_log_format;
    access_log off;

    sendfile on;
    sendfile_max_chunk 512k;
    # directio 4m;
    # directio_alignment 512;
    tcp_nopush on;
    tcp_nodelay on;
    reset_timedout_connection on;
    keepalive_requests 100000;
    types_hash_max_size 2048;

    # reduce the data that needs to be sent over network -- for testing environment
    gzip on;
    # gzip_static on;
    gzip_min_length 10240;
    gzip_comp_level 1;
    gzip_vary on;
    gzip_disable msie6;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types
        text/css
        text/javascript
        text/xml
        text/plain
        text/x-component
        application/javascript
        application/x-javascript
        application/json
        application/xml
        application/rss+xml
        application/atom+xml
        font/truetype
        font/opentype
        application/vnd.ms-fontobject
        image/svg+xml;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    proxy_cache_path /opt/nginx/cache levels=1:2 keys_zone=api-cache:3000m max_size=100g inactive=43200m use_temp_path=off;
    proxy_temp_path /opt/nginx/cache/other;

    include /etc/nginx/conf.d/ssl.conf;
}
My ssl.conf looks like this:
server {
    server_name _;
    root /usr/share/nginx/html;
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl;

    ssl_certificate "/etc/private/ssl/cert.pem";
    ssl_certificate_key "/etc/private/ssl/key.pem";
    # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    keepalive_timeout 100;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }

    location /health {
        default_type application/json;
        return 200 '{"status":"UP"}';
    }

    location /nginx-status {
        stub_status;
    }

    location /trellotest {
        proxy_cache_bypass $http_no_cache_purge $arg_nocache;
        proxy_cache_methods GET POST;
        add_header Cache-Control "public";
        proxy_cache api-cache;
        proxy_cache_valid 200 40320m;
        add_header X-Cache $upstream_cache_status;
        add_header X-Time $request_time;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
        proxy_pass https://mytrelloapp;
    }
}
If possible, could anyone please advise me on whether there's any way to improve the above configuration?
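Not an authoritative answer, but one common cause of ~1.5 s per request behind an HTTPS proxy is a fresh TCP plus TLS handshake to the upstream on every request. A sketch of keeping upstream connections alive instead; the upstream name trello_backend is made up, and the location would replace the proxy settings in the existing /trellotest block:
upstream trello_backend {
    server mytrelloapp:443;
    keepalive 32;                    # idle upstream connections kept open per worker
}

location /trellotest {
    proxy_http_version 1.1;
    proxy_set_header Connection "";  # required for upstream keepalive
    proxy_ssl_server_name on;        # send SNI if the backend expects it
    # ...the existing proxy_cache directives stay as they are...
    proxy_pass https://trello_backend;
}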

Nginx not scaling more than 750 Connections when used as Reverse Proxy

I have a mobile application in which we use Nginx as a reverse proxy that routes requests to an app server; the Nginx on the app server passes the requests on to Node.js for processing.
We get a 504 Gateway Timeout error when we exceed 750 users, and we see the error below in the Nginx logs.
upstream timed out (110: Connection timed out) while connecting to upstream, client: LoadGenerator_IP, server: WebserverDNS, request: "GET /api/sample/profile HTTP/1.1", upstream: "https://app_server:443/api/sample/profile", host: "webserver_IP"
I tried hitting the app server's Nginx directly and could handle more than 1000 users, but through the reverse proxy we get the error above.
I tried a lot of Linux system and Nginx settings, but nothing overcame this issue.
nginx.conf:
user nginx;
worker_processes auto;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/error.log error;
pid /var/run/nginx.pid;

events {
    worker_connections 4096;
    use epoll;
    multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"'
                    'uct="$upstream_connect_time"'
                    'uht="$upstream_header_time"'
                    'urt="$upstream_response_time"'
                    'rt="$request_time "';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    #tcp_nodelay on;

    # to boost I/O on HDD we can disable access logs
    access_log off;

    keepalive_timeout 120;
    keepalive_requests 10000;

    # allow the server to close connection on non responding client, this will free up memory
    reset_timedout_connection on;

    # request timed out -- default 60
    client_body_timeout 10;

    # if client stop responding, free up memory -- default 60
    send_timeout 2;

    #gzip on;
    # gzip_static on;
    #gzip_min_length 10240;
    #gzip_comp_level 1;
    #gzip_vary on;
    #gzip_disable msie6;
    #gzip_proxied expired no-cache no-store private auth;
    #gzip_types
    #    # text/html is always compressed by HttpGzipModule
    #    text/css
    #    text/javascript
    #    text/xml
    #    text/plain
    #    text/x-component
    #    application/javascript
    #    application/x-javascript
    #    application/json
    #    application/xml
    #    application/rss+xml
    #    application/atom+xml
    #    font/truetype
    #    font/opentype
    #    application/vnd.ms-fontobject
    #    image/svg+xml;

    include /etc/nginx/conf.d/Default.conf;
}
Default.conf:
upstream app_server {
    server app_server_ip:443;
}

server {
    listen 80;
    listen 443 ssl backlog=32768;
    server_name someIP;
    server_tokens off;

    location ~ {
        proxy_pass https://app_server_ip;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 75;
        proxy_read_timeout 300;
        proxy_send_timeout 300;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }

    ssl_certificate /etc/ssl/certs/uatweb.crt;
    ssl_certificate_key /etc/ssl/certs/uatweb.key;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers On;
    ssl_ciphers ECDH+AESGCM:ECDH+CHACHA20:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS;

    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

server {
    listen 8114 ssl;
    server_name someServerName;
    ssl_certificate /etc/ssl/certs/uatweb.crt;
    ssl_certificate_key /etc/ssl/certs/uatweb.key;
    ssl_protocols TLSv1.1 TLSv1.2;

    location ~ {
        proxy_pass https://someIP;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Please let me know if any changes are required in the config files.
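One detail that stands out, offered as a hedged suggestion rather than a definitive fix: Default.conf defines an upstream app_server block, but proxy_pass points at the raw IP, so the pool is never used and every proxied request opens a fresh TLS connection to the app server. A sketch that routes through the named upstream with a keepalive pool (the pool size is illustrative):
upstream app_server {
    server app_server_ip:443;
    keepalive 64;                     # reuse upstream TLS connections across requests
}

location ~ {
    proxy_pass https://app_server;    # named upstream instead of the raw IP
    proxy_http_version 1.1;           # already set above; needed for keepalive
    proxy_set_header Connection "";
    # proxy_ssl_name may be needed if the backend checks the TLS name
    # ...remaining proxy_set_header / timeout directives unchanged...
}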

Flask Upstream prematurely closed connection

Please note, I have referred to THIS QUESTION; however, it did not fix the issue...
As you can see from the nginx error log, I am sending a POST request to /order-history. This runs a SQL query that takes about a minute, but the connection closes prematurely. The issue does not occur when the application is deployed with the Flask test server, obviously, as the logs point out :)
/var/log/nginx/error.log:
2018/05/30 15:34:04 [error] 12294#12294: *9 upstream prematurely closed connection while reading response header from upstream, client: 192.168.96.116, server: alpha2, request: "POST /order-history HTTP/1.1", upstream: "http://unix:/home/hleggio/myproject/myproject.sock:/order-history", host: "alpha2:5000", referrer: "http://alpha2:5000/query-selection"
/etc/nginx/nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    fastcgi_read_timeout 99999;
    proxy_read_timeout 99999;
    # server_tokens off;
    client_max_body_size 20M;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
/etc/nginx/sites-available/myproject:
server {
    listen 5000;
    server_name alpha2;

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/hleggio/myproject/myproject.sock;
        proxy_read_timeout 9999;
        proxy_connect_timeout 9999;
        proxy_request_buffering off;
        proxy_buffering off;
    }
}
I was able to solve this issue. It wasn't related to my NGINX configuration.
The problem resided in my Gunicorn setup.
In the systemd unit file that runs Gunicorn (/etc/systemd/system/myproject.service), I added the following to the ExecStart line:
--timeout 600
The file now looks like this:
[Unit]
Description=Gunicorn instance to serve myproject
After=network.target

[Service]
User=harrison
Group=www-data
WorkingDirectory=/home/harrison/myproject
Environment="PATH=/home/harrison/myproject/myprojectenv/bin"
ExecStart=/home/harrison/myproject/myprojectenv/bin/gunicorn --workers 3 --timeout 600 --bind unix:myproject.sock -m 007 wsgi:application

[Install]
WantedBy=multi-user.target
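After editing the unit file, the change won't take effect until systemd reloads it and the service is restarted (assuming the unit is installed as myproject.service):
sudo systemctl daemon-reload
sudo systemctl restart myproject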

https static content on NGINX

I've inherited a configuration from a colleague for a VPS where NGINX has been set up. I can currently serve dynamic content via http and https; however, static content like images, JavaScript and CSS is not loaded (it returns 404) when https is specified as the connection type.
As I've inherited the config, I'm not too sure where to start, although I have unsuccessfully tried NGINX's own guide of adding a location block under the http directive in nginx.conf, specifying a root to use:
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    server_tokens off;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    location / {
        root /var/www/;
    }
}
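One likely reason that attempt failed, offered as a hedged aside: location blocks aren't valid directly inside http; nginx only accepts them inside a server block (or nested in another location), so a config like the above won't even load. A minimal sketch of the same idea in a valid position, assuming the static files live under /var/www:
http {
    server {
        listen 443 ssl;
        # ...ssl_certificate / ssl_certificate_key as in the existing config...

        location / {
            root /var/www/;    # serves e.g. /var/www/css/site.css for /css/site.css
        }
    }
}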
I also get the same behaviour with NGINX disabled; however, I'm unable to find any documentation in my colleague's notes on what else could be serving content.
Any pointers in the right direction would be much appreciated!
nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    server_tokens off;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    log_format main '$remote_addr - $remote_user [$time_local] "$request "'
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    autoindex off;

    map $scheme $fastcgi_https { ## Detect when HTTPS is used
        default off;
        https on;
    }

    client_header_timeout 3000;
    client_body_timeout 3000;
    fastcgi_read_timeout 3000;
    client_max_body_size 32m;
    fastcgi_buffers 8 128k;
    fastcgi_buffer_size 128k;
    keepalive_timeout 10;

    gzip on;
    gzip_http_version 1.1;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_proxied any;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;
    gzip_buffers 16 8k;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    # Load config files from the /etc/nginx/conf.d directory
    include /etc/nginx/conf.d/*.conf;
}
Trawling through this, I found that Plesk had been adding an x-accel-internal directive into the server's location blocks.
By copying the template proxy.php from conf/templates/default/domain/service into custom/domain/service and commenting out that directive, nginx serves content correctly again after re-creating the config files via SSH:
/usr/local/psa/admin/bin/httpdmng --reconfigure-all
nginx -t && service nginx reload
