It seems we are facing a memory leak with nginx.
Setup:
Nginx running as a deployment in GKE
Nginx version 1.20.2
Nginx is used to stream HLS. We write chunk files to a Google Filestore (NFS service), which is mounted on /var/www/html/.
Nginx never recovers memory; usage just keeps growing. Nginx configuration:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
default.conf:
server {
listen 80;
server_name localhost;
proxy_buffering off;
proxy_buffer_size 4k;
proxy_buffers 64 4k;
proxy_busy_buffers_size 16k;
proxy_cache_valid 200 302 1m;
proxy_cache_valid 404 60m;
proxy_cache_use_stale error timeout invalid_header updating;
proxy_redirect off;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Methods' '*';
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
add_header 'Access-Control-Max-Age' 1728000;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location /stub_status {
stub_status on;
access_log on;
allow all;
}
}
}
Screenshot of our internal monitoring system
First of all, what does your chart show? Memory usage of the nginx worker processes, or memory utilization of the whole system?
If it is nginx memory growth, it may be related to a known issue (essentially with OpenSSL), see https://trac.nginx.org/nginx/ticket/2316
So either apply the patch suggested by Maxim in that ticket, try the workaround he suggested in the last comment, or upgrade to a newer version of OpenSSL (PKCS11 engine) or even of nginx (especially if it is linked statically).
There are plenty of OpenSSL-related leak reports, see for example https://github.com/kubernetes/ingress-nginx/issues/7647 or the issues linked within. To verify you are not affected by OpenSSL, test without SSL/TLS/https and check whether memory usage still grows.
That said, I don't see any memory leak with vanilla nginx 1.20.2 (without any patch, built with OpenSSL 1.1.1k) when testing a similar configuration (I don't see a proxy_pass directive in your config, so I was simply proxying to an http/https upstream too). No leak is reproducible at all.
If it is high system memory usage, it may simply be the OS page cache or NFS buffering; see https://askubuntu.com/a/1393696/1384131 for a similar question.
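Following the suggestion to test without SSL/TLS, a minimal plain-HTTP server block (a sketch reusing the paths from the question's config; the port is arbitrary) lets you serve the same content while watching whether worker memory still grows:

```nginx
# Sketch: TLS-free test server, assuming the same document root as the question.
# If per-worker RSS keeps growing here too, OpenSSL is off the hook.
server {
    listen 8081;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}
```

Drop this into a separate conf.d file, point a load generator at port 8081, and compare the per-worker RSS trend against the TLS-enabled setup.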
Overview
Okay, before I start, let me say I don't know much about Nginx and how its routing works. What I know I learned in about a week. I'm more of an Apache type of guy, but I'm working on a large-scale project and would prefer a faster server rather than just Apache, so I decided on Nginx.
This issue relates to CSS/JS files not being resolved within the browser during page rendering for the frontend user.
I've spent over 3 days messing around and keep running into issues. I wouldn't be shocked if it's an easy fix, so if you know Nginx and want to help me solve this, I would greatly appreciate a helping hand.
So I reckon any devs who want to help will need a copy of my Nginx config. It's probably also worth mentioning that I am using a hosting panel called aaPanel on my Ubuntu 20.04 server, hosted in the cloud with IBM.
Other things to note:
aaPanel has two Nginx config files (that I'm aware of), and I've messed around with each. If I'm understanding this correctly, aaPanel uses a master Nginx config file plus a per-website copy of it. I fiddled with both, reverting changes, reloading Nginx, etc.
The frontend errors I have received via the Opera console (or any other browser's console):
Failed to load resource: net::ERR_HTTP2_PROTOCOL_ERROR
Failed to load resource: net::ERR_CONNECTION_CLOSED
The main issue and some things I've identified
CSS/JS/IMG/Any MIME TYPE files do not load/resolve.
Any kind of MIME type fails to load, even if I visit the file directly in the browser.
Nginx (I am 90% sure it is Nginx) is appending duplicates of my domain name to the path for these files, as seen below. (Note that this is not the complete URL but a small fraction of it; the full URL is over 1 MB long, which leads me to believe it's an endless-loop issue.)
https:// domain .net /domain.net/domain.net/domain.net/domain.net/filetype.exstension
(JS/IMG/CSS)
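For what it's worth, one hypothetical way to get this exact symptom is a rewrite replacement that lacks a leading slash: the hostname-like text is then treated as part of the new URI and gets prepended again on every rewrite pass. Purely as an illustration (the real rules live in the included /www/server/panel/vhost/rewrite/domain.net.conf, which is not shown here):

```nginx
# Hypothetical illustration only, not the poster's actual rules:
# rewrite ^/(.*)$ domain.net/$1 last;    # no leading slash: "domain.net/" piles up each pass
# rewrite ^/(.*)$ /index.php?/$1 last;   # leading slash: rewritten once, then resolved
```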
The two Nginx config files I am aware of within aaPanel
My main website's NGINX CONFIG FILE:
server
{
listen 80;
listen 443 ssl http2;
server_name solidbets.net;
index index.php;
root /www/wwwroot/solidbets.net;
#SSL-START SSL related configuration, do NOT delete or modify the next line of commented-out 404 rules
#error_page 404/404.html;
ssl_certificate /www/server/panel/vhost/cert/domain.net/fullchain.pem;
ssl_certificate_key /www/server/panel/vhost/cert/domain.net/privkey.pem;
ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;
#Below ssl_ciphers changed for security PURPOSES
ssl_ciphers EECDH+CCHA20:EEH+CHACHA20-draft:EEH+AES128:RSA+AES128:EDH+A56:RSA+A6:+3DES:RSA+3DES:!MD5;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
add_header Strict-Transport-Security "max-age=31536000";
error_page 497 https://$host$request_uri;
#SSL-END
#ERROR-PAGE-START Error page configuration, allowed to be commented, deleted or modified
#error_page 404 /404.html;
#error_page 502 /502.html;
#ERROR-PAGE-END
#PHP-INFO-START PHP reference configuration, allowed to be commented, deleted or modified
include enable-php-74.conf;
#PHP-INFO-END
#REWRITE-START URL rewrite rule reference, any modification will invalidate the rewrite rules set by the panel
include /www/server/panel/vhost/rewrite/domain.net.conf;
#REWRITE-END
# Forbidden files or directories
location ~ ^/(\.user.ini|\.htaccess|\.git|\.svn|\.project|LICENSE|README.md)
{
return 404;
}
# Directory verification related settings for one-click application for SSL certificate
location ./* {
include /www/server/nginx/conf/mime.types;
}
access_log /www/wwwlogs/domain.net.log;
error_log /www/wwwlogs/domain.net.error.log;
}
My MASTER NGINX CONF File for the server:
user www www;
worker_processes auto;
error_log /www/wwwlogs/nginx_error.log crit;
pid /www/server/nginx/logs/nginx.pid;
worker_rlimit_nofile 51200;
stream {
log_format tcp_format '$time_local|$remote_addr|$protocol|$status|$bytes_sent|$bytes_received|$session_time|$upstream_addr|$upstream_bytes_sent|$upstream_bytes_received|$upstream_connect_time';
access_log /www/wwwlogs/tcp-access.log tcp_format;
error_log /www/wwwlogs/tcp-error.log;
include /www/server/panel/vhost/nginx/tcp/*.conf;
}
events
{
use epoll;
worker_connections 51200;
multi_accept on;
}
http
{
#AAPANEL_FASTCGI_CONF_BEGIN
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_path /dev/shm/nginx-cache/wp levels=1:2 keys_zone=WORDPRESS:100m inactive=60m max_size=1g;
fastcgi_cache_use_stale error timeout invalid_header http_500;
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
#AAPANEL_FASTCGI_CONF_END
include mime.types;
#include luawaf.conf;
include proxy.conf;
default_type application/octet-stream;
server_names_hash_bucket_size 512;
client_header_buffer_size 32k;
large_client_header_buffers 4 32k;
client_max_body_size 50m;
sendfile on;
tcp_nopush on;
keepalive_timeout 60;
tcp_nodelay on;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
fastcgi_buffer_size 64k;
fastcgi_buffers 4 64k;
fastcgi_busy_buffers_size 128k;
fastcgi_temp_file_write_size 256k;
fastcgi_intercept_errors on;
gzip on;
gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_http_version 1.1;
gzip_comp_level 2;
gzip_types text/plain application/javascript application/x-javascript text/javascript text/css application/xml;
gzip_vary on;
gzip_proxied expired no-cache no-store private auth;
gzip_disable "MSIE [1-6]\.";
limit_conn_zone $binary_remote_addr zone=perip:10m;
limit_conn_zone $server_name zone=perserver:10m;
server_tokens off;
access_log off;
server
{
listen 888;
server_name phpmyadmin;
index index.html index.htm index.php;
root /www/server/phpmyadmin;
location ~ /tmp/ {
return 403;
}
#error_page 404 /404.html;
include enable-php.conf;
location ~ /\.
{
deny all;
}
access_log /www/wwwlogs/access.log;
}
include /www/server/panel/vhost/nginx/*.conf;
}
URL REWRITE OPTIONS
If anyone has a solution to this very frustrating problem, which may be obvious, please help.
I have a service which gets analytics data from multiple sites.
Now, we can't send a POST request, as the browser cancels the request when the tab is closed or switched, so we use a GET request inside an img tag, which is sent to the server even if the window is closed.
Since it's a GET request, the request structure looks like this:
http://localhost/log?event=test&data=base64EncodedJSON
Now, if the JSON is big, the base64-encoded JSON will also be big.
If the URL is too long, nginx returns a 414 error.
After some research, I found that large_client_header_buffers should be increased to allow longer URLs, but for some reason it doesn't seem to work.
Here's the nginx configuration
nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
# Tried Fix for 414 error
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
client_max_body_size 24M;
client_body_buffer_size 128k;
client_header_buffer_size 5120k;
large_client_header_buffers 16 5120k;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
conf.d/default.conf
server {
listen 80;
#listen 443 ssl;
#ssl_certificate /etc/nginx/ssl/server.pem;
#ssl_certificate_key /etc/nginx/ssl/server.key;
server_name _; #TODO: restrict server domain
root /var/www/html/;
location / {
proxy_pass http://logger:8888/;
proxy_cache off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
But even after setting large_client_header_buffers to 5120k, I still get a 414 error for big URLs.
The fastcgi_buffers and fastcgi_buffer_size probably have to be adjusted in line with large_client_header_buffers.
Additionally, it's advisable to keep the values as small as possible. Especially for testing, I'd increase them slowly and check whether a request of the configured size gets through; if not, check the exact size and compare it to the configuration options that could be related.
Besides that, client_header_buffer_size can be kept small, like 1k or 2k; if a request line or header is larger, large_client_header_buffers kicks in.
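As a sketch of that advice (real directive names, illustrative sizes): keep the common-case buffer small and enlarge only the fallback, then grow it step by step while measuring the actual request-line length:

```nginx
http {
    # Common case: almost all request lines and headers fit in a small buffer.
    client_header_buffer_size 2k;

    # Fallback for long URIs; nginx answers 414 only when the request line
    # exceeds these buffers. Raise the 32k gradually instead of jumping
    # straight to multi-megabyte values.
    large_client_header_buffers 4 32k;
}
```

Also make sure the directives end up in a context that applies to the server block actually handling the request, then reload nginx and retest with a URL of known length.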
I have been trying to set up a local docker container that would host NGINX server. To start with, here is my Dockerfile:
# Set nginx base image
FROM nginx
# File Author / Maintainer
MAINTAINER myuser "myemail@mydomain.com"
# Copy custom configuration file from the current directory
COPY nginx.conf /etc/nginx/nginx.conf
I built this file using the docker build command, and when I listed the images, I could see this image in the list.
Now, I tried to run this newly created image, which resulted in an error:
my-MacBook-Pro:nginx-docker me$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
myrepo nginx-latest 0d73419e8da9 12 minutes ago 182.8 MB
hello-world latest c54a2cc56cbb 13 days ago 1.848 kB
nginx latest 0d409d33b27e 6 weeks ago 182.8 MB
my-MacBook-Pro:nginx-docker me$ docker run -it myrepo:nginx-latest
2016/07/15 07:07:35 [emerg] 1#1: open() "/etc/nginx/logs/access.log" failed (2: No such file or directory)
nginx: [emerg] open() "/etc/nginx/logs/access.log" failed (2: No such file or directory)
The path to the log file is configured in my nginx.conf which is as below:
#user nobody;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
server {
listen 8080;
server_name localhost;
root /Users/me/Projects/Sandbox/my-app;
#charset koi8-r;
access_log logs/host.access.log main;
#
# Wide-open CORS config for nginx
#
location / {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
}
# /api will server your proxied API that is running on same machine different port
# or another machine. So you can protect your API endpoint not get hit by public directly
location /api {
proxy_pass http://localhost:9000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
add_header Access-Control-Allow-Origin *;
proxy_set_header Access-Control-Allow-Origin $http_origin;
}
error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
include servers/*;
}
When I now try to run this NGINX image, I get the following error:
2016/07/15 07:07:35 [emerg] 1#1: open() "/etc/nginx/logs/access.log" failed (2: No such file or directory)
nginx: [emerg] open() "/etc/nginx/logs/access.log" failed (2: No such file or directory)
What should I do to fix this? Also, what should that path be, and where? I suppose it is a path on the underlying OS that is exposed by Docker?
The base image in your Dockerfile is nginx (nginx:latest, to be exact). It comes with a pre-configured nginx configuration from the Debian nginx package. You may inspect the container yourself (docker run -it --rm nginx /bin/bash) and look at the files and directories to learn a few facts about it:
it provides an nginx user
it provides the /var/log/nginx directory, owned by root
it provides access.log and error.log in that directory, writable by anyone
(Dockerfile for the base Nginx image is here)
Your configuration:
runs nginx as the nginx user (the default)
tries to write log files into /etc/nginx/logs
Apparently, this directory does not exist because no one has created it. And even if it existed, it would need to be writable by the nginx user.
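Given those facts, the smallest fix is arguably not to create /etc/nginx/logs at all, but to point the logs at the directory the image already ships (a sketch against the nginx.conf from the question):

```nginx
# Use the log locations provided by the official nginx image.
error_log  /var/log/nginx/error.log notice;

http {
    # ...
    access_log /var/log/nginx/access.log main;

    server {
        # ...
        access_log /var/log/nginx/host.access.log main;
    }
}
```

The alternative is a RUN mkdir -p /etc/nginx/logs step in the Dockerfile, with ownership adjusted so the nginx user can write there.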
I recently had to set up an nginx server on a CentOS 7 machine in order to run the Dataiku software.
Everything seems to run fine, but once I try to access the pages I get absolutely nothing.
With elinks locally I manage to get the nginx default web page, but not from my browser, so I think it comes from my nginx configuration.
Here is my nginx.conf:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
And here is the included default.conf file:
server {
listen 80 default;
server_name _;
#charset koi8-r;
#access_log /var/log/nginx/log/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
I really need this server running properly and accessible. Do you have any ideas?
Thank you for reading.
You should add a new rule to the public zone, because CentOS 7 ships with firewalld.
Try:
firewall-cmd --zone=public --add-service=http
and go ahead!
Add the rule to the permanent set and reload FirewallD:
sudo firewall-cmd --zone=public --add-service=http --permanent
sudo firewall-cmd --reload
That should work!
You are missing the proxy_pass configuration, which is what actually forwards requests arriving from the outside (HTTP port 80 in your case) to the backend:
server {
# Host/port on which to expose Data Science Studio to users
listen 80;
server_name _;
location / {
# Base url of the Data Science Studio installation
proxy_pass http://DSS_HOST:DSS_PORT/;
proxy_redirect off;
# Allow long queries
proxy_read_timeout 3600;
proxy_send_timeout 600;
# Allow large uploads
client_max_body_size 0;
# Allow protocol upgrade to websocket
proxy_http_version 1.1;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
Official documentation is pretty clear on that: http://doc.dataiku.com/dss/latest/installation/reverse_proxies.html
Make sure your nginx is up to date so it can serve WebSocket requests.
I need to do two things: first, set the expiration header to 30d, and second, enable the PageSpeed module. Neither works so far. This is my nginx.conf file:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
expires 30d;
add_header Pragma public;
add_header Cache-Control "public";
}
include /etc/nginx/conf.d/*.conf;
}
To turn on PageSpeed, you first need to build nginx from source with the PageSpeed module.
It's very easy! You can just follow Google's instructions here and then here.
After you have compiled nginx from source with the PageSpeed module enabled, you can add this to your conf:
pagespeed on;
pagespeed FileCachePath /var/ngx_pagespeed_cache;
To set the expiration header, I think it's better to put your code inside a server block, and then inside the main location block. See this blog post, though note it uses an if clause, if you don't mind.
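Concretely, a location block is not allowed directly inside the http context (nginx rejects such a configuration at startup), which would explain why the snippet in the question has no effect. A sketch of the move, reusing the regex from the question (the server_name and root are placeholder assumptions):

```nginx
http {
    # ... existing http-level settings ...

    server {
        listen 80;
        server_name example.com;        # assumption: adjust to your site
        root /usr/share/nginx/html;     # assumption: adjust to your docroot

        # Moved from http level: expires is valid here.
        location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
            expires 30d;
            add_header Pragma public;
            add_header Cache-Control "public";
        }
    }
}
```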
If you are optimizing your website, also consider enabling gzip in your conf. It compresses content before sending it to your clients, which saves you a lot of bandwidth and, I think, reduces latency (faster loads).
If you decide to use gzip together with PageSpeed, make sure to add the line below to your conf:
pagespeed FetchWithGzip on;