I tried to restart nginx with a command, but an error occurred.
When I run "sudo systemctl restart nginx", this happens:
Job for nginx.service failed because the control process exited with error code. See "systemctl status nginx.service" and "journalctl -xe" for details.
When I run "systemctl status nginx.service", this happens.
Mar 30 08:55:04 ip-172-31-22-186 nginx[2624]: nginx: [emerg] "proxy_buffers" directive invalid value in /etc/nginx/sites-enabled/...:19
Mar 30 08:55:04 ip-172-31-22-186 nginx[2624]: nginx: configuration file /etc/nginx/nginx.conf test failed
In the nginx.conf file:
location / {
    ....
    proxy_buffer_size 0M;
    proxy_buffers 4 0M;
    proxy_busy_buffers_size 0M;
    client_max_body_size 0M;
}
Is there a problem with the configuration here?
The proxy_buffers directive cannot be configured like this. Based on what these buffers are used for and how they are designed, you cannot set a buffer of 0M; that would mean a memory (page) size of zero.
proxy_buffers
Sets the number and size of the buffers used for reading a response from the proxied server, for a single connection. By default, the buffer size is equal to one memory page. This is either 4K or 8K, depending on a platform.
The proxy buffer size is equal to a memory page. To find your current memory page size, type:
getconf PAGE_SIZE
This should return 4096 (bytes), i.e. 4K.
So, as you can see, there is a reason why you can only use 4K or 8K, depending on your system architecture.
We have a great blog post about proxying in general.
https://www.nginx.com/blog/performance-tuning-tips-tricks/
By turning proxy_buffering on, you can configure the proxy buffers with the directives shown in the docs:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering
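If the intent behind the 0M values was to disable limits, note that 0 only has that meaning for client_max_body_size; the buffer directives need real sizes. A minimal sketch with non-zero, page-aligned values might look like this (the numbers are illustrative, not tuned for any particular workload):
location / {
    proxy_buffering on;             # on is already the default, shown here for clarity
    proxy_buffer_size 8k;           # roughly one memory page (or two on 4K-page systems)
    proxy_buffers 8 8k;             # 8 buffers of 8k each, per connection
    proxy_busy_buffers_size 16k;    # must be at least one buffer and less than the total minus one buffer
    client_max_body_size 100m;      # here 0 would mean "unlimited", unlike the buffer directives
}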
Related
I want to run SonarQube on my Ubuntu 18.04 server along with nginx (a Droplet at DigitalOcean).
Mostly I've followed these instructions. I've used Postgres instead of MySQL.
Nginx should accept the request and pass it to the localhost-address used by SonarQube (127.0.0.1:9000).
Nginx is running and working, and SSL is active and working. Here is my codequality.example.com.conf:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name codequality.example.com www.codequality.example.com;
    root /var/www/html;
    index index.html index.htm;
    access_log /var/log/nginx/codequality.example.com.access.log;
    error_log /var/log/nginx/codequality.example.com.error.log;
    ssl_certificate /etc/letsencrypt/live/codequality.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/codequality.example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
    }
}
The command systemctl status sonarqube gives me the following response:
● sonarqube.service - SonarQube service
Loaded: loaded (/etc/systemd/system/sonarqube.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2020-03-17 22:16:50 UTC; 5s ago
Process: 21796 ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop (code=exited, status=0/SUCCESS)
Process: 21855 ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start (code=exited, status=0/SUCCESS)
Main PID: 21918 (wrapper)
Tasks: 43 (limit: 2361)
CGroup: /system.slice/sonarqube.service
├─21918 /opt/sonarqube/bin/linux-x86-64/./wrapper /opt/sonarqube/bin/linux-x86-64/../../conf/wrapper.co
├─21922 java -Dsonar.wrapped=true -Djava.awt.headless=true -Xms8m -Xmx32m -Djava.library.path=./lib -cl
└─21954 /usr/lib/jvm/java-11-openjdk-amd64/bin/java -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyF
So I can assume that the SonarQube server is running correctly. Trying to access SonarQube via https://codequality.example.com results in a 502 error. The log file says:
2020/03/17 22:19:44 [error] 19598#19598: *233 connect() failed (111: Connection refused) while connecting to upstream, client: 79.254.63.100, server: codequality.example.com, request: "GET / HTTP/2.0", upstream: "http://127.0.0.1:9000/", host: "codequality.example.com"
Trying to access localhost (127.0.0.1:9000) during an SSH session via curl http://127.0.0.1:9000, I get the error:
curl: (7) Failed to connect to 127.0.0.1 port 9000: Connection refused
This is the log from SonarQube:
2020.03.17 22:38:53 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2020.03.17 22:38:53 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2020.03.17 22:38:54 INFO app[][o.s.a.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch
2020.03.17 22:38:54 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2020.03.17 22:38:55 INFO app[][o.e.p.PluginsService] no modules loaded
2020.03.17 22:38:55 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
ERROR: [2] bootstrap checks failed
1: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
2020.03.17 22:39:11 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [es]: 78
2020.03.17 22:39:11 INFO app[][o.s.a.SchedulerImpl] Process[es] is stopped
2020.03.17 22:39:11 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
What am I doing wrong?
The problem is not the nginx proxy. The log file you shared gives some insights:
max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
2020.03.17 22:39:11 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [es]: 78
2020.03.17 22:39:11 INFO app[][o.s.a.SchedulerImpl] Process[es] is stopped
Your SonarQube isn't running correctly because of a misconfiguration of Elasticsearch.
Check these links to find out how to adjust the limits:
General information:
https://www.elastic.co/guide/en/elasticsearch/reference/current/file-descriptors.html
How to change the settings:
https://www.elastic.co/guide/en/elasticsearch/reference/master/setting-system-settings.html
Make sure SonarQube is running correctly on localhost:9000 before moving on to the proxy configuration.
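A rough sketch of the kind of changes those pages describe, assuming the systemd unit shown above at /etc/systemd/system/sonarqube.service (exact paths and values are assumptions; the Elasticsearch docs are authoritative):
# Kernel setting flagged by the second bootstrap check (takes effect immediately)
sudo sysctl -w vm.max_map_count=262144
# Persist it across reboots
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf

# File descriptor limit: since SonarQube runs as a systemd service here, one way
# to raise it is a LimitNOFILE line in the [Service] section of the unit file:
#   LimitNOFILE=65536
#   LimitNPROC=4096
# followed by: sudo systemctl daemon-reload && sudo systemctl restart sonarqube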
I have an Artifactory behind nginx and uploading files larger than 4 GB fails. I am fairly certain that this is nginx's fault, because if the file is uploaded from/to localhost, no problem occurs.
nginx is set up to have client_max_body_size and client_body_timeout large enough for this not to be an issue.
Still, when uploading a large file (>4 GB) via curl, it fails after about half a minute. The only error message I get is HTTP 500 Internal Server Error; nothing is written to nginx's error logs.
The problem in my case was insufficient disk space on the root filesystem. I have a huge disk mounted on /home, but only had about 4 GB left on /. I assume that nginx was saving incoming request bodies there, and once the filesystem filled up, the request was aborted.
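If you suspect the same cause, a quick sketch of how to confirm it (the grep pattern assumes a Debian-style build that sets the temp path via a configure argument; your build may differ):
# How much space is left on / compared to /home?
df -h / /home

# Where does this nginx build spool request bodies by default?
nginx -V 2>&1 | tr ' ' '\n' | grep client-body-temp-path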
The way I fixed it was to add these lines to the nginx.conf file (not all of them are necessarily required):
http {
    (...)
    client_max_body_size 100G;
    client_body_timeout 300s;
    client_body_in_file_only clean;
    client_body_buffer_size 16K;
    client_body_temp_path /home/nginx/client_body_temp;
}
The last line is the important part: there I tell nginx to keep its temporary files under /home, where there is plenty of space.
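Depending on how nginx is started it may or may not create that directory itself, so it can be worth creating it up front and making sure the worker user can write to it, roughly like this (www-data is an assumption; check the user directive in your nginx.conf):
sudo mkdir -p /home/nginx/client_body_temp
sudo chown -R www-data:www-data /home/nginx/client_body_temp
sudo nginx -t && sudo systemctl reload nginx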
I am running GitLab on Debian using the package from the repository. Most of the time GitLab is very fast, but after longer idle periods GitLab is very slow or even times out (error 502). Once I also had a timeout on a remote git access (I could not reproduce the issue; it was a timeout on the internal API).
In my setup the Debian machine is behind another nginx proxy which also serves some other services just fine. I did the gitlab-cli checks and everything seems fine.
In the error log of my reverse proxy I only see connection timeouts:
[error] 8643#0: *4139 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.1.1.10, server: gitlab.mydomain.tld, request: "GET / HTTP/1.1", upstream: "http://{SERVER-IP}:80/", host: "gitlab.mydomain.tld"
I can see some errors in my unicorn_stderr.log
E, [2016-03-30T19:40:20.183991 #783] ERROR -- : worker=1 PID:16798 timeout (61s > 60s), killing
E, [2016-03-30T19:40:20.194969 #783] ERROR -- : reaped #<Process::Status: pid 16798 SIGKILL (signal 9)> worker=1
I, [2016-03-30T19:40:20.197554 #16871] INFO -- : worker=1 spawned pid=16871
I, [2016-03-30T19:40:20.197909 #16871] INFO -- : worker=1 ready
E, [2016-03-30T20:08:42.911429 #783] ERROR -- : worker=0 PID:16866 timeout (61s > 60s), killing
E, [2016-03-30T20:08:43.191151 #783] ERROR -- : reaped #<Process::Status: pid 16866 SIGKILL (signal 9)> worker=0
I, [2016-03-30T20:08:43.758363 #18728] INFO -- : worker=0 spawned pid=18728
I, [2016-03-30T20:08:44.108244 #18728] INFO -- : worker=0 ready
What I am a bit curious about is that there are no errors in the log of the nginx bundled with GitLab.
Some more system information:
#sudo gitlab-rake gitlab:env:info
System information
System: Debian 8.3
Current User: git
Using RVM: no
Ruby Version: 2.1.8p440
Gem Version: 2.5.1
Bundler Version:1.10.6
Rake Version: 10.5.0
Sidekiq Version:4.0.1
GitLab information
Version: 8.5.0
Revision: a513e09
Directory: /opt/gitlab/embedded/service/gitlab-rails
DB Adapter: postgresql
URL: http://gitlab.mydomain.tld
HTTP Clone URL: http://gitlab.mydomain.tld/some-group/some-project.git
SSH Clone URL: git@gitlab.mydomain.tld:some-group/some-project.git
Using LDAP: no
Using Omniauth: no
GitLab Shell
Version: 2.6.10
Repositories: /var/opt/gitlab/git-data/repositories
Hooks: /opt/gitlab/embedded/service/gitlab-shell/hooks/
Git: /opt/gitlab/embedded/bin/git
Edit:
My nginx config on the "external" reverse proxy looks like this:
server {
    listen 443;
    ssl on;
    server_name gitlab.mydomain.tld;
    access_log /var/log/nginx/gitlab.mydomain.tld.access.log;
    error_log /var/log/nginx/gitlab.mydomain.tld.error.log;
    ssl_certificate /etc/nginx/ssl/gitlab.mydomain.tld_unified.crt;
    ssl_certificate_key /etc/nginx/ssl/mydomain.tld.key;
    location / {
        proxy_pass http://gitlab:80;
        proxy_redirect default;
        proxy_set_header Host $http_host;
        proxy_set_header X_FORWARDED_PROTO "https";
        satisfy any;
    }
}
Edit2:
I took the suggested answer into account and also considered this source: https://github.com/gitlabhq/gitlabhq/blob/master/doc/install/requirements.md
I assigned 2GB RAM to the VM now, and also added one additional unicorn worker.
Edit3:
The problem seems to be solved by adding more memory and using 3 unicorn workers.
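For reference, with the Omnibus package the worker count and timeout are set in /etc/gitlab/gitlab.rb and applied with a reconfigure; a sketch using the 8.x-era key names (verify against the docs for your version):
# /etc/gitlab/gitlab.rb
unicorn['worker_processes'] = 3   # one per core plus one is a common rule of thumb
unicorn['worker_timeout'] = 60    # raise this if legitimate requests need longer

# then apply the change:
#   sudo gitlab-ctl reconfigure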
Jan,
I have a similar setup, although our box is dedicated to GitLab. Without knowing the specs of your server (GitLab likes memory) and the load on that box, I would suggest the following diagnostics:
Does your upstream nginx use identical parameters to the GitLab nginx configuration? They have tweaked a number of things, including timeouts.
What kind of requests result in timeouts? Some operations (like generating diffs) can take some time to render.
If you run the requests via SSH, do you also experience timeouts?
Have you checked the global logs in /var/log?
FYI: I had to enlarge my small GitLab installation to 4 GB RAM to stop it from throwing OOM errors.
Now I think I'd better go with Gogs or another alternative.
I have a huge problem on my site.
Please help me to fix it.
I have a site where users can download files from various other sites (e.g. one-click hosters like uploaded.net). We act like a proxy: the user generates a link and downloads the file directly; our script downloads nothing to the server. A little bit like a premium link generator, but different. And not illegal.
If a user downloads a file larger than 1 GB, the download is canceled when it reaches 1 GB.
In the log files I repeatedly found the error
"Upstream timed out (110: Connection timed out) while reading response"
I have tried increasing the settings, but that didn't help.
I tried the following:
1. nginx.conf:
fastcgi_send_timeout 300s;
fastcgi_read_timeout 300s;
2. nginx host file:
fastcgi_read_timeout 300;
fastcgi_buffers 8 128k;
fastcgi_buffer_size 256k;
3. PHP.ini:
max_execution_time = 60 (but my PHP script sets it to 0 automatically)
max_input_time = 60
memory_limit = 128M
4. PHP-FPM >> www.conf
pm.max_children = 25
pm.start_servers = 2
pm.min_spare_servers = 2
pm.max_spare_servers = 12
request_terminate_timeout = 300s
But nothing helps. What can I do to fix this problem?
Server/Nginx Infos:
Memory: 32079MB
CPU: model name: Intel(R) Xeon(R) CPU E3-1230 v3 @ 3.30GHz (8 cores)
PHP: PHP 5.5.15-1~dotdeb.1 (cli) (built: Jul 24 2014 16:44:04)
NGINX: nginx/1.2.1
nginx.conf:
worker_processes 8;
worker_connections 2048;
But I think the time settings don't matter, because the download stops at exactly 1,604,408 KB every time. If I download at 20 kB/s the download takes longer, but it still cancels at exactly 1,604,408 KB.
Thank you for any help.
If you need more information, please ask.
I had a similar problem, where the download would stop at 1024 MB with the error
readv() failed (104: Connection reset by peer) while reading upstream
Adding this to the nginx.conf file helped:
fastcgi_max_temp_file_size 1024m;
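For context, 1024m is also the documented default for fastcgi_max_temp_file_size, which lines up with downloads being cut off around 1 GB. A sketch of the two usual ways to change the behaviour (the surrounding http block is only illustrative; the directive is also valid in server and location context):
http {
    # Either raise the cap on how much of a FastCGI response nginx
    # will spool to a temporary file for a single request ...
    fastcgi_max_temp_file_size 8192m;

    # ... or set it to 0 to disable temp-file buffering of responses,
    # so data is passed to the client synchronously as it arrives.
    # fastcgi_max_temp_file_size 0;
}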
We are using nginx as a load balancer for multiple Riak nodes. The setup worked fine for some time (a few hours) before nginx started giving 502 Bad Gateway errors. On checking, the individual nodes seemed to be working. We found that the problem was with the nginx buffer size, so we increased it to 16k; that worked fine for one more day before we started getting 502 errors for everything.
My nginx configuration is as follows:
upstream riak {
    server 127.0.0.1:8091 weight=3;
    server 127.0.0.1:8092;
    server 127.0.0.1:8093;
    server 127.0.0.1:8094;
}
server {
    listen 8098;
    server_name 127.0.0.1:8098;
    location / {
        proxy_pass http://riak;
        proxy_buffer_size 16k;
        proxy_buffers 8 16k;
    }
}
Any help is appreciated, thank you.
Check if you are running out of file descriptors on the nginx box. Check with netstat whether you have too many connections in the TIME_WAIT state. If so, you will need to reduce your tcp_fin_timeout value from the default of 60 seconds to something smaller.
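A rough sketch of those checks and the sysctl change (exact numbers are assumptions; tune to your box):
# Count connections per TCP state; a very large TIME_WAIT count is the red flag
netstat -ant | awk '{print $6}' | sort | uniq -c | sort -rn

# Open file descriptors held by the nginx master process
ls /proc/$(pgrep -o nginx)/fd | wc -l
# Per-process fd limit for this shell (worker limits can be raised with worker_rlimit_nofile)
ulimit -n

# Shorten the FIN timeout at runtime; persist it in /etc/sysctl.conf if it helps
sudo sysctl -w net.ipv4.tcp_fin_timeout=30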