Nginx - Daphne deployment issue

I recently added a feature that uses WebSockets with Channels to my Django web application and am having some trouble. Since Channels and the WebSocket work just fine with the local test server (manage.py runserver), the deployment setup must be responsible.
Here are some setting files and status checks:
nginx_conf.conf
#added this block
upstream channels-backend {
    server localhost:9001;
}
server {
    listen 80;
    server_name MY_URL;
    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/MY_USER/server/mysite;
    }
    location / {
        include proxy_params;
        proxy_pass http://unix:/home/MY_USER/server/mysite/mysite.sock;
    }
    #path to proxy my WebSocket requests
    location /ws/ {
        # proxy_pass http://unix:/home/MY_USER/server/mysite/mysite_w.sock;
        proxy_pass http://channels-backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection “upgrade”;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
daphne.service
[Unit]
Description=daphne daemon
After=network.target
[Service]
User=MY_USER
Group=www-data
WorkingDirectory=/home/MY_USER/server/mysite
#ExecStart=/home/MY_USER/server/server/bin/daphne -u /home/MY_USER/server/mysite/mysite_w.sock mysite.asgi:application -v2
ExecStart=/home/MY_USER/server/server/bin/daphne -p 9001 mysite.asgi:application
[Install]
WantedBy=multi-user.target
As you can see, I've tested binding to both a port and a UNIX socket, and neither was successful.
PORT case
Chrome console message
(index):16 WebSocket connection to 'ws://MY_URL/ws/chat/test/' failed: Error during WebSocket handshake: Unexpected response code: 400
(anonymous) # (index):16
(index):30 Chat socket closed unexpectedly
chatSocket.onclose # (index):30
2(index):43 WebSocket is already in CLOSING or CLOSED state.
❯ sudo less /var/log/nginx/access.log
61.82.112.1 - - [12/Aug/2020:06:27:29 +0000] "GET /chat/test/ HTTP/1.1" 200 682 "http://MY_URL/chat/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36"
61.82.112.1 - - [12/Aug/2020:06:27:30 +0000] "GET /ws/chat/test/ HTTP/1.1" 400 5 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36"
❯ sudo journalctl -u mysite-daphne.service
Aug 10 09:31:32 hermes systemd[1]: mysite-daphne.service: Succeeded.
Aug 10 09:31:32 hermes systemd[1]: Stopped daphne daemon.
Aug 10 09:31:32 hermes systemd[1]: Started daphne daemon.
It looks like Daphne did not get the message from Nginx.
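(A direct probe against Daphne, bypassing Nginx, might help narrow this down; the sketch below just sends the standard WebSocket handshake headers to port 9001 on the server itself, and a 101 Switching Protocols reply would mean the backend itself is fine.)
# hypothetical check, run on the server while Daphne listens on port 9001
curl -i --max-time 5 http://127.0.0.1:9001/ws/chat/test/ \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ=="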
UNIX socket case
Chrome console message
(index):16 WebSocket connection to 'ws://MY_URL/ws/chat/test/' failed: Error during WebSocket handshake: Unexpected response code: 400
(anonymous) # (index):16
(index):30 Chat socket closed unexpectedly
chatSocket.onclose # (index):30
3(index):43 WebSocket is already in CLOSING or CLOSED state.
document.querySelector.onclick # (index):43
document.querySelector.onkeyup # (index):36
❯ sudo less /var/log/nginx/access.log
61.82.112.1 - - [12/Aug/2020:06:42:44 +0000] "GET /ws/chat/test/ HTTP/1.1" 400 5 "-" "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Mobile Safari/537.36"
❯ sudo journalctl -u mysite-daphne.service
Aug 10 09:31:32 hermes systemd[1]: Stopped daphne daemon.
Aug 10 09:31:32 hermes systemd[1]: Started daphne daemon.
I need some advice for troubleshooting. Please feel free to ask any extra info which may be helpful.
Thank you in advance.

I had exactly the same problem and was starting to go mad.
Your nginx config contains the same wrong statement as mine did:
[...]
proxy_set_header Connection “upgrade”;
[...]
The word upgrade is enclosed in the wrong quotes. You need standard quotes here (ASCII code 34, Shift-2 on the keyboard), not the "fancy" Unicode quotes. Very, very hard to find.
Some websites seem to convert standard quotes to Unicode "smart" quotes because someone thinks they look better...
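For reference, the corrected line with plain ASCII quotes (the rest of the block stays the same):
proxy_set_header Connection "upgrade";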

Related

NGINX Thumbnail Generation won't work with spaces or %20 in Ubuntu 22 but did in Ubuntu 16 - Nginx 1.23.0 specific

Please see the update at the bottom regarding Nginx version 1.23.0 being the cause.
I've been using NGINX to generate image thumbnails for a number of years, however having just switched from Ubuntu 16 to Ubuntu 22 and Nginx 1.16 to nginx 1.23 it no longer works with paths that include spaces.
It is very likely that this is a configuration difference rather than something to do with the different versions, however as far as I can tell the NGINX configs are identical so possibly it is something to do with different Ubuntu/Nginx versions.
The error given when accessing a URL with a space in the path is simply "400 Bad Request".
There are no references to the request in either the access or error logs after 400 Bad Request is returned.
The nginx site config looks like this and was originally based off this guide
server {
    server_name localhost;
    listen 8888;
    access_log /home/mysite/logs/nginx-thumbnails-localhost-access.log;
    error_log /home/mysite/logs/nginx-thumbnails-localhost-error.log error;
    location ~ "^/width/(?<width>\d+)/(?<image>.+)$" {
        alias /home/mysite/$image;
        image_filter resize $width -;
        image_filter_jpeg_quality 95;
        image_filter_buffer 8M;
    }
    location ~ "^/height/(?<height>\d+)/(?<image>.+)$" {
        alias /home/mysite/$image;
        image_filter resize - $height;
        image_filter_jpeg_quality 95;
        image_filter_buffer 8M;
    }
    location ~ "^/resize/(?<width>\d+)/(?<height>\d+)/(?<image>.*)$" {
        alias /home/mysite/$image;
        image_filter resize $width $height;
        image_filter_jpeg_quality 95;
        image_filter_buffer 8M;
    }
    location ~ "^/crop/(?<width>\d+)/(?<height>\d+)/(?<image>.*)$" {
        alias /home/mysite/$image;
        image_filter crop $width $height;
        image_filter_jpeg_quality 95;
        image_filter_buffer 8M;
    }
}
proxy_cache_path /tmp/nginx-thumbnails-cache/ levels=1:2 keys_zone=thumbnails:10m inactive=24h max_size=1000m;
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name thumbnails.mysite.net;
    ssl_certificate /etc/letsencrypt/live/thumbnails.mysite.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/thumbnails.mysite.net/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    access_log /home/mysite/logs/nginx-thumbnails-access.log;
    error_log /home/mysite/logs/nginx-thumbnails-error.log error;
    location ~ "^/width/(?<width>\d+)/(?<image>.+)$" {
        # Proxy to internal image resizing server.
        proxy_pass http://localhost:8888/width/$width/$image;
        proxy_cache thumbnails;
        proxy_cache_valid 200 24h;
    }
    location ~ "^/height/(?<height>\d+)/(?<image>.+)$" {
        # Proxy to internal image resizing server.
        proxy_pass http://localhost:8888/height/$height/$image;
        proxy_cache thumbnails;
        proxy_cache_valid 200 24h;
    }
    location ~ "^/resize/(?<width>\d+)/(?<height>\d+)/(?<image>.+)$" {
        # Proxy to internal image resizing server.
        proxy_pass http://localhost:8888/resize/$width/$height/$image;
        proxy_cache thumbnails;
        proxy_cache_valid 200 24h;
    }
    location ~ "^/crop/(?<width>\d+)/(?<height>\d+)/(?<image>.+)$" {
        # Proxy to internal image resizing server.
        proxy_pass http://localhost:8888/crop/$width/$height/$image;
        proxy_cache thumbnails;
        proxy_cache_valid 200 24h;
    }
    location /media {
        # Nginx needs you to manually define DNS resolution when using
        # variables in proxy_pass. Creating this dummy location avoids that.
        # The error is: "no resolver defined to resolve localhost".
        proxy_pass http://localhost:8888/;
    }
}
I don't know if it's related but nginx-thumbnails-error.log also regularly has the following line, although testing the response in browser seems to work:
2022/07/19 12:02:28 [error] 1058111#1058111: *397008 connect() failed (111: Connection refused) while connecting to upstream, client: ????, server: thumbnails.mysite.net, request: "GET /resize/100/100/path/to/my/file.png HTTP/2.0", upstream: "http://[::1]:8888/resize/100/100/path/to/my/file.png", host: "thumbnails.mysite.net"
This error does not appear when accessing a file with a space in it.
There are no references to the request for a file with a space in the path in nginx-thumbnails-access.log or nginx-thumbnails-error.log.
But there is an entry in the access log for localhost nginx-thumbnails-localhost-access.log
./nginx-thumbnails-localhost-access.log:127.0.0.1 - - [29/Jul/2022:10:37:13 +0000] "GET /resize/200/200/test dir/KPjjCTl0lnpJcUdQIWaPflzAEzgN25gRMfAH5qiI.png HTTP/1.0" 400 150 "-" "-"
When a path has no spaces there is an entry in both nginx-thumbnails-localhost-access.log and nginx-thumbnails-access.log
./nginx-thumbnails-localhost-access.log:127.0.0.1 - - [29/Jul/2022:10:43:48 +0000] "GET /resize/200/202/testdir/KPjjCTl0lnpJcUdQIWaPflzAEzgN25gRMfAH5qiI.png HTTP/1.0" 200 11654 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"
./nginx-thumbnails-access.log:185.236.155.232 - - [29/Jul/2022:10:43:48 +0000] "GET /resize/200/202/testdir/KPjjCTl0lnpJcUdQIWaPflzAEzgN25gRMfAH5qiI.png HTTP/2.0" 200 11654 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"
I've no idea if it's relevant, but access log entries for images with a space in the name do not include the browser user agent.
./nginx-thumbnails-localhost-access.log:127.0.0.1 - - [29/Jul/2022:10:52:16 +0000] "GET /resize/200/202/testdir/thumb image.png HTTP/1.0" 400 150 "-" "-"
./nginx-thumbnails-localhost-access.log:127.0.0.1 - - [29/Jul/2022:10:52:33 +0000] "GET /resize/200/202/testdir/thumbimage.png HTTP/1.0" 200 11654 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"
As requested, here is the result from curl -i:
HTTP/2 400
server: nginx
date: Thu, 28 Jul 2022 15:44:13 GMT
content-type: text/html
content-length: 150
<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>
I can now confirm that this is specific to Nginx 1.23.0 which is not yet stable.
I created a new DigitalOcean Droplet, installed Nginx, set up the thumbnail server, and it worked perfectly. The default nginx installation for Ubuntu 22 was 1.18.0.
I then upgraded to 1.23.0 as I had done on my live server via:
apt-add-repository ppa:ondrej/nginx-mainline -y
apt install nginx
The thumbnail server then stopped working, with the same symptoms as my original issue.
I am now investigating downgrading nginx.
Downgrading Nginx to 1.18.0 worked, using these steps:
apt-add-repository --remove ppa:ondrej/nginx-mainline
apt autoremove nginx
apt update
apt install nginx
For some reason on one server I also had to run apt autoremove nginx-core but not on the other.
However, I'm a little concerned that 1.18.0 is marked as no longer receiving security support (https://endoflife.date/nginx), but I could not find an easy way to install and test 1.22.0, only 1.18.0.
The first thing you could try is the rewrite regex replacement [flag]; directive. The code below looks for a space or %20 in the file name, removes it, rewrites the file name without it, and redirects to the new URL.
rewrite ^(.*)(\s|%20)(.*)$ $1$3 permanent;
Another possible solution is that you probably have to encode the string in order to handle the spaces in the file names. Applied to your code:
location ~ "^/crop/(?<width>\d+)/(?<height>\d+)/(?<image>.+)$" {
    # strip a literal %20 from the captured file name
    if ($image ~ "(.*)%20(.*)") {
        set $image $1$2;
    }
    # strip a literal space from the captured file name
    if ($image ~ "(.*)\s(.*)") {
        set $image $1$2;
    }
    # ...then use $image as before (e.g. in alias or proxy_pass)
}
But tbh I haven't tried the second one myself.
The error above seems to be unrelated but someone has posted a fix to a similar error.
I figured out that this error was specific to Nginx version 1.23.0 and downgrading to 1.18.0 resolved the issue.
Downgrading Nginx to 1.18.0 worked, using these steps:
apt-add-repository --remove ppa:ondrej/nginx-mainline
apt autoremove nginx
apt update
apt install nginx
For some reason on one server I also had to run apt autoremove nginx-core but not on the other.
I am still investigating whether 1.22.0 is okay and how to report this error to Nginx directly.
So thanks to this answer on the Nginx Bug Tracker: https://trac.nginx.org/nginx/ticket/1930
The solution was actually very simple: remove the URI components.
From this:
location ~ "^/resize/(?<width>\d+)/(?<height>\d+)/(?<image>.+)$" {
    # Proxy to internal image resizing server.
    proxy_pass http://localhost:8888/resize/$width/$height/$image;
    proxy_cache thumbnails;
    proxy_cache_valid 200 24h;
}
To this:
location ~ "^/resize/(?<width>\d+)/(?<height>\d+)/(?<image>.+)$" {
    # Proxy to internal image resizing server.
    proxy_pass http://localhost:8888;
    proxy_cache thumbnails;
    proxy_cache_valid 200 24h;
}
And then it works.
This is the full explanation from Maxim Dounin at the bug tracker:
That's because spaces are not allowed in request URIs, and since nginx 1.21.1 these are rejected. Quoting CHANGES:
*) Change: now nginx always returns an error if spaces or control characters are used in the request line.
See ticket #196 for details.
Your configuration results in incorrect URIs being used during proxying, as it uses named captures from $uri, which is unescaped, in proxy_pass, which expects all variables to be properly escaped if used with variables.
As far as I can see, in your configuration the most simple fix would be to change all proxy_pass directives to not use any URI components, that is:
location ~ "/width/(?<width>\d+)/(?<image>.+)$" {
    proxy_pass http://localhost:8888;
    proxy_cache thumbnails;
    proxy_cache_valid 200 24h;
}
This way, nginx will pass URIs from the client request unmodified (and properly escaped), and these match URIs you've been trying to reconstruct with variables.
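With that change in place, a path containing a space, such as the one from the access log above, should work again. A quick check might look like this (sketch; same test file name as in the logs):
curl -i "https://thumbnails.mysite.net/resize/200/200/test%20dir/KPjjCTl0lnpJcUdQIWaPflzAEzgN25gRMfAH5qiI.png"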

Nginx returns 400 for a proxy_pass to an external URL

I am trying to set up a route in my Nginx config that proxies requests to an external HTTPS resource. My config for that looks like this:
server {
    listen 443 ssl;
    server_name x.x.com;
    location / {
        resolver 8.8.8.8;
        proxy_pass https://y.y.com$request_uri;
        proxy_ssl_server_name on;
    }
}
Now, whenever I call the URL, I immediately get a 400.
Strangely enough, the Nginx logs give no reason for the 400 at first; only after exactly one minute do I get a timeout message. (My error log level is set to info.)
nginx_1_e6b52cd440fd | 999.999.99.999 - - [29/Aug/2019:10:05:27 +0000] "GET / HTTP/1.1" 400 226 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36"
nginx_1_e6b52cd440fd | 2019/08/29 10:06:27 [info] 67#67: *30 client timed out (110: Connection timed out) while waiting for request, client: 999.999.99.999, server: 0.0.0.0:8080
My Nginx is running as a Docker container using nginx:1.17.
For anyone experiencing a similar issue: I solved it in the end by adding
proxy_set_header Host y.y.com;
proxy_set_header X-Forwarded-For $remote_addr;
For some reason the upstream server did not like the request carrying the default x.x.com Host header and rejected it with a 400, which probably comes from some web server configuration on the server side.
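Put together with the location block from the question, the working configuration would look roughly like this (same placeholder names as above):
server {
    listen 443 ssl;
    server_name x.x.com;
    location / {
        resolver 8.8.8.8;
        proxy_pass https://y.y.com$request_uri;
        proxy_ssl_server_name on;
        # send the Host header the upstream expects instead of x.x.com
        proxy_set_header Host y.y.com;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}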

Server receives POST request twice from Nginx

We have an nginx server acting as a reverse proxy between the client and the server.
Whenever the server returns a 500, we can see from the nginx logs that the request is actually sent to the server twice:
173.38.209.10 - - [26/Jan/2018:15:15:36 +0000] "POST /api/customer/add HTTP/1.1" 500 115 "http://apiwebsite.com" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"
173.38.209.10 - - [26/Jan/2018:15:15:36 +0000] "POST /api/customer/add HTTP/1.1" 500 157 "http://apiwebsite.com" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"
This API is only called twice if the first response is a 500.
If I bypass the nginx proxy and call the server directly, then it's only called once.
What's even stranger is that, after further testing, we found out this only happens on our corporate network. If I use my home network to connect to the proxy, there's no retry even in the case of a 500 response.
Anyway, here's my nginx configuration:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    root /usr/share/nginx/html;
    index index.html index.htm;
    # Make site accessible from http://localhost/
    server_name localhost;
    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass "http://127.0.0.1:3000";
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }
    location /api/customer/ {
        proxy_pass "http://127.0.0.1:8080/";
    }
}
Is there anything suspicious that could be causing this behaviour?
Thanks

Nginx - Gitlab: could not clone over https

I started setting up a gitlab-runner; before that I had only tried to clone, pull, push, etc. over SSH. With SSH there was no problem, so I think it is a problem with nginx. I have tried some settings in nginx, but I am not sure what is needed. Does anybody know what to set to get the data through? The website itself is running fine.
The nginx output while cloning the git repo over https:
172.17.0.1 - - [20/Jul/2017:21:13:39 +0000] "GET /server/nginx.git/info/refs?service=git-upload-pack HTTP/1.1" 401 26 "-" "git/2.7.4"
172.17.0.1 - user [20/Jul/2017:21:13:39 +0000] "GET /server/nginx.git/info/refs?service=git-upload-pack HTTP/1.1" 401 26 "-" "git/2.7.4"
172.17.0.1 - - [20/Jul/2017:21:13:42 +0000] "POST /heartbeat HTTP/1.1" 200 5 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:54.0) Gecko/20100101 Firefox/54.0"
172.17.0.1 - - [20/Jul/2017:21:13:46 +0000] "GET /ocs/v2.php/apps/notifications/api/v2/notifications HTTP/1.1" 200 74 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:54.0) Gecko/20100101 Firefox/54.0"
172.17.0.1 - user [20/Jul/2017:21:13:47 +0000] "GET /server/nginx.git/info/refs?service=git-upload-pack HTTP/1.1" 200 415 "-" "git/2.7.4"
172.17.0.1 - user [20/Jul/2017:21:13:47 +0000] "POST /server/nginx.git/git-upload-pack HTTP/1.1" 500 0 "-" "git/2.7.4"
git response:
error: RPC failed; HTTP 500 curl 22 The requested URL returned error: 500 Internal Server Error
fatal: The remote end hung up unexpectedly
gitlab-workhorse error message:
2017-07-22_11:19:45.43536 2017/07/22 11:19:45 error: POST "/server/nginx.git/git-upload-pack": handleUploadPack: ReadAllTempfile: open /tmp/gitlab-workhorse-read-all-tempfile358528589: permission denied
2017-07-22_11:19:45.43551 git.dropanote.de 172.10.11.97:43758 - - [2017-07-22 11:19:45.349933226 +0000 UTC] "POST /server/nginx.git/git-upload-pack HTTP/1.1" 500 0 "" "git/2.7.4" 0.085399
nginx config
## GitLab
##
## Modified from nginx http version
## Modified from http://blog.phusion.nl/2012/04/21/tutorial-setting-up-gitlab-on-debian-6/
## Modified from https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
##
## Lines starting with two hashes (##) are comments with information.
## Lines starting with one hash (#) are configuration parameters that can be uncommented.
##
##################################
## CONTRIBUTING ##
##################################
##
## If you change this file in a Merge Request, please also create
## a Merge Request on https://gitlab.com/gitlab-org/omnibus-gitlab/merge_requests
##
###################################
## configuration ##
###################################
##
## See installation.md#using-https for additional HTTPS configuration details.
upstream gitlab-workhorse {
    server 172.10.11.66:8181;
    keepalive 32;
}
## Redirects all HTTP traffic to the HTTPS host
server {
    ## Either remove "default_server" from the listen line below,
    ## or delete the /etc/nginx/sites-enabled/default file. This will cause gitlab
    ## to be served if you visit any address that your server responds to, eg.
    ## the ip address of the server (http://x.x.x.x/)
    listen 0.0.0.0:80;
    listen [::]:80 ipv6only=on;
    server_name url.tdl; ## Replace this with something like gitlab.example.com
    server_tokens off; ## Don't show the nginx version number, a security best practice
    location /.well-known/acme-challenge {
        root /tmp;
    }
    location / {
        return 301 https://$http_host$request_uri;
        access_log /var/log/nginx/gitlab_access.log;
        error_log /var/log/nginx/gitlab_error.log;
    }
}
## HTTPS host
server {
    listen 0.0.0.0:443 ssl;
    listen [::]:443 ipv6only=on ssl;
    server_name url.tdl; ## Replace this with something like gitlab.example.com
    server_tokens off; ## Don't show the nginx version number, a security best practice
    root /opt/gitlab/embedded/service/gitlab-rails/public;
    ## Strong SSL Security
    ## https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html & https://cipherli.st/
    ssl on;
    ssl_certificate linkto/fullchain.pem;
    ssl_certificate_key linkto/privkey.pem;
    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains" always;
    # GitLab needs backwards compatible ciphers to retain compatibility with Java IDEs
    ssl_ciphers "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 5m;
    location / {
        client_max_body_size 0;
        gzip off;
        ## https://github.com/gitlabhq/gitlabhq/issues/694
        ## Some requests take more than 30 seconds.
        proxy_read_timeout 3000;
        proxy_connect_timeout 3000;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Ssl on;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://gitlab-workhorse;
    }
}
Make sure you're running gitlab-workhorse; check that your /etc/gitlab/gitlab.rb has these lines uncommented:
gitlab_workhorse['enable'] = true
gitlab_workhorse['listen_network'] = "tcp"
gitlab_workhorse['listen_addr'] = "127.0.0.1:8181"
then run
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart
...
ok: run: gitlab-workhorse: ...
Otherwise, everything seems OK with your nginx.conf to me.
I got it solved: the problem went away after removing the tmp folder from the mapped Docker folders. Previously I had mapped the tmp folder to my host system. I do not know why, but GitLab had a problem writing to this folder, and that seems to have been the cause of the HTTP connection failure.
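In other words, the fix amounts to starting the container without a bind mount for /tmp. A rough sketch only; the image name and host paths below are the standard GitLab Docker examples, not necessarily the ones I used:
# problematic: /tmp inside the container was bind-mounted to the host,
# which gitlab-workhorse could not write its tempfiles to
#   docker run ... --volume /srv/gitlab/tmp:/tmp ... gitlab/gitlab-ce
# working: keep the usual config/logs/data mounts and let the container own /tmp
docker run --detach \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/logs:/var/log/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce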
With Git 2.35 (Q1 2022), git upload-pack (the other side of git fetch) should be more robust.
It used an 8kB buffer, but most of its payload came in 64kB "packets".
The buffer size has been enlarged so that such a packet fits.
This is a contribution directly from GitLab staff (Jacob Vosmaer).
See commit 55a9651 (14 Dec 2021) by Jacob Vosmaer (jacobvosmaer).
(Merged by Junio C Hamano -- gitster -- in commit d9fc3a9, 05 Jan 2022)
upload-pack.c: increase output buffer size
Signed-off-by: Jacob Vosmaer
When serving a fetch, git upload-pack copies data from a git pack-objects stdout pipe to its stdout.
This commit increases the size of the buffer used for that copying from 8192 to 65515, the maximum sideband-64k packet size.
Previously, this buffer was allocated on the stack.
Because the new buffer size is nearly 64KB, we switch this to a heap allocation.
On GitLab.com we use GitLab's pack-objects cache which does writes of 65515 bytes.
Because of the default 8KB buffer size, propagating these cache writes requires 8 pipe reads and 8 pipe writes from git-upload-pack, and 8 pipe reads from Gitaly (our Git RPC service).
If we increase the size of the buffer to the maximum Git packet size, we need only 1 pipe read and 1 pipe write in git-upload-pack, and 1 pipe read in Gitaly to transfer the same amount of data.
In benchmarks with a pure fetch and 100% cache hit rate workload we are seeing CPU utilization reductions of over 30%.

Browsers not receiving / interpreting 413 response from Nginx

I have an Angular 2 app that is using ng2-file-upload to upload files to a server running Nginx. Nginx is definitely sending a 413 when the file size is too large but the browsers (Chrome and Safari) don't seem to be catching it / interpreting it.
Chrome console error:
XMLHttpRequest cannot load <url>. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin '<url>' is therefore not allowed access. The response had HTTP status code 413.
Safari console error
XMLHttpRequest cannot load <url>. Origin <url> is not allowed by Access-Control-Allow-Origin.
Nginx config
server {
listen 80;
server_name <url>;
access_log /var/log/nginx/access.log main;
client_max_body_size 4m;
location / {
proxy_pass http://<ip address>:3009;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
Nginx access log
<ip address> - - [11/Oct/2016:17:28:26 +0100] "OPTIONS /properties/57fbab6087f787a80407c3b4/floors HTTP/1.1" 200 4 "<url>" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36" "-"
<ip address> - - [11/Oct/2016:17:28:36 +0100] "POST /properties/57fbab6087f787a80407c3b4/floors HTTP/1.1" 413 601 "<url>" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36" "-"
Nginx error log
2016/10/11 17:28:26 [error] 30847#0: *1489 client intended to send too large body: 34865919 bytes, client: <ip address>, server: <server>, request: "POST /properties/57fbab6087f787a80407c3b4/floors HTTP/1.1", host: "<host>", referrer: "<url>"
When the ng2-file-upload error-handling method is called, the response code is 0 and the headers are an empty object.
Any help would be much appreciated!
Seems like an old question, but it's even sadder that in 2019 it's still the case that Firefox and Chrome don't handle the 413 as you would expect. They continue to process the upload despite nginx sending a 413.
Good old Safari (words I don't get to utter often) appears to be the only modern browser that does what you would expect: if you send a custom 413 error, it handles it.
In regard to this question, you can use Angular to get the size of the file and have a simple endpoint that verifies whether it's too big before you send the actual file.
Doing that in JS would be the best option.
A similar question was also posted here.
As pointed out by the accepted answer:
The problem is that most HTTP clients don't read the response until they've sent the entire request body. If you're dealing with web browsers you're probably out of luck here.
I recently tried with the latest version of Safari (v15.6 - 17613.3.9.1.5) and it does handle the error as I expected. It aborts the upload right after receiving the 413 code from the server.
In this case, I agree with AppHandwerker's answer that we should validate the file size on the client side before starting the upload.

Resources