nginx on docker doesn't work with location URL - nginx

I am running nginx in Docker to act as a reverse proxy for multiple applications. For example:
http://localhost/eureka/ should serve http://registry:8761
http://localhost/zipkin/ should serve http://zipkin:9411
I started with the following nginx conf:
http {
    server {
        location /eureka/ {
            proxy_pass http://registry:9761;
        }
    }
}
The above configuration is not working; nginx logs the request as a 404:
proxy | 172.20.0.1 - - [24/Mar/2017:10:46:28 +0000] "GET /eureka/ HTTP/1.1" 404 0 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36"
But the configuration below works: http://localhost/ shows the Eureka page.
http {
    server {
        location / {
            proxy_pass http://registry:9761;
        }
    }
}
What am I missing? As per the nginx proxy_pass documentation it should work, but it does not.

The proxy_pass directive can optionally modify the URI before it is passed upstream. To remove the /eureka/ prefix, simply append a trailing / to the proxy_pass URL.
For example:
location /eureka/ {
    proxy_pass http://registry:9761/;
}
The URI /eureka/foo will be mapped to http://registry:9761/foo. See the nginx proxy_pass documentation for details.
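For example, applying the same trailing slash to both applications mentioned in the question gives something like the following sketch (a minimal, untested outline; the registry and zipkin host names and ports are taken from the question, and the events block is only there so the file loads as a standalone nginx.conf):

events {}

http {
    server {
        listen 80;

        # /eureka/foo  ->  http://registry:9761/foo
        location /eureka/ {
            proxy_pass http://registry:9761/;
        }

        # /zipkin/foo  ->  http://zipkin:9411/foo
        location /zipkin/ {
            proxy_pass http://zipkin:9411/;
        }
    }
}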
Of course, this is only half of the problem. In many cases, the upstream application must access its resources using the correct prefix or a path-relative URI. Many applications cannot be forced into a subdirectory.

Related

NGINX Thumbnail Generation won't work with spaces or %20 in Ubuntu 22 but did in Ubuntu 16 - Nginx 1.23.0 specific

Please see the update at the bottom regarding Nginx version 1.23.0 being the cause.
I've been using NGINX to generate image thumbnails for a number of years; however, having just switched from Ubuntu 16 to Ubuntu 22 and from Nginx 1.16 to Nginx 1.23, it no longer works with paths that include spaces.
It is very likely that this is a configuration difference rather than something to do with the different versions; however, as far as I can tell the NGINX configs are identical, so possibly it is something to do with the different Ubuntu/Nginx versions.
The error given when accessing a URL with a space in the path is simply "400 Bad Request".
There are no references to the request in either the access or error logs after 400 Bad Request is returned.
The nginx site config looks like this and was originally based on this guide:
server {
    server_name localhost;
    listen 8888;

    access_log /home/mysite/logs/nginx-thumbnails-localhost-access.log;
    error_log /home/mysite/logs/nginx-thumbnails-localhost-error.log error;

    location ~ "^/width/(?<width>\d+)/(?<image>.+)$" {
        alias /home/mysite/$image;
        image_filter resize $width -;
        image_filter_jpeg_quality 95;
        image_filter_buffer 8M;
    }

    location ~ "^/height/(?<height>\d+)/(?<image>.+)$" {
        alias /home/mysite/$image;
        image_filter resize - $height;
        image_filter_jpeg_quality 95;
        image_filter_buffer 8M;
    }

    location ~ "^/resize/(?<width>\d+)/(?<height>\d+)/(?<image>.*)$" {
        alias /home/mysite/$image;
        image_filter resize $width $height;
        image_filter_jpeg_quality 95;
        image_filter_buffer 8M;
    }

    location ~ "^/crop/(?<width>\d+)/(?<height>\d+)/(?<image>.*)$" {
        alias /home/mysite/$image;
        image_filter crop $width $height;
        image_filter_jpeg_quality 95;
        image_filter_buffer 8M;
    }
}
proxy_cache_path /tmp/nginx-thumbnails-cache/ levels=1:2 keys_zone=thumbnails:10m inactive=24h max_size=1000m;
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name thumbnails.mysite.net;

    ssl_certificate /etc/letsencrypt/live/thumbnails.mysite.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/thumbnails.mysite.net/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    access_log /home/mysite/logs/nginx-thumbnails-access.log;
    error_log /home/mysite/logs/nginx-thumbnails-error.log error;

    location ~ "^/width/(?<width>\d+)/(?<image>.+)$" {
        # Proxy to internal image resizing server.
        proxy_pass http://localhost:8888/width/$width/$image;
        proxy_cache thumbnails;
        proxy_cache_valid 200 24h;
    }

    location ~ "^/height/(?<height>\d+)/(?<image>.+)$" {
        # Proxy to internal image resizing server.
        proxy_pass http://localhost:8888/height/$height/$image;
        proxy_cache thumbnails;
        proxy_cache_valid 200 24h;
    }

    location ~ "^/resize/(?<width>\d+)/(?<height>\d+)/(?<image>.+)$" {
        # Proxy to internal image resizing server.
        proxy_pass http://localhost:8888/resize/$width/$height/$image;
        proxy_cache thumbnails;
        proxy_cache_valid 200 24h;
    }

    location ~ "^/crop/(?<width>\d+)/(?<height>\d+)/(?<image>.+)$" {
        # Proxy to internal image resizing server.
        proxy_pass http://localhost:8888/crop/$width/$height/$image;
        proxy_cache thumbnails;
        proxy_cache_valid 200 24h;
    }

    location /media {
        # Nginx needs you to manually define DNS resolution when using
        # variables in proxy_pass. Creating this dummy location avoids that.
        # The error is: "no resolver defined to resolve localhost".
        proxy_pass http://localhost:8888/;
    }
}
I don't know if it's related, but nginx-thumbnails-error.log also regularly has the following line, although testing the response in the browser seems to work:
2022/07/19 12:02:28 [error] 1058111#1058111: *397008 connect() failed (111: Connection refused) while connecting to upstream, client: ????, server: thumbnails.mysite.net, request: "GET /resize/100/100/path/to/my/file.png HTTP/2.0", upstream: "http://[::1]:8888/resize/100/100/path/to/my/file.png", host: "thumbnails.mysite.net"
This error does not appear when accessing a file with a space in it.
There are no references to the request for a file with a space in the path in nginx-thumbnails-access.log or nginx-thumbnails-error.log.
But there is an entry in the localhost access log, nginx-thumbnails-localhost-access.log:
./nginx-thumbnails-localhost-access.log:127.0.0.1 - - [29/Jul/2022:10:37:13 +0000] "GET /resize/200/200/test dir/KPjjCTl0lnpJcUdQIWaPflzAEzgN25gRMfAH5qiI.png HTTP/1.0" 400 150 "-" "-"
When a path has no spaces, there is an entry in both nginx-thumbnails-localhost-access.log and nginx-thumbnails-access.log:
./nginx-thumbnails-localhost-access.log:127.0.0.1 - - [29/Jul/2022:10:43:48 +0000] "GET /resize/200/202/testdir/KPjjCTl0lnpJcUdQIWaPflzAEzgN25gRMfAH5qiI.png HTTP/1.0" 200 11654 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"
./nginx-thumbnails-access.log:185.236.155.232 - - [29/Jul/2022:10:43:48 +0000] "GET /resize/200/202/testdir/KPjjCTl0lnpJcUdQIWaPflzAEzgN25gRMfAH5qiI.png HTTP/2.0" 200 11654 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"
I've no idea if it's relevant, but access log entries for images with a space in the name do not include the browser user agent.
./nginx-thumbnails-localhost-access.log:127.0.0.1 - - [29/Jul/2022:10:52:16 +0000] "GET /resize/200/202/testdir/thumb image.png HTTP/1.0" 400 150 "-" "-"
./nginx-thumbnails-localhost-access.log:127.0.0.1 - - [29/Jul/2022:10:52:33 +0000] "GET /resize/200/202/testdir/thumbimage.png HTTP/1.0" 200 11654 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"
As requested, here is the result from curl -i:
HTTP/2 400
server: nginx
date: Thu, 28 Jul 2022 15:44:13 GMT
content-type: text/html
content-length: 150
<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>
I can now confirm that this is specific to Nginx 1.23.0, which is not yet stable.
I created a new DigitalOcean droplet, installed Nginx, and set up the thumbnail server, and it worked perfectly. The default nginx installation for Ubuntu 22 was 1.18.0.
I then upgraded to 1.23.0 as I had done on my live server via:
apt-add-repository ppa:ondrej/nginx-mainline -y
apt install nginx
The thumbnail server then stopped working, with the same issue as my original one.
I am now investigating downgrading nginx.
Downgrading Nginx to 1.18.0 worked, using these steps:
apt-add-repository --remove ppa:ondrej/nginx-mainline
apt autoremove nginx
apt update
apt install nginx
For some reason on one server I also had to run apt autoremove nginx-core but not on the other.
However, I'm a little concerned that 1.18.0 is marked as no longer receiving security support (https://endoflife.date/nginx). I could not find an easy way to install and test 1.22.0, only 1.18.0.
The first thing you could try is the rewrite regex replacement [flag]; directive. In the code below it looks for a space or %20, splits the file name around it, removes it, rewrites the file name without it, and redirects the client to the new URL:
rewrite ^(.*)(\s|%20)(.*)$ $1$3 permanent;
Another possible solution: you probably have to handle the spaces (or %20) in the file names yourself. To try that on your code:
location ~ "^/crop/(?<width>\d+)/(?<height>\d+)/(?<image>.+)$" {
if ( (?<image>.+)$ ~ (.*)%20(.*) ) {
set (?<image>.+)$ $1$3;
} if ( (?<image>.+)$ ~ (.*)\s(.*) ) {
set (?<image>.+)$ $1$3;
}
(?<image>.+);
}
But to be honest, I haven't tried the second one myself.
The error above seems to be unrelated but someone has posted a fix to a similar error.
I figured out that this error was specific to Nginx version 1.23.0 and downgrading to 1.18.0 resolved the issue.
Downgrading Nginx to 1.18.0 worked, using these steps:
apt-add-repository --remove ppa:ondrej/nginx-mainline
apt autoremove nginx
apt update
apt install nginx
For some reason on one server I also had to run apt autoremove nginx-core but not on the other.
I am still investigating whether 1.22.0 is okay and how to report this error to Nginx directly.
So thanks to this answer on the Nginx Bug Tracker: https://trac.nginx.org/nginx/ticket/1930
The solution was actually very simple: remove the URI component from proxy_pass.
From this:
location ~ "^/resize/(?<width>\d+)/(?<height>\d+)/(?<image>.+)$" {
# Proxy to internal image resizing server.
proxy_pass localhost:8888/resize/$width/$height/$image;
proxy_cache thumbnails;
proxy_cache_valid 200 24h;
}
To this:
location ~ "^/resize/(?<width>\d+)/(?<height>\d+)/(?<image>.+)$" {
# Proxy to internal image resizing server.
proxy_pass localhost:8888;
proxy_cache thumbnails;
proxy_cache_valid 200 24h;
}
And then it works.
This is the full explanation from Maxim Dounin at the bug tracker:
That's because spaces are not allowed in request URIs, and since nginx 1.21.1 these are rejected. Quoting CHANGES:
*) Change: now nginx always returns an error if spaces or control characters are used in the request line.
See ticket #196 for details.
Your configuration results in incorrect URIs being used during proxying, as it uses named captures from $uri, which is unescaped, in proxy_pass, which expects variables to be properly escaped when they are used.
As far as I can see, in your configuration the simplest fix would be to change all proxy_pass directives so they don't use any URI components, that is:
location ~ "/width/(?<width>\d+)/(?<image>.+)$" {
proxy_pass http://localhost:8888;
proxy_cache thumbnails;
proxy_cache_valid 200 24h;
}
This way, nginx will pass URIs from the client request unmodified (and properly escaped), and these match URIs you've been trying to reconstruct with variables.
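Since the named captures are no longer referenced once the URI component is dropped, the four proxying locations in the front-end server could even collapse into a single block. A minimal sketch under that assumption (cache settings copied from the original config; malformed width/height values would then be rejected by the backend rather than by this front end):

location ~ "^/(width|height|resize|crop)/" {
    # Pass the client URI through unmodified (and properly escaped).
    proxy_pass http://localhost:8888;
    proxy_cache thumbnails;
    proxy_cache_valid 200 24h;
}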

Nginx returns 400 for a proxy_pass to an external URL

I am trying to have a route in my Nginx which will proxy requests to an external HTTPS resource. My config for that looks like this:
server {
    listen 443 ssl;
    server_name x.x.com;

    location / {
        resolver 8.8.8.8;
        proxy_pass https://y.y.com$request_uri;
        proxy_ssl_server_name on;
    }
}
Now, whenever I try to call the URL I will immediately get a 400.
Strangely enough, in the Nginx logs I do not get any reason for the 400 at first. Only after exactly one minute do I get a timeout message. (My error log level is set to info.)
nginx_1_e6b52cd440fd | 999.999.99.999 - - [29/Aug/2019:10:05:27 +0000] "GET / HTTP/1.1" 400 226 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36"
nginx_1_e6b52cd440fd | 2019/08/29 10:06:27 [info] 67#67: *30 client timed out (110: Connection timed out) while waiting for request, client: 999.999.99.999, server: 0.0.0.0:8080
My Nginx is running as a Docker container using nginx:1.17.
For anyone experiencing a similar issue: I solved it in the end by adding
proxy_set_header Host y.y.com;
proxy_set_header X-Forwarded-For $remote_addr;
For some reason the server did not like the request having the default x.x.com Host header and rejected it with a 400, which probably comes from some webserver configuration on the server side.
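Put together with the original config, the working version looks roughly like this (a sketch only; x.x.com and y.y.com are placeholders from the question, and the certificate directives, which the question omits, are left out here too):

server {
    listen 443 ssl;
    server_name x.x.com;
    # ssl_certificate / ssl_certificate_key omitted, as in the question

    location / {
        resolver 8.8.8.8;
        proxy_pass https://y.y.com$request_uri;
        proxy_ssl_server_name on;

        # Present the upstream's own host name instead of x.x.com,
        # and forward the original client address.
        proxy_set_header Host y.y.com;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}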

Nginx reverse proxy subdirectory rewrites for sourcegraph

I'm trying to have a self-hosted Sourcegraph server served from a subdirectory of my domain, using a reverse proxy to add an SSL cert.
The target is to have http://example.org/source serve the Sourcegraph server.
My rewrites and reverse proxy look like this:
location /source {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Scheme $scheme;
    rewrite ^/source/?(.*) /$1 break;
    proxy_pass http://localhost:8108;
}
The problem I am having is that upon calling http://example.org/source I get redirected to http://example.org/sign-in?returnTo=%2F
Is there a way to rewrite the response of sourcegraph to the correct subdirectory?
Additionally, where can I debug the rewrite directive? I would like to follow the changes it makes to understand it better.
-- Edit:
I know my approach using rewrite is probably wrong, and I'm trying the sub_filter module right now.
I captured the response of Sourcegraph using tcpdump and analyzed it in Wireshark, so this is where I am at:
GET /sourcegraph/ HTTP/1.0
Host: 127.0.0.1:8108
Connection: close
Upgrade-Insecure-Requests: 1
DNT: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
Referer: https://example.org/
Accept-Encoding: gzip, deflate, br
Accept-Language: de,en-US;q=0.9,en;q=0.8
Cookie: sidebar_collapsed=false;
HTTP/1.0 302 Found
Cache-Control: no-cache, max-age=0
Content-Type: text/html; charset=utf-8
Location: /sign-in?returnTo=%2Fsourcegraph%2F
Strict-Transport-Security: max-age=31536000
Vary: Cookie
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Trace: #tracer-not-enabled
X-Xss-Protection: 1; mode=block
Date: Sat, 07 Jul 2018 13:59:06 GMT
Content-Length: 58
Found.
Using rewrite here causes extra processing overhead and is totally unnecessary.
proxy_pass works like this:
proxy_pass to a naked URL, i.e. nothing at all after the domain/IP/port, and the full client request URI gets appended and passed to the upstream.
Add anything, even just a slash, to the proxy_pass, and whatever you add replaces the part of the client request URI that matches the URI of that location block.
So if you want to lose the /source part of your client request, it needs to look like this:
location /source/ {
    proxy_pass http://localhost:8108/;
    .....
}
Now requests will be proxied like this:
example.com/source/ -> localhost:8108/
example.com/source/files/file.txt -> localhost:8108/files/file.txt
It's important to point out that Nginx isn't just dropping /source/ from the request; it's substituting the entire proxy_pass URI. That's not as clear when it's just a trailing slash, so to illustrate better, if we change proxy_pass to this:
proxy_pass http://localhost:8108/graph/;
then the requests are now processed like this:
example.com/source/ -> localhost:8108/graph/
example.com/source/files/file.txt -> localhost:8108/graph/files/file.txt
If you are wondering what happens if someone requests example.com/source (without the trailing slash), this works provided you have not set the merge_slashes directive to off, as Nginx will add the trailing / to proxied requests.
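Applied to the location block from the question, that gives roughly the following sketch (the proxy_set_header lines are carried over unchanged; the rewrite is dropped because the trailing slash on proxy_pass already strips the /source/ prefix):

location /source/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Scheme $scheme;

    # The trailing slash strips /source/ before proxying, so no rewrite is needed.
    proxy_pass http://localhost:8108/;
}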
If you have Nginx in front of another webserver running on port 8108 and serve its content by proxy_passing everything from a subdirectory, e.g. /subdir, then you might hit the issue that the service on port 8108 serves an HTML page that includes resources, calls its own APIs, etc. using absolute URLs. These calls will omit the /subdir prefix, so they won't be routed to the service on port 8108 by nginx.
One solution is to make the webserver on port 8108 serve HTML that includes the base href attribute, e.g.:
<head>
<base href="https://example.com/subdir">
</head>
which tells a client that all links are relative to that path (see https://www.w3schools.com/tags/att_base_href.asp)
Sometimes this is not an option though - maybe the webserver is something you just spin up provided by an external docker image, or maybe you just don't see a reason why you should need to tamper with a service that runs perfectly as a standalone. A solution that only requires changes to the nginx in front is to use the Referer header to determine if the request was initiated by a resource located at /subdir. If that is the case, you can rewrite the request to be prefixed with /subdir and then redirect the client to that location:
location / {
    if ($http_referer = "https://example.com/subdir/") {
        rewrite ^/(.*) https://example.com/subdir/$1 redirect;
    }
    ...
}

location /subdir/ {
    proxy_pass http://localhost:8108/;
}
Or something like this, if you prefer a regex to let you omit the hostname:
if ($http_referer ~ "^https?://[^/]+/subdir/") {
    rewrite ^/(.*) https://$http_host/subdir/$1 redirect;
}

Server receives POST request twice from Nginx

We have an nginx server acting as a reverse proxy between the client and the server.
Whenever the server returns a 500, we can see from the nginx logs that the request is actually sent to the server twice:
173.38.209.10 - - [26/Jan/2018:15:15:36 +0000] "POST /api/customer/add HTTP/1.1" 500 115 "http://apiwebsite.com" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"
173.38.209.10 - - [26/Jan/2018:15:15:36 +0000] "POST /api/customer/add HTTP/1.1" 500 157 "http://apiwebsite.com" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"
This API is only called twice if the first response is a 500.
If I bypass the nginx proxy and call the server directly, then it's only called once.
What's stranger is that after further testing we found out this only happens on our corporate network. If I use my home network to connect to the proxy, there's no retry even in the case of a 500 response.
Anyway, here's my nginx configuration:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html;
    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name localhost;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass "http://127.0.0.1:3000";
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }

    location /api/customer/ {
        proxy_pass "http://127.0.0.1:8080/";
    }
}
Is there anything suspicious here that could be causing this behaviour?
Thanks

Browsers not receiving / interpreting 413 response from Nginx

I have an Angular 2 app that is using ng2-file-upload to upload files to a server running Nginx. Nginx is definitely sending a 413 when the file size is too large but the browsers (Chrome and Safari) don't seem to be catching it / interpreting it.
Chrome console error:
XMLHttpRequest cannot load <url>. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin '<url>' is therefore not allowed access. The response had HTTP status code 413.
Safari console error
XMLHttpRequest cannot load <url>. Origin <url> is not allowed by Access-Control-Allow-Origin.
Nginx config
server {
    listen 80;
    server_name <url>;
    access_log /var/log/nginx/access.log main;
    client_max_body_size 4m;

    location / {
        proxy_pass http://<ip address>:3009;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Nginx access log
<ip address> - - [11/Oct/2016:17:28:26 +0100] "OPTIONS /properties/57fbab6087f787a80407c3b4/floors HTTP/1.1" 200 4 "<url>" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36" "-"
<ip address> - - [11/Oct/2016:17:28:36 +0100] "POST /properties/57fbab6087f787a80407c3b4/floors HTTP/1.1" 413 601 "<url>" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36" "-"
Nginx error log
2016/10/11 17:28:26 [error] 30847#0: *1489 client intended to send too large body: 34865919 bytes, client: <ip address>, server: <server>, request: "POST /properties/57fbab6087f787a80407c3b4/floors HTTP/1.1", host: "<host>", referrer: "<url>"
When calling the ng2-file-upload error handling method the response code is 0 and headers are an empty object.
Any help would be much appreciated!
Seems like an old question, but it's even sadder that in 2019 it's still the case that Firefox and Chrome don't handle the 413 as you would expect. They continue to process the upload despite nginx sending a 413.
Good old Safari (words I don't get to utter often) appears to be the only modern browser that does what you would expect, and if you send a custom 413 error page it handles it.
In regard to this question, you can use Angular to get the size of the file and have a simple endpoint that verifies whether it's too big before you send the actual file.
Doing that in JS would be the best option.
A similar question was also posted here.
As pointed out by the accepted answer:
The problem is that most HTTP clients don't read the response until they've sent the entire request body. If you're dealing with web browsers you're probably out of luck here.
I recently tried with the latest version of Safari (v15.6 - 17613.3.9.1.5) and it does handle the error as I expected. It aborts uploading the file right after receiving the 413 code from the server.
In this case, I agree with @AppHandwerker's answer that we should validate the file size on the client side before starting the upload.
