Nginx proxy_pass not working without trailing slash - nginx

I've done some searching on this issue and have found other posts similar to this one, but none of the solutions have helped me.
I've got a location entry in my Nginx config which contains a proxy_pass rule, but it only seems to work when I add a trailing slash to the URL.
Here's the location stanza:
location /webApp {
    proxy_pass https://webapp.service.consul/webApp;
    proxy_set_header Host webapp.service.consul;
    proxy_intercept_errors on;
}
As you can see, when I use curl to request /webApp/ (with the trailing slash), it works fine and the cookie is set as expected:
MacBook-Pro:~ $ curl -I https://example.com/webApp/
HTTP/1.1 302 Found
Server: nginx/1.9.11
Date: Thu, 28 Apr 2016 14:45:55 GMT
Content-Length: 0
Connection: keep-alive
Set-Cookie: JSESSIONID=440B8D729469BBD80FC92796754D9475; Path=/providerApp/; HttpOnly
Location: http://webapp.service.consul/webApp/login.do;jsessionid=440B8D729469BBD80FC92796754D9475
However, when I request /webApp (no trailing slash), I get a 302 Found, but I'm not redirected to the /webApp/login.do page as I was with the trailing slash:
MacBook-Pro:~$ curl -I https://example.com/webApp
HTTP/1.1 302 Found
Server: nginx/1.9.11
Date: Thu, 28 Apr 2016 14:48:44 GMT
Connection: keep-alive
Location: http://webapp.service.consul/webApp/
I have tried adding a rewrite rule, something like:
rewrite ^(.*[^/])$ $1/ permanent;
But it doesn't seem to make a difference. This was working before, and I haven't really messed with this location stanza, so I'm wondering if it has something to do with one of my other locations.
Any tips?

You didn't actually specify IN WHICH WAY it is not working, but it sounds like an issue that should be fixable with the proxy_redirect directive.
proxy_redirect http://webapp.service.consul/webApp/ http://example.com/webApp/;
(However, the best solution might be to just ensure that the hostnames are correctly specified in the rest of the configuration files.)
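For reference, a minimal sketch of how that directive might sit inside the original location block (assuming https://example.com is the public hostname, as in the question's curl output):

location /webApp {
    proxy_pass https://webapp.service.consul/webApp;
    proxy_set_header Host webapp.service.consul;
    proxy_intercept_errors on;

    # Rewrite the upstream's Location header so redirects point at the
    # public hostname rather than the internal Consul name.
    proxy_redirect http://webapp.service.consul/webApp/ https://example.com/webApp/;
}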

Related

SSE event data gets cut off when using Nginx

I am implementing a web interface using React and Flask. One component of this interface is a server-sent event (SSE) that is used to update data in the front end whenever it changes in a database. This data is quite large and one event could contain over 16000 characters.
The React front end uses a reverse proxy to the Flask back end in order to forward API requests to it. When accessing the back end directly, SSEs work fine and the data is pushed as expected. However, when using Nginx to serve the reverse proxy, something weird happens. It seems like Nginx buffers and chunks the event stream and does not send the results until it has accumulated around 16000 characters. If the data from an event is smaller than this, the front end has to wait until more events are sent. If the data is larger, the newlines that tell the EventSource that a message has been received aren't part of the event. This results in the front end receiving event X when event X+1 is sent (that is, when the newlines actually appear in the stream).
This is the response header when using nginx:
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
Date: Thu, 19 Nov 2020 13:10:49 GMT
Content-Type: text/event-stream; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
This is the response header when running flask run and accessing the port directly (I'm going to use gunicorn later but I tested with flask run to make sure nginx was the problem and not gunicorn):
HTTP/1.0 200 OK
Content-Type: text/event-stream; charset=utf-8
Cache-Control: no-transform
Connection: keep-alive
Connection: close
Server: Werkzeug/1.0.1 Python/3.7.6
Date: Thu, 19 Nov 2020 13:23:54 GMT
This is the nginx config in sites-available:
upstream backend {
    server 127.0.0.1:6666;
}

server {
    listen 7076;
    root <path/to/react/build/>;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    location /api {
        include proxy_params;
        proxy_pass http://backend;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_cache off;
        chunked_transfer_encoding off;
    }
}
This config is based on the answer mentioned here.
As you can see, both proxy_buffering and chunked_transfer_encoding are off, so I don't understand why this is happening. I have also tried changing the buffer sizes, but without any luck.
Can anybody tell me why this is happening? How do I fix it such that using nginx results in the same behaviour as when I don't use it? Thank you.
The above-mentioned configuration actually did work. However, the server I was using contained another nginx configuration that was overriding mine. Once the SSE-specific parameters were added to that configuration as well, things started working as expected. So the configuration in the question was correct all along.
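For completeness, a minimal sketch of the SSE-related directives from the config above, which, per this resolution, must also appear in whichever nginx configuration actually ends up handling these requests (the upstream name and location come from the question):

location /api {
    proxy_pass http://backend;

    # Keep the event stream flowing instead of accumulating ~16 kB chunks.
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;
    proxy_cache off;
    chunked_transfer_encoding off;
}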

Nginx $upstream_addr variable doesn't work in if condition

I'm running a reverse proxy using the proxy_pass directive from ngx_http_proxy_module. I want to forbid access to certain backend IP address ranges (like 172.0.0.0/24). I've tried
if ($upstream_addr ~* "^172.*") {
    return 403;
}
add_header X-mine "$upstream_addr";
in both the server and location context, but it doesn't work, i.e. Nginx still returns 200:
$ curl localhost -I
HTTP/1.1 200 OK
Server: nginx/1.17.0
Date: Thu, 13 Feb 2020 12:58:36 GMT
Content-Type: text/html
Content-Length: 612
Connection: keep-alive
Last-Modified: Tue, 24 Sep 2019 14:49:10 GMT
ETag: "5d8a2ce6-264"
Accept-Ranges: bytes
X-mine: 172.20.0.2:80
What am I missing? (Note that I added the content of the $upstream_addr variable to the X-mine header for debugging.)
My understanding is that the if directive is run before the upstream request is sent, while the $upstream_addr variable is only set after the upstream request has completed. I have tried and failed to find definitive documentation that explains the precise process, but the nginx documentation seems to be missing a number of things that one might wish for.
See this answer, and also If is evil for a little more guidance. I'm not actually sure quite what you're trying to achieve so I can't offer any hope about whether or not it's possible.
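To illustrate the timing, a minimal sketch (not a fix): the same variable is empty when the if is evaluated in the rewrite phase, but populated by the time the response headers and the access log are written. The upstream address and the X-mine header come from the question; the log path and upstream name are assumptions.

upstream backend {
    server 172.20.0.2:80;
}

log_format upstream_dbg '$remote_addr -> $upstream_addr';

server {
    listen 80;
    access_log /var/log/nginx/upstream_dbg.log upstream_dbg;  # shows the upstream address

    location / {
        # Evaluated while building the response, after the upstream request,
        # so $upstream_addr is populated here.
        add_header X-mine "$upstream_addr";

        # Evaluated in the rewrite phase, before any upstream connection
        # exists, so $upstream_addr is still empty and this never matches.
        #if ($upstream_addr ~* "^172") { return 403; }

        proxy_pass http://backend;
    }
}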

Nginx reverse proxy subdirectory rewrites for sourcegraph

I'm trying to have a self-hosted sourcegraph server served from a subdirectory of my domain, using a reverse proxy to add an SSL cert.
The goal is to have http://example.org/source serve the sourcegraph server.
My rewrites and reverse proxy look like this:
location /source {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Scheme $scheme;
    rewrite ^/source/?(.*) /$1 break;
    proxy_pass http://localhost:8108;
}
The problem I am having is that upon calling http://example.org/source I get redirected to http://example.org/sign-in?returnTo=%2F
Is there a way to rewrite the response of sourcegraph to the correct subdirectory?
Additionally, where can I debug the rewrite directive? I would like to follow the changes it does to understand it better.
-- Edit:
I know my approach using rewrite is probably wrong, and I'm trying the sub_filter module right now.
I captured the response of sourcegraph using tcpdump and analyzed it with Wireshark, so this is where I am:
GET /sourcegraph/ HTTP/1.0
Host: 127.0.0.1:8108
Connection: close
Upgrade-Insecure-Requests: 1
DNT: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
Referer: https://example.org/
Accept-Encoding: gzip, deflate, br
Accept-Language: de,en-US;q=0.9,en;q=0.8
Cookie: sidebar_collapsed=false;
HTTP/1.0 302 Found
Cache-Control: no-cache, max-age=0
Content-Type: text/html; charset=utf-8
Location: /sign-in?returnTo=%2Fsourcegraph%2F
Strict-Transport-Security: max-age=31536000
Vary: Cookie
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Trace: #tracer-not-enabled
X-Xss-Protection: 1; mode=block
Date: Sat, 07 Jul 2018 13:59:06 GMT
Content-Length: 58
Found.
Using rewrite here causes extra processing overhead and is totally unnecessary.
proxy_pass works like this:
If you proxy_pass to a naked URL, i.e. nothing at all after the domain/IP/port, the full client request URI gets added to the end and passed to the proxy.
Add anything, even just a slash, to the proxy_pass and whatever you add replaces the part of the client request URI which matches the URI of that location block.
So if you want to lose the /source part of your client request, it needs to look like this:
location /source/ {
    proxy_pass http://localhost:8108/;
    .....
}
Now requests will be proxied like this:
example.com/source/ -> localhost:8108/
example.com/source/files/file.txt -> localhost:8108/files/file.txt
It's important to point out that Nginx isn't just dropping /source/ from the request; it's substituting the entire proxy_pass URI. That's not as clear when it's just a trailing slash, so to illustrate better, if we change proxy_pass to this:
proxy_pass http://localhost:8108/graph/; then the requests are now processed like this:
example.com/source/ -> localhost:8108/graph/
example.com/source/files/file.txt -> localhost:8108/graph/files/file.txt
If you are wondering what happens if someone requests example.com/source (no trailing slash), this still works provided you have not set the merge_slashes directive to off, as Nginx will add the trailing / to proxied requests.
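Putting this together with the proxy_set_header lines from the question, the whole location block might look something like this (a sketch; the port and the headers are taken from the question):

location /source/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Scheme $scheme;

    # The trailing slash replaces the matching /source/ prefix,
    # so no rewrite is needed.
    proxy_pass http://localhost:8108/;
}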
If you have Nginx in front of another webserver running on port 8108 and serve its content by proxy_passing everything from a subdirectory, e.g. /subdir, then you might have the issue that the service on port 8108 serves an HTML page that includes resources, calls its own APIs, etc. based on absolute URLs. These calls will omit the /subdir prefix, so they won't be routed to the service on port 8108 by nginx.
One solution is to make the webserver on port 8108 serve HTML that includes the base href attribute, e.g.
<head>
<base href="https://example.com/subdir">
</head>
which tells a client that all links are relative to that path (see https://www.w3schools.com/tags/att_base_href.asp)
Sometimes this is not an option though - maybe the webserver is something you just spin up provided by an external docker image, or maybe you just don't see a reason why you should need to tamper with a service that runs perfectly as a standalone. A solution that only requires changes to the nginx in front is to use the Referer header to determine if the request was initiated by a resource located at /subdir. If that is the case, you can rewrite the request to be prefixed with /subdir and then redirect the client to that location:
location / {
    if ($http_referer = "https://example.com/subdir/") {
        rewrite ^/(.*) https://example.com/subdir/$1 redirect;
    }
    ...
}

location /subdir/ {
    proxy_pass http://localhost:8108/;
}
Or something like this, if you prefer a regex to let you omit the hostname:
if ($http_referer ~ "^https?://[^/]+/subdir/") {
    rewrite ^/(.*) https://$http_host/subdir/$1 redirect;
}
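As for the side question about where to debug the rewrite directive: a sketch of how rewrite processing can be traced, assuming you have access to the error log (rewrite_log on makes the rewrite module log each decision at the notice level; the log path here is an assumption):

server {
    error_log /var/log/nginx/error.log notice;  # rewrite_log entries are written at notice level
    rewrite_log on;                             # trace every rewrite evaluation

    location /source/ {
        proxy_pass http://localhost:8108/;
    }
}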

Nginx won't serve the json responses

I've got a single-page application running on a server correctly, serving pages across different URLs:
example.com
example.com/jsonendpoint
But when I try to access an endpoint meant to return JSON, I get an HTML response. The nginx config looks like:
server {
    root /home/myapplication/server/public;

    location / {
        proxy_pass http://myapplication/;
        proxy_redirect off;
        try_files $uri $uri/ /index.html;
    }

    location /jsonendpoint {
        proxy_pass http://myapplication/;
        default_type application/json;
        proxy_redirect off;
    }
}
If I comment out the try_files line, then the JSON response works fine and I can still access the root URL at example.com but when I try to access example.com/jsonendpoint then nginx returns a 404.
How do I fix the config to get both things to work?
EDIT:
When I curl the server from within the VPS it's hosted on:
curl -i -H "Accept: application/json" http://localhost:3000/jsonendpoint
I get a JSON response:
Content-Type: application/json; charset=utf-8
When I make the same curl request from my local machine (which means going through nginx), the response type is wrong:
Content-Type: text/html; charset=UTF-8
This rules out the possibility that the problem lies with the backend server.
The mime.types file does have the json mime type added and it is being included by nginx. I've also tried forcing a response type per location block (see above snippet).
Check that in your mime types you have something like:
application/json json;
Then try to query your site using something like this:
curl -i -H "Accept: application/json" http://your-site
Check the content-type header; if your application is returning JSON it should be:
content-type: application/json
If you get something like:
content-type: text/html; charset=utf-8
Check your backend and add the proper content type.
In case you would like to force the type, try this:
location / {
    default_type application/json;
    # ...
}
Old question but wanted to share what fixed the issue for me:
The nginx user did not have access to my JSON files. Apparently the root user loads the HTML when you browse to the website, but other requests like JSON calls are served by nginx worker processes running as the nginx user.
I added read and execute permissions for all users on the html folder to solve this problem (there may be better solutions):
chmod -R 755 /usr/share/nginx/html
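For context, a minimal sketch of where that worker-process account is configured; it is this account that needs read access to the files (the user name is an assumption, distributions differ):

# In nginx.conf: worker processes run as this user, so it must be able
# to read everything under the document root.
user www-data;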

Custom 404 in Nginx

I'm trying to specify a custom 404 page and preserve the URL; however, using the below gives me the error nginx: [emerg] "proxy_pass" cannot have URI part in location given by regular expression, or inside named location, or inside "if" statement, or inside "limit_except" block in...
location / {
    error_page 404 @404;
}

location @404 {
    proxy_pass https://example.com/404.html;
}
Does anyone know a solution to this?
I have tested your config on nginx 1.6.2. The error message complains about the URL in proxy_pass having a path specified. Specifying only https://server would be OK, but that doesn't solve your issue.
Named locations (like @fallback) were introduced partly to avoid having to use non-existing locations (which could later come into existence and cause issues). But we can also use a location which should never exist on the server itself, for example:
location / { error_page 404 /xx-404-xx; }
location /xx-404-xx { proxy_pass https://example.com/404.html; }
Redirecting to a relative URL causes only an internal nginx redirect, which does not change the URL as the browser sees it.
EDIT: Here is the test result:
kenny:/etc/nginx/sites-enabled:# wget -S server.com/abc/abc
HTTP request sent, awaiting response...
HTTP/1.1 404 Not Found
Server: nginx/1.6.2
Date: Thu, 16 Jul 2015 07:01:38 GMT
Content-Type: text/html
Content-Length: 5
Connection: keep-alive
Last-Modified: Thu, 16 Jul 2015 07:00:59 GMT
ETag: "5037762-5-51af8a241434b"
Accept-Ranges: bytes
2015-07-16 09:01:38 ERROR 404: Not Found.
From apache access log:
1.2.3.4 - - [16/Jul/2015:09:01:38 +0200] "GET /test.html HTTP/1.0" 200 5 "-" "Wget/1.13.4 (linux-gnu)"
In nginx I have proxy_pass pointing to an HTML file on an Apache webserver containing just "test\n". As you can see, nginx fetched that, passed on the headers from Apache (Last-Modified, ETag) and also Content-Length: 5, so it received the HTML file from Apache, but wget doesn't save the content of 404 errors. Also, many browsers don't display 404 pages by default if they are smaller than 1 kB (they show their own error page instead). So either make your 404 page bigger, or configure nginx to serve it as normal HTML with a "200" result code (error_page 404 =200 /xx).
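A sketch of that =200 variant, combined with the proxied error page from above (the location name and upstream URL are the ones used earlier in this answer):

location / {
    # Serve the proxied error page with a 200 status so browsers
    # don't substitute their own error page.
    error_page 404 =200 /xx-404-xx;
}

location /xx-404-xx {
    proxy_pass https://example.com/404.html;
}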
I have no idea why you are receiving external redirects with the same config. Try it with wget to see exactly which headers nginx sent. Also, mixing http and https for the proxy should be no issue. Try removing everything else from your config and testing only this error page; maybe some other directive is causing this (like another location being used instead).
