I am trying to build a reverse proxy with nginx to make all the UIs in my project reachable from a single address.
For a single service, the configuration below works without a problem:
/etc/nginx/sites-enabled/reverse-proxy.conf
server {
    listen 80;
    listen [::]:80;

    location / {
        resolver 127.0.0.1;
        allow "x.x.x.x";
        deny all;
        proxy_pass http://consul:8500;
    }
}
So when I call the server's IP x.x.x.x in my browser, I see the Consul UI and the URL shows x.x.x.x/ui/dc1. Besides that, I can see that the UI's requests for asset files succeeded.
My question: is it possible to host different services on the same server and reference them with different location blocks? For example, if I want to include the Vault UI, I would think of doing something like this:
server {
    listen 80;
    listen [::]:80;

    location /consul {
        resolver 127.0.0.1;
        allow "x.x.x.x";
        deny all;
        proxy_pass http://consul:8500;
    }

    location /vault {
        resolver 127.0.0.1;
        allow "x.x.x.x";
        deny all;
        proxy_pass http://vault:8200;
    }
}
However, I am not sure whether it can be done this way. The furthest I got was opening the Consul UI with all other sub-requests (i.e. loading assets) not found.
UPDATE
I think my problem is that I am using location and proxy_pass incorrectly.
Observing the first configuration (which is working):
server {
    listen 80;
    listen [::]:80;

    location / {
        resolver 127.0.0.1;
        allow "x.x.x.x";
        deny all;
        proxy_pass http://consul:8500;
    }
}
If I look at the output of curl localhost -L -vvvv:
* Trying 127.0.0.1:80...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: localhost
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.18.0 (Ubuntu)
< Date: Fri, 10 Jul 2020 16:24:38 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 39
< Connection: keep-alive
< Location: /ui/
<
* Ignoring the response-body
* Connection #0 to host localhost left intact
* Issue another request to this URL: 'http://localhost/ui/'
* Found bundle for host localhost: 0x557b754549e0 [serially]
* Can not multiplex, even if we wanted to!
* Re-using existing connection! (#0) with host localhost
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET /ui/ HTTP/1.1
> Host: localhost
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.18.0 (Ubuntu)
< Date: Fri, 10 Jul 2020 16:24:38 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 7806
< Connection: keep-alive
< Accept-Ranges: bytes
< Last-Modified: Fri, 10 Jul 2020 07:37:44 GMT
<
<!DOCTYPE html>
<html lang="en" class="ember-loading">
...
and I can already see the HTML. However, if I change the conf file to this:
server {
    listen 80;
    listen [::]:80;

    location /consul/ {
        resolver 127.0.0.1;
        allow "x.x.x.x";
        deny all;
        proxy_pass http://consul:8500;
    }
}
and then try to call it like curl localhost/consul -L -vvvv, I get the following:
* Trying 127.0.0.1:80...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET /consul HTTP/1.1
> Host: localhost
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.18.0 (Ubuntu)
< Date: Fri, 10 Jul 2020 16:32:35 GMT
< Content-Type: text/html
< Content-Length: 178
< Location: http://localhost/consul/
< Connection: keep-alive
<
* Ignoring the response-body
* Connection #0 to host localhost left intact
* Issue another request to this URL: 'http://localhost/consul/'
* Found bundle for host localhost: 0x55ba7959f9e0 [serially]
* Can not multiplex, even if we wanted to!
* Re-using existing connection! (#0) with host localhost
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET /consul/ HTTP/1.1
> Host: localhost
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Server: nginx/1.18.0 (Ubuntu)
< Date: Fri, 10 Jul 2020 16:32:35 GMT
< Content-Length: 0
< Connection: keep-alive
I would appreciate any ideas on this issue
You are right, you are using location and proxy_pass the wrong way. When you use the
location /vault {
    proxy_pass http://vault:8200;
}
construction, you are passing your URI to the upstream as-is, while most likely you want to strip the /vault prefix from it. To do that, you should use this one:
location /vault/ {
    proxy_pass http://vault:8200/;
}
You can read more about the difference between the first and the second form here. However, this alone can still leave the assets failing to load correctly.
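Applied to the two services from the question, a minimal sketch reusing the upstream names and allow/deny rules from the original configuration would be:
server {
    listen 80;
    listen [::]:80;

    location /consul/ {
        resolver 127.0.0.1;
        allow "x.x.x.x";
        deny all;
        # the trailing slash strips the /consul prefix before passing upstream
        proxy_pass http://consul:8500/;
    }

    location /vault/ {
        resolver 127.0.0.1;
        allow "x.x.x.x";
        deny all;
        # the trailing slash strips the /vault prefix before passing upstream
        proxy_pass http://vault:8200/;
    }
}
Note that this only changes the URI the upstream sees; the absolute asset URLs and redirects discussed next will still escape the /consul/ and /vault/ prefixes.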
This question - how to proxy some webapp under a URI prefix - is asked again and again on Stack Overflow. The only right way to do it is to make your proxied app request its assets via relative URLs only (consider assets/script.js instead of /assets/script.js) or via the right prefix (/vault/assets/script.js). Some well-written apps can detect that they are being used under such a URI prefix and use it when generating asset links, some apps allow you to specify it via a setting, but some are not suited for such use at all. The reason why the webapp won't work without fulfilling these requirements is quite obvious - any URL not starting with /vault won't match your location /vault/ { ... } block and will be served via the main location block instead. So the best way is to fix your webapp; however, several workarounds can be used if you really cannot.
Some web frameworks already build their webapps with relative URLs but use a <base href="/"> in the head section of index.html; React and Angular, for example, use this approach. If you have such a line within your webapp's root index.html, just change it to <base href="/vault/">.
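If editing the app's index.html is not an option, the same effect can be approximated at the proxy with sub_filter; a sketch, assuming the app ships a literal <base href="/"> tag (the general sub_filter workaround, including its compression caveat, is described as the last option below):
location /vault/ {
    proxy_pass http://vault:8200/;
    # text/html is filtered by default; only the <base> tag needs replacing
    sub_filter '<base href="/">' '<base href="/vault/">';
    sub_filter_once on;
}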
Using conditional routing based on the HTTP Referer header value. This approach works quite well for single-page applications loading their assets, but if a webapp contains several pages it won't work; its logic for detecting the right upstream breaks after the first jump from one page to another. Here is an example:
map $http_referer $prefix {
    ~https?://[^/]+/vault/    vault;
    # other webapps prefixes could be defined here
    # ...
    default                   base;
}
server {
    # listen port, server name and other global definitions here
    # ...
    location / {
        try_files "" @$prefix;
    }
    location /vault/ {
        # proxy request to the vault upstream, remove "/vault" part from the URI
        proxy_pass http://vault:8200/;
    }
    location @vault {
        # proxy request to the vault upstream, do not change the URI
        proxy_pass http://vault:8200;
    }
    location @base {
        # default "root" location
        proxy_pass http://consul:8500;
    }
}
Update @ 2022.02.19
Here is one more possible approach using conditional rewrite:
server {
    # listen port, server name and other global definitions here
    # ...
    if ($http_referer ~ https?://[^/]+/vault/) {
        # rewrite request URI only if it doesn't already start with the '/vault' prefix
        rewrite ^((?!/vault).*) /vault$1;
    }
    # locations here
    # ...
}
Rewriting the links inside the response body using the sub_filter directive from ngx_http_sub_module. This is the ugliest option, but it can still be used as a last resort. This approach has an obvious performance impact. The rewrite patterns should be determined from your upstream response body. Usually that type of configuration looks like this:
location /vault/ {
    proxy_pass http://vault:8200/;
    sub_filter_types text/css application/javascript;
    sub_filter_once off;
    sub_filter 'href="/' 'href="/vault/';
    sub_filter "href='/" "href='/vault/";
    sub_filter 'src="/' 'src="/vault/';
    sub_filter "src='/" "src='/vault/";
    sub_filter 'url("/' 'url("/vault/';
    sub_filter "url('/" "url('/vault/";
    sub_filter "url(/" "url(/vault/";
}
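A caveat worth adding to the configuration above: sub_filter only sees uncompressed response bodies, so the upstream has to be asked not to compress them, for example inside the same location block:
    # prevent the upstream from gzipping the body, otherwise sub_filter cannot match
    proxy_set_header Accept-Encoding "";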
Update @ 2022.02.19
Related thread at ServerFault: How to handle relative urls correctly with a nginx reverse proxy.
Possible caveats using sub_filter on the JavaScript code: Nginx as reverse proxy to two nodejs app on the same domain.
Related
I've been facing some issues with nginx and PUT redirects:
Let's say I have an HTTP service sitting behind an nginx server (assume HTTP 1.1)
The client does a PUT /my/api with Expect: 100-continue.
My service is not sending a 100-continue, but sends a 307 redirect instead, to another endpoint (in this case, S3).
However, nginx is for some unknown reason sending a 100-continue prior to serving the redirect - the client proceeds to upload the whole body to nginx before the redirect is served. This causes the client to effectively transfer the body twice - which isn't great for multi-gigabyte uploads
I am wondering if there is a way to:
Prevent nginx from sending a 100-continue unless the service actually sends one.
Allow requests with an arbitrarily large Content-Length without having to set client_max_body_size to a large value (to avoid 413 Request Entity Too Large).
Since my service is sending redirects only and never sending 100-Continue, the request body is never supposed to reach nginx. Having to set client_max_body_size and waiting for nginx to buffer the whole body just to serve a redirect is quite suboptimal.
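For reference, the nginx directives I understand to govern the body-size check and request-body buffering look like this (a sketch only, pointing at the test upstream described below; whether they also stop nginx from answering 100-continue prematurely is exactly what I am unsure about):
location / {
    # 0 disables the client request body size check entirely (no 413)
    client_max_body_size 0;
    # stream the request body to the upstream instead of buffering it in nginx first
    proxy_request_buffering off;
    proxy_pass http://127.0.0.1:9999/;
}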
I've been able to do that with Apache, but not with nginx. Apache used to have the same behavior before this got fixed: https://bz.apache.org/bugzilla/show_bug.cgi?id=60330 - wondering if nginx has the same issue
Any pointers appreciated :)
EDIT 1: Here's a sample setup to reproduce the issue:
An nginx listening on port 80, forwarding to localhost on port 9999
A simple HTTP server listening on port 9999, that always returns redirects on PUTs
nginx.conf
worker_rlimit_nofile 261120;
worker_shutdown_timeout 10s;

events {
    multi_accept on;
    worker_connections 16384;
    use epoll;
}

http {
    server {
        listen 80;
        server_name frontend;
        keepalive_timeout 75s;
        keepalive_requests 100;

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://127.0.0.1:9999/;
        }
    }
}
I'm running the above with
docker run --rm --name nginx --net=host -v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro nginx:1.21.1
Simple python3 HTTP server.
#!/usr/bin/env python3
import sys
from http.server import HTTPServer, BaseHTTPRequestHandler

class Redirect(BaseHTTPRequestHandler):
    def do_PUT(self):
        self.send_response(307)
        self.send_header('Location', 'https://s3.amazonaws.com/test')
        self.end_headers()

HTTPServer(("", 9999), Redirect).serve_forever()
Test results:
Uploading directly to the python server works as expected. The python server does not send a 100-continue on PUTs - it will directly send a 307 redirect before seeing the body.
$ curl -sv -L -X PUT -T /some/very/large/file 127.0.0.1:9999/test
> PUT /test HTTP/1.1
> Host: 127.0.0.1:9999
> User-Agent: curl/7.74.0
> Accept: */*
> Content-Length: 531202949
> Expect: 100-continue
>
* Mark bundle as not supporting multiuse
* HTTP 1.0, assume close after body
< HTTP/1.0 307 Temporary Redirect
< Server: BaseHTTP/0.6 Python/3.9.2
< Date: Thu, 15 Jul 2021 10:16:44 GMT
< Location: https://s3.amazonaws.com/test
<
* Closing connection 0
* Issue another request to this URL: 'https://s3.amazonaws.com/test'
* Trying 52.216.129.157:443...
* Connected to s3.amazonaws.com (52.216.129.157) port 443 (#1)
> PUT /test HTTP/1.0
> Host: s3.amazonaws.com
> User-Agent: curl/7.74.0
> Accept: */*
> Content-Length: 531202949
>
Doing the same thing through nginx fails with 413 Entity too large - even though the body should not go through nginx.
After adding client_max_body_size 1G; to the config, the result is different, but nginx still tries to buffer the whole body:
$ curl -sv -L -X PUT -T /some/very/large/file 127.0.0.1:80/test
* Trying 127.0.0.1:80...
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> PUT /test HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.74.0
> Accept: */*
> Content-Length: 531202949
> Expect: 100-continue
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 100 Continue
} [65536 bytes data]
* We are completely uploaded and fine
* Mark bundle as not supporting multiuse
< HTTP/1.1 502 Bad Gateway
< Server: nginx/1.21.1
< Date: Thu, 15 Jul 2021 10:22:08 GMT
< Content-Type: text/html
< Content-Length: 157
< Connection: keep-alive
<
{ [157 bytes data]
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.21.1</center>
</body>
</html>
Notice how nginx sends an HTTP/1.1 100 Continue.
With this simple python server, the request subsequently fails because the python server closes the connection right after serving the redirect, which causes nginx to serve the 502 due to a broken pipe:
127.0.0.1 - - [15/Jul/2021:10:22:08 +0000] "PUT /test HTTP/1.1" 502 182 "-" "curl/7.74.0"
2021/07/15 10:22:08 [error] 31#31: *1 writev() failed (32: Broken pipe) while sending request to upstream, client: 127.0.0.1, server: frontend, request: "PUT /test HTTP/1.1", upstream: "http://127.0.0.1:9999/test", host: "127.0.0.1"
So as far as I can see, this seems exactly like the following Apache issue https://bz.apache.org/bugzilla/show_bug.cgi?id=60330 (which is now addressed in newer versions). I am not sure how to circumvent this with nginx
How to route based on cookies preferred by the end user?
We have Nginx/1.17.10 running as a pod in AKS. An eCommerce site is hosted on it.
CloudFlare is the front end, acting as DNS and WAF.
CloudFlare has GeoIP turned on, so we have the $http_cf_ipcountry parameter to trace the country code. However, we are looking for the preference saved by the end user so we can route to that specific region.
Example:
If $http_cookie --> COUNTRY_CODE=UAE;
Then rewrite to example.com --> example.com/en-ae
If $http_cookie --> COUNTRY_CODE=KW;
Then rewrite to example.com --> example.com/en-kw
If there is no preference saved on cookie, then route to default "example.com"
The Http_cookie parameter also holds other details such as _cfduid, COUNTRY_CODE_PREV, CURRENYCY_CODE, EXCHANGE_RATE.
What should be the best approach to handle this requirement?
Can anyone help me with this? Thanks!
I would create a map to construct the redirect URLs.
http://nginx.org/en/docs/http/ngx_http_map_module.html#map
This will set the rewrite URL in a variable, $new_uri. The default, if no cookie value is present, will be /en-en/. Now you can create a rewrite rule.
rewrite ^(.*)$ $new_uri permanent;
Here is an updated config example as requested.
map $cookie_user_country $new_uri {
    default /en-en/;
    UAE     /en-ae/;
    KW      /en-kw/;
}

server {
    listen 8080;
    return 200 "$uri \n";
}

server {
    listen 8081;
    rewrite ^(.*)$ $new_uri permanent;
    return 200 "$cookie_user_country \n";
}
Use the $cookie_NAME variable to get the value of a single cookie. The $http_VAR variable contains the value of a specific HTTP request header.
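Applied to the cookie names from the question, the map would read the COUNTRY_CODE cookie directly; a sketch, assuming the stored values are literally UAE and KW (note that $cookie_COUNTRY_CODE contains only that cookie's value, while $http_cookie holds the whole Cookie header including _cfduid, COUNTRY_CODE_PREV and the rest):
map $cookie_COUNTRY_CODE $new_uri {
    default /en-en/;   # no preference saved, or an unknown value
    UAE     /en-ae/;
    KW      /en-kw/;
}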
See my curl request for more details.
[root@localhost conf.d]# curl -v --cookie "user_country=KW; test=id; abcc=def" localhost:8081
* About to connect() to localhost port 8081 (#0)
* Trying ::1...
* Connection refused
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8081 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost:8081
> Accept: */*
> Cookie: user_country=KW; test=id; abcc=def
>
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.17.6
< Date: Sun, 26 Apr 2020 12:34:15 GMT
< Content-Type: text/html
< Content-Length: 169
< Location: http://localhost:8081/en-kw/
< Connection: keep-alive
<
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.17.6</center>
</body>
</html>
* Connection #0 to host localhost left intact
Checking whether the currently running NGINX binary contains the map module
Type:
strings `which nginx` | grep ngx_http_map_module | head -1
This will list all "printable" strings from the nginx binary and grep the output for "ngx_http_map_module". The result should look like this:
[root@localhost conf.d]# strings `which nginx` | grep ngx_http_map_module | head -1
--> ngx_http_map_module
If the output equals ngx_http_map_module, the currently running NGINX binary was compiled with map support. If not, make sure you are using an NGINX binary compiled with map support.
I'm trying to have a self-hosted Sourcegraph server served under a subdirectory of my domain, using a reverse proxy to add an SSL cert.
The target is to have http://example.org/source serve the sourcegraph server
My rewrites and reverse proxy look like this:
location /source {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Scheme $scheme;
    rewrite ^/source/?(.*) /$1 break;
    proxy_pass http://localhost:8108;
}
The problem I am having is that upon calling http://example.org/source I get redirected to http://example.org/sign-in?returnTo=%2F
Is there a way to rewrite the response of sourcegraph to the correct subdirectory?
Additionally, where can I debug the rewrite directive? I would like to follow the changes it does to understand it better.
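As for debugging rewrite: the rewrite module can log every rule it evaluates to the error log once rewrite_log is enabled and the log level is raised to notice (a sketch, to be placed in the server or location block):
rewrite_log on;
error_log /var/log/nginx/error.log notice;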
-- Edit:
I know my approach of using rewrite is probably wrong, and I'm trying the sub_filter module right now.
I captured the response of sourcegraph using tcpdump and analyzed it with wireshark, so I am at:
GET /sourcegraph/ HTTP/1.0
Host: 127.0.0.1:8108
Connection: close
Upgrade-Insecure-Requests: 1
DNT: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
Referer: https://example.org/
Accept-Encoding: gzip, deflate, br
Accept-Language: de,en-US;q=0.9,en;q=0.8
Cookie: sidebar_collapsed=false;
HTTP/1.0 302 Found
Cache-Control: no-cache, max-age=0
Content-Type: text/html; charset=utf-8
Location: /sign-in?returnTo=%2Fsourcegraph%2F
Strict-Transport-Security: max-age=31536000
Vary: Cookie
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Trace: #tracer-not-enabled
X-Xss-Protection: 1; mode=block
Date: Sat, 07 Jul 2018 13:59:06 GMT
Content-Length: 58
Found.
Using rewrite here causes extra processing overhead and is totally unnecessary.
proxy_pass works like this:
proxy_pass to a naked URL, i.e. nothing at all after the domain/ip/port, and the full client request URI gets added to the end and passed to the proxy.
Add anything, even just a slash, to the proxy_pass and whatever you add replaces the part of the client request URI which matches the URI of that location block.
So if you want to lose the /source part of your client request, it needs to look like this:
location /source/ {
    proxy_pass http://localhost:8108/;
    .....
}
Now requests will be proxied like this:
example.com/source/ -> localhost:8108/
example.com/source/files/file.txt -> localhost:8108/files/file.txt
It's important to point out that Nginx isn't just dropping /source/ from the request; it's substituting my entire proxy_pass URI. That's not as clear when it's just a trailing slash, so to better illustrate, if we change proxy_pass to this:
proxy_pass http://localhost:8108/graph/; then the requests are now processed like this:
example.com/source/ -> localhost:8108/graph/
example.com/source/files/file.txt -> localhost:8108/graph/files/file.txt
If you are wondering what happens if someone requests example.com/source (no trailing slash), this works provided you have not set the merge_slashes directive to off, as Nginx will add the trailing / to proxied requests.
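One more note, hedged because it depends on how the proxied app builds its redirects: the capture in the question shows a path-only redirect (Location: /sign-in?returnTo=...), and the default proxy_redirect only rewrites Location headers that match the proxy_pass URL, so such a redirect passes through unchanged. An explicit proxy_redirect can map it back under the prefix:
location /source/ {
    proxy_pass http://localhost:8108/;
    # rewrite upstream "Location: /..." headers back under /source/
    proxy_redirect / /source/;
}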
If you have Nginx in front of another webserver that's running on port 8108 and serve its content by proxy_pass of everything from a subdir, e.g. /subdir, then you might have the issue that the service at port 8108 serves an HTML page that includes resources, calls its own APIs, etc. based on absolute URLs. These calls will omit the /subdir prefix, so they won't be routed to the service at port 8108 by nginx.
One solution is to make the webserver at port 8108 serve HTML that includes the base href attribute, e.g.
<head>
<base href="https://example.com/subdir">
</head>
which tells a client that all links are relative to that path (see https://www.w3schools.com/tags/att_base_href.asp)
Sometimes this is not an option though - maybe the webserver is something you just spin up from an externally provided docker image, or maybe you just don't see a reason to tamper with a service that runs perfectly as a standalone. A solution that only requires changes to the nginx in front is to use the Referer header to determine whether the request was initiated by a resource located under /subdir. If that is the case, you can rewrite the request to be prefixed with /subdir and then redirect the client to that location:
location / {
    if ($http_referer = "https://example.com/subdir/") {
        rewrite ^/(.*) https://example.com/subdir/$1 redirect;
    }
    ...
}

location /subdir/ {
    proxy_pass http://localhost:8108/;
}
Or something like this, if you prefer a regex to let you omit the hostname:
if ($http_referer ~ "^https?://[^/]+/subdir/") {
    rewrite ^/(.*) https://$http_host/subdir/$1 redirect;
}
Hi, I am new to Nginx and looking for some help redirecting my HTTP requests to HTTPS.
I have two configurations on a load balancer with ports 80 and 444 at Linode's cloud system.
If a request comes in over HTTPS, the load balancer sends it to my serving Tomcat after terminating SSL at the LB.
If a request comes in over HTTP, the load balancer sends it to my nginx server, which redirects the request to HTTPS.
Whenever I start my nginx server, I see continuous logs of the redirect URL in my Tomcat server even though no one is hitting my HTTP URL. I have the following complete nginx.conf file.
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        server_name example.com;

        #return 301 https://$server_name$request_uri;
        rewrite ^ https://$server_name/$request_uri permanent;

        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location =/ {
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
The same configuration works perfectly if I put the IP address in place of the actual domain name.
Following are the curl results based on location; I see that after the redirect to HTTPS, the Location header shows https://example.com/login, which is correct.
# curl -i http://example.com
HTTP/1.1 301 Moved Permanently
Server: nginx/1.6.3
Date: Fri, 29 Jan 2016 07:43:54 GMT
Content-Type: text/html
Content-Length: 184
Connection: close
Location: https://example.com/
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.6.3</center>
</body>
</html>
# curl -i https://example.com
HTTP/1.1 302 Found
Server: Apache-Coyote/1.1
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
Strict-Transport-Security: max-age=31536000 ; includeSubDomains
X-XSS-Protection: 1; mode=block
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
Set-Cookie: JSESSIONID=C3B65BD4E015F05705B585F5F8D70074; Path=/; Secure; HttpOnly
Location: https://example.com/login
Content-Length: 0
Date: Fri, 29 Jan 2016 07:44:03 GMT
Connection: close
#curl -i https://example.com/login
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
Strict-Transport-Security: max-age=31536000 ; includeSubDomains
X-XSS-Protection: 1; mode=block
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
Set-Cookie: JSESSIONID=6B30D6D70672A99F13B2F441B2F2150E; Path=/; Secure; HttpOnly
Content-Type: text/html;charset=ISO-8859-1
Content-Language: en-US
Transfer-Encoding: chunked
Date: Fri, 29 Jan 2016 07:44:18 GMT
Connection: close
<HTML context of login page>
Please suggest what I am missing here.
To simply redirect all requests on port 80 to https, use the following configuration. No further lines are required, and they might defeat the purpose of this server block:
server {
    listen 80 default_server;
    server_name _;
    rewrite ^ https://$host$request_uri permanent;
}
This way, whichever host or even IP address is requested will be forwarded to its https counterpart. If you are sure there will be only one destination host, you may use it instead of the $host variable (do not add a / afterwards):
rewrite ^ https://example.com$request_uri permanent;
It'd be even better if you use return:
return 301 https://example.com$request_uri;
# or
# return 301 https://$host$request_uri;
Since this is the only purpose of this server block, remove all other directives, like root, location, error_page and include.
Beware of additional files at /etc/nginx/conf.d/*.conf or /etc/nginx/sites-enabled/*.conf, they may overwrite these settings.
Reload nginx configuration and test. I suggest using cURL — here's the expected result:
$ curl -i http://example.com
HTTP/1.1 301 Moved Permanently
Server: nginx/1.8.0
Date: Wed, 27 Jan 2016 17:33:45 GMT
Content-Type: text/html
Content-Length: 184
Connection: keep-alive
Location: https://example.com/
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.8.0</center>
</body>
</html>
Look and copy the Location: header content, then test again using cURL (use -k if you use a self-signed certificate):
curl -i https://example.com
The result should come from your tomcat application, and NOT be another redirect to the same page. If the result is the same (save for the date), then your LB is probably sending the https requests back to nginx, causing a loop.
Please note that the tomcat application may also be forwarding to https if it doesn't understand it's behind a proxy (the LB). In this case, you'll need to set up the application config to properly understand this (let me know if this is the case).
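If that turns out to be the case, the generic pattern is for whichever proxy terminates TLS to pass the original scheme on to the application, which must in turn be configured to trust that header. On the nginx side it looks like the sketch below (the backend address is hypothetical; in this setup the Linode LB, not nginx, would have to set the equivalent header):
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:8080;    # hypothetical Tomcat backend
}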
I saw that my Nginx server was exposed on a public IP on port 80 and was receiving traffic from unwanted hosts.
Since I had a default configuration in the Nginx server block which was redirecting all my incoming http:80 traffic from any host to my Tomcat on https:443, that is why I saw tons of logs in my Tomcat server.
I had to add the configuration below to my /etc/nginx.conf to redirect port 80 traffic to https only if the request comes from my domain.
if ($host ~ ^(example.com|www.example.com)$) {
    return 301 https://$server_name$request_uri;
}
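For comparison, the same host restriction is often written without an if, by listing the allowed names in server_name and giving every other host a catch-all server; a sketch assuming the same two hostnames:
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 80 default_server;
    server_name _;
    # close the connection for any other host without sending a response
    return 444;
}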