Nginx forward proxy based on header value - nginx

I want to use nginx as a forward proxy, but rewrite the URL (including the host part) based on a header value.
Suppose the browser connects to nginx on port 8888 with a regular HTTP request, and the request carries the header pair:
X-myattribute: https://somehost.com
nginx should proxy_pass to https://somehost.com
My nginx.conf is now:
server {
    listen 8888;
    proxy_connect;
    proxy_max_temp_file_size 0;
    resolver 8.8.8.8;
    location / {
        proxy_pass https://$http_myattribute;
        # proxy_pass http://$http_host$uri$is_args$args;
        proxy_set_header Host $http_host;
    }
}
but I get:
2018/08/16 19:44:08 [error] 9#0: *1 invalid port in upstream "https://somehost.com:443", client: 172.17.0.1, server: , request: "GET / HTTP/1.1", host: "localhost:8888"
2018/08/16 19:47:25 [error] 9#0: *1 invalid URL prefix in "https://somehost.com:443", client: 172.17.0.1, server: , request: "GET / HTTP/1.1", host: "localhost:8888"
(The two errors appear depending on whether I set proxy_pass http://$X-myattribute, proxy_pass https://$X-myattribute, or proxy_pass $X-myattribute. Assume X-myattribute always starts with http:// or https://.)
Any suggestion?
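One way this is commonly approached (a minimal sketch, not a drop-in answer): nginx exposes the request header X-myattribute as $http_x_myattribute (the X- prefix is kept, dashes become underscores), and since the header value already carries the scheme, the variable can be used directly in proxy_pass; a resolver is then required because the host is only known at run time:
server {
    listen 8888;
    # required: with a variable in proxy_pass, the upstream host is resolved at run time
    resolver 8.8.8.8;

    location / {
        # X-myattribute already contains http:// or https://, so pass it through as-is
        # and append the original request URI
        proxy_pass $http_x_myattribute$request_uri;
        proxy_set_header Host $http_host;
    }
}
Whether Host should stay $http_host or be set to the upstream's own hostname depends on what somehost.com expects; that part is carried over from the question as an assumption.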

Related

Facing issue with nginx proxy_pass

I want to proxy_pass
https://atmvpn.appdomain.cloud/sft-ui/sft/api/orgs/v1/org in such a way that it becomes
https://dev.apnat.net/sft/api/orgs/v1/org, i.e. while proxying we need to remove sft-ui, so I added the location below to the nginx.conf file:
location /sft-ui/sft/api {
    access_log off;
    rewrite ^/sft-ui/(.*) /$1 break;
    proxy_pass <%= ENV["AMS_DOMAIN"] %>;
}
I have set AMS_DOMAIN as an environment variable. But when I hit https://atmvpn.appdomain.cloud/sft-ui/sft/api/orgs/v1/org in the browser I get a "502 Bad Gateway" error.
In the logs of the OpenShift pod I can see:
2020/06/05 07:06:46 [error] 11#11: *1 SSL_do_handshake() failed (SSL: error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure:SSL alert number 40) while SSL handshaking to upstream, client: 172.30.96.141, server: , request: "GET /sft-ui/sft/api/orgs/v1/org HTTP/1.1", upstream: "https://104.18.12.180:443/sft/api/orgs/v1/org", host: "atmvpn.appdomain.cloud"
2020/06/05 07:06:46 [warn] 11#11: *1 upstream server temporarily disabled while SSL handshaking to upstream, client: 172.30.96.141, server: , request: "GET /sft-ui/sft/api/orgs/v1/org HTTP/1.1", upstream: "https://104.18.12.180:443/sft/api/orgs/v1/org", host: "atmvpn.appdomain.cloud"
Just adding proxy_ssl_server_name on; resolved it:
location /sft-ui/sft/api {
    access_log off;
    rewrite ^/sft-ui/(.*) /$1 break;
    proxy_pass <%= ENV["AMS_DOMAIN"] %>;
    # setting this to "on" makes nginx send the server name via SNI when proxying to the upstream
    proxy_ssl_server_name on;
}
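If the certificate name still does not match after enabling SNI (for example because AMS_DOMAIN resolves to a shared CDN address, as the 104.18.x.x upstream in the log suggests), the name used for SNI and certificate verification can also be pinned explicitly with proxy_ssl_name. A sketch, assuming dev.apnat.net is the intended upstream host:
location /sft-ui/sft/api {
    access_log off;
    rewrite ^/sft-ui/(.*) /$1 break;
    proxy_pass <%= ENV["AMS_DOMAIN"] %>;
    proxy_ssl_server_name on;
    # pin the name used for SNI (and certificate checks) instead of deriving it from proxy_pass
    proxy_ssl_name dev.apnat.net;
}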

Nginx auth_request handle saml 302 redirects

I am using Nginx as a reverse proxy. I am also using the auth_request feature to protect my resources: auth_request always goes through an authorization server where I am using SAML authentication. While authenticating, there is a 302 redirect to the IdP (ssocircle in this case) which I am trying to follow at Nginx, but I always end up with the errors below:
no resolver defined to resolve idp.ssocircle.com while sending to client, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", subrequest: "/saml/login", host: "localhost:9000"
*1269 auth request unexpected status: 502 while sending to client, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", host: "localhost:9000"
Here is my Nginx configuration:
server {
    listen 9000;
    auth_request /saml/login;
    proxy_intercept_errors on;
    error_page 301 302 307 = @handle_redirects;
    location /saml/login {
        resolver 46.4.112.4 valid=300s;
        internal;
        proxy_set_header Host $host;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_pass http://localhost:9090/; # url to authorization server
    }
    location @handle_redirects {
        set $saved_redirect_location '$upstream_http_location';
        proxy_pass $saved_redirect_location;
    }
    location / {
        root data/www;
    }
    location /images/ {
        root data/;
    }
}
I tried adding resolver 46.4.112.4 (the IP of ssocircle) but it didn't work.
Basically, I want to follow the redirect from my authorization server, have the user authenticate themselves at the IdP, and return back to the original URL.
I am very new to Nginx. Any help will be much appreciated.
Edit: I was able to resolve the issue stated above; restarting Nginx worked for me. But now I am getting this error:
auth request unexpected status: 302 while sending to client,
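The 302 is rejected because auth_request only treats a 2xx response from the subrequest as "allowed" and 401/403 as "denied"; any other status is considered an error. A commonly used workaround (a sketch, assuming the authorization server can be made to answer 401 instead of 302 when the user is not authenticated, and that it exposes the IdP URL in its Location header) is to capture that header with auth_request_set and redirect the client yourself:
server {
    listen 9000;

    location / {
        auth_request /saml/login;
        # capture the Location header returned by the auth subrequest
        auth_request_set $saml_redirect $upstream_http_location;
        error_page 401 = @handle_redirects;
        root data/www;
    }

    location = /saml/login {
        internal;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header Host $host;
        proxy_pass http://localhost:9090/; # authorization server
    }

    location @handle_redirects {
        # send the browser to the IdP URL captured above
        return 302 $saml_redirect;
    }
}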

How to add $request_uri to the auth_request module?

Here is my nginx config file:
location ~* ^/admin-panel/rest/(.*) {
    auth_request /admin/admin_authentication/check_access?url=$request_uri;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    resolver 127.0.0.11 ipv6=off;
    proxy_pass http://nginx:8000/$1$is_args$args;
}
I want to send $request_uri as a GET parameter to my authentication service, but I get errors like this:
2019/06/23 06:30:07 [error] 6#6: *5 auth request unexpected status: 404 while sending to client, client: 192.168.224.1, server: , request: "POST /admin-panel/rest/update HTTP/1.1", host: "localhost"
2019/06/23 06:30:07 [error] 6#6: *8 auth request unexpected status: 404 while sending to client, client: 192.168.224.1, server: , request: "POST /admin-panel/rest/update HTTP/1.1", host: "localhost"
2019/06/23 06:31:56 [error] 6#6: *1 auth request unexpected status: 404 while sending to client, client: 192.168.224.1, server: , request: "POST /admin-panel/rest/update HTTP/1.1", host: "localhost"
2019/06/23 06:31:57 [error] 6#6: *3 auth request unexpected status: 404 while sending to client, client: 192.168.224.1, server: , request: "POST /admin-panel/rest/update HTTP/1.1", host: "localhost"
When I remove the ?url=$request_uri part of auth_request, everything works fine.
By using lua-nginx-module (or the OpenResty Docker image), you can use access_by_lua_block instead of auth_request, like this:
location ~* ^/admin-panel/rest/(.*) {
    access_by_lua_block {
        local res = ngx.location.capture("/admin/admin_authentication/check_access?url=" .. ngx.var.request_uri)
        if res.status == ngx.HTTP_OK then
            return
        end
        if res.status == ngx.HTTP_FORBIDDEN then
            ngx.exit(res.status)
        end
        ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
    }
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    resolver 127.0.0.11 ipv6=off;
    proxy_pass http://nginx:8000/$1$is_args$args;
}
This code reimplements auth_request with access_by_lua_block and builds the URL with the Lua concatenation operator (..).
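The 404s are consistent with auth_request not expanding variables in its URI argument, so the literal string ?url=$request_uri reaches the auth service. If Lua is not an option, another commonly used pattern (a sketch, assuming the authentication service can read the original URI from a request header such as X-Original-URI instead of a query parameter, and that check_access is reachable through the same upstream as in the question) is to point auth_request at a fixed internal location and pass the URI in a header:
location ~* ^/admin-panel/rest/(.*) {
    auth_request /_check_access; # fixed URI, no variables needed here
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    resolver 127.0.0.11 ipv6=off;
    proxy_pass http://nginx:8000/$1$is_args$args;
}

location = /_check_access {
    internal;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    # the auth service reads the original URI from this header instead of ?url=
    proxy_set_header X-Original-URI $request_uri;
    resolver 127.0.0.11 ipv6=off;
    proxy_pass http://nginx:8000/admin/admin_authentication/check_access;
}
The names /_check_access and X-Original-URI are illustrative; the auth service would need to be adapted to read the header.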

Docker + Nginx: Getting proxy_pass to work

I'm having a problem trying to get Nginx to proxy a path to another server that is also running in Docker.
To illustrate, I'm using Nexus server as an example.
This is my first attempt...
docker-compose.yml:-
version: '2'
services:
  nexus:
    image: "sonatype/nexus3"
    ports:
      - "8081:8081"
    volumes:
      - ./nexus:/nexus-data
  nginx:
    image: "nginx"
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
nginx.conf:-
worker_processes 4;
events { worker_connections 1024; }
http {
    server {
        listen 80;
        location /nexus/ {
            proxy_pass http://localhost:8081/;
        }
    }
}
When I hit http://localhost/nexus/, I get 502 Bad Gateway with the following log:-
nginx_1 | 2017/05/29 02:20:50 [error] 7#7: *4 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: , request: "GET /nexus/ HTTP/1.1", upstream: "http://[::1]:8081/", host: "localhost"
nginx_1 | 2017/05/29 02:20:50 [error] 7#7: *4 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: , request: "GET /nexus/ HTTP/1.1", upstream: "http://127.0.0.1:8081/", host: "localhost"
nginx_1 | 172.18.0.1 - - [29/May/2017:02:20:50 +0000] "GET /nexus/ HTTP/1.1" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36"
In my second attempt...
docker-compose.yml - I added links to the nginx service definition:-
version: '2'
services:
  nexus:
    image: "sonatype/nexus3"
    ports:
      - "8081:8081"
    volumes:
      - ./nexus:/nexus-data
  nginx:
    image: "nginx"
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    links:
      - nexus:nexus
nginx.conf... Instead of using http://localhost:8081/, I use http://nexus:8081/:-
worker_processes 4;
events { worker_connections 1024; }
http {
    server {
        listen 80;
        location /nexus/ {
            proxy_pass http://nexus:8081/;
        }
    }
}
Now, when I hit http://localhost/nexus/, it gets proxied properly, but the web content is only partially rendered. Inspecting the HTML source of that page, the JavaScript, stylesheet and image links point to http://nexus:8081/[path]... hence 404.
What should I change to get this to work properly?
Thank you very much.
The following additional options are what I have used
http {
    server {
        listen 80;
        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            server_name_in_redirect on;
            proxy_pass http://nexus:8081;
        }
        location /nexus/ {
            proxy_pass http://nexus:8081/;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            server_name_in_redirect on;
        }
    }
}
My solution is to include the redirect for the '/' path in the nginx config. The Nexus app will be making requests to '/' for its resources, which otherwise will not work.
However, this is not ideal and will not work with an Nginx configuration serving multiple apps.
The docs cover this configuration and indicate that you need to configure Nexus to serve on /nexus. This would enable you to configure Nginx as follows (from the docs), minus the hack above:
location /nexus {
    proxy_pass http://localhost:8081/nexus;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
I would recommend using that configuration.
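Note that inside the nginx container, localhost will not reach the Nexus container (that is exactly what the "connection refused" errors in the first attempt showed), so with the docker-compose setup above the documented snippet needs the compose service name instead. A sketch under that assumption, which also presumes Nexus has been configured to serve under the /nexus context path:
location /nexus {
    # use the docker-compose service name, not localhost, when nginx runs in its own container
    proxy_pass http://nexus:8081/nexus;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}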

nginx not caching response from reverse proxy

http://nginx.org/en/docs/http/ngx_http_memcached_module.html
Basic config is here:
worker_processes 2;
events {
    worker_connections 1024;
}
error_log /var/log/nginx/nginx_error.log warn;
error_log /var/log/nginx/nginx_error.log info;
http {
    upstream backend {
        server localhost:3000;
    }
    server {
        listen 80;
        location / {
            set $memcached_key $uri;
            memcached_pass 127.0.0.1:11211;
            error_page 404 = @fallback;
        }
        location @fallback {
            proxy_pass http://backend;
        }
    }
}
It reverse-proxies the request when hitting port 80, but the logs always say:
2016/08/23 15:25:19 [info] 68964#0: *4 key: "/users/12" was not found by memcached while reading response header from upstream, client: 127.0.0.1, server: , request: "GET /users/12 HTTP/1.1", upstream: "memcached://127.0.0.1:11211", host: "localhost"
The Nginx memcached module does not write to the Memcached server. You should do that in your backend (for example PHP), storing the response under the same key that nginx computes for $memcached_key (here, $uri).
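If the goal is simply to have nginx cache the upstream responses itself, rather than have the backend populate memcached, the stock proxy_cache directives are an alternative. A minimal sketch, with the zone name and cache path chosen here purely for illustration:
http {
    # cache metadata kept in the "appcache" shared zone; response bodies stored on disk
    proxy_cache_path /var/cache/nginx keys_zone=appcache:10m max_size=100m;

    upstream backend {
        server localhost:3000;
    }

    server {
        listen 80;

        location / {
            proxy_cache appcache;
            proxy_cache_valid 200 10m; # cache successful responses for 10 minutes
            proxy_pass http://backend;
        }
    }
}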
