nginx proxy_pass over https_proxy - nginx

I am trying to set up nginx with the config below. To access backend.mygreat.server.com I have to go through my corporate proxy, myproxy.server.com:80.
Hence, I have added this to /etc/environment:
https_proxy=myproxy.server.com:80
Yet nginx is unable to reach https://backend.mygreat.server.com:443; I see a 504 HTTP status in the nginx logs.
I can load the page with wget or curl (the request goes via the corporate proxy).
server {
    listen 443;
    server_name mygreat.server.com;

    ssl on;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH:EDH+aRSA:!aNULL:!eNULL:!LOW:!RC4:!3DES:!MD5:!EXP:!PSK:!SRP:!SEED:!DSS:!CAMELLIA;
    ssl_certificate /etc/nginx/ssl/mygreat.server.com.pem;
    ssl_certificate_key /etc/nginx/ssl/mygreat.server.com.key;

    access_log /var/log/nginx/access.ssl.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host-Real-IP $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-Pcol http;
        proxy_intercept_errors on;
        error_page 301 302 307 = @handle_redirects;
        proxy_pass https://backend.mygreat.server.com:443;
    }

    location @handle_redirects {
        set $saved_redirect_location '$upstream_http_location';
        proxy_pass $saved_redirect_location;
    }
}
Any help is greatly appreciated.
Thanks
Update:
Here is a sample error log from nginx:
2017/10/18 06:55:51 [warn] 34604#34604: *1 upstream server temporarily disabled while connecting to upstream, client: <ip-address>, server: mygreat.server.com, request: "GET / HTTP/1.1", upstream: "https://<ip-of-backend>:443/", host: "mygreat.server.com"
If I run curl -v https://backend.mygreat.server.com/, this is the response:
* About to connect() to proxy corp-proxy.server.com port 80 (#0)
* Trying <some-ip-address>...
* Connected to corp-proxy.server.com (<ip-of-proxy>) port 80 (#0)
* Establish HTTP proxy tunnel to backend.mygreat.server.com:443
> CONNECT backend.mygreat.server.com:443 HTTP/1.1
> Host: backend.mygreat.server.com:443
> User-Agent: curl/7.29.0
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 200 Connection established
<
* Proxy replied OK to CONNECT request
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* subject: CN=backend.mygreat.server.com,OU=Technology Operations,O=MyCompany.,L=San Diego,ST=California,C=US
* start date: Mar 15 00:00:00 2017 GMT
* expire date: Mar 15 23:59:59 2020 GMT
* common name: backend.mygreat.server.com
* issuer: CN=Symantec Class 3 Secure Server CA - G4,OU=Symantec Trust Network,O=Symantec Corporation,C=US
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: backend.mygreat.server.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: openresty/1.11.2.5
< Date: Wed, 18 Oct 2017 14:03:10 GMT
< Content-Type: text/html;charset=UTF-8
< Content-Length: 5642
< Connection: keep-alive
< X-XSS-Protection: 1; mode=block
< Cache-Control: max-age=0, no-cache, no-store, must-revalidate, private
< Expires: 0
< P3P: policyref="http://backend.mygreat.server.com/w3c/p3p.xml" CP="CURa OUR STP UNI INT"
< Content-Language: en
< Set-Cookie: qboeuid=127.0.0.1.1508335390550307; path=/; expires=Thu, 18-Oct-18 14:03:10 GMT; domain=.server.com
< Set-Cookie: JSESSIONID=784529AA39C10C3DB4B0ED0D61CC8F31.c23-pe2ec23uw2apu012031; Path=/; Secure; HttpOnly
< Set-Cookie: something.blah_blah=testme; Domain=.server.com; Path=/; Secure
< Vary: Accept-Encoding
<
<!DOCTYPE html>
<html>
....
</html>

First of all, I am not sure that nginx is supposed to respect the http_proxy and https_proxy variables; I did not find any documentation saying so. So I assume your issue is that nginx is not using the proxy at all.
You therefore need to put something in front of the backend that actually uses the proxy. This is where socat comes to the rescue.
Running a socat forwarder
If you have a transparent proxy, run:
socat TCP4-LISTEN:8443,reuseaddr,fork TCP:<proxysever>:<proxyport>
And if you have a CONNECT proxy, use:
socat TCP4-LISTEN:8443,reuseaddr,fork PROXY:yourproxy:backendserver:443,proxyport=<yourproxyport>
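With the values from the question, and assuming myproxy.server.com:80 really is an HTTP CONNECT proxy (which the curl -v output in the update suggests), that would look something like:

    socat TCP4-LISTEN:8443,reuseaddr,fork \
        PROXY:myproxy.server.com:backend.mygreat.server.com:443,proxyport=80

Note that socat only sets up the CONNECT tunnel; nginx still performs TLS to the backend through it, so if the backend relies on SNI you may also need proxy_ssl_server_name on; and proxy_ssl_name backend.mygreat.server.com; in the location block.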
Then, in your nginx config, use:
location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host-Real-IP $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-Pcol http;
    proxy_intercept_errors on;
    proxy_set_header Host backend.mygreat.server.com;
    proxy_pass https://127.0.0.1:8443;
    proxy_redirect https://backend.mygreat.server.com https://mygreat.server.com;
}
You probably want to use a systemd service to launch socat, so that it starts on boot and is managed like any other service.
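A minimal unit file sketch (the file name and paths are illustrative, not from the original setup):

    # /etc/systemd/system/socat-backend-tunnel.service  (hypothetical name)
    [Unit]
    Description=socat forwarder to backend.mygreat.server.com via the corporate proxy
    After=network-online.target
    Wants=network-online.target

    [Service]
    ExecStart=/usr/bin/socat TCP4-LISTEN:8443,reuseaddr,fork PROXY:myproxy.server.com:backend.mygreat.server.com:443,proxyport=80
    Restart=always

    [Install]
    WantedBy=multi-user.target

Enable and start it with systemctl enable --now socat-backend-tunnel.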

Nginx's proxy_pass cannot go through an HTTPS proxy. An HTTP proxy can be used, but only for plain-HTTP request URLs. Here is an example:
server {
    listen 8880;
    server_name localhost;

    location / {
        rewrite ^(.*)$ "://developer.android.com$1";
        rewrite ^(.*)$ "http$1" break;
        proxy_set_header Proxy-Connection Keep-Alive;
        proxy_set_header Host developer.android.com;
        proxy_pass http://127.0.0.1:1080;
        proxy_redirect ~^https?://developer\.android\.com(.*)$ http://$host:8080$1;
    }
}
see: https://serverfault.com/a/683955/418613
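For context on how the example works: the first rewrite turns the request URI into "://developer.android.com<uri>" and the second prepends "http"; splitting it this way sidesteps nginx's rule that a rewrite replacement beginning literally with "http://" is returned to the client as a redirect. With the break flag, proxy_pass then sends that absolute URL in the request line to the local HTTP proxy at 127.0.0.1:1080, which is assumed to be a forward proxy that fetches developer.android.com on nginx's behalf. A quick smoke test, assuming such a proxy really is listening on 127.0.0.1:1080, might be:

    curl -I http://localhost:8880/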

Related

Nginx Ingress Controller cache not being hit

We are using the Nginx Ingress Controller image as described here (https://docs.nginx.com/nginx-ingress-controller/) in our Kubernetes (EKS) environment, and we are having big problems trying to implement caching.
We have a JSON-based service sitting behind our ingress controller.
The Ingress generates Nginx config that looks like this:
# configuration for dcjson-mlang25/terminology-ingress
upstream dcjson-mlang25-terminology-ingress-mlang25.test.domain-jsonserver-authoring-8080 {
    zone dcjson-mlang25-terminology-ingress-mlang25.test.domain-jsonserver-authoring-8080 256k;
    random two least_conn;
    server 10.220.2.66:8080 max_fails=1 fail_timeout=10s max_conns=0;
}

server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate /etc/nginx/secrets/dcjson-mlang25-jsonserver-tls-secret;
    ssl_certificate_key /etc/nginx/secrets/dcjson-mlang25-jsonserver-tls-secret;
    server_tokens on;
    server_name mlang25.test.domain;
    set $resource_type "ingress";
    set $resource_name "terminology-ingress";
    set $resource_namespace "dcjson-mlang25";

    if ($scheme = http) {
        return 301 https://$host:443$request_uri;
    }

    location /authoring/ {
        set $service "jsonserver-authoring";
        proxy_http_version 1.1;
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_cache_use_stale error timeout updating http_404 http_500 http_502 http_503 http_504;
        proxy_cache_revalidate on;
        proxy_set_header Connection "";
        proxy_hide_header 'Access-Control-Allow-Origin';
        proxy_hide_header 'Access-Control-Allow-Methods';
        proxy_hide_header 'Access-Control-Allow-Headers';
        proxy_hide_header 'Access-Control-Expose-Headers';
        proxy_hide_header 'Access-Control-Allow-Credentials';
        add_header 'Access-Control-Allow-Origin' '*' always;
        add_header 'Access-Control-Allow-Methods' 'PUT, GET, POST, DELETE, OPTIONS' always;
        add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization,AcceptX-FHIR-Starter,Origin,Accept,X-Requested-With,Content-Type,Access-Control-Request-Method,Access-Control-Request-Headers,Authorization,Prefer,Pragma,If-Match,If-None-Match' always;
        add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;
        add_header 'Access-Control-Allow-Credentials' 'true';
        add_header X-Cache-Status $upstream_cache_status;
        proxy_connect_timeout 60s;
        proxy_read_timeout 1800s;
        proxy_send_timeout 1800s;
        client_max_body_size 4096m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering on;
        proxy_buffers 4 256k;
        proxy_buffer_size 128k;
        proxy_max_temp_file_size 4096m;
        proxy_pass http://dcjson-mlang25-terminology-ingress-mlang25.test.domain-jsonserver-authoring-8080/;
    }
}
The Nginx.conf file itself declares the cache like so:
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=STATIC:32m inactive=24h max_size=10g;
    proxy_cache_key $scheme$proxy_host$request_uri;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    map $upstream_trailer_grpc_status $grpc_status {
        default $upstream_trailer_grpc_status;
        '' $sent_http_grpc_status;
    }
    ** snipped **
}
The backend app does not return any Set-Cookie headers (which I know would prevent caching), so it's not that.
When placing a simple GET request, I see this in the nginx logs:
2023/02/07 20:46:49 [debug] 416#416: *171 http script var: "https"
2023/02/07 20:46:49 [debug] 416#416: *171 http script var: "dcjson-mlang25-terminology-ingress-mlang25.test.domain-jsonserver-authoring-8080"
2023/02/07 20:46:49 [debug] 416#416: *171 http script var: "/authoring/fhir/CodeSystem/genenames.geneId-small"
2023/02/07 20:46:49 [debug] 416#416: *171 http cache key: "httpsdcjson-mlang25-terminology-ingress-mlang25.test.domain-jsonserver-authoring-8080/authoring/fhir/CodeSystem/genenames.geneId-small"
2023/02/07 20:46:49 [debug] 416#416: *171 add cleanup: 000055C5DDA4ED00
2023/02/07 20:46:49 [debug] 416#416: shmtx lock
2023/02/07 20:46:49 [debug] 416#416: slab alloc: 120 slot: 4
2023/02/07 20:46:49 [debug] 416#416: slab alloc: 00007FECD6324080
2023/02/07 20:46:49 [debug] 416#416: shmtx unlock
2023/02/07 20:46:49 [debug] 416#416: *171 http file cache exists: -5 e:0
2023/02/07 20:46:49 [debug] 416#416: *171 cache file: "/tmp/nginx_cache/8/b4/9ac307cbf4540372616c09cd894b9b48"
Repeated requests seconds later look exactly the same.
To my eyes, this says the cache is never hit.
Every set of response headers looks something like this, with the status always MISS:
2023/02/07 20:46:49 [debug] 416#416: *171 HTTP/1.1 200
Server: nginx/1.23.2
Date: Tue, 07 Feb 2023 20:46:49 GMT
Content-Type: application/fhir+json;charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
X-Request-Id: sJ4yXmP1ziSF3fJt
Cache-Control: no-cache
Vary: Accept,Origin,Accept-Encoding,Accept-Language,Authorization
X-Powered-By: HAPI FHIR 6.0.0 REST Server (FHIR Server; FHIR 4.0.1/R4)
ETag: W/"1"
Content-Location: https://mlang25.test.domain/authoring/fhir/CodeSystem/genenames.geneId-small/_history/1
Last-Modified: Tue, 07 Feb 2023 20:08:35 GMT
Content-Encoding: gzip
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Strict-Transport-Security: max-age=31536000 ; includeSubDomains
X-Frame-Options: DENY
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: PUT, GET, POST, DELETE, OPTIONS
Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization,AcceptX-FHIR-Starter,Origin,Accept,X-Requested-With,Content-Type,Access-Control-Request-Method,Access-Control-Request-Headers,Authorization,Prefer,Pragma,If-Match,If-None-Match
Access-Control-Expose-Headers: Content-Length,Content-Range
Access-Control-Allow-Credentials: true
X-Cache-Status: MISS
I am really struggling to work out why the cache is never being hit.
For anyone who stumbles across this: a third party had made a change to our backend, and it had started returning Cache-Control: no-cache, which means nginx will never cache the result.
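If caching is wanted even though the backend sends that header, nginx can be told to ignore it. A minimal sketch of the relevant directives, assuming you can inject them into the generated location block (e.g. via the controller's snippet mechanism) and that overriding the backend's caching policy is acceptable:

    location /authoring/ {
        # ... existing proxy/cache settings from the generated config ...
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;

        # Ignore the backend's Cache-Control and Expires headers so that
        # proxy_cache_valid decides cacheability instead
        proxy_ignore_headers Cache-Control Expires;

        proxy_pass http://dcjson-mlang25-terminology-ingress-mlang25.test.domain-jsonserver-authoring-8080/;
    }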

Nginx doesn't pass response headers of NodeJS Server

I have trouble configuring my nginx reverse proxy. As it stands the requests look like this:
root@devserver:~# curl -I https://example.com
HTTP/2 401
server: nginx
date: Fri, 15 Oct 2021 11:42:00 GMT
content-type: text/html; charset=utf-8
content-length: 172
www-authenticate: Basic realm="please login"
x-xss-protection: 1; mode=block
x-content-type-options: nosniff
referrer-policy: no-referrer-when-downgrade
content-security-policy: default-src 'self' http: https: data: blob: 'unsafe-inline'; frame-ancestors 'self';
permissions-policy: interest-cohort=()
strict-transport-security: max-age=31536000; includeSubDomains
root@devserver:~# curl -I http://127.0.0.1:5000
HTTP/1.1 500 Internal Server Error
X-Powered-By: Express
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PATCH, PUT, DELETE, OPTIONS
Access-Control-Allow-Headers: *
Content-Security-Policy: default-src 'none'
X-Content-Type-Options: nosniff
Content-Type: text/html; charset=utf-8
Content-Length: 1386
Date: Fri, 15 Oct 2021 11:42:52 GMT
Connection: keep-alive
Keep-Alive: timeout=5
The Access-Control-* headers are missing, and I cannot figure out how to configure nginx to pass them to the browser.
My nginx configuration currently looks something like this:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com;
    root /var/www/example.com;

    # SSL
    ...

    # reverse proxy
    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_http_version 1.1;
        proxy_cache_bypass $http_upgrade;

        # Proxy headers
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Forwarded $proxy_add_forwarded;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;

        # Proxy timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}
Thanks for the help in advance :)
location / {
    proxy_pass https://127.0.0.1:80;
    proxy_set_header Host $host;
    proxy_hide_header X-Frame-Options;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Accept-Encoding "";
    client_body_timeout 3000;
    fastcgi_read_timeout 3000;
    client_max_body_size 128m;
    fastcgi_buffers 8 128k;
    fastcgi_buffer_size 128k;
}
Please use this and try.
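For what it's worth, nginx forwards most upstream response headers unchanged, and the 401 with www-authenticate: Basic realm="please login" in the first curl output looks like it comes from nginx itself (e.g. auth_basic), so the request may never reach Express at all. If the immediate goal is just to make sure the Access-Control-* headers reach the browser, one hedged workaround sketch is to set them at the nginx layer inside the existing location / block (values copied from the Express response above; the always parameter makes nginx add them to error responses too):

    location / {
        proxy_pass http://127.0.0.1:5000;
        # ... existing proxy_set_header lines ...

        # Re-add the CORS headers at the proxy layer
        add_header Access-Control-Allow-Origin "*" always;
        add_header Access-Control-Allow-Methods "GET, POST, PATCH, PUT, DELETE, OPTIONS" always;
        add_header Access-Control-Allow-Headers "*" always;
    }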

Localhost Nginx raises no-referrer-when-downgrade in browser

I have HTTP cloud functions running locally and I want to hide them behind a reverse proxy to reproduce the same architecture I have in production.
When I call the endpoint directly, it is fine. But when I use my React application, it does not work, and I have not managed to fix the issue.
Can you help me, please?
Nginx conf:
server {
    listen 9003;
    #add_header 'Referrer-Policy' 'no-referrer';
    add_header 'Referrer-Policy' 'unsafe-url';

    location /api/callImportEvent/ {
        add_header 'Referrer-Policy' 'unsafe-url';

        if ($request_method = OPTIONS) {
            add_header "Access-Control-Allow-Origin" *;
            add_header "Access-Control-Allow-Methods" "GET, POST, OPTIONS, HEAD";
            add_header "Access-Control-Allow-Headers" "Authorization, Origin, X-Requested-With, Content-Type, Accept";
            return 200;
        }

        proxy_pass http://192.168.99.1:8888/;
        proxy_redirect off;
        proxy_pass_header Authorization;
        proxy_pass_header x-api-key;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Cloud function command:
$(gcloud beta emulators pubsub env-init) && functions-framework --target=callImportEventHttp --signature-type=http --source=./index.js --port=8888
Direct curl (working):
rbarbu@DESKTOP-7EQNOFM:~$ curl -v 'http://localhost:9003/api/callImport/'
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 9003 (#0)
> GET /api/callImport/ HTTP/1.1
> Host: localhost:9003
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 456 unknown
< Server: nginx/1.14.0 (Ubuntu)
< Date: Mon, 07 Sep 2020 15:37:16 GMT
< Content-Type: application/json; charset=utf-8
< Content-Length: 61
< Connection: keep-alive
< X-Powered-By: Express
< ETag: W/"3d-kbNXdE/Nx2cqqnUsIdQHlgSM74w"
<
* Connection #0 to host localhost left intact
{"status":"Unrecoverable Error","message":"No API KEY given"}
Call from React app (not working): [screenshot]
Direct call to the nginx URL from the browser (working): [screenshot]

Nginx - Reverse proxy - 404

I receive a 404 error when calling the URL http://10.240.0.133/swagger. Below is a snippet of the nginx.conf file. I need to append index.html to the end of the URI, so I placed a rewrite rule.
server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    server_name 10.240.0.133;
    server_name 127.0.0.1;

    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;

    access_log /var/log/nginx/resources-reverse-access.log;
    error_log /var/log/nginx/resources-reverse-error.log;

    location /swagger {
        rewrite ^/swagger/index.html break;
        proxy_pass http://52.177.131.103:8082/;
    }
}
When I visit the URL with curl -v http://10.240.0.133/swagger, a 404 is returned:
* Trying 10.240.0.133...
* TCP_NODELAY set
* Connected to 10.240.0.133 (10.240.0.133) port 80 (#0)
> GET /swagger HTTP/1.1
> Host: 10.240.0.133
> User-Agent: curl/7.55.1
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: nginx/1.14.0 (Ubuntu)
< Date: Wed, 18 Mar 2020 14:41:50 GMT
< Content-Length: 0
< Connection: keep-alive
<
* Connection #0 to host 10.240.0.133 left intact
I believe your rewrite rule is incorrect. It should look more like this.
location /swagger {
    rewrite ^\/swagger\/?.*?$ /swagger/index.html break;
    proxy_pass http://52.177.131.103:8082/;
}
But I believe this is still not correct, since you have not set a root directive for this server.

nginx location header rewrite using proxy_redirect directive

I am running nginx on Windows as a reverse proxy with the nginx.conf below:
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 8082;
        server_name localhost;

        location / {
            proxy_pass http://192.168.12.211:8082;
            proxy_redirect http://192.168.12.211/ http://localhost:8080/;
            proxy_set_header Host $host;
        }
    }
}
Here is the curl output:
c:\curl>curl -I http://localhost:8082
HTTP/1.1 303 See Other
Server: nginx/1.9.9
Date: Wed, 20 Jan 2016 10:30:38 GMT
Content-Type: text/html
Connection: keep-alive
Access-Control-Allow-Origin: *
location: http://192.168.12.211:8080/test.htm?Id=12345678
I want the "location" header received in the response to be rewritten as shown in the proxy_redirect directive in the nginx.conf file. Basically
location: http://192.168.12.211:8080/test.htm?Id=12345678
must be rewritten as
location: http://localhost:8080/test.htm?Id=12345678
What am I missing here in the nginx configuration? Any hints appreciated.
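One hedged guess at the cause: proxy_redirect rewrites the Location header by matching the given redirect string against the start of the header value. The upstream's Location begins with http://192.168.12.211:8080/ (port included), while the directive only matches http://192.168.12.211/ without the port, so nothing is replaced. A sketch of the adjusted location block:

    location / {
        proxy_pass http://192.168.12.211:8082;
        # Match the Location value the upstream actually sends, including its port
        proxy_redirect http://192.168.12.211:8080/ http://localhost:8080/;
        proxy_set_header Host $host;
    }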