Nginx doesn't pass response headers of NodeJS Server - nginx

I have trouble configuring my nginx reverse proxy. As it stands the requests look like this:
root@devserver:~# curl -I https://example.com
HTTP/2 401
server: nginx
date: Fri, 15 Oct 2021 11:42:00 GMT
content-type: text/html; charset=utf-8
content-length: 172
www-authenticate: Basic realm="please login"
x-xss-protection: 1; mode=block
x-content-type-options: nosniff
referrer-policy: no-referrer-when-downgrade
content-security-policy: default-src 'self' http: https: data: blob: 'unsafe-inline'; frame-ancestors 'self';
permissions-policy: interest-cohort=()
strict-transport-security: max-age=31536000; includeSubDomains
root@devserver:~# curl -I http://127.0.0.1:5000
HTTP/1.1 500 Internal Server Error
X-Powered-By: Express
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PATCH, PUT, DELETE, OPTIONS
Access-Control-Allow-Headers: *
Content-Security-Policy: default-src 'none'
X-Content-Type-Options: nosniff
Content-Type: text/html; charset=utf-8
Content-Length: 1386
Date: Fri, 15 Oct 2021 11:42:52 GMT
Connection: keep-alive
Keep-Alive: timeout=5
The Access-Control headers are missing from the proxied response, and I cannot figure out how to configure nginx to pass them through to the browser.
My nginx configuration currently looks something like this:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com;
    root /var/www/example.com;

    # SSL
    ...

    # reverse proxy
    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_http_version 1.1;
        proxy_cache_bypass $http_upgrade;

        # Proxy headers
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Forwarded $proxy_add_forwarded;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;

        # Proxy timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}
Thanks for the help in advance :)

location / {
    proxy_pass https://127.0.0.1:80;
    proxy_set_header Host $host;
    proxy_hide_header X-Frame-Options;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Accept-Encoding "";
    client_body_timeout 3000;
    fastcgi_read_timeout 3000;
    client_max_body_size 128m;
    fastcgi_buffers 8 128k;
    fastcgi_buffer_size 128k;
}
Please try this configuration.
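One more thing worth checking (an assumption, since the auth setup isn't shown in the question): the proxied response is a 401 with WWW-Authenticate: Basic realm="please login", which suggests nginx itself generates it (e.g. via auth_basic) before the request ever reaches the Node server — so the backend's CORS headers never get a chance to appear. Headers can be attached to nginx-generated responses, including error codes, with add_header ... always:

```nginx
location / {
    proxy_pass http://127.0.0.1:5000;
    # "always" attaches the header to every response code, including
    # 401s generated by nginx itself rather than by the backend
    add_header Access-Control-Allow-Origin "*" always;
    add_header Access-Control-Allow-Methods "GET, POST, PATCH, PUT, DELETE, OPTIONS" always;
    add_header Access-Control-Allow-Headers "*" always;
}
```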

Related

Nginx Ingress Controller cache not being hit

We are using the Nginx Ingress Controller image as described here (https://docs.nginx.com/nginx-ingress-controller/) in our Kubernetes (EKS) environment, and we are having big problems trying to implement caching.
We have a JSON-based service sitting behind our ingress controller.
The Ingress generates Nginx config that looks like this:
# configuration for dcjson-mlang25/terminology-ingress
upstream dcjson-mlang25-terminology-ingress-mlang25.test.domain-jsonserver-authoring-8080 {
    zone dcjson-mlang25-terminology-ingress-mlang25.test.domain-jsonserver-authoring-8080 256k;
    random two least_conn;
    server 10.220.2.66:8080 max_fails=1 fail_timeout=10s max_conns=0;
}
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate /etc/nginx/secrets/dcjson-mlang25-jsonserver-tls-secret;
    ssl_certificate_key /etc/nginx/secrets/dcjson-mlang25-jsonserver-tls-secret;
    server_tokens on;
    server_name mlang25.test.domain;
    set $resource_type "ingress";
    set $resource_name "terminology-ingress";
    set $resource_namespace "dcjson-mlang25";
    if ($scheme = http) {
        return 301 https://$host:443$request_uri;
    }
    location /authoring/ {
        set $service "jsonserver-authoring";
        proxy_http_version 1.1;
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_cache_use_stale error timeout updating http_404 http_500 http_502 http_503 http_504;
        proxy_cache_revalidate on;
        proxy_set_header Connection "";
        proxy_hide_header 'Access-Control-Allow-Origin';
        proxy_hide_header 'Access-Control-Allow-Methods';
        proxy_hide_header 'Access-Control-Allow-Headers';
        proxy_hide_header 'Access-Control-Expose-Headers';
        proxy_hide_header 'Access-Control-Allow-Credentials';
        add_header 'Access-Control-Allow-Origin' '*' always;
        add_header 'Access-Control-Allow-Methods' 'PUT, GET, POST, DELETE, OPTIONS' always;
        add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization,AcceptX-FHIR-Starter,Origin,Accept,X-Requested-With,Content-Type,Access-Control-Request-Method,Access-Control-Request-Headers,Authorization,Prefer,Pragma,If-Match,If-None-Match' always;
        add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;
        add_header 'Access-Control-Allow-Credentials' 'true';
        add_header X-Cache-Status $upstream_cache_status;
        proxy_connect_timeout 60s;
        proxy_read_timeout 1800s;
        proxy_send_timeout 1800s;
        client_max_body_size 4096m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering on;
        proxy_buffers 4 256k;
        proxy_buffer_size 128k;
        proxy_max_temp_file_size 4096m;
        proxy_pass http://dcjson-mlang25-terminology-ingress-mlang25.test.domain-jsonserver-authoring-8080/;
    }
}
The Nginx.conf file itself declares the cache like so:
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=STATIC:32m inactive=24h max_size=10g;
    proxy_cache_key $scheme$proxy_host$request_uri;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    map $upstream_trailer_grpc_status $grpc_status {
        default $upstream_trailer_grpc_status;
        '' $sent_http_grpc_status;
    }
    ** snipped **
}
The backend app does not return any Set-Cookie headers (which I know can prevent caching), so it's not that.
When placing a simple GET request I see this in Nginx Logs
2023/02/07 20:46:49 [debug] 416#416: *171 http script var: "https"
2023/02/07 20:46:49 [debug] 416#416: *171 http script var: "dcjson-mlang25-terminology-ingress-mlang25.test.domain-jsonserver-authoring-8080"
2023/02/07 20:46:49 [debug] 416#416: *171 http script var: "/authoring/fhir/CodeSystem/genenames.geneId-small"
2023/02/07 20:46:49 [debug] 416#416: *171 http cache key: "httpsdcjson-mlang25-terminology-ingress-mlang25.test.domain-jsonserver-authoring-8080/authoring/fhir/CodeSystem/genenames.geneId-small"
2023/02/07 20:46:49 [debug] 416#416: *171 add cleanup: 000055C5DDA4ED00
2023/02/07 20:46:49 [debug] 416#416: shmtx lock
2023/02/07 20:46:49 [debug] 416#416: slab alloc: 120 slot: 4
2023/02/07 20:46:49 [debug] 416#416: slab alloc: 00007FECD6324080
2023/02/07 20:46:49 [debug] 416#416: shmtx unlock
2023/02/07 20:46:49 [debug] 416#416: *171 http file cache exists: -5 e:0
2023/02/07 20:46:49 [debug] 416#416: *171 cache file: "/tmp/nginx_cache/8/b4/9ac307cbf4540372616c09cd894b9b48"
Repeated requests seconds later look exactly the same.
To my eyes, this says the cache is never hit.
Every response header set looks something like this, with X-Cache-Status always MISS:
2023/02/07 20:46:49 [debug] 416#416: *171 HTTP/1.1 200
Server: nginx/1.23.2
Date: Tue, 07 Feb 2023 20:46:49 GMT
Content-Type: application/fhir+json;charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
X-Request-Id: sJ4yXmP1ziSF3fJt
Cache-Control: no-cache
Vary: Accept,Origin,Accept-Encoding,Accept-Language,Authorization
X-Powered-By: HAPI FHIR 6.0.0 REST Server (FHIR Server; FHIR 4.0.1/R4)
ETag: W/"1"
Content-Location: https://mlang25.test.domain/authoring/fhir/CodeSystem/genenames.geneId-small/_history/1
Last-Modified: Tue, 07 Feb 2023 20:08:35 GMT
Content-Encoding: gzip
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Strict-Transport-Security: max-age=31536000 ; includeSubDomains
X-Frame-Options: DENY
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: PUT, GET, POST, DELETE, OPTIONS
Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization,AcceptX-FHIR-Starter,Origin,Accept,X-Requested-With,Content-Type,Access-Control-Request-Method,Access-Control-Request-Headers,Authorization,Prefer,Pragma,If-Match,If-None-Match
Access-Control-Expose-Headers: Content-Length,Content-Range
Access-Control-Allow-Credentials: true
X-Cache-Status: MISS
I am really struggling to work out why the cache is never being hit.
For anyone who stumbles across this: a third-party change caused our backend to start returning Cache-Control: no-cache, which means nginx will never cache the result.
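If caching these responses despite the backend's Cache-Control: no-cache is actually desired, nginx can be told to ignore that header (a sketch only — whether overriding the backend's caching policy is safe depends on the application):

```nginx
location /authoring/ {
    proxy_cache STATIC;
    # Ignore the upstream's caching headers so responses are cached anyway;
    # proxy_cache_valid then controls the lifetime
    proxy_ignore_headers Cache-Control Expires;
    proxy_cache_valid 200 1d;
    # ... rest of the location block as above ...
}
```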

How do I handle redirects when using nginx proxy pass?

I have the following two docker containers. The problem I'm having is that when accessing GitLab it returns a 302. I used Insomnia and this is the timeline:
> GET /gitlab HTTP/1.1
> Host: 192.168.158.150
> User-Agent: insomnia/2022.3.0
> Cookie: _gitlab_session=f8989ed639173dad0d881a284165e03d
> Accept: */*
* STATE: DO => DID handle 0x7fe2508f5208; line 2077 (connection #141)
* STATE: DID => PERFORMING handle 0x7fe2508f5208; line 2196 (connection #141)
* Mark bundle as not supporting multiuse
* HTTP 1.1 or later with persistent connection
< HTTP/1.1 302 Found
< Server: nginx/1.21.6
< Date: Mon, 16 May 2022 14:54:15 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 102
< Connection: keep-alive
< Cache-Control: no-cache
< Content-Security-Policy:
< Location: http://192.168.158.150/users/sign_in
< Permissions-Policy: interest-cohort=()
< Pragma: no-cache
It appears the redirect is dropping the port and instead of redirecting to http://192.168.158.150:92/users/sign_in it redirects to http://192.168.158.150/users/sign_in. Does anyone have any ideas on how I can deal with this?
This is my nginx.conf:
server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    location / {
        # Redirects to docker container 1
        set $upstream_app "192.168.158.150";
        set $upstream_port '90';
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Referer $http_referer;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
    location /gitlab {
        # Redirects to gitlab docker container
        set $upstream_app "192.168.158.150";
        set $upstream_port '92';
        set $upstream_proto http;
        proxy_pass "http://192.168.158.150:92";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Referer $http_referer;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Setting the external_url to match the nginx.conf resolved the problem.
gitlab.rb:
external_url "http://192.168.158.150:92/gitlab"
nginx.conf:
location /gitlab {
    # Redirects to gitlab docker container
    set $upstream_app "192.168.158.150";
    set $upstream_port '92';
    set $upstream_proto http;
    proxy_pass "http://192.168.158.150:92";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Referer $http_referer;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
}
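An alternative sketch, in case changing external_url were not an option: proxy_redirect can rewrite the Location header of upstream 3xx responses so the dropped port is restored (addresses taken from the question; untested):

```nginx
location /gitlab {
    proxy_pass "http://192.168.158.150:92";
    # Rewrite Location headers that dropped the port back to port 92
    proxy_redirect http://192.168.158.150/ http://192.168.158.150:92/;
    proxy_set_header Host $host;
}
```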

Nginx normalize Accept-Encoding for better cache hit ratio

I have a problem normalizing Accept-Encoding. During tests the express server receives the correctly normalized Accept-Encoding from nginx, but nginx doesn't seem to respect "Vary: Accept-Encoding": when I send requests with e.g. "Accept-Encoding: gzip, br" and "Accept-Encoding: gzip", the express server gets 2 requests where it should only get 1.
nginx
proxy_cache_path /tmp/express_cache keys_zone=express_cache:10m levels=1:2 inactive=600s max_size=100m;
map $http_accept_encoding $pwa_normalized_encoding {
    default "";
    "~*gzip" "gzip";
}
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    set $pass_access_scheme $scheme;
    set $pass_server_port $server_port;
    set $best_http_host $http_host;
    set $pass_port $pass_server_port;
    location /express {
        proxy_set_header Host $best_http_host;
        proxy_set_header X-Forwarded-Host $best_http_host;
        proxy_set_header X-Forwarded-Port $pass_port;
        proxy_set_header X-Forwarded-Proto $pass_access_scheme;
        proxy_set_header X-Original-URI $request_uri;
        proxy_set_header X-Scheme $pass_access_scheme;
        proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
        proxy_cache express_cache;
        proxy_cache_lock on;
        proxy_cache_background_update on;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_min_uses 1;
        proxy_set_header Accept-Encoding $pwa_normalized_encoding;
        add_header X-Cache-Status $upstream_cache_status;
        proxy_pass http://localhost:5000/;
    }
}
express server
app.get('/', (req, res) => {
    console.log("encoding:", new Date().toISOString(), req.get("Accept-Encoding"));
    res.setHeader("Cache-Control", "public, max-age=30");
    res.setHeader("Content-Type", "application/json");
    res.setHeader("Vary", "Accept-Encoding");
    setTimeout(() => {
        res.json({ date: new Date().toISOString() });
    }, 300);
});
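One likely cause (an assumption — I can't verify it without the full setup): nginx matches Vary-ed cache variants against the client's original request headers, not the value rewritten by proxy_set_header, so "gzip, br" and "gzip" are stored as two separate variants. A common workaround is to ignore the upstream's Vary for caching and key the cache on the normalized value instead:

```nginx
# Inside the /express location: key the cache on the normalized encoding
# and ignore the upstream's Vary header when deciding on variants
proxy_cache_key $scheme$proxy_host$request_uri$pwa_normalized_encoding;
proxy_ignore_headers Vary;
```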

nginx websocket proxy: using defined location

I'm trying to set up a websocket proxy for a defined location (/ws) and it doesn't work, but using the root location (/) works for me.
works:
server {
    listen 80;
    server_name localhost;
    location / {
        proxy_pass http://echo.websocket.org;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_buffering off;
    }
}
doesn't work:
server {
    listen 80;
    server_name localhost;
    location /ws {
        proxy_pass http://echo.websocket.org;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_buffering off;
    }
}
I also inspected the server response:
curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" -H "Host: localhost:8888" -H "Origin: http://localhost:8888/ws" http://localhost:8888/ws
HTTP/1.1 404 WebSocket Upgrade Failure
Server: nginx/1.13.12
Date: Thu, 17 Jan 2019 15:15:50 GMT
Content-Type: text/html
Content-Length: 77
Connection: keep-alive
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: content-type
Access-Control-Allow-Headers: authorization
Access-Control-Allow-Headers: x-websocket-extensions
Access-Control-Allow-Headers: x-websocket-version
Access-Control-Allow-Headers: x-websocket-protocol
Access-Control-Allow-Origin: http://localhost:8888/ws
<html><head></head><body><h1>404 WebSocket Upgrade Failure</h1></body></html>
What am I doing wrong?
I think the problem here is with the path matching. First, give the websocket URL from the client side as ws://localhost:80/ws/ — note the trailing slash (/) at the end of the URL. Then configure the NGINX side this way:
server {
    listen 80;
    server_name localhost;
    location /ws/ {
        proxy_pass http://echo.websocket.org;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_buffering off;
    }
}
Note that the location must reference /ws/.
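A related detail worth noting (a sketch, not verified against echo.websocket.org): with proxy_pass http://echo.websocket.org; (no URI part), nginx forwards the full /ws/ path to the upstream, which may not serve that path — which would explain the 404. Adding a trailing slash to proxy_pass strips the matched prefix:

```nginx
location /ws/ {
    # The trailing slash makes nginx replace the matched /ws/ prefix
    # with /, so the upstream sees requests for its root path
    proxy_pass http://echo.websocket.org/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_buffering off;
}
```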

Cache some API requests in Nginx

I'm seeking advice from experts here.
We have the following scenario: a Java application running on Tomcat 7, which acts as the API server. User interface files (static HTML and CSS) are served by nginx, which acts as a reverse proxy. All API requests are passed to the API server and the rest are served by nginx directly.
We want to implement a caching mechanism: enable caching for everything, with a few exceptions. That is, we want to exclude some API requests from being cached.
Our configuration is like as shown below
server {
    listen 443 ssl;
    server_name ~^(?<subdomain>.+)\.ourdomain\.com$;
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    if ($request_method !~ ^(GET|HEAD|POST)$) {
        return 405;
    }
    open_file_cache max=1000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
    location / {
        root /var/www/html/userUI;
        location ~* \.(?:css|js)$ {
            expires 1M;
            access_log off;
            add_header Pragma public;
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }
    }
    location /server {
        proxy_pass http://upstream/server;
        proxy_set_header Host $subdomain.ourdomain.com;
        proxy_connect_timeout 600;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        send_timeout 600;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_temp_path /var/nginx/proxy_temp;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_cache sd6;
        add_header X-Proxy-Cache $upstream_cache_status;
        proxy_cache_bypass $http_cache_control;
    }
    ssl on;
    ssl_certificate /etc/nginx/ssl/ourdomain.com.bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/ourdomain.com.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    #ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA HIGH !RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
    ssl_dhparam /etc/nginx/ssl/dhparams.pem;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_prefer_server_ciphers on;
    ssl_session_timeout 24h;
    keepalive_timeout 300;
}
As shown above, we currently cache only static files located in /var/www/html/userUI.
We want to implement caching in location /server as well. This is our API server: nginx passes API requests to the Tomcat 7 (upstream) server. We want to enable caching for specific API requests only and disable it for all other requests.
We want to do the following: exclude all JSON requests from the cache, but enable caching for a few.
A request URL will be something like this:
Request URL: https://ourdomain.com/server/user/api/v7/userProfileImage/get?loginName=user1&_=1453442399073
What this URL does is get the profile image. We want to enable caching for this specific URL. So the condition we would like to use is: if the request URL contains "/userProfileImage/get", cache it; all other requests shouldn't be cached.
To achieve this we changed the settings to following
location /server {
    set $no_cache 0;
    if ($request_uri ~* "/server/user/api/v7/userProfileImage/get*") {
        set $no_cache 1;
    }
    proxy_pass http://upstream/server;
    proxy_set_header Host $subdomain.ourdomain.com;
    proxy_connect_timeout 600;
    proxy_send_timeout 600;
    proxy_read_timeout 600;
    send_timeout 600;
    proxy_buffer_size 4k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;
    proxy_temp_path /var/nginx/proxy_temp;
    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto https;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_redirect off;
    proxy_cache sd6;
    add_header X-Proxy-Cache $upstream_cache_status;
    proxy_no_cache $no_cache;
    proxy_cache_bypass $no_cache;
}
Below are the resulting HTTP responses:
General:
Request URL: https://ourdomain.com/server/common/api/v7/userProfileImage/get?loginName=user1
Request Method: GET
Status Code: 200 OK
Remote Address: 131.212.98.12:443
Response Headers:
Cache-Control: no-cache, no-store, must-revalidate
Connection: keep-alive
Content-Type: image/png;charset=UTF-8
Date: Fri, 22 Jan 2016 07:36:56 GMT
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Pragma: no-cache
Server: nginx
Transfer-Encoding: chunked
X-Proxy-Cache: MISS
Please advise us on a solution.
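Since no answer was posted, two things stand out in the attempted config, offered as a sketch rather than a definitive fix. First, the $no_cache flag looks inverted: it is set to 1 (bypass) exactly for the URL that should be cached. Second, the response shows the upstream sending Cache-Control: no-cache, no-store, must-revalidate, which prevents nginx from caching unless it is told to ignore those headers:

```nginx
location /server {
    # Default to bypassing the cache; cache only the profile-image endpoint
    set $no_cache 1;
    if ($request_uri ~* "/userProfileImage/get") {
        set $no_cache 0;
    }
    proxy_cache sd6;
    proxy_no_cache $no_cache;
    proxy_cache_bypass $no_cache;
    # The upstream sends Cache-Control: no-cache; ignore it so nginx can cache,
    # and give cached entries an explicit lifetime instead
    proxy_ignore_headers Cache-Control Expires;
    proxy_cache_valid 200 10m;
    add_header X-Proxy-Cache $upstream_cache_status;
    proxy_pass http://upstream/server;
    # ... remaining proxy_set_header / timeout directives as above ...
}
```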