Nginx Ingress Controller cache not being hit - nginx

We are using the Nginx Ingress Controller image as described here (https://docs.nginx.com/nginx-ingress-controller/) in our Kubernetes (EKS) environment, and we are having big problems trying to implement caching.
We have a JSON-based service sitting behind our ingress controller.
The Ingress generates Nginx config that looks like this:
# configuration for dcjson-mlang25/terminology-ingress
upstream dcjson-mlang25-terminology-ingress-mlang25.test.domain-jsonserver-authoring-8080 {
    zone dcjson-mlang25-terminology-ingress-mlang25.test.domain-jsonserver-authoring-8080 256k;
    random two least_conn;
    server 10.220.2.66:8080 max_fails=1 fail_timeout=10s max_conns=0;
}

server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;

    ssl_certificate /etc/nginx/secrets/dcjson-mlang25-jsonserver-tls-secret;
    ssl_certificate_key /etc/nginx/secrets/dcjson-mlang25-jsonserver-tls-secret;

    server_tokens on;

    server_name mlang25.test.domain;

    set $resource_type "ingress";
    set $resource_name "terminology-ingress";
    set $resource_namespace "dcjson-mlang25";

    if ($scheme = http) {
        return 301 https://$host:443$request_uri;
    }

    location /authoring/ {
        set $service "jsonserver-authoring";

        proxy_http_version 1.1;

        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_cache_use_stale error timeout updating http_404 http_500 http_502 http_503 http_504;
        proxy_cache_revalidate on;

        proxy_set_header Connection "";

        proxy_hide_header 'Access-Control-Allow-Origin';
        proxy_hide_header 'Access-Control-Allow-Methods';
        proxy_hide_header 'Access-Control-Allow-Headers';
        proxy_hide_header 'Access-Control-Expose-Headers';
        proxy_hide_header 'Access-Control-Allow-Credentials';

        add_header 'Access-Control-Allow-Origin' '*' always;
        add_header 'Access-Control-Allow-Methods' 'PUT, GET, POST, DELETE, OPTIONS' always;
        add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization,AcceptX-FHIR-Starter,Origin,Accept,X-Requested-With,Content-Type,Access-Control-Request-Method,Access-Control-Request-Headers,Authorization,Prefer,Pragma,If-Match,If-None-Match' always;
        add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;
        add_header 'Access-Control-Allow-Credentials' 'true';
        add_header X-Cache-Status $upstream_cache_status;

        proxy_connect_timeout 60s;
        proxy_read_timeout 1800s;
        proxy_send_timeout 1800s;
        client_max_body_size 4096m;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_buffering on;
        proxy_buffers 4 256k;
        proxy_buffer_size 128k;
        proxy_max_temp_file_size 4096m;

        proxy_pass http://dcjson-mlang25-terminology-ingress-mlang25.test.domain-jsonserver-authoring-8080/;
    }
}
The Nginx.conf file itself declares the cache like so:
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=STATIC:32m inactive=24h max_size=10g;
    proxy_cache_key $scheme$proxy_host$request_uri;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    map $upstream_trailer_grpc_status $grpc_status {
        default $upstream_trailer_grpc_status;
        '' $sent_http_grpc_status;
    }

    ** snipped **
}
The backend app does not return any Set-Cookie headers, which I know can prevent caching - so it's not that.
When placing a simple GET request, I see this in the Nginx logs:
2023/02/07 20:46:49 [debug] 416#416: *171 http script var: "https"
2023/02/07 20:46:49 [debug] 416#416: *171 http script var: "dcjson-mlang25-terminology-ingress-mlang25.test.domain-jsonserver-authoring-8080"
2023/02/07 20:46:49 [debug] 416#416: *171 http script var: "/authoring/fhir/CodeSystem/genenames.geneId-small"
2023/02/07 20:46:49 [debug] 416#416: *171 http cache key: "httpsdcjson-mlang25-terminology-ingress-mlang25.test.domain-jsonserver-authoring-8080/authoring/fhir/CodeSystem/genenames.geneId-small"
2023/02/07 20:46:49 [debug] 416#416: *171 add cleanup: 000055C5DDA4ED00
2023/02/07 20:46:49 [debug] 416#416: shmtx lock
2023/02/07 20:46:49 [debug] 416#416: slab alloc: 120 slot: 4
2023/02/07 20:46:49 [debug] 416#416: slab alloc: 00007FECD6324080
2023/02/07 20:46:49 [debug] 416#416: shmtx unlock
2023/02/07 20:46:49 [debug] 416#416: *171 http file cache exists: -5 e:0
2023/02/07 20:46:49 [debug] 416#416: *171 cache file: "/tmp/nginx_cache/8/b4/9ac307cbf4540372616c09cd894b9b48"
Repeated requests seconds later look exactly the same.
To my eyes, this is saying the cache is never hit?
Every set of response headers looks something like this, with the status always being MISS:
2023/02/07 20:46:49 [debug] 416#416: *171 HTTP/1.1 200
Server: nginx/1.23.2
Date: Tue, 07 Feb 2023 20:46:49 GMT
Content-Type: application/fhir+json;charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
X-Request-Id: sJ4yXmP1ziSF3fJt
Cache-Control: no-cache
Vary: Accept,Origin,Accept-Encoding,Accept-Language,Authorization
X-Powered-By: HAPI FHIR 6.0.0 REST Server (FHIR Server; FHIR 4.0.1/R4)
ETag: W/"1"
Content-Location: https://mlang25.test.domain/authoring/fhir/CodeSystem/genenames.geneId-small/_history/1
Last-Modified: Tue, 07 Feb 2023 20:08:35 GMT
Content-Encoding: gzip
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Strict-Transport-Security: max-age=31536000 ; includeSubDomains
X-Frame-Options: DENY
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: PUT, GET, POST, DELETE, OPTIONS
Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization,AcceptX-FHIR-Starter,Origin,Accept,X-Requested-With,Content-Type,Access-Control-Request-Method,Access-Control-Request-Headers,Authorization,Prefer,Pragma,If-Match,If-None-Match
Access-Control-Expose-Headers: Content-Length,Content-Range
Access-Control-Allow-Credentials: true
X-Cache-Status: MISS
I am really struggling to work out why the cache is never being hit.

For anyone who stumbles across this - a 3rd party had made a change to our backend, and it had started returning Cache-Control: no-cache, meaning nginx will never cache the result.
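If you hit the same situation and still need nginx to cache such responses, it should be possible to tell nginx to disregard the upstream caching headers with proxy_ignore_headers - a minimal sketch of the relevant directives (not something we ended up doing ourselves):

location /authoring/ {
    # ... existing proxy_cache / proxy_pass settings as above ...

    # Ignore upstream headers that would otherwise disable caching
    proxy_ignore_headers Cache-Control Expires;

    # Optionally keep the upstream Cache-Control header from reaching clients too
    proxy_hide_header Cache-Control;
}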

Related

Nginx doesn't pass response headers of NodeJS Server

I have trouble configuring my nginx reverse proxy. As it stands the requests look like this:
root@devserver:~# curl -I https://example.com
HTTP/2 401
server: nginx
date: Fri, 15 Oct 2021 11:42:00 GMT
content-type: text/html; charset=utf-8
content-length: 172
www-authenticate: Basic realm="please login"
x-xss-protection: 1; mode=block
x-content-type-options: nosniff
referrer-policy: no-referrer-when-downgrade
content-security-policy: default-src 'self' http: https: data: blob: 'unsafe-inline'; frame-ancestors 'self';
permissions-policy: interest-cohort=()
strict-transport-security: max-age=31536000; includeSubDomains
root@devserver:~# curl -I http://127.0.0.1:5000
HTTP/1.1 500 Internal Server Error
X-Powered-By: Express
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PATCH, PUT, DELETE, OPTIONS
Access-Control-Allow-Headers: *
Content-Security-Policy: default-src 'none'
X-Content-Type-Options: nosniff
Content-Type: text/html; charset=utf-8
Content-Length: 1386
Date: Fri, 15 Oct 2021 11:42:52 GMT
Connection: keep-alive
Keep-Alive: timeout=5
The Access-Control headers are missing, and I cannot figure out how to configure nginx to pass them to the browser.
My nginx configuration currently looks something like this:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name example.com;
    root /var/www/example.com;

    # SSL
    ...

    # reverse proxy
    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_http_version 1.1;
        proxy_cache_bypass $http_upgrade;

        # Proxy headers
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Forwarded $proxy_add_forwarded;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;

        # Proxy timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}
Thanks for the help in advance :)
location / {
    proxy_pass https://127.0.0.1:80;
    proxy_set_header Host $host;
    proxy_hide_header X-Frame-Options;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Accept-Encoding "";
    client_body_timeout 3000;
    fastcgi_read_timeout 3000;
    client_max_body_size 128m;
    fastcgi_buffers 8 128k;
    fastcgi_buffer_size 128k;
}
Please try this configuration.
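If the backend's CORS headers still do not make it through, another option (a sketch built on the question's config, not part of the answer above) is to set the CORS headers in nginx itself; add_header with the always parameter applies them to error responses such as the 401 and 500 shown above as well:

location / {
    proxy_pass http://127.0.0.1:5000;

    # Set CORS headers in nginx; "always" also adds them to 4xx/5xx responses
    add_header Access-Control-Allow-Origin '*' always;
    add_header Access-Control-Allow-Methods 'GET, POST, PATCH, PUT, DELETE, OPTIONS' always;
    add_header Access-Control-Allow-Headers '*' always;
}

Note that nginx forwards upstream response headers by default, so if the 401 in the first curl is produced by nginx itself (for example by auth_basic) rather than by the Express app, there are no backend headers to forward in the first place.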

nginx websocket proxy: using defined location

I'm trying to set up a websocket proxy for a defined location (/ws) and it doesn't work, but using the root location (/) works for me.
works:
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://echo.websocket.org;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_buffering off;
    }
}
doesn't work:
server {
    listen 80;
    server_name localhost;

    location /ws {
        proxy_pass http://echo.websocket.org;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_buffering off;
    }
}
I also inspected the server response:
curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" -H "Host: localhost:8888" -H "Origin: http://localhost:8888/ws" http://localhost:8888/ws
HTTP/1.1 404 WebSocket Upgrade Failure
Server: nginx/1.13.12
Date: Thu, 17 Jan 2019 15:15:50 GMT
Content-Type: text/html
Content-Length: 77
Connection: keep-alive
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: content-type
Access-Control-Allow-Headers: authorization
Access-Control-Allow-Headers: x-websocket-extensions
Access-Control-Allow-Headers: x-websocket-version
Access-Control-Allow-Headers: x-websocket-protocol
Access-Control-Allow-Origin: http://localhost:8888/ws
<html><head></head><body><h1>404 WebSocket Upgrade Failure</h1></body></html>
what am I doing wrong?
I think the problem here is with the pattern matching. You first have to specify the websocket URL on the client side as ws://localhost:80/ws/
Note that you have to put a slash (/) at the end of the URL, and on the NGINX server side, configure it this way:
server {
    listen 80;
    server_name localhost;

    location /ws/ {
        proxy_pass http://echo.websocket.org;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_buffering off;
    }
}
Note that the location here has to reference /ws/.
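A related point (my addition, not part of the original answer): because the proxy_pass above has no URI part, nginx forwards the original path, so the upstream receives /ws (or /ws/), which echo.websocket.org does not serve - hence the 404. If the upstream should only ever see /, a trailing slash on proxy_pass makes nginx replace the matched location prefix:

location /ws/ {
    # Trailing slash on proxy_pass: the /ws/ prefix is replaced with /,
    # so the upstream sees / instead of /ws/
    proxy_pass http://echo.websocket.org/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_buffering off;
}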

nginx proxy_pass over https_proxy

I am trying to set up nginx with this config. To access backend.mygreat.server.com I have to go through my corporate proxy, which is myproxy.server.com:80.
Hence, I have added this in /etc/environment
https_proxy=myproxy.server.com:80
Yet, nginx is unable to reach https://backend.mygreat.server.com:443 - I'm seeing a 504 HTTP status in the nginx logs.
I can load the page with wget or curl (which go via the corporate proxy).
server {
    listen 443;
    server_name mygreat.server.com;

    ssl on;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH:EDH+aRSA:!aNULL:!eNULL:!LOW:!RC4:!3DES:!MD5:!EXP:!PSK:!SRP:!SEED:!DSS:!CAMELLIA;
    ssl_certificate /etc/nginx/ssl/mygreat.server.com.pem;
    ssl_certificate_key /etc/nginx/ssl/mygreat.server.com.key;

    access_log /var/log/nginx/access.ssl.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host-Real-IP $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-Pcol http;
        proxy_intercept_errors on;
        error_page 301 302 307 = @handle_redirects;
        proxy_pass https://backend.mygreat.server.com:443;
    }

    location @handle_redirects {
        set $saved_redirect_location '$upstream_http_location';
        proxy_pass $saved_redirect_location;
    }
}
Any help is greatly appreciated.
Thanks
Update:
Here is a sample error log from nginx:
2017/10/18 06:55:51 [warn] 34604#34604: *1 upstream server temporarily disabled while connecting to upstream, client: <ip-address>, server: mygreat.server.com, request: "GET / HTTP/1.1", upstream: "https://<ip-of-backend>:443/", host: "mygreat.server.com"
If I run curl -v https://backend.mygreat.server.com/, below is the response:
* About to connect() to proxy corp-proxy.server.com port 80 (#0)
* Trying <some-ip-address>...
* Connected to corp-proxy.server.com (<ip-of-proxy>) port 80 (#0)
* Establish HTTP proxy tunnel to backend.mygreat.server.com:443
> CONNECT backend.mygreat.server.com:443 HTTP/1.1
> Host: backend.mygreat.server.com:443
> User-Agent: curl/7.29.0
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 200 Connection established
<
* Proxy replied OK to CONNECT request
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* subject: CN=backend.mygreat.server.com,OU=Technology Operations,O=MyCompany.,L=San Diego,ST=California,C=US
* start date: Mar 15 00:00:00 2017 GMT
* expire date: Mar 15 23:59:59 2020 GMT
* common name: backend.mygreat.server.com
* issuer: CN=Symantec Class 3 Secure Server CA - G4,OU=Symantec Trust Network,O=Symantec Corporation,C=US
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: backend.mygreat.server.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: openresty/1.11.2.5
< Date: Wed, 18 Oct 2017 14:03:10 GMT
< Content-Type: text/html;charset=UTF-8
< Content-Length: 5642
< Connection: keep-alive
< X-XSS-Protection: 1; mode=block
< Cache-Control: max-age=0, no-cache, no-store, must-revalidate, private
< Expires: 0
< P3P: policyref="http://backend.mygreat.server.com/w3c/p3p.xml" CP="CURa OUR STP UNI INT"
< Content-Language: en
< Set-Cookie: qboeuid=127.0.0.1.1508335390550307; path=/; expires=Thu, 18-Oct-18 14:03:10 GMT; domain=.server.com
< Set-Cookie: JSESSIONID=784529AA39C10C3DB4B0ED0D61CC8F31.c23-pe2ec23uw2apu012031; Path=/; Secure; HttpOnly
< Set-Cookie: something.blah_blah=testme; Domain=.server.com; Path=/; Secure
< Vary: Accept-Encoding
<
<!DOCTYPE html>
<html>
....
</html>
First of all, I am not sure whether nginx is supposed to respect the http_proxy and https_proxy variables - I didn't find any documentation on that. So I assume your issue is related to nginx not using the proxy at all.
You now have the option of using something that actually does use the proxy. This is where socat comes to the rescue.
Running a socat forwarder
If you have a transparent proxy, run:
socat TCP4-LISTEN:8443,reuseaddr,fork TCP:<proxyserver>:<proxyport>
And if you have a CONNECT proxy, use:
socat TCP4-LISTEN:8443,reuseaddr,fork PROXY:yourproxy:backendserver:443,proxyport=<yourproxyport>
Then in your nginx config use:
location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host-Real-IP $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-Pcol http;
    proxy_intercept_errors on;
    proxy_set_header Host backend.mygreat.server.com;
    proxy_pass https://127.0.0.1:8443;
    proxy_redirect https://backend.mygreat.server.com https://mygreat.server.com;
}
You probably want to use a systemd service to launch socat, so that it runs on startup and is managed as a service.
Nginx's proxy_pass does not support going through an HTTPS proxy.
An HTTP proxy can be used, but the request URL must then be plain http.
Here is an example:
server {
    listen 8880;
    server_name localhost;

    location / {
        rewrite ^(.*)$ "://developer.android.com$1";
        rewrite ^(.*)$ "http$1" break;
        proxy_set_header Proxy-Connection Keep-Alive;
        proxy_set_header Host developer.android.com;
        proxy_pass http://127.0.0.1:1080;
        proxy_redirect ~^https?://developer\.android\.com(.*)$ http://$host:8080$1;
    }
}
see: https://serverfault.com/a/683955/418613

Nginx return 501 when uploading large file

When I upload a 10M file to the server, Nginx returns a 501 error, but a smaller file uploads OK.
<html>
<head><title>501 Not Implemented</title></head>
<body bgcolor="white">
<center><h1>501 Not Implemented</h1></center>
<hr><center>nginx/1.8.1</center>
</body>
</html>
access.log
[01/Mar/2017:10:13:29 +0800] "POST /boss/cgi/importemoji HTTP/1.1" 501 582
The Nginx config file is
http {
    include mime.types;
    #default_type application/octet-stream;
    default_type text/plain;

    access_log logs/access.log main;
    #access_log off;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 30;
    keepalive_requests 100;

    gzip on;
    #gzip_disable msie6;

    proxy_max_temp_file_size 0;
    proxy_buffer_size 20M;
    proxy_buffers 4 20M;

    #mail_spam add url, host will be mod
    #server_name_in_redirect off;

    proxy_connect_timeout 60;
    proxy_read_timeout 120;
    proxy_send_timeout 120;

    client_header_buffer_size 20M;
    client_max_body_size 80M;
    client_body_buffer_size 60M;
    client_body_temp_path /usr/local/qspace/nginx/client_body_temp;
    client_header_timeout 1m;
    client_body_timeout 1m;

    server_names_hash_max_size 1024;
    server_names_hash_bucket_size 1024;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Real-Port $remote_port;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header PROXY_FORWARDED_FOR "disabled";

    server {
        listen 80;
        listen 443 ssl;
        keepalive_timeout 70;
        ...
    }
}
error.log
2017/03/01 10:41:10 [debug] 6167#6167: *132144 __mydebug. menshen cleanup r: 00000000011DD720
2017/03/01 10:41:11 [debug] 6171#6171: *132871 __mydebug_menshen. ngx_http_dummy_payload_handler wait_for_body: yes
2017/03/01 10:41:11 [debug] 6171#6171: *132871 status: unkown. uri: /boss/cgi/importemoji args: r: 00000000011DD720 r->main: 00000000011DD720 r->count: 1
2017/03/01 10:41:11 [debug] 6171#6171: *132871 __mydebug_menshen. ngx_http_menshen_handler is called r: 00000000011DD720 nginx_version: 1008001
2017/03/01 10:41:11 [debug] 6171#6171: *132871 status: send. uri: /boss/cgi/importemoji. args: r: 00000000011DD720 r->main: 00000000011DD720 r->count: 1
2017/03/01 10:41:11 [debug] 6171#6171: *132871 __mydebug_menshen. len: 3078 header:
POST /boss/cgi/importemoji HTTP/1.1
Proxy-Connection:keep-alive
Content-Length:9465320
Pragma:no-cache
Cache-Control:no-cache
Accept:application/json, text/javascript, */*; q=0.01
If I upload the 10M file with curl directly to the server, it is OK, so the problem probably arises from Nginx.
How can I fix this?
I have solved the problem. Generally, these are the points to check when you hit large-file upload errors on Nginx (a consolidated sketch of the relevant directives follows the list):
1. File size limit: revise client_max_body_size.
2. Keep-alive connection timeout: revise keepalive_timeout.
3. Reverse proxy timeout: revise proxy_connect_timeout.
4. Finally, make sure your Nginx build is vanilla. In this case, my company compiles in a module called Menshen that acts as a firewall for Nginx, and it only passes upload requests smaller than 8M.
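For reference, a minimal sketch of those directives together (the values here are illustrative, not the ones from my config):

http {
    # Allow request bodies up to 100M (the default is only 1M)
    client_max_body_size 100m;

    # Keep idle client connections open long enough for slow uploads to start
    keepalive_timeout 65;

    # Give the upstream time to accept and read the proxied upload
    proxy_connect_timeout 60s;
    proxy_send_timeout 300s;
    proxy_read_timeout 300s;
}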

nginx location header rewrite using proxy_redirect directive

Running nginx on Windows as a reverse proxy with the below nginx.conf:
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 8082;
        server_name localhost;

        location / {
            proxy_pass http://192.168.12.211:8082;
            proxy_redirect http://192.168.12.211/ http://localhost:8080/;
            proxy_set_header Host $host;
        }
    }
}
Here is the curl output:
c:\curl>curl -I http://localhost:8082
HTTP/1.1 303 See Other
Server: nginx/1.9.9
Date: Wed, 20 Jan 2016 10:30:38 GMT
Content-Type: text/html
Connection: keep-alive
Access-Control-Allow-Origin: *
location: http://192.168.12.211:8080/test.htm?Id=12345678
I want the "location" header received in the response to be rewritten as shown in the proxy_redirect directive in the nginx.conf file. Basically
location: http://192.168.12.211:8080/test.htm?Id=12345678
must be rewritten as
location: http://localhost:8080/test.htm?Id=12345678
What am I missing here in the nginx configuration? Any hints appreciated.
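One thing worth checking (my observation, not an answer from the thread): as far as I understand, proxy_redirect only rewrites the Location header when it begins with the literal string given as its first argument, and the Location above starts with http://192.168.12.211:8080/, which http://192.168.12.211/ never matches because of the explicit port. A sketch of a rule that would match that header:

location / {
    proxy_pass http://192.168.12.211:8082;
    # Match the Location header exactly as the backend sends it, port included
    proxy_redirect http://192.168.12.211:8080/ http://localhost:8080/;
    proxy_set_header Host $host;
}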
