Nginx Ingress session-cookie-expires doesn't work in kubernetes - nginx

The application is deployed on Azure; the Kubernetes version is 1.19.6 and the nginx-ingress-controller version is 0.27.1.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: qas
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/ingress.allow-http: "false"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/http2-push-preload: "true"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/upstream-fail-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
    # Legacy: for compatibility with older browsers: https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    nginx.ingress.kubernetes.io/session-cookie-expires: "3600"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"
    #-----project specific-----#
    nginx.ingress.kubernetes.io/app-root: "/welcome"
    #----No IP whitelist for storefront, we fully depend on NSG rules in both D/Q/P-----#
    nginx.ingress.kubernetes.io/server-snippet: |
      # maintenance page
      #rewrite ^(.*)$ https://www.maintenance.bosch.com/ redirect;
      ####################################
      # NOTE: for the storefront we strictly do not allow access to admin URLs
      ####################################
      if ( $uri ~* "^/(smartedit|backoffice|hac|hmc|mcc|cmssmart*|maintenance|boschfoundationemployee|embeddedserver|groovynature|acceleratorservices|authorizationserver|permission*|previewwebservices|tomcatembeddedserver|.*cockpit)" ) {
        return 403;
      }
    nginx.ingress.kubernetes.io/configuration-snippet: |
      server_name_in_redirect on;
      chunked_transfer_encoding off;
      proxy_ignore_client_abort on;
      gzip on;
      gzip_vary on;
      gzip_min_length 1;
      gzip_proxied expired no-cache no-store private auth;
      gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/javascript application/json application/xml image/png image/svg+xml;
      gzip_disable "MSIE [1-6]\.";
      set $redirection_target "/";
      #------project specific-----#
      # TODO: Change to quality.aacentral.bosch.tech once migration is completed
      set $best_http_host rb-q-aa-central.westeurope.cloudapp.azure.com;
      # only if we did not redirect, apply headers for caching
      if ($uri ~* \.(js|css|gif|jpe?g|png|woff2)) {
        # for older browsers
        expires 5h;
        add_header Cache-Control "private, max-age=1800, stale-while-revalidate";
      }
spec:
  tls:
  - hosts:
    - domain.com
    secretName: waf
  rules:
  - host: domain.com
    http:
      paths:
      - backend:
          serviceName: svc
          servicePort: 443
        path: /
The Ingress works fine, but these annotations do not behave as expected:
nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
nginx.ingress.kubernetes.io/session-cookie-expires: "3600"
nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"
No matter how I change the timeout value, the cookie expiry stays at 300 seconds.
I also cannot find any session-affinity configuration in the generated nginx.conf after deploying the nginx-ingress-controller, and the official NGINX documentation has no chapter describing how these annotations work.
I hope someone can point me to material explaining how this works and why the timeout doesn't change.
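For reference, a minimal way to see which expiry the controller actually applies is to inspect the Set-Cookie header it returns (sketch only; the host is the placeholder from the manifest above):

curl -sk -o /dev/null -D - https://domain.com/welcome | grep -i set-cookie
# e.g. Set-Cookie: INGRESSCOOKIE=...; Expires=<date>; Max-Age=3600; Path=/; Secure; HttpOnly
# with the behaviour described above, Max-Age/Expires stay at 300 instead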
Thanks

Related

How to set up this ingress and controller to enable caching?

The cluster runs multiple NGINX pods in one Service, deployed via a Deployment YAML file.
I'm trying to cache GET requests on both services: a rest.js client and an API web application.
I'm struggling to make caching work with this Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: myNamespace
  name: test-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    acme.cert-manager.io/http01-edit-in-place: "true"
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-body-size: 8m
    nginx.ingress.kubernetes.io/proxy-buffering: "on"
    nginx.ingress.kubernetes.io/http-snippet: "proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=static-cache:32m use_temp_path=off max_size=4g inactive=24h;"
    nginx.ingress.kubernetes.io/server-snippet: |
      proxy_cache static-cache;
      proxy_cache_lock on;
      proxy_cache_valid any 60m;
      proxy_ignore_headers "Set-Cookie";
      proxy_hide_header "Set-Cookie";
      add_header Cache-Control "public";
      add_header X-Cache-Status $upstream_cache_status;
spec:
  rules:
  - host: "{{ HOST }}"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: server
            port:
              number: 8080
  - host: "client-{{ HOST }}"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: client
            port:
              number: 5500
  tls:
  - hosts:
    - "{{ HOST }}"
    - "testclientapplication-{{ HOST }}"
    secretName: ingress-cert
The response to any request contains only the content-length, content-type, date, and strict-transport-security headers.
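One way to check is to look at the raw response headers for the X-Cache-Status header added in the server-snippet above; a minimal sketch (the host and path are placeholders):

curl -sk -o /dev/null -D - "https://{{ HOST }}/some-get-endpoint" | grep -i x-cache-status
curl -sk -o /dev/null -D - "https://{{ HOST }}/some-get-endpoint" | grep -i x-cache-status
# if caching were active, the first request would typically report MISS and the repeat HIT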
Previously I attempted to configure this via a ConfigMap, but that didn't work either.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: myNamespace
  name: ingress-nginx-controller
data:
  http-snippet: "proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=static-cache:32m use_temp_path=off max_size=4g inactive=24h;"
The server and client application are running fine, but I'm struggling to enable caching.
Any advice on how to enable caching would be highly appreciated.

Nginx on kubernetes not serving static content

I have NGINX Open Source on AKS. Everything was working well, but it is unable to serve static content like index.html or favicon.ico.
When I open http:// it does not serve index.html by default (I get a 404), and if I try to open any static content directly I also get a 404 error.
The nginx configuration was passed in as a ConfigMap; below is the part of the config that is supposed to serve static content.
server {
  listen 80;
  server_name localhost;
  root /opt/abc/html; # also tried root /opt/abc/html/
  location / {
    root /opt/abc/html; # also tried root /opt/abc/html/
    index index.html;
    try_files $uri $uri/ /index.html?$args;
    ...
    ...
    ..
    proxy_pass http://tomcat;
  }
}
Setup:
Kubernetes on AKS
Nginx Open Source [no ingress]
configMaps to mount config.d
the static content (/opt/abc/html) was copied into the pod with kubectl cp [will this work?]
ref: https://github.com/RammusXu/toolkit/tree/master/k8s/echoserver
Here's an example of mounting nginx.conf from a ConfigMap.
Make sure you run kubectl rollout restart deployment echoserver after updating the ConfigMap: a pod only picks up the ConfigMap contents when it is created; it is not synced or auto-updated afterwards.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      volumes:
      - name: config
        configMap:
          name: nginx-config
      containers:
      - name: echoserver
        # image: gcr.io/google_containers/echoserver:1.10
        image: openresty/openresty:1.15.8.2-1-alpine
        ports:
        - containerPort: 8080
          name: http
        # nginx.conf override
        volumeMounts:
        - name: config
          subPath: nginx.conf
          # mountPath: /etc/nginx/nginx.conf
          mountPath: /usr/local/openresty/nginx/conf/nginx.conf
          readOnly: true
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  namespace: default
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: http
    protocol: TCP
    name: http
  selector:
    app: echoserver
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: default
data:
  nginx.conf: |-
    events {
      worker_connections 1024;
    }
    env HOSTNAME;
    env NODE_NAME;
    env POD_NAME;
    env POD_NAMESPACE;
    env POD_IP;
    http {
      default_type 'text/plain';
      # maximum allowed size of the client request body. By default this is 1m.
      # Request with bigger bodies nginx will return error code 413.
      # http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size
      client_max_body_size 10m;
      # https://blog.percy.io/tuning-nginx-behind-google-cloud-platform-http-s-load-balancer-305982ddb340
      keepalive_timeout 650;
      keepalive_requests 10000;
      # GZIP support
      gzip on;
      gzip_min_length 128;
      gzip_proxied any;
      gzip_comp_level 6;
      gzip_types text/css
                 text/plain
                 text/javascript
                 application/javascript
                 application/json
                 application/x-javascript
                 application/xml
                 application/xml+rss
                 application/xhtml+xml
                 application/x-font-ttf
                 application/x-font-opentype
                 application/vnd.ms-fontobject
                 image/svg+xml
                 image/x-icon
                 application/rss+xml
                 application/atom_xml
                 application/vnd.apple.mpegURL
                 application/x-mpegurl
                 vnd.apple.mpegURL
                 application/dash+xml;
      init_by_lua_block {
        local template = require("template")
        -- template syntax documented here:
        -- https://github.com/bungle/lua-resty-template/blob/master/README.md
        tmpl = template.compile([[
          Hostname: {{os.getenv("HOSTNAME") or "N/A"}}
          Pod Information:
          {% if os.getenv("POD_NAME") then %}
          node name: {{os.getenv("NODE_NAME") or "N/A"}}
          pod name: {{os.getenv("POD_NAME") or "N/A"}}
          pod namespace: {{os.getenv("POD_NAMESPACE") or "N/A"}}
          pod IP: {{os.getenv("POD_IP") or "N/A"}}
          {% else %}
          -no pod information available-
          {% end %}
          Server values:
          server_version=nginx: {{ngx.var.nginx_version}} - lua: {{ngx.config.ngx_lua_version}}
          Request Information:
          client_address={{ngx.var.remote_addr}}
          method={{ngx.req.get_method()}}
          real path={{ngx.var.request_uri}}
          query={{ngx.var.query_string or ""}}
          request_version={{ngx.req.http_version()}}
          request_scheme={{ngx.var.scheme}}
          request_uri={{ngx.var.scheme.."://"..ngx.var.host..":"..ngx.var.server_port..ngx.var.request_uri}}
          Request Headers:
          {% for i, key in ipairs(keys) do %}
          {{key}}={{headers[key]}}
          {% end %}
          Request Body:
          {{ngx.var.request_body or " -no body in request-"}}
        ]])
      }
      server {
        # please check the benefits of reuseport https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1
        # basically instructs to create an individual listening socket for each worker process (using the SO_REUSEPORT
        # socket option), allowing a kernel to distribute incoming connections between worker processes.
        listen 8080 default_server reuseport;
        listen 8443 default_server ssl http2 reuseport;
        ssl_certificate /certs/certificate.crt;
        ssl_certificate_key /certs/privateKey.key;
        # Replace '_' with your hostname.
        server_name _;
        location / {
          lua_need_request_body on;
          content_by_lua_block {
            ngx.header["Server"] = "echoserver"
            local headers = ngx.req.get_headers()
            local keys = {}
            for key, val in pairs(headers) do
              table.insert(keys, key)
            end
            table.sort(keys)
            ngx.say(tmpl({os=os, ngx=ngx, keys=keys, headers=headers}))
          }
        }
      }
    }
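A hedged usage sketch for the manifests above (the file name and node address are assumptions):

kubectl apply -f echoserver.yaml                 # the Deployment, Service and ConfigMap above
kubectl rollout restart deployment echoserver    # required again after any ConfigMap change
kubectl get svc echoserver                       # note the NodePort assigned to port 80
curl http://<node-ip>:<node-port>/               # should print the echoed request details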

GKE gRPC Ingress Health Check with mTLS

I am trying to implement a gRPC service on GKE (v1.11.2-gke.18) with mutual TLS auth.
When not enforcing client auth, the HTTP2 health check that GKE automatically creates responds, and everything connects without issue.
When I turn on mutual auth, the health check fails - presumably because it cannot complete a connection since it lacks a client certificate and key.
As always, the documentation is light and conflicting. I require a solution that is fully programmatic (i.e. no console tweaking), but I have not been able to find one, other than manually changing the health check to TCP.
From what I can see, I am guessing that I will need to do one of the following:
implement a custom mTLS health check that prevents GKE from automatically creating an HTTP2 check
find an alternative way to do SSL termination at the container that doesn't use the service.alpha.kubernetes.io/app-protocols: '{"grpc":"HTTP2"}' proprietary annotation
find some way to provide the health check with the credentials it needs
alter my Go implementation to somehow serve a health check without requiring mTLS, while enforcing mTLS on all other endpoints
Or perhaps there is something else that I have not considered? The config below works perfectly for REST and gRPC with TLS but breaks with mTLS.
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: grpc-srv
  labels:
    type: grpc-srv
  annotations:
    service.alpha.kubernetes.io/app-protocols: '{"grpc":"HTTP2"}'
spec:
  type: NodePort
  ports:
  - name: grpc
    port: 9999
    protocol: TCP
    targetPort: 9999
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: myapp
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: io-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "grpc-ingress"
    kubernetes.io/ingress.allow-http: "true"
spec:
  tls:
  - secretName: io-grpc
  - secretName: io-api
  rules:
  - host: grpc.xxx.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: grpc-srv
          servicePort: 9999
  - host: rest.xxx.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: grpc-srv
          servicePort: 8080
It seems that there is currently no way to achieve this using the GKE L7 ingress, but I have been successful deploying an NGINX ingress controller instead. Google has a decent tutorial on how to deploy one here.
This installs an L4 TCP load balancer with no health checks on the services, leaving NGINX to handle the L7 termination and routing. This gives you a lot more flexibility, but the devil is in the detail, and the detail isn't easy to come by; most of what I found was learned by trawling through GitHub issues.
What I have managed to achieve is for NGINX to handle the TLS termination and still pass the certificate through to the backend, so you can handle things such as user auth via the CN, or check the certificate serial against a CRL.
Below is my ingress file. The annotations are the minimum required to achieve mTLS authentication and still have access to the certificate in the backend.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grpc-ingress
  namespace: master
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "master/auth-tls-chain"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "2"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPCS"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/grpc-backend: "true"
spec:
  tls:
  - hosts:
    - grpc.example.com
    secretName: auth-tls-chain
  rules:
  - host: grpc.example.com
    http:
      paths:
      - path: /grpc.AwesomeService
        backend:
          serviceName: awesome-srv
          servicePort: 9999
      - path: /grpc.FantasticService
        backend:
          serviceName: fantastic-srv
          servicePort: 9999
A few things to note:
The auth-tls-chain secret contains three files: ca.crt is the certificate chain and should include any intermediate certificates, tls.crt contains your server certificate, and tls.key contains your private key. (A sketch of creating this secret follows these notes.)
If this secret lives in a namespace different from the NGINX ingress controller's, you should give the full namespace/name path in the annotation.
My verify-depth is 2, but that is because I am using intermediate certificates. If you are using self-signed certificates, you will only need a depth of 1.
backend-protocol: "GRPCS" is required to prevent NGINX terminating the TLS. If you want to have NGINX terminate the TLS and run your services without encryption, use GRPC as the protocol.
grpc-backend: "true" is required to let NGINX know to use HTTP2 for the backend requests.
You can list multiple paths and direct to multiple services. Unlike with the GKE ingress, these paths should not have a forward slash or asterisk suffix.
The best part is that if you have multiple namespaces, or if you are running a REST service as well (e.g. gRPC Gateway), NGINX will reuse the same load balancer. This provides some savings over the GKE ingress, which would use a separate LB for each Ingress.
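For completeness, a sketch of how the auth-tls-chain secret from the notes above could be created (assuming the three files are in the working directory):

kubectl create secret generic auth-tls-chain \
  --namespace master \
  --from-file=ca.crt=./ca.crt \
  --from-file=tls.crt=./tls.crt \
  --from-file=tls.key=./tls.key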
The Ingress above is from the master namespace; below is a REST ingress from the staging namespace.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: staging
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - api-stage.example.com
    secretName: letsencrypt-staging
  rules:
  - host: api-stage.example.com
    http:
      paths:
      - path: /awesome
        backend:
          serviceName: awesom-srv
          servicePort: 8080
      - path: /fantastic
        backend:
          serviceName: fantastic-srv
          servicePort: 8080
For HTTP, I am using LetsEncrypt, but there's plenty of information available on how to set that up.
If you exec into the ingress-nginx pod, you will be able to see how NGINX has been configured:
...
server {
server_name grpc.example.com ;
listen 80;
set $proxy_upstream_name "-";
set $pass_access_scheme $scheme;
set $pass_server_port $server_port;
set $best_http_host $http_host;
set $pass_port $pass_server_port;
listen 442 proxy_protocol ssl http2;
# PEM sha: 142600b0866df5ed9b8a363294b5fd2490c8619d
ssl_certificate /etc/ingress-controller/ssl/default-fake-certificate.pem;
ssl_certificate_key /etc/ingress-controller/ssl/default-fake-certificate.pem;
ssl_certificate_by_lua_block {
certificate.call()
}
# PEM sha: 142600b0866df5ed9b8a363294b5fd2490c8619d
ssl_client_certificate /etc/ingress-controller/ssl/master-auth-tls-chain.pem;
ssl_verify_client on;
ssl_verify_depth 2;
error_page 495 496 = https://help.example.com/auth;
location /grpc.AwesomeService {
set $namespace "master";
set $ingress_name "grpc-ingress";
set $service_name "awesome-srv";
set $service_port "9999";
set $location_path "/grpc.AwesomeServices";
rewrite_by_lua_block {
lua_ingress.rewrite({
force_ssl_redirect = true,
use_port_in_redirects = false,
})
balancer.rewrite()
plugins.run()
}
header_filter_by_lua_block {
plugins.run()
}
body_filter_by_lua_block {
}
log_by_lua_block {
balancer.log()
monitor.call()
plugins.run()
}
if ($scheme = https) {
more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains";
}
port_in_redirect off;
set $proxy_upstream_name "master-analytics-srv-9999";
set $proxy_host $proxy_upstream_name;
client_max_body_size 1m;
grpc_set_header Host $best_http_host;
# Pass the extracted client certificate to the backend
grpc_set_header ssl-client-cert $ssl_client_escaped_cert;
grpc_set_header ssl-client-verify $ssl_client_verify;
grpc_set_header ssl-client-subject-dn $ssl_client_s_dn;
grpc_set_header ssl-client-issuer-dn $ssl_client_i_dn;
# Allow websocket connections
grpc_set_header Upgrade $http_upgrade;
grpc_set_header Connection $connection_upgrade;
grpc_set_header X-Request-ID $req_id;
grpc_set_header X-Real-IP $the_real_ip;
grpc_set_header X-Forwarded-For $the_real_ip;
grpc_set_header X-Forwarded-Host $best_http_host;
grpc_set_header X-Forwarded-Port $pass_port;
grpc_set_header X-Forwarded-Proto $pass_access_scheme;
grpc_set_header X-Original-URI $request_uri;
grpc_set_header X-Scheme $pass_access_scheme;
# Pass the original X-Forwarded-For
grpc_set_header X-Original-Forwarded-For $http_x_forwarded_for;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
grpc_set_header Proxy "";
# Custom headers to proxied server
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_buffering off;
proxy_buffer_size 4k;
proxy_buffers 4 4k;
proxy_request_buffering on;
proxy_http_version 1.1;
proxy_cookie_domain off;
proxy_cookie_path off;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout;
proxy_next_upstream_tries 3;
grpc_pass grpcs://upstream_balancer;
proxy_redirect off;
}
location /grpc.FantasticService {
set $namespace "master";
set $ingress_name "grpc-ingress";
set $service_name "fantastic-srv";
set $service_port "9999";
set $location_path "/grpc.FantasticService";
...
This is just an extract of the generated nginx.conf. But you should be able to see how a single configuration could handle multiple services across multiple namespaces.
The last piece is a Go snippet showing how we get hold of the certificate via the context. As you can see from the config above, NGINX adds the authenticated cert and other details to the gRPC metadata.
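// Assumed imports for the snippet below (not listed in the original answer):
//   "net/url", "encoding/pem", "crypto/x509",
//   "google.golang.org/grpc/codes", "google.golang.org/grpc/metadata", "google.golang.org/grpc/status"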
meta, ok := metadata.FromIncomingContext(*ctx)
if !ok {
    return status.Error(codes.Unauthenticated, "missing metadata")
}
// Check if SSL has been handled upstream
if len(meta.Get("ssl-client-verify")) == 1 && meta.Get("ssl-client-verify")[0] == "SUCCESS" {
    if len(meta.Get("ssl-client-cert")) > 0 {
        certPEM, err := url.QueryUnescape(meta.Get("ssl-client-cert")[0])
        if err != nil {
            return status.Errorf(codes.Unauthenticated, "bad or corrupt certificate")
        }
        block, _ := pem.Decode([]byte(certPEM))
        if block == nil {
            return status.Error(codes.Unauthenticated, "failed to parse certificate PEM")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return status.Error(codes.Unauthenticated, "failed to parse certificate PEM")
        }
        return authUserFromCertificate(ctx, cert)
    }
}
// if we fall through, then try to authenticate via the peer object for gRPCS,
// or via a JWT in the metadata for gRPC Gateway.
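A hypothetical shape for the authUserFromCertificate helper referenced above; the signature and the CN-based check are assumptions for illustration, not the original implementation:

// Sketch only: authorize the caller from the verified client certificate,
// e.g. by Common Name as mentioned earlier in the answer.
// Assumed imports: "context", "crypto/x509", "google.golang.org/grpc/codes", "google.golang.org/grpc/status".
func authUserFromCertificate(ctx *context.Context, cert *x509.Certificate) error {
    if cert.Subject.CommonName == "" {
        return status.Error(codes.Unauthenticated, "client certificate has no common name")
    }
    // look the CN up in your user store, attach the user to the context, etc.
    return nil
}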
HTTP/2 and gRPC support on GKE Ingress is not available yet; please see the limitation. There is already a feature request in the works to address the issue.

Configure Kubernetes Traefik Ingress with different path rewrites for each service

I'm in the process of migrating our application from a single-instance Docker Compose configuration to Kubernetes. I currently have the following example NGINX configuration, running as a reverse proxy for my application:
server {
server_name example.com;
ssl_certificate /etc/nginx/certs/${CERT_NAME};
ssl_certificate_key /etc/nginx/certs/${KEY_NAME};
listen 443 ssl;
keepalive_timeout 70;
access_log /var/log/nginx/access.log mtail;
ssl_protocols xxxxxx
ssl_ciphers xxxxxx
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
rewrite_log on;
resolver 127.0.0.11 ipv6=off;
location /push/ {
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/htpasswd;
rewrite /push/(.*) /index.php/$1 break;
proxy_pass pushinterface:3080;
}
location /flights/ {
rewrite /flights/(.*) /$1 break;
proxy_pass flightstats:3090;
}
location /api/ {
proxy_pass $api;
}
location /grafana/ {
access_log off;
log_not_found off;
proxy_pass http://grafana:3000;
rewrite ^/grafana/(.*) /$1 break;
}
}
My initial plan for the reverse-proxy part was to implement an Ingress with the NGINX ingress controller, but I saw that my configuration can only be expressed as an Ingress with NGINX Plus. That's why I decided to try Traefik, but I'm not sure whether it is possible to have a different path rewrite for each service.
I tried the following Ingress configuration, but it seems it's not working:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-traefik
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: ReplacePathRegex
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: pushinterface
          servicePort: 80
        path: /push/(.*) /index/$1
      - backend:
          serviceName: flights
          servicePort: 80
        path: /flights/(.*) /$1
      - backend:
          serviceName: api
          servicePort: 80
        path: /api
      - backend:
          serviceName: grafana
          servicePort: 80
        path: /grafana/(.*) /$1
I would appreciate any help with solving this task.
After several hours of unsuccessful attempts, I solved it with the NGINX ingress controller, and it works great! Here's the Ingress configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite /push/(.*) /index/$1 break;
      rewrite /flights/(.*) /$1 break;
      rewrite /grafana/(.*) /$1 break;
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: pushinterface
          servicePort: 80
        path: /push
      - backend:
          serviceName: flights
          servicePort: 80
        path: /flights
      - backend:
          serviceName: api
          servicePort: 80
        path: /api
      - backend:
          serviceName: grafana
          servicePort: 80
        path: /grafana
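To illustrate how the configuration-snippet and the path rules combine (the request paths below are illustrative, not from the original post):

# /push/status    -> matches path /push,    rewritten to /index/status -> pushinterface:80
# /flights/today  -> matches path /flights, rewritten to /today        -> flights:80
# /grafana/login  -> matches path /grafana, rewritten to /login        -> grafana:80
# /api/...        -> matches path /api, forwarded unchanged            -> api:80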
Thanks to everyone for the answers! :)
Using the ReplacePathRegex rule type in your example does not guarantee that incoming requests will be forwarded to the target backends, as mentioned in the Traefik documentation.
In order to route requests to the endpoints, use Matcher rules instead of Modifier rules, as matchers are designed for that purpose.
There is a separate discussion of a similar issue here.
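For comparison, a Traefik 1.x variant built on a matcher rather than a modifier might look like the sketch below (untested; PathPrefixStrip routes on the prefix and strips it before forwarding, which covers the /flights and /grafana cases but cannot express the /push -> /index rewrite, which is why the accepted answer fell back to NGINX snippets):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-traefik
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /flights
        backend:
          serviceName: flights
          servicePort: 80
      - path: /grafana
        backend:
          serviceName: grafana
          servicePort: 80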

website not accessible after idling some time running nginx and kubernetes

My website is not accessible from the browser for a few minutes after it has been idle (not accessed) for about 30 minutes or more. I have to reload the page several times before it renders, and I am not sure where to start debugging.
The stack I am running is a Go app behind nginx, running on a Kubernetes ingress. Here is part of my nginx.conf:
daemon off;
worker_processes 2;
pid /run/nginx.pid;
worker_rlimit_nofile 523264;
events {
multi_accept on;
worker_connections 16384;
use epoll;
}
http {
real_ip_header X-Forwarded-For;
set_real_ip_from 0.0.0.0/0;
real_ip_recursive on;
geoip_country /etc/nginx/GeoIP.dat;
geoip_city /etc/nginx/GeoLiteCity.dat;
geoip_proxy_recursive on;
# lua section to return proper error codes when custom pages are used
lua_package_path '.?.lua;./etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/lua-resty-http/lib/?.lua;';
init_by_lua_block {
require("error_page")
}
sendfile on;
aio threads;
tcp_nopush on;
tcp_nodelay on;
log_subrequest on;
reset_timedout_connection on;
keepalive_timeout 75s;
client_header_buffer_size 1k;
large_client_header_buffers 4 8k;
types_hash_max_size 2048;
server_names_hash_max_size 512;
server_names_hash_bucket_size 64;
map_hash_bucket_size 64;
include /etc/nginx/mime.types;
default_type text/html;
gzip on;
gzip_comp_level 5;
gzip_http_version 1.1;
gzip_min_length 256;
gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component;
gzip_proxied any;
server_tokens on;
log_format upstreaminfo '$remote_addr - '
'[$proxy_add_x_forwarded_for] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" '
'$request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status';
map $request_uri $loggable {
default 1;
}
access_log /var/log/nginx/access.log upstreaminfo if=$loggable;
error_log /var/log/nginx/error.log notice;
resolver 10.131.240.10 valid=30s;
# Retain the default nginx handling of requests without a "Connection" header
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
# trust http_x_forwarded_proto headers correctly indicate ssl offloading
map $http_x_forwarded_proto $pass_access_scheme {
default $http_x_forwarded_proto;
'' $scheme;
}
map $http_x_forwarded_port $pass_server_port {
default $http_x_forwarded_port;
'' $server_port;
}
# map port 442 to 443 for header X-Forwarded-Port
map $pass_server_port $pass_port {
442 443;
default $pass_server_port;
}
# Map a response error watching the header Content-Type
map $http_accept $httpAccept {
default html;
application/json json;
application/xml xml;
text/plain text;
}
map $httpAccept $httpReturnType {
default text/html;
json application/json;
xml application/xml;
text text/plain;
}
server_name_in_redirect off;
port_in_redirect off;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
# turn on session caching to drastically improve performance
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_session_timeout 10m;
# allow configuring ssl session tickets
ssl_session_tickets on;
# slightly reduce the time-to-first-byte
ssl_buffer_size 4k;
# allow configuring custom ssl ciphers
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_prefer_server_ciphers on;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
upstream default-ui-80 {
sticky hash=sha1 name=route httponly;
server 10.128.2.104:4000 max_fails=0 fail_timeout=0;
server 10.128.4.37:4000 max_fails=0 fail_timeout=0;
}
server {
server_name app.com;
listen [::]:80;
listen 442 ssl http2;
# PEM sha: a51bd3f56b3ec447945f1f92f0ad140bb8134d11
ssl_certificate /ingress-controller/ssl/default-linker-secret.pem;
ssl_certificate_key /ingress-controller/ssl/default-linker-secret.pem;
more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains; preload";
location / {
set $proxy_upstream_name "default-ui-80";
port_in_redirect off;
# enforce ssl on server side
if ($scheme = http) {
return 301 https://$host$request_uri;
}
client_max_body_size "1024m";
proxy_set_header Host $host;
# Pass Real IP
proxy_set_header X-Real-IP $remote_addr;
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $pass_port;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
proxy_set_header Proxy "";
# Custom headers
proxy_connect_timeout 5s;
proxy_send_timeout 3600s;
proxy_read_timeout 3600s;
proxy_redirect off;
proxy_buffering off;
proxy_buffer_size "4k";
proxy_http_version 1.1;
proxy_pass http://default-ui-80;
}
}
}
ingress controller
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-lb
spec:
  replicas: 1
  selector:
    k8s-app: nginx-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
        name: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.2
        name: nginx-ingress-lb
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-ingress-sticky-session
        - --configmap=$(POD_NAMESPACE)/nginx-settings-configmap
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-configmaps
        - --v=2
ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-prod
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/affinity: "cookie"
    ingress.kubernetes.io/session-cookie-name: "route"
    ingress.kubernetes.io/session-cookie-hash: "sha1"
    nginx.org/client-max-body-size: "1024m"
spec:
  tls:
  - hosts:
    - foo.io
    secretName: foo-secret
  rules:
  - host: foo.io
    http:
      paths:
      - backend:
          serviceName: foo.io
          servicePort: 80
service
apiVersion: v1
kind: Service
metadata:
  name: foo-prod-nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http
  - port: 443
    name: https
  selector:
    app: nginx-ingress-controller
A Service of type LoadBalancer allocates a public IP per Kubernetes Service, which is not how ingress is meant to work. You should expose your service as a NodePort and let the ingress route traffic to it (example here).
Also, if you are going to use nginx as the ingress controller, you should use endpoints instead of the Service. Here is why.
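As a rough sketch of that suggestion (untested; the Service name and target port are taken from the manifests above, the pod label is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: foo.io            # must match serviceName in the Ingress above
spec:
  type: NodePort
  selector:
    app: ui               # assumption: whatever label the application pods actually carry
  ports:
  - name: http
    port: 80
    targetPort: 4000      # assumption: the app listens on 4000, as in the upstream block above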
