How to set up this ingress and controller to enable caching? - nginx

The cluster is running multiple NGINX pods behind one Service, deployed via a Deployment YAML file.
I'm trying to cache GET requests on both services: a rest.js client and an API web application.
I'm struggling to make caching work with this Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: myNamespace
  name: test-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    acme.cert-manager.io/http01-edit-in-place: "true"
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-body-size: 8m
    nginx.ingress.kubernetes.io/proxy-buffering: "on"
    nginx.ingress.kubernetes.io/http-snippet: "proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=static-cache:32m use_temp_path=off max_size=4g inactive=24h;"
    nginx.ingress.kubernetes.io/server-snippet: |
      proxy_cache static-cache;
      proxy_cache_lock on;
      proxy_cache_valid any 60m;
      proxy_ignore_headers "Set-Cookie";
      proxy_hide_header "Set-Cookie";
      add_header Cache-Control "public";
      add_header X-Cache-Status $upstream_cache_status;
spec:
  rules:
    - host: "{{ HOST }}"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: server
                port:
                  number: 8080
    - host: "client-{{ HOST }}"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: client
                port:
                  number: 5500
  tls:
    - hosts:
        - "{{ HOST }}"
        - "testclientapplication-{{ HOST }}"
      secretName: ingress-cert
The response to any request contains only the content-length, content-type, date, and strict-transport-security headers.
Previously I attempted to configure it via a ConfigMap, but that didn't work out either.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: myNamespace
  name: ingress-nginx-controller
data:
  http-snippet: "proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=static-cache:32m use_temp_path=off max_size=4g inactive=24h;"
The service and client application are running fine, but I'm struggling to enable caching.
Some advice on how to enable caching would be highly appreciated.
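A note on why neither attempt takes effect: proxy_cache_path is only valid in nginx's http context, and ingress-nginx has no http-snippet annotation, so that annotation on the Ingress is not applied. The cache zone has to be declared in the controller's own ConfigMap, i.e. the one named by the controller's --configmap flag, which normally lives in the controller's namespace rather than the application namespace. A minimal sketch, assuming a stock ingress-nginx install (the name and namespace below are assumptions; they must match your controller's flags):
apiVersion: v1
kind: ConfigMap
metadata:
  # Assumption: a stock install; this must be the ConfigMap the
  # controller actually reads (see its --configmap flag).
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # http-level directive: declares the cache zone once for the whole controller.
  http-snippet: |
    proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=static-cache:32m use_temp_path=off max_size=4g inactive=24h;
Once the zone exists at the http level, the per-Ingress server-snippet above can reference static-cache, and the X-Cache-Status response header (HIT/MISS/EXPIRED) is a quick way to verify that caching is active.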

Related

Nginx 503 ingress controller k8s

I have an application running on nginx on port 9000, with a Service attached to it. If I set the Service type to LoadBalancer, I can open IP:PORT/app/pages in my browser (ClusterIP and NodePort with nginx don't work). I created an Ingress controller with an ALB and an A record pointing to app.mydomain.com, but I keep getting 503 or 404 errors, and sometimes even 400 (I tried a couple of ports/paths, etc.). Can someone point me to what I should look at? I want to be able to open https://app.mydomain.com/app/pages. cert-manager is also complaining with 400 errors when retrieving the certificate.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-app-dev
  namespace: app-dev
  annotations:
    cert-manager.io/issuer: letsencrypt-nginx
    ingress.kubernetes.io/rewrite-target: /
    # ingressclass.kubernetes.io/is-default-class: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.mydomain.com
      secretName: letsencrypt-nginx
  rules:
    - host: app.mydomain.com
      http:
        paths:
          - pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 8080
            path: /
---
apiVersion: v1
kind: Service
metadata:
  name: app-service
  namespace: app-dev
spec:
  type: LoadBalancer
  ports:
    - port: 9090
      protocol: TCP
      targetPort: 8080
  selector:
    app: app
The IP of the Ingress is added as an A record to DNS (app.mydomain.com). This is my nginx conf in the docker image:
bash-5.1# cat /etc/nginx/conf.d/default.conf
server {
    listen 8080 ssl;
    ssl_certificate /ssl/cert;
    ssl_certificate_key /ssl/key;

    location / {
        root /www;
        autoindex off;
        add_header 'Access-Control-Allow-Origin' '*';
    }

    location /healthz {
        return 200 'ok';
    }
}
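Two mismatches in the posted manifests stand out and are worth checking first (an observation from the YAML above, not a verified fix): the Ingress backend points at port 8080, but the Service only exposes port 9090 (8080 is its targetPort, which an Ingress cannot reference directly), and the pod's nginx listens with ssl, so the controller must be told to proxy to the backend over HTTPS. A sketch of the corrected Ingress under those assumptions:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-app-dev
  namespace: app-dev
  annotations:
    cert-manager.io/issuer: letsencrypt-nginx
    # The pod terminates TLS on 8080, so proxy to the backend over HTTPS:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.mydomain.com
      secretName: letsencrypt-nginx
  rules:
    - host: app.mydomain.com
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: app-service
                port:
                  number: 9090   # the Service port; Kubernetes forwards it to targetPort 8080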

How can I define the `limit_req_zone`?

I am working with nginx-ingress-controller (NGINX Inc.'s controller, which is not the same as ingress-nginx).
I have this Ingress file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-test"
    acme.cert-manager.io/http01-edit-in-place: "true"
    nginx.org/location-snippets: |
      limit_req zone=by_web;
spec:
  ingressClassName: nginx
  rules:
    - host: my.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
  tls:
    - hosts:
        - my.domain.com
      secretName: quickstart-example-tls
I was able to define a limit_req using nginx.org/location-snippets.
How can I define the limit_req_zone?
limit_req_zone $request_uri zone=by_web:10m rate=60r/m;
Regards.
According to this article from the official documentation, you can define limit_req_zone by adding the following snippets to the location-snippets and server-snippets annotations:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-test"
    acme.cert-manager.io/http01-edit-in-place: "true"
    nginx.org/location-snippets: |
      geo $limit {
        default 1;
        10.0.0.0/8 0;
        192.168.0.0/24 0;
      }
      map $limit $request_uri {
        default '';
        '1' $binary_remote_addr;
      }
      limit_req_zone $request_uri zone=by_web:10m rate=1r/s;
    nginx.org/server-snippets: |
      location / {
        limit_req zone=by_web burst=10 nodelay;
      }
spec:
  ingressClassName: nginx
  rules:
    - host: my.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
  tls:
    - hosts:
        - my.domain.com
      secretName: quickstart-example-tls
This lets a rate limit be applied to requests from anyone who is not on an “allowlist”.
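One caveat on the snippet above: geo, map, and limit_req_zone are only valid in nginx's http context, so with the NGINX Inc. controller they are usually declared once in the controller's ConfigMap (its documented http-snippets key) rather than per Ingress, leaving only the limit_req directive in the location snippet. A sketch under that assumption; the ConfigMap name and namespace below are placeholders and must match the controller's -nginx-configmaps argument:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config        # placeholder: the ConfigMap your controller reads
  namespace: nginx-ingress  # placeholder
data:
  # http-level: declare the shared-memory zone once for the whole controller.
  http-snippets: |
    limit_req_zone $binary_remote_addr zone=by_web:10m rate=60r/m;
The Ingress then keeps only the per-location part, e.g. nginx.org/location-snippets: limit_req zone=by_web burst=10 nodelay;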

Nginx ingress kubernetes Proxy Pass

I need to configure a proxy pass in an nginx ingress.
The rule must be:
%USER%.test.domain.com to app.test.domain.com/%USER%
It must be a proxy pass, NOT a redirect.
I created this Ingress, but it does not work:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test02-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: 'true'
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/server-snippet: |
      server_name ~^(?<subdomain>.+)\.test\.domain\.it;
      location = / {
        proxy_pass https://app.test.domain.it/$subdomain/;
        proxy_set_header Host $subdomain.test.domain.it;
      }
spec:
  rules:
    - host: "*.test.domain.it"
      http:
        paths:
          - path: /
            backend:
              serviceName: test01-svc
              servicePort: 80
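One nginx behavior that commonly breaks this pattern: once proxy_pass contains a variable, nginx evaluates the whole target at request time and needs an explicit resolver directive, otherwise the runtime lookup of app.test.domain.it fails. The controller also already generates the server_name for the wildcard host, so the subdomain can be captured from $host instead of redefining it. A hedged sketch of the snippet under those assumptions (the resolver address is the typical in-cluster DNS service name; adjust to your cluster):
nginx.ingress.kubernetes.io/server-snippet: |
  # proxy_pass with a variable is resolved per request, so an explicit
  # resolver is required (assumption: the standard kube-dns service name):
  resolver kube-dns.kube-system.svc.cluster.local valid=30s;
  # Capture the %USER% label from $host; the controller already emits
  # server_name for *.test.domain.it, so it is not redefined here.
  if ($host ~ ^(?<subdomain>[^.]+)\.test\.domain\.it$) {
    set $user $subdomain;
  }
  location = / {
    proxy_pass https://app.test.domain.it/$user/;
    proxy_set_header Host app.test.domain.it;
  }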

ingress nginx redirect from www to https

I'm trying to redirect http://www... and https://www... to https://... using ingress-nginx. How can I do that?
I've tried adding the following custom configuration using the annotation nginx.ingress.kubernetes.io/server-snippet and nginx.ingress.kubernetes.io/configuration-snippet:
# 1
if($host = "www.example.com") {
    return 308 https://example.com$request_uri;
}

# 2
server {
    server_name www.example.com;
    return 308 https://example.com$request_uri;
}

# 3
server_name www.example.com;
return 308 https://example.com$request_uri;
But I get an error in the nginx controller logs for #1:
2019/12/07 20:58:47 [emerg] 48898#48898: unknown directive "if($host" in /tmp/nginx-cfg775816039:418
nginx: [emerg] unknown directive "if($host" in /tmp/nginx-cfg775816039:418
nginx: configuration file /tmp/nginx-cfg775816039 test failed
For #2 I get an error that the server block is not allowed at that position, and using #3 leads to infinite redirects. My Ingress YAML looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/ingress.global-static-ip-name: "example-com"
    nginx.ingress.kubernetes.io/rewrite-target: "/"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "86400s"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "86400s"
    nginx.ingress.kubernetes.io/proxy-body-size: "100m"
    nginx.ingress.kubernetes.io/limit-rps: "20"
    nginx.ingress.kubernetes.io/client-max-body-size: "100m"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      # see above
spec:
  tls:
    - hosts:
        - example.com
      secretName: certificate-secret
  rules:
    - host: sub.example.com
      http:
        paths:
          - backend:
              serviceName: service-sub
              servicePort: 1234
    # more subdomains here
    - host: example.com
      http:
        paths:
          - backend:
              serviceName: service-example
              servicePort: 1235
    - host: "*.example.com"
      http:
        paths:
          - backend:
              serviceName: service-example-wildcard
              servicePort: 1236
I've also tried setting the nginx.ingress.kubernetes.io/from-to-www-redirect: "true" annotation, but that leads to a different error:
2019/12/07 21:20:34 [emerg] 51558#51558: invalid server name or wildcard "www.*.example.com" on 0.0.0.0:80
nginx: [emerg] invalid server name or wildcard "www.*.example" on 0.0.0.0:80
nginx: configuration file /tmp/nginx-cfg164546048 test failed
OK, I got it: the missing space after if fixed it. Thank you mdaniel :)
Here is a working configuration that redirects anything to https://... without www:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx-integration
  namespace: integration
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/ingress.global-static-ip-name: "example-com"
    nginx.ingress.kubernetes.io/rewrite-target: "/"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "86400s"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "86400s"
    nginx.ingress.kubernetes.io/proxy-body-size: "100m"
    nginx.ingress.kubernetes.io/limit-rps: "20"
    nginx.ingress.kubernetes.io/client-max-body-size: "100m"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($host = "www.example.com") {
        return 308 https://example.com$request_uri;
      }
spec:
  tls:
    - hosts:
        - example.com
      secretName: certificate-integration-secret
  rules:
    - host: subdomain.example.com
      http:
        paths:
          - backend:
              serviceName: service-emviwiki
              servicePort: 4000
    # ... more rules, NO www here
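One caveat worth adding: the https://www... variant of the redirect only works if the controller can terminate TLS for www.example.com in the first place, and the tls section above only covers example.com. Extending the certificate to the www name (a sketch, assuming the certificate can be reissued to include it) avoids a browser certificate error before the 308 is ever sent:
spec:
  tls:
    - hosts:
        - example.com
        - www.example.com   # needed so the redirect from https://www is served with a valid certificate
      secretName: certificate-integration-secret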

Is there a point of having pod-level nginx when using nginx ingress?

I was wondering whether I should keep the pod-level nginx in the implementation below.
I previously used a normal ingress and kube-lego after migrating from VMs, and now I am using cert-manager and GKE.
My Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: myapp-static-ip
    kubernetes.io/ingress.class: nginx
    kubernetes.io/ingress.allow-http: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/rewrite-target: /
    certmanager.k8s.io/cluster-issuer: letsencrypt
  namespace: default
spec:
  tls:
    - hosts:
        - myapp.com
      secretName: myapp-crt
  rules:
    - host:
      http:
        paths:
          - path: /
            backend:
              serviceName: myapp
              servicePort: http
My service:
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 32111
      protocol: "TCP"
      name: http
  selector:
    app: myapp
My Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: gcr.io/myapp-1/myapp:latest
          imagePullPolicy: Always
          env:
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql
                  key: password
            - name: STATIC_ROOT
              value: https://storage.googleapis.com/myapp-api/static/
            - name: STATIC_URL
              value: https://storage.googleapis.com/myapp-api/static/
            - name: MEDIA_ROOT
              value: /myapp/media
            - name: MEDIA_URL
              value: http://myapp.com/media/
        - name: nginx
          image: nginx
          command: [nginx, -g, 'daemon off;']
          imagePullPolicy: Always
          volumeMounts:
            - name: api-nginx-config
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
            - name: myapp-media
              mountPath: /myapp/media/
          ports:
            - containerPort: 80
        - name: cloudsql-proxy
          image: b.gcr.io/cloudsql-docker/gce-proxy:1.05
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances=myapp-1:europe-west1:myapp-api=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-oauth-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
            - name: cloudsql
              mountPath: /cloudsql
            - name: myapp-media
              mountPath: /myapp/media
      volumes:
        - name: cloudsql-oauth-credentials
          secret:
            secretName: cloudsql-oauth-credentials
        - name: cloudsql
          emptyDir: {}
        - name: api-nginx-config
          configMap:
            name: api-nginx-config
        - name: myapp-media
          persistentVolumeClaim:
            claimName: myapp-media
My nginx conf:
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-nginx-config
data:
  nginx.conf: |
    events {
      worker_connections 1024;
    }
    http {
      upstream api {
        server 127.0.0.1:8080 fail_timeout=0;
      }
      server {
        access_log /var/log/nginx/http-access.log;
        error_log /var/log/nginx/http-error.log;
        listen 80;
        listen [::]:80;
        server_name myapp.com;
        location /media/ {
          alias /myapp/media;
        }
        location = /favicon.ico {
          access_log off;
          log_not_found off;
        }
        location / {
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header Host $host;
          proxy_set_header X-NginX-Proxy true;
          proxy_pass http://127.0.0.1:8080/;
        }
      }
    }
Is it serving any major purpose, given that I could map myapp/media directly to /media in the volume mount, and TLS is handled by the Ingress? My major concern is the pod-level nginx I highlighted earlier: is it useless in this case? Is it just baggage I am carrying over from the previous implementation?
Generally, there is not really a point in having an extra nginx pod. If you do, you end up with something of a double ingress: an nginx ingress controller pod already has nginx in it, and you can scale that up or down.
One reason you would want to keep it is backward compatibility: if, for example, you want to introduce an ingress gradually, you can create the new nginx ingress, flip traffic so it flows through both the new nginx ingress and your own nginx until all your pods are flipped, and then remove your own nginx step by step until none are left.
Another reason is to support a very specific nginx configuration that the nginx ingress controller does not support yet.
You may need to run your own nginx as a Deployment for the reasons listed in the above answer, plus you may need to scale the nginx Deployment independently, say to 10 replicas; you can't scale an Ingress like that. But in any case, you only need one of them.
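For completeness on the scaling point: the ingress controller is itself an ordinary Deployment, so it can be scaled horizontally too. A minimal strategic-merge patch sketch, assuming the common ingress-nginx defaults for the Deployment name and namespace (adjust to your install):
# controller-scale.yaml; apply with:
#   kubectl -n ingress-nginx patch deployment ingress-nginx-controller --patch-file controller-scale.yaml
# (name/namespace above are the usual ingress-nginx defaults, an assumption)
spec:
  replicas: 10   # the controller's nginx scales like any other Deployment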
