kubernetes nginx ingress controller rewrites - nginx

We have deployed a mockserver on Kubernetes. Currently, we only have one hostname, which is shared by a couple of other applications (each using a different path). However, the dashboard is not working because of the css location. What's the best way to solve this problem?
Failed to load resource: the server responded with a status of 404 (), hostname/mockserver/dashboard/static/css/main.477cab2a.chunk.css
The ingress manifest:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    app.kubernetes.io/instance: mock-server
    kubernetes.io/ingress.class: nginx-ingress-protected
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: mock-server-ingress
  namespace: my-namespace
spec:
  rules:
  - host: hostname
    http:
      paths:
      - backend:
          serviceName: mock-server-svc
          servicePort: 80
        path: /testing(/|$)(.*)
This works fine if I request a resource like hostname/testing/mockserver/expectation: the rewrite sends /mockserver/expectation to the backend.
However, the path hostname/testing/mockserver/dashboard is an HTML page which loads hostname/mockserver/dashboard, which doesn't exist. I can't wrap my head around this. Should I create another ingress with path /mockserver just to serve the css?

Your rewrite is working as expected. There are a few options you can choose from:
Create a second rule for /mockserver (the simplest solution).
Play with capture groups:
Captured groups are saved in numbered placeholders, chronologically,
in the form $1, $2 ... $n. These placeholders can be used as
parameters in the rewrite-target annotation.
Use a paid solution.
The easiest would be to go for option 1 and create a second rule which satisfies the path for the css.
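A minimal sketch of what that second rule could look like, as a separate Ingress without the rewrite annotation so asset requests like /mockserver/dashboard/static/... are passed through unchanged (the resource name is hypothetical, and this assumes the backend itself serves the dashboard under /mockserver):

```yaml
# Hypothetical second Ingress for the same host: no rewrite-target,
# so /mockserver/... paths reach mock-server-svc as-is.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mock-server-assets-ingress   # assumed name
  namespace: my-namespace
  annotations:
    kubernetes.io/ingress.class: nginx-ingress-protected
spec:
  rules:
  - host: hostname
    http:
      paths:
      - backend:
          serviceName: mock-server-svc
          servicePort: 80
        path: /mockserver
```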

Related

GKE - expose service with Ingress and Internal Load Balancing

I have REST API Web service on Internal GKE cluster which I would like to expose with internal HTTP load balancing.
Let's call this service "blue" service:
I would like to expose it in following mapping:
http://api.xxx.yyy.internal/blue/isalive -> http://blue-service/isalive
http://api.xxx.yyy.internal/blue/v1/get -> http://blue-service/v1/get
http://api.xxx.yyy.internal/blue/v1/create -> http://blue-service/v1/create
http://api.xxx.yyy.internal/ -> http://blue-service/ (expose Swagger)
I'm omitting the deployment yaml, since it's less relevant to the discussion.
But my service yaml looks like this:
apiVersion: v1
kind: Service
metadata:
  name: blue-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: blue-service
My Ingress configuration is the following:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: blue-ingress
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: api.xxx.yyy.internal
    http:
      paths:
      - path: /blue/*
        backend:
          serviceName: blue-service
          servicePort: 80
However, I'm receiving 404 for all requests. /blue/v1/get, /blue/v1/create and /blue/isalive returns 404.
In my "blue" application I log all my notFound requests and I can clearly see that my URIs are not being rewritten, the requests hitting the application are /blue/v1/get, /blue/v1/create and /blue/isalive.
What am I missing in Ingress configuration? How can I fix those rewrites?
I solved the problem and am writing it up here as a memo; hopefully someone will find it useful.
The first problem is that I had mixed annotation types: one for the GKE ingress controller and one for the NGINX ingress controller. Currently the GKE ingress controller doesn't support the URL rewrite feature, so I need to use the nginx ingress controller.
So I need to install an NGINX-based ingress controller. It can be done easily using a Helm chart or a deployment yaml. However, by default this controller exposes the ingress using an external load balancer, which is not what I want, so we need to modify the deployment chart or YAML file of this controller.
I'm not using Helm, so I downloaded the yaml itself using the wget command.
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/cloud/deploy.yaml
Open it in an editor and find the definition of the Service named ingress-nginx-controller in the namespace ingress-nginx. Add the following annotation:
cloud.google.com/load-balancer-type: "Internal"
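The relevant part of the edited Service could then look like the excerpt below (only the metadata is shown; the ports and selector stay as shipped in deploy.yaml):

```yaml
# Excerpt of the ingress-nginx-controller Service from deploy.yaml,
# with the internal load balancer annotation added.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  # ... ports and selector unchanged from deploy.yaml ...
```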
After that I can run the kubectl apply -f deploy.yaml command, which will create the ingress controller for me. It will take a few minutes to provision.
In addition I need to open a firewall rule which allows the master nodes to access the worker nodes on port 8443/tcp.
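A sketch of such a firewall rule with gcloud; the rule name, network, master CIDR, and node tag below are assumptions you would replace with your cluster's actual values:

```shell
# Allow the GKE control plane to reach the ingress-nginx admission
# webhook on the worker nodes (port 8443/tcp).
gcloud compute firewall-rules create allow-master-to-ingress-webhook \
  --network my-vpc \
  --source-ranges 172.16.0.0/28 \
  --target-tags my-gke-node-tag \
  --allow tcp:8443
```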
And the last item is an ingress yaml itself which should look like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    kubernetes.io/ingress.class: "nginx"
  name: blue-ingress
  namespace: default
spec:
  rules:
  - host: api.xxx.yyy.internal
    http:
      paths:
      - backend:
          serviceName: blue-service
          servicePort: 80
        path: /blue(/|$)(.*)

Issues with rewrite-target for iis site on kubernetes pod

We have several namespaces, each of which contains an instance of our product running on IIS on a Windows pod. We do not want to expose these pods to the internet, and as such are looking to enable a bastion VM on the same vnet to access them through an NGINX ingress controller.
This ingress controller is set up and working, but we are running into some issues. Our goal is to be able to route between instances of the application based on a path, e.g. nginxIP/instance1 routing to one instance and nginxIP/instance2 routing to a second.
The following is a sample ingress yaml file which we are using in the solution currently:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: instance1-ingress
  namespace: instance1
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /instance1(/|$)(.*)
        backend:
          serviceName: service
          servicePort: 80
This redirects to the root of the application successfully; however, there are lots of issues with the application loading css, images, and other scripts. It seems this is not being handled properly by the rewrite rule, but after trying several different configurations of paths and rewrite targets, as well as testing the app-root annotation, I am at a loss.
The other thing of note is that different behaviour can be observed depending on whether a trailing slash is used. For example, this image on the homepage works fine with a slash but not without:
http://10.240.10.10/instance1/
works - http://10.240.10.10/instance1/b3c7ad64-a4e1-4c32-a616-6153ff535a83.adapter
http://10.240.10.10/instance1
doesn't work - http://10.240.10.10/b3c7ad64-a4e1-4c32-a616-6153ff535a83.adapter
I'm hoping this offers some clues to what the issue might be, but after quite a while looking into this I'm at a bit of a loss.
You are saying http://10.240.10.10/b3c7ad64-a4e1-4c32-a616-6153ff535a83.adapter doesn't work because /instance1/ is missing from the URL.
The rewrite target rewrites only the request URL, not the URLs inside the returned pages, so the assets need to be requested under the /instance1/ path.
You will need to update the frontend code for this server-side routing to work.
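One commonly used workaround for the trailing-slash behaviour (an assumption on my part, not part of the original answer) is to have nginx redirect the bare path to its slash-terminated form via the configuration-snippet annotation, so that relative asset URLs resolve against /instance1/:

```yaml
# Hypothetical addition to the instance1-ingress annotations:
# redirect a request for /instance1 to /instance1/ before the
# rewrite-target rule applies.
nginx.ingress.kubernetes.io/configuration-snippet: |
  rewrite ^(/instance1)$ $1/ redirect;
```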

Nginx rewrite-target overwriting domain/suffix/index.php to domain/index.php

Recently I deployed a Kubernetes cluster which is running a WordPress instance and phpMyAdmin. I'm using the Nginx ingress controller to perform path-based routing for both services. Requests to / work without any hassle, but when I request domain.com/phpmyadmin/ I get a login page, after which I am redirected to domain.com/index.php instead of domain.com/phpmyadmin/index.php. Please suggest a possible workaround. Thank you guys for the support :)
My ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/rewrite-target: "/$2"
    # ingress.kubernetes.io/rewrite-target: "^/phpmyadmin/"
spec:
  rules:
  - host: example.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: wordpress
          servicePort: 80
      - path: /phpmyadmin(/|$)(.*)
        backend:
          serviceName: phpmyadmin
          servicePort: 80
I'd say the issue is not on the Nginx Ingress side.
nginx.ingress.kubernetes.io/rewrite-target: "/$2"
...
- path: /phpmyadmin(/|$)(.*)
Should work properly for you.
However, there is a second part: the configuration of phpmyadmin itself. As you didn't provide it, I can only guess what could cause this issue.
As mentioned in the phpmyadmin docs, sometimes you need to set $cfg['PmaAbsoluteUri']:
In some setups (like separate SSL proxy or load balancer) you might have to set $cfg['PmaAbsoluteUri'] for correct redirection.
Based on this, much depends on how you configured PMA_ABSOLUTE_URI: is it http://somedomain.com/phpmyadmin or something different?
This is important, as you might encounter a situation like the following:
When you enter http://somedomain.com/phpmyadmin and log in, you are redirected to http://somedomain.com/, so the Ingress routes you to the path: / set in the ingress.
If you then enter http://somedomain.com/phpmyadmin again, you will see the phpmyadmin content, as if you were already logged in.
You could try to add an env in your phpmyadmin deployment. It would look similar to the below:
env:
- name: PMA_ABSOLUTE_URI
  value: http://somedomain.com/phpmyadmin/
One last thing: it's not recommended to expose phpmyadmin without https.
For some extra information you can read this article.
In short:
The Nginx ingress configuration looks OK.
Check your phpmyadmin configuration, especially PMA_ABSOLUTE_URI.

Multiple docker apps running nginx at multiple different subpaths

I'm attempting to run several Docker apps in a GKE instance, with a load balancer setup exposing them. Each app comprises a simple node.js app with nginx to serve the site; a simple nginx config exposes the apps with a location block responding to /. This works well locally when developing since I can run each pod on a separate port, and access them simply at 127.0.0.1:8080 or similar.
The problem I'm encountering is that when using the GCP load balancer, whilst I can easily route traffic to the Kubernetes services such that https://example.com/ maps to my foo service/pod and https://example.com/bar goes to my bar service, the bar pod responds with a 404 since the path /bar doesn't match the path specified in the location block.
The number of these pods will scale a lot so I do not wish to manually know ahead of time what path each pod will be under, nor do I wish to embody this in my git repo.
Is there a way I can dynamically define the path the location block matches, for example via an environment variable, such that I could define it as part of the Helm charts I use to deploy these services? Alternatively is it possible to match all paths? Is that a viable solution, or just asking for problems?
Thanks for your help.
Simply use ingress. It will allow you to map different paths to different backend Services. It is very well explained both in GCP docs as well as in the official kubernetes documentation.
A typical ingress object definition may look as follows:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  backend:
    serviceName: my-products
    servicePort: 60001
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: my-products
          servicePort: 60000
      - path: /discounted
        backend:
          serviceName: my-discounted-products
          servicePort: 80
      - path: /special
        backend:
          serviceName: special-offers
          servicePort: 80
      - path: /news
        backend:
          serviceName: news
          servicePort: 80
When you apply your ingress definition on GKE, a load balancer is created automatically. Note that all Services may use the same, standard http port; you don't have to use any custom ports.
You may want to specify a default backend, present in the above example (the backend section right under spec), but it's optional. It ensures that:
Any requests that don't match the paths in the rules field are sent to
the Service and port specified in the backend field. For example, in
the following Ingress, any requests that don't match / or /discounted
are sent to a Service named my-products on port 60001.
The only problem you may encounter when using the default ingress controller available on GKE is that, for the time being, it doesn't support rewrites.
If your nginx pods expose app content only on the "/" path, the lack of rewrite support shouldn't be a limitation at all, and as far as I understand this applies in your case:
Each app comprises a simple node.js app with nginx to serve the site;
a simple nginx config exposes the apps with a location block
responding to /
However if you decide at some point that you need mentioned rewrites because e.g. one of your apps isn't exposed under / but rather /bar within the Pod you may decide to deploy nginx ingress controller which can be also done pretty easily on GKE.
So you will only need it in the following scenario: a user accesses the ingress IP followed by /foo -> the request is not only routed to the specific backend Service that exposes your nginx Pod, but the original path (/foo) also needs to be rewritten to the new path (/bar) under which the application is exposed within the Pod.
UPDATE:
Thank you for your reply. The above ingress configuration is very
similar to what I've already configured forwarding /foo and /bar to
different pods. The issue is that the path gets forwarded, and (after
doing some more research on the issue) I believe I need to rewrite the
URL that's sent to the pod, since the location / { ... } block in my
nginx config won't match against the received path of /foo or /bar. –
aodj Aug 14 at 9:17
Well, you're right. The original access path, e.g. /foo, indeed gets forwarded to the target Pod. So choosing the /foo path, apart from leading you to the respective backend defined in the ingress resource, implies that the target nginx server running in the Pod must also serve its content under the /foo path.
I verified it with the GKE ingress and can confirm, by checking Pod logs, that an http request sent to the nginx Pod through the /foo path comes to the Pod as a request for /usr/share/nginx/html/foo, while it serves its content under /, not /foo, from /usr/share/nginx/html. Requesting something that doesn't exist on the target server inevitably leads to a 404 error.
As I mentioned before, the default ingress controller available on GKE doesn't support rewrites, so if you want to use it for some reason, reconfiguring your target nginx servers seems the only way to make it work.
Fortunately we have another option: the nginx ingress controller. It supports rewrites, so it can easily solve our problem. We can deploy it on our GKE cluster by running the two following commands:
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/cloud/deploy.yaml
Yes, it's really that simple! You can take a closer look at the installation process in official docs.
Then we can apply the following ingress resource definition:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: nginx-deployment-1
          servicePort: 80
        path: /foo(/|$)(.*)
      - backend:
          serviceName: nginx-deployment-2
          servicePort: 80
        path: /bar(/|$)(.*)
Note that we used kubernetes.io/ingress.class: "nginx" annotation to select our newly deployed nginx-ingress controller to handle this ingress resource rather than the default GKE-ingress controller.
The rewrites make sure that the original access path gets rewritten before reaching the target nginx Pod. So it's perfectly fine that both sets of Pods, exposed by the nginx-deployment-1 and nginx-deployment-2 Services, serve their content under "/".
If you want to quickly check how it works on your own, you can use the following Deployments:
nginx-deployment-1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-1
  labels:
    app: nginx-1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-1
  template:
    metadata:
      labels:
        app: nginx-1
    spec:
      initContainers:
      - name: init-myservice
        image: nginx:1.14.2
        command: ['sh', '-c', "echo DEPLOYMENT-1 > /usr/share/nginx/html/index.html"]
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: cache-volume
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: cache-volume
      volumes:
      - name: cache-volume
        emptyDir: {}
nginx-deployment-2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-2
  labels:
    app: nginx-2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-2
  template:
    metadata:
      labels:
        app: nginx-2
    spec:
      initContainers:
      - name: init-myservice
        image: nginx:1.14.2
        command: ['sh', '-c', "echo DEPLOYMENT-2 > /usr/share/nginx/html/index.html"]
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: cache-volume
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: cache-volume
      volumes:
      - name: cache-volume
        emptyDir: {}
And expose them via Services by running:
kubectl expose deployment nginx-deployment-1 --type NodePort --target-port 80 --port 80
kubectl expose deployment nginx-deployment-2 --type NodePort --target-port 80 --port 80
You may even omit --type NodePort, as the nginx-ingress controller also accepts ClusterIP Services.

strip_path and preserve_host attributes in KongIngress object. What do they do?

I have a KongIngress object with configuration attributes, attached to an Ingress resource that uses Kong as the ingress controller. I currently have this configuration:
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: echo-site-ingress
  namespace: hello-world
  annotations:
    kubernetes.io/ingress.class: "kong"
proxy:
  protocols:
  - http
  - https
  # path: /
route:
  methods:
  - POST
  - GET
  strip_path: true
  preserve_host: true
---
# My Ingress resource
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: kong
    plugins.konghq.com: helloworld-customer-acceptance-basic-auth, hello-world-customer-acceptance-acl
  name: echo-site-ingress
  namespace: hello-world
spec:
  rules:
  - host: hello-world.bgarcial.me
    http:
      paths:
      - backend:
          serviceName: echo
          servicePort: 80
        path: /
  tls:
  - hosts:
    - hello-world.bgarcial.me
    secretName: letsencrypt-prod
The questions are:
What do the strip_path and preserve_host attributes do in my kind: KongIngress object?
I read the documentation here, but it is not clear to me.
Regarding strip_path I see this:
When matching a Route via one of the paths, strip the matching prefix from the upstream request URL. Defaults to true.
But as we can see, I am not using the path attribute inside my KongIngress object (I commented it out, for illustration purposes, in the manifest above).
So how is the strip_path attribute value applied here?
Is it because I am using the path: / attribute in my Ingress resource, and my Ingress and KongIngress resources work together?
I really don't have a clue, but I would like to know what happens behind the scenes.
When the preserve_host annotation is enabled, the host header of the request is sent as-is to the Service in Kubernetes. This is well explained in the documentation.
strip_path can be configured to strip the matching part of your path from the HTTP request before it is proxied.
If it is set to "true", the part of the path specified in the Ingress rule is stripped out before the request is sent to the service. For example, when it is set to "true", the Ingress rule has a path of /foo, and the HTTP request that matches the rule has the path /foo/bar/something, then the request sent to the Kubernetes service will have the path /bar/something.
So when you use curl $YOUR_HOST/foo/bar/something, under the real path value in the output you will see /bar/something.
If it is set to "false", no path manipulation is performed; in your case (path: /) it can be set either way, as there is effectively nothing to strip.
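To make the behaviour concrete, here is a small hypothetical illustration, assuming the Ingress rule used path: /foo instead of /:

```yaml
# KongIngress route fragment (hypothetical example).
# With strip_path: true, Kong removes the matched prefix:
#   GET /foo/bar/something  ->  upstream sees /bar/something
# With strip_path: false, the path is proxied untouched:
#   GET /foo/bar/something  ->  upstream sees /foo/bar/something
route:
  strip_path: true
```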
