Nginx rewrite-target rewriting domain/suffix/index.php to domain/index.php - nginx

Recently I deployed a Kubernetes cluster running a WordPress instance and phpMyAdmin. I'm using the Nginx ingress controller to perform path-based routing for both services. Requests to / work without any hassle, but when I request domain.com/phpmyadmin/ I get a login page, after which I am redirected to domain.com/index.php instead of domain.com/phpmyadmin/index.php. Please suggest a possible workaround. Thank you for the support :)
My ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/rewrite-target: "/$2"
    # ingress.kubernetes.io/rewrite-target: "^/phpmyadmin/"
spec:
  rules:
  - host: example.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: wordpress
          servicePort: 80
      - path: /phpmyadmin(/|$)(.*)
        backend:
          serviceName: phpmyadmin
          servicePort: 80

I'd say the issue is not on the Nginx Ingress side.
nginx.ingress.kubernetes.io/rewrite-target: "/$2"
...
- path: /phpmyadmin(/|$)(.*)
Should work properly for you.
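For clarity (my own illustration, not part of the original manifest), this is how the capture groups resolve for a request that hits the phpmyadmin path:
# request: /phpmyadmin/index.php, matched by path: /phpmyadmin(/|$)(.*)
#   $1 = "/"          (the "(/|$)" group)
#   $2 = "index.php"  (the "(.*)" group)
# rewrite-target: "/$2" therefore forwards the request to the phpmyadmin service as /index.php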
However, there is a second part: the configuration of phpMyAdmin itself. As you didn't provide that configuration, I can only guess what could be causing the issue.
As mentioned in the phpMyAdmin docs, sometimes you need to set $cfg['PmaAbsoluteUri']:
In some setups (like separate SSL proxy or load balancer) you might have to set $cfg['PmaAbsoluteUri'] for correct redirection.
Based on that, a lot depends on how you configured PMA_ABSOLUTE_URI: is it http://somedomain.com/phpmyadmin or something different?
This is important, as you might run into a situation like the following:
When you open http://somedomain.com/phpmyadmin and log in, you are redirected to http://somedomain.com/, so the Ingress routes you to the path / set in the ingress.
If you then open http://somedomain.com/phpmyadmin again, you can see the phpMyAdmin content as if you were already logged in.
You could try adding an env variable to your phpmyadmin deployment. It would look similar to the one below:
env:
- name: PMA_ABSOLUTE_URI
  value: http://somedomain.com/phpmyadmin/
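For context, a minimal sketch of where that env block sits in the phpmyadmin Deployment; the container name and image are my assumptions, adjust them to match your manifest:
containers:
- name: phpmyadmin              # assumed container name
  image: phpmyadmin/phpmyadmin  # assumed image, use whatever your deployment already runs
  ports:
  - containerPort: 80
  env:
  - name: PMA_ABSOLUTE_URI
    value: http://somedomain.com/phpmyadmin/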
One last thing: it's not recommended to expose phpMyAdmin without HTTPS.
For some extra information you can read this article.
In short:
The Nginx ingress configuration looks OK.
Check your phpMyAdmin configuration, especially PMA_ABSOLUTE_URI.

Related

kubernetes nginx ingress controller rewrites

We have deployed a mockserver on Kubernetes. Currently, we only have one hostname, which is shared by a couple of other applications (each using a different path). However, the dashboard is not working because of the CSS location. What's the best way to solve this problem?
Failed to load resource: the server responded with a status of 404 (), hostname/mockserver/dashboard/static/css/main.477cab2a.chunk.css
The ingress manifest:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    app.kubernetes.io/instance: mock-server
    kubernetes.io/ingress.class: nginx-ingress-protected
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: mock-server-ingress
  namespace: my-namespace
spec:
  rules:
  - host: hostname
    http:
      paths:
      - backend:
          serviceName: mock-server-svc
          servicePort: 80
        path: /testing(/|$)(.*)
This works fine if I request a resource like hostname/testing/mockserver/expectation; the rewrite sends /mockserver/expectation to the backend.
However, for the path hostname/testing/mockserver/dashboard, the response is an HTML page which then loads hostname/mockserver/dashboard, which doesn't exist. I can't wrap my head around this. Should I create another ingress with path /mockserver just to serve the CSS?
Your rewrite is working as expected. However,
there are some options you can choose from:
Create a second rule for the /mockserver (the simplest solution).
Play with capture groups:
Captured groups are saved in numbered placeholders, chronologically,
in the form $1, $2 ... $n. These placeholders can be used as
parameters in the rewrite-target annotation.
Use a paid solution.
The easiest would be to go for option 1 and create a second rule which would satisfy the path for the css.
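Because the rewrite-target annotation applies to the whole Ingress object, the cleanest way to express that second rule is probably a separate Ingress without a rewrite, so /mockserver/... reaches the backend unchanged. A minimal sketch, reusing the names from the question (the manifest itself is my assumption):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    app.kubernetes.io/instance: mock-server
    kubernetes.io/ingress.class: nginx-ingress-protected
  name: mock-server-assets-ingress   # assumed name
  namespace: my-namespace
spec:
  rules:
  - host: hostname
    http:
      paths:
      - backend:
          serviceName: mock-server-svc
          servicePort: 80
        path: /mockserver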

Issues with rewrite-target for iis site on kubernetes pod

We have several namespaces, each of which contains an instance of our product running on IIS on a Windows pod. We do not want to expose these pods to the internet and as such are looking to enable a bastion VM on the same vnet to access them through an NGINX ingress controller.
This ingress controller is set up and working, but we are running into some issues. Our goal is to be able to route between instances of the application based on a path, e.g. nginxIP/instance1 routing to one instance and nginxIP/instance2 routing to a second.
The following is a sample ingress yaml file which we are using in the solution currently:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: instance1-ingress
  namespace: instance1
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /instance1(/|$)(.*)
        backend:
          serviceName: service
          servicePort: 80
This redirects to the root of the application successfully; however, there are lots of issues with the application loading CSS, images and other scripts. It seems this is not being handled properly by the rewrite rule, but after trying several different configurations of paths and rewrite targets, as well as testing the app-root annotation, I am at a loss.
The other thing of note here is that different behaviour can be observed depending on whether a trailing slash is used or not. For example, this image on the homepage works fine when you use a slash but not without:
http://10.240.10.10/instance1/
works - http://10.240.10.10/instance1/b3c7ad64-a4e1-4c32-a616-6153ff535a83.adapter
http://10.240.10.10/instance1
doesn't work - http://10.240.10.10/b3c7ad64-a4e1-4c32-a616-6153ff535a83.adapter
I'm hoping this offers some clues to what the issue might be, but after quite a while looking into this I'm now at a bit of a loss.
You are saying this doesn't work - http://10.240.10.10/b3c7ad64-a4e1-4c32-a616-6153ff535a83.adapter - because you are missing /instance1/ in the URL.
The rewrite target does not rewrite the URLs inside the responses, so asset and link URLs still need the /instance1/ prefix.
You will need to update the frontend code for this server-side routing to work.
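For the trailing-slash difference specifically, one thing that might be worth trying (my suggestion, not something from the question) is a server-snippet annotation on the ingress that permanently redirects /instance1 to /instance1/, so the browser resolves relative asset URLs against the right base path:
metadata:
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      # redirect the bare /instance1 to /instance1/ so relative URLs keep the prefix
      rewrite ^(/instance1)$ $1/ permanent;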

Multiple docker apps running nginx at multiple different subpaths

I'm attempting to run several Docker apps in a GKE instance, with a load balancer setup exposing them. Each app comprises a simple Node.js app with nginx to serve the site; a simple nginx config exposes the apps with a location block responding to /. This works well locally when developing, since I can run each pod on a separate port and access them simply at 127.0.0.1:8080 or similar.
The problem I'm encountering is that when using the GCP load balancer, whilst I can easily route traffic to the Kubernetes services such that https://example.com/ maps to my foo service/pod and https://example.com/bar goes to my bar service, the bar pod responds with a 404 since the path /bar doesn't match the path specified in the location block.
The number of these pods will scale a lot, so I do not wish to manually know ahead of time what path each pod will be under, nor do I wish to embody this in my git repo.
Is there a way I can dynamically define the path the location block matches, for example via an environment variable, such that I could define it as part of the Helm charts I use to deploy these services? Alternatively, is it possible to match all paths? Is that a viable solution, or just asking for problems?
Thanks for your help.
Simply use an Ingress. It will allow you to map different paths to different backend Services. It is very well explained both in the GCP docs and in the official Kubernetes documentation.
A typical Ingress object definition may look as follows:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  backend:
    serviceName: my-products
    servicePort: 60001
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: my-products
          servicePort: 60000
      - path: /discounted
        backend:
          serviceName: my-discounted-products
          servicePort: 80
      - path: /special
        backend:
          serviceName: special-offers
          servicePort: 80
      - path: /news
        backend:
          serviceName: news
          servicePort: 80
When you apply your ingress definition on GKE, a load balancer is created automatically. Note that all Services may use the same standard HTTP port and you don't have to use any custom ports.
You may want to specify a default backend, as in the above example (the backend section right under spec), but it's optional. It ensures that:
Any requests that don't match the paths in the rules field are sent to
the Service and port specified in the backend field. For example, in
the following Ingress, any requests that don't match / or /discounted
are sent to a Service named my-products on port 60001.
The only problem you may encounter when using the default ingress controller available on GKE is that, for the time being, it doesn't support rewrites.
If your nginx pods expose app content only on the / path, the lack of rewrite support shouldn't be a limitation at all, and as far as I understand this applies in your case:
Each app comprises a simple node.js app with nginx to serve the site;
a simple nginx config exposes the apps with a location block
responding to /
However, if you decide at some point that you need the mentioned rewrites, because e.g. one of your apps isn't exposed under / but rather under /bar within the Pod, you may decide to deploy the nginx ingress controller, which can also be done pretty easily on GKE.
So you will only need it in the following scenario: a user accesses the ingress IP followed by /foo -> the request is not only routed to the specific backend Service that exposes your nginx Pod, but the original path (/foo) also needs to be rewritten to the new path (/bar) under which the application is exposed within the Pod.
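For that scenario, the annotation and path could look roughly like the snippet below; the /bar prefix and the service name are assumptions used purely for illustration:
nginx.ingress.kubernetes.io/rewrite-target: /bar/$2
...
- path: /foo(/|$)(.*)
  backend:
    serviceName: my-app    # assumed service name
    servicePort: 80
With that, a request for /foo/index.html is rewritten to /bar/index.html before it reaches the Pod.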
UPDATE:
Thank you for your reply. The above ingress configuration is very
similar to what I've already configured forwarding /foo and /bar to
different pods. The issue is that the path gets forwarded, and (after
doing some more research on the issue) I believe I need to rewrite the
URL that's sent to the pod, since the location / { ... } block in my
nginx config won't match against the received path of /foo or /bar. –
aodj Aug 14 at 9:17
Well, you're right. The original access path, e.g. /foo, indeed gets forwarded to the target Pod. So choosing the /foo path, apart from leading you to the respective backend defined in the ingress resource, implies that the target nginx server running in the Pod must also serve its content under the /foo path.
I verified it with the GKE ingress and can confirm, by checking the Pod logs, that an HTTP request sent to the nginx Pod through the /foo path indeed arrives at the Pod as a request for /usr/share/nginx/html/foo, while it serves its content under /, not /foo, from /usr/share/nginx/html. So requesting something that doesn't exist on the target server inevitably leads to a 404 error.
As I mentioned before, the default ingress controller available on GKE doesn't support rewrites, so if you want to use it for some reason, reconfiguring your target nginx servers seems to be the only way to make it work.
Fortunately we have another option: the nginx ingress controller. It supports rewrites, so it can easily solve our problem. We can deploy it on our GKE cluster by running the two following commands:
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/cloud/deploy.yaml
Yes, it's really that simple! You can take a closer look at the installation process in official docs.
Then we can apply the following ingress resource definition:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: nginx-deployment-1
          servicePort: 80
        path: /foo(/|$)(.*)
      - backend:
          serviceName: nginx-deployment-2
          servicePort: 80
        path: /bar(/|$)(.*)
Note that we used the kubernetes.io/ingress.class: "nginx" annotation to select our newly deployed nginx-ingress controller to handle this ingress resource, rather than the default GKE ingress controller.
The rewrites make sure that the original access path is rewritten before the request reaches the target nginx Pod. So it's perfectly fine that both sets of Pods, exposed by the nginx-deployment-1 and nginx-deployment-2 Services, serve their contents under /.
If you want to quickly check how it works on your own, you can use the following Deployments:
nginx-deployment-1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-1
  labels:
    app: nginx-1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-1
  template:
    metadata:
      labels:
        app: nginx-1
    spec:
      initContainers:
      - name: init-myservice
        image: nginx:1.14.2
        command: ['sh', '-c', "echo DEPLOYMENT-1 > /usr/share/nginx/html/index.html"]
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: cache-volume
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: cache-volume
      volumes:
      - name: cache-volume
        emptyDir: {}
nginx-deployment-2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-2
  labels:
    app: nginx-2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-2
  template:
    metadata:
      labels:
        app: nginx-2
    spec:
      initContainers:
      - name: init-myservice
        image: nginx:1.14.2
        command: ['sh', '-c', "echo DEPLOYMENT-2 > /usr/share/nginx/html/index.html"]
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: cache-volume
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: cache-volume
      volumes:
      - name: cache-volume
        emptyDir: {}
And expose them via Services by running:
kubectl expose deployment nginx-deployment-1 --type NodePort --target-port 80 --port 80
kubectl expose deployment nginx-deployment-2 --type NodePort --target-port 80 --port 80
You may even omit --type NodePort, as the nginx-ingress controller also accepts ClusterIP Services.
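For completeness, a ClusterIP equivalent of the first expose command, written as a manifest (my own sketch, functionally the same minus the NodePort type):
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment-1
spec:
  selector:
    app: nginx-1        # matches the Pod labels from nginx-deployment-1.yaml
  ports:
  - port: 80
    targetPort: 80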

Nginx Ingress pass whole url to oauth proxy as Redirect

I am running a Kubernetes cluster with an nginx-ingress fronting a couple of web apps. Because Nginx doesn't support SSO/OIDC by default, I use an oauth proxy for authentication.
In detail, I use oauth2_proxy (https://github.com/pusher/oauth2_proxy) with Azure AD.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-internal
  namespace: ingress-nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/auth-url: "https://example.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://example.com/oauth2/start?rd=$escaped_request_uri"
    nginx.ingress.kubernetes.io/auth-response-headers: "authorization, x-auth-request-user, x-auth-request-email, x_auth_request_access_token"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /home(/|$)(.*)
        backend:
          serviceName: app-homepage-frontend-service
          servicePort: 80
      - path: /homepage-backend(/|$)(.*)
        backend:
          serviceName: app-homepage-backend-service
          servicePort: 80
I skipped some details like TLS. In general everything is working; only verified users are able to access the web pages.
The issue is that my frontend is written in Angular, which uses hash-routing. If I try to enter a deep route like
https://example.com/home/#/page1/subpage2
only the base path (/home) is passed as the redirect URL. So when I'm authorized successfully, I get redirected to https://example.com/home.
Is there any variable instead of $escaped_request_uri which passes the whole URL?
Please try the process below; it might help.
Adding a state parameter may help with oauth2_proxy.
State parameter
The state parameter preserves the state prior to the authentication request: a randomly generated state value is passed in the request to authenticate, and in the callback the provider adds that state back, i.e. the oauth2_proxy-generated ID. oauth2_proxy then reads that ID, recovers the original URL and responds.
See the link below:
https://dev.bitly.com/v4_documentation.html
Bitly's oauth2_proxy added the same in their code:
https://github.com/bitly/oauth2_proxy/blob/master/providers/provider_default.go#L87-L89

Nginx ingress resource - Redirect from non-www to www (SSL doesn't work)

Use case
I deployed the nginx ingress controller in my Kubernetes cluster using this helm chart:
https://github.com/helm/charts/tree/master/stable/nginx-ingress
I created an ingress resource for my frontend-serving webserver and it is supposed to redirect from non-www to the www version. I am using SSL as well.
The problem
When I visit the www version of my website everything is fine and nginx serves the page using my Let's Encrypt SSL certificate (which exists as a secret in the right namespace). However, when I visit the non-www version of the website I get the failing SSL certificate page in my browser (NET::ERR_CERT_AUTHORITY_INVALID) and one can see the page is served using the Kubernetes ingress fake certificate. I assume that's also the reason why the redirect to the www version does not work at all.
This is my ingress resource (actual hostnames have been redacted):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
  creationTimestamp: 2018-10-03T19:34:41Z
  generation: 3
  labels:
    app: nodejs
    chart: nodejs-1.0.1
    heritage: Tiller
    release: example-frontend
  name: example-frontend
  namespace: microservices
  resourceVersion: "5700380"
  selfLink: /apis/extensions/v1beta1/namespaces/microservices/ingresses/example-frontend
  uid: 5f6d6500-c743-11e8-8aaf-42010a8401fa
spec:
  rules:
  - host: www.example.io
    http:
      paths:
      - backend:
          serviceName: example-frontend
          servicePort: http
        path: /
  tls:
  - hosts:
    - example.io
    - www.example.io
    secretName: example-frontend-tls
The question
Why doesn't nginx use the provided certificate on the non-www version as well?
It looks like you fixed the issue of receiving an invalid certificate by adding an additional rule.
The issue with the redirect looks like it's related to this, and it's not fixed as of this writing. However, there is a workaround, described at the same link:
nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($host = 'foo.com' ) {
    rewrite ^ https://www.foo.com$request_uri permanent;
  }
I fixed it by adding the non-www version to the rules. The redirect still doesn't work, but the page is now served using the correct SSL certificate.
- host: example.io
  http:
    paths:
    - backend:
        serviceName: example-frontend
        servicePort: http
      path: /
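Putting the two answers together, the resulting spec would look roughly like this (a sketch only, with the redacted example.io hostnames); the from-to-www-redirect annotation stays in metadata, and the configuration-snippet above remains an optional fallback if that redirect still doesn't fire:
spec:
  rules:
  - host: example.io
    http:
      paths:
      - backend:
          serviceName: example-frontend
          servicePort: http
        path: /
  - host: www.example.io
    http:
      paths:
      - backend:
          serviceName: example-frontend
          servicePort: http
        path: /
  tls:
  - hosts:
    - example.io
    - www.example.io
    secretName: example-frontend-tls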
