Istio host matching rule redirection problem - networking

I have two different services in Kubernetes. My goal is to expose these services, and the applications behind them, to the outside. The problem is that the application's index.html performs a redirect, which causes requests to bypass the URI matching rules.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: app1-vs
  namespace: customapp
spec:
  hosts:
  - mydomain.com
  gateways:
  - customapp-gateway
  http:
  - match:
    - uri:
        prefix: "/app1"
    route:
    - destination:
        host: app1.customapp.svc.cluster.local
        port:
          number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: app2-vs
  namespace: customapp
spec:
  hosts:
  - mydomain.com
  gateways:
  - customapp-gateway
  http:
  - match:
    - uri:
        prefix: "/app2"
    route:
    - destination:
        host: app2.customapp.svc.cluster.local
        port:
          number: 81
In other words, mydomain.com/app1/index.html redirects requests to mydomain.com/login, which falls outside the matching rules. How can I fix this?
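One possible workaround, sketched below, is to also match the redirect target so the follow-up request still reaches the intended backend. This is only a sketch and assumes /login is actually served by app1 (the question does not say which app owns that page):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: app1-vs
  namespace: customapp
spec:
  hosts:
  - mydomain.com
  gateways:
  - customapp-gateway
  http:
  - match:
    - uri:
        prefix: "/app1"
    - uri:
        prefix: "/login"   # hypothetical: covers the app's redirect target
    route:
    - destination:
        host: app1.customapp.svc.cluster.local
        port:
          number: 80
A cleaner long-term fix is usually to make the application redirect under its own prefix (for example /app1/login), since any absolute redirect otherwise escapes the matched path space.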

Related

Istio Virtual Service - Proxy to external HTTPS service

I'm trying to proxy HTTP requests with a specified URI prefix to an external HTTPS server.
The idea is to use our internal Nexus Repository Manager for NPM without losing the ability to run 'npm audit', as this GitHub project does, but implemented with Istio instead of deploying an extra app.
I configured a virtual service and a service entry to route the traffic to the external service. So far I have not been able to convert an HTTP request to an HTTPS request. Is there any way to do this?
Configuration:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vs-nexus
spec:
  hosts:
  - "test.com"
  gateways:
  - gateway-xy
  http:
  - match:
    - uri:
        prefix: /-/npm/v1/security/audits/
    route:
    - destination:
        port:
          number: 443
        host: registry.npmjs.org
  - route:
    - destination:
        port:
          number: 80
        host: nexus
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: npmjs-ext
spec:
  hosts:
  - registry.npmjs.org
  ports:
  - number: 443
    name: tls
    protocol: tls
  resolution: DNS
  location: MESH_EXTERNAL
Found a solution: you need to add a DestinationRule with TLS mode 'SIMPLE' to connect to an external HTTPS service.
The complete configuration for forwarding 'npm audit' requests to the public 'registry.npmjs.org' when using a private Nexus repository is:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vs
spec:
  hosts:
  - "test.com"
  gateways:
  - gateway
  http:
  # Route to the npm registry for audit requests
  # Like this: https://github.com/chovyy/npm-audit-proxy
  # See: https://istio.io/latest/blog/2019/proxy/
  - match:
    - uri:
        prefix: /-/npm/v1/security
    headers:
      request:
        set:
          host: "registry.npmjs.org"
    route:
    - destination:
        port:
          number: 443
        host: registry.npmjs.org
  # This is for custom Nexus repositories: rewrite the URI so that the
  # repository prefix is not forwarded to registry.npmjs.org
  - match:
    - uri:
        prefix: /repository/npm-test-repo/-/npm/v1/security
    rewrite:
      uri: /-/npm/v1/security
    headers:
      request:
        set:
          host: "registry.npmjs.org"
    route:
    - destination:
        port:
          number: 443
        host: registry.npmjs.org
  - route:
    - destination:
        port:
          number: 80
        host: nexus
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: npmjs-ext
spec:
  hosts:
  - registry.npmjs.org
  ports:
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: npmjs-ext
spec:
  host: registry.npmjs.org
  trafficPolicy:
    tls:
      mode: SIMPLE

nginx ingress controller routing doesn't work as expected

I have a Kubernetes cluster with an application (Deployment + ClusterIP Service), the nginx ingress controller, cert-manager, and a Let's Encrypt issuer.
Here is the Service:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  namespace: mynamespace
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
This is the Ingress YAML:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  namespace: mynamespace
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
  - hosts:
    - <myapp>.<myregion>.cloudapp.azure.com
    secretName: tls-secret
  rules:
  - host: <myapp>.<myregion>.cloudapp.azure.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service
            port:
              number: 80
It works correctly, responding at the URL https://<myapp>.<myregion>.cloudapp.azure.com.
Now I need to change the path as follows:
spec:
  tls:
  - hosts:
    - <myapp>.<myregion>.cloudapp.azure.com
    secretName: tls-secret
  rules:
  - host: <myapp>.<myregion>.cloudapp.azure.com
    http:
      paths:
      - path: /sub
        pathType: Prefix
        backend:
          service:
            name: myapp-service
            port:
              number: 80
I would expect to browse my app at https://<myapp>.<myregion>.cloudapp.azure.com/sub.
Instead I get
This <myapp>.<myregion>.cloudapp.azure.com page can’t be found
What am I doing wrong?
I tried to find examples online, but couldn't find any that helped me understand what's wrong.
EDIT
What happens behind the scenes (dev tools) is:
The browser sends a request to /sub
The ingress routes to the correct service, rewriting the url to /
The application receives the request correctly
The application wants to redirect the browser to a login url (e.g. /login)
The browser receives a redirect (302) to /login and executes it
The ingress doesn't see the /sub prefix in the redirect URL, so it doesn't know where to route it.
I guess the redirect URL should be /sub/login, not simply /login.
There should be an easy way to configure the ingress to fix this. Can someone point me in the right direction?
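One approach worth trying, sketched below, is to strip the /sub prefix with a rewrite capture group and send the app an X-Forwarded-Prefix header so it can build redirects that keep /sub. This is only a sketch and assumes the application honors X-Forwarded-Prefix; if it does not, the application itself needs to be configured with a base path:
# Hedged sketch: strip /sub before proxying and advertise the external
# prefix to the app (only works if the app honors X-Forwarded-Prefix).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  namespace: mynamespace
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/x-forwarded-prefix: /sub
spec:
  tls:
  - hosts:
    - <myapp>.<myregion>.cloudapp.azure.com
    secretName: tls-secret
  rules:
  - host: <myapp>.<myregion>.cloudapp.azure.com
    http:
      paths:
      - path: /sub(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: myapp-service
            port:
              number: 80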

Nginx Ingress Controller - Path-based routing

I am running an Nginx ingress controller and want to allow only a few paths for users to connect; everything else should be blocked or return a 403 error. How can I do that?
I only want users to be able to connect to /example; all other paths should be blocked.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress1
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: ingress.example.com
    http:
      paths:
      - path: /example
        backend:
          serviceName: ingress-svc
          servicePort: 80
Can I add an nginx server-snippet? Something like:
location <path> {
    # if the path does not match, deny the request
    deny all;
}
Make a custom backend using the manifests below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-http-backend
spec:
  selector:
    matchLabels:
      app: custom-http-backend
  template:
    metadata:
      labels:
        app: custom-http-backend
    spec:
      containers:
      - name: custom-http-backend
        image: inanimate/echo-server
        ports:
        - name: http
          containerPort: 8080
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: custom-http-backend
spec:
  selector:
    app: custom-http-backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Then add this rule to your ingress:
- host: ingress.example.com
  http:
    paths:
    - path: /
      backend:
        serviceName: custom-http-backend
        servicePort: 80
Additionally to what @Tarun Khosla mentioned, which is correct, there is another Stack Overflow question with examples that might be helpful. I am posting this as a community wiki answer for better visibility; feel free to expand on it.
There are two examples, provided by @Nick Rak:
I've faced the same issue and found the solution on GitHub.
To achieve your goal, you need to create two Ingresses. First, a default one without any restriction:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-test
spec:
  rules:
  - host: host.host.com
    http:
      paths:
      - path: /service-mapping
        backend:
          serviceName: service-mapping
          servicePort: 9042
Then, create a secret for auth as described in the doc:
Creating the htpasswd
$ htpasswd -c auth foo
New password: <bar>
New password:
Re-type new password:
Adding password for user foo
Creating the secret:
$ kubectl create secret generic basic-auth --from-file=auth
secret "basic-auth" created
A second Ingress with auth for the paths you need to restrict:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-with-auth
  annotations:
    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - foo"
spec:
  rules:
  - host: host.host.com
    http:
      paths:
      - path: /admin
        backend:
          serviceName: service_name
          servicePort: 80
According to sedooe's answer, this solution may have some issues.
And by @sedooe:
You can use the server-snippet annotation. This seems like exactly what you want to achieve.
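For illustration, a hedged sketch of what such a server-snippet could look like, assuming snippet annotations are enabled on your ingress-nginx installation; the regex is illustrative, not a tested rule:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress1
  annotations:
    kubernetes.io/ingress.class: nginx
    # Deny every location except /example and its sub-paths (illustrative only).
    nginx.ingress.kubernetes.io/server-snippet: |
      location ~* "^/(?!example($|/))" {
        deny all;
      }
spec:
  rules:
  - host: ingress.example.com
    http:
      paths:
      - path: /example
        backend:
          serviceName: ingress-svc
          servicePort: 80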

K8S Ingress 404 ssl backend

I have an issue I can't figure out. I have set up the Nginx Ingress Controller on my managed Kubernetes cluster. I'm trying to reach an SSL-enabled pod behind it and it does not work: I get a 404 Not Found from Nginx, and the certificate presented is the Nginx one. I deployed the controller using their GitHub repo and the default files, following their docs.
I set up a plain HTTP pod for testing purposes and it works, so the problem seems to be related to SSL.
I have tried many things to no avail. How can I reach an SSL pod behind nginx?
Here are the Deployment and Service resources (for the HTTPS one) I have set up:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: moulip-https
spec:
  selector:
    matchLabels:
      app: moulip-https
  replicas: 2
  template:
    metadata:
      labels:
        app: moulip-https
    spec:
      containers:
      - name: "wabam"
        image: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
        ports:
        - containerPort: 443
      imagePullSecrets:
      - name: regcrd
---
apiVersion: v1
kind: Service
metadata:
  name: https-svc
  labels:
    app: moulip-https
spec:
  ports:
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: moulip-https
and my Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/rewrite-target: /
  namespace: default
spec:
  rules:
  - host: https.moulip.lan
    http:
      paths:
      - backend:
          serviceName: https-svc
          servicePort: 443
  - host: test.moulip.lan
    http:
      paths:
      - backend:
          serviceName: hostname-svc
          servicePort: 80
Many thanks for any guidance you could provide me with.
You are missing the TLS configuration in the Ingress. Follow the sample below:
apiVersion: v1
kind: Secret
metadata:
  name: testsecret-tls
  namespace: default
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
type: kubernetes.io/tls
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - sslexample.foo.com
    secretName: testsecret-tls
  rules:
  - host: sslexample.foo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service1
          servicePort: 80
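Separately, the ssl-passthrough annotation used in the question only takes effect when the controller itself runs with the --enable-ssl-passthrough flag, and passthrough bypasses HTTP-level features such as rewrite-target. A hedged excerpt of the relevant controller argument, with the container name and layout assumed from the upstream ingress-nginx Deployment manifests:
# Excerpt from the ingress-nginx controller Deployment (names assumed);
# add the flag and let the controller pods restart.
spec:
  template:
    spec:
      containers:
      - name: controller
        args:
        - /nginx-ingress-controller
        - --enable-ssl-passthrough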

Can I use nginx ingress controller oauth2_proxy in Kubernetes with Azure Active Directory without cookies?

I am in the process of moving from an Azure web service to Azure Kubernetes to host an API. I have the solution working with nginx, oauth2_proxy, and Azure Active Directory. However, the solution requires a cookie to function.
As this is an API and external security will be managed by an AWS API Gateway with a custom authorizer, I would like the API Gateway to authenticate using a bearer token only and not require a cookie.
I have the solution working and have so far been testing from Postman. In Postman I have the bearer token but cannot find a way to get access without the cookie.
My application presently runs via AWS API Gateway and an Azure App Service with Azure Active Directory. The AWS API Gateway custom authorizer does not require a cookie in this case.
I have the following configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: oauth2-proxy
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /oauth2
        backend:
          serviceName: oauth2-proxy
          servicePort: 4180
  tls:
  - hosts:
    - mydomain.com
    secretName: tls-secret
---
# oauth2_proxy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: oauth2-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
      - env:
        - name: OAUTH2_PROXY_PROVIDER
          value: azure
        - name: OAUTH2_PROXY_AZURE_TENANT
          value: mytennantid
        - name: OAUTH2_PROXY_CLIENT_ID
          value: my clientid
        - name: OAUTH2_PROXY_CLIENT_SECRET
          value: my client secret
        - name: OAUTH2_PROXY_COOKIE_SECRET
          value: my cookie secret
        - name: OAUTH2_PROXY_HTTP_ADDRESS
          value: "0.0.0.0:4180"
        - name: OAUTH2_PROXY_UPSTREAM
          value: "file:///dev/null"
        image: machinedata/oauth2_proxy:latest
        imagePullPolicy: IfNotPresent
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    app: oauth2-proxy
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
spec:
  tls:
  - hosts:
    - mydomain.com
    secretName: tls-secret
  rules:
  - host: mydomain.com
    http:
      paths:
      - backend:
          serviceName: mayapp
          servicePort: 80
I would like to change this configuration so that a cookie is no longer required. If this is not possible, is there another way to achieve the same outcome?
Just drop the OAuth part on Kubernetes and have the API Gateway validate the requests; it can do exactly what you need. You can secure your Kubernetes cluster to accept requests only from the API Gateway, so you don't need to protect your endpoint from other calls.
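In practice that means stripping the oauth2-proxy annotations from the application ingress, roughly like the hedged sketch below; restricting inbound traffic to the API Gateway is assumed to happen elsewhere (for example with network policies or firewall rules):
# Hedged sketch: the application ingress with the oauth2-proxy
# auth-url/auth-signin annotations removed, so no cookie is required.
# Authentication is assumed to happen at the AWS API Gateway.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - mydomain.com
    secretName: tls-secret
  rules:
  - host: mydomain.com
    http:
      paths:
      - backend:
          serviceName: mayapp
          servicePort: 80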
