Spring MVC deployed on Kubernetes with Ingress controller always returns 302

I am trying to deploy a Spring MVC application on Kubernetes with microk8s and the Ingress controller. When I hit the URL in the browser, the login page is displayed, which is the expected behavior. But when I enter the credentials and hit login, the home page is never displayed: a 302 response code is returned for the home page and the browser redirects to the login page again. How can I avoid those 302 response codes and redirect properly to the other pages?
This is the config file for the application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app: app
  replicas: 3
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: mycustomimagename
          ports:
            - containerPort: 8080
---
apiVersion: v1 # Kubernetes API version
kind: Service # Kubernetes resource kind we are creating
metadata: # Metadata of the resource kind we are creating
  name: app
spec:
  selector:
    app: app
  ports:
    - protocol: "TCP"
      port: 8080 # The port the Service exposes inside the cluster
      targetPort: 8080 # The container port the Service forwards traffic to
  type: NodePort
and this is the Ingress config file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: spring-ingress
spec:
  rules:
    - host: "example.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 8080
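Given replicas: 3 and no shared session store in these manifests, one plausible cause (a hedged guess, not confirmed by the question) is that Spring's default in-memory session is pod-local: the login POST creates a session on one pod, the follow-up request for the home page lands on a different pod that has no such session, and Spring Security answers with a 302 back to the login page. A minimal sketch using ingress-nginx cookie-based session affinity (the annotations are documented ingress-nginx features; the cookie name and age are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: spring-ingress
  annotations:
    # Pin each client to one pod so the pod-local session survives
    # the post-login redirect (values below are illustrative).
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  # rules unchanged from the manifest above

A shared session store (for example Spring Session backed by Redis) would remove the need for affinity entirely.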

Related

How to set up an ingress controller for Kubernetes?

Sorry I am new with Kubernetes and everything...
I have a Java back-end in a ClusterIP service and a front-end in a NodePort service. I try to make a request to the backend from the front-end (from the browser) and it doesn't work.
I saw that I needed to set up an ingress controller in order to make it work, but each time I do a "minikube tunnel" and go to my localhost, I get an NGINX 404 error. And the address http://toto.virtualisation doesn't work either (it's as if it doesn't exist).
Here is the setup of my front-end and my ingress controller in my yaml file:
# Front Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-end-deployment
spec:
  selector:
    matchLabels:
      app: front-end
  template:
    metadata:
      labels:
        app: front-end
    spec:
      containers:
        - name: front-end-container
          image: oxasa/front-end-image:latest
---
# Front Service
apiVersion: v1
kind: Service
metadata:
  name: front-end-service
spec:
  ports:
    - name: http
      targetPort: 80
      port: 80
  type: NodePort
  selector:
    app: front-end
---
# Front Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: front-end-ingress
spec:
  rules:
    - host: toto.virtualisation
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: front-end-service
                port:
                  number: 80
If you see anything that needs to be done to make it work...
Try adding

spec:
  ingressClassName: nginx

to the Ingress resource to ensure nginx picks up the created Ingress.
Also, an Ingress is not required for service-to-service communication. You can use the Kubernetes internal DNS from your front-end service: the front-end can reach the backend at something like {service-name}.{namespace}.svc.cluster.local, as in the sketch below.
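For example, assuming a backend Service named back-end-service in the default namespace listening on port 8080 (all three are illustrative, not taken from the question), the front-end could receive the backend URL through an environment variable:

# Hypothetical fragment of the front-end Deployment's container spec;
# the Service name, namespace, and port are assumptions for illustration.
containers:
  - name: front-end-container
    image: oxasa/front-end-image:latest
    env:
      - name: BACKEND_URL
        # <service>.<namespace>.svc.cluster.local resolves via cluster DNS
        value: "http://back-end-service.default.svc.cluster.local:8080"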

K8S Ingress controller, cert-manager and Let's Encrypt SSL not working

I have created a brand new K8S cluster
I have created the Ingress nginx controller.
The controller created a namespace with all of the required Pods, Svcs and etc.
I have created an Ingress object that routes the traffic to a Deployment service with TLS enabled.
I have created a cluster issuer object.
When inspecting with kubectl describe cert, everything is okay and ready.
The same goes for kubectl describe clusterissuer.
When doing curl https://example.com/ it returns the following error:
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
Without SSL, access from outside is enabled and works properly; when I add the SSL configuration back to the Ingress object, it fails again.
ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - k8s-poc.example.com
      secretName: echo-tls
  rules:
    - host: k8s-poc.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test-svc
                port:
                  number: 3333
test-depl.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-depl
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: mydockeruser/test:42
          ports:
            - containerPort: 3333
      imagePullSecrets:
        - name: docker-regcred
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: test-svc
spec:
  selector:
    app: test
  ports:
    - name: http
      protocol: TCP
      port: 3333
      targetPort: 3333
prod-issuer.yaml:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: my#email.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
      - http01:
          ingress:
            class: nginx
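As a side note, the cert-manager.io/v1alpha2 API has since been removed; on cert-manager v1.0 and later the same issuer is written against cert-manager.io/v1, and since ClusterIssuer is cluster-scoped the namespace field is simply ignored. A sketch of the v1 form:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod # cluster-scoped, so no namespace
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: my#email.com # kept verbatim from the question
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx

It is also worth verifying that minimal-ingress is actually handled by the nginx controller (for example via ingressClassName: nginx), since the HTTP-01 solver targets class nginx; if echo-tls is never populated, nginx falls back to its default self-signed certificate, which would produce exactly the curl error shown above.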

Load balancing/redirecting from Kubernetes NodePort services

I have a bare-metal cluster with a few NodePort deployments of my services (HTTP and HTTPS). I would like to access them from a single URL like myservices.local with (sub)paths.
The config could be something like the following (pseudo-code):
/app1
  http://10.100.22.55:30322
  http://10.100.22.56:30322
  # browser access: myservices.local/app1
/app2
  https://10.100.22.55:31432
  https://10.100.22.56:31432
  # browser access: myservices.local/app2
/...
I tried a few things with HAProxy and nginx, but nothing really worked (for someone inexperienced with web servers and load balancers, the configuration syntax is quite confusing, in my opinion).
What is the easiest solution for a case like this?
The easiest and most common way is to use an NGINX Ingress. The NGINX Ingress controller is built around the Kubernetes Ingress resource, using a ConfigMap to store the NGINX configuration.
In the documentation we can read:
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
internet
    |
[ Ingress ]
--|-----|--
[ Services ]
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
This is exactly what you want to achieve.
The first thing you need to do is to install the NGINX Ingress Controller in your cluster. You can follow the official Installation Guide.
An Ingress always points to a Service, so you need a Deployment, a Service, and an NGINX Ingress.
Here is an example of an application similar to yours.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app1
  name: app1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: app1
          image: nginx
          imagePullPolicy: Always
          ports:
            - containerPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app2
  name: app2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app2
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
        - name: app2
          image: nginx
          imagePullPolicy: Always
          ports:
            - containerPort: 5001
---
apiVersion: v1
kind: Service
metadata:
  name: app1
  labels:
    app: app1
spec:
  type: ClusterIP
  ports:
    - port: 5000
      protocol: TCP
      targetPort: 5000
  selector:
    app: app1
---
apiVersion: v1
kind: Service
metadata:
  name: app2
  labels:
    app: app2
spec:
  type: ClusterIP
  ports:
    - port: 5001
      protocol: TCP
      targetPort: 5001
  selector:
    app: app2
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress # Ingress resource
metadata:
  name: myservices
  labels:
    app: myservices
spec:
  rules:
    - host: myservices.local # only match connections to myservices.local
      http:
        paths:
          - path: /app1
            backend:
              serviceName: app1
              servicePort: 5000
          - path: /app2
            backend:
              serviceName: app2
              servicePort: 5001
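Note that this answer uses the networking.k8s.io/v1beta1 Ingress API, which was removed in Kubernetes 1.22. On current clusters the same Ingress is written against networking.k8s.io/v1, with an explicit pathType and a restructured backend:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myservices
  labels:
    app: myservices
spec:
  ingressClassName: nginx # explicit class replaces the old default behavior
  rules:
    - host: myservices.local
      http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1
                port:
                  number: 5000
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2
                port:
                  number: 5001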

Can I use nginx ingress controller oauth2_proxy in Kubernetes with Azure Active Directory without cookies?

I am in the process of changing from an Azure web service to Azure Kubernetes Service to host an API. I have the solution working with nginx, oauth2_proxy, and Azure Active Directory. However, the solution requires a cookie to function.
As this is an API, the external security will be managed by an AWS API Gateway with a custom authorizer. I would like the API Gateway to authenticate using a bearer token only and not require a cookie.
I have my solution working and have so far been testing from Postman. In Postman I have the bearer token but cannot find a way to access the API without the cookie.
My application presently runs via an AWS API Gateway and an Azure App Service with Azure Active Directory. The AWS API Gateway custom authorizer does not require a cookie in this case.
I have the following configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: oauth2-proxy
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: mydomain.com
      http:
        paths:
          - path: /oauth2
            backend:
              serviceName: oauth2-proxy
              servicePort: 4180
  tls:
    - hosts:
        - mydomain.com
      secretName: tls-secret
---
# oauth2_proxy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: oauth2-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
        - env:
            - name: OAUTH2_PROXY_PROVIDER
              value: azure
            - name: OAUTH2_PROXY_AZURE_TENANT
              value: mytennantid
            - name: OAUTH2_PROXY_CLIENT_ID
              value: my clientid
            - name: OAUTH2_PROXY_CLIENT_SECRET
              value: my client secret
            - name: OAUTH2_PROXY_COOKIE_SECRET
              value: my cookie secret
            - name: OAUTH2_PROXY_HTTP_ADDRESS
              value: "0.0.0.0:4180"
          image: machinedata/oauth2_proxy:latest
          imagePullPolicy: IfNotPresent
          name: oauth2-proxy
          ports:
            - containerPort: 4180
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
spec:
  ports:
    - name: http
      port: 4180
      protocol: TCP
      targetPort: 4180
  selector:
    app: oauth2-proxy
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
spec:
  tls:
    - hosts:
        - mydomain.com
      secretName: tls-secret
  rules:
    - host: mydomain.com
      http:
        paths:
          - backend:
              serviceName: mayapp
              servicePort: 80
I would like to change this configuration so that a cookie is no longer required. If this is not possible, is there another way to achieve the same outcome?
Just drop the OAuth part on Kubernetes and let the API Gateway validate the requests; it can do exactly what you need. You can then secure your cluster to only accept requests from the API Gateway, so you don't need to protect your endpoint from other calls, as sketched below.
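One hedged way to implement that restriction with ingress-nginx is the whitelist-source-range annotation, which limits an Ingress to a set of client CIDRs (the range below is a placeholder for the API Gateway's egress addresses, not a value from the question):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    # Only allow traffic whose source IP falls in these CIDRs.
    # Replace the placeholder with the API Gateway's actual egress range.
    nginx.ingress.kubernetes.io/whitelist-source-range: "203.0.113.0/24"
spec:
  rules:
    - host: mydomain.com
      http:
        paths:
          - backend:
              serviceName: mayapp
              servicePort: 80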

Kubernetes nginx ingress is not resolving services

Cloud: Google Cloud Platform.
I have the following configuration
kind: Deployment
apiVersion: apps/v1
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      run: api
  template:
    metadata:
      labels:
        run: api
    spec:
      containers:
        - name: api
          image: gcr.io/*************/api
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /_ah/health
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
---
kind: Service
apiVersion: v1
metadata:
  name: api
spec:
  selector:
    run: api
  type: NodePort
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - path: /api/*
            backend:
              serviceName: api
              servicePort: 8080
All set. GKE says that all deployments are okay, the pod counts are met, and the main ingress with the nginx-ingress-controller is set up as well. But I'm not able to reach any of the services. Not even an application-specific 404. Nothing. It's as if the path is not resolved at all.
Another related question concerns the entry points. The first one is through main-ingress: it created its own LoadBalancer with its own IP address. The second one is the address from the nginx-ingress-controller. The second one at least returns a 404 from the default backend, but it also doesn't point to the expected api service.
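A likely explanation, offered as a sketch rather than a confirmed fix: the /api/* wildcard is GCE ingress syntax; the nginx ingress class treats the path as a literal value, so /api/* matches no real request and everything falls through to the default backend's 404. With ingress-nginx, a prefix match that also strips the /api prefix uses the documented rewrite-target annotation with a regex capture group (supported in ingress-nginx 0.22+):

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    # Strip the /api prefix before proxying: /api/users -> /users
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - http:
        paths:
          - path: /api(/|$)(.*)
            backend:
              serviceName: api
              servicePort: 8080

If the backend actually expects the full /api/... path, drop the rewrite annotation and use path: /api instead.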
