Upstream server configuration in Nginx / worker configuration in Apache inside Kubernetes

I have a k8s nginx server which connects to my StatefulSet application servers.
I am now trying to achieve sticky sessions based on the JESSIONID cookie. I have an nginx ingress controller which directs all requests to the k8s nginx service. But my nginx service is not able to maintain sticky sessions between the application server pods, so I am not able to maintain a user session in my application.
If I connect the ingress controller directly to the application service with the config nginx.ingress.kubernetes.io/session-cookie-name=JESSIONID, it works as expected.
But I need a webserver, either Apache or Nginx, in front of my application servers.
Is there any way to achieve this? Or how can we configure the StatefulSet pods directly inside the upstream block of Nginx, or as workers in Apache?
I need the structure below:
Ingress -> Webserver -> Application
Currently, I have the below config:
nginx.ingress.kubernetes.io/session-cookie-name: JESSIONID
- backend:
    serviceName: nginx-web-svc
    servicePort: 80
In my nginx StatefulSet I have the below config in the nginx.conf file:
location / {
    proxy_pass http://app-svc:3000;
}
app-svc is the Service for the application StatefulSet, which has 3 replicas (3 pods). It works, but it does not maintain stickiness between the application pods. If I bypass the webserver and directly use the below ingress config, it works like a charm:
nginx.ingress.kubernetes.io/session-cookie-name: JESSIONID
- backend:
    serviceName: app-svc
    servicePort: 3000
But I need a webserver in front of my app servers. How can I achieve stickiness in that scenario?
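One way this can be done with plain NGINX (a sketch, not from the original post): if app-svc is a headless Service, each StatefulSet pod gets a stable DNS name, so the pods can be listed individually in an upstream block and cookie-based hashing can provide stickiness. The pod hostnames and the default namespace below are assumptions:

upstream app-backend {
    # Hash on the session cookie so a given JESSIONID always maps to the same pod
    hash $cookie_JESSIONID consistent;
    # Stable per-pod DNS names provided by a headless Service (illustrative names)
    server app-0.app-svc.default.svc.cluster.local:3000;
    server app-1.app-svc.default.svc.cluster.local:3000;
    server app-2.app-svc.default.svc.cluster.local:3000;
}

server {
    listen 80;
    location / {
        proxy_pass http://app-backend;
    }
}

The hash directive (with the consistent parameter) is available in open-source NGINX, so this does not require NGINX Plus's sticky directive.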

Related

Nginx Ingress Controller with Nginx Reverse Proxy

I am a bit confused by the architecture of load-balancing K8s traffic with the Nginx ingress controller.
I learned that an ingress controller is supposed to configure the load-balancer you're using according to Ingress configurations.
So if I want to use the Nginx ingress controller and I have a physical server running Nginx that stands in front of my network, how can I make the ingress controller configure it?
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. You must have an ingress controller to satisfy an Ingress; only creating an Ingress resource has no effect.
The Nginx Ingress Controller uses a Service of type LoadBalancer to bring traffic into the controller, which then routes it to the particular Services.
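As a sketch of that entry point (the names and labels below are illustrative, not from the original answer), the controller is typically exposed like this:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  # Selects the ingress controller pods (label is an assumption)
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443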
I strongly suggest going through the official documentation in order to get a good understanding of the topic and see some examples of using it.
Is the nginx ingress controller supposed to (or can it) configure an Nginx machine?
NGINX Ingress Controller works with both NGINX and NGINX Plus and supports the standard Ingress features - content-based routing and TLS/SSL termination.
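For illustration of those two features (the hostname, secret name, and service names below are assumptions), a standard Ingress combining content-based routing and TLS termination might look like:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  tls:
    # TLS is terminated at the controller using this certificate Secret
    - hosts:
        - example.com
      secretName: example-tls
  rules:
    # Content-based routing: requests for /app go to app-svc
    - host: example.com
      http:
        paths:
          - path: /app
            backend:
              serviceName: app-svc
              servicePort: 80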

How to fix catch-22 with GCLB and Wordpress returning 301

I have set up a Kubernetes cluster on GKE. Installed the stable/wordpress Helm chart. Added an Ingress with an SSL certificate. But now the Google load balancer reports that my service is unhealthy. This is caused by the WordPress pod returning a 301 on the health check because it wants to enforce HTTPS, which is good. But the Google load balancer refuses to send an x-forwarded-proto: https header, so the pod thinks the health check was done over http. How can I work around this?
I have tried to add an .htaccess that always returns 200 for the GoogleHC User-Agent, but this is not possible with the Helm chart, which overwrites the .htaccess after start-up.
Also see: https://github.com/kubernetes/ingress-gce/issues/937 and https://github.com/helm/charts/issues/18779
WAY 1:
If you are using a Kubernetes cluster on GKE, you can use an Ingress; it will create the load balancer for you.
You can store the SSL certificate in a Secret and apply it to the Ingress. For SSL you can also choose another approach and install cert-manager on GKE.
If you want to set up nginx-ingress with cert-manager, you can also follow this guide:
https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
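A sketch of that Secret-to-Ingress wiring (all names are illustrative; it assumes a kubernetes.io/tls Secret named wordpress-tls already holds the certificate and key):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wordpress-ingress
spec:
  tls:
    # References the pre-created TLS Secret (assumed to exist)
    - hosts:
        - example.com
      secretName: wordpress-tls
  backend:
    serviceName: wordpress
    servicePort: 80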
WAY 2:
Edit the Helm chart locally and add liveness and readiness probes to the Deployment; the WordPress health check will then be done over HTTP only. See the probe sketch below.
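A minimal probe sketch (the path and port are assumptions to adapt to the chart's values; setting X-Forwarded-Proto on the probe makes WordPress treat it as HTTPS and skip the 301):

livenessProbe:
  httpGet:
    path: /
    port: 80
    httpHeaders:
      # WordPress sees the probe as already-HTTPS and does not redirect
      - name: X-Forwarded-Proto
        value: https
readinessProbe:
  httpGet:
    path: /
    port: 80
    httpHeaders:
      - name: X-Forwarded-Proto
        value: https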
Update:
To add x-forwarded-proto in the ingress you can use this annotation:

nginx.ingress.kubernetes.io/server-snippet: |
  location /service {
    proxy_set_header X-Forwarded-Proto https;
  }
As the HTTPS load balancer terminates the client SSL/TLS session at the LB, you would need to configure HTTPS between the load balancer and your application (WordPress). Health checks use HTTP by default; to use HTTPS health checks with your backend services, the backend services also require their own SSL/TLS certificate (see #4 of HTTP load balancing, which HTTPS load balancing inherits). To make the backend certificates simpler to configure, you can use self-signed certificates, which do not interfere with any client <-> load balancer encryption, as the client session is terminated at the LB.
You can of course use HTTP health checks (less configuring!) for your backend(s); this will not cause any client traffic encryption issues, as it only affects the health check and not the data being sent to your application.
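On GKE, the health check the load balancer performs can also be pinned explicitly with a BackendConfig attached to the Service (a sketch; the names and the request path are assumptions, and BackendConfig health-check fields require a reasonably recent GKE version):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: wordpress-hc
spec:
  healthCheck:
    type: HTTP
    requestPath: /wp-login.php
    port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  annotations:
    # Ties this Service's LB backend to the BackendConfig above
    cloud.google.com/backend-config: '{"default": "wordpress-hc"}'
spec:
  selector:
    app: wordpress
  ports:
    - port: 80
      targetPort: 80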
Why do you need HTTPS between the load balancer and WordPress in the first place? Wouldn't it be enough to have HTTPS on the load balancer frontend side (between the LB and the outside world)?
Do you have SSL termination done twice?
This is what I did when I was migrating my WordPress site to GKE:
Removed all WordPress plugins related to https/ssl/tls. Luckily for me it didn't even require any DB changes.
Added a Google-managed certificate. With Google-managed certificates it's very easy: GKE even has a separate resource definition for a certificate. On top of that you just need to update your DNS records:
apiVersion: networking.gke.io/v1beta2
kind: ManagedCertificate
metadata:
  name: my-certificate
  namespace: prod
spec:
  domains:
    # Wildcard domains are not supported (https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs).
    - example.com
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: prod-ingress
  namespace: prod
  annotations:
    kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/ingress.global-static-ip-name: load-balancer-ip
    networking.gke.io/managed-certificates: my-certificate
I realize you have Helm on top of it, but there's always a way to edit the chart or its configs/params.

Nginx ingress controller vs HAProxy load balancer

What is the difference between Nginx ingress controller and HAProxy load balancer in kubernetes?
First, let's have a quick overview of what an Ingress Controller is in Kubernetes.
Ingress Controller: controller that responds to changes in Ingress rules and changes its internal configuration accordingly
So, both the HAProxy ingress controller and the Nginx ingress controller will listen for these Ingress configuration changes and configure their own running server instances to route traffic as specified in the targeted Ingress rules. The main differences come down to the specific differences in use cases between Nginx and HAProxy themselves.
For the most part, Nginx comes with more batteries included for serving web content, such as configurable content caching, serving local files, etc. HAProxy is more stripped down, and better equipped for high-performance network workloads.
The available configurations for HAProxy can be found here and the available configuration methods for Nginx ingress controller are here.
I would add that HAProxy is capable of doing TLS/SSL offloading (SSL or TLS termination) for non-HTTP protocols such as MQTT, Redis, and FTP-type workloads.
The differences go deeper than this, however, and these issues go into more detail on them:
https://serverfault.com/questions/229945/what-are-the-differences-between-haproxy-and-ngnix-in-reverse-proxy-mode
HAProxy vs. Nginx

Openshift - Internal NGINX proxy can't connect to Openshift route hostname

My use-case requires SSL passthrough, so we unfortunately can't use path-based routing natively in OpenShift. Our next best solution was to set up an internal NGINX proxy to route traffic from a path to another web UI's OpenShift route. I'm getting errors when doing so.
Here's my simplified NGINX config:
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /etc/nginx/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    upstream app1-ui-1-0 {
        server app1-1-0.192.168.99.100.nip.io:443;
    }

    server {
        listen 8443 ssl default_server;

        location /apps/app1/ {
            proxy_pass https://app1-ui-1-0/;
        }
    }
}
My app1 route configuration is as follows:
apiVersion: v1
kind: Route
metadata:
  name: app1-1-0
spec:
  host: app1-1-0.192.168.99.100.nip.io
  to:
    kind: Service
    name: app1-1-0
  tls:
    insecureEdgeTerminationPolicy: Redirect
    termination: passthrough
When I hit https://app1-1-0.192.168.99.100.nip.io, the app works fine.
When I hit the NGINX proxy route URL (https://proxier-1-0.192.168.99.100.nip.io), it properly loads nginx's standard index.html page.
However, when I try to hit app1 through the proxy via https://proxier-1-0.192.168.99.100.nip.io/apps/app1/, I get the following OpenShift error:
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
Via logs and testing, I know the request is getting into the /apps/app1/ location block, but it never gets to app1's NGINX. I've also confirmed this error is coming from either app1's router or service, but I don't know how to troubleshoot since neither has logs. Any ideas?
When you want to make a request to some other application running in the same OpenShift cluster, the correct solution in most cases is to use the internal DNS.
OpenShift ships with an SDN which enables comms between Pods. This is more efficient than communicating with another Pod via its route, since that would typically send the request back onto the public internet before it hits the OpenShift router again, at which point it is forwarded via the SDN.
Services can be reached at <service>.<namespace>.svc.cluster.local, which in your case enables NGINX to proxy via server app1-1-0.myproject.svc.cluster.local.
Routes should typically be used to route external traffic into the cluster.
See the OpenShift docs for more details on networking.
Per a comment above, I ended up dropping the route and referencing the service's internal DNS in NGINX's upstream:
upstream finder-ui-1-0 {
    server apps1-1-0.myproject.svc.cluster.local:443;
}
This suited my needs just fine and worked well.

Using a Kubernete Ingress on GCE to Redirect/Force TLS

Am I currently forced to use an additional webserver (nginx) to redirect all Kubernetes Ingress traffic to HTTPS when hosting on GCE?
I'm looking to deploy a Golang application into the wild. As a learning experiment, I thought I would use GCE to host and K8s to deploy/scale. I have deployments and services all working as expected, returning traffic, and I created certs with Let's Encrypt for TLS termination.
I am at the point of implementing an Ingress now, as Service LoadBalancers seem to be deprecated. At this stage I am using a static IP for the Ingress to use for backend requests, as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "kubernetes-ingress"
    ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
    - secretName: web-ssl
  backend:
    serviceName: web
    servicePort: 80
Of course I want all HTTP traffic to go through HTTPS/TLS. Adding the ingress.kubernetes.io/ssl-redirect: "true" entry has made no difference. As a sneaky attempt, I thought I might be able to alter the servicePort to 443. As my service accepts requests on both ports 80 and 443, valid responses were returned, but HTTP was not forced to HTTPS.
At this stage I am guessing I will need to "bite the bullet" and create an nginx ingress controller. This will also help with updating certs using Lego, along with creating another abstraction should I need more service points.
But before I do, I just wanted to check if there is another way. Any help appreciated, thanks.
An ingress controller is needed to implement the Ingress manifest; without it, installing the Ingress manifest doesn't do anything. As far as I know, deploying an Ingress is the best way to handle HTTP redirection.
You can make the ingress redirect HTTP traffic to HTTPS. Check out this tutorial for TLS with traefik, and this tutorial for TLS with nginx.
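With the nginx ingress controller specifically, the redirect can be expressed as an annotation. A sketch reusing the names from the question (the annotation key is the one the nginx controller reads, which differs from the GCE controller's ingress.kubernetes.io/ssl-redirect used above):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    # Route this Ingress through the nginx controller instead of the GCE one
    kubernetes.io/ingress.class: "nginx"
    # Redirect plain HTTP requests to HTTPS
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
    - secretName: web-ssl
  backend:
    serviceName: web
    servicePort: 80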
