Remote IP based SSL in Kubernetes Ingress - nginx

In plain nginx, I can use the geo module to set a variable based on the remote address, and use that variable in the ssl_certificate and ssl_certificate_key paths to serve a different SSL certificate and key to different remote networks accessing the server. This is necessary because the different network environments have different CAs.
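Roughly what I have in plain nginx today looks like this (networks and paths simplified; variables in ssl_certificate require nginx 1.15.9 or newer):

    # in the http context: map the client network to a certificate name
    geo $cert_name {
        default         public;
        10.0.0.0/8      internal;
    }

    server {
        listen 443 ssl;
        server_name example.com;

        # certificate pair chosen per connection from the geo variable
        ssl_certificate     /etc/nginx/certs/$cert_name.crt;
        ssl_certificate_key /etc/nginx/certs/$cert_name.key;
    }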
How can I reproduce this behavior in the Kubernetes nginx ingress controller, or even in Istio?

You can customize the generated config, both globally and per Ingress. I'm not familiar with the exact config you are describing, but some mix of the various *-snippet ConfigMap options (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#server-snippet) or a custom template (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/) should get you there.
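For example, the geo block could be injected through the controller ConfigMap's http-snippet key (a minimal, untested sketch; the ConfigMap name and namespace depend on your installation):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ingress-nginx-controller   # depends on how the controller was installed
      namespace: ingress-nginx
    data:
      # rendered into the http {} block of the generated nginx.conf
      http-snippet: |
        geo $cert_name {
            default     public;
            10.0.0.0/8  internal;
        }

Since the controller manages the ssl_certificate directives itself, actually consuming that variable for certificate selection would likely need the custom template route.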

Related

Getting the client's original IP address with Azure AKS

I'm currently working on copying an AWS EKS cluster to Azure AKS.
In our EKS cluster we use an external Nginx with the PROXY protocol to identify the client's real IP and check whether it is whitelisted in Nginx.
In AWS we did this by adding the aws-load-balancer-proxy-protocol annotation to the Kubernetes Service, which makes Nginx's proxy_protocol directive work.
Now the day has come: we want to run our cluster on Azure AKS as well, and I'm trying to build the same mechanism.
I saw that the AKS Load Balancer hashes the IPs, so I removed the proxy_protocol directive from my Nginx conf. I tried several things; I understand that the Azure Load Balancer is not used as a proxy, but I did read here:
AKS Load Balancer Standard
I tried whitelisting IPs at the level of the Kubernetes Service using the loadBalancerSourceRanges API instead of at the Nginx level.
But I think the Load Balancer sends the IP to the cluster already hashed (is that the right term?), and the cluster seems to ignore the IPs under loadBalancerSourceRanges and pass them through.
I'm stuck now trying to understand where I lack the knowledge. I tried to handle it from both ends (load balancer and Kubernetes Service), and neither seems to cooperate with me.
Given my failures, what is the "right" way of passing the client's real IP address to my AKS cluster?
From the docs: https://learn.microsoft.com/en-us/azure/aks/ingress-basic#create-an-ingress-controller
If you would like to enable client source IP preservation for requests to containers in your cluster, add --set controller.service.externalTrafficPolicy=Local to the Helm install command. The client source IP is stored in the request header under X-Forwarded-For. When using an ingress controller with client source IP preservation enabled, SSL pass-through will not work.
More information here as well: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
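If you are not installing through Helm, the equivalent is setting the field directly on the controller's Service; a minimal sketch with illustrative names and ports:

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
    spec:
      type: LoadBalancer
      # route external traffic only to nodes holding a local endpoint,
      # which preserves the client source IP
      externalTrafficPolicy: Local
      selector:
        app.kubernetes.io/name: ingress-nginx
      ports:
      - name: http
        port: 80
        targetPort: http
      - name: https
        port: 443
        targetPort: https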
You can use the real_ip and geo modules to build the IP whitelist configuration. Alternatively, the loadBalancerSourceRanges field on the Service should let you whitelist client IP ranges; it updates the associated NSG for you.
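A hedged sketch of the nginx side, with all address ranges illustrative (the geo check runs against the client address recovered by the real_ip module):

    # trust the X-Forwarded-For header set by the hop in front of nginx
    real_ip_header    X-Forwarded-For;
    set_real_ip_from  10.240.0.0/16;   # address range of that trusted hop

    # whitelist flag based on the recovered client address
    geo $allowed {
        default         0;
        203.0.113.0/24  1;
    }

    server {
        listen 80;

        if ($allowed = 0) {
            return 403;
        }

        location / {
            proxy_pass http://127.0.0.1:8080;   # illustrative upstream
        }
    }

loadBalancerSourceRanges itself is just a list of CIDRs under the Service's spec.loadBalancerSourceRanges field.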

How does an ingress controller provide DNS names?

I am trying to understand how an ingress controller works in Kubernetes.
I have deployed the nginx ingress controller on a bare-metal k8s cluster (following the kind ingress docs).
localhost now serves the nginx default page.
I have deployed an app with an Ingress resource whose host is "foo.localhost".
I can now access my app on foo.localhost.
I would like to know how nginx was able to do this without any modification to the /etc/hosts file.
I also want to access my app from different machine over same/different network.
I have used ngrok for this
ngrok http foo.localhost
but it points to the nginx default page and not my app.
How can I access it using ngrok if I don't want to use port forwarding or kube proxy?
On your machine, localhost and foo.localhost both resolve to the same address, 127.0.0.1. That is already the case; it is not something nginx or k8s does. It is also the reason you cannot access the app from another machine: the name resolves to the localhost of that machine as well, not the one running your k8s ingress.
When you expose the app using ngrok, ngrok serves it under a different name. When you then access the ingress using that name, the request carries a Host header with the ngrok URL, which is not the same as foo.localhost, so the ingress thinks the request is for a different domain.
Try adding the ngrok URL as a host in your Ingress.
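Something like this, with a hypothetical ngrok hostname (use whatever name ngrok prints for your tunnel) and an illustrative backend Service:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: foo
    spec:
      rules:
      - host: abc123.ngrok.io        # hypothetical ngrok name
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: foo-service    # illustrative Service name
                port:
                  number: 80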

Can we install different SSL certificates on different nginx locations in the same server, for the same host name and port?

I have a situation where I have to configure different certificates for two different applications on the nginx server. Requests for both applications will be proxied from the nginx server to their respective running applications.
I have to configure this for the same server name and the same port.
Any suggestion will be appreciated here.
Thanks
You can't do this with stock NGINX, because ssl_certificate cannot be set per location; the certificate is negotiated during the TLS handshake, before nginx knows which location the request will match.
You can get close with the Lua nginx module, in particular ssl_certificate_by_lua_block, writing logic that loads a different SSL certificate per connection. Since the block runs at handshake time, it can only key off connection-level data such as the SNI name or the client address, not the request URI.
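A minimal, untested sketch with OpenResty's ngx.ssl API, selecting the certificate by client address; the paths and the network test are illustrative, and set_der_cert / set_der_priv_key expect DER-encoded files:

    server {
        listen 443 ssl;
        server_name example.com;

        # placeholder pair so nginx starts; replaced per connection below
        ssl_certificate     /etc/nginx/certs/default.crt;
        ssl_certificate_key /etc/nginx/certs/default.key;

        ssl_certificate_by_lua_block {
            local ssl = require "ngx.ssl"
            ssl.clear_certs()

            -- only connection-level data is available here; the URI is not
            local name = "public"
            local addr, typ = ssl.raw_client_addr()
            if typ == "inet" and addr:byte(1) == 10 then  -- 10.0.0.0/8
                name = "internal"
            end

            -- blocking reads; a real setup would cache the DER blobs
            local f = assert(io.open("/etc/nginx/certs/" .. name .. ".crt.der", "rb"))
            assert(ssl.set_der_cert(f:read("*a")))
            f:close()

            f = assert(io.open("/etc/nginx/certs/" .. name .. ".key.der", "rb"))
            assert(ssl.set_der_priv_key(f:read("*a")))
            f:close()
        }
    }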

Kubernetes: how to connect to service from outside the cluster?

I have created a service using this manual: https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address-service/
In this example the service has an IP (10.32.0.16, from the kubectl describe services example-service command), and we can create a proxy_pass rule in an external (outside the cluster) nginx: proxy_pass http://10.32.0.16:8080;
This IP differs per deployment (it depends on the number of services, etc.). How can I point my external nginx at this service reliably?
An alternative that I found very powerful is to set up nginx inside the cluster using the official nginx ingress controller.
Then you can have both load-balanced/HA nginx and have kubernetes automatically update its config from ingress rules.
At the moment Traefik seems very popular for such cases.
It's taking over nginx ingress too...
You can either (both options are sketched below):
specify a fixed IP for a service
proxy to the service DNS name
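Untested sketches of both options, with illustrative values. Pinning the IP is done via spec.clusterIP (it must fall inside the cluster's service CIDR):

    apiVersion: v1
    kind: Service
    metadata:
      name: example-service
    spec:
      clusterIP: 10.32.0.16     # fixed virtual IP for proxy_pass to target
      selector:
        app: example
      ports:
      - port: 8080
        targetPort: 8080

Proxying by name requires the external nginx to be able to reach the cluster's DNS server (resolver address illustrative):

    resolver 10.32.0.10 valid=30s;

    server {
        listen 80;

        location / {
            # using a variable forces runtime resolution via the resolver
            set $backend example-service.default.svc.cluster.local;
            proxy_pass http://$backend:8080;
        }
    }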

Use nginx to redirect requests to kubernetes services

I'm looking for a way, if possible, to route a request for e.g. team.mysite.com to team.default.svc.cluster.local using nginx. This way I could have multiple WordPress sites using different subdomains of my domain, working as explained above. Basically, calling xyz.mysite.com would have the request forwarded to xyz.default.svc.cluster.local, provided the service exists.
Note:
I have the kube-dns service running at 10.254.0.2
Is this possible? And how exactly would I do this?
Thanks.
Edit:
Going over this again, I could possibly use variables in the nginx.conf, i.e. proxy to $subdomain.default.svc.cluster.local where $subdomain is the xyz part of xyz.mydomain.com.
I'd need a way to let nginx resolve the kube-dns services, and also a way to parse out the xyz in xyz.mydomain.com in the nginx.conf and assign it to a variable; a sketch follows.
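Something like this is what I have in mind (untested; kube-dns address as above, domain illustrative):

    # resolve service names through kube-dns
    resolver 10.254.0.2 valid=10s;

    server {
        listen 80;
        # capture the subdomain: xyz.mysite.com -> $subdomain = xyz
        server_name ~^(?<subdomain>[^.]+)\.mysite\.com$;

        location / {
            proxy_set_header Host $host;
            # a variable in proxy_pass forces runtime DNS resolution
            proxy_pass http://$subdomain.default.svc.cluster.local;
        }
    }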
If your nodes have a public IP address you could use an Ingress resource to do exactly what you describe. An Ingress usually defines a list of paths and their target Service, so your service should be running before you try to expose it.
Ingress controllers then dynamically configure Web servers that listen on a defined list of ports for you (typically the host ports 80 and 443 as you may have guessed) using the Ingress configuration mentioned above, and can be deployed using the Kubernetes resource type of your choice: Pod, ReplicationController, DaemonSet.
Once your Ingress rules are configured and your Ingress controller is deployed, you can point the A or CNAME DNS records of your domain to the node(s) running the actual Web server(s).
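For the subdomain case above, one Ingress rule per site might look like this (names illustrative; older clusters use an earlier apiVersion):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: team
    spec:
      rules:
      - host: team.mysite.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: team      # Service named after the subdomain
                port:
                  number: 80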