How does the ingress controller provide DNS names? - nginx

I am trying to understand how the ingress controller works in Kubernetes.
I have deployed the nginx ingress controller on a bare-metal k8s cluster (following the kind ingress docs).
localhost now points to the nginx default page.
I have deployed an app with an ingress resource whose host is "foo.localhost".
I can access my app on foo.localhost now.
I would like to know how nginx was able to do this without any modification of the /etc/hosts file.
I also want to access my app from a different machine over the same/a different network.
I have used ngrok for this:
ngrok http foo.localhost
but it points to the nginx default page and not my app.
How can I access it using ngrok if I don't want to use port-forward or kube proxy?

On your machine, localhost and foo.localhost both resolve to the same address, 127.0.0.1. This is already the case; it is not something nginx or k8s does. That's also the reason why you cannot access the app from another machine: there, those names resolve to that machine's own localhost, not the one running your k8s ingress.
When you expose it using ngrok, ngrok exposes it under a different name. When you then access the ingress using that name, the request carries a Host header with the ngrok URL, which is not the same as foo.localhost, so the ingress thinks the request is for a different domain and falls back to the default page.
Try adding the ngrok URL as a host in your ingress.
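For example, a minimal sketch assuming your app's Service is named myapp on port 80, that your kubectl has the create ingress subcommand, and that abc123.ngrok.io stands in for whatever hostname ngrok actually assigns:

# Add an ingress rule whose host matches the ngrok hostname, so requests
# arriving with that Host header are routed to the app instead of the default backend
kubectl create ingress myapp-ngrok --class=nginx \
  --rule="abc123.ngrok.io/*=myapp:80"

Alternatively, recent ngrok versions can rewrite the Host header on the way in (something like ngrok http 80 --host-header=foo.localhost), which lets the existing foo.localhost rule keep matching.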

Related

Getting the client's original IP address with Azure AKS

I'm currently working on copying an AWS EKS cluster to Azure AKS.
In our EKS we use an external Nginx with proxy protocol to identify the client's real IP and check whether it is whitelisted in our Nginx.
In AWS, to do so, we added the aws-load-balancer-proxy-protocol annotation to the Kubernetes service to support Nginx's proxy_protocol directive.
Now the day has come: we want to run our cluster on Azure AKS as well, and I'm trying to build the same mechanism.
I saw that the AKS Load Balancer hashes the IPs, so I removed the proxy_protocol directive from my Nginx conf. I tried several things; I understand that the Azure Load Balancer is not used as a proxy, but I did read here:
AKS Load Balancer Standard
I tried whitelisting IPs at the level of the Kubernetes service using the loadBalancerSourceRanges API instead of at the Nginx level.
But I think that the Load Balancer sends the IP to the cluster already hashed (is that the right term?), and the cluster seems to ignore the IPs under loadBalancerSourceRanges and passes them through.
I'm stuck now trying to understand where I lack the knowledge. I tried to handle it from both ends (load balancer and Kubernetes service) and neither seems to cooperate with me.
Given my failures, what is the "right" way of passing the client real IP address to my AKS cluster?
From the docs: https://learn.microsoft.com/en-us/azure/aks/ingress-basic#create-an-ingress-controller
If you would like to enable client source IP preservation for requests to containers in your cluster, add --set controller.service.externalTrafficPolicy=Local to the Helm install command. The client source IP is stored in the request header under X-Forwarded-For. When using an ingress controller with client source IP preservation enabled, SSL pass-through will not work.
More information here as well: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
You can use the real_ip and geo modules to create the IP whitelist configuration. Alternatively, the loadBalancerSourceRanges should let you whitelist any client IP ranges by updating the associated NSG.
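A rough sketch of both pieces, assuming the ingress-nginx Helm chart with its default service name ingress-nginx-controller (release, namespace and the example CIDR are placeholders):

# Install the ingress controller with client source IP preservation,
# so the original client address arrives in X-Forwarded-For
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.externalTrafficPolicy=Local

# Optionally restrict which client ranges the load balancer accepts;
# on AKS this is enforced via the associated NSG
kubectl -n ingress-nginx patch svc ingress-nginx-controller \
  -p '{"spec":{"loadBalancerSourceRanges":["203.0.113.0/24"]}}'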

Integrate a Load Balancer IP given by OVH with a Nginx Ingress Controller on a k8s cluster

I have a Load Balancer IP provided by OVH that I want to use with the Nginx Ingress Controller, but on an on-premises cluster. There are several guides for doing that using OVH Managed Kubernetes, but that is not possible for me since I already have a cluster.
I tried to use the loadBalancerIP option with Helm and without Helm as well...
You should expose the Nginx Ingress Controller as a NodePort Service and point your OVH Load Balancer at your workers as endpoints.
User ---> OVH LB ----> Nginx Ingress on workers
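A minimal sketch with the ingress-nginx Helm chart, assuming fixed node ports 30080/30443 (any free ports in the NodePort range would do) that the OVH LB then targets on every worker:

# Expose the ingress controller on fixed NodePorts on every node, so the
# external OVH LB can forward :80/:443 to node:30080/node:30443
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=NodePort \
  --set controller.service.nodePorts.http=30080 \
  --set controller.service.nodePorts.https=30443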
Thank you both for your answers. I tried what you recommended but I think I'm missing a point. To be more clear:
1/ The user part -> I have an OVH LB connected to a group of 3 nodes; the LB picks a node for each user (round robin).
2/ Once a node has been selected, the user should be able to reach whatever service inside Kubernetes, even if the service is not on this node, by using the Load Balancer IP.
For the 2nd point, I tried to expose/create an endpoint for the Nginx Ingress Controller where I gave the LB's IP, but I don't know whether I have to create an Ingress object for each service (only 2-3, like grafana, prometheus...). I tried it but it didn't work. I also tried to create an Ingress for the service where I gave the LB IP, but it didn't work either. Note that my k8s cluster runs in LXD containers which sit inside 3 connected servers (one LXD container per server node). Also, concerning the OVH Load Balancer, I'm not very confident about the purpose of the outbound IPs (which is a CIDR range)...
I understand that my OVH LB cannot be auto-provisioned, but since its job is done outside of k8s (just assigning a node to a user), the problem is: how can that node route a request to the right service based on a URL like grafana.example.com? I was using MetalLB as an internal LB and it worked fine, but now I'm struggling with the OVH LB...
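That per-URL routing is exactly what the Ingress objects on top of the NodePort-exposed controller handle: one Ingress (or rule) per hostname, each pointing at its Service. An illustrative sketch, assuming a grafana Service on port 3000 in the same namespace (name and port are assumptions):

# Route grafana.example.com, arriving via the OVH LB and the NodePort,
# to the grafana Service inside the cluster
kubectl create ingress grafana --class=nginx \
  --rule="grafana.example.com/*=grafana:3000"

The node that receives the request does not need to run the grafana pod; the ingress controller proxies to the Service, and kube-proxy forwards to whichever node the pod is actually on.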

NGINX load balancing on Kubernetes

I have some services running in Kubernetes. I need NGINX in front of them to route traffic according to the URLs, handle SSL termination, and do load balancing.
There is a working nginx.conf for that scenario. What I'm missing is the right way to set up the architecture on gcloud.
Is it correct to launch a StatefulSet with nginx and have a LoadBalancer Service expose NGINX? Do I understand it right that the gcloud LB would pass the configured ports (e.g. 80 and 443) to my NGINX service, where I can handle the rest and forward the traffic to the backend services?
You don't really need a StatefulSet; a Deployment will do, since nginx is already being fronted by a gcloud TCP load balancer. If for any reason one of your nginx pods goes down, the gcloud load balancer will stop forwarding traffic to it. Since you already have a gcloud load balancer you will have to use a NodePort Service type, and you will have to point your gcloud load balancer to all the nodes of your K8s cluster on that specific port.
Note that your nginx.conf will have to know how to route to all the services inside your K8s cluster. I recommend you set up an nginx ingress controller instead, which will basically manage the nginx.conf for you through Ingress resources, and which you can also expose as a LoadBalancer Service type.
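If you go the ingress-controller route, exposing it as a LoadBalancer Service lets gcloud provision the TCP load balancer for you. A hedged sketch with the ingress-nginx Helm chart (release and namespace names are arbitrary):

# The LoadBalancer Service makes GKE create a TCP LB in front of the
# controller; Ingress resources then replace the hand-written nginx.conf
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=LoadBalancer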

Kubernetes: how to connect to a service from outside the cluster?

I have created a service using this manual: https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address-service/
In this example the service gets an IP (10.32.0.16, shown by the kubectl describe services example-service command), and we can create a proxy_pass rule, proxy_pass http://10.32.0.16:8080;, in an external (outside the cluster) nginx.
This IP is different every time the service is created (it depends on the number of services, etc.). How can I refer to this service from my external nginx?
An alternative that I found very powerful is to set up nginx inside the cluster using the official nginx ingress controller.
Then you can have both load-balanced/HA nginx and have kubernetes automatically update its config from ingress rules.
At the moment Traefik seems very popular for such cases.
It's taking over nginx ingress too...
You can either:
specify a fixed IP for a service
proxy to the service DNS name
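For the first option, the cluster IP can be pinned when the Service is created, so the external proxy_pass target never changes. A sketch only, assuming 10.32.0.100 is a free address inside the cluster's service CIDR:

# Create the service with an explicit, stable cluster IP
kubectl create service clusterip example-service \
  --tcp=8080:8080 --clusterip=10.32.0.100

For the second option, the name example-service.default.svc.cluster.local stays stable regardless of the assigned IP, as long as the nginx host can reach the cluster DNS (for example when nginx runs on a node or inside the cluster).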

Use nginx to redirect requests to Kubernetes services

I'm looking for a way, if possible, to route a request for e.g. team.mysite.com to team.default.svc.cluster.local using nginx. This way I could have multiple WordPress sites using different subdomains of my domain, working as explained above. Basically, calling xyz.mysite.com would have the request forwarded to xyz.default.svc.cluster.local, provided the service exists.
Note:
I have the kube-dns service running at 10.254.0.2
Is this possible? And how exactly would I do this?
Thanks.
Edit:
Going over this again, I could possibly use variables in the nginx.conf, i.e. proxy to something like $subdomain.default.svc.cluster.local, where $subdomain is the xyz part of $host (xyz.mydomain.com).
I'd need a way to let nginx resolve the kube-dns services, and also a way to parse out the xyz in xyz.mydomain.com in the nginx.conf and assign it to a variable.
If your nodes have a public IP address, you could use an Ingress resource to do exactly what you describe. An Ingress usually defines a list of hosts and paths and their target Services, so your service should be running before you try to expose it.
Ingress controllers then dynamically configure Web servers that listen on a defined list of ports for you (typically the host ports 80 and 443 as you may have guessed) using the Ingress configuration mentioned above, and can be deployed using the Kubernetes resource type of your choice: Pod, ReplicationController, DaemonSet.
Once your Ingress rules are configured and your Ingress controller is deployed, you can point the A or CNAME DNS records of your domain to the node(s) running the actual Web server(s).
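As an illustration of the per-subdomain Ingress rules this implies (one per WordPress site; the team Service name comes from the question, the port and the rest are assumptions):

# Route team.mysite.com to the team Service in the default namespace;
# repeat per subdomain, or template it in your deployment tooling
kubectl create ingress team --class=nginx \
  --rule="team.mysite.com/*=team:80"

The ingress controller picks this up and regenerates its nginx configuration automatically, which replaces the manual $host parsing and kube-dns resolver setup mentioned in the edit.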
