I have an ASP.NET Core web API running in an on-prem bare-metal Kubernetes cluster. There's no external load balancer, and I'm using the NGINX ingress.
I want to get the users' IP address, and am using HttpContext.Connection.RemoteIpAddress in the .NET code.
Unfortunately, this is picking up the IP address of the nginx ingress (possibly the controller given the namespace)...
::ffff:10.244.1.85
Doing a reverse DNS lookup resolves that to...
10-244-1-85.ingress-nginx.ingress-nginx.svc.cluster.local
After a little bit of Googling, I tried adding externalTrafficPolicy: "Local" to my service definition, but that didn't make a difference.
This seems like something that should be really trivial and quite a common requirement. Any ideas?
First try to get the IP from the X-Forwarded-For header as shown below; if it's empty, fall back to Connection.RemoteIpAddress. Also make sure your nginx ingress ConfigMap has the forwarded-headers options enabled (see the sketch after the code):
// Prefer the X-Forwarded-For header set by the ingress; fall back to the socket address.
var forwarded = _accessor.ActionContext.HttpContext.Request.Headers["X-Forwarded-For"].FirstOrDefault();
var ip = !string.IsNullOrEmpty(forwarded)
    ? IPAddress.Parse(forwarded.Split(',')[0].Trim()) // the first entry is the original client
    : _accessor.ActionContext.HttpContext.Connection.RemoteIpAddress.MapToIPv4();
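If the header never shows up, the ingress may not be passing it through; with ingress-nginx that behavior is controlled via its ConfigMap. A sketch, assuming a standard ingress-nginx install (the ConfigMap name and namespace must match your deployment):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-forwarded-headers: "true"      # trust and propagate X-Forwarded-For
  compute-full-forwarded-for: "true" # keep the full client chain in the header

Alternatively, ASP.NET Core's forwarded-headers middleware (UseForwardedHeaders from Microsoft.AspNetCore.HttpOverrides) can populate Connection.RemoteIpAddress from X-Forwarded-For for you.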
I have a k3s cluster and I'm trying to configure it to get an SSL certificate from Let's Encrypt. I have followed many guides, and I think I'm really close, but the problem is that the Challenge object in Kubernetes reports this error:
Waiting for HTTP-01 challenge propagation: failed to perform self check GET request 'http://devstore.XXXXXXX.com/.well-known/acme-challenge/kVVHaQaaGU7kbYqnt8v7LZGaQvWs54OHEe2WwI_MOgk': Get "http://devstore.XXXXXXX.com/.well-known/acme-challenge/kVVHaQaaGU7kbYqnt8v7LZGaQvWs54OHEe2WwI_MOgk": dial tcp: lookup devstore.XXXXXXX.com on 10.43.0.10:53: no such host
It seems that cert-manager is somehow trying to resolve my public DNS name internally and failing, so the challenge is not completing. Can you help me with that? I googled it but I cannot find a solution...
Thank you
It is probable that the DNS record for the domain you want the certificate for does not exist.
If it does, and you are using a split-horizon DNS config (hijacking the .com domain in your local network), make sure it points to your public IP (e.g. your home gateway).
[Edit]
Also, you have to make sure Let's Encrypt can reach your cluster over the network, so forward ports 80/443 to your cluster's IPs.
You can get away with this because k3s defaults to the Cluster traffic policy in its load balancer.
This can have multiple different causes. If you find that it is a transient issue (or possibly if you have misconfigured CoreDNS before), you might want to double-check your CoreDNS ConfigMap (in the kube-system namespace).
E.g. you could remove/reduce caching, or point to different DNS nameservers.
Here's a description of one such case, where a switch to Google DNS plus cache removal helped clear the issue.
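For illustration, a trimmed-down sketch of such an edit (the stock Corefile has more plugins than shown; swapping the forward target to Google DNS and dropping the cache block are the two changes being illustrated):

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        # forward non-cluster names to Google DNS instead of the node's /etc/resolv.conf
        forward . 8.8.8.8 8.8.4.4
        # note: the default "cache" block is intentionally omitted here
        reload
    }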
Thank you DarthHTTP, I finally managed to make it work! The problem was, as I mentioned in the comment, that the firewall was not correctly routing the HTTP request using the public IP from the private-network side. I solved it by configuring an internal DNS server that resolves the name to the private IP address of the K3s node, and using that server as the DNS server for the K3s node. Eventually my HTTP web app got a valid Let's Encrypt certificate!
I'm currently working on migrating our AWS EKS cluster to Azure AKS.
In EKS we use an external NGINX with proxy protocol to identify the client's real IP and check whether it is whitelisted in NGINX.
In AWS, to do so, we added the aws-load-balancer-proxy-protocol annotation to the Kubernetes service to support NGINX's proxy_protocol directive.
Now the day has come and we want to run our cluster on Azure AKS as well, and I'm trying to implement the same mechanism.
I saw that the AKS load balancer hashes the IPs, so I removed the proxy_protocol directive from my NGINX conf. I tried several things; I understand that the Azure Load Balancer is not used as a proxy, but I did read here:
AKS Load Balancer Standard
I tried whitelisting IPs at the level of the Kubernetes service using the loadBalancerSourceRanges API instead of at the NGINX level.
But I think the load balancer sends the IP to the cluster already hashed (is that the right term?), and the cluster seems to ignore the IPs under loadBalancerSourceRanges and passes them through.
I'm stuck now trying to understand where I lack the knowledge. I tried to handle it from both ends (load balancer and Kubernetes service) and neither seems to cooperate with me.
Given my failures, what is the "right" way of passing the client's real IP address to my AKS cluster?
From the docs: https://learn.microsoft.com/en-us/azure/aks/ingress-basic#create-an-ingress-controller
If you would like to enable client source IP preservation for requests to containers in your cluster, add --set controller.service.externalTrafficPolicy=Local to the Helm install command. The client source IP is stored in the request header under X-Forwarded-For. When using an ingress controller with client source IP preservation enabled, SSL pass-through will not work.
More information here as well: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
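For example, with the ingress-nginx Helm chart (release and namespace names are illustrative):

helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.externalTrafficPolicy=Local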
You can use the real_ip and geo modules to create the IP whitelist configuration. Alternatively, the loadBalancerSourceRanges should let you whitelist any client IP ranges by updating the associated NSG.
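A sketch of that service-level approach (the CIDR is a placeholder; externalTrafficPolicy: Local is what actually preserves the client IP):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP
  loadBalancerSourceRanges:      # Azure applies these via NSG rules on the LB
  - 203.0.113.0/24
  ports:
  - port: 80
    targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx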
I have created a service using this manual: https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address-service/
In this example the service has the IP 10.32.0.16 (from the kubectl describe services example-service command), and we can create a proxy_pass rule in the external (outside the cluster) nginx: proxy_pass http://10.32.0.16:8080;
This IP is different every time (it depends on the number of services, etc.). How can I refer to this service from my external nginx?
An alternative that I found very powerful is to set up nginx inside the cluster using the official NGINX ingress controller.
Then you get both a load-balanced/HA nginx and Kubernetes automatically updating its config from Ingress rules.
At the moment Traefik seems very popular for such cases.
It's taking over nginx ingress too...
You can either:
specify a fixed IP for the service, or
proxy to the service's DNS name (see the sketches below).
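For example, pinning the ClusterIP in the Service spec (10.32.0.16 reuses the value from the question; it must be a free address inside the cluster's service CIDR):

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  clusterIP: 10.32.0.16   # fixed virtual IP instead of an auto-assigned one
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: example

Otherwise, if your external nginx can resolve cluster DNS, proxy_pass to the stable name example-service.default.svc.cluster.local rather than to the IP.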
I have followed the instructions (https://cloud.google.com/container-engine/docs/tutorials/http-balancer, and http://kubernetes.io/docs/user-guide/ingress/) to create an Ingress resource for my Kubernetes Service - my cluster is deployed within Google Container Engine (GKE).
I understand that the Ingress controller will automatically allocate an external/public IP for me, but this is not exactly what I need. Am I able to decide what IP I want? I have a domain name and a static IP which I would like to use instead of the one assigned by the Ingress controller.
Hopefully this can be defined inside the json/yaml configuration file for the Ingress resource. This is my preferred way to create resources, as I can keep track of the state of the created resources (rather than using kubectl edit from the command line to edit my way to the preferred state).
I understand that the Ingress controller will automatically allocate an external/public IP for me, but this is not exactly what I need. Am I able to decide what IP I want?
You can ask Google for a static global IP address, which can then be used for your L7 load balancing (you would point your DNS name to this IP). There isn't a way to bring your own IP address into a google L7 load balancer (either directly or using the Ingress object).
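On GKE that typically means reserving a global address with gcloud and referencing it from the Ingress via an annotation; a sketch (the address and resource names are illustrative):

gcloud compute addresses create my-static-ip --global

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-static-ip  # the reserved address above
spec:
  defaultBackend:
    service:
      name: my-service
      port:
        number: 80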
I'm looking for a way, if possible, to route a request for e.g. team.mysite.com to team.default.svc.cluster.local using nginx. This way I could have multiple WordPress sites using different subdomains of my domain. Basically, calling xyz.mysite.com would have the request forwarded to xyz.default.svc.cluster.local, provided the service exists.
Note:
I have the kube-dns service running at 10.254.0.2
Is this possible? And how exactly would I do this?
Thanks.
Edit:
Going over this again, I could possibly use variables in the nginx.conf, i.e. proxy to $subdomain.default.svc.cluster.local, where $subdomain is the xyz part of xyz.mydomain.com.
I'd need a way to let nginx resolve the kube-dns service names, and also a way to parse out the xyz in xyz.mydomain.com in the nginx.conf and assign it to a variable.
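Something along these lines might work; a sketch, assuming kube-dns at 10.254.0.2 (from the note above) and services listening on port 80:

server {
    listen 80;
    # capture the "xyz" of xyz.mydomain.com into $subdomain
    server_name ~^(?<subdomain>[^.]+)\.mydomain\.com$;

    # let nginx resolve *.svc.cluster.local names via kube-dns
    resolver 10.254.0.2 valid=10s;

    location / {
        # using a variable forces per-request resolution of the name
        proxy_pass http://$subdomain.default.svc.cluster.local;
        proxy_set_header Host $host;
    }
}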
If your nodes have a public IP address you could use an Ingress resource to do exactly what you describe. An Ingress usually defines a list of paths and their target Service, so your service should be running before you try to expose it.
Ingress controllers then dynamically configure Web servers that listen on a defined list of ports for you (typically the host ports 80 and 443 as you may have guessed) using the Ingress configuration mentioned above, and can be deployed using the Kubernetes resource type of your choice: Pod, ReplicationController, DaemonSet.
Once your Ingress rules are configured and your Ingress controller is deployed, you can point the A or CNAME DNS records of your domain to the node(s) running the actual Web server(s).
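A minimal host-based rule might look like this (a sketch in the current networking.k8s.io/v1 form; names are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: team-ingress
spec:
  rules:
  - host: team.mysite.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: team   # the Service backing team.mysite.com
            port:
              number: 80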