OpenShift / NGINX reverse proxy guidance - pass to SDN-based addresses?

I am configuring a fairly complicated app for a client, and am getting stuck on the reverse proxy model. As far as I understand, we should proxy_pass/uwsgi_pass to the internal service addresses (172.30.0.0/16), such as
appname.project.svc.cluster.local
However, these addresses, although DNS-resolvable from within the pods that make up the app, are not reachable. The pods seem to run in the 10.200.0.0/14 SDN address range, so by default no route to the service range exists from within a pod.
The alternative might be to proxy_pass to the exposed route of each service, but this seems wrong: the request would then be routed back out of the OpenShift pod space, through the (default HAProxy) router, to the exposed endpoint address.
What is the correct way?
Seasons greetings and thanks

To answer my own question: I just discovered the other SkyDNS-based name types, such as:
app.project.endpoints.cluster.local
See the OpenShift documentation, Table 1, "DNS Example Names".
These are reachable from the pods.
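For the record, a minimal sketch of what the nginx side can look like with such a name. The service name appname, the project name project, the port 8080, and the cluster DNS address 172.30.0.2 are all placeholders; substitute your own values. The endpoints name resolves directly to pod IPs in the SDN range, which is why it is reachable where the service VIP was not.

    # Resolve names through the cluster DNS; 172.30.0.2 is a common
    # default but may differ in your cluster.
    resolver 172.30.0.2 valid=30s;

    server {
        listen 8080;

        location / {
            # Using a variable forces nginx to re-resolve the name at
            # request time instead of caching one IP at startup.
            set $upstream appname.project.endpoints.cluster.local;
            proxy_pass http://$upstream:8080;
            proxy_set_header Host $host;
        }
    }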
Thanks for your time

Related

DNS points to www.example.com but not to http://www.example.com?

So I'm trying to deploy a Ghost blog on a Google Cloud VM instance and I can't get it to work. Part of the problem, I think, is that I haven't set up the DNS correctly. I bought farodefe.org via Google Domains and tried to configure it following this tutorial, and it worked... partially. I used dig on Ubuntu to verify my DNS configuration. Here are the results:
When I do:
dig farodefe.org
and/or
dig www.farodefe.org
I do receive an answer to my query.
But then I do dig http://www.farodefe.org and I receive nothing.
Why is this happening and how can I fix it?
Thanks in advance!
But then I do dig http://www.farodefe.org
But this does not mean anything, or at least certainly not what you think. The DNS has no concept of URLs, only names.
So here you are querying for the literal name http://www.farodefe.org (such a name is technically possible in the DNS, but it has no A record, and A is the default type dig queries), which is certainly not what you had in mind.
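To see the difference for yourself (a small illustration; exact output will vary):

    # Querying the name: the DNS answers with an A record.
    dig +noall +answer www.farodefe.org A

    # Querying the URL asks for the literal name "http://www.farodefe.org",
    # which has no A record, so the answer section comes back empty.
    dig +noall +answer "http://www.farodefe.org" A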
Part of the problem, I think, is that I haven't set up the DNS correctly.
Don't think, test. If you are not familiar with DNS, use good online troubleshooting tools, like DNSViz. If you see any red things in the output, your DNS configuration needs to be fixed. Alternatively, your DNS provider should be able to help you.
DNS-wise, you first need to understand the difference between authoritative and recursive nameservers and their services. When testing, first send your queries to the authoritative nameservers (which is what DNSViz does); only when that is OK and you still have problems should you query recursive nameservers as needed.
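In dig terms, that workflow looks like this (ns1.example.net is a placeholder; use a server from the first command's output):

    # 1. Find the zone's authoritative nameservers.
    dig +short NS farodefe.org

    # 2. Query one of them directly, bypassing recursive caches
    #    (replace ns1.example.net with a name from step 1).
    dig @ns1.example.net farodefe.org A

    # 3. Only when that looks right, compare against a recursive resolver.
    dig @8.8.8.8 farodefe.org A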
If you want to understand more, also learn about the OSI/Internet layers: how HTTP is layered on top of TCP and IP, and how the DNS (itself a service running over TCP and UDP) is used to map data. In a web setting, it maps a given hostname (website) to one or more IPv4 or IPv6 addresses so that an HTTP client (like a browser) can initiate its TCP/IP connection.

Can't complete HTTP challenge for letsencrypt on Kubernetes

I have a k3s cluster and I'm trying to configure it to get an SSL certificate from Let's Encrypt. I have followed many guides, and I think I'm really close, but the problem is that the Challenge object in Kubernetes reports this error:
Waiting for HTTP-01 challenge propagation: failed to perform self check GET request 'http://devstore.XXXXXXX.com/.well-known/acme-challenge/kVVHaQaaGU7kbYqnt8v7LZGaQvWs54OHEe2WwI_MOgk': Get "http://devstore.XXXXXXX.com/.well-known/acme-challenge/kVVHaQaaGU7kbYqnt8v7LZGaQvWs54OHEe2WwI_MOgk": dial tcp: lookup devstore.XXXXXXX.com on 10.43.0.10:53: no such host
It seems that somehow cert-manager is trying to resolve my public DNS name internally and failing, so the challenge is not working. Can you help me with that? I googled it but I cannot find a solution...
Thank you
It is probable that the DNS record for the domain you want the certificate for does not exist.
If it does, and you are using a split-horizon DNS config (hijacking the .com domain in your local network), make sure it points to your public IP (e.g. your home gateway).
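A quick way to check both views of the name (a sketch; busybox is just a convenient image, and devstore.XXXXXXX.com is the redacted name from the question):

    # What does the name resolve to from inside the cluster?
    kubectl run -it --rm dnstest --image=busybox:1.36 --restart=Never -- \
      nslookup devstore.XXXXXXX.com

    # And from the public internet's point of view?
    dig @8.8.8.8 devstore.XXXXXXX.com A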
[Edit]
Also, you have to make sure Let's Encrypt can reach your cluster over the network, so port-forward 80/443 to your cluster's IPs.
You can get away with that because k3s defaults to the Cluster traffic policy in the load balancer.
This can have multiple different causes. If it turns out to be a transient issue (or if you possibly misconfigured coredns before), you might want to double-check your coredns ConfigMap (in the kube-system namespace).
E.g. you could remove/reduce caching, or point to different DNS nameservers.
Here's a description of the issue, where a switch to Google DNS plus cache removal helped clear it.
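For reference, a sketch of what such a coredns ConfigMap edit can look like. The stock Corefile shipped with your k3s version may differ, so treat this as an outline rather than a drop-in file:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
    data:
      Corefile: |
        .:53 {
            errors
            health
            kubernetes cluster.local in-addr.arpa ip6.arpa {
                pods insecure
                fallthrough in-addr.arpa ip6.arpa
            }
            # Forward external lookups to Google DNS instead of the
            # node's /etc/resolv.conf, and keep the cache short.
            forward . 8.8.8.8 8.8.4.4
            cache 5
            loop
            reload
            loadbalance
        }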
Thank you DarthHTTP, I finally managed to make it work! The problem was, as I mentioned in the comment, that the firewall was not correctly routing HTTP requests addressed to the public IP from the private network side. I solved it by configuring an internal DNS server that resolves the name to the private IP address of the k3s node, and using that server as the DNS server for the k3s node. Eventually my HTTP web app got a valid Let's Encrypt certificate!

Best practice for a website hosted on Kubernetes (DigitalOcean)

I followed this guide: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes on how to set up an Nginx Ingress with Cert Manager on Kubernetes, with DigitalOcean as the cloud provider.
The tutorial worked fine; I was able to set up everything as written. Though, as it states, following the tutorial one ends up with three pods of which only one is "Running 1/1", while the other two are "Down". Checking the comments section, this seems to be quite a common problem, since if all the traffic gets routed to only one pod, it is not really scalable. Or am I missing something? Quoting from their tutorial:
Note: By default the Nginx Ingress LoadBalancer Service has
service.spec.externalTrafficPolicy set to the value Local, which
routes all load balancer traffic to nodes running Nginx Ingress Pods.
The other nodes will deliberately fail load balancer health checks so
that Ingress traffic does not get routed to them.
Mainly my question is: is there a best practice I am missing for hosting my website on Kubernetes? It seems I have to choose between scalability (having all the pods healthy and running) and getting the IP of the visiting client.
And for whoever ever finds himself/herself in my situation, this is the reply I got from DigitalOcean Support:
Unfortunately with that Kubernetes setup it would show those other
nodes as down without additional traffic configuration. It is possible
to skip the nginx ingress part and just use a DigitalOcean load
balancer, but this again requires a good deal of setup and can be
more difficult than easy.
Their suggestion for having a website that is both scalable and keeps client IPs for analytics was to set up a Droplet with Nginx and put a LoadBalancer in front of it. More specifically:
As for using a droplet, this would be a normal website configuration
with Nginx as your webserver configured to serve content to your app.
You would have full access to your application and the Nginx logs on
the droplet itself. Putting a load balancer in front of this would
require additional configuration, as load balancers do not pass the
X-Forwarded-For header, so the IP addresses of clients would not show
up in the logs by default. You would need to configure proxy protocol
on the load balancer and in your nginx configuration to be able to
obtain those IPs.
https://www.digitalocean.com/blog/load-balancers-now-support-proxy-protocol/
This is also a bit more complex unfortunately.
Hope it saves someone some time.
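For anyone who wants to try keeping client IPs while still routing through every node, this is roughly the shape of the proxy-protocol variant on Kubernetes. The annotation is DigitalOcean's; the Service name and selector are placeholders, and the nginx ingress side also needs use-proxy-protocol: "true" in its ConfigMap:

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx
      namespace: ingress-nginx
      annotations:
        # Ask the DO load balancer to speak PROXY protocol so the
        # client IP survives the extra hop.
        service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
    spec:
      type: LoadBalancer
      # Cluster policy keeps every node passing health checks, at the
      # cost of an extra hop (hence the PROXY protocol above).
      externalTrafficPolicy: Cluster
      selector:
        app.kubernetes.io/name: ingress-nginx
      ports:
        - name: http
          port: 80
          targetPort: http
        - name: https
          port: 443
          targetPort: https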

Do I need a service for exposing every app running in a pod?

I'm planning to build a website to host static files. Users will upload their files, and I'll deploy a bunch of Deployments with nginx images onto a Kubernetes node. My main goal is that at some point, users will deploy their apps to a subdomain like my-blog-app.mysite.com. Later, users can use custom domains.
I understand that when I deploy an nginx image in a pod, I have to create a service to expose port 80 (or 443) to the internet via a load balancer.
I also read about Ingress; it looks like what I need, but I don't think I fully understand the concept.
My question is, for example: if I have 500 nginx pods running (each a different website), do I need a service for every pod on that node (in this case, 500 services)?
You are looking for https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting.
With this type of Ingress, you route traffic to the different nginx instances based on the Host header, which perfectly matches your use case.
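A minimal sketch of such an Ingress; the hostnames and Service names are placeholders based on your example:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: user-sites
    spec:
      rules:
        # One rule per site: the Host header picks the backend Service.
        - host: my-blog-app.mysite.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-blog-app
                    port:
                      number: 80
        - host: another-blog.mysite.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: another-blog
                    port:
                      number: 80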
In any case, yes, with your current architecture you need a Service for each pod. Have you considered a different approach, like having a general listener (a pool of nginx instances) that serves the correct content based on authorization or something similar?

How to log pod's outgoing HTTP requests in kubernetes?

I have a kubernetes cluster with running pods. In order to monitor and troubleshoot the infrastructure, I want to implement a centralized logging solution so all incoming and outgoing HTTP requests are logged in one place.
For incoming requests this is not a problem at all: I can use the nginx logs from the ingress controller and present those.
I also understand that I can log outgoing requests inside the applications I run in pods, but the problem is that applications from outside developers are also used, and they may not implement such logging.
As for outgoing requests, if I understand correctly there is no solution provided by default. I have explored k8s logging and k8s audit, but they do not provide such a feature.
Probably I need some network sniffer, but that is quite a low-level solution for such a problem. So, the question is: is there any out-of-the-box implementation for this demand?
Thanks!
Take a look at a service mesh solution like Istio or Linkerd, as well as tracing solutions like Jaeger or Zipkin. With these you can get full observability of how information flows into, out of, and through your kube cluster.
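As one concrete starting point, Istio can log every request its sidecars proxy, which covers a pod's outbound HTTP calls without touching the application. A sketch (field names per Istio's MeshConfig; verify against your Istio version):

    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      name: mesh-access-logs
    spec:
      meshConfig:
        # Write Envoy access logs for all proxied traffic to stdout,
        # where a cluster-wide log collector can pick them up.
        accessLogFile: /dev/stdout
        accessLogEncoding: JSON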
