How to expose kubernetes web server to port 80 - nginx

I have a Tornado web server + nginx + DNS. I moved the web server into a Kubernetes pod and did the same with nginx.
But I realized that I can't expose it on port 80, so I kept nginx outside of Kubernetes and replaced the web server IP with the pod's IP, which works without problems.
The problem is that each time the server restarts the pod's IP changes, and I have to manually update the IP in the nginx conf.
How can I either keep the pod's IP stable between restarts or expose nginx in a pod to the outside?

There are a few options:
1. Put a load balancer between DNS and the nginx ingress controller. The load balancer can accept traffic on port 80 and forward it to the NodePort on which the nginx ingress controller is exposed.
2. Alternatively, run the nginx ingress controller with hostNetwork: true in the deployment's pod spec, so that it listens on port 80 directly in the host's network namespace. Then configure DNS to send traffic to nodeip:80.
3. Create a ClusterIP-type Kubernetes Service and an Ingress resource to reach the pod via nginx. The nginx ingress controller forwards traffic to pod IPs directly, and nothing in this setup needs to change when a pod IP changes, because the controller watches for pod IP changes and updates nginx.conf accordingly. A sketch of this option follows.
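For that third option, a minimal sketch of the Service and Ingress, assuming the Tornado pods carry the label app: tornado-web and listen on port 8888 (both are placeholders for your actual setup):

apiVersion: v1
kind: Service
metadata:
  name: tornado-web
spec:
  type: ClusterIP
  selector:
    app: tornado-web        # placeholder pod label
  ports:
  - port: 80
    targetPort: 8888        # assumed Tornado listen port
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tornado-web
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tornado-web
            port:
              number: 80

With this in place the ingress controller tracks pod IPs for you, so nothing has to be edited by hand when a pod is rescheduled.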

Related

Unable to reach pod from outside the cluster when exposing an external IP via MetalLB

I'm trying to deploy an nginx Deployment to see if my cluster works properly, on a basic k8s installation on a VPS (kubeadm, Ubuntu 22.04, Kubernetes 1.24, containerd runtime).
I successfully deployed MetalLB via Helm on this VPS and assigned the public IP of the VPS to the address pool using the CRD (apiVersion: metallb.io/v1beta1, kind: IPAddressPool):
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
nginx LoadBalancer 10.106.57.195 145.181.xx.xx 80:31463/TCP
My goal is to send a request to the VPS's public IP, 145.181.xx.xx, and get the nginx test page.
The problem is that I get timeouts and connection refused when I try to reach this IP address from outside the cluster. Inside the cluster everything works correctly: calling 145.181.xx.xx from within the cluster returns the nginx test page.
There is no firewall issue: I tried setting up plain nginx without Kubernetes via systemctl, and I was able to reach port 80 on 145.181.xx.xx.
Any suggestions or ideas on what the problem could be, or how I can debug it?
I'm facing the same issue.
The Kubernetes cluster is deployed with Kubespray over 3 master and 5 worker nodes. MetalLB is deployed with Helm, and IPAddressPool and L2Advertisement are configured. I'm also deploying a simple nginx pod and a service to check whether MetalLB is working.
MetalLB assigns the first IP from the pool to the nginx service, and I'm able to curl the nginx default page from any node in the cluster. However, if I try to access this IP address from outside the cluster, I get timeouts.
But here is the fun part. When I modify the nginx manifest (renaming the deployment and service) and deploy it into the cluster (so that 2 nginx pods and services are present), MetalLB assigns another IP from the pool to the second nginx service, and I am able to access this second IP address from outside the cluster.
Unfortunately, I don't have an explanation or a solution for this issue, but I'm investigating it.
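For reference, a minimal sketch of the two MetalLB resources involved; the resource names are placeholders, metallb-system is the usual install namespace, and the masked address is the one from the question:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool          # placeholder name
  namespace: metallb-system
spec:
  addresses:
  - 145.181.xx.xx/32        # the VPS public IP, masked as in the question
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-adv              # placeholder name
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool

Since L2 mode works by answering ARP for the assigned address from one node, one debugging angle is to check from an external machine (for example with arping) whether the address resolves to any node's MAC at all.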

Set Nginx ingress in Kubernetes cluster GKE to subdomain DNS records?

Hello, I have a GKE cluster with a cert-manager pod and an ingress-nginx server pod.
The ingress-nginx server has two external IPs, serving ports 80 and 443.
I can access port 80, but it is insecure. How can I assign my subdomain, for example
dev.example.com, to this ingress IP address?
If I have both Cloud DNS and Hover set up, do they conflict with each other?
Thanks
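As a sketch: point an A record for dev.example.com at the ingress controller's external IP in whichever provider is authoritative for the zone (Cloud DNS and Hover only conflict if both claim the zone; the one your domain's NS records delegate to wins), and let cert-manager issue the certificate via the Ingress. The issuer letsencrypt-prod and the backend Service my-app below are hypothetical names:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dev-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # hypothetical issuer name
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - dev.example.com
    secretName: dev-example-com-tls    # cert-manager will store the certificate here
  rules:
  - host: dev.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app               # hypothetical backend Service
            port:
              number: 80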

Internal nginx alongside a k3s setup on one VPS

Currently I run nginx on a VPS and I want to install k3s. The VPS has two publicly reachable IP addresses, and I want the nginx on the VPS itself to respond on only one specific of the two addresses.
How can I set things up so that this internal nginx runs alongside k3s?
You can do that with NodePort. You can create an nginx Service of the NodePort type in K3s.
The NodePort will expose your service on the host on a specific port.
References:
Kubernetes docs: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
Rancher docs: https://rancher.com/docs/rancher/v2.x/en/v1.6-migration/expose-services/
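A minimal sketch of such a NodePort Service, assuming the in-cluster nginx pods are labeled app: nginx (a placeholder); the host's own nginx can then be restricted to the other address with a directive such as listen 192.0.2.10:80; in its server block:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx              # placeholder pod label
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080         # must fall within the default 30000-32767 range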

how to config basehref for kubernetes dashboard

The k8s cluster is installed on a host that only allows requests from the external network on port 443. That means all the pods managed by k8s can only be reached through port 443. I installed nginx on the host to act as a reverse proxy to the k8s cluster. I installed the dashboard and other apps in k8s. The k8s dashboard is exposed with a nodePort of 31117. How do I configure a base href for the k8s dashboard, so that for example https://ip/dashboard opens it?
You need to set up an upstream in nginx. Since you have exposed it as a NodePort:
location /dashboard/ {
    proxy_pass https://<any-node-ip>:<dashboard-node-port>/;
}
The trailing slash on the proxy_pass URL makes nginx strip the /dashboard/ prefix before passing the request on, so the dashboard sees requests at its root.
You can also look at the Ingress resource, with which you can achieve the same thing without hosting your own nginx server.
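For the Ingress route, a minimal sketch assuming a standard dashboard install (Service kubernetes-dashboard on port 443 in the kubernetes-dashboard namespace) and the ingress-nginx controller; the rewrite strips the /dashboard prefix much as the trailing slash does above:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"   # the dashboard itself serves TLS
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /dashboard(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443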

How to expose kubernetes nginx-ingress service on public node IP at port 80 / 443?

I installed ingress-nginx in a cluster. I tried exposing the service with type: NodePort, but this only allows a port range between 30000-32767 (AFAIK)... I need to expose the service on port 80 for HTTP and 443 for TLS, so that I can point A records for the domains directly at the service. Does anyone know how this can be done?
I tried type: LoadBalancer before, which worked fine, but it creates a new external load balancer at my cloud provider for each cluster. In my current situation I want to spawn multiple mini clusters. It would be too expensive to create a new (DigitalOcean) load balancer for each of them, so I decided to run each cluster with its own internal ingress controller and expose that directly on 80/443.
If you want one IP serving port 80 from a service, you could use the externalIPs field in the service config YAML. You can find how to write the YAML here:
Kubernetes External IP
But if your use case is really just getting the ingress controller up and running, the service does not need to be exposed externally.
If you are on bare metal, change your ingress-controller service type to NodePort and add a reverse proxy to send traffic to your ingress-controller service on the selected NodePort.
As @Pramod V answered, if you use externalIPs in the ingress-controller service, you lose the real remote address in your endpoints.
A more complete answer can be found here.
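For the externalIPs route, a minimal sketch of the controller Service, assuming the node's public address is 203.0.113.10 (a placeholder) and that your ingress-nginx pods match the selector below (verify the labels against your install):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed label; check your install
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  externalIPs:
  - 203.0.113.10            # placeholder: the node's public IP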
