AWS Nginx ALB Port Configuration

I used Terraform to deploy my k8s cluster, and kubectl to deploy nginx on my worker nodes. Again using kubectl, creating a LoadBalancer Service targeting the nginx deployment on port 80 worked perfectly fine. I wanted to test using an ALB rather than an ELB.
I deleted the ELB and then used the EC2 console to set up a target group.
The target group uses port 80, is on the same VPC, and targets the two worker nodes.
Next I created an ALB, which is internet-facing, uses the same security group as the nodes, and again is on the same VPC. It listens on port 80 and forwards traffic to my target group.
I can't access nginx using the DNS name. I'm fairly sure it has to do with my port configuration?

Kubernetes does not natively support ALBs. Use the AWS ALB Ingress Controller:
https://github.com/kubernetes-sigs/aws-alb-ingress-controller
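With that controller installed, the ALB and target group are created for you from an Ingress resource instead of by hand. A minimal sketch might look like the following (the service name nginx and port 80 are assumptions carried over from the question):

```yaml
# Sketch: an Ingress the ALB ingress controller (now the AWS Load
# Balancer Controller) turns into an internet-facing ALB.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance  # register worker nodes, like a manual target group
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx   # assumed service name
                port:
                  number: 80
```

Note that with a hand-built target group on port 80, the nodes never answer: a LoadBalancer/NodePort Service listens on a port in the 30000-32767 range on the nodes, not on 80, which is why the controller-managed approach is simpler.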

Related

Unable to reach pod from outside the cluster when exposing an external IP via MetalLB

I tried deploying an nginx Deployment to see whether my cluster is working properly, on a basic k8s install on a VPS (kubeadm, Ubuntu 22.04, Kubernetes 1.24, containerd runtime).
I successfully deployed MetalLB via Helm on this VPS and assigned the public IP of the VPS to the pool using the CRD (apiVersion: metallb.io/v1beta1, kind: IPAddressPool):

NAME    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)
nginx   LoadBalancer   10.106.57.195   145.181.xx.xx   80:31463/TCP

My goal is to send a request to the public IP of the VPS, 145.181.xx.xx, and get the nginx test page.
The problem is that I get timeouts and connection refused when I try to reach this IP address from outside the cluster. Inside the cluster everything works correctly: calling 145.181.xx.xx from within the cluster returns the nginx test page.
There is no firewall issue - I tried setting up plain nginx without Kubernetes via systemctl and I was able to reach port 80 on 145.181.xx.xx.
Any suggestions or ideas what the problem could be, or how I can try to debug it?
I'm facing the same issue.
The Kubernetes cluster is deployed with Kubespray over 3 master and 5 worker nodes. MetalLB is deployed with Helm, and IPAddressPool and L2Advertisement are configured. I'm also deploying a simple nginx pod and a service to check whether MetalLB is working.
MetalLB assigns the first IP from the pool to the nginx service, and I'm able to curl the nginx default page from any node in the cluster. However, if I try to access this IP address from outside the cluster, I get timeouts.
But here is the fun part: when I modify the nginx manifest (rename the deployment and service) and deploy it in the cluster (so 2 nginx pods and services are present), MetalLB assigns another IP from the pool to the second nginx service, and I'm able to access this second IP address from outside the cluster.
Unfortunately, I don't have an explanation or a solution to this issue, but I'm investigating it.
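For reference, a minimal layer-2 MetalLB setup using the CRDs mentioned in both reports might look like this sketch (resource names are placeholders; the address is the masked VPS IP from the question):

```yaml
# Sketch of a minimal MetalLB L2 configuration; names are placeholders.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: vps-pool
  namespace: metallb-system
spec:
  addresses:
    - 145.181.xx.xx/32    # masked public IP from the question
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: vps-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - vps-pool            # advertise only this pool
```

A pool with no matching L2Advertisement (or BGPAdvertisement) is assigned to services but never announced to the outside, which matches the "works inside, times out outside" symptom.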

Internal nginx alongside a k3s setup on one VPS

Currently I run nginx on a VPS and I want to install k3s. The VPS has two publicly reachable IP addresses, and I want the host nginx itself to respond on only one specific of these two addresses.
How can I run the internal nginx alongside k3s?
You can do that with NodePort: create an nginx Service of type NodePort in k3s.
A NodePort exposes your service on the host on a specific port.
References:
Kubernetes docs: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
Rancher docs: https://rancher.com/docs/rancher/v2.x/en/v1.6-migration/expose-services/

Exposing an application deployed on a Kubernetes cluster in front of Bigip

We have an application deployed to a Kubernetes cluster on a bare-metal system. I have exposed the service as NodePort. We need to expose the service to the outside world using the domain name myapp.example.com. We have created the necessary DNS mapping and configured our VIP on our Bigip load balancer. I would like to know which ingress solution we need to implement: the Nginx/Kubernetes one or the Bigip controller? Does the Nginx/Kubernetes ingress controller support Bigip, and how do we need to expose ingress-nginx - as type LoadBalancer or NodePort?
I haven't used Bigip much, but I found that they have a controller for Kubernetes.
That said, I think the simplest way, if you already have the Bigip load balancer set up and a k8s cluster running, is to create a NodePort service for the pod you want to expose and get the node port number of that service (let's assume 30001). This port is now open on every node and can be used to reach the service inside k8s using a node's IP. Then configure the Bigip load balancer pool to forward all incoming traffic to < Node's IP >:30001.
All this is theory from what I know about k8s and how it works. Give it a try and let me know if it works.
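The NodePort service described above might look like this sketch (the service name, the app label, and port 30001 are assumptions carried over from the answer):

```yaml
# Sketch of a NodePort Service the Bigip pool can target.
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport   # hypothetical name
spec:
  type: NodePort
  selector:
    app: myapp           # assumed pod label
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30001    # the Bigip pool then forwards to <Node IP>:30001
```

Pinning nodePort explicitly (rather than letting Kubernetes pick one) keeps the Bigip pool configuration stable across redeployments.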

Difference between metalLB and NodePort

What is difference between MetalLB and NodePort?
A node port is a built-in feature that allows users to access a service from the IP of any k8s node using a static port. The main drawback of using node ports is that your port must be in the range 30000-32767 and that there can, of course, be no overlapping node ports among services. Using node ports also forces you to expose your k8s nodes to users who need to access your services, which could pose security risks.
MetalLB is a third-party load balancer implementation for bare-metal servers. A load balancer exposes a service on an IP external to your k8s cluster, at any port of your choosing, and routes those requests to your k8s nodes.
MetalLB can be deployed either with a simple Kubernetes manifest or with Helm.
MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. In older MetalLB releases this pool was defined in a ConfigMap named config in the same namespace as the MetalLB controller; current releases define it with the IPAddressPool CRD instead. This pool of IPs must be dedicated to MetalLB's use: you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server.
A NodePort is an open port on every node of your cluster. Kubernetes transparently routes incoming traffic on the NodePort to your service, even if your application is running on a different node.
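To make the contrast concrete: with MetalLB installed, an ordinary LoadBalancer Service is enough to get an external IP from the pool on a standard port. A sketch (the nginx selector is an assumption):

```yaml
# Sketch: MetalLB fulfils a plain LoadBalancer Service on bare metal.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer    # MetalLB assigns an IP from its configured pool
  selector:
    app: nginx          # assumed pod label
  ports:
    - port: 80          # any port of your choosing, unlike NodePort's 30000-32767 range
      targetPort: 80
```

The same manifest with type: NodePort would instead open a high port on every node and require clients to know a node IP.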

How to expose kubernetes nginx-ingress service on public node IP at port 80 / 443?

I installed ingress-nginx in a cluster. I tried exposing the service with type: NodePort, but this only allows a port range of 30000-32767 (AFAIK)... I need to expose the service on port 80 for HTTP and 443 for TLS, so that I can point A records for the domains directly at the service. Does anyone know how this can be done?
I tried type: LoadBalancer before, which worked fine, but this creates a new external load balancer at my cloud provider for each cluster. In my current situation I want to spawn multiple mini clusters. It would be too expensive to create a new (DigitalOcean) load balancer for each of those, so I decided to run each cluster with its own internal ingress controller and expose that directly on 80/443.
If you want an IP on port 80 from a service, you can use the externalIPs field in the service YAML. You can find how to write the YAML here:
Kubernetes External IP
But if your use case is really just getting the ingress controller up and running, the service does not need to be exposed externally.
If you are on bare metal, change your ingress-controller Service type to NodePort and add a reverse proxy in front to route traffic to the ingress-controller service on the chosen NodePort.
As Pramod V answered, if you use externalIPs in the ingress-controller service, you lose the real remote address in your endpoints.
A more complete answer can be found here.
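The externalIPs approach mentioned above could be sketched like this (the namespace, labels, and the placeholder address 203.0.113.10 are assumptions; substitute the node's real public IP):

```yaml
# Sketch: kube-proxy captures traffic to the listed external IP on the
# Service ports, so 80/443 work without a cloud load balancer.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  externalIPs:
    - 203.0.113.10        # placeholder for the node's public address
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed controller labels
```

As noted above, the trade-off is that the ingress controller may no longer see the real client address, since traffic is NATed on its way to the pod.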
