Nginx on Kubernetes: only one node replies - nginx

I have deployed the nginx pod and the service in the k8s cluster via a YAML manifest. Everything looks good (service, node, pods). This is supposed to make nginx available from any node (http://nodeA:port, http://nodeB:port, etc.), but only one node replies (works).
All nodes have the firewall disabled.
All nodes run an identical OS.
Any ideas?
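For context, a minimal sketch of the kind of manifest described above (the names, image tag and nodePort are assumptions, not taken from the question); with a NodePort Service, kube-proxy is expected to expose the port on every node, which is the behaviour the question is counting on:

```yaml
# Hypothetical manifest approximating the setup described in the question.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment          # name is an assumption
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25       # any recent nginx image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort                  # kube-proxy should open this port on every node
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080             # http://<any-node-IP>:30080 should then work
```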

Related

Why is the nginx ingress controller deployed as a container (pod)?

My natural thought is that even if nginx were just a daemon process on the k8s node, rather than a pod (container) in the k8s cluster, it looks like it could still do the ingress controller's job, because:
Since the process runs on a k8s node, it can still talk to the apiserver to fetch backend pod information for the services, such as IP addresses, so it can still be used as an HTTP proxy server to direct traffic to the different services.
So, two questions:
Why does the nginx ingress controller have to be a pod?
Why does the nginx ingress controller get only one replica, and on which node does it run? If the controller pod dies, things will become unstable.
Thanks!
Why does the Nginx ingress controller have to be a pod?
It is possible to run the Nginx controller as a DaemonSet in Kubernetes (see the sketch below); running it as a plain process on the node is another matter.
Managing pods with a Kubernetes DaemonSet or Deployment is easy compared to managing a process on the node.
By default, no Nginx daemon process is part of a Kubernetes node; if your cluster autoscales, will you install the Nginx process manually on each new node?
If you are thinking of building your own AMI with the Nginx process inside, using it in the node pool and scaling that pool, it's possible, but what about OS patching and maintenance?
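A minimal sketch of the DaemonSet approach mentioned above. This is not a complete install: a real ingress-nginx deployment also needs a ServiceAccount, RBAC rules and an IngressClass, and the namespace, labels and image tag here are assumptions.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx        # assumed namespace; create it first
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      containers:
        - name: controller
          image: registry.k8s.io/ingress-nginx/controller:v1.9.4   # example tag
          ports:
            - containerPort: 80
            - containerPort: 443
```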
Why does the nginx ingress controller get only one replica, and on which node? If the controller pod dies, things will become unstable.
Running with replicas: 1 is just the default configuration; you can add an HPA and increase the replica count as needed. Nginx is lightweight, so handling a large volume of traffic does not require many replicas.
Still, as needed, you can run multiple replicas with an HPA, or increase the replica count manually, to get high availability (see the sketch below).
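A hedged sketch of such an HPA, assuming the controller runs as a Deployment named ingress-nginx-controller and that the metrics server is installed; the thresholds are illustrative only.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx          # assumed namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ingress-nginx-controller  # assumed Deployment name
  minReplicas: 2                    # at least two replicas for availability
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70    # scale out above ~70% average CPU
```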
Because Pods are how you run daemon processes (or really, all processes) inside Kubernetes. That's just how you run stuff. I suppose there is nothing stopping you from running it outside the cluster, manually setting up API configuration and authentication, doing all the needed networking bits yourself. But ... why?
As for replicas, you should indeed generally have more than one across multiple physical nodes for redundancy. A lot of the tutorials show it with replicas: 1 because either it's for a single-node dev cluster like Minikube or it's only an example.
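One way to get that redundancy, sketched under the assumption that the controller is a Deployment labelled app: ingress-nginx: run at least two replicas and spread them across nodes with a topology spread constraint (pod anti-affinity would work just as well).

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-controller    # assumed name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway   # prefer spreading, don't block scheduling
          labelSelector:
            matchLabels:
              app: ingress-nginx
      containers:
        - name: controller
          image: registry.k8s.io/ingress-nginx/controller:v1.9.4   # example tag
```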

Kubernetes on-premise Ingress traffic policy local

Kubernetes installed on premise,
nginx-ingress
a service with multiple pods on multiple nodes
All these nodes are running the nginx ingress.
The problem is that when a request comes in from the load balancer, it can jump to another worker that has a pod, which causes unnecessary traffic inside the worker network. I want to force that, when a request comes from outside to the ingress, the ingress always chooses pods on the same node; only if there are no local pods should it forward to other nodes.
More or less, this image represents my case:
[image: example]
I have the problem in the blue case; what I expect is the red case.
I saw exist the "externalTrafficPolicy: Local" but this only work for
serviceType nodePort/loadBalancer, nginx ingress try to connect using the "clusterIP" so it skips this functionality.
There are a way to have this feature working for clusterIP or something similar? I started to read about istio and linkerd, they seem so powerful but I don't see any parameter to configure this workflow.
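For reference, this is where the field mentioned above lives; a minimal sketch with assumed names and ports. As the question notes, it only affects NodePort/LoadBalancer Services, not the ClusterIP path the ingress uses to reach the pods.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx               # assumed Service name
spec:
  type: NodePort
  externalTrafficPolicy: Local      # only nodes with a local endpoint answer,
                                    # and the client source IP is preserved
  selector:
    app: ingress-nginx
  ports:
    - port: 80
      targetPort: 80
```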
You have to deploy the Ingress Controller with a nodeSelector so it runs only on specific nodes, labelled ingress or whatever you want. You can then create an LB on those node IPs, using simple health checks on ports 80 and 443 (just to update the zone in case of node failure) or, even better, a custom health-check endpoint.
As you said, externalTrafficPolicy: Local works only for NodePort/LoadBalancer Services; dealing with on-prem clusters is tough :)
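A sketch of that nodeSelector approach, assuming the dedicated nodes carry a label such as node-role/ingress=true (the label name, namespace and image tag are all assumptions); hostNetwork exposes 80/443 directly on those node IPs for the external LB to health-check.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx          # assumed namespace
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      nodeSelector:
        node-role/ingress: "true"   # kubectl label node <node> node-role/ingress=true
      hostNetwork: true             # listen on 80/443 on the node itself
      containers:
        - name: controller
          image: registry.k8s.io/ingress-nginx/controller:v1.9.4   # example tag
          ports:
            - containerPort: 80
            - containerPort: 443
```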

How to make ingress connect to pod in my network

My k8s master node has a public network IP, and the worker node is deployed in a private network. The worker node can connect to the master, but the master cannot connect to the worker node.
I have tested that I can deploy a pod with kubectl; the pod runs on the worker node and the master can watch its status. But when I deploy an ingress and access it on the master node, traffic cannot reach the worker node.
I use the flannel network.
I have tried using an SSH tunnel, but it is hard to manage.
Any suggestions are appreciated, thanks.
If you are deployed in a cloud environment, the most likely cause is incorrect firewall settings or route configurations. However, ingress configuration errors can also look like infrastructure problems at times.
The Ingress will redirect your requests to the different services that it is registered with. The endpoint health is also monitored and requests will only be sent to active and healthy endpoints. My troubleshooting flow is as follows:
Hit an unregistered path on your URL and check whether you get the default backend response. If not, then your ingress controller may not be correctly set up (whether it be the domain name, access rules, or just configuration). If yes, then your ingress controller should be correctly set up, and this is a problem with the Ingress definition or the backend.
Try hitting your registered path on your URL. If you get a 504 Gateway Timeout, then your endpoint is accepting the request but not responding correctly. You can follow the target pod's logs to figure out whether it is behaving properly.
If you get a 503 Service Unavailable, then your service might be down or deemed unhealthy by the ingress. In this case, you should definitely verify that your pods are running properly.
Check your nginx-ingress-controller logs to see how the requests are being redirected and what the internal responses are.
All your nodes and the master should be able to communicate with each other; without this you are going to have problems with cluster functionality.
The ingress creates a load balancer pointing to your node machines.
Why can't your master connect to your nodes?
Take a look at:
https://kubernetes.io/docs/concepts/architecture/master-node-communication/

Traefik instance load-balancing to Kubernetes NodePort services

Intro:
On AWS, Loadbalancers are expensive ($20/month + usage), so I'm looking for a way to achieve flexible load-balancing between the k8s nodes, without having to pay that expense. The load is not that big, so I don't need the scalability of the AWS load balancer any time soon. I just need services to be HA. I can get a small EC2 instance for $3.5/month that can easily handle the current traffic, so I'm chasing that option now.
Current setup
Currently, I've set up a regular standalone Nginx instance (outside of k8s) that does load balancing between the nodes in my cluster, on which all services are set up to expose through NodePorts. This works really well, but whenever my cluster topology changes during restarts, adding, restarting or removing nodes, I have to manually update the upstream config on the Nginx instance, which is far from optimal, given that cluster nodes cannot be expected to stay around forever.
So the question is:
Can Træfik be set up outside of Kubernetes to do simple load-balancing between the Kubernetes nodes, just like my Nginx setup, while keeping the upstream/backend servers of the Traefik config in sync with the Kubernetes list of nodes, so that my Kubernetes services stay HA when I make changes to my node setup? All I really need is for Træfik to listen to the Kubernetes API and change the backend servers whenever the cluster changes.
Sounds simple, right? ;-)
Looking at the Træfik documentation, it seems to want an Ingress resource to send its traffic to, and an Ingress resource requires an ingress controller, which, I guess, requires a load balancer to become accessible? Doesn't that defeat the purpose, or is there something I'm missing?
Here is something that could be useful in your case: https://github.com/unibet/ext_nginx. But I'm not sure whether the project is still in development, and configuration is probably hard, as you need to allow the external ingress to access the internal k8s network.
Maybe you can try to do it at the AWS level? You could add a cron job on the Nginx EC2 instance that queries AWS via the CLI for all EC2 instances tagged "k8s" and updates the nginx configuration if something has changed.

WSO2 ESB 4.8.1 Clustering

Is it possible to set up one ESB node in a dual role, as both worker and manager?
I'm using WSO2 ESB 4.8.1 and nginx as the load balancer.
This is pretty easy. This is what you have to do.
Forget about nginx for now and set up the ESB cluster. Let's say a cluster with one manager and one worker. I think you will be able to get it done by following the instructions here. Instead of the WSO2 ELB mentioned in the doc, you are going to use nginx. Instead of the ELB, you set the management and worker nodes as the well-known members; i.e. on both nodes, you set both nodes as the well-known members.
Once you have the cluster working, you should be able to send requests to an artifact deployed on each node separately. The difference between the manager node and the worker node is that the manager node is the only one that commits to the SVN repo. So, when you deploy new artifacts, you should deploy them through the manager node.
Now you have to configure two sites in nginx. Let's assume you decided to use esbmgt.mydomain.com for the management node and esb.mydomain.com for the worker. In esbmgt's upstream, you mention only the manager node and route the requests to port 9443 of that node. In esb's upstream, you mention both nodes, and the requests are routed to ports 8280 (HTTP) and 8243 (HTTPS). That's because the ESB serves requests on those ports, while the UI is exposed via 9443 (HTTPS).
I hope the above information will help you.
