Is it possible to run a single ESB node in a dual role, as both worker and manager?
I'm using WSO2 ESB 4.8.1 with nginx as the load balancer.
This is pretty easy. Here is what you have to do.
Forget about nginx for the moment and set up the ESB cluster. Let's say a cluster with one manager and one worker. You should be able to get it done by following the instructions here. Instead of the WSO2 ELB mentioned in the doc, you are going to use nginx; so, instead of the ELB, you set the manager and worker nodes themselves as the well-known members, i.e. in both nodes you list both nodes as the well-known members.
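For reference, the well-known member list lives in the clustering section of axis2.xml on both nodes. A rough sketch, assuming the default Hazelcast-based clustering agent of ESB 4.8.1 (hostnames, domain, and ports are placeholders):

<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">wso2.esb.domain</parameter>
    <parameter name="localMemberPort">4000</parameter>
    <!-- both nodes list both nodes as well-known members -->
    <members>
        <member>
            <hostName>esbmgt-node.mydomain.com</hostName>
            <port>4000</port>
        </member>
        <member>
            <hostName>esbworker-node.mydomain.com</hostName>
            <port>4000</port>
        </member>
    </members>
</clustering>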
Once you have the cluster working, you should be able to send requests to an artifact deployed on either node separately. The difference between the manager node and the worker node is that only the manager commits to the SVN repository, so when you deploy new artifacts you should deploy them through the manager node.
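Deployment synchronization is configured in carbon.xml; a sketch of what the manager node's settings could look like (workers set AutoCommit to false; the SVN URL and credentials are placeholders):

<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>true</AutoCommit>    <!-- only the manager commits -->
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>http://svn.mydomain.com/wso2/repo</SvnUrl>
    <SvnUser>username</SvnUser>
    <SvnPassword>password</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>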
Now you have to configure two sites in nginx. Let's assume you decide to use esbmgt.mydomain.com for the management node and esb.mydomain.com for the worker. In esbmgt's upstream, you mention only the manager node and route requests to port 9443 of that node. In esb's upstream, you mention both nodes and route requests to ports 8280 (HTTP) and 8243 (HTTPS). That's because the ESB serves traffic on those ports, while the management UI is exposed via 9443 (HTTPS).
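A minimal sketch of those two nginx sites (node hostnames and certificate paths are placeholders; hardening omitted):

upstream esbmgt {
    server esbmgt-node.mydomain.com:9443;    # manager only
}

upstream esbworkers {
    server esbmgt-node.mydomain.com:8280;    # both nodes serve traffic
    server esbworker-node.mydomain.com:8280;
}

server {
    listen 443 ssl;
    server_name esbmgt.mydomain.com;
    ssl_certificate /etc/nginx/ssl/esbmgt.crt;
    ssl_certificate_key /etc/nginx/ssl/esbmgt.key;
    location / {
        proxy_pass https://esbmgt;    # management UI on 9443
    }
}

server {
    listen 80;
    server_name esb.mydomain.com;
    location / {
        proxy_pass http://esbworkers;    # service traffic on 8280
    }
}

An equivalent ssl server block pointing at an upstream on port 8243 covers the HTTPS service traffic.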
I hope the above information will help you.
I have 3 containers on ECS: web, api, and nginx. Basically, nginx is proxying traffic to the web and api containers:
upstream web {
    server web-container:3000;
}

upstream api {
    server api-container:3001;
}
But every time I redeploy web or api, they change their IPs, so I need to redeploy nginx afterwards in order to make it "pick up" the new IPs.
Is there a way to avoid this, so that I could update, say, the api service, and the nginx service would automatically proxy to the correct IP address?
I assume these containers belong to 3 different task definitions and ultimately 3 different tasks (or better, 3 different services).
If that is the setup, then you want to use service discovery for this. It only works with ECS services, and the idea is that you create 3 distinct services, each with 1+ tasks in it. You give each service a name (e.g. nginx, web, api), and each container in them is able to resolve the other containers by pointing to an FQDN (e.g. api.local). When your container in the nginx service tries to connect to api.local, service discovery resolves that name to the IP of one of the tasks in the ECS service api.
If you want to see an example of how this is set up, you can look at this demo app and particularly at this CloudFormation template.
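On the nginx side, one way to stop caching stale IPs (assuming the hypothetical service-discovery names web.local and api.local from above) is to make nginx re-resolve the names at runtime via a resolver and a variable; a sketch:

server {
    listen 80;

    # Route 53 Resolver link-local address in a VPC; re-resolve names every 10s
    resolver 169.254.169.253 valid=10s;

    location /api/ {
        set $api_backend http://api.local:3001;    # a variable in proxy_pass forces per-request DNS resolution
        proxy_pass $api_backend;
    }

    location / {
        set $web_backend http://web.local:3000;
        proxy_pass $web_backend;
    }
}

With static upstream blocks, nginx resolves hostnames once at startup, which is exactly why redeploys currently require restarting nginx.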
As far as I know, Instance Management and the Controller have the same function, namely managing NGINX Plus and its instances, but that doesn't quite make sense to me.
So my questions are:
What are the differences between Instance Management and Controller?
What is Ingress Controller?
Nginx Instance Manager: NGINX Instance Manager empowers you to:

- Automate configuration and monitoring using APIs.
- Ensure your fleet of NGINX web servers and proxies have fixes for active CVEs.
- Seamlessly integrate with third-party monitoring solutions such as Prometheus and Grafana for insights.

For example, if you have multiple servers running Nginx, it provides a dashboard where all events can be monitored, including spikes in specific events, or spotting that one VM out of many has not been updated; you can monitor the whole inventory. To achieve this, the nginx-agent needs to be installed alongside the Nginx server on each host.
Nginx Controller: NGINX Controller is cloud‑agnostic and includes a set of enterprise‑grade services that give you a clear line of sight to apps in development, test, or production. With per‑app analytics, you gain new insights into app performance and reliability so you can pinpoint performance issues before they impact production.
Example: to enable an Ingress, you first need an Ingress Controller to be enabled.
Nginx Ingress: Each LoadBalancer service requires its own load balancer with its own public IP address, whereas an Ingress only requires one, even when providing access to dozens of services. When a client sends an HTTP request to the Ingress, the host and path in the request determine which service the request is forwarded to.
An example is the ingress controller on Google Kubernetes Engine.
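A minimal sketch of an Ingress doing that host- and path-based routing (all names are hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: shop.example.com      # the Host header picks this rule
    http:
      paths:
      - path: /                 # the path picks the backend service
        pathType: Prefix
        backend:
          service:
            name: shop-frontend
            port:
              number: 80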
I'm planning to build a website to host static files. Users will upload their files, and I deploy a bunch of Deployments with nginx images onto a Kubernetes node. My main goal is that, at some point, users will deploy their apps to a subdomain like my-blog-app.mysite.com. Later on, users can use custom domains.
I understand that when I deploy an nginx image to a pod, I have to create a Service to expose port 80 (or 443) to the internet via a load balancer.
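For reference, that per-site exposure would be a Service of type LoadBalancer, something like this (names are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: my-blog-app
spec:
  type: LoadBalancer    # provisions one external load balancer per Service
  selector:
    app: my-blog-app    # must match the labels on the site's nginx pod
  ports:
  - port: 80
    targetPort: 80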
I also read about Ingress; it looks like what I need, but I don't think I fully understand the concept.
My question is: if I have, for example, 500 nginx pods running (each a different website), do I need a Service for every pod on that node (in this case, 500 Services)?
You are looking for https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting.
With this type of Ingress, you route traffic to the different nginx instances based on the Host header, which perfectly matches your use case.
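A sketch of such an Ingress, with two hypothetical sites sharing the same entry point:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user-sites
spec:
  rules:
  - host: my-blog-app.mysite.com     # routed purely by Host header
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-blog-app        # the Service in front of that site's nginx pod
            port:
              number: 80
  - host: another-app.mysite.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: another-app
            port:
              number: 80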
In any case, yes: with your current architecture you need a Service for each pod. Have you considered a different approach, such as a single general listener (a shared pool of nginx instances) that serves the correct content based on authorization or something similar?
My k8s master node has a public network IP, and the worker node is deployed in a private network. The worker node can connect to the master, but the master cannot connect to the worker node.
I have tested that I can deploy a pod with kubectl; the pod runs on the worker node, and the master can watch the pod's status. But when I deploy an Ingress and access it on the master node, traffic does not reach the worker node.
I use flannel for networking.
I have tried using an SSH tunnel, but it is hard to manage.
Are there any suggestions? Thanks.
If you are deployed in a cloud environment, the most likely cause is incorrect firewall settings or route configurations. However, ingress configuration errors can also look like infrastructure problems at times.
The Ingress will redirect your requests to the different services it is registered with. Endpoint health is also monitored, and requests are only sent to active and healthy endpoints. My troubleshooting flow is as follows:
Hit an unregistered path on your URL and check whether you get the default backend response. If not, your ingress controller may not be set up correctly (whether that's the domain name, access rules, or just configuration). If yes, your ingress controller should be correctly set up, and the problem lies with the Ingress definition or the backend.
Try hitting your registered path on your URL. If you get a 504 Gateway Timeout, your endpoint is accepting the request but not responding correctly. You can follow the target pod's logs to figure out whether it is behaving properly.
If you get a 503 Service Unavailable, then your service might be down or deemed unhealthy by the ingress. In this case, you should definitely verify that your pods are running properly.
Check your nginx-ingress-controller logs to see how the requests are being redirected and what the internal responses are.
All your nodes and the master should be able to communicate with each other; without this, you are going to have problems with cluster functionality.
The Ingress creates a load balancer pointing to your node machines.
Why can't your master connect to your nodes?
Take a look at:
https://kubernetes.io/docs/concepts/architecture/master-node-communication/
Intro:
On AWS, load balancers are expensive ($20/month plus usage), so I'm looking for a way to achieve flexible load balancing between the k8s nodes without paying that expense. The load is not that big, so I don't need the scalability of the AWS load balancer any time soon. I just need the services to be HA. I can get a small EC2 instance for $3.50/month that easily handles the current traffic, so I'm pursuing that option now.
Current setup
Currently, I've set up a regular standalone Nginx instance (outside of k8s) that load-balances between the nodes in my cluster, on which all services are exposed through NodePorts. This works really well, but whenever my cluster topology changes during restarts, or when nodes are added or removed, I have to manually update the upstream config on the Nginx instance, which is far from optimal, given that cluster nodes cannot be expected to stay around forever.
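For context, a sketch of the manually maintained config (node IPs and the NodePort are placeholders); the server list is what has to be edited on every topology change:

upstream k8s_nodes {
    server 10.0.1.11:30080;    # node 1, NodePort of the exposed service
    server 10.0.1.12:30080;    # node 2
    server 10.0.1.13:30080;    # node 3
}

server {
    listen 80;
    location / {
        proxy_pass http://k8s_nodes;
    }
}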
So the question is:
Can Træfik be set up outside of Kubernetes to do simple load balancing between the Kubernetes nodes, just like my Nginx setup, but keep the upstream/backend servers of the Træfik config in sync with the Kubernetes list of nodes, so that my Kubernetes services are still HA when I make changes to my node setup? All I really need is for Træfik to listen to the Kubernetes API and change the backend servers whenever the cluster changes.
Sounds simple, right? ;-)
When looking at the Træfik documentation, it seems to want an Ingress resource to send its traffic to, and an Ingress resource requires an ingress controller, which, I guess, requires a load balancer to become accessible? Doesn't that defeat the purpose, or is there something I'm missing?
Here is something that could be useful in your case: https://github.com/unibet/ext_nginx. But I'm not sure if the project is still in development, and configuration is probably hard, since you need to allow external ingress to access the internal k8s network.
Maybe you can try to do that at the AWS level? You could add a cron job on the Nginx EC2 instance that queries AWS via the CLI for all EC2 instances tagged "k8s" and updates the nginx configuration if something has changed.