What I know / have running:
I have a running Rancher HA setup (2.4.2) on vSphere with an L4 nginx LB in front of it. Accessing the UI and provisioning new clusters (vSphere node driver) works great. I know I'm not in the cloud and cannot use an L7 LB (apart from nip.io or MetalLB, maybe), and deploying workloads and exposing them via NodePort works great (so the workloads are available on the specified port on each node a matching pod is running on).
My question:
Is it possible to expose applications (maybe via Ingress) on any of my running clusters under the domain/address where I can access the Rancher UI (in my case: https://rancher-things.local)? For example, if I deployed a Harbor registry, could I expose it externally (local network, not public) as something like https://rancher-things.local/harbor? Or, if that would not work, is it possible to deploy an L4 load balancer for accessing applications on, or in front of, a specific cluster?
thank you.
There should already be an ingress resource which exposes the Rancher UI. You can edit that ingress and add a path /harbor to route the traffic to the service for Harbor:
paths:
- path: /harbor
  backend:
    serviceName: harbor
    servicePort: 80
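For context, a sketch of what the full rule might look like, assuming the older extensions/v1beta1 Ingress API that matches the serviceName/servicePort fields above (the ingress name here is illustrative; the Rancher ingress typically lives in cattle-system):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rancher-and-harbor    # illustrative name
  namespace: cattle-system    # where the Rancher ingress typically lives
spec:
  rules:
  - host: rancher-things.local
    http:
      paths:
      - path: /harbor
        backend:
          serviceName: harbor
          servicePort: 80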
@arghya-sadhu, the LB is pointing to the HA cluster (a.k.a. the upstream/management/RKE/HA cluster) running Rancher, not Harbor. It's not recommended to create any other ingresses in this HA cluster. Also, I think the Harbor workload is running in a downstream cluster, and there is no LB pointing to the nodes of that cluster.
Patrick, you can create a service exposing your application's port via HTTP and use Rancher's proxy mechanism to access the UI of your app via the Rancher URL. If you have monitoring enabled in your setup, you can look at how the Grafana UI is exposed via this mechanism.
After creating the service, you can find the URL info using the following command:
kubectl -n <your_app_namespace> cluster-info
# or
kubectl cluster-info -A
The downside of this approach is that you don't have a dedicated LoadBalancer handling the traffic, but for a smaller-scale setup this should be OK.
Example URL for Grafana:
https://<rancher-fqdn>/k8s/clusters/<cluster-id>/api/v1/namespaces/cattle-prometheus/services/http:access-grafana:80
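As a rough sketch of that pattern (the names here are illustrative, not from the original setup), the service could look like this:

apiVersion: v1
kind: Service
metadata:
  name: access-myapp        # illustrative name
  namespace: my-app-ns      # illustrative namespace
spec:
  selector:
    app: myapp
  ports:
  - name: http
    port: 80
    targetPort: 8080        # whatever port the app container listens on

which, following the Grafana URL pattern above, would then be reachable at https://<rancher-fqdn>/k8s/clusters/<cluster-id>/api/v1/namespaces/my-app-ns/services/http:access-myapp:80.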
Related
I have deployed my application in a cluster of 3 nodes. To make this application externally accessible, I followed this documentation and integrated the nginx ingress controller.
Now when I check my Google Load Balancer console, I can see a new load balancer was created and everything works fine. But the strange thing is that two of my nodes are unhealthy and only one node is accepting connections. Then I found this discussion and understood that only the node running the nginx ingress controller pod will be healthy for the load balancer.
Now I find it hard to understand this data flow and the use of an external load balancer here. We use an external load balancer to balance the load across multiple machines. But with this configuration the external load balancer will always forward traffic to the node with the nginx ingress controller pod. If that is correct, what is the role of the external load balancer here?
You can have more than one replica of the nginx ingress controller pods, deployed across more than one Kubernetes node for high availability, to reduce the possibility of downtime in case one Kubernetes node becomes unavailable. The LoadBalancer will send the request to one of those nginx ingress controller pods, and from there it will be forwarded to one of the backend pods. The role of the external load balancer is to expose the nginx ingress controller pods outside the cluster. Because NodePort is not recommended for production use and ClusterIP cannot be used to expose pods outside the cluster, LoadBalancer is the viable option.
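For example, a quick way to scale the controller out (the deployment name and namespace here are assumptions that depend on how ingress-nginx was installed):

kubectl -n ingress-nginx scale deployment ingress-nginx-controller --replicas=3

A pod anti-affinity rule can additionally ensure the replicas land on different nodes.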
I have a k8s cluster that I run. I have the necessary scripts that can create the cluster and destroy it as needed. I have some of my applications running on this cluster. Recently I configured an Ingress controller service and I'm routing my traffic to the application services via this Ingress controller.
I would like to access the applications using hostnames which I have defined in my Ingress rules. For example, when I need to access application A, which has a service IP like 192.168.0.100, I would use the hostname application.a.local. For this I need to edit the /etc/hosts file on the machine that is running the cluster and add an entry there, as shown below. This is a manual approach. How could I automate this?
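For illustration, the manual entry would look something like this (IP and hostname taken from the example above):

# /etc/hosts
192.168.0.100   application.a.local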
Is there a better approach to configure hostname mapping? Any suggestions?
I have a Load Balancer IP provided by OVH that I want to use with the Nginx Ingress Controller, but on an on-premises cluster. There are several guides for doing that using OVH Managed Kubernetes, but that is not possible for me since I already have a cluster.
I tried to use the loadBalancerIP option, both with Helm and without Helm...
You should expose the Nginx Ingress Controller as NodePort and point your OVH Load Balancer to your workers as endpoints:
User ---> OVH LB ---> Nginx Ingress on workers
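A minimal sketch of that NodePort exposure, assuming a standard ingress-nginx install (the labels and namespace vary by installation method; the nodePort values are arbitrary picks from the 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    nodePort: 30080
  - name: https
    port: 443
    nodePort: 30443

The OVH LB would then forward ports 80/443 to 30080/30443 on each worker.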
Thank you both for your answers. I tried what you recommended but I think I'm missing a point. To be more clear:
1/ The user part -> I have an OVH LB connected to a set of 3 nodes; this LB selects a node to be used by a user (round robin).
2/ Once a node has been selected, the user should be able to access whatever service is inside Kubernetes, even if the service is not on this node, by using the LoadBalancer IP.
For the 2nd point, I tried to expose/create an endpoint for the Nginx Ingress Controller where I gave the LB's IP, but I don't know if I have to create an Ingress object for each service (only 2-3, like Grafana, Prometheus...). I tried it but it didn't work. I also tried to create an Ingress for the service where I gave the LB IP, but it didn't work. Note that my k8s cluster is on LXD containers which live inside 3 connected servers (one LXD container per server node). Also, concerning the OVH LoadBalancer, I'm not very clear on the purpose of Outbound IPs (which is a CIDR range)...
I understand that my OVH LB cannot be auto-provisioned, but since its job is done outside of k8s (just assigning a node to a user), the problem is: how can the node route to the service based on a URL like grafana.example.com? I was using MetalLB as an internal LB and it worked fine, but now I'm struggling with the OVH LB...
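For what it's worth, host-based routing is exactly what one Ingress object per service provides: once the OVH LB delivers a request to the ingress controller's NodePort on any node, the controller matches the Host header and forwards to the right service. A hedged sketch for the Grafana case (the API version, namespace, service name and port are assumptions):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: monitoring       # assumed namespace
spec:
  rules:
  - host: grafana.example.com
    http:
      paths:
      - backend:
          serviceName: grafana   # assumed service name
          servicePort: 3000      # Grafana's default port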
My problem is simple. I have an AKS deployment with a LoadBalancer service that needs to use HTTPS with a certificate.
How do I do this?
Everything I'm seeing online involves Ingress and nginx-ingress in particular.
But my deployment is not a website; it's a Dropwizard service with a REST API on one port and an admin service on another port. I don't want to map the ports to a path on port 80; I want to keep the ports as they are. Why is HTTPS tied to ingress?
I just want HTTPS with a certificate and nothing else changed. Is there a simple solution to this?
A sidecar container running nginx with the correct certificates (possibly loaded from a Secret or a ConfigMap) will do the job without ingress. This seems to be a good example, using the nginx-ssl-proxy container.
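As a rough sketch of that sidecar pattern (all names and images here are illustrative assumptions, not taken from the linked example):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dropwizard-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dropwizard-app
  template:
    metadata:
      labels:
        app: dropwizard-app
    spec:
      containers:
      - name: app
        image: registry.example.com/dropwizard-app:latest  # illustrative image
        ports:
        - containerPort: 8080           # REST API port
      - name: tls-proxy                 # nginx sidecar terminating TLS
        image: nginx:stable
        ports:
        - containerPort: 8443
        volumeMounts:
        - name: tls-certs
          mountPath: /etc/nginx/certs
          readOnly: true
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d
          readOnly: true
      volumes:
      - name: tls-certs
        secret:
          secretName: app-tls           # assumed Secret holding tls.crt/tls.key
      - name: nginx-conf
        configMap:
          name: nginx-tls-proxy         # assumed ConfigMap with a server block proxying 8443 -> localhost:8080

The LoadBalancer service would then target the sidecar's port 8443 instead of the app port.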
Yes, that's right. As of this writing, an Ingress will work on either port 80 or port 443; potentially it could be extended to use any port, because nginx, Traefik, haproxy, etc. can all listen on different ports.
So you are down to either a LoadBalancer or a NodePort type of service. Type LoadBalancer will not work directly with TLS, since the Azure load balancers are layer 4. So you will have to use an Application Gateway, and it's preferred to use an internal load balancer for security reasons.
Since you are using Azure, you can run something like this (assuming that your K8s cluster is configured the right way to use the Azure cloud provider, either via the --cloud-provider option or the cloud-controller-manager):
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: your-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: <your-port>
  selector:
    app: your-app
EOF
and that will create an Azure load balancer on the port you like for your service. Behind the scenes, the load balancer will point to a port on the nodes, and within the nodes there will be firewall rules that route to your container. Then you can configure the Application Gateway. Here's a good article describing it, but using port 80; you will have to change it to use port 443 and configure the TLS certs. The Application Gateway also supports end-to-end TLS, in case you want to terminate TLS directly on your app too.
The other option is NodePort, and you can run something like this:
$ kubectl expose deployment <deployment-name> --type=NodePort
Then Kubernetes will pick a random port on all your nodes where you can send traffic to your service listening on <your-port>. So, in this case, you will have to manually create a load balancer that terminates TLS, listens on TLS <your-port>, and forwards to the NodePort on all your nodes; this load balancer can be anything like haproxy, nginx, Traefik, or something else that supports terminating TLS. You can also use the Application Gateway to forward directly to your node ports; in other words, define a listener that listens on the NodePort of your cluster.
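To see which port Kubernetes picked, a standard query is (the service created by kubectl expose inherits the deployment name):

$ kubectl get svc <deployment-name> -o jsonpath='{.spec.ports[0].nodePort}'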
I have a specific use case that I cannot seem to solve.
A typical gcloud setup:
A K8S cluster
A gcloud storage bucket
A gcloud loadbalancer
I managed to get my domain https://cdn.foobar.com/uploads/ to point to a Google storage backend without any issue: I can access files. It's the backend service that fails.
I would like the CDN to act as a cache: when an HTTP request hits it, such as https://cdn.foobar.com/assets/x.jpg, if it does not have a copy of the asset it should query another domain, https://foobar.com/assets/x.jpg.
I understood that this is what load balancer backend services are for. (Right?)
(screenshots of the Cloud CDN and Load balancing console views)
Failing health-checks.
The backend service is pointing to the instance group of the k8s cluster and requires a port (the default, 80, failed). I guessed that I needed a firewall rule exposing NodePort 32231 of my web application's service so that the load balancer could query it. That still failed with a 502.
$ kubectl describe svc
Name:              backoffice-service
Namespace:         default
Labels:            app=backoffice
Selector:          app=backoffice
Type:              NodePort
IP:                10.7.xxx.xxx
Port:              http 80/TCP
NodePort:          http 32231/TCP
Endpoints:         10.4.x.x:8500,10.4.x.x:8500
Session Affinity:  None
No events.
I ran out of ideas at this point.
Any hints in the right direction would be much appreciated.
When deploying your service as type 'NodePort', you are exposing the service on each node's IP, but the service is not reachable from the exterior, so you need to expose your service as 'LoadBalancer'.
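For instance, one minimal way to switch the existing service over, reusing the backoffice-service name from the output above:

$ kubectl patch svc backoffice-service -p '{"spec": {"type": "LoadBalancer"}}'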
Since you're looking to use an HTTP(S) Load Balancer, I'd recommend using a Kubernetes Ingress resource. This resource will be in charge of configuring the HTTP(S) load balancer and the required ports that your service is using, as well as the health checks on the specified port.
Since you're securing your application, you will need to configure a Secret object for securing the Ingress.
This example will help you get started on an Ingress with TLS termination.
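As a starting point, a minimal sketch of such an Ingress, reusing backoffice-service from the describe output above (the API version matches the era of this thread; the Secret name is an assumption, and the Secret must contain tls.crt and tls.key):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: backoffice-ingress
spec:
  tls:
  - secretName: backoffice-tls    # assumed TLS Secret
  backend:
    serviceName: backoffice-service
    servicePort: 80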