I have a single-node k8s cluster on DigitalOcean. I used to run the NGINX Ingress Controller with a ClusterIP service, but at some point it stopped working, so I switched it to a LoadBalancer service instead. That costs me $12, which is too much for my pet projects.
So, as a temporary measure, I am using a NodePort service with the machine's external IP hardcoded. Here's how the Helm release configuration looks now:
controller:
  enableExternalDNS: true
  service:
    externalIPs:
      - 46.XXX.XXX.XXX
    type: NodePort
rbac:
  create: true
But I don't like it being hardcoded like that. Is there a way to tell NGINX Ingress Controller to use node external IP without hardcoding it?
Or maybe there's some other way to expose the service without using LoadBalancer?
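(For reference, one pattern that avoids hardcoding the node IP at all is to run the controller as a DaemonSet on the host network, so it binds ports 80/443 on the node directly and no Service external IP is needed. This is only a sketch, assuming the chart exposes the usual controller.kind, controller.hostNetwork and controller.service.enabled values, as the ingress-nginx chart does:
controller:
  kind: DaemonSet                      # one controller pod per node; on a single node, exactly one
  hostNetwork: true                    # listen on :80/:443 of the node itself, no external IP to hardcode
  dnsPolicy: ClusterFirstWithHostNet   # so the controller pod still resolves cluster services
  service:
    enabled: false                     # no Service needed for external traffic
  enableExternalDNS: true
rbac:
  create: true
)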
I have deployed a Linkerd service mesh, and my Kubernetes cluster is configured with the Nginx ingress controller as a DaemonSet; all the ingresses are working fine, and so is Linkerd. Recently I added traffic split functionality to run my blue/green setup, and I can reach these services through separate ingress resources. I have created an apex-web service as described here. If I reach this service internally, it works perfectly. I have created another ingress resource, but I'm not able to test the blue/green functionality from outside the cluster. I'd like to mention that I have meshed (injected the Linkerd proxy into) all my Nginx pods, but Nginx returns a "503 Service Temporarily Unavailable" message.
I went through the documentation and created the ingress following this; I can confirm that the annotations below were added to the ingress resources:
annotations:
  kubernetes.io/ingress.class: "nginx"
  nginx.ingress.kubernetes.io/configuration-snippet: |
    proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
    grpc_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
but still no luck from outside the cluster.
I'm testing with the given emojivoto app, and all the traffic split and apex-web services are in this training repository.
I'm not quite sure what went wrong or how to fix this from outside the cluster. I'd really appreciate it if anyone could assist me in fixing this Linkerd blue/green issue.
I raised this question in the Linkerd Slack channel and got it fixed with wonderful support from the community. It seems Nginx doesn't like a service that doesn't have an endpoint. My configuration was correct; I was asked to change the service referenced in the traffic split to a service with an endpoint, and that fixed the issue.
In a nutshell, my traffic split was configured with the web-svc and web-svc-2 services. I changed the traffic split's spec.service to the same web-svc and it worked.
Here is the traffic split configuration after the update.
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: web-svc-ts
  namespace: emojivoto
spec:
  # The root service that clients use to connect to the destination application.
  service: web-svc
  # Services inside the namespace with their own selectors, endpoints and configuration.
  backends:
  - service: web-svc
    # Identical to resources, 1 = 1000m
    weight: 500m
  - service: web-svc-2
    weight: 500m
Kudos to the Linkerd team, who helped me fix this issue. It is working like a charm.
tl;dr: The nginx ingress requires a Service resource to have an Endpoints resource in order to be considered a valid destination for traffic. The architecture in the repo creates three Service resources, one of which acts as an apex and has no Endpoints because it has no selector, so the nginx ingress won't send traffic to it, and the leaf services will not get traffic as a result.
The example in the repo follows the SMI Spec by defining a single apex service and two leaf services. The web-apex service does not have any endpoints, so nginx will not send traffic to it.
According to the SMI Spec services can be self-referential, which means that a service can be both an apex and a leaf service, so to use the nginx ingress with this example, you can modify the TrafficSplit definition to change the spec.service value from web-apex to web-svc:
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: web-svc-ts
  namespace: emojivoto
spec:
  # The root service that clients use to connect to the destination application.
  service: web-svc
  # Services inside the namespace with their own selectors, endpoints and configuration.
  backends:
  - service: web-svc
    # Identical to resources, 1 = 1000m
    weight: 500m
  - service: web-svc-2
    weight: 500m
OS: RHEL7 | k8s version: 1.12/13 | kubespray | baremetal
I have a standard kubespray bare-metal cluster deployed, and I am trying to understand the simplest recommended way to deploy nginx-ingress-controller so that I can deploy simple services. There is no load balancer provided. I want my MASTER public IP to be the endpoint for my services.
The GitHub k8s ingress-nginx docs suggest a NodePort service as a "mandatory" step, which does not seem to be enough to make it work together with kubespray's ingress_controller.
I was able to make it work by forcing the LoadBalancer service type and setting the externalIP value to the MASTER public IP on nginx-ingress-controller via kubectl edit svc, but it does not seem to be a correct solution, since there is no actual load balancer behind it.
Similar results using helm chart:
helm install -n ingress-nginx stable/nginx-ingress --set controller.service.externalIPs[0]="MASTER PUBLIC IP"
I was able to make it work by forcing the LoadBalancer service type and setting the externalIP value to the MASTER public IP on nginx-ingress-controller via kubectl edit svc, but it does not seem to be a correct solution, since there is no actual load balancer behind it.
Correct, that is not what LoadBalancer is intended for. It's intended for provisioning load balancers with cloud providers like AWS, GCP, or Azure, or with a load balancer that has some sort of API so that the kube-controller-manager can interface with it. If you look at your kube-controller-manager logs you should see some errors. The way you made it work is obviously a hack, but I suppose it works.
The standard way to implement this is to use a NodePort service and have whatever proxy/load balancer you like (e.g. nginx or haproxy) on your master send traffic to the NodePorts, as sketched below. Note that I don't recommend using the master to front your services either, since it already runs critical Kubernetes pods like the kube-controller-manager, kube-apiserver, kube-scheduler, etc.
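For example, a minimal values sketch that pins the controller's node ports so the proxy on the host has stable targets; this assumes the chart exposes controller.service.nodePorts (both stable/nginx-ingress and ingress-nginx do):
controller:
  service:
    type: NodePort
    nodePorts:
      http: 30080    # host proxy forwards :80 here
      https: 30443   # host proxy forwards :443 here
The proxy on the master then listens on 80/443 and forwards traffic to NODE_IP:30080 and NODE_IP:30443.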
The only exception is MetalLB, which you can use with a LoadBalancer service type. Keep in mind that, as of this writing, the project is in its early stages.
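For reference, a minimal MetalLB Layer 2 sketch, assuming the ConfigMap-based configuration used by early releases (newer releases configure this through IPAddressPool and L2Advertisement CRDs instead); the address range is a placeholder:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # placeholder range handed out to LoadBalancer services
With this in place, a Service of type LoadBalancer gets an IP from the pool instead of staying in Pending.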
My problem is simple. I have an AKS deployment with a LoadBalancer service that needs to use HTTPS with a certificate.
How do I do this?
Everything I'm seeing online involves Ingress and nginx-ingress in particular.
But my deployment is not a website; it's a Dropwizard service with a REST API on one port and an admin service on another port. I don't want to map the ports to a path on port 80; I want to keep the ports as they are. Why is HTTPS tied to Ingress?
I just want HTTPS with a certificate and nothing more changed, is there a simple solution to this?
A sidecar container running nginx with the correct certificates (possibly loaded from a Secret or a ConfigMap) will do the job without an Ingress. This seems to be a good example, using the nginx-ssl-proxy container.
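A minimal sketch of that idea, assuming the Dropwizard app listens on 8080 inside the pod; the image, Secret and ConfigMap names are placeholders, and the nginx server block itself (listen 8443 ssl; proxy_pass http://127.0.0.1:8080;) lives in the ConfigMap:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
      - name: app
        image: your-registry/your-app:latest   # the Dropwizard service, listening on 8080 (placeholder)
      - name: tls-proxy                        # TLS-terminating sidecar
        image: nginx:1.25
        ports:
        - containerPort: 8443
        volumeMounts:
        - name: tls-certs
          mountPath: /etc/nginx/tls
          readOnly: true
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d
          readOnly: true
      volumes:
      - name: tls-certs
        secret:
          secretName: your-app-tls             # Secret containing tls.crt and tls.key
      - name: nginx-conf
        configMap:
          name: nginx-tls-conf                 # ConfigMap with the nginx server block described above
Your LoadBalancer Service then targets port 8443 on the pod instead of the plain-HTTP port.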
Yes, that's right: as of this writing an Ingress will work on either port 80 or port 443. Potentially it could be extended to use any port, since nginx, Traefik, haproxy, etc. can all listen on different ports.
So you are down to either a LoadBalancer or a NodePort type of service. Type LoadBalancer will not work directly with TLS, since the Azure load balancers are layer 4, so you will have to use an Application Gateway, and it's preferred to use an internal load balancer for security reasons.
Since you are using Azure you can run something like this (assuming that your K8s cluster is configured the right way to use the Azure cloud provider, either the --cloud-provider option or the cloud-controller-manager):
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: your-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: <your-port>
  selector:
    app: your-app
EOF
and that will create an Azure load balancer on the port you like for your service. Behind the scenes, the load balancer will point to a port on the nodes, and within the nodes there will be firewall rules that route traffic to your container. Then you can configure the Application Gateway. Here's a good article describing it, but it uses port 80; you will have to change it to use port 443 and configure the TLS certs. The Application Gateway also supports end-to-end TLS, in case you want to terminate TLS directly on your app too.
The other option is NodePort, and you can run something like this:
$ kubectl expose deployment <deployment-name> --type=NodePort
Then Kubernetes will pick a random port on all your nodes where you can send traffic to your service listening on <your-port>. So, in this case, you will have to manually create a load balancer that terminates TLS, or a traffic source that listens on TLS on <your-port>, and forwards it to the NodePort on all your nodes; this load balancer can be anything like haproxy, nginx, Traefik, or something else that supports terminating TLS. You can also use the Application Gateway to forward directly to your node ports; in other words, define a listener that listens on the NodePort of your cluster.
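If you go that route, it helps to pin the node ports instead of letting kubectl expose pick random ones, so the external TLS-terminating load balancer has stable targets. A sketch, with placeholder names and port numbers for the Dropwizard API and admin ports:
apiVersion: v1
kind: Service
metadata:
  name: your-app-nodeport
spec:
  type: NodePort
  selector:
    app: your-app
  ports:
  - name: api
    port: 8080          # Dropwizard application port (placeholder)
    targetPort: 8080
    nodePort: 30080     # must fall within the node-port range (default 30000-32767)
  - name: admin
    port: 8081          # Dropwizard admin port (placeholder)
    targetPort: 8081
    nodePort: 30081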
I have a specific use case that I can not seem to solve.
A typical gcloud setup:
A K8S cluster
A gcloud storage bucket
A gcloud loadbalancer
I managed to get my domain https://cdn.foobar.com/uploads/ to point to a Google storage backend without any issue: I can access the files. It's the backend service one that fails.
I would like the CDN to act as a cache: when an HTTP request such as https://cdn.foobar.com/assets/x.jpg hits it and it does not have a copy of the asset, it should query another domain, https://foobar.com/assets/x.jpg.
I understood that this is what load balancer backend services are for. (Right?)
The backend service is pointing to the instance group of the k8s cluster and requires a port. I guessed that I needed to allow the firewall to expose the NodePort of my web application's service so the load balancer could query it.
Cloud CDN
Load balancing
Failing health-checks.
The backend service is pointing to the instance group of the k8s cluster and requires some ports (default 80?); port 80 failed. I guessed that I needed to allow the firewall to expose NodePort 32231 of my web application's service so the load balancer could query it. That still failed with a 502.
?> kubectl describe svc
Name:              backoffice-service
Namespace:         default
Labels:            app=backoffice
Selector:          app=backoffice
Type:              NodePort
IP:                10.7.xxx.xxx
Port:              http  80/TCP
NodePort:          http  32231/TCP
Endpoints:         10.4.x.x:8500,10.4.x.x:8500
Session Affinity:  None
No events.
I ran out of ideas at this point. Any hints in the right direction would be much appreciated.
When you deploy your service as type 'NodePort', you expose the service on each node's IP at a static port, but it is not exposed to the exterior through a load balancer, so you need to expose your service as 'LoadBalancer'.
Since you're looking to use an HTTP(S) load balancer, I'd recommend using a Kubernetes Ingress resource. This resource will be in charge of configuring the HTTP(S) load balancer and the ports your service uses, as well as the health checks on the specified port.
Since you're securing your application, you will need to configure a secret object for securing the Ingress.
This example will help you get started with an Ingress with TLS termination.
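As a rough sketch of what that looks like (using the current networking.k8s.io/v1 API; the host and secret names are placeholders, and backoffice-service is the NodePort service shown above):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backoffice-ingress
spec:
  tls:
  - hosts:
    - cdn.foobar.com
    secretName: foobar-tls     # e.g. kubectl create secret tls foobar-tls --cert=tls.crt --key=tls.key
  rules:
  - host: cdn.foobar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backoffice-service
            port:
              number: 80
On GKE the Ingress controller then provisions the HTTP(S) load balancer and sets up the health checks against the service for you.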
I exposed a service with a static IP and an Ingress through an nginx controller, as in one of the examples in the kubernetes/ingress repository. I have a second LoadBalancer service that is not managed by any Ingress resource and is no longer properly exposed after adding the new resources for the first service (I do not understand why this is the case).
I tried to add a second Ingress and LoadBalancer service to assign the second static IP, but I can't get it to work.
How would I go about exposing the second service, preferably with an Ingress? Do I need to add a second Ingress resource or do I have to reconfigure the one I already have?
Using a Service with type: LoadBalancer and using an Ingress are usually mutually exclusive ways to expose your application.
When you create a Service with type: LoadBalancer, Kubernetes creates a load balancer in your cloud account that has an IP, opens the ports on that load balancer that match your Service, and then directs all traffic to that IP to the one Service. So if you have 2 Service objects, each with type: LoadBalancer, for 2 different Deployments, then you also have 2 IPs (one for each Service).
The Ingress model is based on directing traffic through a single Ingress Controller, which runs something like nginx. As Ingress resources are added, the Ingress Controller reconfigures nginx to include the new Ingress details. In this case there will be a Service for the Ingress Controller (e.g. nginx) of type: LoadBalancer, but all of the Services that the Ingress resources point to should be type: ClusterIP. Traffic for all the Ingress objects flows through the same public IP of the Ingress Controller's LoadBalancer Service to the Ingress Controller (e.g. nginx) pods. The configuration details in each Ingress object (e.g. virtual host, port, or route) then determine which Service gets the traffic, as in the sketch below.
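So for your second service, that typically means switching it from type: LoadBalancer to type: ClusterIP and adding a second Ingress resource (or another rule on the existing one) that routes to it. A sketch, with placeholder host and service names:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: second-service
  annotations:
    kubernetes.io/ingress.class: "nginx"   # so the existing nginx controller picks it up
spec:
  rules:
  - host: second.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: second-service            # the second service, now type: ClusterIP
            port:
              number: 80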