I have a set of services that I want to expose through an ingress load balancer. I selected nginx as the ingress class because of its ability to force HTTP-to-HTTPS redirects.
Having an ingress config like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-https
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.org/ssl-services: "api,spa"
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - api.some.com
    - www.some.com
    secretName: secret
  rules:
  - host: api.some.com
    http:
      paths:
      - path: /
        backend:
          serviceName: api
          servicePort: 8080
  - host: www.some.com
    http:
      paths:
      - path: /
        backend:
          serviceName: spa
          servicePort: 8081
GKE creates the nginx ingress load balancer, but it also creates another load balancer with backends and everything, as if GCE rather than nginx had been selected as the ingress class.
The screenshot below shows in red the two unexpected load balancers and in blue the two nginx ingress load balancers, one for our QA and one for our prod environment.
Output from kubectl get services:
xyz@cloudshell:~ (xyz)$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
api NodePort 1.2.3.4 <none> 8080:32332/TCP,4433:31866/TCP 10d
nginx-ingress-controller LoadBalancer 1.2.6.9 12.13.14.15 80:32321/TCP,443:32514/TCP 2d
nginx-ingress-default-backend ClusterIP 1.2.7.10 <none> 80/TCP 2d
spa NodePort 1.2.8.11 <none> 8082:31847/TCP,4435:31116/TCP 6d
Screenshot from the GCP GKE services view showing the ingress with the wrong info.
Is this expected?
Did I miss any configuration to prevent this extra load balancer from being created?
On GKE the GCE ingress controller is enabled by default and will always lead to a new load balancer for any Ingress definition, even if the ingress.class is specified:
https://github.com/kubernetes/ingress-nginx/issues/3703
So to fix it we should remove the GCE ingress controller from the cluster, as mentioned in https://github.com/kubernetes/ingress-gce/blob/master/docs/faq/gce.md#how-do-i-disable-the-gce-ingress-controller
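For reference, a minimal sketch of what that FAQ describes: disabling the HTTP load-balancing add-on, which is what runs the GCE ingress controller. The cluster name and zone below are placeholders, not taken from the question:

gcloud container clusters update my-cluster \
  --zone us-central1-a \
  --update-addons=HttpLoadBalancing=DISABLED

With the add-on disabled, the only external load balancer left should be the TCP one created for the nginx-ingress-controller Service itself.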
When you create a deployment on a GKE cluster, you have two possibilities to expose it:
- Use a Service of type LoadBalancer and expose it - this will create a TCP load balancer
- Create a Service as a NodePort or a ClusterIP and expose it as an Ingress - this will create an HTTP load balancer
If you can see both of them under Load Balancers, this means that you have probably created a Service of type LoadBalancer and then also exposed it as an Ingress. You are opening the same deployment to be accessed from two different IPs, once through the Service and once through the Ingress. To confirm this, try:
$ kubectl get ingress
$ kubectl get svc
You will get two IPs from these two commands, and they will both show you the same page.
A better way to configure it is to have a Service of type NodePort and expose that Service as an Ingress. This is especially useful because you can use the same Ingress for exposing more services, as in the sketch below.
This way you reduce the number of IPs exposed (and save money by not using several load balancers).
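As a rough illustration (all names, port numbers, and the nginx ingress class here are placeholders, not taken from the question), a NodePort Service plus an Ingress routing to it might look like this, using the networking.k8s.io/v1 API:

apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical service name
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress    # hypothetical ingress name
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80

You can keep adding rules (or hosts) to the same Ingress for additional services, so they all share the one load balancer in front of the ingress controller.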
My Kubernetes cluster is based on Ubuntu. When I run the application, the ADDRESS column of the ingress is empty.
service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: docker-testmrv
  name: docker-testmrv-service
  namespace: jenkins
spec:
  selector:
    app: docker-testmrv
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8093
  type: LoadBalancer
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    app: docker-testmrv
  name: docker-testmrv-ingress
  namespace: jenkins
spec:
  rules:
    - host: dockertest.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: docker-testmrv-service
                port:
                  number: 80
  ingressClassName: nginx
As you can see in the picture below, the HOSTS part is empty. I also tried the following in the annotations section, but it didn't work. I've looked at and tried other sources as well.
nginx.ingress.kubernetes.io/rewrite-target: /$1
or
ingressclass.kubernetes.io/is-default-class: "true"
or
kubernetes.io/ingressClassName: nginx
kubectl get ing -n jenkins
First, we need to ensure NGINX is enabled and the nginx-ingress-controller pod is in Running status.
Follow the steps below to verify.
Enable the NGINX Ingress controller by running the following command:
minikube addons enable ingress
Verify that the NGINX Ingress controller is running:
kubectl get pods -n kube-system
As per your YAML, for the Ingress rule, change the servicePort from 8093 to 80, the default HTTP port.
Now apply those files to create your pods, service, and ingress rule. Wait a bit; it will take a few moments to get an ADDRESS for your ingress rule.
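For example, assuming the manifests are saved as service.yaml and ingress.yaml as shown above and the jenkins namespace already exists:

kubectl apply -f service.yaml -f ingress.yaml
# watch until the ADDRESS column is populated
kubectl get ingress -n jenkins -w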
Refer to this SO link.
Updated Answer :
Do nodes have an external IP by default?
If you're using public nodes, each node will have a different public IP, and it can change every time a node is recreated.
So, make sure you use a Service of type LoadBalancer to get an external IP on your ingress. NodePort opens one of the available ports on every node; you can also use NodePort, but it might not give you an external IP and will instead give you a port that is opened on all the nodes.
Refer to this link to see how ClusterIP, NodePort, and LoadBalancer differ from each other.
Create the Service with type LoadBalancer and add the last line, the ingressClassName: nginx definition, to your Ingress YAML. This will work. Refer to this SO answer.
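To confirm that an external IP is actually being handed out, you can also check the controller's own Service; the namespace and Service name below assume a default ingress-nginx installation and may differ in your cluster:

kubectl get svc -n ingress-nginx ingress-nginx-controller
kubectl get ingress -n jenkins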
I am trying to set up a Kubernetes cluster using kubeadm on GCE. I was able to access the deployment using a NodePort service from the external IP. I am trying to set up an ingress that maps to a domain name, but was not able to do it. So far what I have done:
Created a bare-metal nginx ingress controller (I am using kubeadm)
Created a NodePort service for the deployment (was able to connect to it from outside the cluster)
Created an ingress resource using the configuration below:
and the command kubectl describe ingress my-ingress returns
my-ingress <none> sample.com 10.160.15.210 80, 443 32h which is the internal IP
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    ingressClassName: nginx
spec:
  tls:
  - hosts:
    - sample.com
    secretName: sample-tls
  rules:
  - host: sample.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sample # the nodeport service name of the deployment
            port:
              number: 8000 # nodeport target port
I cannot access the deployment using sample.com. I double-checked the DNS name using the command dig sample.com, and it returns the external IP.
If you created your cluster on GCP, you should have used the GCE-GKE installation instructions.
The important difference is that the GCE-GKE installation creates a LoadBalancer Service instead of a NodePort one.
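A quick way to check which variant you ended up with is to look at the controller Service (names assume a standard ingress-nginx install in the ingress-nginx namespace; adjust if yours differ):

kubectl get svc -n ingress-nginx ingress-nginx-controller
# TYPE should be LoadBalancer with a populated EXTERNAL-IP;
# with the bare-metal manifests it will be NodePort instead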
I have set up an ingress for an application but want to whitelist my IP address, so I created this Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: ${MY_IP}/32
  name: ${INGRESS_NAME}
spec:
  rules:
  - host: ${DNS_NAME}
    http:
      paths:
      - backend:
          serviceName: ${SVC_NAME}
          servicePort: ${SVC_PORT}
  tls:
  - hosts:
    - ${DNS_NAME}
    secretName: tls-secret
But when I try to access it I get a 403 Forbidden, and in the nginx logs I see a client IP, but it is the IP of one of the cluster nodes and not my home IP.
I also created a ConfigMap with this configuration:
data:
  use-forwarded-headers: "true"
In the nginx.conf in the container I can see that this has been correctly passed on and configured, but I still get a 403 Forbidden, still with only the client IP of a cluster node.
I am running on an AKS cluster, and the nginx ingress controller is behind an Azure load balancer. The nginx ingress controller Service is exposed as type LoadBalancer and listens on the NodePort opened by the Service.
Do I need to configure something else within Nginx?
If you've installed nginx-ingress with the Helm chart, you can simply configure your values.yaml file with controller.service.externalTrafficPolicy: Local, which I believe will apply to all of your Services. Otherwise, you can configure specific Services with service.spec.externalTrafficPolicy: Local to achieve the same effect on those specific Services.
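As a sketch, the Helm values fragment and an equivalent one-off patch would look roughly like this (the Service name nginx-ingress-controller is an assumption based on a default chart release and may differ in your cluster):

# values.yaml fragment for the nginx-ingress Helm chart
controller:
  service:
    externalTrafficPolicy: Local

# or patch the already-deployed controller Service directly
kubectl patch svc nginx-ingress-controller -p '{"spec":{"externalTrafficPolicy":"Local"}}'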
Here are some resources to further your understanding:
k8s docs - Preserving the client source IP
k8s docs - Using Source IP
It sounds like you have your NGINX Ingress Controller behind a NodePort (or LoadBalancer) Service, or rather behind a kube-proxy. Generally, to get your controller to see the raw connecting IP, you will need to deploy it using a hostNetwork port so that it listens directly to incoming traffic.
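For illustration, the relevant pod spec fields in the controller Deployment would look something like this (a sketch of only the fields in question, not a full manifest):

spec:
  template:
    spec:
      # bind the controller directly to the node's network interfaces
      hostNetwork: true
      # keep in-cluster DNS resolution working while on the host network
      dnsPolicy: ClusterFirstWithHostNet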
In Kubernetes I have a deployment of 3 pods in charge of the sockets.
I wish to load balance the traffic between the pods of the deployment. To do it, I'm using the NGINX Ingress controller, installed via Helm using the chart stable/nginx-ingress.
The problem is that clients always connect to the same pod. There is no balancing.
To test the load balancing, I'm using several phones on mobile data (2-6 phones), each of them opening a socket connection.
I have 2 ingress rules. For the sockets I'm using:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-socket-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/websocket-services: "node-socket-service"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/upstream-hash-by: "$host"
spec:
  tls:
  - hosts:
    - example.com
  rules:
  - host: example.com
    http:
      paths:
      - path: /socket.io/
        backend:
          servicePort: 4000
          serviceName: node-socket-service
Service:
apiVersion: v1
kind: Service
metadata:
  name: node-socket-service
spec:
  type: ClusterIP
  selector:
    component: node-socket
  ports:
  - port: 4000
    targetPort: 4000
I tried changing the value of upstream-hash-by to $binary_remote_addr, $remote_addr, $host, ewma, and $request_uri, without success.
I'm wondering if the way I'm doing my test is good. Maybe the load balancing is working well but it needs more clients.
I am assuming you are using the following architecture to reach your pod:
Ingress controller ---> kubernetes service ---> kubernetes deployment (POD)
If this is the case, then you already have load balancing with a (statistically) round-robin policy, so if all clients land on the same pod, I would conclude that your deployment only has one replica. Check the number of replicas by running kubectl describe deployment $YOUR_DEPLOYMENT. Increase the number of replicas by running kubectl scale deployment $YOUR_DEPLOYMENT --replicas=5.
In case you are using a different architecture, I would need to see it in order to verify why load balancing is not working. Most likely you are not using a Deployment but a Pod to deploy your container.
Try nginx.ingress.kubernetes.io/upstream-hash-by: "$arg_token" instead of "$host".
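In context, that annotation would look like the fragment below. This assumes your clients pass a token query parameter on the socket.io handshake (e.g. /socket.io/?token=abc); the point is to hash on something that actually differs per client, which $host does not:

metadata:
  annotations:
    # pick the upstream pod based on the "token" query argument
    nginx.ingress.kubernetes.io/upstream-hash-by: "$arg_token"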
I'm a Kubernetes amateur trying to use the NGINX ingress controller on GKE. I'm following this Google Cloud documentation to set up NGINX Ingress for my services, but I'm facing issues accessing the NGINX locations.
What's working?
Ingress-Controller deployment using Helm (RBAC enabled)
ClusterIP service deployments
What's not working?
Ingress resource to expose multiple ClusterIP services using unique paths (fanout routing)
K8S Services
[msekar@ebs kube-base]$ kubectl get services -n payment-gateway-7682352
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-controller LoadBalancer 10.35.241.255 35.188.161.171 80:31918/TCP,443:31360/TCP 6h
nginx-ingress-default-backend ClusterIP 10.35.251.5 <none> 80/TCP 6h
payment-gateway-dev ClusterIP 10.35.254.167 <none> 5000/TCP 6h
payment-gateway-qa ClusterIP 10.35.253.94 <none> 5000/TCP 6h
K8S Ingress
[msekar@ebs kube-base]$ kubectl get ing -n payment-gateway-7682352
NAME HOSTS ADDRESS PORTS AGE
pgw-nginx-ingress * 104.198.78.169 80 6h
[msekar@ebs kube-base]$ kubectl describe ing pgw-nginx-ingress -n payment-gateway-7682352
Name: pgw-nginx-ingress
Namespace: payment-gateway-7682352
Address: 104.198.78.169
Default backend: default-http-backend:80 (10.32.1.4:8080)
Rules:
Host Path Backends
---- ---- --------
*
/dev/ payment-gateway-dev:5000 (<none>)
/qa/ payment-gateway-qa:5000 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx","nginx.ingress.kubernetes.io/ssl-redirect":"false"},"name":"pgw-nginx-ingress","namespace":"payment-gateway-7682352"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"payment-gateway-dev","servicePort":5000},"path":"/dev/"},{"backend":{"serviceName":"payment-gateway-qa","servicePort":5000},"path":"/qa/"}]}}]}}
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: false
Events: <none>
The last-applied-configuration annotation (in the ingress description output above) shows the ingress resource manifest, but I'm pasting it below for reference:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: pgw-nginx-ingress
  namespace: payment-gateway-7682352
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: payment-gateway-dev
          servicePort: 5000
        path: /dev/
      - backend:
          serviceName: payment-gateway-qa
          servicePort: 5000
        path: /qa/
Additional Info
The services I'm trying to access are Spring Boot services that use context paths, so the root location isn't a valid endpoint.
The containers' readiness and liveness probes are defined accordingly.
For example, the "payment-gateway-dev" service uses the context path /pgw/v1, so the deployment can only be accessed through that context. To access the application's Swagger spec you would use the URL
http://<>/pgw/v1/swagger-ui.html
Behaviour of my deployment
ingress-controller-LB-ip = 35.188.161.171
Accessing the ingress controller load balancer "http://35.188.161.171" takes me to the default 404 backend
Accessing the ingress controller load balancer health endpoint "http://35.188.161.171/healthz" returns a 200 HTTP response as expected
Trying to access the services using the URLs below returns a "404: page not found" error
http://35.188.161.171/dev/pgw/v1/swagger-ui.html
http://35.188.161.171/qa/pgw/v1/swagger-ui.html
Any suggestions about or insights into what I might be doing wrong will be much appreciated.
+1 for this well asked question.
Your setup seems right to me. From your explanation, I can see that your services require http://<>/pgw/v1/swagger-ui.html as the context. However, in your setup the path submitted to the service will be http://<>/qa/pgw/v1/swagger-ui.html if your route is /qa/.
To remove the prefix, what you need to do is add a rewrite rule to your ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: pgw-nginx-ingress
  namespace: payment-gateway-7682352
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: payment-gateway-dev
          servicePort: 5000
        path: /dev/(.+)
      - backend:
          serviceName: payment-gateway-qa
          servicePort: 5000
        path: /qa/(.+)
After this, your services should receive the correct context paths.
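To sanity-check the rewrite once it is applied (reusing the load balancer IP and context path from the question):

curl -i http://35.188.161.171/dev/pgw/v1/swagger-ui.html
# nginx strips the /dev/ prefix, so payment-gateway-dev receives /pgw/v1/swagger-ui.html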
Ref:
Rewrite: https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md
Ingress Route Matching: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/ingress-path-matching.md
An alternate approach would be to use host-based routing. You can simply create a couple of CNAME DNS entries pointing at the Ingress static IP (pgw-dev.foo.com, pgw-qa.foo.com) and replace your path: attributes with host: attributes, as sketched below. No URL rewrites necessary.
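A sketch of that host-based Ingress, using the placeholder pgw-dev.foo.com / pgw-qa.foo.com hostnames from above:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: pgw-nginx-ingress
  namespace: payment-gateway-7682352
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: pgw-dev.foo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: payment-gateway-dev
          servicePort: 5000
  - host: pgw-qa.foo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: payment-gateway-qa
          servicePort: 5000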
The best reason for using the host-based approach, IMO, is clarity and flexibility for humans. I've worked in a lot of different places; almost all of them have used host names to differentiate environments in this way. Tastes great, less filling.
For example, if you split DEV and QA onto separate clusters, no one has to change their configs (and your K8s templates will be reusable). Just update DNS. If you want to spin up a new Staging or Performance Test environment, again, your existing test harnesses should be very easily adapted to the new environment: just change the host name in the config.
Over time, I think you'll find hostname is a more natural way to distinguish environments than a path prefix.