Kubernetes LoadBalancer stops serving traffic when using Local traffic policy

Currently I am having an issue with one of my services, which is set to be a load balancer. I am trying to get source IP preservation, as stated in the docs. However, when I set externalTrafficPolicy to Local, I lose all traffic to the service. Is there something I'm missing that is causing this to fail?
Load Balancer Service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: loadbalancer
    role: loadbalancer-service
  name: lb-test
  namespace: default
spec:
  clusterIP: 10.3.249.57
  externalTrafficPolicy: Local
  ports:
  - name: example-service
    nodePort: 30581
    port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    app: loadbalancer-example
    role: example
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: <example-ip>

Could be several things. A couple of suggestions:
- Your service is getting an external IP and doesn't know how to reply based on the local IP address of the pod.
- Try running a sniffer on your pod to see if you are getting packets from the external source.
- Try checking the logs of your application.
- The health check in your load balancer may be failing. Check the load balancer for your service in the GCP console.
- Check that the instance port is listening (probably not, if your health check is failing). See the sketch below for a concrete way to test this.
Hope it helps.
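With externalTrafficPolicy: Local, kube-proxy answers the cloud load balancer's health check on the service's healthCheckNodePort, and only nodes that actually run a pod backing the service report healthy; if the pods landed elsewhere, every node fails the check and the LB serves nothing. A quick way to verify (a sketch; <node-ip> and <healthCheckNodePort> are placeholders):

# Health-check port kube-proxy opened for this service
kubectl get service lb-test -o jsonpath='{.spec.healthCheckNodePort}'

# Which nodes actually host the backing pods?
kubectl get pods -l app=loadbalancer-example -o wide

# A node WITHOUT a local pod returns a non-200 status here,
# so the cloud load balancer marks it unhealthy
curl http://<node-ip>:<healthCheckNodePort>/healthz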

Related

Expose traffic using Ingress Kubernetes

I am new to Kubernetes. I followed Kubernetes The Hard Way by Kelsey Hightower, and also this, to set up Kubernetes in Azure. Now all the services are up and running fine, but I am not able to expose the traffic using a load balancer. I tried to add a Service object of type LoadBalancer, but the external IP shows as <pending>. I need to add an ingress to expose the traffic.
nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-service
  name: nginx-service
spec:
  type: LoadBalancer
  externalIPs:
  - <ip>
  ports:
  - name: "80"
    port: 80
    targetPort: 80
  - name: "443"
    port: 443
    targetPort: 443
  selector:
    app: nginx-service
Thank you,
By default, the solution proposed by Kubernetes The Hard Way doesn't include a LoadBalancer implementation, so an external IP that stays pending forever is expected behavior. You need an out-of-the-box solution for that; a very commonly used one is MetalLB.
MetalLB isn't going to allocate a public external IP for you: it allocates an internal IP inside your VPC, and you have to create the necessary routing rules to route traffic to that IP.
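For example, a minimal MetalLB layer2 configuration (a sketch using MetalLB's older ConfigMap-based setup; the address range is a placeholder you would carve out of your VNet subnet):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.240.0.200-10.240.0.250   # placeholder range inside your subnet

Once MetalLB holds such a pool, a Service of type LoadBalancer gets one of these addresses instead of staying <pending>; getting traffic from outside the VPC to that address is still up to you.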

Can I configure nginx-ingress to route traffic to outside the cluster

If I have a kubernetes cluster in AKS with an nginx-ingress, can I then forward certain traffic to something external to the cluster like an App Service?
If I open my-domain.com/svc3, I want traffic to be routed to the App Service.
If this is not directly possible, what would be the best solution?
1) I could put an additional load balancer (like AppGateway) in front of both the AKS cluster and the App Service
2) I could instantiate a second nginx as a service, which then routes traffic to the app service
3) ... ?
I think you can map the external service into Kubernetes:
kind: Service
apiVersion: v1
metadata:
  name: external-service
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
And here is the matching Endpoints object:
kind: Endpoints
apiVersion: v1
metadata:
  name: external-service
subsets:
- addresses:
  - ip: 203.0.113.44   # example IP of the external service (must be a valid IPv4 address)
  ports:
  - port: 80
For more information you can also check this video:
https://www.youtube.com/watch?v=fvpq4jqtuZ8
and this document: https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services
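To route my-domain.com/svc3 to that mapped service, you can then point an ordinary Ingress rule at it. A sketch, assuming an nginx-ingress controller and a pre-1.19 cluster where the v1beta1 Ingress API is still served (external-service is the Service defined above; the rewrite annotation strips the /svc3 prefix before proxying):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: external-service-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my-domain.com
    http:
      paths:
      - path: /svc3
        backend:
          serviceName: external-service
          servicePort: 80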

GitLab Kubernetes integration

I have a custom Kubernetes cluster on a server with a public IP and DNS pointing to it (including a wildcard record).
GitLab was configured with the cluster following this guide: https://gitlab.touch4it.com/help/user/project/clusters/index#add-existing-kubernetes-cluster
However, after installing Ingress, the ingress endpoint is never detected.
I tried patching the Service object in k8s, like so:
externalIPs:                    # was empty
- 1.2.3.4
externalTrafficPolicy: Local    # was Cluster
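As a one-liner, the same patch (Service name as in the output below):

kubectl patch service ingress-nginx-ingress-controller -n gitlab-managed-apps \
  --type merge -p '{"spec":{"externalIPs":["1.2.3.4"],"externalTrafficPolicy":"Local"}}'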
I suspect the problem is the empty status of the ingress Service object (scroll to the end of the output), which I got by calling:
# kubectl get service ingress-nginx-ingress-controller -n gitlab-managed-apps -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-11-20T08:57:18Z"
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.22.1
    component: controller
    heritage: Tiller
    release: ingress
  name: ingress-nginx-ingress-controller
  namespace: gitlab-managed-apps
  resourceVersion: "3940"
  selfLink: /api/v1/namespaces/gitlab-managed-apps/services/ingress-nginx-ingress-controller
  uid: c175afcc-0b73-11ea-91ec-5254008dd01b
spec:
  clusterIP: 10.107.35.248
  externalIPs:
  - 1.2.3.4 # (public IP)
  externalTrafficPolicy: Local
  healthCheckNodePort: 30737
  ports:
  - name: http
    nodePort: 31972
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 31746
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: ingress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
But GitLab still can't find the ingress endpoint. I tried restarting the cluster and GitLab.
The network inspection in Gitlab always shows this response:
...
name: ingress
status: installed
status_reason: null
version: 1.22.1
external_ip: null
external_hostname: null
update_available: false
can_uninstall: false
...
Any ideas on how to get a working ingress endpoint?
GitLab: 12.4.3 (4d477238500) k8s: 1.16.3-00
I had the exact same issue as you, and I finally figured out how to solve it.
The first thing to understand is that on bare metal you can't make this work without MetalLB, because MetalLB calls the required Kubernetes APIs that make the cluster accept the IP address you give to a Service of type LoadBalancer.
So the first step is to deploy MetalLB to your cluster.
Then you need another machine, running a service like NGINX or HAProxy or whatever can do some load balancing.
Last but not least, you have to give that load balancer machine's IP address to MetalLB, so that it can assign it to the Service.
Usually MetalLB takes a range of IP addresses, but you can also give it a single IP address, like I did:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: staging-public-ips
      protocol: layer2
      addresses:
      - 1.2.3.4/32
This way, MetalLB will assign the IP address to the Service of type LoadBalancer, and GitLab will finally find the IP address.
WARNING: MetalLB assigns each IP address only once. If you need many Services of type LoadBalancer, you will need as many machines running NGINX/HAProxy and so on, and you must add each of their IP addresses to the MetalLB address pool.
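To verify that the address was actually assigned (the Service name matches the output above):

kubectl get service ingress-nginx-ingress-controller -n gitlab-managed-apps
# The EXTERNAL-IP column should now show 1.2.3.4 instead of <pending>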
For your information, I've posted all the technical details to my GitLab issue here.

Kubernetes and ERR_CONNECTION_RESET

I've got a pod with 2 containers, both running nginx. One is running on port 80, the other on port 88. I have no trouble accessing the one on port 80, but can't seem to access the one on port 88. When I try, I get:
This site can’t be reached
The connection was reset.
ERR_CONNECTION_RESET
So here's the details.
1) The container is defined in the deployment YAML as:
- name: rss-reader
  image: nickchase/nginx-php-rss:v3
  ports:
  - containerPort: 88
2) I created the service with:
kubectl expose deployment rss-site --port=88 --target-port=88 --type=NodePort --name=backend
3) This created the following service:
root@kubeclient:/home/ubuntu# kubectl describe service backend
Name:               backend
Namespace:          default
Labels:             app=web
Selector:           app=web
Type:               NodePort
IP:                 11.1.250.209
Port:               <unset> 88/TCP
NodePort:           <unset> 31754/TCP
Endpoints:          10.200.41.2:88,10.200.9.2:88
Session Affinity:   None
No events.
And when I tried to access it, I used the URL
http://[nodeip]:31754/index.php
Now, when I instantiate the container manually with Docker, this works.
So anybody have a clue what I'm missing here?
Thanks in advance...
My presumption is that you're using the wrong access IP. Are you trying to access the minion's (node's) IP address on port 31754?
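If the node IP and NodePort are correct, the next thing worth checking is whether the container is actually listening on port 88, e.g. (the pod name is a placeholder, and this assumes curl exists in the image):

kubectl exec -it <rss-site-pod> -c rss-reader -- curl -v http://localhost:88/index.php
# A connection reset here as well would mean nginx inside the container
# isn't bound to port 88, and the service is not the culprit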

Kubernetes service externalIPs not forwarding

I have a Kubernetes cluster running 1.2.3 binaries along with flannel 0.5.5, using the GCE backend with IP forwarding enabled. For some reason, although I specify a particular node's external IP address, traffic is not forwarded to that node.
Additionally, I cannot create external load balancers: the controller-manager says it can't find the GCE instances backing the nodes, even though the nodes are in Ready state. I've looked at the source where the load balancer creation happens; my guess is that it's either a permissions issue (I gave the cluster full permissions for GCE) or it's not finding the metadata.
Here is an example of the services in question:
kind: "Service"
apiVersion: "v1"
metadata:
name: "client"
spec:
ports:
- protocol: "TCP"
port: 80
targetPort: 80
name: "insecure"
- protocol: "TCP"
port: 443
targetPort: 443
name: "secure"
selector:
name: "client"
sessionAffinity: "ClientIP"
externalIPs:
- "<Node External IP>"
And when I was trying to create the load balancer, it had type: LoadBalancer.
Why would the forwarding to the node IP not work? I have an idea about the load balancer issue, but does anyone have any insight?
So I eventually worked around this issue by creating an external load balancer, as shown below. Only then did I have a valid external IP.
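In other words, dropping externalIPs and letting GCE allocate the address; a sketch of the same Service with that change:

kind: "Service"
apiVersion: "v1"
metadata:
  name: "client"
spec:
  type: "LoadBalancer"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
    name: "insecure"
  - protocol: "TCP"
    port: 443
    targetPort: 443
    name: "secure"
  selector:
    name: "client"
  sessionAffinity: "ClientIP"

kubectl get service client will eventually show the GCE-allocated address under EXTERNAL-IP.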
