Expose traffic using Ingress Kubernetes - nginx

I am new to Kubernetes. I followed Kubernetes The Hard Way by Kelsey Hightower, and also this to set up Kubernetes in Azure. All the services are now up and running fine, but I am not able to expose traffic using a load balancer. I tried to add a Service object of type LoadBalancer, but the external IP stays at <pending>. I need to add an Ingress to expose the traffic.
nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-service
  name: nginx-service
spec:
  type: LoadBalancer
  externalIPs:
  - <ip>
  ports:
  - name: "80"
    port: 80
    targetPort: 80
  - name: "443"
    port: 443
    targetPort: 443
  selector:
    app: nginx-service
Thank you,

By default, the setup proposed by Kubernetes The Hard Way doesn't include a solution for Services of type LoadBalancer, so the external IP staying at <pending> forever is expected behavior. You need an out-of-the-box solution for that; a very commonly used one is MetalLB.
Note that MetalLB isn't going to allocate a public external IP for you: it allocates an internal IP inside your VPC, and you have to create the necessary routing rules to route traffic to that IP.
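For illustration, here is a minimal sketch of a layer 2 configuration for a recent MetalLB release (v0.13+, where configuration moved from a ConfigMap to CRDs; an older ConfigMap-based example appears in an answer further down). The pool name and the address range are placeholders; you would use a free range inside your VPC:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: vpc-pool                       # placeholder name
  namespace: metallb-system
spec:
  addresses:
  - 10.240.0.100-10.240.0.110          # placeholder: a free range inside your VPC
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: vpc-l2                         # placeholder name
  namespace: metallb-system
spec:
  ipAddressPools:
  - vpc-pool

Once this is applied, MetalLB assigns an address from the pool to any Service of type LoadBalancer, and the external IP moves out of <pending>.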

Related

How do I route traffic to an external SFTP server via a port in kubernetes nginx?

The end goal: be able to sftp into the server at domain.com:42150, with the routing handled by Kubernetes.
The reason: This behavior is currently handled by an HAProxy config that we are moving away from, but we still need to support this behavior in our Kubernetes set up.
I came across this and could not figure out how to make it work.
I have the IP of the sftp server and the port.
So, basically, if a request comes in at domain.com:42150, then it should connect to external-ip:22.
I have created a config-map like the one in the linked article:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: nginx-ingress
data:
  42150: "nginx-ingress/external-sftp:80"
Which, by my understanding, should route requests on port 42150 to this service:
apiVersion: v1
kind: Service
metadata:
  name: external-sftp
  namespace: nginx-ingress
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 22
    protocol: TCP
And although it's not listed in that article, I know from connecting to other outside services that I need to create an Endpoints object for it to use:
apiVersion: v1
kind: Endpoints
metadata:
  name: external-sftp
  namespace: nginx-ingress
subsets:
- addresses:
  - ip: 12.345.67.89
  ports:
  - port: 22
    protocol: TCP
Obviously this isn't working. I never ask questions here; usually my answers are easy to find, but for this one I cannot find an answer. I'm just stuck.
Is there something I'm missing? I'm starting to think this way of doing it is not possible. Is there a better way to go about doing this?
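One detail worth checking with the tcp-services approach: the ingress controller itself must be told about the ConfigMap and must listen on the extra port. A hypothetical sketch, assuming a stock ingress-nginx install: the controller is started with --tcp-services-configmap=nginx-ingress/tcp-services, and the Service that exposes the controller also needs the port:

# Controller args (excerpt from the ingress-nginx controller Deployment):
#   --tcp-services-configmap=nginx-ingress/tcp-services
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # hypothetical name; match your deployment
  namespace: nginx-ingress
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx             # hypothetical label; match your controller pods
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: sftp-42150
    port: 42150                    # the extra TCP port from the tcp-services ConfigMap
    targetPort: 42150
    protocol: TCP

Without exposing port 42150 on the controller's Service, traffic to domain.com:42150 never reaches nginx in the first place.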

Can I configure nginx-ingress to route traffic to outside the cluster

If I have a kubernetes cluster in AKS with an nginx-ingress, can I then forward certain traffic to something external to the cluster like an App Service?
If I open my-domain.com/svc3, I want traffic to be routed to the App Service.
If this is not directly possible, what would be the best solution?
1) I could put an additional load balancer (like AppGateway) in front of both the AKS cluster and the App Service
2) I could instantiate a second nginx as a service, which then routes traffic to the app service
3) ... ?
I think you can use the technique of mapping an external service into Kubernetes:
kind: Service
apiVersion: v1
metadata:
  name: external-service
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
And here is the Endpoints object:
kind: Endpoints
apiVersion: v1
metadata:
  name: external-service
subsets:
- addresses:
  - ip: 101.280.1.44
  ports:
  - port: 80
For more information, you can check this video: https://www.youtube.com/watch?v=fvpq4jqtuZ8
You can also read this document: https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services
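To route my-domain.com/svc3 to this mapped service, you would still add an Ingress rule in front of it. A sketch, assuming the networking.k8s.io/v1 API and the nginx ingress class; the App Service hostname in the upstream-vhost annotation is a hypothetical placeholder (App Service generally expects the Host header to match its own hostname):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: svc3-external
  annotations:
    # hypothetical: rewrite the Host header to the App Service hostname
    nginx.ingress.kubernetes.io/upstream-vhost: "myapp.azurewebsites.net"
spec:
  ingressClassName: nginx
  rules:
  - host: my-domain.com
    http:
      paths:
      - path: /svc3
        pathType: Prefix
        backend:
          service:
            name: external-service   # the ClusterIP Service mapped above
            port:
              number: 80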

GitLab Kubernetes integration

I have a custom Kubernetes cluster on a server with a public IP and DNS pointing to it (also a wildcard record).
Gitlab was configured with the cluster following this guide: https://gitlab.touch4it.com/help/user/project/clusters/index#add-existing-kubernetes-cluster
However, after installing Ingress, the ingress endpoint is never detected.
I tried patching the Service object in k8s, like so:
externalIPs:                  # was empty
- 1.2.3.4
externalTrafficPolicy: Local  # was Cluster
I suspect that the problem is the empty loadBalancer status of the Service object (scroll to the end of the following output):
# kubectl get service ingress-nginx-ingress-controller -n gitlab-managed-apps -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-11-20T08:57:18Z"
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.22.1
    component: controller
    heritage: Tiller
    release: ingress
  name: ingress-nginx-ingress-controller
  namespace: gitlab-managed-apps
  resourceVersion: "3940"
  selfLink: /api/v1/namespaces/gitlab-managed-apps/services/ingress-nginx-ingress-controller
  uid: c175afcc-0b73-11ea-91ec-5254008dd01b
spec:
  clusterIP: 10.107.35.248
  externalIPs:
  - 1.2.3.4 # (public IP)
  externalTrafficPolicy: Local
  healthCheckNodePort: 30737
  ports:
  - name: http
    nodePort: 31972
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 31746
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: ingress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
But GitLab still can't find the ingress endpoint. I tried restarting the cluster and GitLab.
The network inspection in GitLab always shows this response:
...
name: ingress
status: installed
status_reason: null
version: 1.22.1
external_ip: null
external_hostname: null
update_available: false
can_uninstall: false
...
Any ideas how to have a working Ingress Endpoint?
GitLab: 12.4.3 (4d477238500) k8s: 1.16.3-00
I had the exact same issue as you, and I finally figured out how to solve it.
The first thing to understand is that on bare metal you can't make it work without MetalLB, because MetalLB calls the required Kubernetes APIs that make the cluster accept the IP address you give to the Service of type LoadBalancer.
So the first step is to deploy MetalLB to your cluster.
Then you need another machine running a service like NGINX or HAProxy, or whatever else can do some load balancing.
Last but not least, you have to give that load balancer machine's IP address to MetalLB so that it can assign it to the Service.
Usually MetalLB takes a range of IP addresses, but you can also give it a single IP address like I did:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: staging-public-ips
      protocol: layer2
      addresses:
      - 1.2.3.4/32
This way, MetalLB will assign the IP address to the Service of type LoadBalancer, and GitLab will finally find the IP address.
WARNING: MetalLB assigns each IP address only once. If you need many Services of type LoadBalancer, you will need as many machines running NGINX/HAProxy, and you will have to add each of their IP addresses to the MetalLB address pool.
For your information, I've posted all the technical details to my GitLab issue here.

Kubernetes loadbalancer stops serving traffic if using local traffic policy

Currently I am having an issue with one of my services that is set to be a load balancer. I am trying to get source IP preservation as stated in the docs. However, when I set externalTrafficPolicy to Local, I lose all traffic to the service. Is there something I'm missing that is causing this to fail?
Load Balancer Service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: loadbalancer
    role: loadbalancer-service
  name: lb-test
  namespace: default
spec:
  clusterIP: 10.3.249.57
  externalTrafficPolicy: Local
  ports:
  - name: example-service
    nodePort: 30581
    port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    app: loadbalancer-example
    role: example
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: *example.ip*
It could be several things. A couple of suggestions:
Your service is getting an external IP but doesn't know how to reply based on the local IP address of the pod.
Try running a sniffer on your pod to see whether packets from the external source arrive.
Try checking the logs of your application.
The health check in your load balancer is failing. With externalTrafficPolicy: Local, only nodes that actually run a pod of the service answer the health check on healthCheckNodePort; nodes without a local pod drop the traffic. Check the load balancer for your service in the GCP console.
Check that the instance port is listening (probably not, if your health check is failing).
Hope it helps.
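If it turns out that only some nodes run a pod for the service, a common mitigation is to run the backend as a DaemonSet, so that every node has a local endpoint and passes the load balancer health check. A minimal sketch; the names and image are hypothetical placeholders chosen to match the Service selector above:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: loadbalancer-example          # hypothetical name
  namespace: default
spec:
  selector:
    matchLabels:
      app: loadbalancer-example
      role: example
  template:
    metadata:
      labels:
        app: loadbalancer-example     # must match the Service selector above
        role: example
    spec:
      containers:
      - name: app
        image: example/app:latest     # hypothetical image
        ports:
        - containerPort: 8000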

Kubernetes service externalIPs not forwarding

I have a Kubernetes cluster running 1.2.3 binaries along with flannel 0.5.5. I am using the GCE backend with IP forwarding enabled. For some reason, although I specify a specific node's external IP address, traffic will not forward to the appropriate node.
Additionally, I cannot create external load balancers: the controller-manager says it can't find the GCE instances that are the nodes, even though they are in Ready state. I've looked at the source where the load balancer creation happens; my guess is it's either a permissions issue (I gave the cluster full permissions for GCE) or it's not finding the metadata.
Here is an example of the services in question:
kind: "Service"
apiVersion: "v1"
metadata:
name: "client"
spec:
ports:
- protocol: "TCP"
port: 80
targetPort: 80
name: "insecure"
- protocol: "TCP"
port: 443
targetPort: 443
name: "secure"
selector:
name: "client"
sessionAffinity: "ClientIP"
externalIPs:
- "<Node External IP>"
And when I was trying to create the load balancer, the service had type: LoadBalancer.
Why would the forwarding to the node IP not work? I have an idea as to the load balancer issue, but does anyone have any insight?
So I eventually worked around this issue by creating an external load balancer. Only then did I have a valid external IP.
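For illustration, a sketch of what that workaround looks like: drop the externalIPs list and let the cloud provider allocate the address via type: LoadBalancer (ports and selector carried over from the question):

kind: "Service"
apiVersion: "v1"
metadata:
  name: "client"
spec:
  type: "LoadBalancer"     # the cloud provider allocates the external IP
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
    name: "insecure"
  - protocol: "TCP"
    port: 443
    targetPort: 443
    name: "secure"
  selector:
    name: "client"
  sessionAffinity: "ClientIP"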
