Kubernetes service external IP pending - nginx

I am trying to deploy nginx on Kubernetes; the Kubernetes version is v1.5.2.
I have deployed nginx with 3 replicas. The YAML file is below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 3
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80
and now I want to expose its port 80 on port 30062 of the node. For that I created the service below:
kind: Service
apiVersion: v1
metadata:
  name: nginx-ils-service
spec:
  ports:
  - name: http
    port: 80
    nodePort: 30062
  selector:
    app: nginx
  type: LoadBalancer
This service is working as it should, but the external IP shows as pending, not only on the Kubernetes dashboard but also in the terminal.

It looks like you are using a custom Kubernetes cluster (using minikube, kubeadm or the like). In this case, there is no LoadBalancer integration (unlike AWS or Google Cloud). With this default setup, you can only use NodePort or an Ingress controller.
With an Ingress controller you can set up a domain name which maps to your pod; you don't need to give your Service the LoadBalancer type if you use an Ingress controller.
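For illustration, a minimal Ingress sketch that maps a host name to the Service from the question (the host name is a placeholder; on current clusters the apiVersion is networking.k8s.io/v1 rather than extensions/v1beta1):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: nginx.example.com              # placeholder domain
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-ils-service # the Service defined in the question
          servicePort: 80
You still need an Ingress controller (for example ingress-nginx) running in the cluster for this object to have any effect.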

If you are using Minikube, there is a magic command!
$ minikube tunnel
Hopefully someone can save a few minutes with this.
Reference: https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel

If you are not using GCE or EKS (e.g. you used kubeadm), you can add an externalIPs spec to your Service YAML. You can use the IP associated with your node's primary interface, such as eth0. You can then access the service externally, using the external IP of the node.
...
spec:
  type: LoadBalancer
  externalIPs:
  - 192.168.0.10
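Assuming 192.168.0.10 is your node's address and the Service listens on port 80, you should then be able to reach it from outside the cluster:
curl http://192.168.0.10:80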

I created a single-node k8s cluster using kubeadm. When I tried port-forward and kubectl proxy, the external IP showed as pending.
$ kubectl get svc -n argocd argocd-server
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
argocd-server   LoadBalancer   10.107.37.153   <pending>     80:30047/TCP,443:31307/TCP   110s
In my case I've patched the service like this:
kubectl patch svc <svc-name> -n <namespace> -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.31.71.218"]}}'
After this, it started serving over that IP:
$ kubectl get svc argo-ui -n argo
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
argo-ui   LoadBalancer   10.103.219.8   172.31.71.218   80:30981/TCP   7m50s

To access a service on minikube, you need to run the following command:
minikube service [-n NAMESPACE] [--url] NAME
More information here: Minikube GitHub

When using Minikube, you can get the IP and port through which you can access the service by running:
minikube service [service name]
E.g.:
minikube service kubia-http

If it is your private k8s cluster, MetalLB would be a better fit. Below are the steps.
Step 1: Install MetalLB in your cluster
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
Step 2: Configure it using a ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.42.42.100-172.42.42.105   # Update this with your nodes' IP range
Step 3: Create your service to get an external IP (it will be a private IP, though). A sketch follows below.
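As a sketch, a Service of type LoadBalancer like the one below would be assigned an address from the pool configured above (the name and selector are placeholders for your own workload):
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb          # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: nginx            # must match your pod labels
  ports:
  - port: 80
    targetPort: 80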
FYR: screenshots of the service before and after the MetalLB installation (images not included).

If running on minikube, don't forget to mention the namespace if you are not using the default one.
minikube service << service_name >> --url --namespace=<< namespace_name >>

Following Javier's answer above, I decided to go with "patching up the external IP" for my load balancer.
$ kubectl patch service my-loadbalancer-service-name \
-n lb-service-namespace \
-p '{"spec": {"type": "LoadBalancer", "externalIPs":["192.168.39.25"]}}'
This will replace the 'pending' status with a patched-in IP address you can use for your cluster.
For more on this, please see karthik's post on LoadBalancer support with Minikube for Kubernetes.
Not the cleanest way to do it; I needed a temporary solution. Hope this helps somebody.

If you are using minikube, then run the commands below from a terminal:
$ minikube ip
172.17.0.2
$ curl http://172.17.0.2:31245
or simply
$ curl http://$(minikube ip):31245

In case someone is using MicroK8s: You need a network load balancer.
MicroK8s comes with metallb, you can enable it like this:
microk8s enable metallb
<pending> should turn into an actual IP address then.

A general way to expose an application running on a set of Pods as a network service is called a Service in Kubernetes. There are four types of Service in Kubernetes.
ClusterIP
The Service is only reachable from within the cluster.
NodePort
You can reach the Service from outside the cluster using NodeIP:NodePort. The default node port range is 30000-32767, and this range can be changed by setting --service-node-port-range at cluster-creation time (a sketch follows at the end of this answer).
LoadBalancer
Exposes the Service externally using a cloud provider's load balancer.
ExternalName
Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
Only the LoadBalancer type fills in the EXTERNAL-IP column, and it only works if the Kubernetes cluster is able to assign an IP address for that particular service. You can use the MetalLB load balancer to provision IPs for your LoadBalancer services.
I hope this clears up your doubt.
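As a sketch, a NodePort Service with an explicit port chosen from the default range (the name and selector are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport     # placeholder name
spec:
  type: NodePort
  selector:
    app: nginx             # must match your pod labels
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080        # must fall inside the configured node port range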

You can patch in the IP of the node where the pods are hosted (the private IP of the node); this is the easy workaround.
Taking reference from the above posts, the following worked for me:
kubectl patch service my-loadbalancer-service-name \
-n lb-service-namespace \
-p '{"spec": {"type": "LoadBalancer", "externalIPs":["xxx.xxx.xxx.xxx Private IP of Physical Server - Node - where deployment is done "]}}'

Adding a solution for those who encountered this error while running on amazon-eks.
First of all run:
kubectl describe svc <service-name>
And then review the events field in the example output below:
Name: some-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"some-service","namespace":"default"},"spec":{"ports":[{"port":80,...
Selector: app=some
Type: LoadBalancer
IP: 10.100.91.19
Port: <unset> 80/TCP
TargetPort: 5000/TCP
NodePort: <unset> 31022/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type     Reason                  Age  From                Message
----     ------                  ---- ----                -------
Normal   EnsuringLoadBalancer    68s  service-controller  Ensuring load balancer
Warning  SyncLoadBalancerFailed  67s  service-controller  Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB
Review the error message:
Failed to ensure load balancer: could not find any suitable subnets for creating the ELB
In my case, the reasons that no suitable subnets were found for creating the ELB were:
1: The EKS cluster was deployed on the wrong subnet group - internal subnets instead of public-facing ones.
(By default, services of type LoadBalancer create public-facing load balancers unless the service.beta.kubernetes.io/aws-load-balancer-internal: "true" annotation is provided.)
2: The subnets weren't tagged according to the requirements mentioned here.
Tagging the VPC with:
Key: kubernetes.io/cluster/yourEKSClusterName
Value: shared
Tagging public subnets with:
Key: kubernetes.io/role/elb
Value: 1

If you are using bare metal, you need the NodePort type:
https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
LoadBalancer works by default with cloud providers like DigitalOcean, AWS, etc.
kubectl edit service ingress-nginx-controller
spec:
  type: NodePort
  externalIPs:
  - xxx.xxx.xxx.xx   # use the node's public IP

Use NodePort:
$ kubectl run user-login --replicas=2 --labels="run=user-login" --image=kingslayerr/teamproject:version2 --port=5000
$ kubectl expose deployment user-login --type=NodePort --name=user-login-service
$ kubectl describe services user-login-service
(Note down the port)
$ kubectl cluster-info
(Get the IP where the master is running)
Your service is accessible at <IP>:<port>.

The LoadBalancer ServiceType will only work if the underlying infrastructure supports the automatic creation of load balancers and has the respective support in Kubernetes, as is the case with Google Cloud Platform and AWS. If no such feature is configured, the LoadBalancer IP address field is not populated and stays in pending status, and the Service works the same way as a NodePort-type Service.
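As an illustration (the service name and port numbers below are made up), you can still reach such a Service through the node port shown in the PORT(S) column while the external IP stays pending:
$ kubectl get svc my-service
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
my-service   LoadBalancer   10.96.120.10   <pending>     80:31380/TCP   5m
$ curl http://<node-ip>:31380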

minikube tunnel
The solution below works in my case.
First of all, try this command:
minikube tunnel
If it's not working for you, follow the steps below.
I restarted minikube:
minikube stop
then
minikube start
After that, re-run the Kubernetes dashboard:
minikube dashboard
After it finishes, execute:
minikube tunnel

I had the same problem.
Windows 10 Desktop + Docker Desktop 4.7.1 (77678) + Minikube v1.25.2
Following the official docs, I resolved it on my side with:
PS C:\WINDOWS\system32> kubectl expose deployment sito-php --type=LoadBalancer --port=8080 --name=servizio-php
service/servizio-php exposed
PS C:\WINDOWS\system32> minikube tunnel
* Tunnel successfully started
* NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...
* Starting tunnel for service servizio-php.
PS E:\docker\apache-php> kubectl get service
NAME           TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP      10.96.0.1      <none>        443/TCP          33h
servizio-php   LoadBalancer   10.98.218.86   127.0.0.1     8080:30270/TCP   4m39s
Then open the browser at http://127.0.0.1:8080/.

Same issue:
os>kubectl get svc right-sabertooth-wordpress
NAME                         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)
right-sabertooth-wordpress   LoadBalancer   10.97.130.7   <pending>     80:30454/TCP,443:30427/TCP
os>minikube service list
|-------------|----------------------------|--------------------------------|
| NAMESPACE | NAME | URL |
|-------------|----------------------------|--------------------------------|
| default | kubernetes | No node port |
| default | right-sabertooth-mariadb | No node port |
| default | right-sabertooth-wordpress | http://192.168.99.100:30454 |
| | | http://192.168.99.100:30427 |
| kube-system | kube-dns | No node port |
| kube-system | tiller-deploy | No node port |
|-------------|----------------------------|--------------------------------|
It is, however, accessible via http://192.168.99.100:30454.

There are three ways of exposing your service:
NodePort
ClusterIP
LoadBalancer
When we use a LoadBalancer we basically ask our cloud provider to give us a DNS name which can be accessed online.
Note: not a domain name, but a DNS name.
So the LoadBalancer type does not work in our local minikube environment.

For those who are using minikube and trying to access a service of kind NodePort or LoadBalancer:
You don't get an external IP to access the service on the local system, so a good option is to use the minikube IP.
Use the command below to get the URL once your service is exposed:
minikube service service-name --url
Now use that URL to serve your purpose.

Check the kube-controller-manager logs. I was able to solve this issue by setting the clusterID tags on the EC2 instance I deployed the cluster on.

If you are not on a supported cloud (AWS, Azure, GCloud, etc.) you can't use LoadBalancer without MetalLB (https://metallb.universe.tf/), but it is still in beta.

Deleting the existing service and creating the same service again solved my problem. My problem was that the load-balancing IP I defined was already in use, so the external endpoint stayed pending. Changing to a new load-balancing IP still didn't work.
Finally, deleting the existing service and creating a new one solved my problem.

For your use case the best option is to use a NodePort service instead of the LoadBalancer type, because a load balancer is not available.

I was getting this error on Docker Desktop. I just quit it and started it again (Docker Desktop). It took a few seconds, then it worked fine.

Deleting all the older services and creating new ones resolved my issue. The IP was bound to an older service. Just try "$ kubectl get svc" and then delete each svc one by one with "$ kubectl delete svc <svc name>".

Maybe the subnet in which you are deploying your service does not have enough IPs.

If you are trying to do this in your on-prem cloud, you need an L4 load-balancer service to create the LB instances.
Otherwise you end up with the endless "pending" message you described. It is visible in a video here: https://www.youtube.com/watch?v=p6FYtNpsT1M
You can use open-source tools to solve this problem; the video provides some guidance on how the automation process should work.

Related

How to route TCP traffic from outside to a Service inside a Kubernetes cluster?

I have a cluster on Azure (AKS). I have an OrientDB service:
apiVersion: v1
kind: Service
metadata:
  name: orientdb
  labels:
    app: orientdb
    role: backend
spec:
  selector:
    app: orientdb
  ports:
  - protocol: TCP
    port: 2424
    name: binary
  - protocol: TCP
    port: 2480
    name: http
which I want to expose to the outside, such that an app from the internet can send TCP traffic directly to this service.
(In order to connect to orientdb you need to connect over TCP to port 2424)
I am not good at networking, so this is my understanding, which might well be wrong.
I tried the following:
Setting up an Ingress:
did not work, because Ingress handles HTTP, but it is not well suited for TCP.
I tried to set the externalIP field in the service config in the NodePort definition:
did not work.
So my problem is the following:
I cannot send TCP traffic to the service. HTTP traffic works fine.
I would really appreciate it if someone could show me how to expose my service such that I can send TCP traffic directly to my OrientDB service.
Thanks in advance.
You can use either a service of type LoadBalancer (I assume AKS supports that), or you can just use the node port.
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
kubectl get services my-service
The output is similar to this:
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP      PORT(S)    AGE
my-service   ClusterIP   10.3.245.137   104.198.205.71   8080/TCP   54s
Reference here
kubectl expose usage:
$ kubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP|SCTP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type]
You can use the --port=2424 and --target-port=2424 options to set the correct ports in the kubectl expose command above.
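Put together for this OrientDB case, a sketch (assuming the deployment is named orientdb; adjust the names and type to your setup):
kubectl expose deployment orientdb --type=LoadBalancer --name=orientdb-public --port=2424 --target-port=2424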

kubernetes expose nginx to static ip in gcp with ingress service configuration error

I had a couple of questions regarding Kubernetes Ingress services/controllers.
For example, I have an nginx frontend image that I am trying to run with kubectl -
kubectl run <deployment> --image <repo> --port <internal-nginx-port>
Now I tried to expose this to the outer world with a service -
kubectl expose deployment <deployment> --target-port <port>
Then I tried to create an Ingress with the following nginx-ing.yaml -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: urtutorsv2ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "coreos"
spec:
  backend:
    serviceName: <service>
    servicePort: <port>
Where my ingress.global-static-ip-name is correctly created & available in the Google Cloud console.
[I am assuming the service port here is the port I want on my "coreos" IP, so I set it to 80 initially, which didn't work; then I tried setting it to the same port as specified in the first step, but it still didn't work.]
So, the issue is that I am not able to access the frontend at the http://COREOS_IP URL.
Which is why I tried to use -
kubectl expose deployment <deployment> --target-port <port> --type NodePort
to see if it worked with a NodePort, and I was able to access the frontend.
So, I am thinking there might be a configuration mistake here because of which I am not getting results with the ingress.
Can anyone here help debug / fix the issue ?
Yeah, the service is there. I tried to check the status with kubectl get services and kubectl describe service k8urtutorsv2. It showed the service. I tried editing it & saved the nodePort value. The thing is, it works with the NodePort but not on 80 or 443.
You cannot directly expose a service on port 80 or 443.
The available range of exposed node ports is predefined in the kube-apiserver configuration by the service-node-port-range option, with the default value 30000-32767.
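As a sketch, on a kubeadm-style cluster (where the API server runs as a static pod; the path and range below are assumptions for illustration) the range can be widened by adding the flag to the kube-apiserver manifest:
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=80-32767   # widen the allowed NodePort range
    # ...other flags unchanged
The kubelet restarts the API server automatically after the static pod manifest changes.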

Kubernetes - How to access nginx load balancing from outside the cluster using a NodePort service

I have a Kubernetes cluster with a master node and two other nodes:
sudo kubectl get nodes
NAME                STATUS   ROLES    AGE   VERSION
kubernetes-master   Ready    master   4h    v1.10.2
kubernetes-node1    Ready    <none>   4h    v1.10.2
kubernetes-node2    Ready    <none>   34m   v1.10.2
Each of them is running on a VirtualBox Ubuntu VM, accessible from the guest computer:
kubernetes-master (192.168.56.3)
kubernetes-node1 (192.168.56.4)
kubernetes-node2 (192.168.56.6)
I deployed an nginx server with two replicas, having one pod per kubernetes-node-x:
sudo kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP               NODE
nginx-deployment-64ff85b579-5k5zh   1/1     Running   0          8s    192.168.129.71   kubernetes-node1
nginx-deployment-64ff85b579-b9zcz   1/1     Running   0          8s    192.168.22.66    kubernetes-node2
Next I expose a service for the nginx-deployment as a NodePort to access it from outside the cluster:
sudo kubectl expose deployment/nginx-deployment --type=NodePort
sudo kubectl get services
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.96.0.1      <none>        443/TCP        4h
nginx-deployment   NodePort    10.96.194.15   <none>        80:32446/TCP   2m
sudo kubectl describe service nginx-deployment
Name: nginx-deployment
Namespace: default
Labels: app=nginx
Annotations: <none>
Selector: app=nginx
Type: NodePort
IP: 10.96.194.15
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32446/TCP
Endpoints: 192.168.129.72:80,192.168.22.67:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I can access each pod in a node directly using their node IP
kubernetes-node1 http://192.168.56.4:32446/
kubernetes-node2 http://192.168.56.6:32446/
But, I thought that K8s provided some kind of external cluster ip that balanced the requests to the nodes from the outside. What is that IP??
But, I thought that K8s provided some kind of external cluster ip that balanced the requests to the nodes from the outside. What is that IP??
The Cluster IP is internal to the cluster. It is not exposed to the outside; it is for intercommunication across the cluster.
Indeed, you have the LoadBalancer type of service that can do the trick you need, only it depends on cloud providers or minikube/docker edge to work properly.
I can access each pod in a node directly using their node IP
Actually you don't access them individually that way. NodePort does a slightly different trick, since it essentially load balances requests from outside on ANY exposed node IP. In a nutshell, if you hit any of the node IPs on the exposed NodePort, kube-proxy will make sure that the required service gets the request, and then the service does round-robin through the active pods. So although you hit a specific node IP, you don't necessarily get the pod running on that specific node. More details on that you can find here: https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0; as the author there said, it is not technically the most accurate representation, but an attempt to show on a logical level what is happening with NodePort exposure.
As a side note, in order to do this on bare metal and do SSL or the like, you need to provision an ingress of your own. Say, place one nginx on a specific node and then reference all the appropriate services you want exposed (mind the FQDN for the service) as upstream(s); those services can run across multiple nodes with as many nginx pods of their own as desired - you don't need to handle the exact details of that since k8s runs the show. That way you have one node point (the ingress nginx) with a known IP address that handles incoming traffic and redirects it to services inside k8s that can run across any node(s). I am bad at ASCII art but will give it a try:
(outside) -> ingress (nginx) +--> my-service FQDN (running accross nodes):
[node-0] | [node-1]: my-service-pod-01 with nginx-01
| [node 2]: my-service-pod-02 with nginx-02
| ...
+--> my-second-service FQDN
| [node-1]: my-second-service-pod with apache?
...
In the above sketch you have an nginx ingress on node-0 (known IP) that takes external traffic and then handles my-service (running on two pods on two nodes) and my-second-service (a single pod) as upstreams. You only need to expose the FQDN on the services for this to work, without worrying about the details of the IPs of specific nodes. More info you can find in the documentation: https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/
Also, way better than my ASCII art is the representation from the same link as in the previous point, which illustrates the idea behind Ingress.
Updated for comments
Why isn't the service load balancing the used pods from the service?
This can happen for several reasons. Depending on how your liveness and readiness probes are configured, maybe the service still doesn't see the pod as out of service. Due to this async nature in a distributed system such as k8s, we experience temporary loss of requests when pods get removed during, for example, rolling updates and the like. Secondly, depending on how your kube-proxy was configured, there are options where you can limit it. Per the official documentation (https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport), using --nodeport-addresses you can change the kube-proxy behavior. It turns out that round-robin was the old kube-proxy behavior; apparently the new one should be random. Finally, to exclude connection and session issues from the browser, did you try this from an anonymous session as well? Do you maybe have DNS cached locally?
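For illustration, a sketch of the flag mentioned above, restricting which node addresses answer NodePort traffic (the CIDR is a placeholder; how you pass the flag depends on how kube-proxy is deployed in your cluster):
--nodeport-addresses=192.168.56.0/24
With this set, only requests arriving on node IPs inside that range are proxied to NodePort services.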
Something else, I killed the pod from node1, and when calling to node1 it didn't use the pod from node 2.
This is a bit strange. It might be related to the above-mentioned probes, though. According to the official documentation this should not be the case. We had NodePort behaving in line with the official documentation mentioned above: "and each Node will proxy that port (the same port number on every Node) into your Service". But if that is your case, then probably an LB or Ingress, or maybe even ClusterIP with an external address (see below), can do the trick for you.
if the service is internal (ClusterIP) ... does it load balance to any of the pods in the nodes
Most definitely yes. One more thing: you can use this behavior to also expose 'load balanced' behavior in the 'standard' port range, as opposed to the 30000+ range from NodePort. Here is an excerpt of the service manifest we use for an ingress controller.
apiVersion: v1
kind: Service
metadata:
  namespace: ns-my-namespace
  name: svc-nginx-ingress-example
  labels:
    name: nginx-ingress-example
    role: frontend-example
    application: nginx-example
spec:
  selector:
    name: nginx-ingress-example
    role: frontend-example
    application: nginx-example
  ports:
  - protocol: TCP
    name: http-port
    port: 80
    targetPort: 80
  - protocol: TCP
    name: ssl-port
    port: 443
    targetPort: 443
  externalIPs:
  - 123.123.123.123
Note that in the above example the imaginary 123.123.123.123 exposed with externalIPs represents the IP address of one of our worker nodes. The pods behind the svc-nginx-ingress-example service don't need to be on that node at all, but they still get the traffic routed to them (and load balanced across the pods as well) when that IP is hit on the specified port.

Kubernetes whitelist-source-range blocks instead of whitelist IP

Running Kubernetes on GKE
Installed the nginx controller with the latest stable release by using Helm.
Everything works well, except that adding the whitelist-source-range annotation results in me being completely locked out of my service.
Ingress config
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: staging-ingress
  namespace: staging
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/whitelist-source-range: "x.x.x.x, y.y.y.y"
spec:
  rules:
  - host: staging.com
    http:
      paths:
      - path: /
        backend:
          serviceName: staging-service
          servicePort: 80
I connected to the controller pod and checked the nginx config and found this:
# Deny for staging.com/
geo $the_real_ip $deny_5b3266e9d666401cb7ac676a73d8d5ae {
    default 1;
    x.x.x.x 0;
    y.y.y.y 0;
}
It looks like it is locking me out instead of whitelisting these IPs. But it is also locking out all other addresses... I get 403 when going through the staging.com host.
Yes. However, I figured it out by myself. Your service has to have externalTrafficPolicy: Local enabled. That means that the actual client IP will be used instead of the internal cluster IP.
To accomplish this run
kubectl patch svc nginx-ingress-controller -p '{"spec":{"externalTrafficPolicy":"Local"}}'
Your nginx controller service has to be set as externalTrafficPolicy: Local. That means that the actual client IP will be used instead of cluster's internal IP.
You need to get the real service name from kubectl get svc command. The service is something like:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nobby-leopard-nginx-ingress-controller LoadBalancer 10.0.139.37 40.83.166.29 80:31223/TCP,443:30766/TCP 2d
nobby-leopard-nginx-ingress-controller is the service name you want to use.
To finish this, run
kubectl patch svc nobby-leopard-nginx-ingress-controller -p '{"spec":{"externalTrafficPolicy":"Local"}}'
When you setting up a new nginx controller, you can use the command below:
helm install stable/nginx-ingress \
--namespace kube-system \
--set controller.service.externalTrafficPolicy=Local
to have an nginx ingress controller that accepts the whitelist after installing.

Assign External IP to a Kubernetes Service

EDIT: The whole point of my setup is to achieve (if possible) the following :
I have multiple k8s nodes
When I contact an IP address (from my company's network), it should be routed to one of my container/pod/service/whatever.
I should be able to easily setup that IP (like in my service .yml definition)
I'm running a small Kubernetes cluster (built with kubeadm) in order to evaluate whether I can move my Docker (old) Swarm setup to k8s. The feature I absolutely need is the ability to assign IPs to containers, like I do with MacVlan.
In my current docker setup, I'm using MacVlan to assign IP addresses from my company's network to some containers so I can reach directly (without reverse-proxy) like if it's any physical server. I'm trying to achieve something similar with k8s.
I found out that:
I have to use Service
I can't use the LoadBalancer type, as it's only for compatible cloud providers (like GCE or AWS).
I should use ExternalIPs
Ingress Resources are some kind of reverse proxy ?
My yaml file is :
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
      nodeSelector:
        kubernetes.io/hostname: k8s-slave-3
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  externalIPs:
  - A.B.C.D
I was hoping that my service would get the IP A.B.C.D (which is on my company's network). My deployment is working, as I can reach my nginx container from inside the k8s cluster using its ClusterIP.
What am I missing? Or at least, where can I find information on my network traffic in order to see if packets are coming in?
EDIT :
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.96.0.1 <none> 443/TCP 6d
nginx-service 10.102.64.83 A.B.C.D 80/TCP 23h
Thanks.
First of all run this command:
kubectl get -n namespace services
The above command will return output like this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend NodePort 10.100.44.154 <none> 9400:3003/TCP 13h
frontend NodePort 10.107.53.39 <none> 3000:30017/TCP 13h
It is clear from the above output that external IPs are not assigned to the services yet. To assign an external IP to the backend service, run the following command:
kubectl patch svc backend -p '{"spec":{"externalIPs":["192.168.0.194"]}}'
and to assign an external IP to the frontend service, run this command:
kubectl patch svc frontend -p '{"spec":{"externalIPs":["192.168.0.194"]}}'
Now get the namespace services again to check the external IP assignment:
kubectl get -n namespace services
We get an output like this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend NodePort 10.100.44.154 192.168.0.194 9400:3003/TCP 13h
frontend NodePort 10.107.53.39 192.168.0.194 3000:30017/TCP 13h
Cheers! The Kubernetes external IPs are now assigned.
If this is just for testing, then try
kubectl port-forward service/nginx-service 80:80
Then you can
curl http://localhost:80
A solution that could work (and not only for testing, though it has its shortcomings) is to set your Pod to map the host network with the hostNetwork spec field set to true.
It means that you won't need a service to expose your Pod, as it will always be accessible on your host via a single port (the containerPort you specified in the manifest). No need to keep a DNS mapping record in that case.
This also means that you can only run a single instance of this Pod on a given node (talking about shortcomings...). As such, it makes it a good candidate for a DaemonSet object.
If your Pod still needs to access/resolve internal Kubernetes hostnames, you need to set the dnsPolicy spec field to ClusterFirstWithHostNet. This setting will enable your pod to access the K8S DNS service.
Example:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
spec:
  selector:             # selector is required for apps/v1
    matchLabels:
      app: nginx-reverse-proxy
  template:
    metadata:
      labels:
        app: nginx-reverse-proxy
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      tolerations:      # allow a Pod instance to run on master - optional
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - image: nginx
        name: nginx
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
EDIT: I was put on this track thanks to the ingress-nginx documentation.
You can just Patch an External IP
CMD: $ kubectl patch svc svc_name -p '{"spec":{"externalIPs":["your_external_ip"]}}'
E.g.: $ kubectl patch svc kubernetes -p '{"spec":{"externalIPs":["10.2.8.19"]}}'
You can try the kube-keepalived-vip configuration to route the traffic: https://github.com/kubernetes/contrib/tree/master/keepalived-vip
You can try to add "type: NodePort" in your yaml file for the service and then you'll have a port to access it via the web browser or from the outside. For my case, it helped.
I don't know if that helps in your particular case, but what I did (and I'm on a bare-metal cluster) was to use LoadBalancer and set the loadBalancerIP as well as the externalIPs to my server IP, as you did.
After that the correct external IP showed up for the load balancer.
Always use the namespace flag either before or after the service name, because namespace-based scoping is applicable to deployments and services, and this points the patch at the service in that specific namespace: kubectl patch svc service-name -n namespace -p '{"spec":{"externalIPs":["IP"]}}'
Just include additional option.
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service --external-ip=1.1.1.1
