Troubleshoot OpenStack Octavia LBaaS v2 ERROR - nginx

I have two Ubuntu 18.04 bare metal servers. Using a devstack deployment, I have stood up a multi-node (2-node) cluster where one server has the controller services and compute, while the second has only compute. In the controller node, I have enabled LBaaS v2 with Octavia.
# LBaaS
enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas stable/queens
enable_plugin octavia https://git.openstack.org/openstack/octavia stable/queens
enable_service q-lbaasv2 octavia o-cw o-hk o-hm o-api
I've created a Kubernetes cluster with 1 master and 2 minion nodes. Some initial testing was successful: deploying WordPress via Helm created a load balancer and I was able to access the app as expected.
I'm now trying to set up an nginx-ingress controller. When I deploy my nginx-ingress controller LoadBalancer service, I can see the load balancer created in OpenStack. However, attempts to access the ingress controller using the external IP always result in an empty reply.
Using the CLI I can see the load balancer, pools, and members. The member entries indicate there is an error:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| address | 10.0.0.9 |
| admin_state_up | True |
| created_at | 2018-09-28T22:15:51 |
| id | 109ad896-5953-4b2b-bbc9-d251d44c3817 |
| name | |
| operating_status | ERROR |
| project_id | 12b95a935dc3481688eb840249c9b167 |
| protocol_port | 31042 |
| provisioning_status | ACTIVE |
| subnet_id | 1e5efaa0-f95f-44a1-a271-541197f372ab |
| updated_at | 2018-09-28T22:16:33 |
| weight | 1 |
| monitor_port | None |
| monitor_address | None |
+---------------------+--------------------------------------+
However, there is no indication of what the error is, and there is no corresponding error in the logs that I can find.
Using kubectl port-forward I verified that the nginx ingress controller is up, running, and correctly configured. The problem seems to be in the load balancer.
My question is: how can I diagnose what the error is?
I found only one troubleshooting guide related to LBaaS v2, and it claims I should be able to see q-lbaas- namespaces when I run ip netns list. However, there are none defined.
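For what it's worth, with the Octavia (amphora-based) provider there are no q-lbaas- namespaces to inspect; the status tree can be checked from the CLI instead. A hedged sketch, assuming the openstack client with the Octavia plugin is installed (the load balancer ID is a placeholder, and the amphora command is only available in newer releases):
# Show the full status tree (listener -> pool -> members), including each member's operating_status
openstack loadbalancer status show <lb-id>
# On newer releases, list the amphora VMs backing the load balancers; ssh'ing into one and
# checking its haproxy logs often reveals why a member is flagged ERROR
openstack loadbalancer amphora list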
Using helm --dry-run --debug, the rendered service YAML is:
# Source: nginx-ingress/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.25.1
    component: "controller"
    heritage: Tiller
    release: oslb2
  name: oslb2-nginx-ingress-controller
spec:
  clusterIP: ""
  externalTrafficPolicy: "Local"
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app: nginx-ingress
    component: "controller"
    release: oslb2
  type: "LoadBalancer"
Interestingly, comparing it to a previous (WordPress) LoadBalancer service that worked, I noticed that the nginx-ingress externalTrafficPolicy is set to Local, while WordPress specified Cluster. I changed the values.yaml for the nginx-ingress chart to set externalTrafficPolicy to Cluster, and now the load balancer is working.
We'd like to keep the policy at "Local" to preserve source IPs. Any thoughts on why it doesn't work?
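For reference, the override in values.yaml would look roughly like this; the controller.service.externalTrafficPolicy key path is the one exposed by the nginx-ingress chart of that era and may differ in newer chart versions:
controller:
  service:
    externalTrafficPolicy: Cluster  # was "Local"; see the source-IP discussion above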

It turns out I was barking up the wrong tree (apologies). There is no issue with the load balancer.
The problem stems from Kubernetes' inability to match the minion/worker hostname with its node name. The nodes take the short form of the hostname, e.g. k8s-cluster-fj7cs2gokrnz-minion-1, while kube-proxy does the look-up based on the fully qualified name: k8s-cluster-fj7cs2gokrnz-minion-1.novalocal.
I found this in the log for kube-proxy:
Sep 27 23:26:20 k8s-cluster-fj7cs2gokrnz-minion-1.novalocal runc[2205]: W0927 23:26:20.050146 1 server.go:586]
Failed to retrieve node info: nodes "k8s-cluster-fj7cs2gokrnz-minion-1.novalocal" not found
Sep 27 23:26:20 k8s-cluster-fj7cs2gokrnz-minion-1.novalocal runc[2205]: W0927 23:26:20.050241 1 proxier.go:463] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
This has the effect of making Kubernetes fail to find "Local" endpoints for LoadBalancer (or other) services. When you specify externalTrafficPolicy: "Local", K8s drops the packets since it i) is restricted to routing only to endpoints local to the node and ii) believes there are no local endpoints.
Other folks who have encountered this issue configure kube-proxy with --hostname-override to make the two match up, as sketched below.
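A minimal sketch of that workaround, adding the flag to the kube-proxy start-up arguments on each worker (unit names and file paths vary by deployment; on clusters where kube-proxy is driven by a KubeProxyConfiguration file the equivalent field is hostnameOverride):
# e.g. appended to the kube-proxy invocation on minion-1 so it matches the
# short Node name registered in the API server
kube-proxy \
  --hostname-override=k8s-cluster-fj7cs2gokrnz-minion-1 \
  ...   # remaining flags unchanged
After restarting kube-proxy, it should resolve its own Node object and stop falling back to 127.0.0.1 as the node IP.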


Ingress not forwarding the requests - Docker desktop for Windows and kubernetes

EDIT:
I deleted minikube, enabled Kubernetes in Docker Desktop for Windows, and installed ingress-nginx manually:
$helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace
Release "ingress-nginx" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ServiceAccount "ingress-nginx" in namespace "ingress-nginx" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "ingress-nginx"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "ingress-nginx"
It gave me an error, but I think that's only because I had already installed it before, since:
$kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.106.222.233 localhost 80:30199/TCP,443:31093/TCP 11m
ingress-nginx-controller-admission ClusterIP 10.106.52.106 <none> 443/TCP 11m
Then I applied all my YAML files again, but this time the ingress is not getting any address:
$kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
myapp-ingress <none> myapp.com 80 10m
I am using Docker Desktop (Windows) and installed the nginx-ingress controller via the minikube addons enable command:
$kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create--1-lp4md 0/1 Completed 0 67m
ingress-nginx-admission-patch--1-jdkn7 0/1 Completed 1 67m
ingress-nginx-controller-5f66978484-6mpfh 1/1 Running 0 67m
And applied all my YAML files:
$kubectl get svc --all-namespaces -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default event-service-svc ClusterIP 10.108.251.79 <none> 80/TCP 16m app=event-service-app
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16m <none>
default mssql-clusterip-srv ClusterIP 10.98.10.22 <none> 1433/TCP 16m app=mssql
default mssql-loadbalancer LoadBalancer 10.109.106.174 <pending> 1433:31430/TCP 16m app=mssql
default user-service-svc ClusterIP 10.111.128.73 <none> 80/TCP 16m app=user-service-app
ingress-nginx ingress-nginx-controller NodePort 10.101.112.245 <none> 80:31583/TCP,443:30735/TCP 68m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
ingress-nginx ingress-nginx-controller-admission ClusterIP 10.105.169.167 <none> 443/TCP 68m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 72m k8s-app=kube-dns
All pods and services seem to be running properly. I checked the pod logs; all migrations etc. have worked and the app is up and running. But when I try to send an HTTP request, I get a socket hang up error. I've checked all the logs for all pods and couldn't find anything useful.
$kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
myapp-ingress nginx myapp.com localhost 80 74s
This one is also a bit weird: I was expecting ADDRESS to be set to an IP, not to localhost. So adding a 127.0.0.1 entry for myapp.com in /etc/hosts also didn't seem right.
My question here is: what might I be doing wrong? Or how can I even trace where my requests are being forwarded to?
ingress-svc.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /api/Users
        pathType: Prefix
        backend:
          service:
            name: user-service-svc
            port:
              number: 80
      - path: /api/Events
        pathType: Prefix
        backend:
          service:
            name: event-service-svc
            port:
              number: 80
events-depl.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-service-app
  labels:
    app: event-service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: event-service-app
  template:
    metadata:
      labels:
        app: event-service-app
    spec:
      containers:
      - name: event-service-app
        image: ghcr.io/myapp/event-service:master
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: myapp
---
apiVersion: v1
kind: Service
metadata:
  name: event-service-svc
spec:
  selector:
    app: event-service-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Reproduction
I reproduced the case using minikube v1.24.0, Docker Desktop 4.2.0, engine 20.10.10.
First, the localhost shown in the ingress ADDRESS comes from the controller's publishing logic; it doesn't really matter what IP address is behind the domain in /etc/hosts. I added a different one for testing and it still showed localhost. Only MetalLB will provide an IP address from a configured network.
What happens
When the minikube driver is docker, minikube creates one big container (acting as a VM) in which the Kubernetes components run. This can be checked by running the docker ps command on the host system:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f087dc669944 gcr.io/k8s-minikube/kicbase:v0.0.28 "/usr/local/bin/entr…" 16 minutes ago Up 16 minutes 127.0.0.1:59762->22/tcp, 127.0.0.1:59758->2376/tcp, 127.0.0.1:59760->5000/tcp, 127.0.0.1:59761->8443/tcp, 127.0.0.1:59759->32443/tcp minikube
You can then run minikube ssh to get inside this container and run docker ps to see all the Kubernetes containers.
Moving forward: before introducing ingress, it's already clear that even NodePort doesn't work as intended. Let's check it.
There are two ways to get the minikube VM IP:
run minikube ip
run kubectl get nodes -o wide and find the node's IP
With NodePort, requests should go to minikube_IP:NodePort, but that doesn't work here. This happens because the docker containers inside the minikube VM are not exposed outside of the cluster, which is itself just another docker container.
On minikube, to access services within the cluster there is a special command, minikube service %service_name%, which creates a direct tunnel to the service inside the minikube VM (you can see that the output contains the service URL with the NodePort which is supposed to be working):
$ minikube service echo
|-----------|------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|------|-------------|---------------------------|
| default | echo | 8080 | http://192.168.49.2:32034 |
|-----------|------|-------------|---------------------------|
* Starting tunnel for service echo.
|-----------|------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|------|-------------|------------------------|
| default | echo | | http://127.0.0.1:61991 |
|-----------|------|-------------|------------------------|
* Opening service default/echo in default browser...
! Because you are using a Docker driver on windows, the terminal needs to be open to run it
And now it's available on host machine:
$ curl http://127.0.0.1:61991/
StatusCode : 200
StatusDescription : OK
Adding ingress
Moving forward and adding ingress.
$ minikube addons enable ingress
$ kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default echo NodePort 10.111.57.237 <none> 8080:32034/TCP 25m
ingress-nginx ingress-nginx-controller NodePort 10.104.52.175 <none> 80:31041/TCP,443:31275/TCP 2m12s
Trying to get any response from ingress by hitting minikube_IP:NodePort with no luck:
$ curl 192.168.49.2:31041
curl : Unable to connect to the remote server
At line:1 char:1
+ curl 192.168.49.2:31041
Trying to create a tunnel with minikube service command:
$ minikube service ingress-nginx-controller -n ingress-nginx
|---------------|--------------------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|---------------|--------------------------|-------------|---------------------------|
| ingress-nginx | ingress-nginx-controller | http/80 | http://192.168.49.2:31041 |
| | | https/443 | http://192.168.49.2:31275 |
|---------------|--------------------------|-------------|---------------------------|
* Starting tunnel for service ingress-nginx-controller.
|---------------|--------------------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|---------------|--------------------------|-------------|------------------------|
| ingress-nginx | ingress-nginx-controller | | http://127.0.0.1:62234 |
| | | | http://127.0.0.1:62235 |
|---------------|--------------------------|-------------|------------------------|
* Opening service ingress-nginx/ingress-nginx-controller in default browser...
* Opening service ingress-nginx/ingress-nginx-controller in default browser...
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
And getting 404 from ingress-nginx which means we can send requests to ingress:
$ curl http://127.0.0.1:62234
curl : 404 Not Found
nginx
At line:1 char:1
+ curl http://127.0.0.1:62234
Solutions
Above I explained what happens. Here are three solutions for getting it to work:
Use another minikube driver (e.g. virtualbox. I used hyperv since my laptop has windows 10 pro)
minikube ip will return "normal" IP address of virtual machine and all network functionality will work just fine. You will need to add this IP address into /etc/hosts for domain used in ingress rule
Note! This works even though localhost was shown in the ADDRESS column of the kubectl get ing ingress output.
Use built-in kubernetes feature in Docker desktop for Windows.
You will need to manually install ingress-nginx and change the ingress-nginx-controller service type from NodePort to LoadBalancer so that it will be available on localhost and will work. Please see my other answer about Docker Desktop for Windows.
(testing only) - use port-forward
It's almost exactly the same idea as the minikube service command, but with more control. You will open a tunnel from host port 80 to the ingress-nginx-controller service (and eventually the pod) on port 80 as well. /etc/hosts should contain a 127.0.0.1 test.domain entry.
$ kubectl port-forward service/ingress-nginx-controller -n ingress-nginx 80:80
Forwarding from 127.0.0.1:80 -> 80
Forwarding from [::1]:80 -> 80
And testing it works:
$ curl test.domain
StatusCode : 200
StatusDescription : OK
Update for Kubernetes in Docker Desktop on Windows and ingress:
On modern ingress-nginx versions, .spec.ingressClassName should be added to ingress rules (see the latest updates), so the ingress rule should look like:
apiVersion: networking.k8s.io/v1
kind: Ingress
...
spec:
  ingressClassName: nginx # can be checked by kubectl get ingressclass
  rules:
  - host: myapp.com
    http:
      ...

Unable to access application on pod on minikube

I have created a deployment using the simple YAML file below, applied with
kubectl apply -f deployment.yaml
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    tier: frontend
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  template:
    metadata:
      name: nginx-pod
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx-pod
        image: nginx
and then created a service for it using the YAML file below, applied with
kubectl apply -f service.yaml
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  ports:
    - port: 89
      targetPort: 89
      nodePort: 30009
  selector:
    app: myapp
Now when I run
minikube service myapp-service
it gives me
$ minikube service myapp-service
|-----------|---------------|-------------|-------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|---------------|-------------|-------------------------|
| default | myapp-service | 89 | http://172.17.0.2:30009 |
|-----------|---------------|-------------|-------------------------|
🏃 Starting tunnel for service myapp-service.
|-----------|---------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|---------------|-------------|------------------------|
| default | myapp-service | | http://127.0.0.1:52289 |
|-----------|---------------|-------------|------------------------|
🎉 Opening service default/myapp-service in default browser...
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
and when I try to access the given http://127.0.0.1:52289, I get a "This site can’t be reached" error.
Is there anything wrong in the YAML files? I am using
minikube version: v1.12.1
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Docker version
$ docker version
Client: Docker Engine - Community
Version: 19.03.8
API version: 1.40
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:21:11 2020
OS/Arch: darwin/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.8
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:29:16 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
This is a community wiki answer aimed at summing up the info from the comments with additional details and explanations.
The folks in the comments are right: there is a flaw in your ports configuration. There are a few things you need to know that will help you better understand this concept. We have several different port settings for Kubernetes services:
port: exposes the Kubernetes service on the specified port within the cluster. Other pods within the cluster can communicate with this service on the specified port.
targetPort: the port the service sends requests to, and on which your pod is listening. Your application in the container needs to be listening on this port as well.
nodePort: exposes the service externally to the cluster by means of the target node's IP address and the nodePort. If the nodePort field is not specified, a free port from the node-port range is assigned automatically.
From your example above, the myapp-service service will be exposed internally to cluster applications on port 89 and externally to the cluster on the node IP address on 30009. It will also forward requests to pods with the label app: myapp on port 89.
You can check the details of your service with:
kubectl describe service myapp-service
Adjust your targetPort: to 80 and it should be fine.
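For reference, a corrected Service might look like the sketch below, keeping port 89 as the in-cluster service port and pointing targetPort at the nginx container's default port 80:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  ports:
    - port: 89        # service port inside the cluster
      targetPort: 80  # nginx listens on 80 by default
      nodePort: 30009 # external port on the node
  selector:
    app: myapp
After re-applying, minikube service myapp-service should return a URL that responds with the nginx welcome page.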

Change Kubernetes nginx-ingress-controller ports

I installed Minikube v1.3.1 on my RedHat EC2 instance for some tests.
Since the ports that the nginx-ingress-controller uses by default are already in use, I am trying to change them in the deployment, but without result. Could somebody please advise how to do it?
How do I know that the ports are already in use?
When I listed the system pods using the command kubectl -n kube-system get deployment | grep nginx, I get:
nginx-ingress-controller 0/1 1 0 9d
meaning that my container is not up. When I describe it using the command kubectl -n kube-system describe pod nginx-ingress-controller-xxxxx, I get:
Type     Reason                  Age                      From               Message
----     ------                  ----                     ----               -------
Warning  FailedCreatePodSandBox  42m (x163507 over 2d1h)  kubelet, minikube  (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "nginx-ingress-controller-xxxx": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_nginx-ingress-controller-xxxx_kube-system_...: Error starting userland proxy: listen tcp 0.0.0.0:443: bind: address already in use
Then I check the processes using those ports and kill them. That frees the ports up and the ingress-controller pod gets deployed correctly.
What did I try in order to change the nginx-ingress-controller ports?
kubectl -n kube-system get deployment | grep nginx
> NAME READY UP-TO-DATE AVAILABLE AGE
> nginx-ingress-controller 0/1 1 0 9d
kubectl -n kube-system edit deployment nginx-ingress-controller
The relevant part of my deployment looks like this:
name: nginx-ingress-controller
ports:
- containerPort: 80
  hostPort: 80
  protocol: TCP
- containerPort: 443
  hostPort: 443
  protocol: TCP
- containerPort: 81
  hostPort: 81
  protocol: TCP
- containerPort: 444
  hostPort: 444
  protocol: TCP
- containerPort: 18080
  hostPort: 18080
  protocol: TCP
Then I remove the subsections with ports 443 and 80, but when I roll out the changes, they get added back again.
Now my services are not reachable anymore through ingress.
Please note that minikube ships with addon-manager, whose role is to keep an eye on specific addon template files (default location: /etc/kubernetes/addons/) and perform one of two specific actions based on the value of the managed resource's addonmanager.kubernetes.io/mode label:
addonmanager.kubernetes.io/mode=Reconcile
Will be periodically reconciled. Direct manipulation of these addons through the apiserver is discouraged, because addon-manager will bring them back to the original state; in particular, an addon is re-created if it is deleted and reconfigured back to the state given in its template file.
addonmanager.kubernetes.io/mode=EnsureExists
Will be checked for existence only. Users can edit these addons as they want.
So to keep your customized version of the default Ingress service listening ports, first change the Ingress deployment template configuration to EnsureExists on the minikube VM, as sketched below.
Basically, minikube bootstraps the Nginx Ingress Controller as a separate addon, so by design you have to enable it in order to propagate that Ingress Controller's resources into the minikube cluster.
Once you enable a specific minikube addon, addon-manager creates template files for each component by placing them into the /etc/kubernetes/addons/ folder on the host machine and then spins up each manifest file, creating the corresponding K8s resources; furthermore, addon-manager continuously inspects the actual state of all addon resources, synchronizing the K8s target resources (service, deployment, etc.) with the template data.
Therefore, you can consider modifying the Ingress addon template data in the ingress-*.yaml files under the /etc/kubernetes/addons/ directory, propagating the desired values into the target K8s objects; it may take some time until the K8s engine reflects the changes and re-spawns the related ReplicaSet-based resources.
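A hedged sketch of that workflow; the exact template file name under /etc/kubernetes/addons/ differs between minikube versions, so ingress-dp.yaml below is illustrative only:
minikube ssh
ls /etc/kubernetes/addons/ | grep -i ingress
# switch the addon to a user-managed mode so addon-manager stops reverting edits
sudo sed -i 's|addonmanager.kubernetes.io/mode: Reconcile|addonmanager.kubernetes.io/mode: EnsureExists|' /etc/kubernetes/addons/ingress-dp.yaml
# then change the hostPort 80/443 entries in the same file to free ports
sudo vi /etc/kubernetes/addons/ingress-dp.yaml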
Well, I think you have to modify the Ingress that refers to the service you're trying to expose on a custom port.
This can be done with a custom annotation. Here is an example for your port 444:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myservice
  namespace: mynamespace
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/listen-ports-ssl: "444"
spec:
  tls:
  - hosts:
    - host.org
    secretName: my-host-tls-cert
  rules:
  - host: host.org
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 444

Assign External IP to a Kubernetes Service

EDIT: The whole point of my setup is to achieve (if possible) the following:
I have multiple k8s nodes
When I contact an IP address (from my company's network), it should be routed to one of my container/pod/service/whatever.
I should be able to easily setup that IP (like in my service .yml definition)
I'm running a small Kubernetes cluster (built with kubeadm) in order to evaluate whether I can move my Docker (old) Swarm setup to k8s. The feature I absolutely need is the ability to assign IPs to containers, like I do with MacVlan.
In my current docker setup, I'm using MacVlan to assign IP addresses from my company's network to some containers so I can reach them directly (without a reverse proxy), as if they were any physical server. I'm trying to achieve something similar with k8s.
I found out that:
I have to use a Service
I can't use the LoadBalancer type, as it's only for compatible cloud providers (like GCE or AWS)
I should use externalIPs
Ingress resources are some kind of reverse proxy?
My YAML file is:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
      nodeSelector:
        kubernetes.io/hostname: k8s-slave-3
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  externalIPs:
  - A.B.C.D
I was hoping that my service would get the IP A.B.C.D (which is an address on my company's network). My deployment is working, as I can reach my nginx container from inside the k8s cluster using its ClusterIP.
What am I missing? Or at least, where can I find information on my network traffic in order to see whether packets are arriving?
EDIT :
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.96.0.1 <none> 443/TCP 6d
nginx-service 10.102.64.83 A.B.C.D 80/TCP 23h
Thanks.
First of all run this command:
kubectl get -n namespace services
The above command will return output like this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend NodePort 10.100.44.154 <none> 9400:3003/TCP 13h
frontend NodePort 10.107.53.39 <none> 3000:30017/TCP 13h
It is clear from the above output that external IPs are not assigned to the services yet. To assign an external IP to the backend service, run the following command:
kubectl patch svc backend -p '{"spec":{"externalIPs":["192.168.0.194"]}}'
and to assign an external IP to the frontend service, run this command:
kubectl patch svc frontend -p '{"spec":{"externalIPs":["192.168.0.194"]}}'
Now list the services in the namespace again to check the external IP assignment:
kubectl get -n namespace services
We get an output like this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend NodePort 10.100.44.154 192.168.0.194 9400:3003/TCP 13h
frontend NodePort 10.107.53.39 192.168.0.194 3000:30017/TCP 13h
Cheers! The Kubernetes external IPs are now assigned.
If this is just for testing, then try
kubectl port-forward service/nginx-service 80:80
Then you can
curl http://localhost:80
A solution that could work (and not only for testing, though it has its shortcomings) is to set your Pod to use the host network, with the hostNetwork spec field set to true.
It means that you won't need a service to expose your Pod, as it will always be accessible on your host via a single port (the containerPort you specified in the manifest). No need to keep a DNS mapping record in that case.
This also means that you can only run a single instance of this Pod on a given node (talking about shortcomings...). As such, it makes it a good candidate for a DaemonSet object.
If your Pod still needs to access/resolve internal Kubernetes hostnames, you need to set the dnsPolicy spec field to ClusterFirstWithHostNet. This setting will enable your pod to access the K8s DNS service.
Example:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
spec:
  selector:        # required for apps/v1; must match the template labels
    matchLabels:
      app: nginx-reverse-proxy
  template:
    metadata:
      labels:
        app: nginx-reverse-proxy
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      tolerations:  # allow a Pod instance to run on Master - optional
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - image: nginx
        name: nginx
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
EDIT: I was put on this track thanks to the ingress-nginx documentation.
You can just Patch an External IP
CMD: $ kubectl patch svc svc_name -p '{"spec":{"externalIPs":["your_external_ip"]}}'
Eg:- $ kubectl patch svc kubernetes -p '{"spec":{"externalIPs":["10.2.8.19"]}}'
You can try the kube-keepalived-vip configuration to route the traffic: https://github.com/kubernetes/contrib/tree/master/keepalived-vip
You can try adding "type: NodePort" in your YAML file for the service, and then you'll have a port to access it via the web browser or from the outside. In my case, it helped.
I don't know if that helps in your particular case but what I did (and I'm on a Bare Metal cluster) was to use the LoadBalancer and set the loadBalancerIP as well as the externalIPs to my server IP as you did it.
After that the correct external IP showed up for the load balancer.
Always use the namespace flag either before or after the service name, because namespace-based scoping applies to deployments and services, and this points the command at the service tagged to a specific namespace: kubectl patch svc service-name -n namespace -p '{"spec":{"externalIPs":["IP"]}}'
Just include additional option.
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service --external-ip=1.1.1.1

Kubernetes service external ip pending

I am trying to deploy nginx on Kubernetes; the Kubernetes version is v1.5.2.
I have deployed nginx with 3 replicas; the YAML file is below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 3
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80
and now I want to expose its port 80 on port 30062 of the node; for that I created the service below:
kind: Service
apiVersion: v1
metadata:
  name: nginx-ils-service
spec:
  ports:
  - name: http
    port: 80
    nodePort: 30062
  selector:
    app: nginx
  type: LoadBalancer
This service is working as it should, but the external IP is shown as pending, both on the Kubernetes dashboard and in the terminal.
It looks like you are using a custom Kubernetes Cluster (using minikube, kubeadm or the like). In this case, there is no LoadBalancer integrated (unlike AWS or Google Cloud). With this default setup, you can only use NodePort or an Ingress Controller.
With the Ingress Controller you can setup a domain name which maps to your pod; you don't need to give your Service the LoadBalancer type if you use an Ingress Controller.
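A minimal sketch of that approach, assuming an ingress controller is already running in the cluster and reusing the nginx-ils-service from the question (the host name is illustrative; very old clusters such as v1.5.2 would use the extensions/v1beta1 Ingress API instead):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-ils-service
            port:
              number: 80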
If you are using Minikube, there is a magic command!
$ minikube tunnel
Hopefully someone can save a few minutes with this.
Reference link
https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel
If you are not using GCE or EKS (you used kubeadm) you can add an externalIPs spec to your service YAML. You can use the IP associated with your node's primary interface such as eth0. You can then access the service externally, using the external IP of the node.
...
spec:
  type: LoadBalancer
  externalIPs:
  - 192.168.0.10
I created a single-node k8s cluster using kubeadm. When I tried port-forward and kubectl proxy, it showed the external IP as pending.
$ kubectl get svc -n argocd argocd-server
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
argocd-server LoadBalancer 10.107.37.153 <pending> 80:30047/TCP,443:31307/TCP 110s
In my case I've patched the service like this:
kubectl patch svc <svc-name> -n <namespace> -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.31.71.218"]}}'
After this, it started serving over the public IP
$ kubectl get svc argo-ui -n argo
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
argo-ui LoadBalancer 10.103.219.8 172.31.71.218 80:30981/TCP 7m50s
To access a service on minikube, you need to run the following command:
minikube service [-n NAMESPACE] [--url] NAME
More information here : Minikube GitHub
When using Minikube, you can get the IP and port through which you
can access the service by running:
minikube service [service name]
E.g.:
minikube service kubia-http
If it is your private k8s cluster, MetalLB would be a better fit. Below are the steps.
Step 1: Install MetalLB in your cluster
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
Step 2: Configure it by using a configmap
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.42.42.100-172.42.42.105 # Update this with your nodes' IP range
Step 3: Create your service to get an external IP (would be a private IP though).
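A minimal sketch of such a Service (the name, port, and selector are illustrative); once applied, MetalLB assigns it an address from the pool configured above:
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80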
For reference: before installing MetalLB the service's EXTERNAL-IP shows <pending>; after installation it receives an address from the configured pool.
If running on minikube, don't forget to mention namespace if you are not using default.
minikube service << service_name >> --url --namespace=<< namespace_name >>
Following @Javier's answer, I decided to go with "patching up the external IP" for my load balancer.
$ kubectl patch service my-loadbalancer-service-name \
-n lb-service-namespace \
-p '{"spec": {"type": "LoadBalancer", "externalIPs":["192.168.39.25"]}}'
This will replace the 'pending' status with a new patched-up IP address that you can use for your cluster.
For more on this, please see karthik's post on LoadBalancer support with Minikube for Kubernetes.
It's not the cleanest way to do it; I needed a temporary solution. Hope this helps somebody.
If you are using minikube, then run the commands below from a terminal:
$ minikube ip
172.17.0.2
$ curl http://172.17.0.2:31245
or simply
$ curl http://$(minikube ip):31245
In case someone is using MicroK8s: You need a network load balancer.
MicroK8s comes with metallb, you can enable it like this:
microk8s enable metallb
<pending> should turn into an actual IP address then.
In Kubernetes, a Service is the general way to expose an application running on a set of Pods as a network service. There are four types of Service in Kubernetes:
ClusterIP
The Service is only reachable from within the cluster.
NodePort
You'll be able to reach the Service from outside the cluster using NodeIP:NodePort. The default node port range is 30000-32767, and this range can be changed by setting --service-node-port-range at cluster creation time.
LoadBalancer
Exposes the Service externally using a cloud provider's load balancer.
ExternalName
Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
Only the LoadBalancer type puts a value in the EXTERNAL-IP column, and it only works if the Kubernetes cluster is able to assign an IP address to that particular service. You can use the MetalLB load balancer to provision IPs for your LoadBalancer services.
I hope this clears up your doubt.
You can patch in the IP of the node where the pods are hosted (the node's private IP); this is the easy workaround.
Taking reference from the above posts, the following worked for me:
kubectl patch service my-loadbalancer-service-name \
-n lb-service-namespace \
-p '{"spec": {"type": "LoadBalancer", "externalIPs":["xxx.xxx.xxx.xxx Private IP of Physical Server - Node - where deployment is done "]}}'
Adding a solution for those who encountered this error while running on amazon-eks.
First of all run:
kubectl describe svc <service-name>
And then review the events field in the example output below:
Name: some-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"some-service","namespace":"default"},"spec":{"ports":[{"port":80,...
Selector: app=some
Type: LoadBalancer
IP: 10.100.91.19
Port: <unset> 80/TCP
TargetPort: 5000/TCP
NodePort: <unset> 31022/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 68s service-controller Ensuring load balancer
Warning SyncLoadBalancerFailed 67s service-controller Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB
Review the error message:
Failed to ensure load balancer: could not find any suitable subnets for creating the ELB
In my case, the reasons that no suitable subnets were found for creating the ELB were:
1: The EKS cluster was deployed on the wrong subnet group: internal subnets instead of public-facing ones.
(By default, services of type LoadBalancer create public-facing load balancers if no service.beta.kubernetes.io/aws-load-balancer-internal: "true" annotation is provided.)
2: The subnets weren't tagged according to the requirements mentioned here.
Tagging VPC with:
Key: kubernetes.io/cluster/yourEKSClusterName
Value: shared
Tagging public subnets with:
Key: kubernetes.io/role/elb
Value: 1
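A hedged sketch of applying those tags with the AWS CLI; the subnet ID and cluster name are placeholders:
aws ec2 create-tags --resources subnet-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/yourEKSClusterName,Value=shared \
         Key=kubernetes.io/role/elb,Value=1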
If you are using bare metal, you need the NodePort type:
https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
LoadBalancer works by default in cloud providers like DigitalOcean, AWS, etc.
k edit service ingress-nginx-controller
spec:
  type: NodePort
  externalIPs:
  - xxx.xxx.xxx.xx
using the public IP
Use NodePort:
$ kubectl run user-login --replicas=2 --labels="run=user-login" --image=kingslayerr/teamproject:version2 --port=5000
$ kubectl expose deployment user-login --type=NodePort --name=user-login-service
$ kubectl describe services user-login-service
(Note down the port)
$ kubectl cluster-info
(IP-> Get The IP where master is running)
Your service is accessible at (IP):(port)
The LoadBalancer ServiceType will only work if the underlying infrastructure supports the automatic creation of load balancers and has the respective support in Kubernetes, as is the case with Google Cloud Platform and AWS. If no such feature is configured, the LoadBalancer IP address field is not populated, remains in pending status, and the Service works the same way as a NodePort-type Service.
minikube tunnel
The below solution works in my case.
First of all, try this command:
minikube tunnel
If it's not working for you, follow the steps below.
Restart the minikube container:
minikube stop
then
minikube start
After that, re-open the dashboard:
minikube dashboard
Once it has finished, execute:
minikube tunnel
I had the same problem.
Windows 10 Desktop + Docker Desktop 4.7.1 (77678) + Minikube v1.25.2
Following the official docs on my side, I resolved it with:
PS C:\WINDOWS\system32> kubectl expose deployment sito-php --type=LoadBalancer --port=8080 --name=servizio-php
service/servizio-php exposed
PS C:\WINDOWS\system32> minikube tunnel
* Tunnel successfully started
* NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...
* Starting tunnel for service servizio-php.
PS E:\docker\apache-php> kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 33h
servizio-php LoadBalancer 10.98.218.86 127.0.0.1 8080:30270/TCP 4m39s
Then open the browser on http://127.0.0.1:8080/
Same issue:
os>kubectl get svc right-sabertooth-wordpress
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
right-sabertooth-wordpress LoadBalancer 10.97.130.7 "pending" 80:30454/TCP,443:30427/TCP
os>minikube service list
|-------------|----------------------------|--------------------------------|
| NAMESPACE | NAME | URL |
|-------------|----------------------------|--------------------------------|
| default | kubernetes | No node port |
| default | right-sabertooth-mariadb | No node port |
| default | right-sabertooth-wordpress | http://192.168.99.100:30454 |
| | | http://192.168.99.100:30427 |
| kube-system | kube-dns | No node port |
| kube-system | tiller-deploy | No node port |
|-------------|----------------------------|--------------------------------|
It is, however, accessible via http://192.168.99.100:30454.
There are three ways of exposing your service:
NodePort
ClusterIP
LoadBalancer
When we use a LoadBalancer, we basically ask our cloud provider to give us a DNS name which can be accessed online.
Note: not a domain name you own, but a DNS entry.
So the LoadBalancer type does not work in our local minikube environment.
For those who are using minikube and trying to access a service of kind NodePort or LoadBalancer:
We don't get an external IP to access the service on the local system, so a good option is to use the minikube IP.
Use the command below to get the service URL once your service is exposed:
minikube service service-name --url
Now use that URL to serve your purpose.
Check the kube-controller-manager logs. I was able to solve this issue by setting the clusterID tags on the EC2 instance I deployed the cluster on.
If you are not on a supported cloud (AWS, Azure, GCP, etc.), you can't use LoadBalancer without MetalLB: https://metallb.universe.tf/
But it's still in beta.
Deleting the existing service and creating a new identical one solved my problem. The issue was that the load-balancing IP I had defined was already in use, so the external endpoint stayed pending. Even after changing to a new load-balancing IP, it still didn't work.
Finally, deleting the existing service and creating a new one solved my problem.
For your use case, the best option is to use a NodePort service instead of the LoadBalancer type, because a load balancer is not available.
I was getting this error on Docker Desktop. I just quit it and turned it on again. It took a few seconds, then it worked fine.
Deleting all older services and creating new ones resolved my issue. The IP was bound to an older service. Just try $ kubectl get svc and then delete each svc one by one with $ kubectl delete svc 'svc name'.
Maybe the subnet in which you are deploying your service does not have enough IPs.
If you are trying to do this in your on-prem cloud, you need an L4LB service to create the LB instances.
Otherwise you end up with the endless "pending" message you described. It is visible in a video here: https://www.youtube.com/watch?v=p6FYtNpsT1M
You can use open source tools to solve this problem; the video provides some guidance on how the automation process should work.
