I have a corporate network (10.22..) which hosts a Kubernetes cluster (10.225.0.1). How can I access a VM that is in the same network but outside the cluster, from within a pod in the cluster?
For example, I have a VM at 10.22.0.1 listening on port 30000, which I need to access from a pod in the Kubernetes cluster. I tried to create a Service like this:
apiVersion: v1
kind: Service
metadata:
  name: vm-ip
spec:
  selector:
    app: vm-ip
  ports:
    - name: vm
      protocol: TCP
      port: 30000
      targetPort: 30000
  externalIPs:
    - 10.22.0.1
But when I run "curl http://vm-ip:30000" from a pod (via kubectl exec -it), it returns a "connection refused" error, while curl against "google.com" works. What are the ways of accessing external IPs?
You can create an Endpoints object for that.
Let's go through an example:
In this example, I have an HTTP server on my network with IP 10.128.15.209 and I want it to be accessible from the pods inside my Kubernetes cluster.
The first thing is to create an Endpoints object. This lets me create a Service pointing to this endpoint, which will redirect traffic to my external HTTP server.
My endpoint manifest looks like this:
apiVersion: v1
kind: Endpoints
metadata:
  name: http-server
subsets:
  - addresses:
      - ip: 10.128.15.209
    ports:
      - port: 80
$ kubectl apply -f http-server-endpoint.yaml
endpoints/http-server configured
Let's create our service:
apiVersion: v1
kind: Service
metadata:
  name: http-server   # must match the name of the Endpoints object
spec:
  # note: no selector; the Endpoints object with the same name supplies the backend
  ports:
    - port: 80
      targetPort: 80
$ kubectl apply -f http-server-service.yaml
service/http-server created
Checking that our service exists and saving its clusterIP for later usage:
user@minikube-server:~$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
http-server ClusterIP 10.96.228.220 <none> 80/TCP 30m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 10d
Now it's time to verify if we can access our service from a pod:
$ kubectl run ubuntu -it --rm=true --restart=Never --image=ubuntu bash
This command creates and opens a bash session inside an Ubuntu pod.
In my case I'll install curl to be able to check that I can reach my HTTP server; depending on your service you may need a different client (e.g. mysql):
root@ubuntu:/# apt update; apt install -y curl
Checking connectivity to my service using its clusterIP:
root@ubuntu:/# curl 10.96.228.220:80
Hello World!
And finally using the service name (DNS):
root@ubuntu:/# curl http-server
Hello World!
So, in your specific case, you would create something like this:
apiVersion: v1
kind: Endpoints
metadata:
  name: vm-server
subsets:
  - addresses:
      - ip: 10.22.0.1
    ports:
      - port: 30000
---
apiVersion: v1
kind: Service
metadata:
  name: vm-server
spec:
  ports:
    - port: 30000
      targetPort: 30000
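Once both objects are applied, a pod in the cluster should be able to reach the VM by service name (assuming the service lives in the same namespace as the pod; otherwise use the vm-server.<namespace> form):
curl http://vm-server:30000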
EDIT:
I deleted minikube, enabled Kubernetes in Docker Desktop for Windows, and installed ingress-nginx manually:
$ helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace
Release "ingress-nginx" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ServiceAccount "ingress-nginx" in namespace "ingress-nginx" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "ingress-nginx"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "ingress-nginx"
It gave me an error, but I think that's because I had already installed it before, since:
$ kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.106.222.233 localhost 80:30199/TCP,443:31093/TCP 11m
ingress-nginx-controller-admission ClusterIP 10.106.52.106 <none> 443/TCP 11m
Then I applied all my YAML files again, but this time the ingress is not getting an address:
$ kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
myapp-ingress <none> myapp.com 80 10m
I am using Docker Desktop (Windows) and installed the nginx-ingress controller via the minikube addons enable command:
$ kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create--1-lp4md 0/1 Completed 0 67m
ingress-nginx-admission-patch--1-jdkn7 0/1 Completed 1 67m
ingress-nginx-controller-5f66978484-6mpfh 1/1 Running 0 67m
And applied all my YAML files:
$ kubectl get svc --all-namespaces -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default event-service-svc ClusterIP 10.108.251.79 <none> 80/TCP 16m app=event-service-app
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16m <none>
default mssql-clusterip-srv ClusterIP 10.98.10.22 <none> 1433/TCP 16m app=mssql
default mssql-loadbalancer LoadBalancer 10.109.106.174 <pending> 1433:31430/TCP 16m app=mssql
default user-service-svc ClusterIP 10.111.128.73 <none> 80/TCP 16m app=user-service-app
ingress-nginx ingress-nginx-controller NodePort 10.101.112.245 <none> 80:31583/TCP,443:30735/TCP 68m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
ingress-nginx ingress-nginx-controller-admission ClusterIP 10.105.169.167 <none> 443/TCP 68m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 72m k8s-app=kube-dns
All pods and services seem to be running properly. I checked the pod logs; all migrations etc. have worked and the app is up and running. But when I try to send an HTTP request, I get a socket hang up error. I've checked the logs of all pods and couldn't find anything useful.
$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
myapp-ingress nginx myapp.com localhost 80 74s
This one is also a bit weird; I was expecting ADDRESS to be set to an IP, not to localhost. So adding a 127.0.0.1 entry for myapp.com in /etc/hosts also didn't seem right.
My question is: what might I be doing wrong? And how can I trace where my requests are being forwarded to?
ingress-svc.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
    - host: myapp.com
      http:
        paths:
          - path: /api/Users
            pathType: Prefix
            backend:
              service:
                name: user-service-svc
                port:
                  number: 80
          - path: /api/Events
            pathType: Prefix
            backend:
              service:
                name: event-service-svc
                port:
                  number: 80
events-depl.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-service-app
  labels:
    app: event-service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: event-service-app
  template:
    metadata:
      labels:
        app: event-service-app
    spec:
      containers:
        - name: event-service-app
          image: ghcr.io/myapp/event-service:master
          imagePullPolicy: Always
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: myapp
---
apiVersion: v1
kind: Service
metadata:
  name: event-service-svc
spec:
  selector:
    app: event-service-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Reproduction
I reproduced the case using minikube v1.24.0 and Docker Desktop 4.2.0 (engine 20.10.10).
First, localhost in the ingress ADDRESS column appears by design with this setup; it doesn't really matter what IP address is behind the domain in /etc/hosts. I added a different one for testing and it still showed localhost. Only MetalLB will provide an IP address from a configured network.
What happens
When the minikube driver is docker, minikube creates one big container (the "VM") where the Kubernetes components run. This can be checked by running the docker ps command on the host system:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f087dc669944 gcr.io/k8s-minikube/kicbase:v0.0.28 "/usr/local/bin/entr…" 16 minutes ago Up 16 minutes 127.0.0.1:59762->22/tcp, 127.0.0.1:59758->2376/tcp, 127.0.0.1:59760->5000/tcp, 127.0.0.1:59761->8443/tcp, 127.0.0.1:59759->32443/tcp minikube
You can then minikube ssh into this container and run docker ps again to see all the Kubernetes containers.
Before introducing ingress, it's already clear that even NodePort doesn't work as intended. Let's check it.
There are two ways to get the minikube VM IP:
run minikube ip
run kubectl get nodes -o wide and find the node's IP
With NodePort, requests should then go to minikube_IP:NodePort, yet it doesn't work. This happens because the Docker containers inside the minikube VM are not exposed outside of the cluster, which is itself another Docker container.
On minikube, to access services within the cluster there is a special command, minikube service %service_name%, which creates a direct tunnel to the service inside the minikube VM (you can see that the output contains the service URL with the NodePort, which is supposed to be working):
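A quick way to confirm this is to list the ports the minikube container actually publishes to the host (docker port is a standard Docker command; only the handful of ports shown in docker ps above are published, and the NodePort range is not among them):
$ docker port minikube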
$ minikube service echo
|-----------|------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|------|-------------|---------------------------|
| default | echo | 8080 | http://192.168.49.2:32034 |
|-----------|------|-------------|---------------------------|
* Starting tunnel for service echo.
|-----------|------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|------|-------------|------------------------|
| default | echo | | http://127.0.0.1:61991 |
|-----------|------|-------------|------------------------|
* Opening service default/echo in default browser...
! Because you are using a Docker driver on windows, the terminal needs to be open to run it
And now it's available on host machine:
$ curl http://127.0.0.1:61991/
StatusCode : 200
StatusDescription : OK
Adding ingress
Moving forward and adding ingress.
$ minikube addons enable ingress
$ kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default echo NodePort 10.111.57.237 <none> 8080:32034/TCP 25m
ingress-nginx ingress-nginx-controller NodePort 10.104.52.175 <none> 80:31041/TCP,443:31275/TCP 2m12s
Trying to get any response from ingress by hitting minikube_IP:NodePort with no luck:
$ curl 192.168.49.2:31041
curl : Unable to connect to the remote server
At line:1 char:1
+ curl 192.168.49.2:31041
Trying to create a tunnel with minikube service command:
$ minikube service ingress-nginx-controller -n ingress-nginx
|---------------|--------------------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|---------------|--------------------------|-------------|---------------------------|
| ingress-nginx | ingress-nginx-controller | http/80 | http://192.168.49.2:31041 |
| | | https/443 | http://192.168.49.2:31275 |
|---------------|--------------------------|-------------|---------------------------|
* Starting tunnel for service ingress-nginx-controller.
|---------------|--------------------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|---------------|--------------------------|-------------|------------------------|
| ingress-nginx | ingress-nginx-controller | | http://127.0.0.1:62234 |
| | | | http://127.0.0.1:62235 |
|---------------|--------------------------|-------------|------------------------|
* Opening service ingress-nginx/ingress-nginx-controller in default browser...
* Opening service ingress-nginx/ingress-nginx-controller in default browser...
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
And we get a 404 from ingress-nginx, which means we can now send requests to the ingress:
$ curl http://127.0.0.1:62234
curl : 404 Not Found
nginx
At line:1 char:1
+ curl http://127.0.0.1:62234
Solutions
Above I explained what happens. Here are three solutions to get it working:
Use another minikube driver (e.g. virtualbox; I used hyperv since my laptop has Windows 10 Pro).
minikube ip will then return a "normal" virtual machine IP address and all network functionality will work just fine. You will need to add this IP address to /etc/hosts for the domain used in the ingress rule, as in the example below.
Note: this works even though localhost was shown in the ADDRESS column of the kubectl get ing output.
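For example, assuming minikube ip returned 192.168.99.100 (an illustrative address), the /etc/hosts entry for the domain used in this thread would be:
192.168.99.100  myapp.com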
Use the built-in Kubernetes feature in Docker Desktop for Windows.
You will need to manually install ingress-nginx and change the ingress-nginx-controller service type from NodePort to LoadBalancer so that it is available on localhost and works; one way to do that is shown below. See also my other answer about Docker Desktop for Windows.
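A sketch of changing the service type with kubectl patch (editing the service manifest works just as well):
kubectl patch svc ingress-nginx-controller -n ingress-nginx -p '{"spec": {"type": "LoadBalancer"}}'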
(testing only) - use port-forward
It's almost exactly the same idea as the minikube service command, but with more control. You will open a tunnel from the host's port 80 to the ingress-nginx-controller service (and eventually the pod), also on port 80. /etc/hosts should contain a 127.0.0.1 test.domain entry.
$ kubectl port-forward service/ingress-nginx-controller -n ingress-nginx 80:80
Forwarding from 127.0.0.1:80 -> 80
Forwarding from [::1]:80 -> 80
And testing it works:
$ curl test.domain
StatusCode : 200
StatusDescription : OK
Update for Kubernetes in Docker Desktop on Windows and ingress:
On modern ingress-nginx versions, .spec.ingressClassName should be added to ingress rules (see the latest releases), so the ingress rule should look like:
apiVersion: networking.k8s.io/v1
kind: Ingress
...
spec:
  ingressClassName: nginx  # can be checked by kubectl get ingressclass
  rules:
    - host: myapp.com
      http:
        ...
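Illustrative output of that check (the class name varies per installation; with ingress-nginx it is typically nginx):
$ kubectl get ingressclass
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       5m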
I'm doing some tutorials using k3d (k3s in Docker) and my YAML looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
With the resulting node port being 31747:
:~$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 18m
nginx NodePort 10.43.254.138 <none> 80:31747/TCP 17m
:~$ kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 172.18.0.2:6443 22m
nginx 10.42.0.8:80 21m
However wget does not work:
:~$ wget localhost:31747
Connecting to localhost:31747 ([::1]:31747)
wget: can't connect to remote host: Connection refused
:~$
What have I missed? I've ensured that my labels all say app: nginx and that my containerPort, port and targetPort are all 80.
The question is whether the NodePort range is mapped from the host to the Docker container acting as the node. The command docker ps will show you; for more details you can run docker inspect $container_id and look at the Ports attribute under NetworkSettings. I don't have k3d around, but here is an example from kind:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1d2225b83a73 kindest/node:v1.17.0 "/usr/local/bin/entr…" 18 hours ago Up 18 hours 127.0.0.1:32769->6443/tcp kind-control-plane
$ docker inspect kind-control-plane
[
  {
    # [...]
    "NetworkSettings": {
      # [...]
      "Ports": {
        "6443/tcp": [
          {
            "HostIp": "127.0.0.1",
            "HostPort": "32769"
          }
        ]
      },
      # [...]
    }
  }
]
If it is not, working with kubectl port-forward as suggested in the comment is probably the easiest approach. Alternatively, start looking into Ingress. Ingress is the preferred method to expose workloads outside of a cluster; in the case of kind, they have support for Ingress, and it seems k3d also has a way to map the ingress port to the host.
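For completeness, a port-forward sketch against the nginx service from the question (8080 is an arbitrary local port):
kubectl port-forward service/nginx 8080:80
# in another terminal:
curl localhost:8080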
Turns out I didn't expose the ports when creating the cluster:
https://k3d.io/usage/guides/exposing_services/
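From that guide, a sketch of creating a k3d cluster with a host port mapped to the built-in load balancer (the -p flag syntax varies between k3d versions):
k3d cluster create mycluster -p "8080:80@loadbalancer"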
Maybe your pod is running on another worker node, not localhost. You should use the correct node IP.
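You can check which node the pod actually landed on and that node's IP:
kubectl get pods -o wide    # the NODE column shows where each pod is scheduled
kubectl get nodes -o wide   # the INTERNAL-IP column shows each node's address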
I want to deploy a simple nginx app on my own kubernetes cluster.
I used a basic nginx deployment on the machine with IP 192.168.188.10, which is part of a cluster of three Raspberry Pis.
NAME STATUS ROLES AGE VERSION
master-pi4 Ready master 2d20h v1.18.2
node1-pi4 Ready <none> 2d19h v1.18.2
node2-pi3 Ready <none> 2d19h v1.18.2
$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
$ kubectl create service nodeport nginx --tcp=80:80
service/nginx created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-8fb6d868-6957j 1/1 Running 0 10m
my-nginx-8fb6d868-8c59b 1/1 Running 0 10m
nginx-f89759699-n6f79 1/1 Running 0 4m20s
$ kubectl describe service nginx
Name: nginx
Namespace: default
Labels: app=nginx
Annotations: <none>
Selector: app=nginx
Type: NodePort
IP: 10.98.41.205
Port: 80-80 80/TCP
TargetPort: 80/TCP
NodePort: 80-80 31400/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
But I always get a timeout:
$ curl http://192.168.188.10:31400/
curl: (7) Failed to connect to 192.168.188.10 port 31400: Connection timed out
Why is the nginx web server not reachable? I tried to reach it from the same machine I deployed it on. How can I make it accessible from another machine on the network on port 31400?
As mentioned by @suren, you are creating a stand-alone service without any link to your deployment.
You can solve this by using the command from suren's answer, or by creating a new deployment using the following YAML spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - name: http
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Afterwards, run kubectl get svc to get the NodePort for accessing your service:
nginx-svc NodePort 10.100.136.135 <none> 80:31816/TCP 34s
To access it, use http://<YOUR_NODE_IP>:31816
So, is 192.168.188.10 your host IP / your VM IP?
You should first check whether another service is already using that port, and, if you are on a cloud platform, whether you have added the port to your security group.
Just to make sure, you can create a pod and access the service using its FQDN, like my-svc.my-namespace.svc.cluster-domain.example.
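A throwaway pod test along those lines (the FQDN assumes the nginx-svc service from the answer above, in the default namespace with the default cluster domain):
kubectl run tmp --rm -it --image=busybox --restart=Never -- wget -qO- http://nginx-svc.default.svc.cluster.local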
I'm trying to expose the Kubernetes dashboard publicly via an ingress on a single-master bare-metal cluster. The issue is that the LoadBalancer (nginx ingress controller) service I'm using is not opening the 80/443 ports which I would expect it to open/use. Instead it takes some random ports from the 30000-32767 range. I know I can set this range with --service-node-port-range, but I'm quite certain I didn't have to do this a year ago on another server. Am I missing something here?
Currently this is my stack/setup (clean install of Ubuntu 16.04):
Nginx Ingress Controller (installed via helm)
MetalLB
Kubernetes Dashboard
Kubernetes Dashboard Ingress to deploy it publicly on <domain>
Cert-Manager (installed via helm)
k8s-dashboard-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    # add an annotation indicating the issuer to use
    cert-manager.io/cluster-issuer: letsencrypt-staging
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: kubernetes-dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  rules:
    - host: <domain>
      http:
        paths:
          - backend:
              serviceName: kubernetes-dashboard
              servicePort: 443
            path: /
  tls:
    - hosts:
        - <domain>
      secretName: kubernetes-dashboard-staging-cert
This is what my kubectl get svc -A looks like:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cert-manager cert-manager ClusterIP 10.101.142.87 <none> 9402/TCP 23h
cert-manager cert-manager-webhook ClusterIP 10.104.104.232 <none> 443/TCP 23h
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d6h
ingress-nginx nginx-ingress-controller LoadBalancer 10.100.64.210 10.65.106.240 80:31122/TCP,443:32697/TCP 16m
ingress-nginx nginx-ingress-default-backend ClusterIP 10.111.73.136 <none> 80/TCP 16m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 6d6h
kubernetes-dashboard cm-acme-http-solver-kw8zn NodePort 10.107.15.18 <none> 8089:30074/TCP 140m
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.96.228.215 <none> 8000/TCP 5d18h
kubernetes-dashboard kubernetes-dashboard ClusterIP 10.99.250.49 <none> 443/TCP 4d6h
Here are some more examples of what's happening:
1. curl -D- http://<public_ip>:31122 -H 'Host: <domain>'
   returns 308, as the protocol is http not https. This is expected.
2. curl -D- http://<public_ip> -H 'Host: <domain>'
   curl: (7) Failed to connect to <public_ip> port 80: Connection refused
   Port 80 is closed.
3. curl -D- --insecure https://10.65.106.240 -H "Host: <domain>"
   Reaching the dashboard through an internal IP obviously works and I get the correct k8s-dashboard HTML.
   --insecure is needed because Let's Encrypt is not working yet, as the ACME challenge on port 80 is unreachable.
So to recap, how do I get case 2 working, i.e. reaching the service through 80/443?
EDIT: Nginx Ingress Controller .yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-02-12T20:20:45Z"
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.30.1
    component: controller
    heritage: Helm
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: ingress-nginx
  resourceVersion: "1785264"
  selfLink: /api/v1/namespaces/ingress-nginx/services/nginx-ingress-controller
  uid: b3ce0ff2-ad3e-46f7-bb02-4dc45c1e3a62
spec:
  clusterIP: 10.100.64.210
  externalTrafficPolicy: Cluster
  ports:
    - name: http
      nodePort: 31122
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      nodePort: 32697
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 10.65.106.240
EDIT 2: metallb configmap yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.65.106.240-10.65.106.250
So, to solve the 2nd case, as I suggested, you can use the hostNetwork: true parameter to map a container port to the host it is running on. Note that this is not a recommended practice, and you should avoid doing this unless you have a reason.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  hostNetwork: true
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
          hostPort: 80  # this parameter is optional, but recommended when using host network
          name: nginx
When I deploy this yaml, I can check where the pod is running and curl that host's port 80.
root@v1-16-master:~# kubectl get po -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 105s 10.132.0.50 v1-16-worker-2 <none> <none>
Note: now I know the pod is running on worker node 2. I just need its IP address.
root@v1-16-master:~# kubectl get no -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
v1-16-master Ready master 52d v1.16.4 10.132.0.48 xxxx Ubuntu 16.04.6 LTS 4.15.0-1052-gcp docker://19.3.5
v1-16-worker-1 Ready <none> 52d v1.16.4 10.132.0.49 xxxx Ubuntu 16.04.6 LTS 4.15.0-1052-gcp docker://19.3.5
v1-16-worker-2 Ready <none> 52d v1.16.4 10.132.0.50 xxxx Ubuntu 16.04.6 LTS 4.15.0-1052-gcp docker://19.3.5
v1-16-worker-3 Ready <none> 20d v1.16.4 10.132.0.51 xxxx Ubuntu 16.04.6 LTS 4.15.0-1052-gcp docker://19.3.5
root@v1-16-master:~# curl 10.132.0.50 2>/dev/null | grep title
<title>Welcome to nginx!</title>
root@v1-16-master:~# kubectl delete po nginx
pod "nginx" deleted
root@v1-16-master:~# curl 10.132.0.50
curl: (7) Failed to connect to 10.132.0.50 port 80: Connection refused
And of course it also works if I go to the public IP on my browser.
Update:
I didn't see the edit part of the question when I was writing this answer. It doesn't make sense given the additional info provided, so please disregard.
Original:
Apparently the cluster you are using now has its ingress controller set up over a NodePort-type service instead of a LoadBalancer. In order to get the desired behavior you need to change the configuration of the ingress controller; refer to the nginx ingress controller documentation for the MetalLB case on how to do this.
I am setting up a Kubernetes cluster to run Hyperledger Fabric apps. My cluster is on a private cloud, hence I don't have a load balancer. How do I set an IP address for my nginx-ingress-controller (pending) to expose my services? I think it is interfering with my creation of pods, since when I run kubectl get pods I see very many evicted pods. I am using cert-manager, which I think also needs IPs.
CA_POD=$(kubectl get pods -n cas -l "app=hlf-ca,release=ca" -o jsonpath="{.items[0].metadata.name}")
This does not create any pods.
nginx-ingress-controller-5bb5cd56fb-lckmm 1/1 Running
nginx-ingress-default-backend-dc47d79c-8kqbp 1/1 Running
The rest take the form
nginx-ingress-controller-5bb5cd56fb-d48sj 0/1 Evicted
ca-hlf-ca-5c5854bd66-nkcst 0/1 Pending 0 0s
ca-postgresql-0 0/1 Pending 0 0s
I would like to create pods from which I can run exec commands like
kubectl exec -n cas $CA_POD -- cat /var/hyperledger/fabric-ca/msp/signcertscert.pem
You are not exposing the nginx-controller IP address, but nginx's service via NodePort. For example:
apiVersion: apps/v1  # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-controller
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30080
      name: http
  selector:
    app: nginx
In this case you'd be able to reach your service like this:
curl -v <NODE_EXTERNAL_IP>:30080
As for why your pods are in Pending state, describe the misbehaving pods:
kubectl describe pod nginx-ingress-controller-5bb5cd56fb-d48sj
The best approach is to use Helm:
helm install stable/nginx-ingress
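Note that the stable charts repository has since been deprecated; with current charts the equivalent install is the helm command already shown earlier in this thread:
helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace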