kubectl port-forward connection refused [ socat ] - portforwarding

I am running PySpark against a port exposed inside my Kubernetes cluster and am trying to port-forward it to my local machine. I get the following error while executing my Python file:
Forwarding from 127.0.0.1:7077 -> 7077
Forwarding from [::1]:7077 -> 7077
Handling connection for 7077
E0401 01:08:11.964798 20399 portforward.go:400] an error occurred forwarding 7077 -> 7077: error forwarding port 7077 to pod 68ced395bd081247d1ee6b431776ac2bd3fbfda4d516da156959b6271c2ad90c, uid : exit status 1: 2019/03/31 19:38:11 socat[1748104] E connect(5, AF=2 127.0.0.1:7077, 16): Connection refused
These are a few lines of my Python file; the error occurs on the line where conf is defined:
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext
conf = SparkConf().setMaster("spark://localhost:7077").setAppName("Stand Alone Python Script")
I already tried installing socat on the Kubernetes nodes. I am using Spark version 2.4.0 locally. I even tried exposing port 7077 in the YAML file, but that did not work either.
This is the YAML file used for deployment.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  creationTimestamp: 2018-10-07T15:23:35Z
  generation: 16
  labels:
    chart: spark-0.2.1
    component: m3-zeppelin
    heritage: Tiller
    release: m3
  name: m3-zeppelin
  namespace: default
  resourceVersion: "55461362"
  selfLink: /apis/apps/v1beta1/namespaces/default/statefulsets/m3-zeppelin
  uid: f56e86fa-ca44-11e8-af6c-42010a8a00f2
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      component: m3-zeppelin
  serviceName: m3-zeppelin
  template:
    metadata:
      creationTimestamp: null
      labels:
        chart: spark-0.2.1
        component: m3-zeppelin
        heritage: Tiller
        release: m3
    spec:
      containers:
      - args:
        - bash
        - -c
        - wget -qO- https://archive.apache.org/dist/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz
          | tar xz; mv spark-2.2.2-bin-hadoop2.7 spark; curl -sSLO https://storage.googleapis.com/hadoop-lib/gcs/gcs-connector-latest-hadoop2.jar;
          mv gcs-connector-latest-hadoop2.jar lib; ./bin/zeppelin.sh
        env:
        - name: SPARK_MASTER
          value: spark://m3-master:7077
        image: apache/zeppelin:0.8.0
        imagePullPolicy: IfNotPresent
        name: m3-zeppelin
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        resources:
          requests:
            cpu: 100m
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /zeppelin/conf
          name: m3-zeppelin-config
        - mountPath: /zeppelin/notebook
          name: m3-zeppelin-notebook
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      creationTimestamp: null
      name: m3-zeppelin-config
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10G
      storageClassName: standard
    status:
      phase: Pending
  - metadata:
      creationTimestamp: null
      name: m3-zeppelin-notebook
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10G
      storageClassName: standard
    status:
      phase: Pending
status:
  collisionCount: 0
  currentReplicas: 1
  currentRevision: m3-zeppelin-5779b84d99
  observedGeneration: 16
  readyReplicas: 1
  replicas: 1
  updateRevision: m3-zeppelin-5779b84d99
  updatedReplicas: 1

Focusing specifically on the error from the Kubernetes perspective, it could be related to:
A mismatch between the port the request is sent to and the port the receiving end is listening on (for example, sending a request to an NGINX instance on port 1234).
The Pod not listening on the desired port at all (a quick check for this is sketched below).
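A quick way to check the second case is to look at what the forwarded Pod is actually listening on. This is only a sketch with placeholder names; netstat/ss may not be present in every image. Note that in the manifest above the Zeppelin container only declares port 8080 while the Spark master is referenced as spark://m3-master:7077, so the forward may simply be pointed at a pod that never listens on 7077:
# which pod are we forwarding to, and which ports do its containers declare?
kubectl get pods -o wide
kubectl get pod <target-pod> -o jsonpath='{.spec.containers[*].ports}'
# what is the process inside the pod really bound to?
kubectl exec -it <target-pod> -- sh -c 'netstat -tlnp || ss -tlnp'
# if nothing listens on 7077 here, forward to the pod (or service) that does, e.g. the Spark master
kubectl port-forward <spark-master-pod> 7077:7077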
I've managed to reproduce this error with a Kubernetes cluster created with kubespray.
Assuming that you've run following steps:
$ kubectl create deployment nginx --image=nginx
$ kubectl port-forward deployment/nginx 8080:80
Everything should be correct and the NGINX welcome page should appear when running: $ curl localhost:8080.
If we made a change to the port-forward command like below (notice the 1234 port):
$ kubectl port-forward deployment/nginx 8080:1234
You will get the following error:
Forwarding from 127.0.0.1:8080 -> 1234
Forwarding from [::1]:8080 -> 1234
Handling connection for 8080
E0303 22:37:30.698827 625081 portforward.go:400] an error occurred forwarding 8080 -> 1234: error forwarding port 1234 to pod e535674b2c8fbf66252692b083f89e40f22e48b7a29dbb98495d8a15326cd4c4, uid : exit status 1: 2021/03/23 11:44:38 socat[674028] E connect(5, AF=2 127.0.0.1:1234, 16): Connection refused
The same error also occurs with a Pod whose application has not bound to the port or is not listening on it.
A side note!
You can simulate it by running an Ubuntu Pod and trying to curl its port 80. It will fail because nothing is listening on that port. Then exec into it, run $ apt update && apt install -y nginx, and curl again (with kubectl port-forward configured). It will work and will not produce the socat error mentioned above.
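A minimal sketch of that simulation (pod name and image are just examples; nginx has to be started by hand since the container has no init system):
kubectl run ubuntu --image=ubuntu --restart=Never -- sleep infinity
kubectl port-forward pod/ubuntu 8080:80 &
curl localhost:8080        # connection refused / socat error: nothing listens on 80 yet
kubectl exec -it ubuntu -- bash -c 'apt update && apt install -y nginx && nginx'
curl localhost:8080        # now answered by nginx, no socat error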
Addressing the part of the question:
I even tried exposing port 7077 in YAML file. Did not work out.
If you mean that you've included - containerPort: 8080: this field is purely informational and does not by itself make the port reachable. You can read more about it here:
Stackoverflow.com: Answer: Why do we need a port/containerPort in a Kuberntes deployment/container definition?
(Besides being only informational, it also looks incorrect here, since you are actually using port 7077.)
As for $ kubectl port-forward --address 0.0.0.0: it's a way to make your port-forward listen on all interfaces of the host machine, which could allow access to it from your LAN:
$ kubectl port-forward --help (part):
# Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod
kubectl port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000
Additional resources:
Kubernetes.io: Docs: Tasks: Access application cluster: Port forward access to application cluster

Maybe you should run:
kubectl get pods
to check whether your pods are running.
In my case, I had started minikube without a network connection, so I hit the same error when using port-forward to forward a pod's port to my local machine.
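A minimal sanity check, assuming a minikube setup (the pod name is a placeholder):
minikube status            # host, kubelet and apiserver should all be Running
kubectl get pods -o wide   # the target pod should be Running, not Pending or CrashLoopBackOff
kubectl port-forward pod/<your-pod> 7077:7077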

Related

Kubernetes NodePort not listening

I'm doing some tutorials using k3d (k3s in docker) and my yml looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
With the resulting node port being 31747:
:~$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 18m
nginx NodePort 10.43.254.138 <none> 80:31747/TCP 17m
:~$ kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 172.18.0.2:6443 22m
nginx 10.42.0.8:80 21m
However wget does not work:
:~$ wget localhost:31747
Connecting to localhost:31747 ([::1]:31747)
wget: can't connect to remote host: Connection refused
:~$
What have I missed? I've ensured that my labels all say app: nginx and that my containerPort, port and targetPort are all 80.
The question is whether the NodePort range is mapped from the host into the Docker container acting as the node. The command docker ps will show you; for more details you can run docker inspect $container_id and look at the Ports attribute under NetworkSettings. I don't have k3d around, but here is an example from kind.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1d2225b83a73 kindest/node:v1.17.0 "/usr/local/bin/entr…" 18 hours ago Up 18 hours 127.0.0.1:32769->6443/tcp kind-control-plane
$ docker inspect kind-control-plane
[
    {
        # [...]
        "NetworkSettings": {
            # [...]
            "Ports": {
                "6443/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": "32769"
                    }
                ]
            },
            # [...]
        }
    }
]
If it is not, working with kubectl port-forward as suggested in the comment is probably the easiest approach. Alternatively, start looking into Ingress. Ingress is the preferred method to expose workloads outside of a cluster, and in the case of kind, they have support for Ingress. It seems k3d also has a way to map the ingress port to the host.
Turns out I didn't expose the ports when creating the cluster
https://k3d.io/usage/guides/exposing_services/
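For reference, the linked guide boils down to mapping the ports when the cluster is created; roughly (the -p node-filter syntax differs slightly between k3d versions):
# map host port 8080 to port 80 of k3d's built-in loadbalancer (for Ingress / LoadBalancer services)
k3d cluster create mycluster -p "8080:80@loadbalancer"
# or map a NodePort of the server node straight to the host
k3d cluster create mycluster -p "31747:31747@server:0"
After that, curl localhost:8080 (or localhost:31747 for the NodePort case) reaches the service from the host.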
Maybe your pod is running on another worker node, not localhost. You should use the correct node IP.

Kubernetes, access IP outside the cluster

I have a corporate network (10.22..) which hosts a Kubernetes cluster (10.225.0.1). How can I access a VM that is in the same network but outside the cluster from within a pod in the cluster?
For example, I have a VM listening at 10.22.0.1:30000 which I need to access from a Pod in the Kubernetes cluster. I tried to create a Service like this:
apiVersion: v1
kind: Service
metadata:
  name: vm-ip
spec:
  selector:
    app: vm-ip
  ports:
  - name: vm
    protocol: TCP
    port: 30000
    targetPort: 30000
  externalIPs:
  - 10.22.0.1
But when I do "curl http://vm-ip:30000" from a Pod (kubectl exec -it), it returns a "connection refused" error, while "google.com" works fine. What are the ways of accessing external IPs?
You can create an endpoint for that.
Let's go through an example:
In this example, I have an HTTP server on my network with IP 10.128.15.209 and I want it to be accessible from the pods inside my Kubernetes cluster.
The first thing is to create an endpoint. This lets me create a service pointing to that endpoint, which will redirect the traffic to my external HTTP server.
My endpoint manifest looks like this:
apiVersion: v1
kind: Endpoints
metadata:
  name: http-server
subsets:
- addresses:
  - ip: 10.128.15.209
  ports:
  - port: 80
$ kubectl apply -f http-server-endpoint.yaml
endpoints/http-server configured
Let's create our service:
apiVersion: v1
kind: Service
metadata:
  name: http-server
spec:
  ports:
  - port: 80
    targetPort: 80
$ kubectl apply -f http-server-service.yaml
service/http-server created
Check that our service exists and save its clusterIP for later use:
user@minikube-server:~$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
http-server ClusterIP 10.96.228.220 <none> 80/TCP 30m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 10d
Now it's time to verify if we can access our service from a pod:
$ kubectl run ubuntu -it --rm=true --restart=Never --image=ubuntu bash
This command will create and open a bash session inside a ubuntu pod.
In my case I'll install curl to be able to check that I can reach my HTTP server. You may need to install mysql instead:
root@ubuntu:/# apt update; apt install -y curl
Checking connectivity to my service using its clusterIP:
root@ubuntu:/# curl 10.96.228.220:80
Hello World!
And finally using the service name (DNS):
root@ubuntu:/# curl http-server
Hello World!
So, in your specific case you have to create this:
apiVersion: v1
kind: Endpoints
metadata:
  name: vm-server
subsets:
- addresses:
  - ip: 10.22.0.1
  ports:
  - port: 30000
---
apiVersion: v1
kind: Service
metadata:
  name: vm-server
spec:
  ports:
  - port: 30000
    targetPort: 30000
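Assuming the two manifests above are saved as vm-server.yaml, a quick way to verify from inside the cluster (the busybox pod is just a throwaway test client):
kubectl apply -f vm-server.yaml
kubectl run tmp -it --rm --restart=Never --image=busybox -- sh
# inside the pod:
wget -qO- http://vm-server:30000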

A proxy inside a kubernetes pod doesn't intercept any HTTP traffic

What I want is to have 2 applications running in a pod, each of those applications in its own container. Application A is a simple Spring Boot application which makes HTTP requests to the other application, which is deployed on Kubernetes. The purpose of Application B (the proxy) is to intercept that HTTP request and add an Authorization token to its header. Application B is mitmdump with a Python script. The issue I am having is that, once deployed on Kubernetes, the proxy does not seem to intercept any traffic at all (I tried to reproduce the issue on my local machine and didn't run into any problems, so I guess the issue lies somewhere within the networking inside the pod). Can someone have a look into it and guide me how to solve it?
Here's the deployment and service file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy-deployment
  namespace: myown
  labels:
    app: application-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application-a
  template:
    metadata:
      labels:
        app: application-a
    spec:
      containers:
      - name: application-a
        image: registry.gitlab.com/application-a
        resources:
          requests:
            memory: "230Mi"
            cpu: "100m"
          limits:
            memory: "460Mi"
            cpu: "200m"
        imagePullPolicy: Always
        ports:
        - containerPort: 8090
        env:
        - name: "HTTP_PROXY"
          value: "http://localhost:1030"
      - name:
        image: registry.gitlab.com/application-b-proxy
        resources:
          requests:
            memory: "230Mi"
            cpu: "100m"
          limits:
            memory: "460Mi"
            cpu: "200m"
        imagePullPolicy: Always
        ports:
        - containerPort: 1080
---
kind: Service
apiVersion: v1
metadata:
  name: proxy-svc
  namespace: myown
spec:
  ports:
  - nodePort: 31000
    port: 8090
    protocol: TCP
    targetPort: 8090
  selector:
    app: application-a
  sessionAffinity: None
  type: NodePort
And here's how I build the Docker image of mitmproxy/mitmdump:
FROM mitmproxy/mitmproxy:latest
ADD get_token.py .
WORKDIR ~/mit_docker
COPY get_token.py .
EXPOSE 1080:1080
ENTRYPOINT ["mitmdump","--listen-port", "1030", "-s","get_token.py"]
EDIT
I created two dummy docker images in order to have this scenario recreated locally.
APPLICATION A - a Spring Boot application with a job that makes an HTTP GET request every minute to a specified (but irrelevant) address; the address should be accessible and the response should be 302 FOUND. Every time an HTTP request is made, a message appears in the application's logs.
APPLICATION B - a proxy application which is supposed to proxy the docker container with application A. Every request is logged.
Make sure your docker proxy config is set to listen to http://localhost:8080 - you can check how to do so here
Open a terminal and run this command:
docker run -p 8080:8080 -ti registry.gitlab.com/dyrekcja117/proxyexample:application-b-proxy
Open another terminal and run this command:
docker run --network="host" registry.gitlab.com/dyrekcja117/proxyexample:application-a
Go into the shell with the container of application A in 3rd terminal:
docker exec -ti <name of docker container> sh
and try to make curl to whatever address you want.
And the issue I am struggling with is that when I run curl from inside the container with Application A, the request is intercepted by my proxy and can be seen in the logs. But whenever Application A itself makes the same request, it is not intercepted. The same thing happens on Kubernetes.
Let's first wrap up the facts we discovered during the troubleshooting discussion in the comments:
Your need is that an HTTP request from APP-A gets a token added in flight by PROXY before the request is sent to your data storage.
Every container in a Pod shares the network namespace, including the IP address and network ports. Containers inside a Pod can communicate with one another using localhost, source here.
You were able to log in to container application-a and send a curl request to container application-b-proxy on port 1030, proving the above statement.
The problem is that your proxy is not intercepting the request as expected.
You mention that you were able to make it work on localhost, but on localhost the proxy has more reach than it does inside a container.
Since I have access neither to your app-a code nor to the mitmproxy token.py, I will give you a general example of how to redirect traffic from container-a to container-b.
In order to make it work, I'll use NGINX Proxy Pass: it simply proxies the request to container-b.
Reproduction:
I'll use an nginx server as container-a.
I'll build it with this Dockerfile:
FROM nginx:1.17.3
RUN rm /etc/nginx/conf.d/default.conf
COPY frontend.conf /etc/nginx/conf.d
I'll add this configuration file frontend.conf:
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
This tells nginx that the traffic should be sent to container-b, which is listening on port 8080 inside the same pod.
I'll build this image as nginxproxy in my local repo:
$ docker build -t nginxproxy .
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginxproxy latest 7c203a72c650 4 minutes ago 126MB
Now the full.yaml deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy-deployment
  labels:
    app: application-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application-a
  template:
    metadata:
      labels:
        app: application-a
    spec:
      containers:
      - name: container-a
        image: nginxproxy:latest
        ports:
        - containerPort: 80
        imagePullPolicy: Never
      - name: container-b
        image: echo8080:latest
        ports:
        - containerPort: 8080
        imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  name: proxy-svc
spec:
  ports:
  - nodePort: 31000
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: application-a
  sessionAffinity: None
  type: NodePort
NOTE: I set imagePullPolicy as Never because I'm using my local docker image cache.
I'll list the changes I made to help you link it to your current environment:
container-a is doing the work of your application-a and I'm serving nginx on port 80 where you are using port 8090
container-b receives the request, like your application-b-proxy. The image I'm using is based on mendhak/http-https-echo; normally it listens on port 80, but I've made a custom image that listens on port 8080 instead and named it echo8080 (a sketch of one way to reproduce that is shown below).
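For completeness, a sketch of how the same behaviour can be obtained without a custom image, assuming the mendhak/http-https-echo image honors the HTTP_PORT environment variable (recent tags document it):
# run the echo server on port 8080 locally and test it
docker run --rm -e HTTP_PORT=8080 -p 8080:8080 mendhak/http-https-echo
# in another terminal:
curl localhost:8080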
First I created an nginx pod and exposed it alone to show you it's running (since there is nothing behind it, it will return a bad gateway, but you can see the output is from nginx):
$ kubectl apply -f nginx.yaml
pod/nginx created
service/nginx-svc created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 64s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-svc NodePort 10.103.178.109 <none> 80:31491/TCP 66s
$ curl http://192.168.39.51:31491
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.17.3</center>
</body>
</html>
I deleted the nginx pod and created an echo-app pod, exposing it to show you the response it gives when curled directly from outside:
$ kubectl apply -f echo.yaml
pod/echo created
service/echo-svc created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
echo 1/1 Running 0 118s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echo-svc NodePort 10.102.168.235 <none> 8080:32116/TCP 2m
$ curl http://192.168.39.51:32116
{
    "path": "/",
    "headers": {
        "host": "192.168.39.51:32116",
        "user-agent": "curl/7.52.1",
    },
    "method": "GET",
    "hostname": "192.168.39.51",
    "ip": "::ffff:172.17.0.1",
    "protocol": "http",
    "os": {
        "hostname": "echo"
    },
Now I'll apply the full.yaml:
$ kubectl apply -f full.yaml
deployment.apps/proxy-deployment created
service/proxy-svc created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
proxy-deployment-9fc4ff64b-qbljn 2/2 Running 0 1s
$ k get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
proxy-svc NodePort 10.103.238.103 <none> 80:31000/TCP 31s
Now the proof of concept: from outside the cluster, I'll send a curl to my node IP 192.168.39.51 on port 31000, which sends the request to port 80 on the pod (handled by nginx):
$ curl http://192.168.39.51:31000
{
    "path": "/",
    "headers": {
        "host": "127.0.0.1:8080",
        "user-agent": "curl/7.52.1",
    },
    "method": "GET",
    "hostname": "127.0.0.1",
    "ip": "::ffff:127.0.0.1",
    "protocol": "http",
    "os": {
        "hostname": "proxy-deployment-9fc4ff64b-qbljn"
    },
As you can see, the response contains the parameters of the pod and indicates that the request was sent from 127.0.0.1 rather than a public IP, which shows that NGINX is proxying the request to container-b.
Considerations:
This example was created to show you how the communication works inside kubernetes.
You will have to check how your application-a is handling the requests and edit it to send the traffic to your proxy.
Here are a few links with tutorials and explanations that could help you port your application to a Kubernetes environment:
Virtual Hosts on nginx
Implementing a Reverse proxy Server in Kubernetes Using the Sidecar Pattern
Validating OAuth 2.0 Access Tokens with NGINX and NGINX Plus
Use nginx to Add Authentication to Any Application
Connecting a Front End to a Back End Using a Service
Transparent Proxy and Filtering on K8s
I hope this example helps you.

Kubernetes service with ClusterIP does not pass the request to some of its Endpoints

I'm new to Kubernetes. I have set up a 3-node cluster with two workers according to here.
My configuration:
kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.10", GitCommit:"575467a0eaf3ca1f20eb86215b3bde40a5ae617a", GitTreeState:"clean", BuildDate:"2019-12-11T12:32:32Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:51:21Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
I deployed a simple Python service that listens on port 8000 over HTTP and replies "Hello world".
My deployment config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-app
  labels:
    app: frontend-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend-app
  template:
    metadata:
      labels:
        app: frontend-app
    spec:
      containers:
      - name: pyfrontend
        image: rushantha/pyfront:1.0
        ports:
        - containerPort: 8000
I exposed this as a service:
kubectl expose deploy frontend-app --port 8000
I can see it deployed and running.
kubectl describe svc frontend-app
Name: frontend-app
Namespace: default
Labels: app=frontend-app
Annotations: <none>
Selector: app=frontend-app
Type: ClusterIP
IP: 10.96.113.192
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
Endpoints: 172.16.1.10:8000,172.16.2.9:8000
Session Affinity: None
Events: <none>
When I log in to each node and curl the pods directly, they respond,
i.e. curl 172.16.1.10:8000 or curl 172.16.2.9:8000
However, when I try to access the pods via the ClusterIP, only one pod ever responds, so curl sometimes hangs; most probably the other pod does not respond. I confirmed this by tailing the access logs of both pods: one pod never received any requests.
curl 10.96.113.192:8000/ ---> Hangs sometimes.
Any ideas how to troubleshoot and fix this?
After comparing the tutorial document with the configuration shown in the outputs, I discovered that the --pod-network-cidr declared in the document is different from the OP's endpoint addresses; fixing this solved the problem.
The Network in the flannel configuration should match the pod network CIDR, otherwise pods won't be able to communicate with each other.
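A quick way to compare the two values, assuming a kubeadm + flannel setup where the flannel ConfigMap lives in kube-system (names can differ per installation):
# the CIDR the cluster was initialised with (flag of the controller-manager)
kubectl cluster-info dump | grep -m1 cluster-cidr
# the Network flannel was configured with
kubectl -n kube-system get cm kube-flannel-cfg -o jsonpath='{.data.net-conf\.json}'
# if they differ, re-initialise with matching values, e.g.
kubeadm init --pod-network-cidr=10.244.0.0/16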
Some additional information worth checking:
Under the CIDR Notation section there is a good explanation of how this system works.
I find this document about networking in Kubernetes very helpful.

How to set external IP for nginx-ingress controller in private cloud kubernetes cluster

I am setting up a Kubernetes cluster to run Hyperledger Fabric apps. My cluster is on a private cloud, hence I don't have a load balancer. How do I set an IP address for my nginx-ingress-controller (pending) so I can expose my services? I think this is interfering with my creation of pods, since when I run kubectl get pods I see very many evicted pods. I am using cert-manager, which I think also needs IPs.
CA_POD=$(kubectl get pods -n cas -l "app=hlf-ca,release=ca" -o jsonpath="{.items[0].metadata.name}")
This does not create any pods.
nginx-ingress-controller-5bb5cd56fb-lckmm 1/1 Running
nginx-ingress-default-backend-dc47d79c-8kqbp 1/1 Running
The rest take the form
nginx-ingress-controller-5bb5cd56fb-d48sj 0/1 Evicted
ca-hlf-ca-5c5854bd66-nkcst 0/1 Pending 0 0s
ca-postgresql-0 0/1 Pending 0 0s
I would like to create pods from which I can run exec commands like
kubectl exec -n cas $CA_POD -- cat /var/hyperledger/fabric-ca/msp/signcertscert.pem
You are not exposing the nginx-controller's IP address; you expose nginx's Service via a NodePort. For example:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-controller
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    name: http
  selector:
    app: nginx
In this case you'd be able to reach your service like
curl -v <NODE_EXTERNAL_IP>:30080
As to why your pods are in a Pending or Evicted state, please describe the misbehaving pods:
kubectl describe pod nginx-ingress-controller-5bb5cd56fb-d48sj
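Evicted pods usually point at node resource pressure; a few illustrative commands to narrow it down:
kubectl describe nodes | grep -A7 Conditions     # look for MemoryPressure / DiskPressure
kubectl top nodes                                 # requires metrics-server
# clean up the evicted (Failed) pods once the cause is fixed
kubectl get pods --field-selector=status.phase=Failed -o name | xargs -r kubectl delete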
The best approach is to use Helm:
helm install stable/nginx-ingress
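On a private cloud with no LoadBalancer implementation, the controller's Service will stay pending forever; a common workaround is to run the controller behind a NodePort instead. A hedged example with the stable/nginx-ingress chart (Helm v2 syntax; check your chart version for the exact value keys):
helm install stable/nginx-ingress --name nginx-ingress \
  --set controller.service.type=NodePort \
  --set controller.service.nodePorts.http=30080 \
  --set controller.service.nodePorts.https=30443
Traffic then reaches the ingress controller on <any-node-ip>:30080/30443, which you can point DNS or an external proxy at.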
