I have created a deployment with the simple YAML file below, applied with
kubectl apply -f deployment.yaml
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    tier: frontend
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  template:
    metadata:
      name: nginx-pod
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx-pod
        image: nginx
and then created a service for it with the YAML file below, applied with
kubectl apply -f service.yaml
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  ports:
  - port: 89
    targetPort: 89
    nodePort: 30009
  selector:
    app: myapp
Now when I run
minikube service myapp-service
it gives me
$ minikube service myapp-service
|-----------|---------------|-------------|-------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|---------------|-------------|-------------------------|
| default | myapp-service | 89 | http://172.17.0.2:30009 |
|-----------|---------------|-------------|-------------------------|
🏃 Starting tunnel for service myapp-service.
|-----------|---------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|---------------|-------------|------------------------|
| default | myapp-service | | http://127.0.0.1:52289 |
|-----------|---------------|-------------|------------------------|
🎉 Opening service default/myapp-service in default browser...
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
and when I try to access the given http://127.0.0.1:52289, I get a "This site can’t be reached" error.
Is there anything wrong in the YAML files? I am using:
minikube version: v1.12.1
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Docker version:
$ docker version
Client: Docker Engine - Community
 Version:           19.03.8
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        afacb8b
 Built:             Wed Mar 11 01:21:11 2020
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       afacb8b
  Built:            Wed Mar 11 01:29:16 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
This is a community wiki answer aimed at summing up the info from the comments with additional details and explanations.
The commenters are right: there is a flaw in your ports configuration. There are a few things you need to know that will help you better understand this concept. Kubernetes services have several different port settings:
Port: exposes the Kubernetes service on the specified port within the cluster. Other pods within the cluster can communicate with this service on the specified port.
TargetPort: the port that the service forwards requests to, and that your pod listens on. The application in the container must be listening on this port.
NodePort: exposes the service outside the cluster via the target node's IP address and the nodePort value. If nodePort is not specified, Kubernetes allocates one from the default range (30000-32767).
In your example, the myapp-service service is exposed to applications inside the cluster on port 89 and outside the cluster on the node's IP address at port 30009. It also forwards requests to pods with the label app: myapp on port 89.
You can check the details of your service with:
kubectl describe service myapp-service
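The Endpoints field of that output is the key thing to check: if it is empty, the selector doesn't match any pods. You can also list the resolved endpoints directly; the output below is illustrative (pod IPs will differ), assuming the deployment above:
$ kubectl get endpoints myapp-service
NAME            ENDPOINTS                                   AGE
myapp-service   172.17.0.4:89,172.17.0.5:89,172.17.0.6:89   5m
Note that the endpoints point at port 89, while the nginx container actually listens on 80 - which is exactly the flaw.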
The nginx image listens on port 80 by default, so adjust your targetPort: to 80 and it should be fine.
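For reference, a corrected service.yaml would look like this (only targetPort changes; port and nodePort can stay as they are):
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  ports:
  - port: 89
    targetPort: 80
    nodePort: 30009
  selector:
    app: myapp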
EDIT:
I deleted minikube, enabled Kubernetes in Docker Desktop for Windows, and installed ingress-nginx manually.
$ helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace
Release "ingress-nginx" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ServiceAccount "ingress-nginx" in namespace "ingress-nginx" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "ingress-nginx"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "ingress-nginx"
It gave me an error, but I think that's just because I had already installed it before:
$ kubectl get svc -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.106.222.233   localhost     80:30199/TCP,443:31093/TCP   11m
ingress-nginx-controller-admission   ClusterIP      10.106.52.106    <none>        443/TCP                      11m
Then I applied all my YAML files again, but this time the ingress is not getting an address:
$ kubectl get ing
NAME            CLASS    HOSTS       ADDRESS   PORTS   AGE
myapp-ingress   <none>   myapp.com             80      10m
I am using Docker Desktop (Windows) and installed the NGINX ingress controller via the minikube addons enable command:
$ kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create--1-lp4md     0/1     Completed   0          67m
ingress-nginx-admission-patch--1-jdkn7      0/1     Completed   1          67m
ingress-nginx-controller-5f66978484-6mpfh   1/1     Running     0          67m
And applied all my YAML files:
$ kubectl get svc --all-namespaces -o wide
NAMESPACE       NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
default         event-service-svc                    ClusterIP      10.108.251.79    <none>        80/TCP                       16m   app=event-service-app
default         kubernetes                           ClusterIP      10.96.0.1        <none>        443/TCP                      16m   <none>
default         mssql-clusterip-srv                  ClusterIP      10.98.10.22      <none>        1433/TCP                     16m   app=mssql
default         mssql-loadbalancer                   LoadBalancer   10.109.106.174   <pending>     1433:31430/TCP               16m   app=mssql
default         user-service-svc                     ClusterIP      10.111.128.73    <none>        80/TCP                       16m   app=user-service-app
ingress-nginx   ingress-nginx-controller             NodePort       10.101.112.245   <none>        80:31583/TCP,443:30735/TCP   68m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
ingress-nginx   ingress-nginx-controller-admission   ClusterIP      10.105.169.167   <none>        443/TCP                      68m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kube-system     kube-dns                             ClusterIP      10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP       72m   k8s-app=kube-dns
All pods and services seem to be running properly. I checked the pod logs; all migrations etc. have run and the app is up and running. But when I try to send an HTTP request, I get a socket hang up error. I've checked the logs for all the pods and couldn't find anything useful.
$ kubectl get ingress
NAME            CLASS   HOSTS       ADDRESS     PORTS   AGE
myapp-ingress   nginx   myapp.com   localhost   80      74s
This one is also a bit weird: I was expecting ADDRESS to be set to an IP, not to localhost. So adding a 127.0.0.1 entry for myapp.com in /etc/hosts didn't seem right either.
My question is: what might I be doing wrong? And how can I even trace where my requests are being forwarded to?
ingress-svc.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /api/Users
        pathType: Prefix
        backend:
          service:
            name: user-service-svc
            port:
              number: 80
      - path: /api/Events
        pathType: Prefix
        backend:
          service:
            name: event-service-svc
            port:
              number: 80
events-depl.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-service-app
  labels:
    app: event-service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: event-service-app
  template:
    metadata:
      labels:
        app: event-service-app
    spec:
      containers:
      - name: event-service-app
        image: ghcr.io/myapp/event-service:master
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: myapp
---
apiVersion: v1
kind: Service
metadata:
  name: event-service-svc
spec:
  selector:
    app: event-service-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Reproduction
I reproduced the case using minikube v1.24.0, Docker Desktop 4.2.0, engine 20.10.10.
First, the localhost shown in the ingress ADDRESS column comes from the controller's own logic; it doesn't really matter what IP address is behind the domain in /etc/hosts. I added a different one for testing and it still showed localhost. Only MetalLB would provide an IP address from a configured network.
What happens
When the minikube driver is docker, minikube creates one big container (acting as a VM) in which the Kubernetes components run. This can be checked by running the docker ps command on the host system:
$ docker ps
CONTAINER ID   IMAGE                                 COMMAND                  CREATED          STATUS          PORTS                                                                                                                                  NAMES
f087dc669944   gcr.io/k8s-minikube/kicbase:v0.0.28   "/usr/local/bin/entr…"   16 minutes ago   Up 16 minutes   127.0.0.1:59762->22/tcp, 127.0.0.1:59758->2376/tcp, 127.0.0.1:59760->5000/tcp, 127.0.0.1:59761->8443/tcp, 127.0.0.1:59759->32443/tcp   minikube
You can then run minikube ssh to get inside this container, where docker ps shows all the Kubernetes containers.
Moving forward: before introducing ingress, it's already clear that even NodePort doesn't work as intended. Let's check it.
There are two ways to get the minikube VM IP:
run minikube ip
run kubectl get nodes -o wide and find the node's internal IP
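For example (illustrative; 192.168.49.2 is the usual default for the docker driver, your address may differ):
$ minikube ip
192.168.49.2
The same address appears in the INTERNAL-IP column of kubectl get nodes -o wide.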
With NodePort, requests should then go to minikube_IP:NodePort, but this doesn't work. That's because the docker containers inside the minikube VM are not exposed outside of the minikube container itself.
To access services within the cluster, minikube has a special command - minikube service %service_name% - which creates a direct tunnel to the service inside the minikube VM (you can see that the output contains the service URL with the NodePort, which is supposed to be working):
$ minikube service echo
|-----------|------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|------|-------------|---------------------------|
| default | echo | 8080 | http://192.168.49.2:32034 |
|-----------|------|-------------|---------------------------|
* Starting tunnel for service echo.
|-----------|------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|------|-------------|------------------------|
| default | echo | | http://127.0.0.1:61991 |
|-----------|------|-------------|------------------------|
* Opening service default/echo in default browser...
! Because you are using a Docker driver on windows, the terminal needs to be open to run it
And now it's available on the host machine:
$ curl http://127.0.0.1:61991/
StatusCode : 200
StatusDescription : OK
Adding ingress
Let's move forward and add ingress.
$ minikube addons enable ingress
$ kubectl get svc -A
NAMESPACE       NAME                       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
default         echo                       NodePort   10.111.57.237   <none>        8080:32034/TCP               25m
ingress-nginx   ingress-nginx-controller   NodePort   10.104.52.175   <none>        80:31041/TCP,443:31275/TCP   2m12s
Trying to get any response from ingress by hitting minikube_IP:NodePort with no luck:
$ curl 192.168.49.2:31041
curl : Unable to connect to the remote server
At line:1 char:1
+ curl 192.168.49.2:31041
Trying to create a tunnel with minikube service command:
$ minikube service ingress-nginx-controller -n ingress-nginx
|---------------|--------------------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|---------------|--------------------------|-------------|---------------------------|
| ingress-nginx | ingress-nginx-controller | http/80 | http://192.168.49.2:31041 |
| | | https/443 | http://192.168.49.2:31275 |
|---------------|--------------------------|-------------|---------------------------|
* Starting tunnel for service ingress-nginx-controller.
|---------------|--------------------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|---------------|--------------------------|-------------|------------------------|
| ingress-nginx | ingress-nginx-controller | | http://127.0.0.1:62234 |
| | | | http://127.0.0.1:62235 |
|---------------|--------------------------|-------------|------------------------|
* Opening service ingress-nginx/ingress-nginx-controller in default browser...
* Opening service ingress-nginx/ingress-nginx-controller in default browser...
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
And we get a 404 from ingress-nginx, which means requests are reaching the ingress controller:
$ curl http://127.0.0.1:62234
curl : 404 Not Found
nginx
At line:1 char:1
+ curl http://127.0.0.1:62234
Solutions
Above I explained what happens. Here are three solutions for getting it to work:
Use another minikube driver (e.g. virtualbox; I used hyperv since my laptop runs Windows 10 Pro).
minikube ip will then return a "normal" virtual machine IP address and all network functionality will work just fine. You will need to add this IP address to /etc/hosts for the domain used in the ingress rule.
Note: this works even though localhost was shown in the ADDRESS column of the kubectl get ing output.
Use the built-in Kubernetes feature in Docker Desktop for Windows.
You will need to manually install ingress-nginx and change the ingress-nginx-controller service type from NodePort to LoadBalancer so that it becomes available on localhost and works. Please see my other answer about Docker Desktop for Windows.
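A sketch of that type change as a one-liner (service and namespace names as created by the chart above):
$ kubectl patch svc ingress-nginx-controller -n ingress-nginx -p '{"spec": {"type": "LoadBalancer"}}'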
(testing only) - use port-forward
It's almost exactly the same idea as the minikube service command, but with more control. You open a tunnel from host port 80 to the ingress-nginx-controller service (and eventually the pod) on port 80 as well. /etc/hosts should contain a 127.0.0.1 test.domain entry.
$ kubectl port-forward service/ingress-nginx-controller -n ingress-nginx 80:80
Forwarding from 127.0.0.1:80 -> 80
Forwarding from [::1]:80 -> 80
And testing it works:
$ curl test.domain
StatusCode : 200
StatusDescription : OK
Update for Kubernetes in Docker Desktop on Windows and ingress:
On modern ingress-nginx versions, .spec.ingressClassName should be added to ingress rules, so the ingress rule should look like:
apiVersion: networking.k8s.io/v1
kind: Ingress
...
spec:
  ingressClassName: nginx # can be checked by kubectl get ingressclass
  rules:
  - host: myapp.com
    http:
      ...
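The check mentioned in the comment looks roughly like this (illustrative output for an ingress-nginx installation):
$ kubectl get ingressclass
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       5m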
I have a corporate network (10.22..) which hosts a Kubernetes cluster (10.225.0.1). How can I access a VM that is in the same network but outside the cluster, from within a pod in the cluster?
For example, I have a VM listening on 10.22.0.1:30000 which I need to access from a Pod in the Kubernetes cluster. I tried to create a Service like this:
apiVersion: v1
kind: Service
metadata:
  name: vm-ip
spec:
  selector:
    app: vm-ip
  ports:
  - name: vm
    protocol: TCP
    port: 30000
    targetPort: 30000
  externalIPs:
  - 10.22.0.1
But when I do "curl http://vm-ip:30000" from a Pod (via kubectl exec -it), it returns a "connection refused" error, while "google.com" works fine. What are the ways of accessing external IPs?
You can create an endpoint for that.
Let's go through an example:
In this example, I have an HTTP server on my network with IP 10.128.15.209 and I want it to be accessible from the pods inside my Kubernetes cluster.
The first thing is to create an endpoint. This will let me create a service pointing to this endpoint, which will redirect the traffic to my external HTTP server.
My endpoint manifest looks like this:
apiVersion: v1
kind: Endpoints
metadata:
  name: http-server
subsets:
- addresses:
  - ip: 10.128.15.209
  ports:
  - port: 80
$ kubectl apply -f http-server-endpoint.yaml
endpoints/http-server configured
Let's create our service:
apiVersion: v1
kind: Service
metadata:
  name: http-server
spec:
  ports:
  - port: 80
    targetPort: 80
$ kubectl apply -f http-server-service.yaml
service/http-server created
Checking that our service exists and saving its clusterIP for later usage:
user@minikube-server:~$ kubectl get service
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
http-server   ClusterIP   10.96.228.220   <none>        80/TCP    30m
kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP   10d
Now it's time to verify if we can access our service from a pod:
$ kubectl run ubuntu -it --rm=true --restart=Never --image=ubuntu bash
This command creates and opens a bash session inside an ubuntu pod.
In my case I'll install curl to be able to check that I can reach my HTTP server; you may need to install a different client (e.g. mysql) depending on your application:
root@ubuntu:/# apt update; apt install -y curl
Checking connectivity with my service using clusterIP:
root@ubuntu:/# curl 10.96.228.220:80
Hello World!
And finally using the service name (DNS):
root@ubuntu:/# curl http-server
Hello World!
So, in your specific case you have to create the following. Note that the Service has no selector and shares its name with the Endpoints object - that name match is what ties the two together:
apiVersion: v1
kind: Endpoints
metadata:
  name: vm-server
subsets:
- addresses:
  - ip: 10.22.0.1
  ports:
  - port: 30000
---
apiVersion: v1
kind: Service
metadata:
  name: vm-server
spec:
  ports:
  - port: 30000
    targetPort: 30000
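A quick way to verify it from inside the cluster, assuming the VM really listens on 10.22.0.1:30000, is a throwaway curl pod (a sketch; the image and pod name are just examples):
$ kubectl run tmp --rm -it --restart=Never --image=curlimages/curl -- curl -v vm-server:30000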
I'm new to Kubernetes. I have set up a 3-node cluster with two workers according to this guide.
My configuration:
kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.10", GitCommit:"575467a0eaf3ca1f20eb86215b3bde40a5ae617a", GitTreeState:"clean", BuildDate:"2019-12-11T12:32:32Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:51:21Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
I deployed a simple Python service that listens on port 8000 for HTTP and replies "Hello world".
My deployment config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-app
  labels:
    app: frontend-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend-app
  template:
    metadata:
      labels:
        app: frontend-app
    spec:
      containers:
      - name: pyfrontend
        image: rushantha/pyfront:1.0
        ports:
        - containerPort: 8000
I exposed this as a service:
kubectl expose deploy frontend-app --port 8000
I can see it deployed and running.
kubectl describe svc frontend-app
Name: frontend-app
Namespace: default
Labels: app=frontend-app
Annotations: <none>
Selector: app=frontend-app
Type: ClusterIP
IP: 10.96.113.192
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
Endpoints: 172.16.1.10:8000,172.16.2.9:8000
Session Affinity: None
Events: <none>
When I log in to each worker machine and curl the pods directly, they respond,
i.e. curl 172.16.1.10:8000 or curl 172.16.2.9:8000.
However, when I try to access the pods via the ClusterIP, only one pod ever responds, so curl sometimes hangs; most probably the other pod is unreachable. I confirmed this by tailing the access logs of both pods: one pod never received any requests.
curl 10.96.113.192:8000/ ---> Hangs sometimes.
Any ideas how to troubleshoot and fix this?
After comparing the tutorial document with the OP's configuration output, I discovered that the --pod-network-cidr declared in the document differs from the OP's endpoints, and fixing this solved the problem.
The Network in the flannel configuration must match the pod network CIDR, otherwise pods won't be able to communicate with each other; see the sketch below.
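For example, with flannel's stock kube-flannel.yml the Network is 10.244.0.0/16, so the cluster has to be initialized with a matching CIDR (a sketch of the two pieces that must agree):
$ kubeadm init --pod-network-cidr=10.244.0.0/16
# corresponding snippet from kube-flannel.yml
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }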
Some additional information worth checking:
Under the CIDR Notation section there is a good explanation of how this system works.
I find this document about networking in Kubernetes very helpful.
I am running pyspark on a port inside Kubernetes and trying to port-forward it to my local machine. I am getting this error while executing my Python file:
Forwarding from 127.0.0.1:7077 -> 7077
Forwarding from [::1]:7077 -> 7077
Handling connection for 7077
E0401 01:08:11.964798 20399 portforward.go:400] an error occurred forwarding 7077 -> 7077: error forwarding port 7077 to pod 68ced395bd081247d1ee6b431776ac2bd3fbfda4d516da156959b6271c2ad90c, uid : exit status 1: 2019/03/31 19:38:11 socat[1748104] E connect(5, AF=2 127.0.0.1:7077, 16): Connection refused
These are a few lines of my Python file; the error occurs at the line where conf is defined:
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext
conf = SparkConf().setMaster("spark://localhost:7077").setAppName("Stand Alone Python Script")
I already tried installing socat in the cluster. I am using Spark version 2.4.0 locally. I even tried exposing port 7077 in the YAML file. It did not work out.
This is the YAML file used for the deployment:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  creationTimestamp: 2018-10-07T15:23:35Z
  generation: 16
  labels:
    chart: spark-0.2.1
    component: m3-zeppelin
    heritage: Tiller
    release: m3
  name: m3-zeppelin
  namespace: default
  resourceVersion: "55461362"
  selfLink: /apis/apps/v1beta1/namespaces/default/statefulsets/m3-zeppelin
  uid: f56e86fa-ca44-11e8-af6c-42010a8a00f2
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      component: m3-zeppelin
  serviceName: m3-zeppelin
  template:
    metadata:
      creationTimestamp: null
      labels:
        chart: spark-0.2.1
        component: m3-zeppelin
        heritage: Tiller
        release: m3
    spec:
      containers:
      - args:
        - bash
        - -c
        - wget -qO- https://archive.apache.org/dist/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz
          | tar xz; mv spark-2.2.2-bin-hadoop2.7 spark; curl -sSLO https://storage.googleapis.com/hadoop-lib/gcs/gcs-connector-latest-hadoop2.jar;
          mv gcs-connector-latest-hadoop2.jar lib; ./bin/zeppelin.sh
        env:
        - name: SPARK_MASTER
          value: spark://m3-master:7077
        image: apache/zeppelin:0.8.0
        imagePullPolicy: IfNotPresent
        name: m3-zeppelin
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        resources:
          requests:
            cpu: 100m
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /zeppelin/conf
          name: m3-zeppelin-config
        - mountPath: /zeppelin/notebook
          name: m3-zeppelin-notebook
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      creationTimestamp: null
      name: m3-zeppelin-config
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10G
      storageClassName: standard
    status:
      phase: Pending
  - metadata:
      creationTimestamp: null
      name: m3-zeppelin-notebook
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10G
      storageClassName: standard
    status:
      phase: Pending
status:
  collisionCount: 0
  currentReplicas: 1
  currentRevision: m3-zeppelin-5779b84d99
  observedGeneration: 16
  readyReplicas: 1
  replicas: 1
  updateRevision: m3-zeppelin-5779b84d99
  updatedReplicas: 1
Focusing specifically on the error from the Kubernetes perspective, it could be related to:
A mismatch between the port the request is sent to and the port on the receiving end (for example, sending a request to an NGINX instance on port: 1234).
The Pod not listening on the desired port.
I've managed to reproduce this error with a Kubernetes cluster created with kubespray.
Assuming that you've run the following steps:
$ kubectl create deployment nginx --image=nginx
$ kubectl port-forward deployment/nginx 8080:80
Everything should be correct and the NGINX welcome page should appear when running: $ curl localhost:8080.
If we change the port-forward command as below (notice the 1234 port):
$ kubectl port-forward deployment/nginx 8080:1234
You will get the following error:
Forwarding from 127.0.0.1:8080 -> 1234
Forwarding from [::1]:8080 -> 1234
Handling connection for 8080
E0303 22:37:30.698827 625081 portforward.go:400] an error occurred forwarding 8080 -> 1234: error forwarding port 1234 to pod e535674b2c8fbf66252692b083f89e40f22e48b7a29dbb98495d8a15326cd4c4, uid : exit status 1: 2021/03/23 11:44:38 socat[674028] E connect(5, AF=2 127.0.0.1:1234, 16): Connection refused
The same error also occurs with a Pod whose application hasn't bound to the port and/or is not listening.
A side note!
You can simulate this by running an Ubuntu Pod and trying to curl its port 80. It will fail, as nothing listens on that port. Then exec into it, run $ apt update && apt install -y nginx, and curl again (with kubectl port-forward configured). It will work and won't produce the socat error mentioned.
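A sketch of that simulation (the pod name and ports are just examples; run the port-forward in a separate terminal):
$ kubectl run ubuntu --image=ubuntu -- sleep infinity
$ kubectl port-forward pod/ubuntu 8080:80
$ curl localhost:8080            # fails with the socat error - nothing listens on 80
$ kubectl exec -it ubuntu -- bash -c "apt update && apt install -y nginx && service nginx start"
$ curl localhost:8080            # now returns the NGINX welcome page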
Addressing the part of the question:
I even tried exposing port 7077 in YAML file. Did not work out.
If you mean that you've included - containerPort: 8080: this field is purely informational and does not carry any configuration. You can read more about it here:
Stackoverflow.com: Answer: Why do we need a port/containerPort in a Kubernetes deployment/container definition?
(Besides, I consider it incorrect anyway, since you are using port 7077, not 8080.)
As for $ kubectl port-forward --address 0.0.0.0: it's a way to make your port-forward listen on all interfaces of the host machine, which can allow access to it from the LAN:
$ kubectl port-forward --help (part):
# Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod
kubectl port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000
Additional resources:
Kubernetes.io: Docs: Tasks: Access application cluster: Port forward access to application cluster
Maybe you should use the command
kubectl get pods
to check whether your pods are running.
In my case, I had started minikube without a network connection, so I ran into the same problem when using port-forward to forward a pod's port to a local machine port.
I have deployed a Windows container which runs successfully on my local system using Docker. I moved the image to Azure Container Registry and deployed it from ACR to an Azure Container Service Kubernetes cluster. It says it has been deployed successfully, but we can't access it using the public IP assigned to it.
Dockerfile
# The `FROM` instruction specifies the base image. You are
# extending the `microsoft/aspnet` image.
FROM microsoft/aspnet
# The final instruction copies the site you published earlier into the container.
COPY . /inetpub/wwwroot
Manifest file (YAML)
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ewimscloudpoc-v1
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: ewimscloudpoc-v1
    spec:
      containers:
      - name: ewims
        image: acraramsam.azurecr.io/ewims:v1
        ports:
        - containerPort: 80
        args: ["-it"]
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
        env:
        - name: dev
          value: "ewimscloudpoc-v1"
      nodeSelector:
        beta.kubernetes.io/os: windows
---
apiVersion: v1
kind: Service
metadata:
  name: ewimscloudpoc-v1
spec:
  loadBalancerIP: 104.40.9.103
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: ewimscloudpoc-v1
This is the YAML written for the deployment from ACR to ACS.
Command used to deploy: kubectl create -f filename.yaml
When I open the assigned IP in a browser, it says the site can't be reached.
D:\>kubectl describe po ewimscloudpoc-v1-2192714781-hg5z3
Name: ewimscloudpoc-v1-2192714781-hg5z3
Namespace: default
Node: 54d99acs9000/10.240.0.4
Start Time: Fri, 21 Dec 2018 18:42:38 +0530
Labels: app=ewimscloudpoc-v1
pod-template-hash=2192714781
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"ewimscloudpoc-v1-2192714781","uid":"170fbfeb-0522-11e9-9805-000d...
Status: Pending
IP:
Controlled By: ReplicaSet/ewimscloudpoc-v1-2192714781
Containers:
ewims:
Container ID:
Image: acraramsam.azurecr.io/ewims:v1
Image ID:
Port: 80/TCP
Host Port: 0/TCP
Args:
-it
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Limits:
cpu: 500m
Requests:
cpu: 250m
Environment:
dev: ewimscloudpoc-v1
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-8nmv0 (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-8nmv0:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-8nmv0
Optional: false
QoS Class: Burstable
Node-Selectors: beta.kubernetes.io/os=windows
Tolerations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned ewimscloudpoc-v1-2192714781-hg5z3 to 54d99acs9000
Normal SuccessfulMountVolume 11m kubelet, 54d99acs9000 MountVolume.SetUp succeeded for volume "default-token-8nmv0"
Normal Pulling 1m (x7 over 11m) kubelet, 54d99acs9000 pulling image "acraramsam.azurecr.io/ewims:v1"
Warning FailedSync 7s (x56 over 11m) kubelet, 54d99acs9000 Error syncing pod
Normal BackOff 7s (x49 over 11m) kubelet, 54d99acs9000 Back-off pulling image "acraramsam.azurecr.io/ewims:v1"
Your pod fails to start because you don't have an image pull secret for your ACR. Create one:
kubectl create secret docker-registry <SECRET_NAME> --docker-server <REGISTRY_NAME>.azurecr.io --docker-email <YOUR_MAIL> --docker-username=<SERVICE_PRINCIPAL_ID> --docker-password <YOUR_PASSWORD>
https://thorsten-hans.com/how-to-use-a-private-azure-container-registry-with-kubernetes-9b86e67b93b6
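Once the secret exists, it also needs to be referenced from the pod spec of the deployment above, otherwise the kubelet keeps pulling anonymously. A sketch of the relevant part (<SECRET_NAME> as used in the command above):
spec:
  containers:
  - name: ewims
    image: acraramsam.azurecr.io/ewims:v1
  imagePullSecrets:
  - name: <SECRET_NAME>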
Adding the security rules for ACS to access ACR repos as stated in this link - https://thorsten-hans.com/how-to-use-a-private-azure-container-registry-with-kubernetes-9b86e67b93b6 - and updating my Dockerfile as below fixed my issues:
FROM microsoft/iis:10.0.14393.206
SHELL ["powershell"]
RUN Install-WindowsFeature NET-Framework-45-ASPNET ; \
    Install-WindowsFeature Web-Asp-Net45
COPY sampleapp sampleapp
RUN Remove-WebSite -Name 'Default Web Site'
RUN New-Website -Name 'sampleapp' -Port 80 \
    -PhysicalPath 'c:\sampleapp' -ApplicationPool '.NET v4.5'
EXPOSE 80
CMD ["ping", "-t", "localhost"]