Making ingress available at <nodeinternalip>/<serviceendpoint> in a Kubernetes cluster - nginx

I have a 2-node cluster with 1 worker node. I have set up the ingress controller as per the documentation and then created a Deployment, a Service (NodePort) and an Ingress object. My goal is to make the service accessible using curl -s INTERNAL_IP/Serviceendpoint. What configuration is required to make this happen? This works great on minikube but not on the cluster.
Note - the service works fine and shows the nginx page when accessed using <INTERNAL_IP>:NODEPORT.
Here are sample Service and Ingress object definitions -
apiVersion: v1
kind: Service
metadata:
  name: nginx-test1
  labels:
    app: nginx-test1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-test1
  type: NodePort
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-test1
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /nginx-test1
        backend:
          serviceName: nginx-test1
          servicePort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx-test1
  name: nginx-test1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-test1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-test1
    spec:
      containers:
      - image: nginx
        name: nginx-test1
        resources: {}
        ports:
        - containerPort: 80
          protocol: TCP
$ kubectl get nodes -o wide
NAME           STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
controlplane   Ready    master   50m   v1.19.0   172.17.0.10   <none>        Ubuntu 18.04.4 LTS   4.15.0-111-generic   docker://19.3.6
node01         Ready    <none>   49m   v1.19.0   172.17.0.11   <none>        Ubuntu 18.04.4 LTS   4.15.0-111-generic   docker://19.3.6

$ kubectl get svc
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP        51m
nginx-test1   NodePort    10.96.244.119   <none>        80:30844/TCP   3m3s
$ kubectl describe ing
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Name:             nginx-test1
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host        Path          Backends
  ----        ----          --------
  *
              /nginx-test1  nginx-test1:80 (10.244.1.8:80,10.244.1.9:80)
Annotations:  nginx.ingress.kubernetes.io/rewrite-target: /
$ curl -s -v 172.17.0.11/nginx-test1
*   Trying 172.17.0.11...
* TCP_NODELAY set
* connect to 172.17.0.11 port 80 failed: Connection refused
* Failed to connect to 172.17.0.11 port 80: Connection refused
* Closing connection 0
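Connection refused on port 80 here usually means that nothing on the node is actually listening on port 80: a NodePort service for the ingress controller listens on a high port (30000-32767), not on 80. On a bare-metal or kubeadm cluster, one common approach - a sketch only, and it assumes the stock ingress-nginx controller manifest in the ingress-nginx namespace - is to run the controller with hostNetwork so it binds ports 80/443 directly on each node it runs on:

```yaml
# Hypothetical snippet to merge into the ingress-nginx controller
# Deployment/DaemonSet pod spec. With hostNetwork the controller binds
# 80/443 on the node itself, so curl <INTERNAL_IP>/nginx-test1 can reach
# it without going through a NodePort.
spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
```

Alternatively, expose the controller's own Service as NodePort and use curl <INTERNAL_IP>:<nodePort>/nginx-test1 instead.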

Related

FastAPI docs on kubernetes not working with devspace on a minikube cluster. 502 bad gateway

I am trying to develop an application on Kubernetes with hot reloading (instant code sync), using DevSpace. When running my application on a minikube cluster, everything works and I am able to hit the ingress to reach my FastAPI docs. The problem is when I use DevSpace: I can exec into my pods and see my changes reflected right away, but when I try to hit my ingress to reach my FastAPI docs, I get a 502 Bad Gateway.
I have an api-pod.yaml file as such:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: project-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: project-api
  template:
    metadata:
      labels:
        app: project-api
    spec:
      containers:
      - image: project/project-api:0.0.1
        name: project-api
        command: ["uvicorn"]
        args: ["endpoint:app", "--port=8000", "--host", "0.0.0.0"]
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /api/v1/project/tasks/
            port: 8000
          initialDelaySeconds: 5
          timeoutSeconds: 1
          periodSeconds: 600
          failureThreshold: 3
        ports:
        - containerPort: 8000
          name: http
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: project-api
spec:
  selector:
    app: project-api
  ports:
  - port: 8000
    protocol: TCP
    targetPort: http
  type: ClusterIP
I have an api-ingress.yaml file as such:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: project-ingress
spec:
  rules:
  - http:
      paths:
      - path: /api/v1/project/tasks/
        pathType: Prefix
        backend:
          service:
            name: project-api
            port:
              number: 8000
  ingressClassName: nginx
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
Using kubectl get ep, I get:
NAME          ENDPOINTS         AGE
project-api   172.17.0.6:8000   17m
Using kubectl get svc, I get:
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
project-api   ClusterIP   10.97.182.167   <none>        8000/TCP   17m
Using kubectl get ingress, I get:
NAME          CLASS   HOSTS   ADDRESS          PORTS   AGE
api-ingress   nginx   *       192.168.64.112   80      17m
To reiterate: my problem is that when I try reaching the FastAPI docs using 192.168.64.112/api/v1/project/tasks/docs, I get a 502 Bad Gateway.
I'm running:
macOS Monterey: 12.4
Minikube: v1.26.0 (with hyperkit as the VM)
Ingress controller: k8s.gcr.io/ingress-nginx/controller:v1.2.1
DevSpace: 5.18.5
I believe the problem was within DevSpace. I am now comfortably using Tilt, and everything is working as expected.

I cannot access a pod from my Kubernetes master

I have a set of deployments that are connected using a NetworkPolicy ingress rule, and that works. However, if I want to connect from outside (using the IP from kubectl get ep), do I have to add another ingress rule for the endpoint, or an egress policy?
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: nginx
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: nginx
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: ./kompose convert
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.network/nginx: "true"
        io.kompose.service: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 8000
        resources: {}
      restartPolicy: Always
status: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: mariadb
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: mariadb
  name: mariadb
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: mariadb
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: ./kompose convert
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.network/nginx: "true"
        io.kompose.service: mariadb
    spec:
      containers:
      - image: mariadb
        name: mariadb
        ports:
        - containerPort: 5432
        resources: {}
      restartPolicy: Always
status: {}
...
You can see more code here http://pastie.org/p/2QpNHjFdAK9xj7SYuZvGPf
Endpoints:
$ kubectl get ep -n nginx
NAME      ENDPOINTS              AGE
mariadb   192.168.112.203:5432   2d2h
nginx     192.168.112.204:8000   42h
Services:
NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
mariadb   ClusterIP   10.99.76.78     <none>        5432/TCP         2d2h
nginx     NodePort    10.111.176.21   <none>        8000:31604/TCP   42h
Tests from the server:
If I do curl 10.111.176.21:31604 -- no answer
If I do curl 192.168.112.204:8000 -- no answer
If I do curl 192.168.112.204:31604 -- no answer
If I do curl 10.0.0.2:8000 or :31604 -- no answer
(10.0.0.2 is a worker node IP.)
UPDATE: if I do kubectl port-forward nginx-PODXXX 8000:8000, I can access it from http://localhost:8000.
So what am I doing wrong?
It looks like you're using a NetworkPolicy ingress rule for incoming traffic, but what you probably want is an Ingress controller to manage ingress traffic.
Egress is for traffic flowing outbound from services within your cluster to external destinations. Ingress is for external traffic to be directed to specific services within your cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my-example.site.tld
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 8000

I can't get a basic example of an Ingress service working

I'm struggling with a very basic example of an Ingress fronting an nginx pod. Whenever I try to visit my example site I get this plain text output instead of the default nginx page:
404 page not found
Here is the deployment I'm working with:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default
spec:
  rules:
  - host: argo.corbe.net
    http:
      paths:
      - backend:
          serviceName: ningx
          servicePort: 80
k3s kubectl get pods -o wide:
NAME                              READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
nginx-deployment-d6dcb985-942cz   1/1     Running   0          8h    10.42.0.17   k3s-1   <none>           <none>
nginx-deployment-d6dcb985-d7v69   1/1     Running   0          8h    10.42.0.18   k3s-1   <none>           <none>
nginx-deployment-d6dcb985-dqbn9   1/1     Running   0          8h    10.42.1.26   k3s-2   <none>           <none>
nginx-deployment-d6dcb985-vpf79   1/1     Running   0          8h    10.42.1.25   k3s-2   <none>           <none>
k3s kubectl -o wide get services:
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
kubernetes      ClusterIP   10.43.0.1       <none>        443/TCP   5d    <none>
nginx-service   ClusterIP   10.43.218.234   <none>        80/TCP    8h    app=nginx
k3s kubectl -o wide get ingress:
NAME            CLASS    HOSTS            ADDRESS          PORTS   AGE
nginx-ingress   <none>   argo.corbe.net   207.148.25.119   80      8h
k3s kubectl describe deployment nginx-deployment:
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Mon, 22 Feb 2021 15:19:07 +0000
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=nginx
Replicas:               4 desired | 4 updated | 4 total | 4 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         8080/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type         Status  Reason
  ----         ------  ------
  Available    True    MinimumReplicasAvailable
  Progressing  True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-7848d4b86f (4/4 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  2m43s  deployment-controller  Scaled up replica set nginx-deployment-7848d4b86f to 1
  Normal  ScalingReplicaSet  2m43s  deployment-controller  Scaled down replica set nginx-deployment-d6dcb985 to 3
  Normal  ScalingReplicaSet  2m43s  deployment-controller  Scaled up replica set nginx-deployment-7848d4b86f to 2
  Normal  ScalingReplicaSet  2m40s  deployment-controller  Scaled down replica set nginx-deployment-d6dcb985 to 2
  Normal  ScalingReplicaSet  2m40s  deployment-controller  Scaled up replica set nginx-deployment-7848d4b86f to 3
  Normal  ScalingReplicaSet  2m40s  deployment-controller  Scaled down replica set nginx-deployment-d6dcb985 to 1
  Normal  ScalingReplicaSet  2m40s  deployment-controller  Scaled up replica set nginx-deployment-7848d4b86f to 4
  Normal  ScalingReplicaSet  2m38s  deployment-controller  Scaled down replica set nginx-deployment-d6dcb985 to 0
The nginx image listens for connections on port 80 by default.
$ kubectl run nginx --image nginx
$ kubectl exec -it nginx -- bash
root@nginx:/# apt update
**output hidden**
root@nginx:/# apt install iproute2
**output hidden**
root@nginx:/# ss -lunpt
Netid  State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
tcp    LISTEN  0       0       0.0.0.0:80          0.0.0.0:*   users:(("nginx",pid=1,fd=7))
tcp    LISTEN  0       0       *:80                *:*         users:(("nginx",pid=1,fd=8))
Notice it's port 80 that is open, not port 8080.
This means your service is misconfigured, because it forwards to port 8080.
You should set the target port to 80, like the following:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80 # <- HERE
Also notice the service name:
kind: Service
metadata:
  name: nginx-service
And as the backend you put a service of a different name:
- backend:
    serviceName: ningx
Change it to the actual name of the service, like below:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default
spec:
  rules:
  - host: argo.corbe.net
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
Apply the changes and it should work now.
You are getting a 404, which means the request is reaching nginx (the ingress controller you are using); the issue is likely with your ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default
spec:
  rules:
  - host: argo.corbe.net
    http:
      paths:
      - backend:
          serviceName: ningx
          servicePort: 80
Check the service name: you are using serviceName: ningx.
It should be nginx-service. The ingress should be something like:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default
spec:
  rules:
  - host: argo.corbe.net
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
The whole flow goes something like: DNS > ingress > service (nginx-service) > deployment (nginx-deployment) / pod replicas.

Kubernetes NGINX-INGRESS Do I need an NGINX Service running?

I am attempting to create an NGINX ingress (locally at first, then to be deployed to AWS behind a load balancer). However, I am new to Kubernetes, and while I understand the Ingress model for NGINX, the configurations confuse me as to whether I should be deploying an NGINX-INGRESS Service, an Ingress, or both.
I am working with multiple Flask apps that I would like to route by path (/users, /content, etc.). My services are named users-service on port: 8000 (their container port is 8000 as well).
In this example an Ingress is defined. However, when I apply the ingress (in the same namespace as my Flask apps), there is no response from http://localhost.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-name
  namespace: my-namespace
spec:
  rules:
  - http:
      paths:
      - path: /users
        backend:
          serviceName: users-service
          servicePort: 8000
      - path: /content
        backend:
          serviceName: content-service
          servicePort: 8000
Furthermore, looking at the nginx-ingress "Deployment" docs, under "Docker for Mac" (which I assume applies, as I am using Docker on macOS), they define a Service like so:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
---
This seems to function for me (when I open "localhost" I get the nginx "not found" page), but it is a service in a different namespace than my apps, and there is no association between port 80/443 and my service ports.
For reference, here is one of my deployment/service definitions:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: users-service
  labels:
    app: users-service
  namespace: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: users-service
  template:
    metadata:
      labels:
        app: users-service
    spec:
      containers:
      - name: users-service
        image: users-service:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8000
---
kind: Service
apiVersion: v1
metadata:
  name: users-service
spec:
  selector:
    app: users-service
  ports:
  - protocol: TCP
    port: 8000
Update
I followed a video for setting up an NGINX controller + Ingress; here are the results. Entering "localhost/users" does not work.
describe-ingress:
(base) MacBook-Pro-2018-i9:microservices jordanbaucke$ kubectl describe ingress users-ingress
Name:             users-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host        Path    Backends
  ----        ----    --------
  *
              /users  users-service:8000 (10.1.0.75:8000)
Annotations:
Events:       <none>
users-service:
(base) MacBook-Pro-2018-i9:microservices jordanbaucke$ kubectl describe svc users-service
Name:              users-service
Namespace:         default
Labels:            <none>
Annotations:
Selector:          app=users-service
Type:              ClusterIP
IP:                10.100.213.229
Port:              <unset>  8000/TCP
TargetPort:        8000/TCP
Endpoints:         10.1.0.75:8000
Session Affinity:  None
Events:            <none>
nginx-ingress:
(base) MacBook-Pro-2018-i9:microservices jordanbaucke$ kubectl describe svc nginx-ingress
Name:                     nginx-ingress
Namespace:                default
Labels:                   <none>
Annotations:
Selector:                 name=nginx-ingress
Type:                     NodePort
IP:                       10.106.167.181
LoadBalancer Ingress:     localhost
Port:                     http  80/TCP
TargetPort:               80/TCP
NodePort:                 http  32710/TCP
Endpoints:                10.1.0.74:80
Port:                     https  443/TCP
TargetPort:               443/TCP
NodePort:                 https  32240/TCP
Endpoints:                10.1.0.74:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Now when I try to enter the combination of NodeIP:NodePort/users, it does not connect.
From inside my nginx-ingress pod, calling:
curl 10.1.0.75:8000 or curl 10.100.213.229:8000 returns results.
For nginx (or any other) ingress to work properly:
The nginx ingress controller needs to be deployed on the cluster.
A LoadBalancer or NodePort type service needs to be created to expose the nginx ingress controller via ports 80 and 443, in the same namespace where the controller is deployed. LoadBalancer works in supported public clouds (AWS etc.); NodePort works when running locally.
A ClusterIP type service needs to be created for the workload pods, in the namespace where the workload pods are deployed.
The workload pods are then exposed via the nginx ingress, and you need to create the ingress resource in the same namespace as the ClusterIP service of your workload pods.
You will use either the LoadBalancer address (if the controller was exposed via LoadBalancer) or NodeIP:NodePort (if it was exposed via NodePort) to access your workload pods.
So in this case, since Docker Desktop is being used, a LoadBalancer type service (ingress-nginx) to expose the nginx ingress controller will not work; it needs to be of NodePort type. Once that is done, the workload pods can be accessed via NodeIP:NodePort/users and NodeIP:NodePort/content. NodeIP:NodePort should give the nginx homepage as well.
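As a sketch, the NodePort variant of that controller Service could look like the following (the names and labels assume the standard ingress-nginx deployment quoted above; adjust them to match your setup):

```yaml
# Same Service as the "Docker for Mac" docs example, but with
# type: NodePort so it works without a cloud load balancer.
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
```

The assigned node ports (kubectl get svc -n ingress-nginx) are then what you put after NodeIP.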

I need help understanding kubernetes architecture best practices

I have 2 nodes on GCP in a Kubernetes cluster, and I also have a load balancer in GCP. This is a regular cluster (not GKE). I am trying to expose my front-end service to the world, and I am trying nginx-ingress with a type: NodePort service as a solution. Where should my load balancer be pointing? Is this a good architectural approach?
world --> GCP-LB --> nginx-ingress-resource (GCP k8s cluster) --> services (pods)
To access my site I would have to point the LB to the worker node IP where the nginx pod is running. Is this bad practice? I am new to this subject and trying to understand.
Thank you
deployservice:
apiVersion: v1
kind: Service
metadata:
  name: mycha-service
  labels:
    run: mycha-app
spec:
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
  selector:
    app: mycha-app
nginxservice:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  labels:
    app: nginx-ingress
spec:
  type: NodePort
  ports:
  - nodePort: 31000
    port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    name: nginx-ingress
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  labels:
    run: nginx-ingress
spec:
  type: NodePort
  ports:
  - nodePort: 31000
    port: 80
    targetPort: 3000
    protocol: TCP
  selector:
    app: nginx-ingress
nginx-resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mycha-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: mycha-service
          servicePort: 80
This configuration is not working.
When you use an ingress in front of your workload pods, the service type for the workload pods will always be ClusterIP, because you are not exposing the pods directly outside the cluster.
But you do need to expose the ingress controller outside the cluster, using either a NodePort type service or a LoadBalancer type service; for production it's recommended to use a LoadBalancer type service.
This is the recommended pattern:
Client -> LoadBalancer -> Ingress Controller -> Kubernetes Pods
The ingress controller avoids the load balancing provided by kube-proxy, and you can configure layer 7 load balancing in the ingress itself.
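In that pattern, only the ingress controller is exposed; a minimal sketch of such a LoadBalancer Service (assuming the standard ingress-nginx labels; adjust the selector to match your controller's pods) might be:

```yaml
# Hypothetical Service exposing only the ingress controller.
# The cloud provider allocates the external load balancer; workload
# services behind the ingress stay ClusterIP.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # preserves the client source IP
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
```

With this in place, the external load balancer only ever points at the controller, not at individual worker nodes or pods.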
The best practice for exposing an application is:
World > LoadBalancer/NodePort (for connecting to the cluster) > Ingress (mostly to redirect traffic) > Service
If you are using Google Cloud Platform, I would use GKE, as it is optimized for containers and configures many things automatically for you.
Regarding your issue: I also couldn't obtain an IP address for the LB (<Pending> state); however, you can expose your application using NodePort and the VMs' IPs. I will try a few other configs to obtain an ExternalIP and will edit the answer.
Below is one example of how to expose your app using kubeadm on GCE.
On GCE, your VM already has an ExternalIP. This way you can just use a Service with NodePort, and an Ingress to redirect traffic to the proper services.
Deploy the nginx ingress using Helm 3, as Tiller is not required anymore ($ helm install nginx stable/nginx-ingress).
By default it will deploy a service of LoadBalancer type, but it won't get an externalIP and it will be stuck in <Pending> state. You have to change it to NodePort and apply the changes:
$ kubectl edit svc nginx-nginx-ingress-controller
By default this opens the vi editor; if you want another one, you need to specify it:
$ KUBE_EDITOR="nano" kubectl edit svc nginx-nginx-ingress-controller
Now you can deploy the service, deployment and ingress.
apiVersion: v1
kind: Service
metadata:
  name: fs
spec:
  selector:
    key: app
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fd
spec:
  replicas: 1
  selector:
    matchLabels:
      key: app
  template:
    metadata:
      labels:
        key: app
    spec:
      containers:
      - name: hello1
        image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mycha-deploy
  labels:
    app: mycha-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mycha-app
  template:
    metadata:
      labels:
        app: mycha-app
    spec:
      containers:
      - name: mycha-container
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mycha-service
  labels:
    app: mycha-app
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: mycha-app
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my.pod.svc
    http:
      paths:
      - path: /mycha
        backend:
          serviceName: mycha-service
          servicePort: 80
      - path: /hello
        backend:
          serviceName: fs
          servicePort: 80
service/fs created
deployment.apps/fd created
deployment.apps/mycha-deploy created
service/mycha-service created
ingress.extensions/two-svc-ingress created
$ kubectl get svc nginx-nginx-ingress-controller
NAME                             TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
nginx-nginx-ingress-controller   NodePort   10.105.247.148   <none>        80:31143/TCP,443:32224/TCP   97m
Now you should use your VM's ExternalIP (the worker VM) with the port from the NodePort service. My VM ExternalIP: 35.228.133.12; service: 80:31143/TCP,443:32224/TCP.
IMPORTANT
If you curl your VM on that port before opening the firewall, you get no response:
$ curl 35.228.235.99:31143
curl: (7) Failed to connect to 35.228.235.99 port 31143: Connection timed out
As you are doing this manually, you also need to add a firewall rule to allow traffic from outside on this specific port or range.
Information about creating firewall rules can be found here.
If you set proper values (open ports, IP range (0.0.0.0/0), etc.) you will be able to reach the service from your machine.
Curl from my local machine:
$ curl -H "HOST: my.pod.svc" http://35.228.235.99:31143/mycha
<!DOCTYPE html>
...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
$ curl -H "HOST: my.pod.svc" http://35.228.235.99:31143/hello
Hello, world!
Version: 1.0.0
Hostname: fd-c6d79cdf8-dq2d6
