I cannot access a pod (nginx) from my Kubernetes master node

I have a set of Deployments that are connected using a NetworkPolicy ingress rule, and that works. However, to connect from outside the cluster (using the IP I get from kubectl get ep), do I have to add another ingress rule for the endpoint, or an egress policy?
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: nginx
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: nginx
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: ./kompose convert
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.network/nginx: "true"
        io.kompose.service: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          ports:
            - containerPort: 8000
          resources: {}
      restartPolicy: Always
status: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: mariadb
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: mariadb
  name: mariadb
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: mariadb
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: ./kompose convert
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.network/nginx: "true"
        io.kompose.service: mariadb
    spec:
      containers:
        - image: mariadb
          name: mariadb
          ports:
            - containerPort: 5432
          resources: {}
      restartPolicy: Always
status: {}
...
You can see more code here http://pastie.org/p/2QpNHjFdAK9xj7SYuZvGPf
Endpoints:
kubectl get ep -n nginx
NAME      ENDPOINTS              AGE
mariadb   192.168.112.203:5432   2d2h
nginx     192.168.112.204:8000   42h
Services:
NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
mariadb   ClusterIP   10.99.76.78     <none>        5432/TCP         2d2h
nginx     NodePort    10.111.176.21   <none>        8000:31604/TCP   42h
Tests from server:
If I do curl 10.111.176.21:31604 -- No answer
If I do curl 192.168.112.204:8000 -- No answer
If I do curl 192.168.112.204:31604 -- No answer
If I do curl 10.0.0.2:8000 or 31604 -- No answer
10.0.0.2 is a worker node IP.
UPDATE: If I do kubectl port-forward nginx-PODXXX 8000:8000,
I can access it at http://localhost:8000.
So what am I doing wrong?

It looks like you're using the NetworkPolicy as the ingress for incoming traffic, but what you probably want is an Ingress resource, handled by an Ingress controller, to manage incoming traffic.
Egress is for traffic flowing outbound from services within your cluster to external destinations; ingress is for external traffic being directed to specific services within your cluster. For example (see also the NetworkPolicy note after the example):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: my-example.site.tld
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 8000
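Also note that the kompose-generated NetworkPolicy (the one tied to the io.kompose.network/nginx pod label) only allows traffic from pods carrying that label, so traffic arriving from outside the cluster, typically including NodePort traffic, will be dropped. A rough sketch of an additional policy that opens the nginx port to any source follows; the policy name is hypothetical and the namespace and labels are taken from the manifests above:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-to-nginx   # hypothetical name
  namespace: nginx
spec:
  podSelector:
    matchLabels:
      io.kompose.service: nginx
  policyTypes:
    - Ingress
  ingress:
    # A rule that lists ports but has no "from" clause allows traffic from any source.
    - ports:
        - protocol: TCP
          port: 8000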

Related

Ingress in GKE does not do the routing identically despite same IP at DNS level

I have set up an nginx ingress in my GKE cluster as follows:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace nginx-ingress
A load balancer came up with an external IP.
I then added two DNS records at Cloudflare pointing to that IP.
In addition I created a namespace app-a
kubectl create namespace app-a
kubectl label namespace app-a project=a
and deployed an app there:
apiVersion: v1
kind: Service
metadata:
  name: echo1
  namespace: app-a
spec:
  ports:
    - port: 80
      targetPort: 5678
  selector:
    app: echo1
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo1
  namespace: app-a
spec:
  selector:
    matchLabels:
      app: echo1
  replicas: 2
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
        - name: echo1
          image: hashicorp/http-echo
          args:
            - "-text=echo1"
          ports:
            - containerPort: 5678
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress-global
  namespace: app-a
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: "test.my-domain.com"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: echo1
                port:
                  number: 80
Things look good in Lens, so I thought to test it out.
When I enter eu1.my-domain.com, I get the expected response, which is intended, of course.
But when I enter test.my-domain.com, the site is unreachable (DNS_PROBE_FINISHED_NXDOMAIN), although I expected to see the output of the dummy app.
Even more strangely, whether I hit the working host or the failing one, nothing shows up in the nginx controller logs for any of these calls.
Can you help me get the test.my-domain.com page reachable?

Kubernetes nginx ingress cannot find backend service

I have deployed my API to Kubernetes on AKS with kubectl from my local machine, but the nginx ingress is not able to resolve the backend endpoint. The ingress logs show the error: The service 'hello-world/filter-api' does not have any active endpoint.
Steps followed:
Install dapr on AKS
dapr init -k --set global.tag=1.1.2
Install nginx ingress on AKS
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx -f ...\dapr\components\dapr-annotations.yaml --set image.tag=1.11.1 -n ingress-nginx
Apply manifest
kubectl apply -f .\services\filter.yaml
What did I try?
Verified the selectors and labels
Followed the steps in Troubleshooting nginx ingress
I tried to deploy this to a local Kubernetes cluster on Windows with Docker Desktop, and it works fine there. What am I missing?
filter.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: filter-cm
  namespace: hello-world
  labels:
    app: hello-world
    service: filter
data:
  ASPNETCORE_ENVIRONMENT: Development
  ASPNETCORE_URLS: http://0.0.0.0:80
  PATH_BASE: /filter
  PORT: "80"
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: filter
  namespace: hello-world
  labels:
    app: hello-world
    service: filter
spec:
  replicas: 1
  selector:
    matchLabels:
      service: filter
  template:
    metadata:
      labels:
        app: hello-world
        service: filter
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "filter-api"
        dapr.io/app-port: "80"
        dapr.io/config: "dapr-config"
    spec:
      containers:
        - name: filter-api
          image: client/hello-world-filter-api:0.0.1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              protocol: TCP
          envFrom:
            - configMapRef:
                name: filter-cm
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: filter-api
  namespace: hello-world
  labels:
    app: hello-world
    service: filter
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30001
      protocol: TCP
      name: http
  selector:
    service: filter
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: filter-ingress
  namespace: hello-world
  labels:
    app: hello-world
    service: filter
spec:
  rules:
    - http:
        paths:
          - path: /filter
            pathType: Prefix
            backend:
              service:
                name: filter-api
                port:
                  number: 80
Make sure the Service's selector matches the labels on the backend pods, so the Service can find them.
Example:
selector:
  service: filter
(Note that a Service selector is a plain label map; matchLabels is only used in Deployment selectors.)
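One way to check why nginx reports no active endpoints (assuming the namespace and names above): kubectl get endpoints filter-api -n hello-world should list the pod IP, and kubectl get pods -n hello-world --show-labels shows whether the pods actually carry the service: filter label and are Ready. Pods that are not Ready are excluded from the Service's endpoints, so a failing readiness probe or a dapr sidecar injection problem produces the same "does not have any active endpoint" error.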

yaml configuration for Istio and gRPC

I am working on a POC for Istio + gRPC (Istio version 1.6), but I cannot see any gRPC traffic reaching my pods.
I suspect my Istio Gateway or VirtualService is missing something, but I could not figure out what is wrong. Could anybody review my yaml and point out what is missing or incorrect?
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: syslogserver
  name: syslogserver
  namespace: mynamespace
spec:
  selector:
    matchLabels:
      app: syslogserver
  replicas: 1
  template:
    metadata:
      labels:
        app: syslogserver
    spec:
      containers:
        - name: syslogserver
          image: docker.io/grpc-syslog:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 5555
      imagePullSecrets:
        - name: pull-image-credential
---
apiVersion: v1
kind: Service
metadata:
  name: syslogserver
  namespace: mynamespace
  labels:
    app: syslogserver
spec:
  selector:
    app: syslogserver
  ports:
    - name: grpc
      port: 6666
      protocol: TCP
      targetPort: 5555
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: xyz-ingress-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 7777
        name: http2
        protocol: HTTP2
      hosts:
        - "*"
---
apiVersion: v1
kind: Service
metadata:
  name: xyz-istio-ingressgateway
  namespace: istio-system
  labels:
    app: xyz-istio-ingressgateway
spec:
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  type: NodePort
  ports:
    - protocol: TCP
      nodePort: 32555
      port: 7777
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: xyz-ingress-gateway-virtualservice
  namespace: istio-system
spec:
  hosts:
    - "*"
  gateways:
    - xyz-ingress-gateway
  #tls:
  http:
    - match:
        - port: 7777
      route:
        - destination:
            host: syslogserver.mynamespace.svc.cluster.local
            port:
              number: 6666
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: xyz-destinationrule
  namespace: istio-system
spec:
  host: syslogserver.mynamespace.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
Please give your guidance, thanks.
From what I can see, the Service named xyz-istio-ingressgateway should be deleted, as that's not how you expose traffic when using Istio.
Instead you should use the Istio ingress gateway, combined with a Gateway, a VirtualService and a DestinationRule.
Since you've chosen port number 7777 on your Gateway, you also have to open that port on the Istio ingress gateway; there are a few ways to do that, described in this Stack Overflow question and in the default Istio ingress gateway values, and a sketch follows below.
After you configure the port, you can use kubectl get svc istio-ingressgateway -n istio-system to get the Istio ingress gateway external IP.
If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <pending>, your environment does not provide an external load balancer for the ingress gateway; in that case you can access the gateway using the service's node port.
The rest of your configuration looks fine to me. Just remember to inject the Istio sidecar proxy into your application pods.
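For example, one way to add the extra port is an IstioOperator overlay applied with istioctl. This is a minimal sketch: the port name is made up, and it assumes the gateway was installed via the operator/istioctl, so adjust it to however your istio-ingressgateway was actually deployed:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          service:
            ports:
              # Keep the default gateway ports in this list as well; only the extra
              # entry matching the Gateway's port 7777 is shown here.
              - name: http2-syslog   # hypothetical port name
                port: 7777
                targetPort: 7777
                nodePort: 32555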

Making ingress available at <nodeinternalip>/<serviceendpoint> in kubernetes cluster

I have a 2-node cluster with 1 worker node. I have set up an ingress controller as per the documentation and then created a Deployment, a Service (NodePort) and an Ingress object. My goal is to make the service accessible using curl -s <INTERNAL_IP>/<serviceendpoint>. What configuration is required to make this happen? It works great on minikube but not on this cluster.
Note: the service itself works fine and shows the nginx page when accessed via <INTERNAL_IP>:<NODEPORT>.
Here is sample service and Ingress object definition -
apiVersion: v1
kind: Service
metadata:
  name: nginx-test1
  labels:
    app: nginx-test1
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx-test1
  type: NodePort
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-test1
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /nginx-test1
            backend:
              serviceName: nginx-test1
              servicePort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx-test1
  name: nginx-test1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-test1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-test1
    spec:
      containers:
        - image: nginx
          name: nginx-test1
          resources: {}
          ports:
            - containerPort: 80
              protocol: TCP
$ kubectl get nodes -o wide
NAME           STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
controlplane   Ready    master   50m   v1.19.0   172.17.0.10   <none>        Ubuntu 18.04.4 LTS   4.15.0-111-generic   docker://19.3.6
node01         Ready    <none>   49m   v1.19.0   172.17.0.11   <none>        Ubuntu 18.04.4 LTS   4.15.0-111-generic   docker://19.3.6
$ kubectl get svc
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP        51m
nginx-test1   NodePort    10.96.244.119   <none>        80:30844/TCP   3m3s
kubectl describe ing
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Name: nginx-test1
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
*
/nginx-test1   nginx-test1:80 (10.244.1.8:80,10.244.1.9:80)
Annotations: nginx.ingress.kubernetes.io/rewrite-target: /
$ curl -s -v 172.17.0.11/nginx-test1
*   Trying 172.17.0.11...
* TCP_NODELAY set
* connect to 172.17.0.11 port 80 failed: Connection refused
* Failed to connect to 172.17.0.11 port 80: Connection refused
* Closing connection 0

Configuring Static IP address with Ingress Nginx Sticky Session on Azure Kubernetes

I am trying to add a layer of sticky sessions to my current Kubernetes architecture. Instead of routing every request through the main LoadBalancer Service, I want to route requests through an upper layer of nginx with sticky sessions. I am following the guide at https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/
I am using Azure for my cluster deployment. Previously, a Service of type LoadBalancer would automatically get an external IP address for users to connect to my cluster. Now, with the nginx ingress in place, I need to configure a static IP address for my users to connect to. How can I do so? I followed the guide here - https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/static-ip - but the external address of the Ingress is still empty!
What did I do wrong?
# nginx-sticky-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
    - port: 80
      name: http
      targetPort: 80
    - port: 443
      name: https
      targetPort: 443
  selector:
    # Selects nginx-ingress-controller pods
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
# nginx-sticky-controller.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.31.0
          name: nginx-ingress-controller
          ports:
            - containerPort: 80
              hostPort: 80
            - containerPort: 443
              hostPort: 443
          resources:
            limits:
              cpu: 0.5
              memory: "0.5Gi"
            requests:
              cpu: 0.5
              memory: "0.5Gi"
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          args:
            - /nginx-ingress-controller
            - --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb
# nginx-sticky-server.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "nginx-sticky-server"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
  rules:
    - http:
        paths:
          - backend:
              # This assumes http-svc exists and routes to healthy endpoints.
              serviceName: my-own-service-master
              servicePort: http
OK, I got it working. I think the difference lies in the cloud provider you are using; for Azure, you should follow their documentation and their way of deploying an ingress controller to the Kubernetes cluster.
The link here covers deploying the ingress controller. Their way of creating the public IP address within the Kubernetes cluster and linking it up with the ingress controller works; I can confirm this as of the time of writing.
Once the steps in the link above are done, I can apply the ingress .yaml file as usual, i.e. kubectl apply -f nginx-sticky-server.yaml, to set up the nginx sticky session. If the service name and service port in your ingress .yaml are correct, the nginx ingress controller should route user requests to the correct service. A sketch of the static-IP Service setup follows below.
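For reference, the pattern AKS documents is roughly: create a static public IP up front (for example with az network public-ip create, typically in the cluster's node resource group) and then pin the ingress controller's LoadBalancer Service to it. A minimal sketch based on the Service above; the IP and the resource-group annotation value are placeholders:
# nginx-sticky-service.yaml (sketch with a pre-allocated static IP)
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  annotations:
    # Only needed if the public IP lives outside the cluster's node resource group.
    service.beta.kubernetes.io/azure-load-balancer-resource-group: my-resource-group
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  # Static public IP created beforehand in Azure; placeholder value.
  loadBalancerIP: 1.2.3.4
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
Until the referenced public IP actually exists (and, if it is in another resource group, the annotation points at it), the Service's EXTERNAL-IP stays pending, which would also leave the Ingress address empty, since the controller publishes the nginx-ingress-lb Service's address via --publish-service.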
