YAML configuration for Istio and gRPC

I am working on a PoC for Istio + gRPC (Istio version 1.6), but I cannot see any gRPC traffic reaching my pods.
I suspect my Istio Gateway or VirtualService is missing something, but I cannot figure out what's wrong. Could anybody review my YAML file and point out what's missing or incorrect?
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: syslogserver
  name: syslogserver
  namespace: mynamespace
spec:
  selector:
    matchLabels:
      app: syslogserver
  replicas: 1
  template:
    metadata:
      labels:
        app: syslogserver
    spec:
      containers:
      - name: syslogserver
        image: docker.io/grpc-syslog:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 5555
      imagePullSecrets:
      - name: pull-image-credential
---
apiVersion: v1
kind: Service
metadata:
  name: syslogserver
  namespace: mynamespace
  labels:
    app: syslogserver
spec:
  selector:
    app: syslogserver
  ports:
  - name: grpc
    port: 6666
    protocol: TCP
    targetPort: 5555
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: xyz-ingress-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 7777
      name: http2
      protocol: HTTP2
    hosts:
    - "*"
---
apiVersion: v1
kind: Service
metadata:
  name: xyz-istio-ingressgateway
  namespace: istio-system
  labels:
    app: xyz-istio-ingressgateway
spec:
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  type: NodePort
  ports:
  - protocol: TCP
    nodePort: 32555
    port: 7777
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: xyz-ingress-gateway-virtualservice
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - xyz-ingress-gateway
  #tls:
  http:
  - match:
    - port: 7777
    route:
    - destination:
        host: syslogserver.mynamespace.svc.cluster.local
        port:
          number: 6666
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: xyz-destinationrule
  namespace: istio-system
spec:
  host: syslogserver.mynamespace.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
Please give your guidance, thanks.

From what I can see, the Service named xyz-istio-ingressgateway should be deleted, as that's not how you expose traffic when using Istio.
Instead you should use the Istio ingress gateway, combined with a Gateway, a VirtualService and a DestinationRule.
If you've chosen port number 7777 on your Gateway, you also have to open this port on the Istio ingress gateway; there are a few ways to do that, described in this Stack Overflow question and in the default Istio ingress gateway values.
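For example, here is a minimal sketch of the extra port entry on the default istio-ingressgateway Service, reusing the node port from your manifest. The port name is hypothetical, and the assumption that the gateway's Envoy binds 7777 as the targetPort is mine, not verified against your install:

apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: NodePort
  selector:
    istio: ingressgateway
  ports:
  # ...keep the existing default ports, then add an entry for the Gateway's port...
  - name: http2-syslog   # hypothetical name; must be unique within the list
    protocol: TCP
    port: 7777
    targetPort: 7777     # assumption: Envoy binds the Gateway port directly
    nodePort: 32555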
After you configure the port, you can use kubectl get svc istio-ingressgateway -n istio-system to get the Istio ingress gateway's external IP.
If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <pending>, your environment does not provide an external load balancer for the ingress gateway; in that case, you can access the gateway using the service's node port.
The rest of your configuration looks fine to me. Just a reminder about injecting a sidecar proxy into your pods.
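For instance, a minimal sketch of enabling automatic injection for the workload namespace, assuming the standard istio-injection label (existing pods must be recreated to pick up the sidecar):

apiVersion: v1
kind: Namespace
metadata:
  name: mynamespace
  labels:
    istio-injection: enabled   # enables automatic sidecar injection for this namespace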

Related

MetalLB Cannot access service from outside of k8s cluster

I'm having a problem accessing a k8s service from outside the cluster.
K8s version: v1.25
MetalLB version: 0.13 (Flannel v0.20)
IP pool
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 151.62.196.222-151.62.196.222
Advertisement
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: adv
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
When I check the services, I see that nginx-service has been assigned the IP (EXTERNAL-IP: 151.62.196.222).
But when I try to access it, I get an error:
curl: (7) Failed to connect to 151.62.196.222 port 80 after 15 ms: Connection refused
Has anyone experienced the same problem? Thanks!
I tried to access it from inside the cluster and it works: it returns the nginx welcome page.
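One thing worth checking in L2 mode is whether MetalLB is advertising on the interface your clients can actually reach. A sketch, assuming a 0.13 release that supports the interfaces selector and that eth0 is the client-facing NIC (both are assumptions):

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: adv
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
  interfaces:
  - eth0   # assumed client-facing interface; adjust to your node's NIC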

FastAPI docs on kubernetes not working with devspace on a minikube cluster. 502 bad gateway

I am trying to develop an application on Kubernetes with hot-reloading (instant code sync). I am using DevSpace. When running my application on a minikube cluster everything works, and I am able to hit the ingress to reach my FastAPI docs. The problem is that when I use DevSpace, I can exec into my pods and see my changes reflected right away, but when I hit the ingress to reach the FastAPI docs, I get a 502 Bad Gateway.
I have an api-pod.yaml file as such:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: project-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: project-api
  template:
    metadata:
      labels:
        app: project-api
    spec:
      containers:
      - image: project/project-api:0.0.1
        name: project-api
        command: ["uvicorn"]
        args: ["endpoint:app", "--port=8000", "--host", "0.0.0.0"]
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /api/v1/project/tasks/
            port: 8000
          initialDelaySeconds: 5
          timeoutSeconds: 1
          periodSeconds: 600
          failureThreshold: 3
        ports:
        - containerPort: 8000
          name: http
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: project-api
spec:
  selector:
    app: project-api
  ports:
  - port: 8000
    protocol: TCP
    targetPort: http
  type: ClusterIP
I have an api-ingress.yaml file as such:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: project-ingress
spec:
  rules:
  - http:
      paths:
      - path: /api/v1/project/tasks/
        pathType: Prefix
        backend:
          service:
            name: project-api
            port:
              number: 8000
  ingressClassName: nginx
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
Using kubectl get ep, I get:
NAME          ENDPOINTS         AGE
project-api   172.17.0.6:8000   17m
Using kubectl get svc, I get:
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
project-api   ClusterIP   10.97.182.167   <none>        8000/TCP   17m
Using kubectl get ingress I get:
NAME          CLASS   HOSTS   ADDRESS          PORTS   AGE
api-ingress   nginx   *       192.168.64.112   80      17m
To reiterate, my problem is that when I try to reach the FastAPI docs at 192.168.64.112/api/v1/project/tasks/docs, I get a 502 Bad Gateway (see the note after the version list below).
I'm running:
MacOS Monterey: 12.4
Minikube version: v1.26.0 (with hyperkit as the vm)
Ingress controller: k8s.gcr.io/ingress-nginx/controller:v1.2.1
Devspace version: 5.18.5
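As an aside, when FastAPI is served under a path prefix such as /api/v1/project/tasks/, the interactive docs only render correctly if the app knows that prefix. A minimal sketch using uvicorn's --root-path option, mirroring the ingress path (the exact prefix value is an assumption):

containers:
- image: project/project-api:0.0.1
  name: project-api
  command: ["uvicorn"]
  # --root-path tells FastAPI it is mounted behind this prefix (assumed value)
  args: ["endpoint:app", "--port=8000", "--host", "0.0.0.0", "--root-path", "/api/v1/project/tasks"]

This only affects how the docs and OpenAPI URLs are generated; it would not by itself explain a 502.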
I believe the problem was within DevSpace. I am now comfortably using Tilt. Everything is working as expected.

400: Bad Request blog page via http/https SSL-enabled k3s deployment

I'm using the nginx-ingress controller and MetalLB as the load balancer in my k3s Raspberry Pi cluster. I'm trying to access my blog site, but I get a white page with 400: Bad Request.
I'm using Cloudflare to manage my domain, and the SSL/TLS mode is set to "Full". I created an A record "blog" and pointed it to my public external IP. I opened the load balancer IP address on my router, exposing 80 and 443. What am I missing? I've been pulling my hair out over this issue for days. Here's my entire k3s deployment:
apiVersion: v1
kind: Service
metadata:
  namespace: nginx
  name: nginx-web
  labels:
    app: nginx-web
spec:
  ports:
  # the port that this service should serve on
  - port: 8000
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx-web
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: nginx
  labels:
    app: nginx-web
  name: nginx-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-web
  template:
    metadata:
      namespace: nginx
      labels:
        app: nginx-web
      name: nginx-web
    spec:
      containers:
      - name: nginx-web
        image: nginx:latest
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-web
  labels:
    app: nginx-web
  namespace: nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - blog.example.com
    secretName: blog-example-com-tls
  rules:
  - host: blog.example.com
    http:
      paths:
      - backend:
          service:
            name: nginx-web
            port:
              number: 80
        path: /
        pathType: Prefix
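Two details stand out in the manifests above: the Service listens on port 8000 (forwarding to container port 80), yet the Ingress backend references port 80; and backend-protocol: "HTTPS" makes the controller speak TLS to an nginx that serves plain HTTP, which is a classic way to get a 400 Bad Request. A hedged sketch of the corrected annotation and backend, assuming the pod really serves plain HTTP on port 80:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"   # the backend speaks plain HTTP
spec:
  rules:
  - host: blog.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-web
            port:
              number: 8000   # must match the Service port, not the container port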

rpc error: code = Unknown desc = Moved Permanently: HTTP status code 301

I have a gRPC service written in Go, and I need to deploy it on top of AWS EKS. We are using nginx-ingress and Cloudflare to point to our cluster gateway (nginx).
But when I deploy the service and test it with the command grpcurl grpc.fd-admin.com:443 list,
I always get the following error:
Failed to list services: rpc error: code = Unknown desc = Moved Permanently: HTTP status code 301; transport: missing content-type field
And these are my Kubernetes resources:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grpc
  labels:
    k8s-app: grpc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: grpc
    spec:
      containers:
      - name: grpc
        image: quay.io/kubernetes-ingress-controller/grpc-fortune-teller:0.1
        ports:
        - containerPort: 50051
          name: grpc
---
apiVersion: v1
kind: Service
metadata:
  name: grpc
  namespace: grpc
spec:
  selector:
    k8s-app: grpc
  ports:
  - port: 50051
    targetPort: 50051
    protocol: TCP
    name: grpc
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  name: grpc
  namespace: grpc
spec:
  rules:
  - host: grpc.fd-admin.com
    http:
      paths:
      - backend:
          serviceName: grpc
          servicePort: grpc
  tls:
  - secretName: grpc
    hosts:
    - grpc.fd-admin.com
So can anyone explain why I get this error, or what might cause this kind of error?
Try the nginx ingress annotations below:
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    nginx.ingress.kubernetes.io/grpc-backend-for-port: "grpc"
  name: grpc
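If your cluster no longer serves extensions/v1beta1, the same Ingress in the networking.k8s.io/v1 schema would look roughly like this sketch (not verified against your cluster):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc
  namespace: grpc
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
  - host: grpc.fd-admin.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grpc
            port:
              name: grpc   # references the named port on the Service
  tls:
  - secretName: grpc
    hosts:
    - grpc.fd-admin.com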

Kubernetes nginx ingress is not resolving services

Cloud: Google Cloud Platform.
I have the following configuration
kind: Deployment
apiVersion: apps/v1
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      run: api
  template:
    metadata:
      labels:
        run: api
    spec:
      containers:
      - name: api
        image: gcr.io/*************/api
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /_ah/health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
---
kind: Service
apiVersion: v1
metadata:
  name: api
spec:
  selector:
    run: api
  type: NodePort
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /api/*
        backend:
          serviceName: api
          servicePort: 8080
All set. GKE reports that all deployments are okay, the pod counts are met, and the main ingress with the nginx-ingress controller is set up as well. But I'm not able to reach any of the services, not even an application-specific 404. Nothing, as if it's not resolved at all.
Another related issue concerns the two entrypoints. The first is through main-ingress, which created its own load balancer with its own IP address. The second address comes from the nginx-ingress-controller. The second one at least returns a 404 from the default backend, but it also doesn't point to the expected api service.
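One thing worth noting: the /api/* glob syntax comes from the GCE ingress controller, while the nginx ingress controller matches paths as prefixes (or regexes via an annotation). A sketch of the rule rewritten for nginx, keeping everything else the same:

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /api/   # prefix match; nginx-ingress does not expand GCE-style globs
        backend:
          serviceName: api
          servicePort: 8080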
