istio internal GRPC services communication - grpc

I am having trouble getting two in-cluster gRPC services (written in .NET Core 3.0) to communicate. I get Grpc.Core.RpcException: Status(StatusCode=Unavailable, Detail="Connection reset by peer") (with uri = <service>.default.svc.cluster.local) or Grpc.Core.RpcException: Status(StatusCode=Unimplemented, Detail="") with uri = user.default.svc.cluster.local:80. The weird part is that all the services work fine if they are communicating from different clusters. Am I using the right URLs? The configuration of one of the services is attached here.
apiVersion: v1
kind: Service
metadata:
  name: user
  labels:
    app: user
    service: user
spec:
  ports:
  - port: 80
    name: grpc-port
    protocol: TCP
  selector:
    app: user
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: user-v1
  labels:
    app: user
    version: v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: user
        version: v1
    spec:
      containers:
      - name: user
        image: ***
        imagePullPolicy: IfNotPresent
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: "***"
        ports:
        - containerPort: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: user
spec:
  hosts:
  - user.default.svc.cluster.local
  http:
  - route:
    - destination:
        port:
          number: 80
        host: user.default.svc.cluster.local
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: user
spec:
  host: user.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
---
FIXED: I managed to get it to work by using gRPC's .NET Core 3 client factory integration as described here.

Instead of creating a channel and client manually, as one would usually do, i.e.:
var endpoint = "test"; //or http://test
AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);
Channel channel = new Channel(endpoint, ChannelCredentials.Insecure);
client = new TestService.TestServiceClient(channel);
I used the gRPC client factory integration in ConfigureServices (Startup.cs), after adding the Grpc.Net.ClientFactory package (version 0.1.22-pre1), like this:
services.AddGrpcClient<TestService.TestServiceClient>(o =>
{
    AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);
    o.BaseAddress = new Uri("http://test");
});
Thereafter you can access the client by using Dependency Injection.
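For example, a minimal consumer could take the client as a constructor parameter (a sketch; the UserLookup class name is hypothetical, only TestService.TestServiceClient comes from the registration above):
public class UserLookup
{
    private readonly TestService.TestServiceClient _client;

    // The client factory registered in ConfigureServices supplies this instance via DI.
    public UserLookup(TestService.TestServiceClient client)
    {
        _client = client;
    }

    // Call whatever methods your .proto service defines on _client as usual.
}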
I'm not sure why the second approach works but the first one doesn't.

Related

Setup ingress for my application with URLs starting with /# on GKE and EKS

I have set up my application with a StatefulSet:
# Simple deployment used to deploy and manage the app in nigelpoulton/getting-started-k8s:1.0
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: coredeploy
  labels:
    app: core123
spec:
  replicas: 1
  # minReadySeconds: 10
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0
      # maxSurge: 1
  selector:
    matchLabels:
      app: core123
  serviceName: core123
  template:
    metadata:
      labels:
        app: core123
    spec:
      terminationGracePeriodSeconds: 1
      containers:
      - name: hello
        image: docker-registry.myregistry.com:5000/core_centos:LMS-130022
        imagePullPolicy: Always
        ports:
        - containerPort: 8008
        readinessProbe:
          tcpSocket:
            port: 8008
          periodSeconds: 1
This is my service
apiVersion: v1
kind: Service
metadata:
  name: service-core
spec:
  selector:
    app: core123
  type: NodePort
  ports:
  - name: nodeportcore
    protocol: TCP
    port: 9988
    targetPort: 8008
This is my ingress
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
name: "testIngress"
spec:
rules:
- http:
paths:
- path: "/"
backend:
service:
name: "service-core"
port:
number: 9988
pathType: "ImplementationSpecific"
After I apply the ingress manifest file, my application runs but I cannot log in: the logs say login is successful, but the browser returns to the login screen. Checking further, I noticed the URLs my application uses when I run it on localhost on-premise (not in a container; the URLs are the same in the container):
http://localhost:8008/#/public/login
http://localhost:8008/#/user/settings
http://localhost:8008/#/user/dashboard/overview
http://localhost:8008/#/user/history/processing
http://localhost:8008/#/user/policy/template
The URLs start with /# followed by the route name, e.g. /public/login, /user/settings, /user/dashboard/overview.
My question: how do I set up the ingress correctly so it works with my application?

SPRING_CLOUD_CONFIG_URI setting in Kubernetes

I have a simple setup of 2 containers, one for configuration and one for gateway.
Below are the definitions of the service and deployments. They get created fine, but the gateway container isn't able to connect to http://configuration:8888 and throws an unknown host error.
The configuration server is running Nginx on port 8888.
I am able to access the configuration from a browser with URL http://configuration/actuator/.
I thought the pods in Kubernetes should be able to communicate fine as long as they are in the same network (host here). Not sure what is missing here.
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    name: configuration
  spec:
    ports:
    - port: 8888
      targetPort: 8888
    selector:
      app: configuration
  status:
    loadBalancer: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      app: configuration
    name: configuration
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: configuration
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          app: configuration
      spec:
        hostNetwork: true
        containers:
        - env:
          - name: JVM_OPTS
            value: -Xss512k -XX:MaxRAM=128m
          image: <image>
          name: configuration
          resources: {}
          volumeMounts:
          - mountPath: /data
            name: configuration-claim0
        restartPolicy: Always
        volumes:
        - name: configuration-claim0
          persistentVolumeClaim:
            claimName: configuration-claim0
  status: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      app: gateway
    name: gateway
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: gateway
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          app: gateway
      spec:
        hostNetwork: true
        containers:
        - env:
          - name: JVM_OPTS
            value: -Xss512k -XX:MaxRAM=128m
          - name: SPRING_CLOUD_CONFIG_URI
            value: http://configuration:8888
          - name: SPRING_PROFILES_ACTIVE
            value: container
          image: <image>
          name: gateway
You are using hostNetwork: true, so both pods share their node's network namespace and, by default, the node's DNS resolver rather than the cluster DNS; that is why the service name configuration does not resolve. Since the configuration container binds to the host network, you can reach it from the gateway using http://localhost:8888 (as long as both pods run on the same node).
Otherwise, remove hostNetwork: true.
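If hostNetwork: true is actually required, another option (a sketch, not part of the original answer) is to keep it but explicitly opt the pod into cluster DNS with dnsPolicy: ClusterFirstWithHostNet, so that the service name in http://configuration:8888 resolves:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gateway
  template:
    metadata:
      labels:
        app: gateway
    spec:
      hostNetwork: true
      # Without this, hostNetwork pods fall back to the node's DNS resolver,
      # so cluster service names like "configuration" do not resolve.
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: gateway
        image: <image>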

Kubernetes Ingress/nginx pod specific value

I'm very new to Kubernetes and still learning how to use LB, ingress, etc. Currently, I'm trying to set a pod-specific value (config) for each host. It looks like the ingress YAML spec can read config from values, but I would also like to read the ingress spec, e.g. the host, in values.yaml.
For example, I have two hosts
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: service
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: service-A.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myservicea
          servicePort: 80
  - host: service-B.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myserviceb
          servicePort: 80
And I have two variables in values.yaml:
var1: aaa
var2: bbb
I want to pass
var1 to service-A.com/myservicea
var2 to service-B.com/myserviceb
or pass both, but the application must be able to identify what host it is, then it can use the right variable.
Is there any configuration/apis available to use for this purpose?
This is how you can create a secret (note that Secret names must be lowercase RFC 1123 names, so CUSTOM_VAR is written as custom-var here):
kubectl create secret generic custom-var \
  --from-literal=VAR_A=aaa \
  --from-literal=VAR_B=bbb
This is how you can access the secrets in your deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservicea-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
      - name: myservice
        image: user/myservicea
        env:
        - name: VAR_A
          value: aaa    # this way you directly pass the value here
        - name: VAR_A   # this way you can store it as a secret in k8s
          valueFrom:
            secretKeyRef:
              name: custom-var
              key: VAR_A

Istio - default ssl certificate to work with Azure Front Door

For nginx ingress, there is a way to define default-ssl-certificate with --default-ssl-certificate flag.
Ref: https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-ssl-certificate
How can I do the same for istio?
I have assigned tls.credentialName in the Istio gateway, but it's not the same as the nginx-ingress default-ssl-certificate.
istio_gateway.yaml
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: SERVICE_GATEWAY
spec:
  selector:
    istio: ingressgateway # Use Istio default gateway implementation
  servers:
  - port:
      name: SERVICE_NAME-http-80
      number: 80
      protocol: HTTP
    hosts:
    - "SERVICE_DNS"
  - port:
      name: SERVICE_NAME-https-443
      number: 443
      protocol: HTTPS
    tls:
      credentialName: SERVICE_CRT
      mode: SIMPLE
      minProtocolVersion: TLSV1_2
    hosts:
    - "SERVICE_DNS"
VirtualService:
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: SERVICE_NAME
spec:
  hosts:
  - SERVICE_DNS
  gateways:
  - SERVICE_GATEWAY
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: SERVICE_PORT
        host: "SERVICE_NAME.default.svc.cluster.local"
This setup is working for nginx-ingress: https://ssbkang.com/2020/08/17/end-to-end-tls-for-azure-front-door-and-azure-kubernetes-service/
I want to do the same thing with istio.

How to set sticky session for multiple services in kubernetes?

I have 2 services:
Restful/websocket API service with Nginx (2 replicas)
Daemon service (1 replica)
The daemon service will emit a websocket event to the frontend at some point. However, the event doesn't seem to be emitted successfully to the frontend from the daemon service.
I also tried to emit events from the API server to the frontend, and the event was successfully emitted to the front end. (maybe because the frontend is connected to the API WebSocket server).
What I have done for sticky-session:
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "daemon"
  namespace: app
spec:
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
  selector:
    app: "daemon"
  type: "NodePort"
  sessionAffinity: ClientIP
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "api"
  namespace: app
spec:
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
  selector:
    app: "api"
  type: "NodePort"
  sessionAffinity: ClientIP
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  annotations:
    getambassador.io/resource-downloaded: '2020-03-30T16:10:34.466Z'
  name: api
  namespace: app
spec:
  prefix: /api
  service: api:80
  load_balancer:
    policy: ring_hash
    cookie:
      name: sticky-cookie
      ttl: 60s
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  annotations:
    getambassador.io/resource-downloaded: '2020-03-30T16:10:34.466Z'
  name: api-ws
  namespace: app
spec:
  prefix: /private
  service: api:80
  use_websocket: true
  load_balancer:
    policy: ring_hash
    cookie:
      name: sticky-cookie
      ttl: 60s
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  annotations:
    getambassador.io/resource-downloaded: '2020-03-30T16:10:34.466Z'
  name: api-daemon
  namespace: app
spec:
  prefix: /daemon
  service: daemon:80
  use_websocket: true
  load_balancer:
    policy: ring_hash
    cookie:
      name: sticky-cookie
      ttl: 60s
From kubernetes.io DaemonSet docs:
Service: Create a service with the same Pod selector, and use the service to reach a daemon on a random node. (No way to reach specific node.)
So I think sessionAffinity cannot work with DaemonSet.
