How to set up sticky sessions for multiple services in Kubernetes? - nginx

I have 2 services:
Restful/websocket API service with Nginx (2 replicas)
Daemon service (1 replica)
The daemon service emits a websocket event to the frontend at some point. However, that event never seems to reach the frontend.
I also tried emitting events from the API server to the frontend, and those were emitted successfully (maybe because the frontend is connected to the API WebSocket server).
What I have done for sticky-session:
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "daemon"
  namespace: app
spec:
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
  selector:
    app: "daemon"
  type: "NodePort"
  sessionAffinity: ClientIP
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "api"
  namespace: app
spec:
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
  selector:
    app: "api"
  type: "NodePort"
  sessionAffinity: ClientIP
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  annotations:
    getambassador.io/resource-downloaded: '2020-03-30T16:10:34.466Z'
  name: api
  namespace: app
spec:
  prefix: /api
  service: api:80
  load_balancer:
    policy: ring_hash
    cookie:
      name: sticky-cookie
      ttl: 60s
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  annotations:
    getambassador.io/resource-downloaded: '2020-03-30T16:10:34.466Z'
  name: api-ws
  namespace: app
spec:
  prefix: /private
  service: api:80
  use_websocket: true
  load_balancer:
    policy: ring_hash
    cookie:
      name: sticky-cookie
      ttl: 60s
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  annotations:
    getambassador.io/resource-downloaded: '2020-03-30T16:10:34.466Z'
  name: api-daemon
  namespace: app
spec:
  prefix: /daemon
  service: daemon:80
  use_websocket: true
  load_balancer:
    policy: ring_hash
    cookie:
      name: sticky-cookie
      ttl: 60s

From kubernetes.io DaemonSet docs:
Service: Create a service with the same Pod selector, and use the service to reach a daemon on a random node. (No way to reach specific node.)
So I think sessionAffinity cannot work with DaemonSet.

Related

Set up ingress for my application with URLs starting with /# on GKE and EKS

I have set up an application with a StatefulSet:
# Simple deployment used to deploy and manage the app in nigelpoulton/getting-started-k8s:1.0
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: coredeploy
  labels:
    app: core123
spec:
  replicas: 1
  # minReadySeconds: 10
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0
      # maxSurge: 1
  selector:
    matchLabels:
      app: core123
  serviceName: core123
  template:
    metadata:
      labels:
        app: core123
    spec:
      terminationGracePeriodSeconds: 1
      containers:
      - name: hello
        image: docker-registry.myregistry.com:5000/core_centos:LMS-130022
        imagePullPolicy: Always
        ports:
        - containerPort: 8008
        readinessProbe:
          tcpSocket:
            port: 8008
          periodSeconds: 1
This is my service
apiVersion: v1
kind: Service
metadata:
  name: service-core
spec:
  selector:
    app: core123
  type: NodePort
  ports:
  - name: nodeportcore
    protocol: TCP
    port: 9988
    targetPort: 8008
This is my ingress
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
name: "testIngress"
spec:
rules:
- http:
paths:
- path: "/"
backend:
service:
name: "service-core"
port:
number: 9988
pathType: "ImplementationSpecific"
After I apply the ingress manifest, the application runs but login does not work: the logs show a successful login, yet the UI returns to the login screen. While checking, I noticed the URLs my application uses when I run it on localhost on-premise (the URLs inside the container are the same):
http://localhost:8008/#/public/login
http://localhost:8008/#/user/settings
http://localhost:8008/#/user/dashboard/overview
http://localhost:8008/#/user/history/processing
http://localhost:8008/#/user/policy/template
The URLs start with /# followed by a path such as /public/login, /user/settings, or /user/dashboard/overview.
=> My question: how do I set up the ingress correctly so it works with my application?
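For context, a minimal sketch of a single catch-all rule: the part after /# is a URL fragment that the browser never sends to the server, so the ingress only has to route / to the app. The name test-ingress is a placeholder; everything else reuses the Service above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress   # placeholder name, sketch only
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix   # send everything under / to the app; /#/... is resolved client-side
        backend:
          service:
            name: service-core
            port:
              number: 9988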

SPRING_CLOUD_CONFIG_URI setting in Kubernetes

I have a simple setup of 2 containers, one for configuration and one for gateway.
Below are the definitions of the service and deployments. They get created fine, but the gateway container isn't able to connect to http://configuration:8888 and throws an unknown-host error.
The configuration server is running Nginx on port 8888.
I am able to access the configuration from a browser with URL http://configuration/actuator/.
I thought the pods in Kubernetes should be able to communicate fine as long as they are in the same network (host here). Not sure what is missing here.
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    name: configuration
  spec:
    ports:
    - port: 8888
      targetPort: 8888
    selector:
      app: configuration
  status:
    loadBalancer: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      app: configuration
    name: configuration
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: configuration
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          app: configuration
      spec:
        hostNetwork: true
        containers:
        - env:
          - name: JVM_OPTS
            value: -Xss512k -XX:MaxRAM=128m
          image: <image>
          name: configuration
          resources: {}
          volumeMounts:
          - mountPath: /data
            name: configuration-claim0
        restartPolicy: Always
        volumes:
        - name: configuration-claim0
          persistentVolumeClaim:
            claimName: configuration-claim0
  status: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      app: gateway
    name: gateway
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: gateway
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          app: gateway
      spec:
        hostNetwork: true
        containers:
        - env:
          - name: JVM_OPTS
            value: -Xss512k -XX:MaxRAM=128m
          - name: SPRING_CLOUD_CONFIG_URI
            value: http://configuration:8888
          - name: SPRING_PROFILES_ACTIVE
            value: container
          image: <image>
          name: gateway
You are using:
hostNetwork: true
so the pods share the node's network namespace, and you can reference the configuration server as http://localhost:8888.
Otherwise, remove hostNetwork: true so that the Service name http://configuration:8888 resolves through cluster DNS.
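If cluster-DNS routing is what you want instead, here is a minimal sketch of the gateway pod template without host networking (same <image> placeholder as above, only the relevant fields shown):
# Sketch: gateway Deployment pod template without hostNetwork,
# so the pod uses the cluster network and the Service name resolves via cluster DNS.
spec:
  template:
    spec:
      containers:
      - name: gateway
        image: <image>
        env:
        - name: SPRING_CLOUD_CONFIG_URI
          value: http://configuration:8888   # Service DNS name
        - name: SPRING_PROFILES_ACTIVE
          value: container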

How can I update a Kubernetes pod based on a configuration change?

I have a configuration file like the one below. I deploy it to an EKS cluster successfully. However, when I change the nginx-conf ConfigMap and run kubectl apply, it doesn't seem to update nginx.conf in the pod. I logged in to the pod and looked at /etc/nginx/nginx.conf, but its content is still the old one.
I have tried running kubectl rollout status deployment sidecar-app, but it doesn't help.
kubectl describe configmap nginx-conf does show the updated ConfigMap.
It seems the container doesn't pick up the ConfigMap change. How can I apply the change without deleting the pod?
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    user nginx;
    worker_processes 1;
    events {
      worker_connections 10240;
    }
    http {
      server {
        listen 8080;
        server_name localhost;
        location / {
          proxy_pass http://localhost:9200/;
        }
        location /health {
          proxy_pass http://localhost:9200/_cluster/health;
        }
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sidecar-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sidecar-app
  template:
    metadata:
      labels:
        name: sidecar-app
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - name: http
          containerPort: 8080
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
          readOnly: true
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf
          items:
          - key: nginx.conf
            path: nginx.conf
---
apiVersion: v1
kind: Service
metadata:
  name: sidecar-entrypoint
spec:
  selector:
    name: sidecar-app
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sidecar-ingress-1
  namespace: default
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: sidecar-ingress
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.order: '2'
    alb.ingress.kubernetes.io/healthcheck-path: /health
    # alb.ingress.kubernetes.io/healthcheck-port: '8080'
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: sidecar-entrypoint
          servicePort: 8080
Kubernetes treats a Pod and a ConfigMap as separate objects, and Pods do not automatically restart when a ConfigMap they reference changes.
There are a few alternatives to achieve this.
1) Reloader: https://github.com/stakater/Reloader - annotate the Deployment so it is rolled whenever a ConfigMap it mounts changes:
kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
2) Add a configHash annotation under spec.template.metadata, as described in https://blog.questionable.services/article/kubernetes-deployments-configmap-change/ (see the sketch after this list).
3) Use Wave: https://github.com/wave-k8s/wave
You can also use kubectl rollout restart deploy/sidecar-app; this restarts the Pods with zero downtime, since a rolling update incrementally replaces Pod instances with new ones.
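A minimal sketch of the configHash idea from option 2, assuming the hash of the ConfigMap contents is computed outside the manifest (for example by a CI step or a Helm sha256sum template) and written into the Pod template; a changed hash changes the template, which triggers a rolling update:
# Sketch only: <sha256-of-nginx.conf> is a hypothetical placeholder filled in by tooling.
spec:
  template:
    metadata:
      annotations:
        checksum/nginx-conf: "<sha256-of-nginx.conf>"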

Istio - default ssl certificate to work with Azure Front Door

For nginx ingress, there is a way to define a default SSL certificate with the --default-ssl-certificate flag.
Ref: https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-ssl-certificate
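For reference, a minimal sketch of how that flag is typically passed to the ingress-nginx controller container; the Secret reference ingress-nginx/default-cert is a placeholder (the format is namespace/secretName):
# Sketch: ingress-nginx controller args; ingress-nginx/default-cert is a placeholder.
args:
- /nginx-ingress-controller
- --default-ssl-certificate=ingress-nginx/default-cert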
How can I do the same for Istio?
I have assigned tls.credentialName in the Istio Gateway, but that is not the same as nginx-ingress's default-ssl-certificate.
istio_gateway.yaml
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: SERVICE_GATEWAY
spec:
  selector:
    istio: ingressgateway # Use Istio default gateway implementation
  servers:
  - port:
      name: SERVICE_NAME-http-80
      number: 80
      protocol: HTTP
    hosts:
    - "SERVICE_DNS"
  - port:
      name: SERVICE_NAME-https-443
      number: 443
      protocol: HTTPS
    tls:
      credentialName: SERVICE_CRT
      mode: SIMPLE
      minProtocolVersion: TLSV1_2
    hosts:
    - "SERVICE_DNS"
VirtualService:
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: SERVICE_NAME
spec:
  hosts:
  - SERVICE_DNS
  gateways:
  - SERVICE_GATEWAY
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: SERVICE_PORT
        host: "SERVICE_NAME.default.svc.cluster.local"
This setup is working for nginx-ingress: https://ssbkang.com/2020/08/17/end-to-end-tls-for-azure-front-door-and-azure-kubernetes-service/
I want to do the same thing with istio.

istio internal GRPC services communication

I am having trouble getting two in-cluster gRPC services (written in netcore3.0) to communicate. I get Grpc.Core.RpcException: Status(StatusCode=Unavailable, Detail="Connection reset by peer") with uri = <service>.default.svc.cluster.local, or Grpc.Core.RpcException: Status(StatusCode=Unimplemented, Detail="") with uri = user.default.svc.cluster.local:80. The weird part is that all the services work fine if they are communicating from different clusters. Am I using the right URLs? The configuration of one of the services is attached here.
apiVersion: v1
kind: Service
metadata:
  name: user
  labels:
    app: user
    service: user
spec:
  ports:
  - port: 80
    name: grpc-port
    protocol: TCP
  selector:
    app: user
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: user-v1
  labels:
    app: user
    version: v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: user
        version: v1
    spec:
      containers:
      - name: user
        image: ***
        imagePullPolicy: IfNotPresent
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: "***"
        ports:
        - containerPort: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: user
spec:
  hosts:
  - user.default.svc.cluster.local
  http:
  - route:
    - destination:
        port:
          number: 80
        host: user.default.svc.cluster.local
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: user
spec:
  host: user.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
---
FIXED: I managed to get it to work by using gRPC's .NET Core 3 client factory integration as described here.
Instead of creating a channel and client manually, as one would usually do, i.e.
var endpoint = "test"; //or http://test
AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);
Channel channel = new Channel(endpoint, ChannelCredentials.Insecure);
client = new TestService.TestServiceClient(channel);
I used the gRPC client factory integration in ConfigureServices (Startup.cs) like this, after adding the Grpc.Net.ClientFactory package, version 0.1.22-pre1:
services.AddGrpcClient<TestService.TestServiceClient>(o =>
{
    AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);
    o.BaseAddress = new Uri("http://test");
});
Thereafter you can access the client by using Dependency Injection.
I'm not sure why the second approach works but the first one doesn't.
