How can I update a Kubernetes pod based on a configuration change? - nginx

I have a configuration file like the one below. I deploy it to an EKS cluster successfully. However, when I change the nginx-conf ConfigMap and run kubectl apply, it doesn't seem to update nginx.conf in the pod. I logged in to the pod and looked at the file /etc/nginx/nginx.conf, but its content is still the old one.
I have tried running kubectl rollout status deployment sidecar-app, but it doesn't help.
kubectl describe configmap nginx-conf does show the updated ConfigMap.
It seems the container doesn't pick up the ConfigMap change. How can I apply the changes without deleting the pod?
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    user nginx;
    worker_processes 1;
    events {
      worker_connections 10240;
    }
    http {
      server {
        listen 8080;
        server_name localhost;
        location / {
          proxy_pass http://localhost:9200/;
        }
        location /health {
          proxy_pass http://localhost:9200/_cluster/health;
        }
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sidecar-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sidecar-app
  template:
    metadata:
      labels:
        name: sidecar-app
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - name: http
          containerPort: 8080
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
          readOnly: true
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf
          items:
          - key: nginx.conf
            path: nginx.conf
---
apiVersion: v1
kind: Service
metadata:
  name: sidecar-entrypoint
spec:
  selector:
    name: sidecar-app
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sidecar-ingress-1
  namespace: default
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: sidecar-ingress
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.order: '2'
    alb.ingress.kubernetes.io/healthcheck-path: /health
    # alb.ingress.kubernetes.io/healthcheck-port: '8080'
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: sidecar-entrypoint
          servicePort: 8080

Kubernetes treats Pods and ConfigMaps as separate objects, and Pods do not automatically restart when the ConfigMap they mount changes.
There are a few alternatives to achieve this:
1) Reloader: https://github.com/stakater/Reloader — annotate the Deployment and Reloader rolls it whenever a referenced ConfigMap or Secret changes:
kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  template:
    metadata:
2) Add a configHash annotation to the pod template, as described in
https://blog.questionable.services/article/kubernetes-deployments-configmap-change/
3) Use Wave:
https://github.com/wave-k8s/wave
4) Run kubectl rollout restart deploy/sidecar-app. This restarts the pods with zero downtime: a rolling update replaces the Pod instances incrementally with new ones.
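For option 2, a minimal sketch of the configHash approach (assuming the manifests above and a shell with sha256sum available; the configHash annotation key is just a convention, not a Kubernetes API field):
# Hash the current ConfigMap contents
HASH=$(kubectl get configmap nginx-conf -o yaml | sha256sum | cut -d' ' -f1)
# Write the hash into the pod template; any change to the template triggers a rolling update
kubectl patch deployment sidecar-app -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"configHash\":\"$HASH\"}}}}}"
Re-running these two commands after each kubectl apply of the ConfigMap rolls the pods only when the configuration actually changed.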

SPRING_CLOUD_CONFIG_URI setting in Kubernetes

I have a simple setup of 2 containers, one for configuration and one for the gateway.
Below are the definitions of the services and deployments. They get created fine, but the gateway container isn't able to connect to http://configuration:8888 and throws an unknown host error.
The configuration server is running Nginx on port 8888.
I am able to access the configuration from a browser with the URL http://configuration/actuator/.
I thought pods in Kubernetes should be able to communicate fine as long as they are on the same network (the host network here). Not sure what is missing here.
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    name: configuration
  spec:
    ports:
    - port: 8888
      targetPort: 8888
    selector:
      app: configuration
  status:
    loadBalancer: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      app: configuration
    name: configuration
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: configuration
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          app: configuration
      spec:
        hostNetwork: true
        containers:
        - env:
          - name: JVM_OPTS
            value: -Xss512k -XX:MaxRAM=128m
          image: <image>
          name: configuration
          resources: {}
          volumeMounts:
          - mountPath: /data
            name: configuration-claim0
        restartPolicy: Always
        volumes:
        - name: configuration-claim0
          persistentVolumeClaim:
            claimName: configuration-claim0
  status: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      app: gateway
    name: gateway
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: gateway
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          app: gateway
      spec:
        hostNetwork: true
        containers:
        - env:
          - name: JVM_OPTS
            value: -Xss512k -XX:MaxRAM=128m
          - name: SPRING_CLOUD_CONFIG_URI
            value: http://configuration:8888
          - name: SPRING_PROFILES_ACTIVE
            value: container
          image: <image>
          name: gateway
You are using:
hostNetwork: true
With hostNetwork, the pod uses the node's network and (by default) the node's DNS, so the cluster DNS name configuration does not resolve. Since both pods run on the host network, you can reference the service as http://localhost:8888.
Otherwise, remove hostNetwork: true so the pods use the cluster network and http://configuration:8888 resolves through the Service.
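If hostNetwork: true really is required, another option (my assumption, not part of the original answer) is to keep it and tell the pod to use cluster DNS anyway, so the Service name still resolves:
    spec:
      hostNetwork: true
      # Without this, a hostNetwork pod falls back to the node's DNS and
      # cluster Service names such as "configuration" do not resolve
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: gateway
        image: <image>
        env:
        - name: SPRING_CLOUD_CONFIG_URI
          value: http://configuration:8888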

Kubernetes Ingress/nginx pod-specific value

I'm very new to Kubernetes and still learning how to use load balancers, Ingress, etc. Currently, I'm trying to set a pod-specific value (config) for each host. It looks like the Ingress YAML spec can read config from values, but I would like to read the Ingress spec, e.g. the host, in values.yaml.
For example, I have two hosts
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: service
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: service-A.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myservicea
          servicePort: 80
  - host: service-B.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myserviceb
          servicePort: 80
And I have two variables in values.yaml:
var1: aaa
var2: bbb
I want to pass
var1 to service-A.com/myservicea
var2 to service-B.com/myserviceb
or pass both, but then the application must be able to identify which host it is serving so it can use the right variable.
Is there any configuration/apis available to use for this purpose?
This is how you can create a secret (note that Kubernetes object names must be lowercase, so use e.g. custom-var rather than CUSTOM_VAR):
kubectl create secret generic custom-var \
  --from-literal=VAR_A=aaa \
  --from-literal=VAR_B=bbb
This is how you can access the secret in your deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservicea-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
      - name: myservice
        image: user/myservicea
        env:
        # either pass the value directly ...
        - name: VAR_A
          value: aaa
        # ... or read it from the secret stored in k8s; use one of the two
        - name: VAR_A
          valueFrom:
            secretKeyRef:
              name: custom-var
              key: VAR_A
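For myserviceb the same pattern applies; a sketch assuming an analogous myserviceb deployment (not shown in the original answer):
        env:
        - name: VAR_B
          valueFrom:
            secretKeyRef:
              name: custom-var
              key: VAR_B
Since each host routes to its own Service and Deployment, each application only sees its own variable and does not need to inspect the host itself.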

Why is the target not created when I use the ALB ingress controller in EKS?

I deployed an nginx container to an EKS cluster. It includes an ALB ingress controller, and I can see that the ALB and its rule are created. But when I open the rule's target group, the Targets tab is empty, which means no traffic is routed to the nginx container.
I can see the pod (sidecar-app-6889d46d46-bns46) is running:
$ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
sample-es-74db8bcd65-2sq5v     1/1     Running   0          4h40m
sidecar-app-6889d46d46-bns46   1/1     Running   0          4h42m
The service also looks good to me:
$ kubectl get service
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes             ClusterIP   10.100.0.1      <none>        443/TCP          4d5h
sample-es-entrypoint   NodePort    10.100.57.164   <none>        9200:31987/TCP   4h41m
sidecar-entrypoint     NodePort    10.100.164.72   <none>        80:31144/TCP     4h43m
So what is the reason that the target is not created?
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    user nginx;
    worker_processes 1;
    events {
      worker_connections 10240;
    }
    http {
      server {
        listen 80;
        server_name localhost;
        location /* {
          proxy_pass http://localhost:9200/;
        }
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sidecar-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
          readOnly: true
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf
          items:
          - key: nginx.conf
            path: nginx.conf
---
apiVersion: v1
kind: Service
metadata:
  name: sidecar-entrypoint
spec:
  selector:
    name: sidecar-app
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sidecar-ingress-1
  namespace: default
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: sidecar-ingress
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.order: '2'
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: sidecar-entrypoint
          servicePort: 80
After some debugging, I found the issue: the labels in my configuration are wrong. The Deployment's pods are labelled app: nginx, but the Service selector is name: sidecar-app, so the Service has no endpoints and the ALB registers no targets. The labels should be name: sidecar-app.
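A minimal sketch of the fix, keeping the Service selector above as-is:
spec:
  selector:
    matchLabels:
      name: sidecar-app      # must match the pod template labels
  template:
    metadata:
      labels:
        name: sidecar-app    # and these must match the Service selector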

How to set sticky sessions for multiple services in Kubernetes?

I have 2 services:
Restful/websocket API service with Nginx (2 replicas)
Daemon service (1 replica)
The daemon service will emit a websocket event to the frontend at some point. However, the event doesn't seem to be emitted successfully to the frontend from the daemon service.
I also tried to emit events from the API server to the frontend, and the event was successfully emitted to the front end. (maybe because the frontend is connected to the API WebSocket server).
What I have done for sticky-session:
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "daemon"
  namespace: app
spec:
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
  selector:
    app: "daemon"
  type: "NodePort"
  sessionAffinity: ClientIP
---
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "api"
  namespace: app
spec:
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
  selector:
    app: "api"
  type: "NodePort"
  sessionAffinity: ClientIP
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  annotations:
    getambassador.io/resource-downloaded: '2020-03-30T16:10:34.466Z'
  name: api
  namespace: app
spec:
  prefix: /api
  service: api:80
  load_balancer:
    policy: ring_hash
    cookie:
      name: sticky-cookie
      ttl: 60s
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  annotations:
    getambassador.io/resource-downloaded: '2020-03-30T16:10:34.466Z'
  name: api-ws
  namespace: app
spec:
  prefix: /private
  service: api:80
  use_websocket: true
  load_balancer:
    policy: ring_hash
    cookie:
      name: sticky-cookie
      ttl: 60s
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  annotations:
    getambassador.io/resource-downloaded: '2020-03-30T16:10:34.466Z'
  name: api-daemon
  namespace: app
spec:
  prefix: /daemon
  service: daemon:80
  use_websocket: true
  load_balancer:
    policy: ring_hash
    cookie:
      name: sticky-cookie
      ttl: 60s
From the kubernetes.io DaemonSet docs:
Service: Create a service with the same Pod selector, and use the service to reach a daemon on a random node. (No way to reach a specific node.)
So I think sessionAffinity cannot work with a DaemonSet.
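For the Deployment-backed api Service, ClientIP affinity does apply, and its timeout can be tuned with sessionAffinityConfig; a minimal sketch (the timeout value is just an example, not taken from the original manifests):
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: app
spec:
  selector:
    app: api
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # how long a given client IP sticks to the same pod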

Set up HTTP to HTTPS redirect in nginx-config.yaml (terminating SSL in AWS ELB + NGINX Ingress for routing)

I would like to redirect HTTP calls to HTTPS, but I can't get it to work. I have searched and tried different solutions here on Stack Overflow and on some other blogs without getting the redirect to work. Currently both HTTP and HTTPS return a value. Commented out in the code below is one of the solutions I tried: changing the HTTP targetPort to 8080 and configuring nginx-config.yaml to listen on 8080 and return 301 https://$host$request_uri;.
Nginx image: nginx/nginx-ingress:1.7.0. Installation with manifests (https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/)
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
      # annotations:
      #   prometheus.io/scrape: "true"
      #   prometheus.io/port: "9113"
    spec:
      serviceAccountName: nginx-ingress
      containers:
      - image: nginx/nginx-ingress:1.7.0
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        #- name: prometheus
        #  containerPort: 9113
        securityContext:
          allowPrivilegeEscalation: true
          runAsUser: 101 #nginx
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
        - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
        #- -v=3 # Enables extensive logging. Useful for troubleshooting.
        #- -report-ingress-status
        #- -external-service=nginx-ingress
        #- -enable-leader-election
        #- -enable-prometheus-metrics
        #- -global-configuration=$(POD_NAMESPACE)/nginx-configuration
Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:xxxxxxxxxxxxxxxxx"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    # targetPort: 8080
    protocol: TCP
    name: http
  - port: 443
    targetPort: 80
    protocol: TCP
    name: https
  selector:
    app: nginx-ingress
ConfigMap
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  proxy-protocol: "True"
  real-ip-header: "proxy_protocol"
  set-real-ip-from: "0.0.0.0/0"
# kind: ConfigMap
# apiVersion: v1
# metadata:
#   name: nginx-config
#   namespace: nginx-ingress
# data:
#   proxy-protocol: "True"
#   real-ip-header: "proxy_protocol"
#   set-real-ip-from: "0.0.0.0/0"
#   force-ssl-redirect: "false"
#   use-forwarded-headers: "true"
#   http-snippet: |
#     server {
#       listen 8080 proxy_protocol;
#       server_tokens off;
#       return 301 https://$host$request_uri;
#     }
Add the following annotation to your Ingress; it sets an unconditional 301 redirect rule for all incoming HTTP traffic, forcing it over HTTPS:
ingress.kubernetes.io/ssl-redirect: "true"
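A minimal sketch of where the annotation goes (the Ingress name, host, and backend below are placeholders, not taken from the question):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"   # 301-redirect HTTP requests to HTTPS
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app-service
          servicePort: 80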
