Istio - default ssl certificate to work with Azure Front Door - nginx

For nginx-ingress, there is a way to define a default SSL certificate with the --default-ssl-certificate flag.
Ref: https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-ssl-certificate
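For context, that flag is a controller argument whose value is a namespace/secretName pair; a minimal sketch of how it appears in an ingress-nginx controller Deployment (the namespace and secret name below are placeholders):
# Excerpt of an ingress-nginx controller Deployment (sketch; the secret name is a placeholder)
    containers:
    - name: controller
      args:
      - /nginx-ingress-controller
      - --default-ssl-certificate=ingress-nginx/wildcard-tls  # namespace/secretName of the fallback certificate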
How can I do the same for istio?
I have assigned tls.credentialName in the Istio Gateway, but that is not the same as the nginx-ingress default-ssl-certificate.
istio_gateway.yaml
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: SERVICE_GATEWAY
spec:
  selector:
    istio: ingressgateway # Use Istio default gateway implementation
  servers:
  - port:
      name: SERVICE_NAME-http-80
      number: 80
      protocol: HTTP
    hosts:
    - "SERVICE_DNS"
  - port:
      name: SERVICE_NAME-https-443
      number: 443
      protocol: HTTPS
    tls:
      credentialName: SERVICE_CRT
      mode: SIMPLE
      minProtocolVersion: TLSV1_2
    hosts:
    - "SERVICE_DNS"
VirtualService:
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: SERVICE_NAME
spec:
  hosts:
  - SERVICE_DNS
  gateways:
  - SERVICE_GATEWAY
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: SERVICE_PORT
        host: "SERVICE_NAME.default.svc.cluster.local"
This setup is working for nginx-ingress: https://ssbkang.com/2020/08/17/end-to-end-tls-for-azure-front-door-and-azure-kubernetes-service/
I want to do the same thing with istio.
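One possible analogue, sketched below, is to add a catch-all HTTPS server to the same Gateway with hosts: ["*"] and its own credentialName, so connections whose SNI does not match a more specific server are still terminated with that certificate. This is an assumption about the desired behaviour rather than a documented one-to-one equivalent of --default-ssl-certificate; DEFAULT_CRT is a placeholder for a secret living alongside the istio ingressgateway.
  # Sketch: extra entry for the Gateway's servers list above (DEFAULT_CRT is a placeholder secret)
  - port:
      name: default-https-443
      number: 443
      protocol: HTTPS
    tls:
      credentialName: DEFAULT_CRT   # fallback certificate for SNI hosts not matched by other servers
      mode: SIMPLE
      minProtocolVersion: TLSV1_2
    hosts:
    - "*"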

Related

Setup ingress for my application with URLs starting with /# on GKE and EKS

I have set up the application with a StatefulSet:
# Simple deployment used to deploy and manage the app in nigelpoulton/getting-started-k8s:1.0
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: coredeploy
  labels:
    app: core123
spec:
  replicas: 1
  # minReadySeconds: 10
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0
      # maxSurge: 1
  selector:
    matchLabels:
      app: core123
  serviceName: core123
  template:
    metadata:
      labels:
        app: core123
    spec:
      terminationGracePeriodSeconds: 1
      containers:
      - name: hello
        image: docker-registry.myregistry.com:5000/core_centos:LMS-130022
        imagePullPolicy: Always
        ports:
        - containerPort: 8008
        readinessProbe:
          tcpSocket:
            port: 8008
          periodSeconds: 1
This is my service
apiVersion: v1
kind: Service
metadata:
  name: service-core
spec:
  selector:
    app: core123
  type: NodePort
  ports:
  - name: nodeportcore
    protocol: TCP
    port: 9988
    targetPort: 8008
This is my ingress
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
name: "testIngress"
spec:
rules:
- http:
paths:
- path: "/"
backend:
service:
name: "service-core"
port:
number: 9988
pathType: "ImplementationSpecific"
After I apply the ingress manifest, the application runs but I cannot log in: the logs show a successful login, yet the browser goes back to the login screen. Looking into it, I noticed the URLs my application uses when I run it locally on-premise (the URLs inside the container are the same):
http://localhost:8008/#/public/login
http://localhost:8008/#/user/settings
http://localhost:8008/#/user/dashboard/overview
http://localhost:8008/#/user/history/processing
http://localhost:8008/#/user/policy/template
The URLs start with /# followed by the route name, e.g. /public/login, /user/settings, /user/dashboard/overview.
=> My question: how do I set up the ingress correctly so it works with my application?

NGINX Ingress giving 503 Service Temporarily Unavailable. nginx/1.19.1 error in frontend

I am trying to install the Cyclos Mobile app on GCP. Everything is set up, but when I access it in a browser it always shows either "default backend - 404" or "503 Service Temporarily Unavailable. nginx/1.19.1". I have tried everything from several previous Stack Overflow questions, but I still get the same error.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencypt-staging
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"cert-manager.io/cluster-issuer":"letsencypt-staging","kubernetes.io/ingress.class":"nginx","nginx.ingress.kubernetes.io/proxy-connect-timeout":"3600"},"name":"cyclos-ingress-nginx-https","namespace":"cyclos-name-space"},"spec":{"backend":{"serviceName":"default-http-backend","servicePort":80},"rules":[{"host":"ip-address.xip.io","http":{"paths":[{"backend":{"serviceName":"cyclos-app-stateful","servicePort":80},"path":"/*"}]}}],"tls":[{"hosts":["ip-address.xip.io"],"secretName":"ip-address.xip.io-tls-secret"}]}}
    kubernetes.io/ingress.class: nginx
  creationTimestamp: "2020-09-29T07:00:01Z"
  generation: 11
  name: cyclos-ingress-nginx-https
  namespace: cyclos-name-space
  resourceVersion: "643221534"
  selfLink: /apis/extensions/v1beta1/namespaces/cyclos-name-space/ingresses/cyclos-ingress-nginx-https
  uid: uid
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - host: ip-address.xip.io
    http:
      paths:
      - backend:
          serviceName: cyclos-app-stateful
          servicePort: 80
        path: /*
  tls:
  - hosts:
    - ip-address.xip.io
    secretName: ip-address.xip.io-tls-secret
status:
  loadBalancer:
    ingress:
    - ip: IP
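A 503 from the ingress controller usually means the backend Service resolved to no ready Pods, so it is worth confirming that cyclos-app-stateful actually has endpoints. Separately, for reference, a minimal sketch of the same rule on the current networking.k8s.io/v1 Ingress API (extensions/v1beta1 has since been removed from Kubernetes); the glob path /* becomes a Prefix match of /:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cyclos-ingress-nginx-https
  namespace: cyclos-name-space
  annotations:
    cert-manager.io/cluster-issuer: letsencypt-staging
spec:
  ingressClassName: nginx
  rules:
  - host: ip-address.xip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: cyclos-app-stateful
            port:
              number: 80
  tls:
  - hosts:
    - ip-address.xip.io
    secretName: ip-address.xip.io-tls-secret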

How to set sticky session for multiple services in kubernetes?

I have 2 services:
Restful/websocket API service with Nginx (2 replicas)
Daemon service (1 replica)
The daemon service will emit a websocket event to the frontend at some point. However, the event doesn't seem to be emitted successfully to the frontend from the daemon service.
I also tried to emit events from the API server to the frontend, and the event was successfully emitted to the front end. (maybe because the frontend is connected to the API WebSocket server).
What I have done for sticky-session:
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "daemon"
  namespace: app
spec:
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
  selector:
    app: "daemon"
  type: "NodePort"
  sessionAffinity: ClientIP
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "api"
  namespace: app
spec:
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
  selector:
    app: "api"
  type: "NodePort"
  sessionAffinity: ClientIP
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  annotations:
    getambassador.io/resource-downloaded: '2020-03-30T16:10:34.466Z'
  name: api
  namespace: app
spec:
  prefix: /api
  service: api:80
  load_balancer:
    policy: ring_hash
    cookie:
      name: sticky-cookie
      ttl: 60s
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  annotations:
    getambassador.io/resource-downloaded: '2020-03-30T16:10:34.466Z'
  name: api-ws
  namespace: app
spec:
  prefix: /private
  service: api:80
  use_websocket: true
  load_balancer:
    policy: ring_hash
    cookie:
      name: sticky-cookie
      ttl: 60s
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  annotations:
    getambassador.io/resource-downloaded: '2020-03-30T16:10:34.466Z'
  name: api-daemon
  namespace: app
spec:
  prefix: /daemon
  service: daemon:80
  use_websocket: true
  load_balancer:
    policy: ring_hash
    cookie:
      name: sticky-cookie
      ttl: 60s
From kubernetes.io DaemonSet docs:
Service: Create a service with the same Pod selector, and use the service to reach a daemon on a random node. (No way to reach specific node.)
So I think sessionAffinity cannot work with DaemonSet.
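As a side note on sessionAffinity itself: ClientIP affinity is applied by kube-proxy at the Service level and is separate from Ambassador's cookie-based ring_hash policy, and it can carry an explicit timeout. A minimal sketch reusing the daemon Service above (the timeout value is illustrative; 10800 seconds is the default):
apiVersion: v1
kind: Service
metadata:
  name: daemon
  namespace: app
spec:
  selector:
    app: daemon
  type: NodePort
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # stickiness window per client IP (10800s = 3h is the default)
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80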

Setup HTTP to HTTPS redirect in nginx-config.yaml (terminating SSL at AWS ELB + NGINX Ingress for routing)

I would like to redirect HTTP calls to HTTPS, but I can't get it to work. I have searched and tried different solutions here on Stack Overflow and on some other blogs without getting the redirect to work; currently both HTTP and HTTPS return a response. Commented out in the code below you can see one of the solutions I tried: changing the HTTP targetPort to 8080 and configuring nginx-config.yaml to listen on 8080 and return 301 https://$host$request_uri;.
Nginx image: nginx/nginx-ingress:1.7.0. Installation with manifests (https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/)
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
      # annotations:
      #   prometheus.io/scrape: "true"
      #   prometheus.io/port: "9113"
    spec:
      serviceAccountName: nginx-ingress
      containers:
      - image: nginx/nginx-ingress:1.7.0
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        # - name: prometheus
        #   containerPort: 9113
        securityContext:
          allowPrivilegeEscalation: true
          runAsUser: 101 # nginx
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
        - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
        # - -v=3 # Enables extensive logging. Useful for troubleshooting.
        # - -report-ingress-status
        # - -external-service=nginx-ingress
        # - -enable-leader-election
        # - -enable-prometheus-metrics
        # - -global-configuration=$(POD_NAMESPACE)/nginx-configuration
Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:xxxxxxxxxxxxxxxxx"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    # targetPort: 8080
    protocol: TCP
    name: http
  - port: 443
    targetPort: 80
    protocol: TCP
    name: https
  selector:
    app: nginx-ingress
ConfigMap
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  proxy-protocol: "True"
  real-ip-header: "proxy_protocol"
  set-real-ip-from: "0.0.0.0/0"
# kind: ConfigMap
# apiVersion: v1
# metadata:
#   name: nginx-config
#   namespace: nginx-ingress
# data:
#   proxy-protocol: "True"
#   real-ip-header: "proxy_protocol"
#   set-real-ip-from: "0.0.0.0/0"
#   force-ssl-redirect: "false"
#   use-forwarded-headers: "true"
#   http-snippet: |
#     server {
#       listen 8080 proxy_protocol;
#       server_tokens off;
#       return 301 https://$host$request_uri;
#     }
Add the following annotation to your Ingress to set an unconditional 301 redirect rule that forces all incoming HTTP traffic over HTTPS:
ingress.kubernetes.io/ssl-redirect: "true"
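For illustration, a minimal sketch of an Ingress carrying that annotation (the host, TLS secret, and backend service names are placeholders; this assumes the NGINX Inc. controller from the manifests above):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress                            # placeholder name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"     # unconditional 301 from HTTP to HTTPS
spec:
  tls:
  - hosts:
    - example.com                                  # placeholder host
    secretName: example-tls                        # placeholder TLS secret
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service                  # placeholder backend service
            port:
              number: 80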

How to expose gRPC in Istio

Does anyone know if it is possible to use gRPC with the Istio ingress, or in some other way?
Yes/no, anything is welcome. Thanks in advance.
apiVersion: v1
kind: Service
metadata:
  name: grpc-service
spec:
  # type: LoadBalancer
  selector:
    app: grpc
  ports:
  - port: 3000
    name: grpc
    # protocol: HTTP2
    targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    kubernetes.io/ingress.class: "istio"
    # ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - http:
      paths:
      - path: /ghw/.*
        backend:
          serviceName: grpc-service
          servicePort: 3000
Go code:
const (
    address = "localhost/ghw/:3000"
)
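One common approach, sketched below, is to skip the Kubernetes Ingress resource and expose the service through an Istio Gateway plus VirtualService, since Envoy carries gRPC over HTTP/2. The host grpc.example.com and the Gateway name are placeholders, not values from the question:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grpc-gateway            # placeholder name
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: grpc
      protocol: GRPC            # gRPC rides on HTTP/2; HTTP2 would also work here
    hosts:
    - "grpc.example.com"        # placeholder host
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grpc-service
spec:
  hosts:
  - "grpc.example.com"          # placeholder host
  gateways:
  - grpc-gateway
  http:
  - route:
    - destination:
        host: grpc-service.default.svc.cluster.local
        port:
          number: 3000
With this setup the Go client would dial a host:port target such as grpc.example.com:80 rather than a URL path like /ghw/, since gRPC targets do not contain paths.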
