Knative: Enabling automatic TLS certificate provisioning - not working - tls1.2

I am trying to "Enabling automatic TLS certificate provisioning"
I have a working ClusterIssuer(status: "True") and I am able to manually create a Certificate(status: "True").
I am trying to enable Automatic TLS provision mode.
Environment setup:
Knative: v0.12
Istio: v1.4 (SDS)
cert-manager: v0.13.1
kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.9", GitCommit:"2e808b7cb054ee242b68e62455323aa783991f03", GitTreeState:"clean", BuildDate:"2020-01-18T23:33:14Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.9-gke.9", GitCommit:"a9973cbb2722793e2ea08d20880633ca61d3e669", GitTreeState:"clean", BuildDate:"2020-02-07T22:35:02Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}
I have the following gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: knative-ingress-gateway
  namespace: knative-serving
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
    tls:
      # Sends 301 redirect for all http requests.
      # Omit to allow http and https.
      httpsRedirect: false
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "mydomain.com"
    tls:
      mode: SIMPLE
      privateKey: /etc/istio/ingressgateway-certs/tls.key
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
And when applying:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go # The URL to the image of the app
        env:
        - name: TARGET
          value: "Go Sample v1"
I can do this (note: httpsRedirect: false):
curl http://helloworld-go.default.mydomain.com
Hello Go Sample v1!
But when trying with https:
curl https://helloworld-go.default.mydomain.com
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to helloworld-go.default.mydomain.com:443
Also:
The Knative documentation states: "In this mode, a single Certificate will be provisioned per namespace and is reused across the Knative", but I don't see any certificates in any namespace.
Note that the kubectl get ksvc URL is http and not https:
kubectl get ksvc
NAME            URL                                         LATESTCREATED         LATESTREADY           READY   REASON
helloworld-go   http://helloworld-go.default.mydomain.com   helloworld-go-lxr2n   helloworld-go-lxr2n   True
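For reference, whether any certificate objects were created at all can be checked against the full CRD names used by Knative Serving and cert-manager v0.13 (a quick diagnostic sketch):
# Knative's internal per-namespace Certificate objects
kubectl get certificates.networking.internal.knative.dev --all-namespaces
# cert-manager Certificate objects
kubectl get certificates.cert-manager.io --all-namespaces
# the auto-TLS switch lives in the config-network ConfigMap
kubectl get configmap config-network -n knative-serving -o yaml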

I had the same issue, but with version 0.16.0. I fixed it by not using "Enabling automatic TLS certificate provisioning" with cert-manager; instead I used the HTTP-01 challenge provided directly by Knative.
To automatically provision TLS certificates using Let's Encrypt HTTP-01 challenges:
Go to https://knative.dev/docs/install/any-kubernetes-cluster/#optional-serving-extensions
Click on "TLS via HTTP01"
Follow the instructions
It should work.
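Whichever challenge type is used, auto TLS also has to be switched on in the config-network ConfigMap. A minimal sketch; the key names below are the ones used in the 0.1x-era docs and have been renamed in later Knative releases, so check your version:
kubectl patch configmap config-network --namespace knative-serving \
  --patch '{"data":{"autoTLS":"Enabled","httpProtocol":"Redirected"}}'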

Related

Different hosts (Pods in Kubernetes) respond with different certificates for the same hostname

I have a weird problem: when querying my internal hostname, xxx.home.arpa, via e.g. openssl s_client -connect xxx.home.arpa:443, one (example) pod
- image: docker.io/library/node:8.17.0-slim
  name: node
  args:
  - "86400"
  command:
  - sleep
gets a response with the DEFAULT NGINX INGRESS CERTIFICATE.
Another pod in the same namespace, running the same command, gets a response with my custom certificate.
Question:
Why does one pod receive a different cert for the same request?
For the purpose of this problem, please assume that cert-manager and the certs are properly configured - they are working in most of the system; it's only a few pods that are misbehaving.
Configuration: k8s nginx ingress, calico CNI, a custom coredns svc which manages DNS responses (might be important?), my own CA.
The Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: ca-issuer
    kubernetes.io/ingress.class: nginx
  creationTimestamp: "2022-03-13T06:54:17Z"
  generation: 1
  name: gerrit-ingress
  namespace: gerrit
  resourceVersion: "739842"
  uid: f22034ab-0ed8-4779-b01e-2738e6f63eb7
spec:
  rules:
  - host: gerrit.home.arpa
    http:
      paths:
      - backend:
          service:
            name: gerrit-gerrit-service
            port:
              number: 80
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - gerrit.home.arpa
    secretName: gerrit-tls
status:
  loadBalancer:
    ingress:
    - ip: 192.168.10.2
Most of the configuration (except DNS) is up here.
As it turns out, my initial guesses were far off: the particular container had a set of tools that were either configured not to send a servername or did not support SNI at all (which was the problem), specifically yarn:1.x and openssl:1.0.x.
The problem was SNI, of course; newer openssl and curl send the server name by default, satisfying the SNI requirement.
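A quick way to see the difference is to compare the served certificate with and without SNI (hostname reused from the question; -noservername requires openssl 1.1.1+, while openssl 1.0.x never sends SNI in the first place):
# without SNI: the ingress falls back to its default certificate
openssl s_client -connect gerrit.home.arpa:443 -noservername </dev/null 2>/dev/null | openssl x509 -noout -subject
# with SNI: the ingress serves the matching gerrit-tls certificate
openssl s_client -connect gerrit.home.arpa:443 -servername gerrit.home.arpa </dev/null 2>/dev/null | openssl x509 -noout -subject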
For this I considered two solutions:
Wildcard DNS for the clients that do not support SNI, which is easier but does not feel secure
TLS termination with a reverse proxy, allowing me to transparently use a client with SNI support, which I haven't yet tried.
I went with wildcard DNS, though I don't feel that this should be done in prod. :)

Using self-signed certificates in nginx Ingress

I'm migrating services into a Kubernetes cluster on minikube. These services require a self-signed certificate on load; accessing the service via NodePort works perfectly and prompts for the certificate in the browser (screenshot not included), but accessing it via the ingress host (the domain is mapped locally in /etc/hosts) gives me the Kubernetes Ingress Controller Fake Certificate by Acme and skips my self-signed cert without any message.
TLS should be decrypted inside the app and not in the Ingress, and the tls-acme: "false" flag does not work and still gives me the fake cert:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # decryption of tls occurs in the backend service
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/tls-acme: "false"
spec:
  rules:
  - host: admin.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: admin-service
            port:
              number: 443
When signing in, it should show the certificate prompt before loading (screenshot not included).
minikube version: v1.15.1
kubectl version: 1.19
using ingress-nginx 3.18.0
The problem turned out to be a bug in Minikube, plus having to enable SSL passthrough in the nginx controller (in addition to the annotation) with the flag --enable-ssl-passthrough=true.
I was doing all my cluster testing on a Minikube cluster version v1.15.1 with Kubernetes v1.19.4, where SSL passthrough failed. After following the guidance in the ingress-nginx GitHub issue, I discovered that the issue didn't reproduce in kind, so I deployed my app on a new AWS cluster (Kubernetes 1.18) and everything worked great.
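For reference, the flag has to be added to the controller's container args. A minimal sketch; the deployment name and namespace below are the upstream ingress-nginx defaults and may differ in a minikube addon install:
kubectl -n ingress-nginx patch deployment ingress-nginx-controller --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--enable-ssl-passthrough"}]'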

Need help troubleshooting Istio IngressGateway HTTP ERROR 503

My test environment cluster has the following configuration:
Global mesh policy (installed as part of cluster setup by our org): output of kubectl describe MeshPolicy default
Name:         default
Namespace:
Labels:       operator.istio.io/component=Pilot
              operator.istio.io/managed=Reconcile
              operator.istio.io/version=1.5.6
              release=istio
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"authentication.istio.io/v1alpha1","kind":"MeshPolicy","metadata":{"annotations":{},"labels":{"operator.istio.io/component":...
API Version:  authentication.istio.io/v1alpha1
Kind:         MeshPolicy
Metadata:
  Creation Timestamp:  2020-07-23T17:41:55Z
  Generation:          1
  Resource Version:    1088966
  Self Link:           /apis/authentication.istio.io/v1alpha1/meshpolicies/default
  UID:                 d3a416fa-8733-4d12-9d97-b0bb4383c479
Spec:
  Peers:
    Mtls:
Events:  <none>
I believe the above configuration enables services to receive connections in mTLS mode.
DestinationRule: output of kubectl describe DestinationRule commerce-mesh-port -n istio-system
Name:         commerce-mesh-port
Namespace:    istio-system
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"networking.istio.io/v1alpha3","kind":"DestinationRule","metadata":{"annotations":{},"name":"commerce-mesh-port","namespace"...
API Version:  networking.istio.io/v1beta1
Kind:         DestinationRule
Metadata:
  Creation Timestamp:  2020-07-23T17:41:59Z
  Generation:          1
  Resource Version:    33879
  Self Link:           /apis/networking.istio.io/v1beta1/namespaces/istio-system/destinationrules/commerce-mesh-port
  UID:                 4ef0d49a-88d9-4b40-bb62-7879c500240a
Spec:
  Host:  *
  Ports:
    Name:      commerce-mesh-port
    Number:    16443
    Protocol:  TLS
  Traffic Policy:
    Tls:
      Mode:  ISTIO_MUTUAL
Events:  <none>
Istio Ingress Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: finrpt-gateway
  namespace: finrpt
spec:
  selector:
    istio: ingressgateway # use Istio's default ingress gateway
  servers:
  - port:
      name: https
      number: 443
      protocol: https
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*"
  - port:
      name: http
      number: 80
      protocol: http
    tls:
      httpsRedirect: true
    hosts:
    - "*"
I created a secret to be used for TLS and I am using it to terminate the TLS traffic at the gateway (as configured with mode SIMPLE).
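For reference, with the file-mount paths used in the gateway above, the cert and key are expected to come from a secret named istio-ingressgateway-certs in the istio-system namespace (assuming the ingress gateway pod still mounts that secret; file names below are placeholders):
kubectl create -n istio-system secret tls istio-ingressgateway-certs \
  --cert=finrpt.crt --key=finrpt.key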
Next, I configured my VirtualService in the same namespace and did a URL match for HTTP:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: finrpt-virtualservice
  namespace: finrpt
spec:
  hosts:
  - "*"
  gateways:
  - finrpt-gateway
  http:
  - match:
    - queryParams:
        target:
          exact: "commercialprocessor"
      ignoreUriCase: true
    route:
    - destination:
        host: finrpt-commercialprocessor
        port:
          number: 8118
The Service CommercialProcessor (ClusterIP) is expecting traffic on HTTP/8118.
With the above setting in place, when I browse to the External IP of my Ingress-Gateway, first I get a certificate error (expected as I am using self-signed for testing) and then on proceeding I get HTTP Error 503.
I am not able to find any useful logs in the gateway. I am wondering whether, after TLS termination, the gateway cannot reach the backend behind my VirtualService in plaintext - is something expecting https while I have configured it as http?
Any help is highly appreciated; I am very new to Istio and I think I might be missing something naive here.
My expectation is: I should be able to hit the gateway with https, the gateway does the termination and forwards the unencrypted traffic to the destination configured in the VirtualService on the HTTP port, based on the URL regex match ONLY (I have to keep the URL-match part constant here).
Since 503 occurs often and it's hard to find the issue, I have put together a little troubleshooting answer: other questions with 503 errors (with answers) that I have encountered over several months, useful information from the Istio documentation, and things I would check.
Examples with 503 error:
Istio 503:s between (Public) Gateway and Service
IstIO egress gateway gives HTTP 503 error
Istio Ingress Gateway with TLS termination returning 503 service unavailable
how to terminate ssl at ingress-gateway in istio?
Accessing service using istio ingress gives 503 error when mTLS is enabled
Common causes of 503 errors from the Istio documentation:
https://istio.io/docs/ops/best-practices/traffic-management/#avoid-503-errors-while-reconfiguring-service-routes
https://istio.io/docs/ops/common-problems/network-issues/#503-errors-after-setting-destination-rule
https://istio.io/latest/docs/concepts/traffic-management/#working-with-your-applications
A few things I would check first:
Check the Service port names: Istio can route traffic correctly only if it knows the protocol. Names should follow <protocol>[-<suffix>], as mentioned in the Istio documentation (see the sketch after this list).
Check mTLS; problems caused by mTLS usually result in a 503 error.
Check whether Istio works at all; I would recommend applying the bookinfo example application and checking whether it behaves as expected.
Check if your namespace is injected with kubectl get namespace -L istio-injection
If a VirtualService using subsets arrives before the DestinationRule where the subsets are defined, the Envoy configuration generated by Pilot will refer to non-existent upstream pools. This results in HTTP 503 errors until all configuration objects are available to Pilot.
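As referenced above, a sketch of the port-naming convention; the Service below is illustrative, with names of my own modelled on the question's setup:
apiVersion: v1
kind: Service
metadata:
  name: finrpt-commercialprocessor
  namespace: finrpt
spec:
  selector:
    app: commercialprocessor   # assumed pod label
  ports:
  - name: http-commercialprocessor   # <protocol>[-<suffix>] tells Istio this is plain HTTP
    port: 8118
    targetPort: 8118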
Hope you find this useful.

A proxy inside a kubernetes pod doesn't intercept any HTTP traffic

What I am aiming for is to have 2 applications running in a pod, each of them in its own container. Application A is a simple Spring Boot application which makes HTTP requests to the other application, which is deployed on Kubernetes. The purpose of Application B (the proxy) is to intercept those HTTP requests and add an Authorization token to their headers. Application B is mitmdump with a Python script. The issue I am having is that when deployed on Kubernetes, the proxy does not seem to intercept any traffic at all (I tried to reproduce the issue on my local machine and had no trouble, so I guess the issue lies somewhere within the networking inside a pod). Can someone have a look into it and guide me how to solve it?
Here's the deployment and service file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy-deployment
  namespace: myown
  labels:
    app: application-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application-a
  template:
    metadata:
      labels:
        app: application-a
    spec:
      containers:
      - name: application-a
        image: registry.gitlab.com/application-a
        resources:
          requests:
            memory: "230Mi"
            cpu: "100m"
          limits:
            memory: "460Mi"
            cpu: "200m"
        imagePullPolicy: Always
        ports:
        - containerPort: 8090
        env:
        - name: "HTTP_PROXY"
          value: "http://localhost:1030"
      - name:
        image: registry.gitlab.com/application-b-proxy
        resources:
          requests:
            memory: "230Mi"
            cpu: "100m"
          limits:
            memory: "460Mi"
            cpu: "200m"
        imagePullPolicy: Always
        ports:
        - containerPort: 1080
---
kind: Service
apiVersion: v1
metadata:
  name: proxy-svc
  namespace: myown
spec:
  ports:
  - nodePort: 31000
    port: 8090
    protocol: TCP
    targetPort: 8090
  selector:
    app: application-a
  sessionAffinity: None
  type: NodePort
And here's how I build the Docker image of mitmproxy/mitmdump:
FROM mitmproxy/mitmproxy:latest
ADD get_token.py .
WORKDIR ~/mit_docker
COPY get_token.py .
EXPOSE 1080:1080
ENTRYPOINT ["mitmdump","--listen-port", "1030", "-s","get_token.py"]
EDIT
I created two dummy docker images in order to have this scenario recreated locally.
APPLICATION A - a Spring Boot application with a job that makes an HTTP GET request every minute to a specified (but irrelevant) address; the address is accessible. The response should be 302 FOUND. Every time an HTTP request is made, a message appears in the application's logs.
APPLICATION B - a proxy application which is supposed to proxy the docker container with application A. Every request is logged.
Make sure your docker proxy config is set to listen to http://localhost:8080 - you can check how to do so here
Open a terminal and run this command:
docker run -p 8080:8080 -ti registry.gitlab.com/dyrekcja117/proxyexample:application-b-proxy
Open another terminal and run this command:
docker run --network="host" registry.gitlab.com/dyrekcja117/proxyexample:application-a
In a third terminal, open a shell in the container of application A:
docker exec -ti <name of docker container> sh
and try to curl whatever address you want.
The issue I am struggling with is that when I run curl from inside the container with Application A, the request is intercepted by my proxy and can be seen in the logs, but whenever Application A itself makes the same request, it is not intercepted. The same thing happens on Kubernetes.
Let's first wrap up the facts we discovered over our troubleshooting discussion in the comments:
Your need is that APP-A receives an HTTP request and a token needs to be added in flight by PROXY before sending the request to your data storage.
Every container in a Pod shares the network namespace, including the IP address and network ports. Containers inside a Pod can communicate with one another using localhost, source here.
You were able to log in to container application-a and send a curl request to container application-b-proxy on port 1030, proving the above statement.
The problem is that your proxy is not intercepting the request as expected.
You mention that you were able to make it work on localhost, but on localhost the proxy has more power than inside a container.
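For completeness, that check can be reproduced on the cluster roughly like this (the pod name is a placeholder; -c selects the application-a container and -x forces the request through the proxy port from your HTTP_PROXY value):
kubectl -n myown exec -it <proxy-deployment-pod> -c application-a -- \
  curl -v -x http://127.0.0.1:1030 http://example.com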
Since I have access neither to your app-a code nor to the mitmproxy token.py, I will give you a general example of how to redirect traffic from container-a to container-b.
In order to make it work, I'll use NGINX proxy_pass: it simply proxies the request to container-b.
Reproduction:
I'll use a nginx server as container-a.
I'll build it with this Dockerfile:
FROM nginx:1.17.3
RUN rm /etc/nginx/conf.d/default.conf
COPY frontend.conf /etc/nginx/conf.d
I'll add this configuration file frontend.conf:
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
It tells nginx that the traffic should be sent to container-b, which is listening on port 8080 inside the same pod.
I'll build this image as nginxproxy in my local repo:
$ docker build -t nginxproxy .
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
nginxproxy   latest   7c203a72c650   4 minutes ago   126MB
Now the full.yaml deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy-deployment
  labels:
    app: application-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application-a
  template:
    metadata:
      labels:
        app: application-a
    spec:
      containers:
      - name: container-a
        image: nginxproxy:latest
        ports:
        - containerPort: 80
        imagePullPolicy: Never
      - name: container-b
        image: echo8080:latest
        ports:
        - containerPort: 8080
        imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  name: proxy-svc
spec:
  ports:
  - nodePort: 31000
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: application-a
  sessionAffinity: None
  type: NodePort
NOTE: I set imagePullPolicy as Never because I'm using my local docker image cache.
I'll list the changes I made to help you link it to your current environment:
container-a is doing the work of your application-a and I'm serving nginx on port 80 where you are using port 8090
container-b is receiving the request, as your application-b-proxy. The image I'm using was based on mendhak/http-https-echo, normally it listens on port 80, I've made a custom image just changing to listen on port 8080 and named it echo8080.
First I created an nginx pod and exposed it alone to show you it's running (since it has no content behind it, it will return a bad gateway error, but you can see the output is from nginx):
$ kubectl apply -f nginx.yaml
pod/nginx created
service/nginx-svc created
$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          64s
$ kubectl get svc
NAME        TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-svc   NodePort   10.103.178.109   <none>        80:31491/TCP   66s
$ curl http://192.168.39.51:31491
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.17.3</center>
</body>
</html>
I deleted the nginx pod and created an echo-app pod and exposed it to show you the response it gives when curled directly from outside:
$ kubectl apply -f echo.yaml
pod/echo created
service/echo-svc created
$ kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
echo   1/1     Running   0          118s
$ kubectl get svc
NAME       TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
echo-svc   NodePort   10.102.168.235   <none>        8080:32116/TCP   2m
$ curl http://192.168.39.51:32116
{
  "path": "/",
  "headers": {
    "host": "192.168.39.51:32116",
    "user-agent": "curl/7.52.1",
  },
  "method": "GET",
  "hostname": "192.168.39.51",
  "ip": "::ffff:172.17.0.1",
  "protocol": "http",
  "os": {
    "hostname": "echo"
  },
Now I'll apply the full.yaml:
$ kubectl apply -f full.yaml
deployment.apps/proxy-deployment created
service/proxy-svc created
$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
proxy-deployment-9fc4ff64b-qbljn   2/2     Running   0          1s
$ k get service
NAME        TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
proxy-svc   NodePort   10.103.238.103   <none>        80:31000/TCP   31s
Now the proof of concept: from outside the cluster, I'll send a curl to my node IP 192.168.39.51 on port 31000, which sends the request to port 80 on the pod (handled by nginx):
$ curl http://192.168.39.51:31000
{
  "path": "/",
  "headers": {
    "host": "127.0.0.1:8080",
    "user-agent": "curl/7.52.1",
  },
  "method": "GET",
  "hostname": "127.0.0.1",
  "ip": "::ffff:127.0.0.1",
  "protocol": "http",
  "os": {
    "hostname": "proxy-deployment-9fc4ff64b-qbljn"
  },
As you can see, the response has all the parameters of the pod, indicating it was sent from 127.0.0.1 instead of a public IP, showing that the NGINX is proxying the request to container-b.
Considerations:
This example was created to show you how the communication works inside Kubernetes.
You will have to check how your application-a is handling its outbound requests and edit it so the traffic goes through your proxy (see the sketch below).
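One common pitfall, stated here as an assumption since I haven't seen application-a's code: a JVM/Spring Boot application does not honour the HTTP_PROXY environment variable by default, so the proxy usually has to be passed as JVM system properties, for example via JAVA_TOOL_OPTIONS in the container spec:
      - name: application-a
        image: registry.gitlab.com/application-a
        env:
        - name: JAVA_TOOL_OPTIONS
          # standard JVM proxy properties; 1030 is the mitmdump listen port from the question
          value: "-Dhttp.proxyHost=127.0.0.1 -Dhttp.proxyPort=1030 -Dhttps.proxyHost=127.0.0.1 -Dhttps.proxyPort=1030"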
Here are a few links with tutorials and explanation that could help you port your application to kubernetes environment:
Virtual Hosts on nginx
Implementing a Reverse proxy Server in Kubernetes Using the Sidecar Pattern
Validating OAuth 2.0 Access Tokens with NGINX and NGINX Plus
Use nginx to Add Authentication to Any Application
Connecting a Front End to a Back End Using a Service
Transparent Proxy and Filtering on K8s
I hope this example helps you.

Kubernetes service with ClusterIP does not pass the request to some of its Endpoints

I'm new to Kubernetes. I have set up a 3-node cluster with two workers according to the guide here.
My configurations
kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.10", GitCommit:"575467a0eaf3ca1f20eb86215b3bde40a5ae617a", GitTreeState:"clean", BuildDate:"2019-12-11T12:32:32Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:51:21Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
I deployed a simple Python service that listens on port 8000 over HTTP and replies "Hello world".
My deployment config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-app
  labels:
    app: frontend-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend-app
  template:
    metadata:
      labels:
        app: frontend-app
    spec:
      containers:
      - name: pyfrontend
        image: rushantha/pyfront:1.0
        ports:
        - containerPort: 8000
I exposed this as a service:
kubectl expose deploy frontend-app --port 8000
I can see it deployed and running.
kubectl describe svc frontend-app
Name: frontend-app
Namespace: default
Labels: app=frontend-app
Annotations: <none>
Selector: app=frontend-app
Type: ClusterIP
IP: 10.96.113.192
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
Endpoints: 172.16.1.10:8000,172.16.2.9:8000
Session Affinity: None
Events: <none>
When I log in to each worker machine and curl the pods directly, they respond,
i.e. curl 172.16.1.10:8000 or curl 172.16.2.9:8000
However, when I try to access the pods via the ClusterIP, only one pod ever responds, so curl sometimes hangs; most probably the other pod does not respond. I confirmed this by tailing the access logs of both pods: one pod never receives any requests.
curl 10.96.113.192:8000/ ---> Hangs sometimes.
Any ideas how to troubleshoot and fix this?
After comparing the tutorial document with the OP's configuration output,
I discovered that the --pod-network-cidr declared in the document is different from the OP's endpoint addresses; fixing that solved the problem.
The Network in the flannel configuration should match the pod network CIDR, otherwise pods won't be able to communicate with each other (see the sketch below).
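A minimal sketch of how to compare the two values; the ConfigMap name kube-flannel-cfg and the 10.244.0.0/16 default come from the standard flannel manifest and may differ in your install:
# CIDR the control plane was initialised with
kubectl -n kube-system get configmap kubeadm-config -o yaml | grep -i podSubnet
# Network flannel is actually using
kubectl -n kube-system get configmap kube-flannel-cfg -o jsonpath='{.data.net-conf\.json}'
# the two must match, e.g. a cluster created with:
# kubeadm init --pod-network-cidr=10.244.0.0/16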
Some additional information that is worth checking:
Under the CIDR Notation section there is a good explanation of how this system works.
I find this document about networking in Kubernetes very helpful.
