Istio TCP traffic mirroring

I'm trying to mirror my TCP production traffic to our dev environment.
We're using istio and kubernetes.
I checked the istio documentation about mirroring:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
    mirror:
      host: httpbin
      subset: v2
But this only seems to work for HTTP traffic, right?
When I use it for TCP I get:
unknown field "mirror" in v1alpha3.TCPRoute
Does anyone know an alternative way to duplicate the traffic?
Thanks in advance,
Chris

There is no TCP traffic mirroring in Istio; the mirror field is only defined on HTTP routes. The reference documents what is supported for TCP:
https://istio.io/docs/reference/config/networking/v1alpha3/virtual-service/
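For reference, TCP routes only support L4 matching and (weighted) routing. A minimal sketch of what a TCP route can do, with names assumed for illustration:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-echo   # name and host assumed for illustration
spec:
  hosts:
  - tcp-echo
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo
        subset: v1
      weight: 100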

Related

Different hosts (Pods in Kubernetes) respond with different certificates for the same hostname

I have a weird problem: when asking for my internal hostname xxx.home.arpa, via e.g. openssl s_client -connect xxx.home.arpa:443, one (example) pod
- image: docker.io/library/node:8.17.0-slim
  name: node
  args:
  - "86400"
  command:
  - sleep
is getting a response with the DEFAULT NGINX INGRESS CERTIFICATE.
Another pod in the same namespace, running the same command, gets a response with my custom certificate.
Question:
Why does one pod RECEIVE a different cert for the same request?
For the purpose of this problem, please assume that cert-manager and the certs are properly configured - they work in most of the system; it's only a few pods that are misbehaving.
Configuration: k8s nginx ingress, calico CNI, custom coredns svc which manages DNS responses (might be important?), my own CA authority.
The Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: ca-issuer
    kubernetes.io/ingress.class: nginx
  creationTimestamp: "2022-03-13T06:54:17Z"
  generation: 1
  name: gerrit-ingress
  namespace: gerrit
  resourceVersion: "739842"
  uid: f22034ab-0ed8-4779-b01e-2738e6f63eb7
spec:
  rules:
  - host: gerrit.home.arpa
    http:
      paths:
      - backend:
          service:
            name: gerrit-gerrit-service
            port:
              number: 80
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - gerrit.home.arpa
    secretName: gerrit-tls
status:
  loadBalancer:
    ingress:
    - ip: 192.168.10.2
Most of the configuration (except DNS) is up here.
As it turns out, my initial guesses were far off - this particular container had a set of tools that were either configured not to send a servername or did not support SNI at all (which was the problem), specifically yarn:1.x and openssl:1.0.x.
The problem was with SNI, of course: newer openssl and curl send the servername by default (e.g. openssl s_client -connect xxx.home.arpa:443 -servername xxx.home.arpa), satisfying the SNI requirement; without it, nginx falls back to its default certificate.
To address this I considered two solutions:
Wildcard DNS for the clients that do not support SNI, which is easier but does not feel secure.
TLS termination with a reverse proxy, allowing me to transparently front a non-SNI client with one that supports SNI - which I haven't yet tried.
I went with wildcard DNS, though I don't feel that this should be done in prod. :)
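A related knob, in case it helps others landing here: with ingress-nginx, clients that do not send SNI are served the controller's default certificate, which can be pointed at a certificate of your choosing via the --default-ssl-certificate flag. A minimal sketch of the controller Deployment fragment (the gerrit/wildcard-tls secret name is assumed for illustration):

containers:
- name: controller
  args:
  - /nginx-ingress-controller
  # serve this cert to clients that do not send SNI (secret name assumed)
  - --default-ssl-certificate=gerrit/wildcard-tls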

How do I route traffic to an external SFTP server via a port in kubernetes nginx?

The end goal: be able to sftp into the server using domain.com:42150 using routing through Kubernetes.
The reason: This behavior is currently handled by an HAProxy config that we are moving away from, but we still need to support this behavior in our Kubernetes set up.
I came across this and could not figure out how to make it work.
I have the IP of the sftp server and the port.
So, basically, if a request comes in at domain.com:42150 then it should connect to external-ip:22.
I have created a config-map like the one in the linked article:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: nginx-ingress
data:
  42150: "nginx-ingress/external-sftp:80"
Which, by my understanding, should route requests on port 42150 to this service:
apiVersion: v1
kind: Service
metadata:
  name: external-sftp
  namespace: nginx-ingress
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 22
    protocol: TCP
And although it's not listed in that article, I know from connecting to other outside services that I also need to create a matching Endpoints object:
apiVersion: v1
kind: Endpoints
metadata:
  name: external-sftp
  namespace: nginx-ingress
subsets:
- addresses:
  - ip: 12.345.67.89
  ports:
  - port: 22
    protocol: TCP
Obviously this isn't working. I never ask questions here - usually the answer is easy to find, but for this one I cannot find an answer. I'm just stuck.
Is there something I'm missing? I'm thinking this way of doing it is not possible. Is there a better way to go about doing this?
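One detail that is easy to miss with the tcp-services approach: the ConfigMap only tells nginx what to proxy. The controller must also be started with --tcp-services-configmap pointing at that ConfigMap, and the controller's own Service must expose the port. A sketch of the Service fragment (the controller Service name is assumed):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx   # assumed name of the controller's Service
  namespace: nginx-ingress
spec:
  type: LoadBalancer
  ports:
  # the TCP port must be exposed on the controller itself,
  # not just mapped in the tcp-services ConfigMap
  - name: sftp
    port: 42150
    targetPort: 42150
    protocol: TCP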

kubernetes ingress - exposing neo4j endpoint to internal network

I'm getting the below error when trying to reach an internal neo4j endpoint from another cluster:
neobolt.exceptions.ServiceUnavailable: Timed out trying to establish connection to ('xx.xxx.xx.xx', 7687)
When accessing this endpoint through the browser, it shows
not a WebSocket handshake request: missing upgrade
I work on GCP. This is what I've got:
Cluster A with Composer running Airflow
Cluster B with K8s where my application is deployed
I know for sure both clusters can communicate
Cluster B has a neo4j ingress defined as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/publicEndpoints: '[{"addresses":["xx.xxx.xx.xx"],"port":443,"protocol":"HTTPS","serviceName":"dev:neo4j","ingressName":"dev:neo4j-dev-ing","hostname":"neo4j-dev.host_name","allNodes":false}]'
  generation: 6
  name: neo4j-dev-ing
spec:
  rules:
  - host: neo4j-dev.host_name
    http:
      paths:
      - backend:
          serviceName: neo4j
          servicePort: neo4j-dev-bolt
  tls:
  - hosts:
    - neo4j-dev.host_name
status:
  loadBalancer:
    ingress:
    - ip: xx.xxx.xx.xx
My neo4j service looks as follows:
apiVersion: v1
kind: Service
metadata:
  name: neo4j
spec:
  type: ClusterIP
  selector:
    app: neo4j
    component: neo4j
  ports:
  - port: 7473
    name: neo4j-dev-https
    targetPort: 7473
  - port: 7474
    name: neo4j-dev-http
    targetPort: 7474
  - port: 7687
    name: neo4j-dev-bolt
    targetPort: 7687
  - port: 1337
    name: neo4j-dev-shell
    targetPort: 1337
I've seen a few related questions, but nothing concrete and a lot of contradictory information.
Any ideas if this can even work at all? Can someone with insight into networking explain why this isn't working, or point me to the concepts I need to research to understand what's going on? I'm stuck.
It's a known issue with accessing neo4j from outside of Kubernetes, and there are no straightforward workarounds: bolt (port 7687) is not an HTTP protocol, so an HTTP(S) ingress cannot proxy it - consistent with the "not a WebSocket handshake request" message you see when probing it from a browser.
A complex workaround using multiple static IPs has been described here:
https://neo4j.com/labs/neo4j-helm/1.0.0/externalexposure/
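If both clusters sit in the same VPC, a simpler option than the multi-IP setup may be to bypass the HTTP ingress entirely and expose bolt through an internal LoadBalancer Service. A sketch reusing the selector from the question (the GCP annotation shown is the legacy one and is an assumption about your cluster version):

apiVersion: v1
kind: Service
metadata:
  name: neo4j-bolt-internal   # assumed name
  annotations:
    cloud.google.com/load-balancer-type: "Internal"   # GCP internal LB
spec:
  type: LoadBalancer
  selector:
    app: neo4j
    component: neo4j
  ports:
  - name: neo4j-dev-bolt
    port: 7687
    targetPort: 7687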

Need help troubleshooting Istio IngressGateway HTTP ERROR 503

My Test Environment Cluster has the following configurations :
Global Mesh Policy (installed as part of cluster setup by our org): output of kubectl describe MeshPolicy default
Name:         default
Namespace:
Labels:       operator.istio.io/component=Pilot
              operator.istio.io/managed=Reconcile
              operator.istio.io/version=1.5.6
              release=istio
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"authentication.istio.io/v1alpha1","kind":"MeshPolicy","metadata":{"annotations":{},"labels":{"operator.istio.io/component":...
API Version:  authentication.istio.io/v1alpha1
Kind:         MeshPolicy
Metadata:
  Creation Timestamp:  2020-07-23T17:41:55Z
  Generation:          1
  Resource Version:    1088966
  Self Link:           /apis/authentication.istio.io/v1alpha1/meshpolicies/default
  UID:                 d3a416fa-8733-4d12-9d97-b0bb4383c479
Spec:
  Peers:
    Mtls:
Events:  <none>
The above configuration, I believe, enables services to receive connections in mTLS mode.
DestinationRule: output of kubectl describe DestinationRule commerce-mesh-port -n istio-system
Name:         commerce-mesh-port
Namespace:    istio-system
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"networking.istio.io/v1alpha3","kind":"DestinationRule","metadata":{"annotations":{},"name":"commerce-mesh-port","namespace"...
API Version:  networking.istio.io/v1beta1
Kind:         DestinationRule
Metadata:
  Creation Timestamp:  2020-07-23T17:41:59Z
  Generation:          1
  Resource Version:    33879
  Self Link:           /apis/networking.istio.io/v1beta1/namespaces/istio-system/destinationrules/commerce-mesh-port
  UID:                 4ef0d49a-88d9-4b40-bb62-7879c500240a
Spec:
  Host:  *
  Ports:
    Name:      commerce-mesh-port
    Number:    16443
    Protocol:  TLS
  Traffic Policy:
    Tls:
      Mode:  ISTIO_MUTUAL
Events:  <none>
Istio Ingress-Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: finrpt-gateway
  namespace: finrpt
spec:
  selector:
    istio: ingressgateway # use Istio's default ingress gateway
  servers:
  - port:
      name: https
      number: 443
      protocol: https
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*"
  - port:
      name: http
      number: 80
      protocol: http
    tls:
      httpsRedirect: true
    hosts:
    - "*"
I created a secret to be used for TLS and am using it to terminate TLS traffic at the gateway (as configured with mode SIMPLE).
Next, I configured my VirtualService in the same namespace with a URL match for HTTP:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: finrpt-virtualservice
  namespace: finrpt
spec:
  hosts:
  - "*"
  gateways:
  - finrpt-gateway
  http:
  - match:
    - queryParams:
        target:
          exact: "commercialprocessor"
      ignoreUriCase: true
    route:
    - destination:
        host: finrpt-commercialprocessor
        port:
          number: 8118
The Service CommercialProcessor (ClusterIP) is expecting traffic on HTTP/8118.
With the above settings in place, when I browse to the external IP of my ingress-gateway, I first get a certificate error (expected, as I am using a self-signed certificate for testing) and then, on proceeding, HTTP error 503.
I am not able to find any useful logs in the gateway. I am wondering whether the gateway is unable to talk to my backend in plaintext after TLS termination - is it expecting HTTPS where I have configured HTTP?
Any help is highly appreciated; I am very new to Istio and think I might be missing something naive here.
My expectation is: I should be able to hit the gateway with HTTPS, the gateway terminates TLS and forwards the unencrypted traffic to the destination configured in the VirtualService on the HTTP port, based on the URL match ONLY (I have to keep the URL-match part constant here).
Since 503s occur often and the root cause is hard to find, I've put together a little troubleshooting answer: other questions with 503 errors (with answers) that I've encountered over several months, useful information from the Istio documentation, and things I would check.
Examples with 503 error:
Istio 503:s between (Public) Gateway and Service
IstIO egress gateway gives HTTP 503 error
Istio Ingress Gateway with TLS termination returning 503 service unavailable
how to terminate ssl at ingress-gateway in istio?
Accessing service using istio ingress gives 503 error when mTLS is enabled
Common causes of 503 errors from the Istio documentation:
https://istio.io/docs/ops/best-practices/traffic-management/#avoid-503-errors-while-reconfiguring-service-routes
https://istio.io/docs/ops/common-problems/network-issues/#503-errors-after-setting-destination-rule
https://istio.io/latest/docs/concepts/traffic-management/#working-with-your-applications
A few things I would check first:
- Check the service port names: Istio can only route traffic correctly when it knows the protocol, so port names should follow the <protocol>[-<suffix>] convention mentioned in the Istio documentation (see the sketch after this list).
- Check mTLS: if there are any problems caused by mTLS, they usually result in a 503 error.
- Check that Istio itself works: I would recommend deploying the bookinfo example application and confirming it works as expected.
- Check whether your namespace is injected, with kubectl get namespace -L istio-injection.
- If a VirtualService using subsets arrives before the DestinationRule where those subsets are defined, the Envoy configuration generated by Pilot refers to non-existent upstream pools. This results in HTTP 503 errors until all configuration objects are available to Pilot.
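For example, a sketch of the port-naming convention applied to the backing Service from the question (the service name comes from the VirtualService above; the selector and port name are assumed):

apiVersion: v1
kind: Service
metadata:
  name: finrpt-commercialprocessor
spec:
  selector:
    app: commercialprocessor   # assumed label
  ports:
  - name: http-finrpt   # the "http" prefix tells Istio the protocol
    port: 8118
    targetPort: 8118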
Hope you find this useful.

Two ingress controller on same K8S cluster

I have installed the following two different ingress controllers on my DigitalOcean managed K8S cluster:
Nginx
Istio
and they have been assigned two different IP addresses. My question is: is it wrong to have two different ingress controllers on the same K8s cluster?
The reason why I have done it: nginx is for tools like Harbor, Argo CD, etc., and Istio is for microservices.
I have also noticed that when both are installed alongside each other, sometimes during a deployment K8s suddenly goes down.
For example, I have deployed:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-first
  namespace: dev
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes-first
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-first
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-first
  template:
    metadata:
      labels:
        app: hello-kubernetes-first
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.7
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: Hello from the first deployment!
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: helloworld-ingress
  namespace: dev
spec:
  rules:
  - host: hello.service.databaker.io
    http:
      paths:
      - path: /*
        backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
---
Then I've got:
Error from server (InternalError): error when creating "istio-app.yml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: dial tcp 10.245.107.175:443: i/o timeout
You have raised several points - before answering your question, let's take a step back.
K8s Ingress not recommended by Istio
It is important to note how Istio does not recommend using K8s Ingress:
Using the Istio Gateway, rather than Ingress, is recommended to make use of the full feature set that Istio offers, such as rich traffic management and security features.
Ref: https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/
As noted, the Istio Gateway (Istio IngressGateway and EgressGateway) acts as the edge; you can find more in https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/.
Multiple endpoints within Istio
If you need one public endpoint for business traffic and another for monitoring (such as Argo CD or Harbor, as you mentioned), you can achieve that with Istio alone. There are roughly two approaches:
1. Create separate Istio IngressGateways - one for main traffic, and another for monitoring.
2. Create one Istio IngressGateway, and use Gateway definitions to handle multiple access patterns.
Both are valid, and depending on requirements you may need to choose one over the other.
Approach #2 is where Istio's traffic-management system shines. It is a great example of Istio's power, but the setup is slightly complex if you are new to it, so here is an example.
Example of Approach #2
When you create the Istio IngressGateway following the default installation, it creates an istio-ingressgateway Service like the one below (YAML definition heavily simplified):
apiVersion: v1
kind: Service
metadata:
  labels:
    app: istio-ingressgateway
    istio: ingressgateway
  name: istio-ingressgateway
  namespace: istio-system
  # ... other attributes ...
spec:
  type: LoadBalancer
  # ... other attributes ...
This LB Service would then be your endpoint. (I'm not familiar with DigitalOcean's K8s environment, but I assume it handles the LB creation.)
Then, you can create Gateway definition like below:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: your-gateway
  namespace: istio-system
spec:
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  servers:
  - port:
      number: 3000
      name: https-your-system
      protocol: HTTPS
    hosts:
    - "your-business-domain.com"
    - "*.monitoring-domain.com"
  # ... other attributes ...
You can then create 2 or more VirtualService definitions.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: business-virtsvc
spec:
  gateways:
  - istio-ingressgateway.istio-system.svc.cluster.local
  hosts:
  - "your-business-domain.com"
  http:
  - match:
    - port: 3000
    route:
    - destination:
        host: some-business-pod
        port:
          number: 3000
  # ... other attributes ...
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: monitoring-virtsvc
spec:
  gateways:
  - istio-ingressgateway.istio-system.svc.cluster.local
  hosts:
  - "harbor.monitoring-domain.com"
  http:
  - match:
    - port: 3000
    route:
    - destination:
        host: harbor-pod
        port:
          number: 3000
  # ... other attributes ...
NOTE: the above assumes a lot of things, such as port mapping and traffic handling. Please check the official docs for details.
So, back to the question after the long detour:
Question: [Is it] wrong to have two different ingress controllers on the same K8S cluster[?]
I believe it is OK, though this can cause errors like the one you are seeing, as the two ingress controllers fight over the same K8s Ingress resource.
As mentioned above, if you are using Istio, it's better to stick with the Istio IngressGateway instead of K8s Ingress. If you need K8s Ingress for some specific reason, you can use another Ingress controller for it, like Nginx.
As to the error you saw: it comes from the admission webhook deployed by Nginx - ingress-nginx-controller-admission.nginx.svc is not reachable. You created a K8s Ingress helloworld-ingress with the kubernetes.io/ingress.class: istio annotation, but the Nginx webhook is still intercepting K8s Ingress handling, and it fails because the Pod / Svc responsible for webhook traffic is not found.
The error itself just says something is unhealthy in K8s - potentially not enough Nodes allocated to the cluster, so Pod allocation is not happening. It's also worth noting that Istio requires some CPU and memory footprint, which may put more pressure on the cluster.
Both products have distinct characteristics and solve different types of problems, so there is no issue in having both installed on your cluster.
However, calling both of them "Ingress Controllers" is not quite correct:
- Nginx is a well-known web server.
- The Nginx ingress controller is an implementation of a Kubernetes Ingress controller based on Nginx (load balancing, HTTPS termination, authentication, traffic routing, etc.).
- Istio is a service mesh (well known in microservice architectures and used to address cross-cutting concerns - logging, tracing, HTTPS termination, etc. - in a standard way at the Pod level).
Can you provide more details about what you mean by "K8S suddenly goes down"? Are you talking about the cluster nodes, or the Pods running inside?
Thanks.
Have you looked at specifying the ingress.class (kubernetes.io/ingress.class: "nginx"), as mentioned here? https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
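That is, each Ingress meant for Nginx would carry (sketch):

metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"

while the ones meant for Istio keep kubernetes.io/ingress.class: istio, so the two controllers stop fighting over the same resources.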
