gRPC service not accessible from Traefik

I'm trying to route traffic for my gRPC service with Traefik. My Traefik configuration:
accessLog:
  filePath: "/var/logs/access.log"
api:
  dashboard: true
  insecure: true
metrics:
  prometheus: {}
providers:
  nomad:
    endpoint:
      address: http://host.docker.internal:4646
entryPoints:
  web:
    http2:
      maxConcurrentStreams: 250
    address: ":80"
  grpc:
    http2:
      maxConcurrentStreams: 250
    address: ":81"
When I spin up the container of my gRPC service, it is registered in Traefik with the h2c protocol.
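For reference, the registration comes from Traefik tags on the Nomad service; they look roughly like this (a sketch; the router and service names here are illustrative, not copied from my job file):

traefik.enable=true
traefik.http.routers.grpc-svc.rule=Host(`grpc.localhost`)
traefik.http.routers.grpc-svc.entrypoints=grpc
traefik.http.services.grpc-svc.loadbalancer.server.scheme=h2c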
When I send a gRPC request from Postman directly to the allocated node (127.0.0.1:29165), the request works.
But when I send the request to the host specified in the router rule (grpc.localhost), I get an error.
I tried changing entryPoint ports, and also disabling host header forwarding, but the problem persists.
Any help would be appreciated! Thank you.

Related

externalTrafficPolicy Local on GKE service not working

I'm using GKE version 1.21.12-gke.1700 and I'm trying to set externalTrafficPolicy to Local on my nginx external load balancer (not ingress). After the change nothing happens: the source I see is still an internal IP from the Kubernetes IP range instead of the client's IP.
This is my service's YAML:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ext
  namespace: my-namespace
spec:
  externalTrafficPolicy: Local
  healthCheckNodePort: xxxxx
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerSourceRanges:
  - x.x.x.x/32
  ports:
  - name: dashboard
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: nginx
  sessionAffinity: None
  type: LoadBalancer
And the nginx logs:
*2 access forbidden by rule, client: 10.X.X.X
My goal is to restrict access per endpoint (deny all and allow only specific clients).
You can use curl to query the IP of the load balancer, for example: curl 202.0.113.120. Note that setting service.spec.externalTrafficPolicy to Local in GKE removes nodes without service endpoints from the list of nodes eligible for load-balanced traffic, so with the Local policy you need at least one service endpoint on a node for it to receive traffic. Because of this, service.spec.healthCheckNodePort matters: that port has to be allowed by an ingress firewall rule. You can get the health check node port from your YAML with this command:
kubectl get svc loadbalancer -o yaml | grep -i healthCheckNodePort
You can follow this guide if you need more information about how the LoadBalancer Service type works in GKE. Finally, you can limit traffic from outside at your external load balancer by setting loadBalancerSourceRanges; in the following link you can find more information on how to protect your applications from outside traffic.
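As a rough sketch of that firewall rule (the rule name, network, and node port below are placeholders, and the source ranges should be the health-check ranges documented by GCP for your load balancer type):

# allow GCP health-check probes to reach the service's healthCheckNodePort
gcloud compute firewall-rules create allow-lb-health-check \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:30123 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16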

Why can't ingress-nginx proxy gRPC when the client uses insecure?

path: go-client --> ingress-nginx --> grpc pod
Because all the traffic stays on our private network, we didn't buy a public certificate; instead we use a self-signed certificate. The first snippet below worked well, but the second failed. I don't know why, and I want to know what insecure exactly means.
code that worked well:
cert, _ := credentials.NewClientTLSFromFile("./example.pem", "example.com")
conn, err := grpc.DialContext(
    ctx,
    "example.com:443",
    grpc.WithTransportCredentials(cert),
    grpc.WithBlock(),
)
code that received 400 Bad Request:
conn, err := grpc.DialContext(
    ctx,
    "example.com:443",
    grpc.WithTransportCredentials(insecure.NewCredentials()),
    grpc.WithBlock(),
)
nginx access log for bad request
"PRI * HTTP/2.0" 400
ingress yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    secretName: example-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /foo/bar
        pathType: Prefix
        backend:
          service:
            name: grpc-svc
            port:
              name: grpc-port
Package insecure provides an implementation of the credentials.TransportCredentials interface which disables transport security. More specifically, it does not perform any TLS handshaking or use any certificates.
gRPC requires that the user pass it some credentials when creating the ClientConn. If your deployment does not use any certificates and you know that it is secure (for whatever reason), then the insecure package is your friend. But if you are using self-signed certificates, they are still certificates, and a TLS handshake still needs to happen. So in this case you should keep using the code you mentioned at the top of your question. Hope this helps.
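For clarity, the working call is roughly equivalent to building the credentials from an explicit certificate pool yourself; here is a sketch of that variant (the file name and server name are placeholders, and it assumes imports of crypto/tls, crypto/x509, log, os, google.golang.org/grpc, and google.golang.org/grpc/credentials):

// Trust the self-signed certificate explicitly instead of disabling transport security.
pem, err := os.ReadFile("./example.pem")
if err != nil {
    log.Fatal(err)
}
pool := x509.NewCertPool()
if !pool.AppendCertsFromPEM(pem) {
    log.Fatal("failed to parse certificate")
}
creds := credentials.NewTLS(&tls.Config{RootCAs: pool, ServerName: "example.com"})
conn, err := grpc.DialContext(
    ctx,
    "example.com:443",
    grpc.WithTransportCredentials(creds),
    grpc.WithBlock(),
)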

Need help troubleshooting Istio IngressGateway HTTP ERROR 503

My Test Environment Cluster has the following configurations :
Global Mesh Policy (installed as part of cluster setup by our org): output of kubectl describe MeshPolicy default
Name:         default
Namespace:
Labels:       operator.istio.io/component=Pilot
              operator.istio.io/managed=Reconcile
              operator.istio.io/version=1.5.6
              release=istio
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"authentication.istio.io/v1alpha1","kind":"MeshPolicy","metadata":{"annotations":{},"labels":{"operator.istio.io/component":...
API Version:  authentication.istio.io/v1alpha1
Kind:         MeshPolicy
Metadata:
  Creation Timestamp:  2020-07-23T17:41:55Z
  Generation:          1
  Resource Version:    1088966
  Self Link:           /apis/authentication.istio.io/v1alpha1/meshpolicies/default
  UID:                 d3a416fa-8733-4d12-9d97-b0bb4383c479
Spec:
  Peers:
    Mtls:
Events:  <none>
I believe the above configuration enables services to receive connections in mTLS mode.
DestinationRule: output of kubectl describe DestinationRule commerce-mesh-port -n istio-system
Name:         commerce-mesh-port
Namespace:    istio-system
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"networking.istio.io/v1alpha3","kind":"DestinationRule","metadata":{"annotations":{},"name":"commerce-mesh-port","namespace"...
API Version:  networking.istio.io/v1beta1
Kind:         DestinationRule
Metadata:
  Creation Timestamp:  2020-07-23T17:41:59Z
  Generation:          1
  Resource Version:    33879
  Self Link:           /apis/networking.istio.io/v1beta1/namespaces/istio-system/destinationrules/commerce-mesh-port
  UID:                 4ef0d49a-88d9-4b40-bb62-7879c500240a
Spec:
  Host:  *
  Ports:
    Name:      commerce-mesh-port
    Number:    16443
    Protocol:  TLS
  Traffic Policy:
    Tls:
      Mode:  ISTIO_MUTUAL
Events:  <none>
Istio Ingress-Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: finrpt-gateway
  namespace: finrpt
spec:
  selector:
    istio: ingressgateway # use Istio's default ingress gateway
  servers:
  - port:
      name: https
      number: 443
      protocol: https
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*"
  - port:
      name: http
      number: 80
      protocol: http
    tls:
      httpsRedirect: true
    hosts:
    - "*"
I created a secret to be used for TLS and am using it to terminate the TLS traffic at the gateway (as configured with mode SIMPLE).
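For reference, the secret was created roughly like this (the key/cert file names are placeholders; this assumes the standard file-mount setup where the ingress gateway pod mounts a secret named istio-ingressgateway-certs at /etc/istio/ingressgateway-certs):

kubectl create -n istio-system secret tls istio-ingressgateway-certs \
    --key=finrpt.key --cert=finrpt.crt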
Next, I configured my VirtualService in the same namespace and did a URL match for HTTP :
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: finrpt-virtualservice
  namespace: finrpt
spec:
  hosts:
  - "*"
  gateways:
  - finrpt-gateway
  http:
  - match:
    - queryParams:
        target:
          exact: "commercialprocessor"
      ignoreUriCase: true
    route:
    - destination:
        host: finrpt-commercialprocessor
        port:
          number: 8118
The Service CommercialProcessor (ClusterIP) is expecting traffic on HTTP/8118.
With the above in place, when I browse to the external IP of my ingress gateway, I first get a certificate error (expected, since I am using a self-signed certificate for testing) and then, on proceeding, I get HTTP error 503.
I am not able to find any useful logs in the gateway. I am wondering whether, after TLS termination, the gateway is unable to communicate with the destination in my VirtualService in plaintext: is it expecting https while I have configured http?
Any help is highly appreciated, I am very new to Istio and I think I might be missing something naive here.
My expectation is: I should be able to hit the gateway with https, the gateway does the termination and forwards the unencrypted traffic to the destination configured in the VirtualService on the HTTP port, based on the URL regex match ONLY (I have to keep the URL-match part constant here).
Since 503s occur often and the root cause can be hard to find, I put together a little troubleshooting answer: other questions with 503 errors that I have run into over several months (with answers), useful information from the Istio documentation, and the things I would check first.
Examples with 503 error:
Istio 503:s between (Public) Gateway and Service
IstIO egress gateway gives HTTP 503 error
Istio Ingress Gateway with TLS termination returning 503 service unavailable
how to terminate ssl at ingress-gateway in istio?
Accessing service using istio ingress gives 503 error when mTLS is enabled
Common causes of 503 errors from the Istio documentation:
https://istio.io/docs/ops/best-practices/traffic-management/#avoid-503-errors-while-reconfiguring-service-routes
https://istio.io/docs/ops/common-problems/network-issues/#503-errors-after-setting-destination-rule
https://istio.io/latest/docs/concepts/traffic-management/#working-with-your-applications
A few things I would check first:
Check the Service port names; Istio can route traffic correctly only if it knows the protocol. Names should follow <protocol>[-<suffix>], as mentioned in the Istio documentation (see the sketch after this list).
Check mTLS; if there are any problems caused by mTLS, they usually result in 503 errors.
Check that Istio itself works; I would recommend deploying the Bookinfo example application and verifying that it works as expected.
Check whether your namespace is injected with kubectl get namespace -L istio-injection
If a VirtualService using subsets arrives before the DestinationRule where the subsets are defined, the Envoy configuration generated by Pilot will refer to non-existent upstream pools. This results in HTTP 503 errors until all configuration objects are available to Pilot.
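As a sketch of the port-naming convention applied to your backend (the Service name and port come from the question; the selector label is an assumption, adjust it to your deployment):

apiVersion: v1
kind: Service
metadata:
  name: finrpt-commercialprocessor
  namespace: finrpt
spec:
  selector:
    app: commercialprocessor   # assumed label, not taken from the actual manifest
  ports:
  - name: http-finrpt          # <protocol>[-<suffix>] lets Istio treat this port as HTTP
    port: 8118
    targetPort: 8118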
Hope you find this useful.

Kubernetes loadbalancer stops serving traffic if using local traffic policy

Currently I am having an issue with one of my services of type LoadBalancer. I am trying to get source IP preservation as stated in the docs. However, when I set externalTrafficPolicy to Local, I lose all traffic to the service. Is there something I'm missing that is causing this to fail?
Load Balancer Service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: loadbalancer
    role: loadbalancer-service
  name: lb-test
  namespace: default
spec:
  clusterIP: 10.3.249.57
  externalTrafficPolicy: Local
  ports:
  - name: example service
    nodePort: 30581
    port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    app: loadbalancer-example
    role: example
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: *example.ip*
Could be several things. A couple of suggestions (a few kubectl checks are sketched after this list):
Your service is getting an external IP and doesn't know how to reply back based on the local IP address of the pod.
Try running a sniffer on your pod to see if you are getting packets from the external source.
Try checking the logs of your application.
The health check in your load balancer is failing. Check the load balancer for your service in the GCP console.
Check whether the instance port is listening (probably not, if your health check is failing).
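A minimal sketch of those checks, using the names from the Service above:

# With externalTrafficPolicy: Local, only nodes running an endpoint pod pass the LB health check.
kubectl get pods -o wide -l app=loadbalancer-example,role=example
kubectl get endpoints lb-test
# The node port the cloud load balancer probes for the health check:
kubectl get svc lb-test -o jsonpath='{.spec.healthCheckNodePort}'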
Hope it helps.

Communicating with Redis server from a container behind Envoy

I have deployed envoy containers as part of an Istio deployment over k8s.
Each Envoy proxy container is installed as a "sidecar" next to the app container within the k8s's pod.
I'm able to initiate HTTP traffic from within the application, but when trying to contact the Redis server (another container with its own Envoy proxy), I'm unable to connect and receive an HTTP/1.1 400 Bad Request message from Envoy.
When examining the Envoy logs, I can see the following message whenever this connection passes through Envoy: HTTP/1.1" 0 - 0 0 0 "_"."_"."_"."_""
As far as I understand, Redis commands are sent over plain TCP, without HTTP.
Is it possible that Envoy expects to see only HTTP traffic and rejects TCP-only traffic?
Assuming my understanding is correct, is there a way to change this behavior using Istio, so that it accepts and processes generic TCP traffic as well?
The following are my related deployment yaml files:
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
  labels:
    component: redis
    role: client
spec:
  selector:
    app: redis
  ports:
  - name: http
    port: 6379
    targetPort: 6379
    protocol: TCP
  type: ClusterIP
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-db
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:3.2-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 6379
Thanks
Getting into envoy (istio proxy):
kubectl exec -it my-pod -c proxy bash
Looking at envoy configuration:
cat /etc/envoy/envoy-rev2.json
You will see that it generates a TCP proxy filter, which handles TCP-only traffic. Redis example:
"address": "tcp://10.35.251.188:6379",
"filters": [
  {
    "type": "read",
    "name": "tcp_proxy",
    "config": {
      "stat_prefix": "tcp",
      "route_config": {
        "routes": [
          {
            "cluster": "out.cd7acf6fcf8d36f0f3bbf6d5cccfdb5da1d1820c",
            "destination_ip_list": [
              "10.35.251.188/32"
            ]
          }
        ]
      }
    }
  }
]
In your case, putting http in the Redis Service port name (in the Kubernetes Service manifest) generates an http_connection_manager filter, which doesn't handle raw TCP.
See istio docs:
Kubernetes Services are required for properly functioning Istio service. Service ports must be named and these names must begin with http or grpc prefix to take advantage of Istio’s L7 routing features, e.g. name: http-foo or name: http is good. Services with non-named ports or with ports that do not have a http or grpc prefix will be routed as L4 traffic.
Bottom line: just remove the port name from the Redis Service and it should solve the issue :)
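A sketch of the adjusted Service, with the same fields as in the question and only the port name dropped (per the quoted docs, a name without an http or grpc prefix would also be routed as L4):

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
  labels:
    component: redis
    role: client
spec:
  selector:
    app: redis
  ports:
  - port: 6379          # no "http" name, so Envoy configures a plain TCP proxy filter
    targetPort: 6379
    protocol: TCP
  type: ClusterIP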
