Communicating with Redis server from a container behind Envoy - http

I have deployed envoy containers as part of an Istio deployment over k8s.
Each Envoy proxy container is installed as a "sidecar" next to the app container within the k8s's pod.
I'm able to initiate HTTP traffic from within the application, but when trying to contact the Redis server (another container with its own Envoy proxy), I can't connect and instead receive an HTTP/1.1 400 Bad Request response from Envoy.
When examining Envoy's logs I can see the following message whenever this connection passes through the proxy: HTTP/1.1" 0 - 0 0 0 "_"."_"."_"."_""
As far as I understand, Redis commands are sent over plain TCP, without HTTP.
Is it possible that Envoy expects to see only HTTP traffic and rejects TCP only traffic?
Assuming my understanding is correct, is there a way to change this behavior using Istio and accept and process generic TCP traffic as well?
The following are my related deployment yaml files:
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
  labels:
    component: redis
    role: client
spec:
  selector:
    app: redis
  ports:
  - name: http
    port: 6379
    targetPort: 6379
    protocol: TCP
  type: ClusterIP
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-db
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:3.2-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 6379
Thanks

Getting into envoy (istio proxy):
kubectl exec -it my-pod -c proxy bash
Looking at envoy configuration:
cat /etc/envoy/envoy-rev2.json
You will see that it generates a TCP proxy filter which handles TCP only traffic. Redis example:
"address": "tcp://10.35.251.188:6379",
"filters": [
{
"type": "read",
"name": "tcp_proxy",
"config": {
"stat_prefix": "tcp",
"route_config": {
"routes": [
{
"cluster": "out.cd7acf6fcf8d36f0f3bbf6d5cccfdb5da1d1820c",
"destination_ip_list": [
"10.35.251.188/32"
]
}
]
}
}
In your case, naming the Redis service port http (in the Kubernetes Service definition) makes Istio generate an http_connection_manager filter, which doesn't handle raw TCP.
See istio docs:
Kubernetes Services are required for properly functioning Istio service. Service ports must be named and these names must begin with http or grpc prefix to take advantage of Istio’s L7 routing features, e.g. name: http-foo or name: http is good. Services with non-named ports or with ports that do not have a http or grpc prefix will be routed as L4 traffic.
Bottom line, just remove the port name from the Redis Service (or rename it so it doesn't start with http or grpc) and it should solve the issue :)
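For illustration, a minimal sketch of the Service from the question with the port name changed so Istio routes it as plain TCP (the name redis is only an example; leaving the port unnamed would also work, per the docs quoted above):
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
  labels:
    component: redis
    role: client
spec:
  selector:
    app: redis
  ports:
  - name: redis        # no http/grpc prefix, so the traffic is routed as L4 (TCP)
    port: 6379
    targetPort: 6379
    protocol: TCP
  type: ClusterIP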

Related

Expose both TCP and UDP from the same ingress

In kubernetes, I have the following service:
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: default
spec:
  ports:
  - name: tcp
    protocol: TCP
    port: 5555
    targetPort: 5555
  - name: udp
    protocol: UDP
    port: 5556
    targetPort: 5556
  selector:
    tt: test
Which exposes two ports, 5555 for TCP and 5556 for UDP. How can I expose these ports externally using the same ingress? I tried using nginx to do something like the following, but it doesn't work: it complains that mixed ports are not supported.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  5555: "default/test-service:5555"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  5556: "default/test-service:5556"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
  - name: tcp
    port: 5555
    targetPort: 5555
    protocol: TCP
  - name: udp
    port: 5556
    targetPort: 5556
    protocol: UDP
  args:
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
Is there a way to do this?
Most cloud providers do not support UDP load balancing or mixed-protocol Services, though some have cloud-specific methods to work around this limitation.
The DigitalOcean CPI does not support mixed protocols in the Service
definition, it accepts TCP only for load balancer Services. It is
possible to ask for HTTP(S) and HTTP2 ports with Service annotations.
Summary: The DO CPI is the current bottleneck with its TCP-only limitation. As long as it is there the implementation of this feature
will have no effect on the DO bills.
See more: do-mixed-protocols.
A simple solution that may solve your problem is to set up a reverse proxy on a standalone server, using Nginx, and route UDP and TCP traffic directly to your Kubernetes service.
Follow these two steps:
1. Create a NodePort Service for your application
2. Create a small server instance and run Nginx with an LB config on it
Use the NodePort type of Service, which exposes your application on your cluster nodes and makes it accessible through a node IP on a static port. This type supports multi-protocol Services. Read more about Services here.
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: default
spec:
  type: NodePort
  ports:
  - name: tcp
    protocol: TCP
    port: 5555
    targetPort: 5555
    nodePort: 30010
  - name: udp
    protocol: UDP
    port: 5556
    targetPort: 5556
    nodePort: 30011
  selector:
    tt: test
For example, this Service exposes the test pods' port 5555 through <nodeIP>:30010 over TCP and port 5556 through <nodeIP>:30011 over UDP. Please adjust the ports according to your needs; this is just an example.
Then create a small server instance and run Nginx with LB config.
For this step, you can get a small server from any cloud provider.
Once you have the server, ssh inside and run the following to install Nginx:
$ sudo yum install nginx
In the next step, you will need your node IP addresses, which you can get by running:
$ kubectl get nodes -o wide
Note: If you have a private cluster without external access to your nodes, you will have to set up a point of entry for this (for example, a NAT gateway).
Then add the following to your nginx.conf (run $ sudo vi /etc/nginx/nginx.conf):
worker_processes 1;

events {
    worker_connections 1024;
}

stream {
    upstream tcp_backend {
        server <node ip 1>:30010;
        server <node ip 2>:30010;
        server <node ip 3>:30010;
        ...
    }

    upstream udp_backend {
        server <node ip 1>:30011;
        server <node ip 2>:30011;
        server <node ip 3>:30011;
        ...
    }

    server {
        listen 5555;
        proxy_pass tcp_backend;
        proxy_timeout 1s;
    }

    server {
        listen 5556 udp;
        proxy_pass udp_backend;
        proxy_timeout 1s;
    }
}
Now you can start your Nginx server using command:
$ sudo /etc/init.d/nginx start
If you had already started your Nginx server before applying changes to the config file, you have to restart it. Execute the commands below:
$ sudo netstat -tulpn # Get the PID of the nginx process
$ sudo kill -2 <PID of nginx>
$ sudo service nginx restart
And now you have a TCP/UDP load balancer which you can access through <server IP>:<port> (5555 for TCP, 5556 for UDP).
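As a quick sanity check (assuming nc is available; for UDP, "success" only means no ICMP port-unreachable came back), you can probe both listeners from any machine that can reach the Nginx server:
$ nc -vz <server IP> 5555     # TCP listener
$ nc -vzu <server IP> 5556    # UDP listener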
See more: tcp-udp-loadbalancer.
You can enable the MixedProtocolLBService feature gate. For instructions on how to enable feature gates, see: How do you enable Feature Gates in K8s?
Restart (delete and re-create) the Ingress controller after enabling it for the settings to take effect.
MixedProtocolLBService was introduced as an alpha feature in Kubernetes 1.20; whether it becomes stable or is deprecated remains to be seen.
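With that feature gate enabled (and a cloud provider whose load balancers actually support mixed protocols), a single LoadBalancer Service can expose both ports; a minimal sketch based on the test-service above (the name test-service-lb is just an example):
apiVersion: v1
kind: Service
metadata:
  name: test-service-lb
  namespace: default
spec:
  type: LoadBalancer
  ports:
  - name: tcp
    protocol: TCP
    port: 5555
    targetPort: 5555
  - name: udp
    protocol: UDP
    port: 5556
    targetPort: 5556
  selector:
    tt: test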

Need help troubleshooting Istio IngressGateway HTTP ERROR 503

My Test Environment Cluster has the following configurations :
Global Mesh Policy (Installed as part of cluster setup by our org) : output of kubectl describe MeshPolicy default
Name:         default
Namespace:
Labels:       operator.istio.io/component=Pilot
              operator.istio.io/managed=Reconcile
              operator.istio.io/version=1.5.6
              release=istio
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"authentication.istio.io/v1alpha1","kind":"MeshPolicy","metadata":{"annotations":{},"labels":{"operator.istio.io/component":...
API Version:  authentication.istio.io/v1alpha1
Kind:         MeshPolicy
Metadata:
  Creation Timestamp:  2020-07-23T17:41:55Z
  Generation:          1
  Resource Version:    1088966
  Self Link:           /apis/authentication.istio.io/v1alpha1/meshpolicies/default
  UID:                 d3a416fa-8733-4d12-9d97-b0bb4383c479
Spec:
  Peers:
    Mtls:
Events:  <none>
I believe the above configuration enables services to receive connections in mTLS mode.
DestinationRule : Output of kubectl describe DestinationRule commerce-mesh-port -n istio-system
Name:         commerce-mesh-port
Namespace:    istio-system
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"networking.istio.io/v1alpha3","kind":"DestinationRule","metadata":{"annotations":{},"name":"commerce-mesh-port","namespace"...
API Version:  networking.istio.io/v1beta1
Kind:         DestinationRule
Metadata:
  Creation Timestamp:  2020-07-23T17:41:59Z
  Generation:          1
  Resource Version:    33879
  Self Link:           /apis/networking.istio.io/v1beta1/namespaces/istio-system/destinationrules/commerce-mesh-port
  UID:                 4ef0d49a-88d9-4b40-bb62-7879c500240a
Spec:
  Host:  *
  Ports:
    Name:      commerce-mesh-port
    Number:    16443
    Protocol:  TLS
  Traffic Policy:
    Tls:
      Mode:  ISTIO_MUTUAL
Events:  <none>
Istio Ingress-Gateway :
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: finrpt-gateway
  namespace: finrpt
spec:
  selector:
    istio: ingressgateway # use Istio's default ingress gateway
  servers:
  - port:
      name: https
      number: 443
      protocol: https
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*"
  - port:
      name: http
      number: 80
      protocol: http
    tls:
      httpsRedirect: true
    hosts:
    - "*"
I created a secret to be used for TLS and am using it to terminate TLS traffic at the gateway (as configured with mode SIMPLE).
Next, I configured my VirtualService in the same namespace and did a URL match for HTTP:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: finrpt-virtualservice
  namespace: finrpt
spec:
  hosts:
  - "*"
  gateways:
  - finrpt-gateway
  http:
  - match:
    - queryParams:
        target:
          exact: "commercialprocessor"
      ignoreUriCase: true
    route:
    - destination:
        host: finrpt-commercialprocessor
        port:
          number: 8118
The Service CommercialProcessor (ClusterIP) is expecting traffic on HTTP/8118.
With the above setting in place, when I browse to the External IP of my Ingress-Gateway, first I get a certificate error (expected as I am using self-signed for testing) and then on proceeding I get HTTP Error 503.
I am not able to find any useful logs in the gateway. I am wondering whether the gateway is unable to communicate with the destination in my VirtualService in plaintext (after TLS termination), i.e. it is expecting https but I have put it as http?
Any help is highly appreciated, I am very new to Istio and I think I might be missing something naive here.
My expectation is : I should be able to hit the Gateway with https, gateway does the termination and forwards the unencrypted traffic to the destination configured in the VirtualService on HTTP port based on URL regex match ONLY (I have to keep URL match part constant here).
Since 503 errors occur often and the cause can be hard to find, I've put together a little troubleshooting answer: other questions with 503 errors (and their answers) that I've encountered over the past several months, useful information from the Istio documentation, and things I would check.
Examples with 503 error:
Istio 503:s between (Public) Gateway and Service
IstIO egress gateway gives HTTP 503 error
Istio Ingress Gateway with TLS termination returning 503 service unavailable
how to terminate ssl at ingress-gateway in istio?
Accessing service using istio ingress gives 503 error when mTLS is enabled
Common cause of 503 errors from istio documentation:
https://istio.io/docs/ops/best-practices/traffic-management/#avoid-503-errors-while-reconfiguring-service-routes
https://istio.io/docs/ops/common-problems/network-issues/#503-errors-after-setting-destination-rule
https://istio.io/latest/docs/concepts/traffic-management/#working-with-your-applications
A few things I would check first:
Check the Service port names; Istio can only route traffic correctly if it knows the protocol. Names should be of the form <protocol>[-<suffix>], as mentioned in the istio documentation (see the sketch after this list).
Check mTLS; problems caused by mTLS usually result in 503 errors.
Check if Istio itself works; I would recommend deploying the bookinfo application example and checking whether it works as expected.
Check if your namespace is injected with kubectl get namespace -L istio-injection
If the VirtualService using the subsets arrives before the DestinationRule where the subsets are defined, the Envoy configuration generated by Pilot would refer to non-existent upstream pools. This results in HTTP 503 errors until all configuration objects are available to Pilot.
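For illustration, a minimal sketch of that naming convention applied to the CommercialProcessor Service from the question (the Service name matches the VirtualService destination; the selector label is an assumption):
apiVersion: v1
kind: Service
metadata:
  name: finrpt-commercialprocessor
  namespace: finrpt
spec:
  selector:
    app: commercialprocessor   # assumed label
  ports:
  - name: http-web             # <protocol>[-<suffix>] tells Istio this is HTTP
    port: 8118
    targetPort: 8118
    protocol: TCP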
Hope you find this useful.

A proxy inside a kubernetes pod doesn't intercept any HTTP traffic

What I want is to have 2 applications running in a pod, each in its own container. Application A is a simple spring-boot application which makes HTTP requests to another application deployed on Kubernetes. The purpose of Application B (the proxy) is to intercept that HTTP request and add an Authorization token to its header. Application B is mitmdump with a python script. The issue I am having is that after deploying it on Kubernetes, the proxy does not seem to intercept any traffic at all (I tried to reproduce the issue on my local machine and didn't run into any trouble, so I guess the issue lies somewhere within the networking inside the pod). Can someone have a look into it and guide me on how to solve it?
Here's the deployment and service file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy-deployment
  namespace: myown
  labels:
    app: application-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application-a
  template:
    metadata:
      labels:
        app: application-a
    spec:
      containers:
      - name: application-a
        image: registry.gitlab.com/application-a
        resources:
          requests:
            memory: "230Mi"
            cpu: "100m"
          limits:
            memory: "460Mi"
            cpu: "200m"
        imagePullPolicy: Always
        ports:
        - containerPort: 8090
        env:
        - name: "HTTP_PROXY"
          value: "http://localhost:1030"
      - name:
        image: registry.gitlab.com/application-b-proxy
        resources:
          requests:
            memory: "230Mi"
            cpu: "100m"
          limits:
            memory: "460Mi"
            cpu: "200m"
        imagePullPolicy: Always
        ports:
        - containerPort: 1080
---
kind: Service
apiVersion: v1
metadata:
  name: proxy-svc
  namespace: myown
spec:
  ports:
  - nodePort: 31000
    port: 8090
    protocol: TCP
    targetPort: 8090
  selector:
    app: application-a
  sessionAffinity: None
  type: NodePort
And here's how i build the docker image of mitmproxy/mitmdump
FROM mitmproxy/mitmproxy:latest
ADD get_token.py .
WORKDIR ~/mit_docker
COPY get_token.py .
EXPOSE 1080:1080
ENTRYPOINT ["mitmdump","--listen-port", "1030", "-s","get_token.py"]
EDIT
I created two dummy docker images in order to have this scenario recreated locally.
APPLICATION A - a spring boot application with a job that makes an HTTP GET request every minute to a specified but irrelevant address; the address should be accessible. The response should be 302 FOUND. Every time an HTTP request is made, a message appears in the application's logs.
APPLICATION B - a proxy application which is supposed to proxy the docker container with application A. Every request is logged.
Make sure your docker proxy config is set to listen to http://localhost:8080 - you can check how to do so here
Open a terminal and run this command:
docker run -p 8080:8080 -ti registry.gitlab.com/dyrekcja117/proxyexample:application-b-proxy
Open another terminal and run this command:
docker run --network="host" registry.gitlab.com/dyrekcja117/proxyexample:application-a
Go into the shell with the container of application A in 3rd terminal:
docker exec -ti <name of docker container> sh
and try to make curl to whatever address you want.
And the issue I am struggling with is that when I run curl from inside the Application A container, the request is intercepted by my proxy and can be seen in its logs, but whenever Application A itself makes the same request it is not intercepted. The same thing happens on Kubernetes.
Let's first wrap up the facts we discover over our troubleshooting discussion in the comments:
Your need is that APP-A receives an HTTP request and a token needs to be added in flight by the PROXY before the request is sent to your data storage.
Every container in a Pod shares the network namespace, including the IP address and network ports. Containers inside a Pod can communicate with one another using localhost, source here.
You were able to log in to container application-a and send a curl request to container application-b-proxy on port 1030, proving the statement above.
The problem is that your proxy is not intercepting the request as expected.
You mention that you were able to make it work on localhost, but on localhost the proxy has more power than it does inside a container.
Since I have access to neither your app-a code nor your mitmproxy get_token.py, I will give you a general example of how to redirect traffic from container-a to container-b.
In order to make it work, I'll use NGINX Proxy Pass: it simply proxies the request to container-b.
Reproduction:
I'll use a nginx server as container-a.
I'll build it with this Dockerfile:
FROM nginx:1.17.3
RUN rm /etc/nginx/conf.d/default.conf
COPY frontend.conf /etc/nginx/conf.d
I'll add this configuration file frontend.conf:
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
It tells nginx that the traffic should be sent to container-b, which is listening on port 8080 inside the same pod.
I'll build this image as nginxproxy in my local repo:
$ docker build -t nginxproxy .
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginxproxy latest 7c203a72c650 4 minutes ago 126MB
Now the full.yaml deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy-deployment
  labels:
    app: application-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application-a
  template:
    metadata:
      labels:
        app: application-a
    spec:
      containers:
      - name: container-a
        image: nginxproxy:latest
        ports:
        - containerPort: 80
        imagePullPolicy: Never
      - name: container-b
        image: echo8080:latest
        ports:
        - containerPort: 8080
        imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  name: proxy-svc
spec:
  ports:
  - nodePort: 31000
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: application-a
  sessionAffinity: None
  type: NodePort
NOTE: I set imagePullPolicy as Never because I'm using my local docker image cache.
I'll list the changes I made to help you link it to your current environment:
container-a is doing the work of your application-a and I'm serving nginx on port 80 where you are using port 8090
container-b is receiving the request, like your application-b-proxy. The image I'm using is based on mendhak/http-https-echo; normally it listens on port 80, so I made a custom image that listens on port 8080 instead and named it echo8080.
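For reference, a minimal sketch of how such an echo8080 image could be built; this assumes the base image honors an HTTP_PORT environment variable, so check the mendhak/http-https-echo documentation before relying on it:
FROM mendhak/http-https-echo
# assumption: the echo server reads HTTP_PORT to choose its listening port
ENV HTTP_PORT=8080
EXPOSE 8080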
First I created an nginx pod and exposed it alone to show you it's running (since it has no content, it will return a bad gateway, but you can see the output is from nginx):
$ kubectl apply -f nginx.yaml
pod/nginx created
service/nginx-svc created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 64s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-svc NodePort 10.103.178.109 <none> 80:31491/TCP 66s
$ curl http://192.168.39.51:31491
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.17.3</center>
</body>
</html>
I deleted the nginx pod and created an echo-app pod and exposed it to show you the response it gives when curled directly from outside:
$ kubectl apply -f echo.yaml
pod/echo created
service/echo-svc created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
echo 1/1 Running 0 118s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echo-svc NodePort 10.102.168.235 <none> 8080:32116/TCP 2m
$ curl http://192.168.39.51:32116
{
"path": "/",
"headers": {
"host": "192.168.39.51:32116",
"user-agent": "curl/7.52.1",
},
"method": "GET",
"hostname": "192.168.39.51",
"ip": "::ffff:172.17.0.1",
"protocol": "http",
"os": {
"hostname": "echo"
},
Now I'll apply the full.yaml:
$ kubectl apply -f full.yaml
deployment.apps/proxy-deployment created
service/proxy-svc created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
proxy-deployment-9fc4ff64b-qbljn 2/2 Running 0 1s
$ k get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
proxy-svc NodePort 10.103.238.103 <none> 80:31000/TCP 31s
Now the Proof of concept, from outside the cluster, I'll send a curl to my node IP 192.168.39.51 in port 31000 which is sending the request to port 80 on the pod (handled by nginx):
$ curl http://192.168.39.51:31000
{
"path": "/",
"headers": {
"host": "127.0.0.1:8080",
"user-agent": "curl/7.52.1",
},
"method": "GET",
"hostname": "127.0.0.1",
"ip": "::ffff:127.0.0.1",
"protocol": "http",
"os": {
"hostname": "proxy-deployment-9fc4ff64b-qbljn"
},
As you can see, the response has all the parameters of the pod, indicating it was sent from 127.0.0.1 instead of a public IP, showing that the NGINX is proxying the request to container-b.
Considerations:
This example was created to show you how the communication works inside kubernetes.
You will have to check how your application-a is handling the requests and edit it to send the traffic to your proxy.
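For a spring-boot application-a, one common gotcha is that the JVM does not honor the HTTP_PROXY environment variable by default; here is a sketch of the kind of change that may be needed in the deployment, assuming the app is started with a plain java command and its HTTP client respects the standard JVM proxy properties (port 1030 matches the mitmdump listen port from the question, everything else is an assumption):
env:
- name: JAVA_TOOL_OPTIONS
  value: "-Dhttp.proxyHost=127.0.0.1 -Dhttp.proxyPort=1030 -Dhttps.proxyHost=127.0.0.1 -Dhttps.proxyPort=1030"
Depending on the HTTP client application-a uses (RestTemplate, WebClient, OkHttp, ...), it may instead need explicit proxy configuration in code.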
Here are a few links with tutorials and explanation that could help you port your application to kubernetes environment:
Virtual Hosts on nginx
Implementing a Reverse proxy Server in Kubernetes Using the Sidecar Pattern
Validating OAuth 2.0 Access Tokens with NGINX and NGINX Plus
Use nginx to Add Authentication to Any Application
Connecting a Front End to a Back End Using a Service
Transparent Proxy and Filtering on K8s
I hope this example helps you.

Gitlab kubernetes integration

I have a custom kubernetes cluster on a server with a public IP and DNS pointing to it (also a wildcard).
Gitlab was configured with the cluster following this guide: https://gitlab.touch4it.com/help/user/project/clusters/index#add-existing-kubernetes-cluster
However, after installing Ingress, the ingress endpoint is never detected:
I tried patching the object in k8s, like so:
externalIPs:                   # was empty
- 1.2.3.4
externalTrafficPolicy: Local   # was Cluster
I suspect that the problem is the empty loadBalancer status (scroll to the end) of the Service object, obtained by calling:
# kubectl get service ingress-nginx-ingress-controller -n gitlab-managed-apps -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-11-20T08:57:18Z"
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.22.1
    component: controller
    heritage: Tiller
    release: ingress
  name: ingress-nginx-ingress-controller
  namespace: gitlab-managed-apps
  resourceVersion: "3940"
  selfLink: /api/v1/namespaces/gitlab-managed-apps/services/ingress-nginx-ingress-controller
  uid: c175afcc-0b73-11ea-91ec-5254008dd01b
spec:
  clusterIP: 10.107.35.248
  externalIPs:
  - 1.2.3.4 # (public IP)
  externalTrafficPolicy: Local
  healthCheckNodePort: 30737
  ports:
  - name: http
    nodePort: 31972
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 31746
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: ingress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
But GitLab still can't find the ingress endpoint. I tried restarting the cluster and GitLab.
The network inspection in Gitlab always shows this response:
...
name ingress
status installed
status_reason null
version 1.22.1
external_ip null
external_hostname null
update_available false
can_uninstall false
...
Any ideas how to have a working Ingress Endpoint?
GitLab: 12.4.3 (4d477238500) k8s: 1.16.3-00
I had the exact same issue as you, and I finally figured out how to solve it.
The first thing to understand is that on bare metal you can't make this work without MetalLB, because MetalLB calls the required Kubernetes APIs that make the cluster accept the IP address you give to a Service of type LoadBalancer.
So the first step is to deploy MetalLB to your cluster.
Then you need another machine running a service like NGiNX or HAProxy, or whatever else can do some load balancing.
Last but not least, you have to give the load balancer machine's IP address to MetalLB so that it can assign it to the Service.
Usually MetalLB requires a range of IP addresses, but you can also give it a single IP address like I did:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: staging-public-ips
      protocol: layer2
      addresses:
      - 1.2.3.4/32
This way, MetalLB will assign the IP address to the Service with type LoadBalancer and Gitlab will finally find the IP address.
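To verify, check that the EXTERNAL-IP column is no longer empty (service name as in the question); once it is populated, the status.loadBalancer field is filled in and GitLab should detect the ingress endpoint:
# kubectl get service ingress-nginx-ingress-controller -n gitlab-managed-apps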
WARNING: MetalLB will assign each IP address only once. If you need several Services of type LoadBalancer, you will need as many machines running NGiNX/HAProxy, and you must add each of their IP addresses to the MetalLB address pool.
For your information, I've posted all the technical details to my Gitlab issue here.

Kubernetes loadbalancer stops serving traffic if using local traffic policy

Currently I am having an issue with one of my services, which is set to be a load balancer. I am trying to get source IP preservation as stated in the docs. However, when I set externalTrafficPolicy to Local, I lose all traffic to the service. Is there something I'm missing that is causing this to fail?
Load Balancer Service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: loadbalancer
    role: loadbalancer-service
  name: lb-test
  namespace: default
spec:
  clusterIP: 10.3.249.57
  externalTrafficPolicy: Local
  ports:
  - name: example service
    nodePort: 30581
    port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    app: loadbalancer-example
    role: example
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: *example.ip*
Could be several things. A couple of suggestions:
Your service is getting an external IP but doesn't know how to reply based on the local IP address of the pod.
Try running a sniffer on your pod to see if you are getting packets from the external source.
Try checking the logs of your application.
The health check in your load balancer is failing. Check the load balancer for your service in the GCP console.
Check whether the instance port is listening (probably not, if your health check is failing).
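In particular, with externalTrafficPolicy: Local the cloud load balancer health-checks every node on the Service's healthCheckNodePort, and only nodes running a pod that matches the selector report healthy; a quick way to check this (the node IP is whichever node you expect to carry traffic, the rest comes from your Service):
# find the health check port allocated for the Service
kubectl get svc lb-test -o jsonpath='{.spec.healthCheckNodePort}'
# probe a node directly; nodes without a local endpoint answer 503
curl http://<node IP>:<healthCheckNodePort>/healthz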
Hope it helps.
