Putting Kibana behind nginx-ingress fails with an HTTP error - nginx

I have deployed Kibana in a Kubernetes environment. If I expose it through a LoadBalancer-type Service, I can access it fine. However, when I try to access it via an nginx-ingress it fails. The configuration I use in my nginx Ingress is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: my-ingress
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: kibana
          servicePort: {{ .Values.kibanaPort }}
        path: /kibana
I have launched Kibana with the following setting:
- name: SERVER_BASEPATH
  value: /kibana
I am able to access Kibana fine via the LoadBalancer IP. However, when I access it via the Ingress, most of the calls go through fine, except for a GET call to vendors.bundle.js, which fails almost consistently.
The log messages in the ingress controller during this call are as follows:
2019/10/25 07:31:48 [error] 430#430: *21284 upstream prematurely closed connection while sending to client, client: 10.142.0.84, server: _, request: "GET /kibana/bundles/vendors.bundle.js HTTP/2.0", upstream: "http://10.20.3.5:3000/kibana/bundles/vendors.bundle.js", host: "1.2.3.4", referrer: "https://1.2.3.4/kibana/app/kibana"
10.142.0.84 - [10.142.0.84] - - [25/Oct/2019:07:31:48 +0000] "GET /kibana/bundles/vendors.bundle.js HTTP/2.0" 200 1854133 "https://1.2.3.4/kibana/app/kibana" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36" 47 13.512 [some-service] 10.20.3.5:3000 7607326 13.513 200 506f778b25471822e62fbda2e57ccd6b
I am not sure why I get upstream prematurely closed connection while sending to client; it happens across different browsers. I have tried setting proxy-connect-timeout and proxy-read-timeout to 100 seconds and it still fails. I am not sure if this is due to some default size or chunking limit.
It is also interesting that only some Kibana calls fail, not all of them.
In the browser, I see the error message:
GET https://<ip>/kibana/bundles/vendors.bundle.js net::ERR_SPDY_PROTOCOL_ERROR 200
in the developer console.
Does anyone have an idea what config options I need to pass to my nginx-ingress so that the proxy_pass to Kibana works?

I have found the cause of the error. The vendors.bundle.js file is relatively large, and since I was accessing it over a relatively slow network, the requests were being terminated. I fixed this by adding the following annotations to the nginx-ingress configuration:
nginx.ingress.kubernetes.io/proxy-body-size: 10m (Change this as you need)
nginx.ingress.kubernetes.io/proxy-connect-timeout: "100"
nginx.ingress.kubernetes.io/proxy-send-timeout: "100"
nginx.ingress.kubernetes.io/proxy-read-timeout: "100"
nginx.ingress.kubernetes.io/proxy-buffering: "on"
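These are Ingress annotations, so they go under metadata.annotations of the Ingress object; a minimal sketch based on the manifest from the question:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: 10m
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "100"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "100"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "100"
    nginx.ingress.kubernetes.io/proxy-buffering: "on"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: kibana
          servicePort: {{ .Values.kibanaPort }}
        path: /kibana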

Related

Why can't ingress-nginx proxy gRPC when the client uses insecure credentials?

path: go-client --> ingress-nginx --> grpc pod
Because all the traffic stays in our private network, we didn't buy a public certificate; instead we use a self-signed certificate. The first code snippet below worked well, but the second failed. I don't know why, and I want to know what insecure exactly means.
code that worked well:
cert, _ := credentials.NewClientTLSFromFile("./example.pem", "example.com")
conn, err := grpc.DialContext(
    ctx,
    "example.com:443",
    grpc.WithTransportCredentials(cert),
    grpc.WithBlock(),
)
code that received 400 bad request
conn, err := grpc.DialContext(
    ctx,
    "example.com:443",
    grpc.WithTransportCredentials(insecure.NewCredentials()),
    grpc.WithBlock(),
)
nginx access log for bad request
"PRI * HTTP/2.0" 400
ingress yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
spec:
ingressClassName: nginx
tls:
- hosts: example.com
secretName: example-tls
rules:
- host: example.com
http:
paths:
- path: /foo/bar
pathType: Prefix
backend:
service: grpc-svc
port:
name: grpc-port
Package insecure provides an implementation of the credentials.TransportCredentials interface which disables transport security. More specifically, it does not perform any TLS handshaking or use any certificates.
gRPC requires that the user pass it some credentials when attempting to create the ClientConn. If your deployment does not use any certificates and you know that it is secure (based on whatever reasons), then the insecure package will be your friend. But if you are using self-signed certificates, they are still certificates and a TLS handshake needs to happen here. So, in this case, you should continue using the code that you have mentioned at the top of your question. Hope this helps.
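If you want to build the credentials by hand instead of using NewClientTLSFromFile, here is a minimal sketch that is equivalent in effect, assuming ./example.pem contains the self-signed certificate (or its CA), as in the question:

package main

import (
    "context"
    "crypto/tls"
    "crypto/x509"
    "log"
    "os"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials"
)

func main() {
    // Load the self-signed certificate / CA so the client can verify
    // the certificate presented by ingress-nginx during the TLS handshake.
    pem, err := os.ReadFile("./example.pem")
    if err != nil {
        log.Fatalf("read cert: %v", err)
    }
    pool := x509.NewCertPool()
    if !pool.AppendCertsFromPEM(pem) {
        log.Fatal("failed to add certificate to pool")
    }

    // Equivalent to credentials.NewClientTLSFromFile, built explicitly.
    creds := credentials.NewTLS(&tls.Config{
        RootCAs:    pool,
        ServerName: "example.com", // must match the certificate and the Ingress host
    })

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    conn, err := grpc.DialContext(ctx, "example.com:443",
        grpc.WithTransportCredentials(creds),
        grpc.WithBlock(),
    )
    if err != nil {
        log.Fatalf("dial: %v", err)
    }
    defer conn.Close()
}

The point is the same as above: a self-signed certificate is still a certificate, so the client needs TLS credentials that trust it, not the insecure package.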

Different hosts (Pods in Kubernetes) respond with a different certificate for the same hostname

I have a weird problem: when asking for my internal hostname xxx.home.arpa via e.g. openssl s_client -connect xxx.home.arpa:443, one (example) pod
- image: docker.io/library/node:8.17.0-slim
  name: node
  args:
  - "86400"
  command:
  - sleep
is getting a response with the DEFAULT NGINX INGRESS CERTIFICATE.
Another pod in the same namespace, running the same command, gets a response with my custom certificate.
Question:
Why does one pod RECEIVE a different cert for the same request?
For the purpose of this problem, please assume that cert-manager and the certs are properly configured; they work in most of the system, and only a few pods are misbehaving.
Configuration: k8s nginx ingress, Calico CNI, a custom CoreDNS svc which manages DNS responses (might be important?), my own CA.
The Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: ca-issuer
    kubernetes.io/ingress.class: nginx
  creationTimestamp: "2022-03-13T06:54:17Z"
  generation: 1
  name: gerrit-ingress
  namespace: gerrit
  resourceVersion: "739842"
  uid: f22034ab-0ed8-4779-b01e-2738e6f63eb7
spec:
  rules:
  - host: gerrit.home.arpa
    http:
      paths:
      - backend:
          service:
            name: gerrit-gerrit-service
            port:
              number: 80
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - gerrit.home.arpa
    secretName: gerrit-tls
status:
  loadBalancer:
    ingress:
    - ip: 192.168.10.2
Most of the configuration (except DNS) is up here.
As it turns out, my initial guesses were far off: the particular container had a set of tools which were either configured not to send a servername or did not support SNI at all (which was the problem), specifically yarn:1.x and openssl:1.0.x.
The problem was with SNI, of course; newer openssl and curl send the servername by default, satisfying SNI requirements.
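A quick way to see which certificate is served with and without SNI, using the hostname from the question (on newer openssl, SNI is sent by default and -noservername turns it off; on openssl 1.0.x you have to pass -servername explicitly):

# explicitly send SNI - should return the cert-manager-issued certificate
openssl s_client -connect xxx.home.arpa:443 -servername xxx.home.arpa </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer

# suppress SNI (openssl 1.1.1+) - typically returns the default nginx-ingress certificate
openssl s_client -connect xxx.home.arpa:443 -noservername </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer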
To address this, I've considered two solutions:
Wildcard DNS for the clients that do not support SNI, which is easier but does not feel secure.
TLS termination with a reverse proxy, allowing me to transparently use a client with SNI support, which I haven't yet tried.
I went with wildcard DNS, though I don't feel that this should be done in prod. :)

Can't access Kibana URL: This Kibana installation has strict security requirements enabled that your current browser does not meet

I am not able to access Kibana from the browser. I get the error below when I curl Kibana. Kibana is accessed via an ingress controller.
curl xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.elb.ap-south-1.amazonaws.com/app/kibana
<div class="kibanaWelcomeLogo"></div></div></div><h2 class="kibanaWelcomeTitle">Please upgrade your browser</h2><div class="kibanaWelcomeText">This Kibana installation has strict security requirements enabled that your current browser does not meet.</div></div><script>
// Since this is an unsafe inline script, this code will not run
// in browsers that support content security policy(CSP). This is
// intentional as we check for the existence of __kbnCspNotEnforced__ in
// bootstrap.
window.__kbnCspNotEnforced__ = true;
</script><script src="/bundles/app/kibana/bootstrap.js"></script></body></html>root#10:~/EK/work#
kibana ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  namespace: logging-od
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /app/kibana
        backend:
          serviceName: logging-kibana
          servicePort: 5601
Using kubectl port-forward to the Kibana service works without any issues:
kubectl -n logging port-forward svc/kibana --address 0.0.0.0 8088:5601
I looked at the ingress controller logs and the request goes through fine:
10.224.91.15 - - [04/Mar/2021:05:12:35 +0000] "GET /app/kibana HTTP/1.1" 200 75425 "-" "curl/7.47.0" 152 0.019 [logging-od-logging-kibana-5601] [] 100.64.131.52:5601 75425 0.016 200 429c46c4006caefa2a160018cca3195d
Any idea?
Go to conf/kibana.yml, and try to set csp.strict: false
But make sure this is not done on a production instance.
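A minimal sketch of the setting in kibana.yml, assuming you really do want to relax the browser check (leave it enabled in production):

# kibana.yml
# Disables the strict Content-Security-Policy check that rejects older browsers.
# Do not use this on a production instance.
csp.strict: false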

Ingress-nginx on GKE configuration 502 bad gateway

I am trying to expose an MLflow model in a GKE cluster through ingress-nginx and a Google Cloud load balancer.
The Service configuration for the respective Deployment looks as follows:
apiVersion: v1
kind: Service
metadata:
  name: model-inference-service
  labels:
    app: inference
spec:
  ports:
  - port: 5555
    targetPort: 5555
  selector:
    app: inference
When forwarding this service to localhost using kubectl port-forward service/model-inference-service 5555:5555, I can successfully query the model by sending a test image to the API endpoint using the following script.
The URL the request is sent to is http://127.0.0.1:5555/invocations.
This works as intended, so I assume the Deployment running the pod that exposes the model and the corresponding ClusterIP Service model-inference-service are configured correctly.
Next, I installed ingress-nginx into the cluster with:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install my-release ingress-nginx/ingress-nginx
The Ingress is configured as follows (I suspect the error must be here?):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    # nginx.ingress.kubernetes.io/rewrite-target: /invocations
  name: inference-ingress
  namespace: default
  labels:
    app: inference
spec:
  rules:
  - http:
      paths:
      - path: /invocations
        backend:
          serviceName: model-inference-service
          servicePort: 5555
The ingress controller pod is running successfully:
my-release-ingress-nginx-controller-6758cc8f45-fwtw7 1/1 Running 0 3h33m
In the GCP console I can see that the load balancer was created successfully as well, and I can obtain its IP.
When I use the same test script as before to make a request to the REST API endpoint (previously the service was forwarded to localhost), but now with the IP of the load balancer, I get a 502 Bad Gateway error:
The URL is now: http://34.90.4.0:80/invocations
Traceback (most recent call last):
  File "test_inference.py", line 80, in <module>
    run()
  File "//anaconda3/lib/python3.7/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "//anaconda3/lib/python3.7/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "//anaconda3/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "//anaconda3/lib/python3.7/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "test_inference.py", line 76, in run
    print(score_model(data_path, host, port).text)
  File "test_inference.py", line 54, in score_model
    status_code=response.status_code, text=response.text
Exception: Status Code 502. <html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.19.1</center>
</body>
</html>
When accessing the same url in a browser it says:
502 Bad Gateway
nginx/1.19.1
The logs of the ingress controller state:
2020/08/26 16:06:45 [warn] 86#86: *42282 a client request body is buffered to a temporary file /tmp/client-body/0000000009, client: 10.10.0.30, server: _, request: "POST /invocations HTTP/1.1", host: "34.90.4.0"
2020/08/26 16:06:45 [error] 86#86: *42282 connect() failed (111: Connection refused) while connecting to upstream, client: 10.10.0.30, server: _, request: "POST /invocations HTTP/1.1", upstream: "http://10.52.3.7:5555/invocations", host: "34.90.4.0"
2020/08/26 16:06:45 [error] 86#86: *42282 connect() failed (111: Connection refused) while connecting to upstream, client: 10.10.0.30, server: _, request: "POST /invocations HTTP/1.1", upstream: "http://10.52.3.7:5555/invocations", host: "34.90.4.0"
2020/08/26 16:06:45 [error] 86#86: *42282 connect() failed (111: Connection refused) while connecting to upstream, client: 10.10.0.30, server: _, request: "POST /invocations HTTP/1.1", upstream: "http://10.52.3.7:5555/invocations", host: "34.90.4.0"
10.10.0.30 - - [26/Aug/2020:16:06:45 +0000] "POST /invocations HTTP/1.1" 502 157 "-" "python-requests/2.24.0" 86151 0.738 [default-model-inference-service-5555] [] 10.52.3.7:5555, 10.52.3.7:5555, 10.52.3.7:5555 0, 0, 0 0.000, 0.001, 0.000 502, 502, 502 0d86e360427c0a81c287da4ff5e907bc
To test whether the Ingress and the load balancer work in principle, I replaced the Docker image of the real REST API I want to expose with this Docker image, which returns "hello world" on port 5050 and path /. I changed the port and the path (from /invocations to /) in the Service and Ingress manifests shown above and could successfully see "hello world" when accessing the IP of the load balancer in the browser.
Does anyone see what I might have done wrong?
Thank you very much!
Best regards,
F
The configuration you have shared looks fine. There must be something in your cluster environment that is causing this behavior. See if pod-to-pod communication is working. Launch a test pod on the same node as the nginx ingress controller and do a curl from that pod to the target service. See if you get any DNS or network issues. Try changing the Host header when calling the service and see if it's sensitive to that.
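For example, a hedged sketch of such a check (the service name, namespace and port are taken from the question; the curl image is just a convenient choice):

# run a throwaway pod and curl the ClusterIP service from inside the cluster;
# a "connection refused" or DNS failure here points at the pod/Service, not the Ingress
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -v http://model-inference-service.default.svc.cluster.local:5555/invocations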

Need help troubleshooting Istio IngressGateway HTTP ERROR 503

My test environment cluster has the following configuration:
Global mesh policy (installed as part of the cluster setup by our org): output of kubectl describe MeshPolicy default
Name:         default
Namespace:
Labels:       operator.istio.io/component=Pilot
              operator.istio.io/managed=Reconcile
              operator.istio.io/version=1.5.6
              release=istio
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"authentication.istio.io/v1alpha1","kind":"MeshPolicy","metadata":{"annotations":{},"labels":{"operator.istio.io/component":...
API Version:  authentication.istio.io/v1alpha1
Kind:         MeshPolicy
Metadata:
  Creation Timestamp:  2020-07-23T17:41:55Z
  Generation:          1
  Resource Version:    1088966
  Self Link:           /apis/authentication.istio.io/v1alpha1/meshpolicies/default
  UID:                 d3a416fa-8733-4d12-9d97-b0bb4383c479
Spec:
  Peers:
    Mtls:
Events:  <none>
The above configuration, I believe, enables services to receive connections in mTLS mode.
DestinationRule: output of kubectl describe DestinationRule commerce-mesh-port -n istio-system
Name:         commerce-mesh-port
Namespace:    istio-system
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"networking.istio.io/v1alpha3","kind":"DestinationRule","metadata":{"annotations":{},"name":"commerce-mesh-port","namespace"...
API Version:  networking.istio.io/v1beta1
Kind:         DestinationRule
Metadata:
  Creation Timestamp:  2020-07-23T17:41:59Z
  Generation:          1
  Resource Version:    33879
  Self Link:           /apis/networking.istio.io/v1beta1/namespaces/istio-system/destinationrules/commerce-mesh-port
  UID:                 4ef0d49a-88d9-4b40-bb62-7879c500240a
Spec:
  Host:  *
  Ports:
    Name:      commerce-mesh-port
    Number:    16443
    Protocol:  TLS
  Traffic Policy:
    Tls:
      Mode:  ISTIO_MUTUAL
Events:  <none>
Istio ingress Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: finrpt-gateway
  namespace: finrpt
spec:
  selector:
    istio: ingressgateway # use Istio's default ingress gateway
  servers:
  - port:
      name: https
      number: 443
      protocol: https
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*"
  - port:
      name: http
      number: 80
      protocol: http
    tls:
      httpsRedirect: true
    hosts:
    - "*"
I created a secret to be used for TLS and am using it to terminate the TLS traffic at the gateway (as configured with mode SIMPLE).
Next, I configured my VirtualService in the same namespace and did a URL match for HTTP:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: finrpt-virtualservice
  namespace: finrpt
spec:
  hosts:
  - "*"
  gateways:
  - finrpt-gateway
  http:
  - match:
    - queryParams:
        target:
          exact: "commercialprocessor"
      ignoreUriCase: true
    route:
    - destination:
        host: finrpt-commercialprocessor
        port:
          number: 8118
The Service CommercialProcessor (ClusterIP) is expecting traffic on HTTP/8118.
With the above settings in place, when I browse to the external IP of my ingress gateway, I first get a certificate error (expected, as I am using a self-signed certificate for testing) and then, on proceeding, I get HTTP Error 503.
I am not able to find any useful logs in the gateway. I am wondering whether the gateway is unable to communicate with my VirtualService in plaintext (after TLS termination) because it is expecting HTTPS while I have put HTTP?
Any help is highly appreciated; I am very new to Istio and I think I might be missing something naive here.
My expectation is: I should be able to hit the gateway with HTTPS, the gateway does the termination and forwards the unencrypted traffic to the destination configured in the VirtualService on the HTTP port, based on the URL regex match ONLY (I have to keep the URL-match part constant here).
As 503 occurs often and it's hard to find the issue, I set up a little troubleshooting answer: other questions with 503 errors (with answers) that I have encountered over several months, useful information from the Istio documentation, and things I would check.
Examples with 503 error:
Istio 503:s between (Public) Gateway and Service
IstIO egress gateway gives HTTP 503 error
Istio Ingress Gateway with TLS termination returning 503 service unavailable
how to terminate ssl at ingress-gateway in istio?
Accessing service using istio ingress gives 503 error when mTLS is enabled
Common causes of 503 errors from the Istio documentation:
https://istio.io/docs/ops/best-practices/traffic-management/#avoid-503-errors-while-reconfiguring-service-routes
https://istio.io/docs/ops/common-problems/network-issues/#503-errors-after-setting-destination-rule
https://istio.io/latest/docs/concepts/traffic-management/#working-with-your-applications
A few things I would check first:
Check the Service port names; Istio can route traffic correctly only when it knows the protocol. The name should be <protocol>[-<suffix>], as mentioned in the Istio documentation (see the sketch after this list).
Check mTLS; problems caused by mTLS usually result in a 503 error.
Check if Istio works at all; I would recommend deploying the bookinfo application example and checking whether it works as expected.
Check if your namespace is injected with kubectl get namespace -L istio-injection
If the VirtualService using the subsets arrives before the DestinationRule where the subsets are defined, the Envoy configuration generated by Pilot would refer to non-existent upstream pools. This results in HTTP 503 errors until all configuration objects are available to Pilot.
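For the port-naming point above, a hedged sketch of what the questioner's CommercialProcessor Service could look like (the name, namespace and port are taken from the question; the selector label is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: finrpt-commercialprocessor
  namespace: finrpt
spec:
  selector:
    app: commercialprocessor    # illustrative label; use whatever your Deployment actually uses
  ports:
  - name: http-commercialprocessor   # <protocol>[-<suffix>] so Istio treats this port as plain HTTP
    port: 8118
    targetPort: 8118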
Hope you find this useful.
