Can't Access Kibana URL: This Kibana installation has strict security requirements enabled that your current browser does not meet

I am not able to access Kibana from the browser, and I get the error below when I curl Kibana. Kibana is accessed via an ingress controller.
curl xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.elb.ap-south-1.amazonaws.com/app/kibana
<div class="kibanaWelcomeLogo"></div></div></div><h2 class="kibanaWelcomeTitle">Please upgrade your browser</h2><div class="kibanaWelcomeText">This Kibana installation has strict security requirements enabled that your current browser does not meet.</div></div><script>
// Since this is an unsafe inline script, this code will not run
// in browsers that support content security policy(CSP). This is
// intentional as we check for the existence of __kbnCspNotEnforced__ in
// bootstrap.
window.__kbnCspNotEnforced__ = true;
</script><script src="/bundles/app/kibana/bootstrap.js"></script></body></html>root@10:~/EK/work#
Kibana ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  namespace: logging-od
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /app/kibana
        backend:
          serviceName: logging-kibana
          servicePort: 5601
Port-forwarding to the Kibana service with kubectl works without any issues:
kubectl -n logging port-forward svc/kibana --address 0.0.0.0 8088:5601
I looked at the ingress controller logs, and the request goes through fine:
10.224.91.15 - - [04/Mar/2021:05:12:35 +0000] "GET /app/kibana HTTP/1.1" 200 75425 "-" "curl/7.47.0" 152 0.019 [logging-od-logging-kibana-5601] [] 100.64.131.52:5601 75425 0.016 200 429c46c4006caefa2a160018cca3195d
Any ideas?

Go to conf/kibana.yml and try setting csp.strict: false.
But make sure this is not done on a production instance.
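If you do want to relax it, the change is a one-line kibana.yml setting (a sketch; again, not for production):
# kibana.yml - disable the strict CSP browser check
csp.strict: false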

Related

Accessing a service from another pod in Kubernetes

I have two services running in my k8s cluster.
I am trying to access my wallet service from my user service, but my curl command just returns a 504 gateway timeout.
Here is my ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dev-ingress
  namespace: dev
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # nginx.ingress.kubernetes.io/use-regex: "true"
    # nginx.ingress.kubernetes.io/rewrite-target: /api/v1$uri
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /api/v1/wallet
        pathType: Prefix
        backend:
          service:
            name: wallet-service
            port:
              number: 80
      - path: /api/v1/user
        pathType: Prefix
        backend:
          service:
            name: accounts-service
            port:
              number: 80
This is how I passed the env var in my accounts service:
http://wallet-service:3007
and I log the URL when hitting my endpoint:
curl http://EXTERNAL_IP/api/v1/user/health/wallet
Every other non-related endpoint works.
Any help is appreciated.
I am running Azure Kubernetes.
Do you have an actual K8S service?
apiVersion: ...
kind: Service
Check it with kubectl get svc -A and you should see your services there
Check to see that your pods are exposed to the services
kubectl get endpoints -A
Are you running on a cloud provider (GCP, Azure, AWS, etc.)? If so, check your security configuration as well (an NSG for Azure, security groups for AWS, etc.)
Check inner communication:
# Log into one of the pods
kubectl exec -it -n <namespace> <pod name> -- sh
# Try to connect to the other service with the FQDN
curl -sv <servicename>.<namespace>.svc.cluster.local
Update:
You commented that you are running on Azure; check that the desired ports are open under the NSG.
Assuming that you are running AKS, you need to find the MC_xxxx resource group; your NSG will be located under that resource group. Edit it and open the desired ports.
You are trying to connect to http://wallet-service:3007/api/v1/wallet/health - Where did you get the 3007 port?
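For what it's worth, if the wallet container really listens on 3007, the Service the Ingress points at must map port 80 to that targetPort. A minimal sketch (the selector label is an assumption, not taken from the question):
apiVersion: v1
kind: Service
metadata:
  name: wallet-service
  namespace: dev
spec:
  selector:
    app: wallet # assumption: whatever label your wallet pods actually carry
  ports:
  - port: 80 # port the Ingress backend and other pods use
    targetPort: 3007 # port the wallet container listens on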

Different hosts (Pods in Kubernetes) respond with different certificates for the same hostname

I have a weird problem: when asking for my internal hostname xxx.home.arpa via e.g. openssl s_client -connect xxx.home.arpa:443, one (example) pod
- image: docker.io/library/node:8.17.0-slim
  name: node
  args:
  - "86400"
  command:
  - sleep
is getting a response with the DEFAULT NGINX INGRESS CERTIFICATE.
Another pod in the same namespace, running the same command, gets a response with my custom certificate.
Question:
Why does one pod RECEIVE a different cert for the same request?
For the purpose of this problem, please assume that cert-manager and the certs are properly configured - they are working in most of the system; it's only a few pods that are misbehaving.
Configuration: k8s nginx ingress, Calico CNI, a custom CoreDNS svc which manages DNS responses (might be important?), my own CA authority.
Edit:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: ca-issuer
    kubernetes.io/ingress.class: nginx
  creationTimestamp: "2022-03-13T06:54:17Z"
  generation: 1
  name: gerrit-ingress
  namespace: gerrit
  resourceVersion: "739842"
  uid: f22034ab-0ed8-4779-b01e-2738e6f63eb7
spec:
  rules:
  - host: gerrit.home.arpa
    http:
      paths:
      - backend:
          service:
            name: gerrit-gerrit-service
            port:
              number: 80
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - gerrit.home.arpa
    secretName: gerrit-tls
status:
  loadBalancer:
    ingress:
    - ip: 192.168.10.2
Most of the configuration (except DNS) is up here.
As it turns out, my initial guesses were far off: the particular container had a set of tools that either were configured not to send a servername or did not support SNI at all (which was the problem), specifically yarn:1.x and openssl:1.0.x.
The problem was with SNI, of course; newer openssl and curl pass -servername by default, satisfying the SNI requirement.
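For illustration, a quick way to see the difference (a sketch using the hostname from the question; assumes an openssl new enough to know -noservername):
# Suppress SNI (what the old clients effectively did): the ingress falls back to its default certificate
openssl s_client -connect xxx.home.arpa:443 -noservername </dev/null 2>/dev/null | openssl x509 -noout -subject
# With SNI (the default in newer openssl): the matching certificate is served
openssl s_client -connect xxx.home.arpa:443 -servername xxx.home.arpa </dev/null 2>/dev/null | openssl x509 -noout -subject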
To address this I considered two solutions:
Wildcard DNS for the clients that do not support SNI, which is easier but does not feel secure
TLS termination with a reverse proxy, allowing me to transparently use a client with SNI support, which I haven't yet tried
I went with wildcard DNS, though I don't feel that this should be done in prod. :)

Deploy Spring MVC application using GCP: Cloud SQL, Kubernetes (Service and Ingress) and HTTP(S) Load Balancer with Google Managed Certificate

Let me explain what the deployment consists of. First of all, I created a Cloud SQL db by importing some data. To connect the db to the application I used cloud-sql-proxy, and so far everything works.
I created a Kubernetes cluster in which there is a pod containing the Docker container of the application that I want to deploy, and so far everything works... To reach the application over HTTPS I then followed several online guides (https://cloud.google.com/load-balancing/docs/ssl-certificates/google-managed-certs#console , etc.); all converge on using a Kubernetes Service and Ingress. The first maps Spring's 8080 to 80, while the second creates a load balancer that exposes an HTTPS frontend. I configured a health check and created a (Google-managed) certificate associated with a domain which maps to the static IP assigned to the Ingress.
Apparently everything works, but as soon as you try to reach https://example.org/ from the browser you are correctly redirected to the login page (http://example.org/login); as you can see, though, it switches to the HTTP protocol, and obviously a 404 is returned by Google since HTTP is disabled. Forcing HTTPS on the address it redirects you to (https://example.org/login), for some absurd reason, adds "www" in front of the domain name (https://www.example.org/login). If you avoid the domain by switching to the static IP, the www problem disappears... However, every time you make a request over HTTPS it keeps switching to HTTP.
P.S. The general goal is to have HTTP up to the load balancer (Google's internal network) and HTTPS between the load balancer and the client.
Can anyone help me? In case it helps, I am posting the yaml file of the deployment. Thank you very much!
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: my-app # Label for the Deployment
  name: my-app # Name of Deployment
spec:
  minReadySeconds: 60 # Number of seconds to wait after a Pod is created and its status is Ready
  selector:
    matchLabels:
      run: my-app
  template: # Pod template
    metadata:
      labels:
        run: my-app # Labels Pods from this Deployment
    spec: # Pod specification; each Pod created by this Deployment has this specification
      containers:
      - image: eu.gcr.io/my-app/my-app-production:latest # Application to run in Deployment's Pods
        name: my-app-production # Container name
        # Note: The following line is necessary only on clusters running GKE v1.11 and lower.
        # For details, see https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing#align_rollouts
        ports:
        - containerPort: 8080
          protocol: TCP
      - image: gcr.io/cloudsql-docker/gce-proxy:1.17
        name: cloud-sql-proxy
        command:
        - "/cloud_sql_proxy"
        - "-instances=my-app:europe-west6:my-app-cloud-sql-instance=tcp:3306"
        - "-credential_file=/secrets/service_account.json"
        securityContext:
          runAsNonRoot: true
        volumeMounts:
        - name: my-app-service-account-secret-volume
          mountPath: /secrets/
          readOnly: true
      volumes:
      - name: my-app-service-account-secret-volume
        secret:
          secretName: my-app-service-account-secret
      terminationGracePeriodSeconds: 60 # Number of seconds to wait for connections to terminate before shutting down Pods
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-app-health-check
spec:
  healthCheck:
    checkIntervalSec: 60
    port: 8080
    type: HTTP
    requestPath: /health/check
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc # Name of Service
  annotations:
    cloud.google.com/neg: '{"ingress": true}' # Creates a NEG after an Ingress is created
    cloud.google.com/backend-config: '{"default": "my-app-health-check"}'
spec: # Service's specification
  type: ClusterIP
  selector:
    run: my-app # Selects Pods labelled run: my-app
  ports:
  - port: 80 # Service's port
    protocol: TCP
    targetPort: 8080
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app-ing
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip"
    ingress.gcp.kubernetes.io/pre-shared-cert: "example-org"
    kubernetes.io/ingress.allow-http: "false"
spec:
  backend:
    serviceName: my-app-svc
    servicePort: 80
  tls:
  - secretName: example-org
    hosts:
    - example.org
---
As I mentioned in the comment section, you can redirect HTTP to HTTPS.
Google Cloud has quite good documentation, and you can find step-by-step guides there, including firewall configuration and tests. You can find this guide here.
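As a sketch of what that guide sets up on GKE (the resource name here is hypothetical), the redirect is a FrontendConfig referenced from the Ingress:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config # hypothetical name
spec:
  redirectToHttps:
    enabled: true
It is attached with the Ingress annotation networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config". Note that the redirect needs the HTTP frontend to exist, so kubernetes.io/ingress.allow-http must not be "false" as it is in the manifest above.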
I would also suggest reading docs like:
Traffic management overview for external HTTP(S) load balancers
Setting up traffic management for external HTTP(S) load balancers
Routing and traffic management
As an alternative, you could check Nginx Ingress with the proper annotation (force-ssl-redirect); a sketch follows. Some examples can be found here.
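For instance (illustrative, not the poster's manifest), with the nginx ingress controller the redirect is a single annotation:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ing
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  # ... rules/backend unchanged ...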

Nginx ingress controller not giving metrics for prometheus

I am trying to deploy an nginx ingress controller that can be monitored using Prometheus; however, I am running into an issue: it seems no metrics pod(s) are being created, unlike what most posts and docs I have found online show.
I'm using helm to deploy the ingress controller, with a CLI argument to enable metrics.
helm install ingress stable/nginx-ingress --set controller.metrics.enabled=true
Here is my ingress file
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    # add an annotation indicating the issuer to use.
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-dev"
    # needed to allow the front end to talk to the back end
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://app.domain.com"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, PUT, POST, DELETE, PATCH, OPTIONS"
    # needed for monitoring
    prometheus.io/scrape: "true"
    prometheus.io/port: "10254"
  name: dev-ingress
  namespace: development
spec:
  rules:
  - host: api.<domain>.com
    http:
      paths:
      - backend:
          serviceName: api
          servicePort: 8090
        path: /
  tls: # < placing a host in the TLS config will indicate a certificate should be created
  - hosts:
    - api.<domain>.com
    secretName: dev-ingress-cert # < cert-manager will store the created certificate in this secret
In case this makes a difference, I am using the prometheus-operator helm chart, installed with the command below.
helm install monitoring stable/prometheus-operator --namespace=monitoring
All namespaces exist already, so that shouldn't be an issue. As for the development vs monitoring namespaces, I saw in many places that this was acceptable, so I went with it to make it easier to figure out what is happening.
I would follow this guide to set up monitoring for the Nginx ingress controller. I believe what you are missing is a prometheus.yaml that defines a scrape config for the Nginx ingress controller and the RBAC for Prometheus to be able to scrape the controller's metrics.
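For reference, the pod-annotation scrape job usually looks something like this (a minimal prometheus.yaml sketch, assuming the standard prometheus.io/* annotations end up on the controller pods):
scrape_configs:
- job_name: nginx-ingress-pods
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  # keep only pods annotated prometheus.io/scrape: "true"
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: "true"
  # scrape the port named in prometheus.io/port (10254 for the controller)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__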
Edit: Annotate nginx ingress controller pods
kubectl annotate pods nginx-ingress-controller-pod prometheus.io/scrape=true -n ingress-nginx --overwrite
kubectl annotate pods nginx-ingress-controller-pod prometheus.io/port=10254 -n ingress-nginx --overwrite
I was not using helm but manifests, and the pod-annotations method to install Prometheus. I followed the official doc but could not see the metrics either.
I believe the Deployment manifest has some issues: the annotations shouldn't be put at the Deployment level but at the pod level.
apiVersion: apps/v1
kind: Deployment
..
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "10254"
      labels:
        ...
    spec:
      containers:
      - ...
        ports:
        - name: prometheus
          containerPort: 10254
        ..
Also, I've confirmed that the metrics for Nginx are enabled by default when using manifest deployment. No extra steps are needed for this.
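If you want to double-check, you can query the controller's metrics endpoint directly (a sketch; 10254 is the controller's default metrics port, and the pod name is a placeholder):
# in one terminal: forward the controller's metrics port
kubectl -n ingress-nginx port-forward pod/<nginx-ingress-controller-pod> 10254:10254
# in another: fetch the Prometheus metrics
curl -s http://localhost:10254/metrics | head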

Let's Encrypt kubernetes Ingress Controller issuing Fake Certificate

Not sure why I'm getting a fake certificate, even though the certificate is properly issued by Let's Encrypt using cert-manager.
The setup is running on the Alibaba Cloud ECS console, where one kube-master and one kube-minion form a Kubernetes cluster.
Service Details
root@kube-master:~# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   3h20m
my-nginx     ClusterIP   10.101.150.247   <none>        80/TCP    77m
Pod Details
root@kube-master:~# kubectl get pods --show-labels
NAME                        READY   STATUS    RESTARTS   AGE   LABELS
my-nginx-6cc48cd8db-n6scm   1/1     Running   0          46s   app=my-nginx,pod-template-hash=6cc48cd8db
Helm Cert-manager deployed
root@kube-master:~# helm ls
NAME             REVISION   UPDATED                    STATUS     CHART                 APP VERSION   NAMESPACE
cert-manager     1          Tue Mar 12 15:29:21 2019   DEPLOYED   cert-manager-v0.5.2   v0.5.2        kube-system
kindred-garfish  1          Tue Mar 12 17:03:41 2019   DEPLOYED   nginx-ingress-1.3.1   0.22.0        kube-system
Certificate Issued Properly
root@kube-master:~# kubectl describe certs
Name:         tls-prod-cert
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  certmanager.k8s.io/v1alpha1
Kind:         Certificate
Metadata:
  Creation Timestamp:  2019-03-12T10:26:58Z
  Generation:          2
  Owner References:
    API Version:           extensions/v1beta1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Ingress
    Name:                  nginx-ingress-prod
    UID:                   5ab11929-44b1-11e9-b431-00163e005d19
  Resource Version:        17687
  Self Link:               /apis/certmanager.k8s.io/v1alpha1/namespaces/default/certificates/tls-prod-cert
  UID:                     5dad4740-44b1-11e9-b431-00163e005d19
Spec:
  Acme:
    Config:
      Domains:
        zariga.com
      Http 01:
        Ingress:
        Ingress Class:  nginx
  Dns Names:
    zariga.com
  Issuer Ref:
    Kind:       ClusterIssuer
    Name:       letsencrypt-prod
  Secret Name:  tls-prod-cert
Status:
  Acme:
    Order:
      URL:  https://acme-v02.api.letsencrypt.org/acme/order/53135536/352104603
  Conditions:
    Last Transition Time:  2019-03-12T10:27:00Z
    Message:               Order validated
    Reason:                OrderValidated
    Status:                False
    Type:                  ValidateFailed
    Last Transition Time:  <nil>
    Message:               Certificate issued successfully
    Reason:                CertIssued
    Status:                True
    Type:                  Ready
Events:
  Type    Reason        Age   From          Message
  ----    ------        ----  ----          -------
  Normal  CreateOrder   27s   cert-manager  Created new ACME order, attempting validation...
  Normal  IssueCert     27s   cert-manager  Issuing certificate...
  Normal  CertObtained  25s   cert-manager  Obtained certificate from ACME server
  Normal  CertIssued    25s   cert-manager  Certificate issued successfully
Ingress Details
root@kube-master:~# kubectl describe ingress
Name:             nginx-ingress-prod
Namespace:        default
Address:
Default backend:  my-nginx:80 (192.168.123.202:80)
TLS:
  tls-prod-cert terminates zariga.com
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     my-nginx:80 (192.168.123.202:80)
Annotations:
  kubernetes.io/ingress.class:        nginx
  kubernetes.io/tls-acme:             true
  certmanager.k8s.io/cluster-issuer:  letsencrypt-prod
Events:
  Type    Reason             Age    From                      Message
  ----    ------             ----   ----                      -------
  Normal  CREATE             7m13s  nginx-ingress-controller  Ingress default/nginx-ingress-prod
  Normal  CreateCertificate  7m8s   cert-manager              Successfully created Certificate "tls-prod-cert"
  Normal  UPDATE             6m57s  nginx-ingress-controller  Ingress default/nginx-ingress-prod
Letsencrypt Nginx Production Definition
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-prod
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/tls-acme: 'true'
  labels:
    app: 'my-nginx'
spec:
  backend:
    serviceName: my-nginx
    servicePort: 80
  tls:
  - secretName: tls-prod-cert
    hosts:
    - zariga.com
Maybe this will be helpful for someone experiencing similar issues. As for me, I forgot to specify the hostname in the Ingress yaml file in both the rules and tls sections.
After adding the hostname in both places, it started responding with the proper certificate.
Example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-web-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - my.host.com # <----
    secretName: tls-secret
  rules:
  - host: my.host.com # <----
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: my-nginx
          servicePort: 80
Sometimes this may happen if you are using the staging URL in your ClusterIssuer.
Check the Let's Encrypt URL set in your issuer.yaml or clusterissuer.yaml and change it to the production URL: https://acme-v02.api.letsencrypt.org/directory
I faced the same issue once, and changing the URL to the production one solved it.
Also check that the ingress tls secret you are using is right.
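A quick way to tell which environment issued the certificate actually being served (a sketch; substitute your own hostname) - staging-issued certs show a fake issuer such as "Fake LE Intermediate X1":
openssl s_client -connect zariga.com:443 -servername zariga.com </dev/null 2>/dev/null | openssl x509 -noout -issuer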
An actual ClusterIssuer for production should look something like:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: dev-clusterissuer
spec:
  acme:
    email: harsh@example.com
    privateKeySecretRef:
      name: dev-clusterissuer
    server: https://acme-v02.api.letsencrypt.org/directory # <---- check this server URL; it is for prod, use this only
    solvers:
    - http01:
        ingress:
          class: nginx
If you are using server: https://acme-staging-v02.api.letsencrypt.org/directory you will face this issue; replace it with server: https://acme-v02.api.letsencrypt.org/directory
If you're convinced that everything is set up correctly and it still doesn't work, try this.
Edit the deployment of your nginx-controller. Why? Because if it doesn't find the secret in the namespace it's deployed in, the Nginx controller serves its own (fake) certificate. Not knowing this (I'm new to the game) cost me a few days of my life.
So, either change to the namespace where your Nginx ingress controller is and get the name of the deployment, then:
kubectl edit deployment nginx-ingress-ingress-nginx-controller -n nginx-ingress
Or, if there is only one deployment in that namespace, you can just do:
kubectl edit deployment
You should now be in edit mode for your nginx controller deployment. Look for the section: spec --> containers --> args:
spec:
  containers:
  - args:
    - /nginx-ingress-controller
    - --publish-service=$(POD_NAMESPACE)/nginx-ingress-ingress-nginx-controller
    - --election-id=ingress-controller-leader
    - --ingress-class=nginx
    - --configmap=$(POD_NAMESPACE)/nginx-ingress-ingress-nginx-controller
    - --validating-webhook=:8443
    - --validating-webhook-certificate=/usr/local/certificates/cert
    - --validating-webhook-key=/usr/local/certificates/key
    - --default-ssl-certificate=app-namespace/letsencrypt-cert-prod
You can add a default certificate for your nginx controller to use if it doesn't find one (as I have above), so that it looks up a secret in a given namespace, by adding:
--default-ssl-certificate=your-cert-namespace/your-cert-secret
your-cert-namespace: the namespace where your certificate secret is
your-cert-secret: the name of the secret containing your certificate
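It is also worth confirming the secret actually exists where the flag points (same placeholders as above):
kubectl get secret your-cert-secret -n your-cert-namespace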
Once you save and close your editor, the deployment should be updated. Then check the logs of your cert-manager pod:
kubectl logs cert-manager-xxxpodxx-abcdef -n cert-manager
to make sure that things are working as normal.
You probably won't have this issue if all your resources are deployed in the same namespace.
It is important to note that the ClusterIssuer spec for solvers changed. For people using cert-manager > 0.7.2, this comment saved me so much time: https://github.com/jetstack/cert-manager/issues/1650#issuecomment-518953464, especially on how to configure the ClusterIssuer and Certificate.
In my case, the problem was accessing the domain at the wrong port; my default https port wasn't 443 but 4443.
For me, the issue was that I forgot to kubectl apply the secret (in my case 'tls-secret.yml'). When deploying K8S manually, such an error is rarely made. However, I'm using GitLab CI/CD to deploy applications, and I forgot to add - kubectl apply -f ./kube/secret to my .gitlab-ci.yml.
In my case I mistyped the name of my tls secret inside my ingress rules:
instead of secretName: my-homepage-tls I typed secretName: myy-homepage-tls
For me, the issue was the ingress class name; since I'm using microk8s, the ingress class name is public:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: "your@email.tld"
    privateKeySecretRef:
      name: letsencrypt-prod
    server: "https://acme-v02.api.letsencrypt.org/directory"
    solvers:
    - http01:
        ingress:
          class: public
This happened to me today: I had two Ingresses in the same namespace and used letsencrypt-prod as the secret name for both. One worked, the other didn't. The secrets are auto-generated and need unique names to avoid clashing.
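In other words, give each Ingress its own tls secret for cert-manager to populate, along these lines (hostnames and secret names are illustrative):
# Ingress A
tls:
- hosts:
  - app1.example.com
  secretName: app1-example-com-tls
# Ingress B
tls:
- hosts:
  - app2.example.com
  secretName: app2-example-com-tls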
