Disable external authentication on Kubernetes Ingress - nginx

I run a bare-metal Kubernetes cluster and want to map services to URLs instead of ports (I have used NodePort so far).
To achieve this I tried to install an ingress controller so I can deploy Ingress objects containing the routing.
I installed the IngressController via helm:
helm install my-ingress stable/nginx-ingress
and the deployment worked fine. To use just the node's domain name, I enabled hostNetwork: true in the nginx-ingress-controller.
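(For reference, a minimal sketch of enabling this through the chart's values; the dnsPolicy line is an assumption that commonly accompanies hostNetwork so the controller can still resolve in-cluster names:)

controller:
  hostNetwork: true
  # assumption: usually paired with hostNetwork so the pod can still resolve cluster-internal DNS
  dnsPolicy: ClusterFirstWithHostNet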
Then, I created an Ingress deployment with this definition:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
which also deployed fine. Finally, when I try to access http://my-url.com/testpath, I get a login prompt. I never set up login credentials anywhere, nor do I intend to, as the services should be publicly available and/or handle authentication on their own.
How do I disable this behavior? I want to access the services just as I would with a NodePort solution.

To clarify the case, I am posting the answer (from the comments area) as Community Wiki.
The problem here was not in the configuration but in the environment: another ingress controller was running in the cluster as part of a Longhorn deployment. This forced basic authentication on both of them.
To resolve the problem, it was necessary to clean up all conflicting deployments.
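A quick way to check for such leftovers (a sketch; adjust namespaces as needed):

# list pods from any ingress controller still running in the cluster
kubectl get pods --all-namespaces | grep -i ingress
# check all Ingress objects for basic-auth annotations
kubectl get ingress --all-namespaces -o yaml | grep -i auth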

Related

Kubernetes Ingress doesn't find/expose the application properly

I have one application on two environments; it has been running for well over a year. I now had to re-deploy it on one env, and I'm left with half-working external traffic.
Example of the working one:
$ kubectl get ingress
NAME     HOSTS                ADDRESS                         PORTS     AGE
my-app   prod-app.my.domain   <public IP, e.g. 41.30.20.20>   80, 443   127d
and the not working one:
MacBook-Pro% kubectl get ingress
NAME     HOSTS               ADDRESS                                           PORTS     AGE
my-app   dev-app.my.domain   10.223.0.76,10.223.0.80,10.223.0.81,10.223.0.99   80, 443   5m5s
(for some reason private addresses, not the public one that I assigned?)
The deployment works like so: in Helm I have the deployments, services, etc., plus the Kubernetes Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.deployment.name }}
  namespace: {{ .Values.deployment.env }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    <some other annotations>
spec:
  tls:
  - secretName: {{ .Values.ingress.tlsSecretName.Games }}
  rules:
  - host: [prod,dev]-app.my.domain
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: {{ .Values.service.port }}
and before it I deployed the stable/nginx-ingress helm chart (yup, I know there is ingress-nginx/ingress-nginx - I will migrate to it soon, but first I want to bring back the env)
and the simple nginx config
controller:
  name: main
  tag: "v0.41.2"
  config:
    log-format-upstream: ....
  replicaCount: 4
  service:
    externalTrafficPolicy: Local
  updateStrategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 25%  # max number of Pods that can be unavailable during the update
    type: RollingUpdate
  # We want to disperse pods across the whole cluster, onto each data node
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              # the app label is set in the main deployment manifest
              # https://github.com/helm/charts/blob/master/stable/nginx-ingress/templates/controller-deployment.yaml#L6
              values:
              - nginx-ingress
            - key: release
              operator: In
              values:
              - my-app-ingress
          topologyKey: kubernetes.io/hostname
Any idea why my Kubernetes ingress has private addresses, not the assigned public one?
And my services on prod are:
my-app                          NodePort       10.190.173.152   <none>     8093:32519/TCP               127d
my-app-ingress-stg-controller   LoadBalancer   10.190.180.54    <PUB_IP>   80:30111/TCP,443:30752/TCP   26d
and on dev
my-app                    NodePort       10.190.79.119   <none>     8093:30858/TCP               10m
my-app-ingress-dev-main   LoadBalancer   10.190.93.104   <PUB_IP>   80:32027/TCP,443:30534/TCP   10m
I kinda see the problem (because I already tried migrating to the new nginx a month ago, and on dev there is still the old one, but there were issues with having multiple envs with ingresses on the same dev cluster). I guess I'll try to migrate to the new one and see if that somehow fixes the issue - other than that, any idea why the private addresses?
Not sure how it works, but I deployed the ingress (nginx-ingress helm chart) after deploying the application helm chart; at first all pods were 1/1 ready but the site didn't respond, and after ~10 min it did ¯\_(ツ)_/¯ no idea why it took so long. For future reference, what I did was (see the sketch after this list):
1. Reserve a public IP in GCP (my cloud provider).
2. Create an A record where my domain is registered (GoDaddy etc.) to point to that public address from step 1.
3. Deploy the app helm chart with the ingress in it, with my domain and SSL cert in it, and the Kubernetes service (load balancer) having that public IP.
4. Deploy nginx-ingress pointing to that public address from the domain.
If there is any mistake in my logic, please say so and I'll update it.
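A hedged sketch of steps 1 and 3 (the address name and region are illustrative; controller.service.loadBalancerIP is the stable/nginx-ingress values key for pinning the service to a reserved address):

# step 1: reserve a static public IP in GCP
gcloud compute addresses create my-app-ingress-ip --region <region>

# steps 3-4: pin the controller's LoadBalancer service to it via chart values
controller:
  service:
    loadBalancerIP: <reserved public IP>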
@potatopotato I have just moved your own answer from the initial question into a separate Community Wiki answer. That way it will be more searchable and better indexed in future searches.
Explanation regarding the quote below:
Not sure how it works but I deployed ingress (nginx-ingress helm
chart) after deploying the application helm chart and at first all
pods were 1/1 ready, and the site didn't respond, and after ~10 min it did
so ¯\_(ツ)_/¯ no idea why it took so long
As per official documentation:
Note: It might take a few minutes for GKE to allocate an external IP address and prepare the load balancer. You might get errors like HTTP 404 and HTTP 500 until the load balancer is ready to serve the traffic.
Your answer itself:
Not sure how it works but I deployed ingress (nginx-ingress helm chart) after deploying the application helm chart; at first all pods were 1/1 ready but the site didn't respond, and after ~10 min it did ¯\_(ツ)_/¯ no idea why it took so long. As for future reference, what I did was:
1. Reserve a public IP in GCP (my cloud provider).
2. Create an A record where my domain is registered (GoDaddy etc.) to point to that public address from step 1.
3. Deploy the app helm chart with the ingress in it, with my domain and SSL cert in it, and the Kubernetes service (load balancer) having that public IP.
4. Deploy nginx-ingress pointing to that public address from the domain.

Nginx ingress controller not giving metrics for Prometheus

I am trying to deploy an nginx ingress controller which can be monitored using Prometheus; however, I am running into an issue: no metrics pod(s) seem to be created, unlike what most posts and docs I have found online show.
I'm using helm to deploy the ingress controller, with a CLI argument to enable metrics.
helm install ingress stable/nginx-ingress --set controller.metrics.enabled=true
Here is my ingress file
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    # add an annotation indicating the issuer to use
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-dev"
    # needed to allow the front end to talk to the back end
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://app.domain.com"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, PUT, POST, DELETE, PATCH, OPTIONS"
    # needed for monitoring
    prometheus.io/scrape: "true"
    prometheus.io/port: "10254"
  name: dev-ingress
  namespace: development
spec:
  rules:
  - host: api.<domain>.com
    http:
      paths:
      - backend:
          serviceName: api
          servicePort: 8090
        path: /
  tls: # < placing a host in the TLS config will indicate a certificate should be created
  - hosts:
    - api.<domain>.com
    secretName: dev-ingress-cert # < cert-manager will store the created certificate in this secret
In case this makes a difference, I am using the prometheus-operator helm chart, installed with the command below.
helm install monitoring stable/prometheus-operator --namespace=monitoring
All namespaces exist already so that shouldn't be an issue; as for the development vs monitoring namespaces, I saw in many places that this was acceptable, so I went with it to make it easier to figure out what is happening.
I would follow this guide to set up monitoring for the Nginx ingress controller. I believe what you are missing is a prometheus.yaml that defines a scrape config for the Nginx ingress controller, plus RBAC for Prometheus to be able to scrape the Nginx ingress controller metrics.
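A minimal scrape-job sketch along those lines, assuming the standard prometheus.io/scrape and prometheus.io/port pod annotations are set on the controller pods:

scrape_configs:
- job_name: 'nginx-ingress'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  # keep only pods that opted in via the annotation
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: "true"
  # scrape on the port declared in the annotation (10254 here)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: '([^:]+)(?::\d+)?;(\d+)'
    replacement: '$1:$2'
    target_label: __address__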
Edit: Annotate nginx ingress controller pods
kubectl annotate pods nginx-ingress-controller-pod prometheus.io/scrape=true -n ingress-nginx --overwrite
kubectl annotate pods nginx-ingress-controller-pod prometheus.io/port=10254 -n ingress-nginx --overwrite
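Alternatively, since the question deploys Prometheus via the prometheus-operator chart, the ingress chart can expose metrics through a ServiceMonitor instead of annotations. A sketch, assuming these flags exist in your version of the stable/nginx-ingress chart (verify with helm show values):

helm install ingress stable/nginx-ingress \
  --set controller.metrics.enabled=true \
  --set controller.metrics.serviceMonitor.enabled=true \
  --set controller.metrics.serviceMonitor.namespace=monitoring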
I was not using Helm but manifests, with the pod-annotation method, to install Prometheus. I followed the official doc but could not see the metrics either.
I believe the Deployment manifest has some issues: the annotations shouldn't be put at the Deployment level but at the pod (template) level:
apiVersion: apps/v1
kind: Deployment
..
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "10254"
      labels:
        ...
    spec:
      containers:
      - ..
        ports:
        - name: prometheus
          containerPort: 10254
..
Also, I've confirmed the metrics for Nginx are enabled by default when deploying via manifests. No extra steps are needed for this.
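A quick way to confirm the controller is actually exposing metrics (pod name and namespace are illustrative):

kubectl port-forward -n ingress-nginx nginx-ingress-controller-pod 10254:10254
curl -s http://localhost:10254/metrics | head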

Using Kubernetes ingress controller as reverse proxy to other services in the cluster

I have a simple Kubernetes cluster on kops and AWS which serves a web app; there is a single HTML page and a few APIs, all running as services. I want to expose all endpoints (HTML and APIs) publicly for the web page to work.
I have exposed the HTML service as a LoadBalancer, and I am also using the nginx-ingress controller. I want to use the same LoadBalancer to expose the other APIs as well (using a different LoadBalancer for each service seems like a bad and expensive approach). This is something I was able to do with an Nginx reverse proxy in the on-premise version of the same application, by giving each API a different path in the nginx conf file.
However, I am not able to do the same in the cluster. I tried an Ingress, but somehow I am not able to get the desired result: if I add a path, e.g. "path: /mobiles-service", and then add the specific service for it, the HTTP requests do not get redirected to the service. Only the HTML service works, on the root path. Any help would be appreciated.
First you need to create an ingress controller for your kops cluster running on AWS:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/ingress-nginx/v1.6.0.yaml
Then check if the ingress-nginx service is created by running:
kubectl get svc ingress-nginx -n kube-ingress
Then create your pods and ClusterIP-type services for each of your apps, like the sample below:
kind: Service
apiVersion: v1
metadata:
  name: app1-service
spec:
  selector:
    app: app1
  ports:
  - port: <app-port>
Then create an ingress rule file like the sample below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /app1
        backend:
          serviceName: app1-service
          servicePort: <app1-port>
      - path: /app2
        backend:
          serviceName: app2-service
          servicePort: <app2-port>
Once you deploy this ingress rule YAML, Kubernetes creates an Ingress resource on your cluster. The Ingress controller running in your cluster is responsible for creating an HTTP(S) load balancer that routes all external HTTP traffic (on port 80) to the backend app Services you exposed on the specified paths.
You can see newly created ingress rule by running:
kubectl get ingress
And you will see output like below:
NAME              HOSTS   ADDRESS                                                                    PORTS   AGE
example-ingress   *       a886e57982736434e9a1890264d461398-830017012.us-east-2.elb.amazonaws.com   80      1m
At the relevant paths, e.g. http://external-dns-name/app1 and http://external-dns-name/app2, you will reach your apps, while at the root path / you will get <default backend - 404>.
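For example, a quick check with curl using the ELB hostname from the output above:

curl -i http://a886e57982736434e9a1890264d461398-830017012.us-east-2.elb.amazonaws.com/app1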

Using socket.io on GKE with nginx ingress

I'm trying to integrate socket.io into an application deployed on Google Kubernetes Engine. Developing locally, everything works great. But once deployed, I continuously get the dreaded 400 response when my sockets try to connect. I've been searching on SO and other sites for a few days now and I haven't found anything that fixes my issue.
Unfortunately this architecture was set up by a developer who is no longer at our company, and I'm certainly not a Kubernetes or GKE expert, so I'm definitely not sure I've got everything set up correctly.
Here's our setup:
we have 5 app pods that serve our application distributed across 5 cloud nodes (GCE vm instances)
we are using the nginx ingress controller (https://github.com/kubernetes/ingress-nginx) to create a load balancer to distribute traffic between our nodes
Here's what I've tried so far:
adding the following annotations to the ingress:
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
nginx.ingress.kubernetes.io/session-cookie-name: "route"
adding sessionAffinity: ClientIP to the backend service referenced by the ingress
These measures don't seem to have made any difference, I'm still getting a 400 response. If anyone has handled a similar situation or has any advice to point me in the right direction I'd be very, very appreciative!
I just set up nginx ingress with the same config, where we are using socket.io.
Here is my ingress config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: core-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.org/websocket-services: "app-test"
    nginx.ingress.kubernetes.io/rewrite-target: /
    certmanager.k8s.io/cluster-issuer: core-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/websocket-services: "socket-service"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
spec:
  tls:
  - hosts:
    - <domain>
    secretName: core-prod
  rules:
  - host: <domain>
    http:
      paths:
      - backend:
          serviceName: service-name
          servicePort: 80
I was also facing the same issue, so I added proxy-send-timeout and proxy-read-timeout.
I'm guessing you have probably found the answer by now, but you have to add an annotation to your ingress to specify which service will provide websocket upgrades. It looks something like this:
# web socket support
nginx.org/websocket-services: "(your-websocket-service)"
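For the kubernetes/ingress-nginx controller specifically (the nginx.org/* annotations belong to NGINX Inc.'s separate controller), a minimal websocket-friendly annotation set looks roughly like this sketch; long proxy timeouts keep idle socket connections from being dropped, and cookie affinity pins a client to one pod:

nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
nginx.ingress.kubernetes.io/affinity: "cookie"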

kubernetes nginx ingress fails to redirect HTTP to HTTPS

I have a web app hosted in the Google Cloud platform that sits behind a load balancer, which itself sits behind an ingress. The ingress is set up with an SSL certificate and accepts HTTPS connections as expected, with one problem: I cannot get it to redirect non-HTTPS connections to HTTPS. For example, if I connect to it with the URL http://foo.com or foo.com, it just goes to foo.com, instead of https://foo.com as I would expect. Connecting to https://foo.com explicitly produces the desired HTTPS connection.
I have tried every annotation and config imaginable, but it stubbornly refuses, although it shouldn't even be necessary since docs imply that the redirect is automatic if TLS is specified. Am I fundamentally misunderstanding how ingress resources work?
Update: Is it necessary to manually install nginx ingress on GCP? Now that I think about it, I've been taking its availability on the platform for granted, but after coming across information on how to install nginx ingress on the Google Container Engine, I realized the answer may be a lot simpler than I thought. Will investigate further.
Kubernetes version: 1.8.5-gke.0
Ingress YAML file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: https-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/secure-backends: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - foo.com
    secretName: tls-secret
  rules:
  - host: foo.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: foo-prod
          servicePort: 80
kubectl describe ing https-ingress output
Name:             https-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (10.56.0.3:8080)
TLS:
  tls-secret terminates foo.com
Rules:
  Host     Path  Backends
  ----     ----  --------
  foo.com
           /*    foo-prod:80 (<none>)
Annotations:
  force-ssl-redirect:  true
  secure-backends:     true
  ssl-redirect:        true
Events:  <none>
The problem was indeed the fact that the Nginx Ingress is not standard on the Google Cloud Platform, and needs to be installed manually - doh!
However, I found installing it to be much more difficult than anticipated (especially because my needs pertained specifically to GCP), so I'm going to outline every step I took from start to finish in hopes of helping anyone else who uses that specific cloud and has that specific need, and finds generic guides to not quite fit the bill.
Get Cluster Credentials
This is a GCP specific step that tripped me up for a while - you're dealing with it if you get weird errors like
kubectl unable to connect to server: x509: certificate signed by unknown authority
when trying to run kubectl commands. Run this to set up your console:
gcloud container clusters get-credentials YOUR-K8S-CLUSTER-NAME --zone YOUR-K8S-CLUSTER-ZONE
Install Helm
Helm by itself is not hard to install, and the directions can be found on GCP's own docs; what they neglect to mention, however, is that on new versions of K8s, RBAC configuration is required to allow Tiller to install things. Run the following after helm init:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
Install Nginx Ingress through Helm
Here's another step that tripped me up - rbac.create=true is necessary for the aforementioned RBAC factor.
helm install --name nginx-ingress-release stable/nginx-ingress --set rbac.create=true
Create your Ingress resource
This step is the simplest, and there are plenty of sample nginx ingress configs to tweak - see @JahongirRahmonov's example above. What you MUST keep in mind is that this step takes anywhere from half an hour to over an hour to set up - if you change the config and check again immediately, it won't be set up, but don't take that as an implication that you messed something up! Wait for a while and see first.
It's hard to believe this is how much it takes just to redirect HTTP to HTTPS with Kubernetes right now, but I hope this guide helps anyone else stuck on such a seemingly simple and yet so critical need.
GCP has a default ingress controller which at the time of this writing cannot force https.
You need to explicitly manage an NGINX Ingress Controller.
See this article on how to do that on GCP.
Then add this annotation to your ingress:
kubernetes.io/ingress.allow-http: "false"
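In context, the annotation sits in the Ingress metadata; a minimal sketch:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: https-ingress
  annotations:
    kubernetes.io/ingress.allow-http: "false"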
Hope it helps.
