NGINX Installation and Configuration

I'm new to Kubernetes and wanted to use the NGINX Ingress Controller for the project I'm currently working on. I read some of the docs and watched some tutorials, but I haven't really understood:
the installation process (should I use Helm, the git repo?)
how to properly configure the Ingress. For example, the Kubernetes docs say to use an nginx.conf file (https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/#creating-the-frontend), which is never mentioned in the actual NGINX docs; they say to use ConfigMaps or annotations.
Does anybody know of a blog post or tutorial that makes these things clear? Out of everything I've learned so far (both frontend and backend), developing and deploying to a cloud environment has got me lost. I've been stuck on a problem for a week and want to figure out if Ingress can help me.
Thanks!

Answering:
How should I install nginx-ingress?
There is no single correct way to install nginx-ingress. Each option has its own advantages and disadvantages, each Kubernetes cluster may require different treatment (for example, a cloud-managed cluster vs. minikube), and you will need to determine which option is best suited for you.
You can choose from running:
$ kubectl apply -f ...,
$ helm install ...,
terraform apply ... (helm provider),
etc.
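For illustration, the plain-manifest route looks roughly like this (a sketch: the manifest URL below points at a specific controller version and provider variant, so take the exact one for your cluster from the official ingress-nginx installation guide):
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml
# check that the controller Pod and its Service came up
$ kubectl get pods,svc -n ingress-nginx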
How should I properly configure Ingress?
Citing the official documentation:
An API object that manages external access to the services in a cluster, typically HTTP.
-- Kubernetes.io: Docs: Concepts: Services networking: Ingress
Basically, Ingress is a resource that tells your Ingress controller how it should handle specific HTTP/HTTPS traffic.
Speaking specifically about ingress-nginx, the entrypoint that your HTTP/HTTPS traffic should be sent to is a Service of type LoadBalancer named ingress-nginx-controller (in the ingress-nginx namespace). In the Kubernetes implementation shipped with Docker Desktop it will bind to the localhost of your machine.
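To see where that entrypoint is exposed, you can list the Service (a quick check, assuming a default ingress-nginx installation):
$ kubectl get service ingress-nginx-controller -n ingress-nginx
# the EXTERNAL-IP and PORT(S) columns show where to send your HTTP/HTTPS traffic
A minimal Ingress manifest, modified from the Kubernetes documentation, looks like this: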
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: "nginx"
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
This example, modified from the documentation, tells your Ingress controller to pass traffic with any Host and with path / (every path) to a Service named nginx on port 80.
After you apply it, the above configuration will be reflected by ingress-nginx in its /etc/nginx/nginx.conf file.
A side note!
Take a look at how part of nginx.conf looks after you apply the above definition:
location / {
    set $namespace      "default";
    set $ingress_name   "minimal-ingress";
    set $service_name   "nginx";
    set $service_port   "80";
    set $location_path  "/";
    set $global_rate_limit_exceeding n;
For how your specific Ingress manifest should look, you'll need to consult the documentation of the software you are trying to send traffic to, as well as the ingress-nginx docs.
Addressing the part:
how to properly configure the Ingress. For example, the Kubernetes docs say to use a nginx.conf file (https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/#creating-the-frontend) which is never mentioned in the actual NGINX docs. They say to use ConfigMaps or annotations.
You typically don't modify the nginx.conf that the Ingress controller is using yourself. You write an Ingress manifest and the rest is handled by the Ingress controller and Kubernetes. The nginx.conf in the Pod responsible for routing (your Ingress controller) will reflect your Ingress manifests.
ConfigMaps and annotations can be used to modify/alter the configuration of your Ingress controller. With the ConfigMap you can, for example, enable gzip compression, and with an annotation you can apply a specific rewrite.
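As a minimal sketch of the two mechanisms (the ConfigMap key use-gzip and the rewrite-target annotation come from the ingress-nginx docs; the resource names assume a default installation and the Ingress itself is a hypothetical example):
# Controller-wide setting via the controller's ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-gzip: "true"                 # enable gzip compression globally
---
# Per-Ingress behaviour via an annotation
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-example            # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /   # rewrite matched paths to /
spec:
  ingressClassName: "nginx"
  rules:
    - http:
        paths:
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80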
To make things clearer: the guide referenced here is a frontend Pod with nginx installed that passes requests to a backend. Apart from using nginx and forwarding traffic, this example is not connected to the actual Ingress. It will not acknowledge the Ingress resource and will not act according to the manifest you've passed.
A side note!
Your traffic would be directed in the following manner (simplified):
Ingress controller -> frontend -> backend
Speaking from a personal perspective, this example is more of a guide on how to connect a frontend and a backend than about Ingress.
Additional resources:
Stackoverflow.com: Questions: Ingress nginx how to serve assets to application
Stackoverflow.com: Questions: How nginx ingress controller backend protocol annotation works
The guide that I wrote some time ago should help you with the idea of how you can configure a basic Ingress (it could be a little outdated):
Stackoverflow.com: Questions: How can I access nginx ingress on my local

The most straightforward process of installing the nginx ingress controller (or any other, for that matter) is using Helm. This requires a basic understanding of Helm and how to work with Helm charts.
Here's the repo: https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx
Follow the instructions in there - quite straightforward if you use the default values. For configuration, you can also customize the chart before installing it. Look at the README to see how to list all the configurable options.
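If you go the Helm route, the basic flow is roughly the following (a sketch with default values; the release name and namespace are up to you):
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
$ helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
# list all configurable options of the chart
$ helm show values ingress-nginx/ingress-nginx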
Hope this helps as a starting point.

Related

Route external traffic from a standalone nginx service to kubernetes nodeport service

GOAL
I want to get access to the Kubernetes Dashboard through a standalone nginx service and a microk8s NodePort service.
CONTEXT
I have a linux server.
On this server, there are several running services such as:
microk8s
nginx (note: I am not using Ingress; the nginx service works independently of microk8s).
Here is the workflow that I am looking for:
http://URL/dashboard
-> NGINX service (from http://URL/dashboard to nodeIpAddress:nodePort)
-> NodePort service
-> Kubernetes Dashboard service
ISSUE:
However, each time I request http://URL/dashboard I receive a 502 (Bad Gateway) response. What am I missing?
CONFIGURATION
Please find below the nginx configuration, the NodePort service configuration, and the status of the microk8s cluster:
nginx configuration: /etc/nginx/sites-available/default
node-port-service configuration
node IP address
microk8s namespaces
Thank you very much for your help.
I'll summarize the whole problem and solutions here.
First, the service that exposes the Kubernetes Dashboard needs to point at the right target port, and also needs to select the right Pod (the kubernetes-dashboard Pod).
If you check your service with:
kubectl describe service <service-name>
you can easily see whether it's selecting a Pod (or more than one) or nothing, by looking at the Endpoints section. In general, your service should have the same selector, port, targetPort and so on as the standard kubernetes-dashboard service (which exposes the dashboard, but only internally to the cluster).
Second, your NGINX configuration proxies the location /dashboard to the service, but the problem is that the kubernetes-dashboard Pod expects requests to reach / directly, so the path /dashboard means nothing to it.
To solve the second problem there are a few ways, but they all lie in the NGINX configuration. If you read the documentation of the proxy module (see http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass) you can see that the solution is to add a URI to the directive, something like this:
proxy_pass https://51.68.123.169:30000/;
Notice the trailing slash; that is the URI, which means that the part of the location matching the proxy rule is rewritten into /. This means that your_url/dashboard will just become your_url/.
Without the trailing slash, your location is passed to the target as it is, since the target is only an endpoint.
If you need more complex URI changes, what you're looking for is a rewrite rule (they support regex and lots more), but adding the trailing slash should solve your second problem.
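Putting it together, the relevant part of the NGINX server block could look like this (a sketch; the upstream address is the NodePort endpoint from the question):
location /dashboard/ {
    # the trailing slash on proxy_pass replaces the matched /dashboard/ prefix with /,
    # so a request for /dashboard/xyz reaches the dashboard as /xyz
    proxy_pass https://51.68.123.169:30000/;
}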
Indeed @AndD, you advised me to execute this command:
sudo microk8s kubectl describe service -n kube-system kubernetes-dashboard
in order to get the information below:
Labels: k8s-app=kubernetes-dashboard
TargetPort: 8443/TCP
Thanks to this information I could fix the NodePort service; you can find a snippet below:
spec:
  type: NodePort
  selector:
    k8s-app: 'kubernetes-dashboard'
  ports:
    - protocol: TCP
      port: 8443
      targetPort: 8443
      nodePort: 30000
However, after I changed the nginx configuration to
proxy_pass https://51.68.123.169:30000/;
I do receive a successful response (HTML), but then all remaining requests get a 404 status (js, css, assets).
Edit:
The HTML file contains a set of dependencies (js/img/css):
<link rel="stylesheet" href="styles.3aaa4ab96be3c2d1171f.css"></head>
...
<script src="runtime.3e2867321ef71252064e.js" defer></script>
So it tries to get these assets with these URLs:
https://URL/styles.3aaa4ab96be3c2d1171f.css
https://URL/runtime.3e2867321ef71252064e.js
instead of using:
https://URL/dashboard/styles.3aaa4ab96be3c2d1171f.css
https://URL/dashboard/runtime.3e2867321ef71252064e.js
Edit #2:
I changed the subpath again, from dashboard/ to dash/:
new nginx conf
And it works with Chromium.
But it doesn't with Firefox (not a big deal).
Thank you very much @AndD!
Besides, I had a similar issue with Jenkins; however, the Jenkins image provides a parameter that fixes it:
docker run --publish 8080:8080 --env JENKINS_OPTS="--prefix=/subpath" jenkins/jenkins
I was expecting to find something similar with kubernetesui/dashboard, but I haven't found anything:
https://hub.docker.com/r/kubernetesui/dashboard
https://github.com/kubernetes/dashboard
Well, I don't know how to configure nginx to display the dashboard correctly under a subpath, and I didn't find any parameter in the kubernetes/dashboard image to handle the subpath.

Single NGINX-Ingress Controller for subdomains that have resources in separate namespaces

Can anyone explain to me how to set up a SINGLE nginx ingress controller with the configuration that I currently have, as follows:
A.MY-SITE.COM = Service, Ingress, Pods, etc. live under the "A" namespace
B.MY-SITE.COM = Service, Ingress, Pods, etc. live under the "B" namespace
I've seen here
https://github.com/nginxinc/kubernetes-ingress/tree/v1.7.0/examples-of-custom-resources/cross-namespace-configuration
This seems to be on the right track, but it's for paths like "/cafe", whereas I need it to be host-based, e.g. "a.my-site.com".
The main reason I want to do this is I don't want to have to install an ingress controller for every client (namespace) we have.
So I figured this out.
The default Helm nginx-ingress controller installation works fine without SSL certificates; the NGINX controller actually does work with Ingress resources from various namespaces.
I installed my *.domain.com certificate and key using
kubectl create secret tls {SECRET_NAME} --key {KEY_FILE} --cert {CERT_FILE}
Then, in the nginx-ingress-controller Deployment, I added:
args:
  - --default-ssl-certificate={NAMESPACE}/{SECRET_NAME}
(in my case: tenancy/whatevername-wildcard)
The Ingresses live in their own namespaces. For example:
Namespace A: Ingress with host a.domain.com
Namespace B: Ingress with host b.domain.com
The only thing listed in the annotations of each Ingress is:
kubernetes.io/ingress.class: "nginx"
All domains point to the nginx load balancer IP.
Now it works perfectly. Very simple, but it was also very unclear when scouring through the docs.
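For illustration, each namespace then just carries its own host-based Ingress, along the lines of the following sketch (host, namespace, service name and port are placeholders):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: a-site-ingress           # hypothetical name
  namespace: a                   # the "A" namespace from the question
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: a.my-site.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: a-service  # the Service living in the same namespace
                port:
                  number: 80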

docker push into private Docker registry inside Kubernetes cluster fails with: 413 Request Entity Too Large

I have deployed a private Docker registry (image registry:2) in a Kubernetes cluster and exposed it via an Ingress. I am using the nginxinc/kubernetes-ingress (not: kubernetes/ingress-nginx) NGINX ingress controller.
curl https://my_registry/v2/_catalog works fine. But docker push into the registry runs into this error: Pushing ... 100.6MB/100.6MB ... 413 Request Entity Too Large.
As far as I know, this can be mitigated by instructing the NGINX ingress controller to accept larger chunks of data. I have e.g. tried adding the annotation nginx.ingress.kubernetes.io/proxy-body-size: "200m" to my Ingress specification (as suggested here), but this has not worked so far.
So what is the right way for instructing an nginxinc/kubernetes-ingress NGINX ingress controller to accept sufficiently large chunks?
UPDATE: I have meanwhile concluded that nginxinc/kubernetes-ingress does not take its configuration from annotations, but from a ConfigMap named nginx-config that resides in the same namespace as the NGINX ingress controller. I have now added such a ConfigMap with the data entry client-max-body-size: "200m", but the problem still persists.
You need to set the annotation:
nginx.org/client-max-body-size: "200m"
I have switched from NGINX Inc.'s to Kubernetes' NGINX ingress controller, and there adding the following annotation to the Ingress' metadata proved sufficient:
annotations:
  nginx.ingress.kubernetes.io/proxy-body-size: 500m
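For context, that annotation simply sits in the Ingress metadata; a sketch of a registry Ingress for the kubernetes/ingress-nginx controller could look like this (names and host are placeholders, 5000 is the default registry:2 port):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: docker-registry                                 # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "500m"
spec:
  ingressClassName: nginx
  rules:
    - host: registry.example.com                        # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: docker-registry                   # hypothetical Service name
                port:
                  number: 5000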

kubernetes expose nginx to static ip in gcp with ingress service configuration error

I had a couple of questions regarding Kubernetes ingress services/controllers.
For example, I have an nginx frontend image that I am trying to run with kubectl:
kubectl run <deployment> --image <repo> --port <internal-nginx-port>
Now I tried to expose this to the outside world with a service:
kubectl expose deployment <deployment> --target-port <port>
Then I tried to create an ingress with the following nginx-ing.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: urtutorsv2ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "coreos"
spec:
  backend:
    serviceName: <service>
    servicePort: <port>
where my ingress.global-static-ip-name is correctly created and available in the Google Cloud console.
[I am assuming the servicePort here is the port I want on my "coreos" IP, so I set it to 80 initially, which didn't work; then I tried setting it to the same port as specified in the first step, but it still didn't work.]
So, the issue is that I am not able to access the frontend at either URL:
http://COREOS_IP or http://COREOS_IP:<port>
That is why I tried
kubectl expose deployment <deployment> --target-port <port> --type NodePort
to see if it worked with a NodePort, and I was able to access the frontend.
So I am thinking there might be a configuration mistake here, because of which I am not getting results with the Ingress.
Can anyone here help debug/fix the issue?
Yeah, the service is there. I tried to check the status with kubectl get services and kubectl describe service k8urtutorsv2. It showed the service. I tried editing it and saved the nodePort value. The thing is, it works with the NodePort but not on 80 or 443.
You cannot directly expose a service on port 80 or 443 this way.
The available range for NodePort services is predefined in the kube-apiserver configuration by the --service-node-port-range option, with the default value 30000-32767.
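So a NodePort Service has to use a port inside that range, roughly like this (a sketch; names and ports are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: k8urtutorsv2-nodeport    # hypothetical name
spec:
  type: NodePort
  selector:
    app: urtutorsv2              # must match the Deployment's Pod labels
  ports:
    - port: 80                   # cluster-internal port
      targetPort: 8080           # container port of the nginx frontend (placeholder)
      nodePort: 30080            # must fall inside 30000-32767
To reach the app on 80 or 443 from outside, you put something in front of the NodePort, which is exactly what the Ingress (with its load balancer) in the question is meant to do.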

kubernetes nginx ingress fails to redirect HTTP to HTTPS

I have a web app hosted in the Google Cloud platform that sits behind a load balancer, which itself sits behind an ingress. The ingress is set up with an SSL certificate and accepts HTTPS connections as expected, with one problem: I cannot get it to redirect non-HTTPS connections to HTTPS. For example, if I connect to it with the URL http://foo.com or foo.com, it just goes to foo.com, instead of https://foo.com as I would expect. Connecting to https://foo.com explicitly produces the desired HTTPS connection.
I have tried every annotation and config imaginable, but it stubbornly refuses, although this shouldn't even be necessary, since the docs imply that the redirect is automatic if TLS is specified. Am I fundamentally misunderstanding how Ingress resources work?
Update: Is it necessary to manually install nginx ingress on GCP? Now that I think about it, I've been taking its availability on the platform for granted, but after coming across information on how to install nginx ingress on the Google Container Engine, I realized the answer may be a lot simpler than I thought. Will investigate further.
Kubernetes version: 1.8.5-gke.0
Ingress YAML file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: https-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/secure-backends: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
    - hosts:
        - foo.com
      secretName: tls-secret
  rules:
    - host: foo.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: foo-prod
              servicePort: 80
kubectl describe ing https-ingress output:
Name:             https-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (10.56.0.3:8080)
TLS:
  tls-secret terminates foo.com
Rules:
  Host     Path  Backends
  ----     ----  --------
  foo.com
           /*    foo-prod:80 (<none>)
Annotations:
  force-ssl-redirect:  true
  secure-backends:     true
  ssl-redirect:        true
Events:  <none>
The problem was indeed the fact that the Nginx Ingress is not standard on the Google Cloud Platform, and needs to be installed manually - doh!
However, I found installing it to be much more difficult than anticipated (especially because my needs pertained specifically to GCP), so I'm going to outline every step I took from start to finish in hopes of helping anyone else who uses that specific cloud and has that specific need, and finds generic guides to not quite fit the bill.
Get Cluster Credentials
This is a GCP-specific step that tripped me up for a while - you're dealing with it if you get weird errors like
kubectl unable to connect to server: x509: certificate signed by unknown authority
when trying to run kubectl commands. Run this to set up your console:
gcloud container clusters get-credentials YOUR-K8S-CLUSTER-NAME --zone YOUR-K8S-CLUSTER-ZONE
Install Helm
Helm by itself is not hard to install, and the directions can be found on GCP's own docs; what they neglect to mention, however, is that on new versions of K8s, RBAC configuration is required to allow Tiller to install things. Run the following after helm init:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
Install Nginx Ingress through Helm
Here's another step that tripped me up - rbac.create=true is necessary because of the aforementioned RBAC requirement.
helm install --name nginx-ingress-release stable/nginx-ingress --set rbac.create=true
Create your Ingress resource
This step is the simplest, and there are plenty of sample nginx ingress configs to tweak - see #JahongirRahmonov's example above. What you MUST keep in mind is that this step can take anywhere from half an hour to over an hour to take effect - if you change the config and check again immediately, it won't be set up yet, but don't take that as an indication that you messed something up! Wait a while and check again first.
It's hard to believe this is how much it takes just to redirect HTTP to HTTPS with Kubernetes right now, but I hope this guide helps anyone else stuck on such a seemingly simple and yet so critical need.
GCP has a default ingress controller which at the time of this writing cannot force https.
You need to explicitly manage an NGINX Ingress Controller.
See this article on how to do that on GCP.
Then add this annotation to your ingress:
kubernetes.io/ingress.allow-http: "false"
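Placed next to the existing annotations in the Ingress metadata, that would look like this (metadata only):
metadata:
  name: https-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/ingress.allow-http: "false"   # the annotation suggested above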
Hope it helps.
