What is the best way to organize a .NET Core app with an NGINX reverse proxy inside a Kubernetes cluster?

I want to deploy a .NET Core app with NGINX reverse proxy on Azure Kubernetes Service. What is the best way to organize the pods and containers?
1. Two single-container pods, one for NGINX and one for the app (.NET Core/Kestrel), so each can scale independently of the other
2. One multi-container pod, with two containers (one for NGINX and one for the app)
3. One single-container pod, with a single container running both NGINX and the .NET app
I would choose the 1st option, but I don't know if it is the right choice; it would be great to know the pros and cons of each option.
If I choose the 1st option, is it best to set affinity to put nginx pod in the same node that the app pod? Or anti-affinity so they deploy on different nodes? Or no affinity/anti-affinity at all?

The best practice for inbound traffic in Kubernetes is to use the Ingress resource. This requires a bit of extra setup in AKS because there's no built-in ingress controller. You definitely don't want to do #2 because it's not flexible, and #3 is not possible to my knowledge.
The Kubernetes Ingress resource is a configuration file that manages reverse proxy rules for inbound cluster traffic. This allows you to surface multiple services as if they were a combined API.
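For example, a single Ingress can route different paths to different Services so they appear as one combined API (the Service names here are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: combined-api
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /app
        pathType: Prefix
        backend:
          service:
            name: dotnet-app      # hypothetical .NET Core/Kestrel service
            port:
              number: 80
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: other-api       # hypothetical second service
            port:
              number: 80
```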
To set up ingress, start by creating a public IP address in your auto-generated MC resource group:
az network public-ip create `
-g MC_rg-name_cluster-name_centralus `
-n cluster-name-ingress-ip `
-l centralus `
--allocation-method static `
--dns-name cluster-name-ingress
Now create an ingress controller. This is required to actually handle the inbound traffic from your public IP. It watches the Kubernetes API for Ingress updates and auto-generates an nginx.conf file.
# Note: you'll have to install Helm and its service account prior to running this. See my GitHub link below for more information
helm install stable/nginx-ingress `
--name nginx-ingress `
--namespace default `
--set controller.service.loadBalancerIP=ip.from.above.result `
--set controller.scope.enabled=true `
--set controller.scope.namespace="default" `
--set controller.replicaCount=3
kubectl get service nginx-ingress-controller -n default -w
Once that's provisioned, make sure to use this annotation on your Ingress resource: kubernetes.io/ingress.class: nginx
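On the Ingress resource, that annotation sits under metadata; a fragment (the resource name is hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
```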
If you'd like more information on how to set this up, please see this GitHub readme I put together this week. I've also included TLS termination with cert-manager, also installed with Helm.

Related

NGINX Installation and Configuration

I'm new to Kubernetes and wanted to use the NGINX Ingress Controller for the project I'm currently working on. I read some of the docs and watched some tutorials but I haven't really understood:
the installation process (should I use Helm, or the Git repo?)
how to properly configure the Ingress. For example, the Kubernetes docs say to use an nginx.conf file (https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/#creating-the-frontend), which is never mentioned in the actual NGINX docs; they say to use ConfigMaps or annotations
Does anybody know of a blog post or tutorial that makes these things clear? Out of everything I've learned so far (both frontend and backend), developing and deploying to a cloud environment has got me lost. I've been stuck on a problem for a week and want to figure out if Ingress can help me.
Thanks!
Answering:
How should I install nginx-ingress
There is no single correct way to install nginx-ingress. Each option has its own advantages and disadvantages, each Kubernetes cluster may require different treatment (for example, cloud-managed Kubernetes vs. minikube), and you will need to determine which option suits you best.
You can choose from running:
$ kubectl apply -f ...,
$ helm install ...,
terraform apply ... (helm provider),
etc.
How should I properly configure Ingress?
Citing the official documentation:
An API object that manages external access to the services in a cluster, typically HTTP.
-- Kubernetes.io: Docs: Concepts: Services networking: Ingress
Basically Ingress is a resource that tells your Ingress controller how it should handle specific HTTP/HTTPS traffic.
Speaking specifically about nginx-ingress, the entrypoint that your HTTP/HTTPS traffic should be sent to is a Service of type LoadBalancer named ingress-nginx-controller (in the ingress-nginx namespace). In the Kubernetes implementation that ships with Docker Desktop, it will bind to the localhost of your machine.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: "nginx"
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
The modified example from the documentation tells your Ingress controller to pass traffic for any Host with path / (i.e., every path) to a Service named nginx on port 80.
After applying, the above configuration will be reflected by ingress-nginx in the /etc/nginx/nginx.conf file.
A side note!
Take a look at how the relevant part of nginx.conf looks after you apply the above definition:
location / {
    set $namespace "default";
    set $ingress_name "minimal-ingress";
    set $service_name "nginx";
    set $service_port "80";
    set $location_path "/";
    set $global_rate_limit_exceeding n;
For how your specific Ingress manifest should look, you'll need to consult the documentation of the software you are trying to send your traffic to, as well as the ingress-nginx docs.
Addressing the part:
how to properly configure the Ingress. For example, the Kubernetes docs say to use a nginx.conf file (https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/#creating-the-frontend) which is never mentioned in the actual NGINX docs. They say to use ConfigMaps or annotations.
You typically don't modify the nginx.conf that the Ingress controller uses yourself. You write an Ingress manifest and the rest is handled by the Ingress controller and Kubernetes. The nginx.conf in the Pod responsible for routing (your Ingress controller) will reflect your Ingress manifests.
ConfigMaps and annotations can be used to modify/alter the configuration of your Ingress controller. With a ConfigMap you can, for example, enable gzip compression, and with an annotation you can apply a specific rewrite.
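As a sketch of both mechanisms (use-gzip is a key from the ingress-nginx ConfigMap options and the rewrite annotation comes from its annotations reference; the Ingress and Service names are hypothetical):

```yaml
# Controller-wide setting via the controller's ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-gzip: "true"        # enable gzip compression in the generated nginx.conf
---
# Per-Ingress behaviour via an annotation
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-example   # hypothetical Ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /app
        pathType: Prefix
        backend:
          service:
            name: app-service   # hypothetical backend Service
            port:
              number: 80
```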
To make things clearer: the guide referenced here is a frontend Pod with nginx installed that passes requests to a backend. Apart from using nginx and forwarding traffic, this example is not connected with an actual Ingress. It will not acknowledge the Ingress resource and will not act according to the manifest you've passed.
A side note!
Your traffic would be directed in the following manner (simplified):
Ingress controller -> frontend -> backend
Speaking from a personal perspective, this example is more of a guide on how to connect a frontend and a backend than a guide about Ingress.
Additional resources:
Stackoverflow.com: Questions: Ingress nginx how to serve assets to application
Stackoverflow.com: Questions: How nginx ingress controller backend protocol annotation works
The guide that I wrote some time ago should help you with the idea of how to configure a basic Ingress (it may be a little outdated):
Stackoverflow.com: Questions: How can I access nginx ingress on my local
The most straightforward way to install the nginx ingress controller (or any other, for that matter) is using Helm. This requires a basic understanding of Helm and how to work with Helm charts.
Here's the repo: https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx
Follow the instructions there; it's quite straightforward if you use the default values. For configuration, you can also customize the chart before installing. Look at the README to see how to get all the configurable options.
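For example, a minimal values.yaml override might look like this (the keys follow the chart's documented options; the values themselves are arbitrary choices for illustration):

```yaml
controller:
  replicaCount: 2
  service:
    type: LoadBalancer
  config:
    use-gzip: "true"   # controller-wide nginx setting, passed into the ConfigMap
```

You would then pass it with helm install's -f/--values flag.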
Hope this helps as a starting point.

Missing Dapr Side-Car in POD with NGINX in AKS

I have an AKS cluster running in Azure and have deployed the Dapr runtime into it, along with a few other test service deployments that are annotated with dapr.io/enabled and dapr.io/id.
A daprd side-car container co-locates within the same pod as each service, so all is good there.
I then installed NGINX with the Dapr side-car annotations; however, I do not see a daprd container instance co-located within the same pod as the NGINX container.
I followed this article to deploy NGINX with Dapr annotations applied through dapr-annotation.yaml file
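For reference, Dapr's side-car annotations go on the pod template of the Deployment, roughly like this (the app-id and port values are hypothetical):

```yaml
spec:
  template:
    metadata:
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "nginx-ingress"   # hypothetical app id
        dapr.io/app-port: "80"            # hypothetical app port
```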
Here are my workloads...
Here are my Services and ingresses...
Deployed Service1 has side-car...
Deployed Echo Service has side-car...
NGINX does not have a side-car but has annotations...
The issue was related to changes in the Helm chart format.
After accounting for that change, I now see the Dapr side-car on the NGINX ingress controller as expected.
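With the newer ingress-nginx chart layout, pod annotations are set under controller.podAnnotations in the Helm values, roughly as follows (the exact key path is an assumption about the chart version in use, and the app-id/port values are hypothetical):

```yaml
controller:
  podAnnotations:
    dapr.io/enabled: "true"
    dapr.io/app-id: "nginx-ingress"
    dapr.io/app-port: "80"
```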

run kubernetes containers without minikube or etc

I want to just run an nginx-server on kubernetes with the help of
kubectl run nginx-image --image nginx
but the error was thrown:
error: Missing or incomplete configuration info. Please point to an existing, complete config file:
1. Via the command-line flag --kubeconfig
2. Via the KUBECONFIG environment variable
3. In your home directory as ~/.kube/config
I then ran
kubectl run nginx-image --kubeconfig ~/.kube/config --image nginx
and the same error was thrown again:
error: Missing or incomplete configuration info. Please point to an existing, complete config file:
1. Via the command-line flag --kubeconfig
2. Via the KUBECONFIG environment variable
3. In your home directory as ~/.kube/config
minikube start solves the problem but it is taking resources...
I just want to ask: how can I run kubectl without minikube (or another such solution) being started? Please tell me if it is not possible.
When I run kubectl get pods, I get two pods, but I just want one, and I know it is possible since I have seen it in some video tutorials.
Please help...
kubectl is a command-line tool that communicates with a Kubernetes cluster's API server; when you use Minikube, that API server is the one Minikube runs. You can use kubectl to deploy applications, inspect and manage resources, and view logs. When you execute this command
kubectl run nginx-image --image nginx
kubectl tries to connect to the Minikube cluster and sends your request (run Nginx) to it. So if you stop Minikube, kubectl has nothing to communicate with. Minikube is responsible for running Nginx; kubectl just tells it to do so.
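For context, the config file that kubectl is complaining about is just a small YAML file that points it at an API server; a minimal sketch with placeholder values:

```yaml
apiVersion: v1
kind: Config
current-context: local
clusters:
- name: local
  cluster:
    server: https://127.0.0.1:6443   # placeholder API server address
contexts:
- name: local
  context:
    cluster: local
    user: admin
users:
- name: admin
  user:
    token: placeholder-token         # placeholder credentials
```

Tools like minikube, k3d, or Docker Desktop generate this file for you when they create a cluster.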
You need to install Kubernetes in order to use it; it's not magic. If Minikube isn't to your liking, there are many alternatives: try Docker Desktop or k3d.

How to fix Kubernetes Ingress Controller cutting off nodes from cluster

I'm having some trouble installing an Ingress Controller in my on-prem cluster (created with Kubespray, running MetalLB to provide LoadBalancer services).
I tried using nginx, traefik and kong but all got the same results.
I'm installing the nginx helm chart using the following values.yaml:
controller:
  kind: DaemonSet
  nodeSelector:
    node-role.kubernetes.io/master: ""
  image:
    tag: 0.23.0
rbac:
  create: true
With command:
helm install --name nginx stable/nginx-ingress --values values.yaml --namespace ingress-nginx
When I deploy the ingress controller in the cluster, a service is created (e.g. nginx-ingress-controller for nginx). This service is of type LoadBalancer and gets an external IP.
When this external IP is assigned, the node linked to it is lost (status NotReady). However, when I check that node, it's still running; it's just cut off from the other nodes and can't even ping them (no route found). When I remove the service (but not the rest of the nginx helm chart), everything works again and the Ingress still works. I also tried installing nginx/traefik/kong without a LoadBalancer, using NodePorts or external IPs on the service, but I get the same result.
Does anyone recognize this behaviour?
Why does the ingress still work, even when I remove the nginx-ingress-controller service?
After a long search, we finally found a working solution for this problem.
As mentioned by @A_Suh, the pool of IPs that MetalLB uses should contain IPs that are not currently used by any of the nodes in the cluster. By adding a new IP range that is also configured in the DHCP server, MetalLB can use ARP to link one of those IPs to one of the nodes.
For example, in my 5-node cluster (kube11-15): when MetalLB gets the range 10.4.5.200/31 and allocates 10.4.5.200 to my nginx-ingress-controller, 10.4.5.200 is linked to kube12. On ARP requests for 10.4.5.200, all 5 nodes respond with kube12 and traffic will be routed to this node.
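The address pool from this example would be configured roughly like this (using MetalLB's legacy ConfigMap format; newer MetalLB versions use an IPAddressPool custom resource instead):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2          # ARP-based announcement, as described above
      addresses:
      - 10.4.5.200/31           # range not used by any cluster node
```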

kubernetes: a service is not accessible outside host

I am following the guide at http://kubernetes.io/docs/getting-started-guides/ubuntu/ to create a kubernetes cluster. Once the cluster is up, i can create pods and services using kubectl. Basically, do the following
kubectl run nginx --image=nginx --port=80
kubectl expose deployment/nginx
I see a pod and service running
# kubectl get services
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   192.168.3.1     <none>        443/TCP   2d
nginx        192.168.3.208   <none>        80/TCP    2d
When I try to access the service from the machine where the pod is running, I get back the nginx hello-world page. But if I try it from another machine in the kubernetes cluster, I get a timeout.
I thought all the services are accessible anywhere in the cluster. Why could it not be working that way?
Thanks
Yes, services should be accessible anywhere in the cluster. Is your "another machine" listed in the output of kubectl get nodes? Is the node Ready? Maybe the machine wasn't configured correctly.
If you want to reach the service from anywhere in the cluster, you must use a network plug-in such as Flannel or Open vSwitch.
http://kubernetes.io/docs/admin/networking/#flannel
https://github.com/coreos/flannel#flannel
I found my error by comparing the installation with another one where it worked. This installation was missing an iptables rule that forced everything going to the containers onto the flannel interface, so the traffic was reaching the target host on eth0, which made it discard the packet. I do not know why the proxy didn't add that rule. Once I manually added it, it worked.
