Missing Dapr Side-Car in Pod with NGINX in AKS

I have an AKS cluster running in Azure and have deployed the Dapr runtime into it, along with a few other test service deployments that are annotated with dapr.io/enabled and dapr.io/id.
A daprd side-car container is co-located in the same pod as each of those services, so all is good there.
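For context, a minimal sketch of what one of those annotated deployments can look like (the name, image and port are placeholders, and dapr.io/id / dapr.io/port are the older annotation names; newer Dapr releases use dapr.io/app-id and dapr.io/app-port):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: service1                          # hypothetical service name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service1
  template:
    metadata:
      labels:
        app: service1
      annotations:
        dapr.io/enabled: "true"           # asks the Dapr injector to add the daprd side-car
        dapr.io/id: "service1"            # Dapr app id
        dapr.io/port: "80"                # assumption: the app listens on port 80
    spec:
      containers:
      - name: service1
        image: myregistry/service1:latest # hypothetical image
        ports:
        - containerPort: 80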
I then installed NGINX with the Dapr side-car annotations; however, I do not see a daprd container co-located in the same pod as the NGINX container.
I followed this article to deploy NGINX with the Dapr annotations applied through a dapr-annotation.yaml file.
Here are my workloads...
Here are my Services and ingresses...
Deployed Service1 has side-car...
Deployed Echo Service has side-car...
NGINX does not have a side-car but has annotations...

The issue was related to changes to the Helm chart format. As a result of that change, I now see the Dapr side-car on the NGINX ingress controller as expected.
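For illustration only (this is a sketch of the usual layout, not the exact snippet from my fix): with recent nginx-ingress Helm charts the Dapr annotations have to be nested under the controller's pod annotations in the values file, roughly like this (app id and port are placeholders):

controller:
  podAnnotations:
    dapr.io/enabled: "true"
    dapr.io/app-id: "nginx-ingress"   # hypothetical Dapr app id
    dapr.io/app-port: "80"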

Related

Nginx Ingress controller - Error when getting IngressClass nginx

I have a Kubernetes cluster v1.22.1 set up on bare-metal CentOS. I am facing a problem when setting up the Nginx Ingress controller following this link.
I followed steps 1-3 exactly but got a CrashLoopBackOff error in the nginx ingress controller pod. I checked the logs of the pod and found the following:
[root@dev1 deployments]# kubectl logs -n nginx-ingress nginx-ingress-5cd5c7549d-hw6l7
I0910 23:15:20.729196 1 main.go:271] Starting NGINX Ingress controller Version=1.12.1 GitCommit=6f72db6030daa9afd567fd7faf9d5fffac9c7c8f Date=2021-09-08T13:39:53Z PlusFlag=false
W0910 23:15:20.770569 1 main.go:310] The '-use-ingress-class-only' flag will be deprecated and has no effect on versions of kubernetes >= 1.18.0. Processing ONLY resources that have the 'ingressClassName' field in Ingress equal to the class.
F0910 23:15:20.774788 1 main.go:314] Error when getting IngressClass nginx: the server could not find the requested resource
I believe I have the IngressClass set up properly, as shown below:
[root@dev1 deployments]# kubectl get IngressClass
NAME CONTROLLER PARAMETERS AGE
nginx nginx.org/ingress-controller <none> 2m12s
So I have no idea why it said Error when getting IngressClass nginx. Can anyone shed some light on this, please?
Reproduction and what happens
I created a one-node cluster using kubeadm on CentOS 7 and got the same error.
You and I were able to proceed further only because we missed this command at the beginning:
git checkout v1.12.1
The main difference is that ingress-class.yaml has networking.k8s.io/v1beta1 in v1.12.1 and networking.k8s.io/v1 in the master branch.
After I went through it a second time and switched the branch, I immediately saw this error:
$ kubectl apply -f common/ingress-class.yaml
error: unable to recognize "common/ingress-class.yaml": no matches for kind "IngressClass" in version "networking.k8s.io/v1beta1"
It looks like the other resources have not yet been updated for use on Kubernetes v1.22+.
Please see the deprecated API migration guide - v1.22 - Ingress.
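For reference, the equivalent IngressClass against the networking.k8s.io/v1 API (the version used in the master branch) looks roughly like this, with the controller name matching the kubectl get IngressClass output above:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: nginx.org/ingress-controller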
How to proceed further
I tested exactly the same approach on a cluster with v1.21.4 and it worked like a charm. So you may consider downgrading the cluster.
If you're not tied to the NGINX ingress controller (supported by NGINX Inc.), you can try ingress-nginx, which is developed by the Kubernetes community. I tested it on v1.22 and it works fine. Please see Installation on a bare metal cluster.
P.S. It may be confusing, but there are two free nginx ingress controllers developed by different teams. There is also a third option, NGINX Plus, which is paid and has more options. Please see here for the differences.

How to fix Kubernetes Ingress Controller cutting off nodes from cluster

I'm having some trouble installing an Ingress Controller in my on-prem cluster (created with Kubespray, running MetalLB to provide LoadBalancer services).
I tried using nginx, traefik and kong but all got the same results.
I'm installing the nginx helm chart using the following values.yaml:
controller:
  kind: DaemonSet
  nodeSelector:
    node-role.kubernetes.io/master: ""
  image:
    tag: 0.23.0
rbac:
  create: true
With command:
helm install --name nginx stable/nginx-ingress --values values.yaml --namespace ingress-nginx
When I deploy the ingress controller in the cluster, a service is created (e.g. nginx-ingress-controller for nginx). This service is of the type LoadBalancer and gets an external IP.
When this external IP is assigned, the node that's linked to this external IP is lost (status NotReady). However, when I check this node, it's still running; it's just cut off from the other nodes and can't even ping them (no route found). When I remove the service (but not the rest of the nginx helm chart), everything works again and the Ingress works. I also tried installing nginx/traefik/kong without a LoadBalancer, using NodePorts or external IPs on the service, but I get the same result.
Does anyone recognize this behaviour?
Why does the ingress still work, even when I remove the nginx-ingress-controller service?
After a long search, we finally found a working solution for this problem.
As mentioned by @A_Suh, the pool of IPs that MetalLB uses should contain IPs that are not currently used by any of the nodes in the cluster. By adding a new IP range that's also configured in the DHCP server, MetalLB can use ARP to link one of those IPs to one of the nodes.
For example, in my 5-node cluster (kube11-15): when MetalLB gets the range 10.4.5.200/31 and allocates 10.4.5.200 for my nginx-ingress-controller, 10.4.5.200 is linked to kube12. On ARP requests for 10.4.5.200, all 5 nodes respond with kube12 and traffic will be routed to this node.
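As a sketch of what that MetalLB configuration can look like (this uses the legacy ConfigMap format of that era; newer MetalLB releases configure pools through IPAddressPool resources instead):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.4.5.200-10.4.5.201      # example range from above, not in use by any node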

What is the best way to organize a .net core app with nginx reverse proxy inside a kubernetes cluster?

I want to deploy a .NET Core app with NGINX reverse proxy on Azure Kubernetes Service. What is the best way to organize the pods and containers?
Two single-container pods, one pod for nginx and one pod for the app (.net-core/kestrel), so each one can scale independently of the other
One multi-container pod, this single pod with two containers (one for nginx and one for the app)
One single-container pod, a single container running both the nginx and the .net app
I would choose the 1st option, but I don't know if it is the right choice; it would be great to know the pros and cons of each option.
If I choose the 1st option, is it best to set affinity to put nginx pod in the same node that the app pod? Or anti-affinity so they deploy on different nodes? Or no affinity/anti-affinity at all?
The best practice for inbound traffic in Kubernetes is to use the Ingress resource. This requires a bit of extra setup in AKS because there's no built-in ingress controller. You definitely don't want to do #2 because it's not flexible, and #3 is not possible to my knowledge.
The Kubernetes Ingress resource is a configuration file that manages reverse proxy rules for inbound cluster traffic. This allows you to surface multiple services as if they were a combined API.
To set up ingress, start by creating a public IP address in your auto-generated MC resource group:
az network public-ip create `
-g MC_rg-name_cluster-name_centralus `
-n cluster-name-ingress-ip `
-l centralus `
--allocation-method static `
--dns-name cluster-name-ingress
Now create an ingress controller. This is required to actually handle the inbound traffic from your public IP. It sits and listens to the Kubernetes API Ingress updates, and auto-generates an nginx.conf file.
# Note: you'll have to install Helm and its service account prior to running this. See my GitHub link below for more information
helm install stable/nginx-ingress `
--name nginx-ingress `
--namespace default `
--set controller.service.loadBalancerIP=ip.from.above.result `
--set controller.scope.enabled=true `
--set controller.scope.namespace="default" `
--set controller.replicaCount=3
kubectl get service nginx-ingress-controller -n default -w
Once that's provisioned, make sure to use this annotation on your Ingress resource: kubernetes.io/ingress.class: nginx
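As a rough sketch of such an Ingress resource on a current API version (the Ingress name and backend service are hypothetical; the host corresponds to the --dns-name created above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dotnet-app-ingress                # hypothetical name
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx    # the annotation mentioned above
spec:
  rules:
  - host: cluster-name-ingress.centralus.cloudapp.azure.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: dotnet-app              # hypothetical ClusterIP service for the Kestrel app
            port:
              number: 80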
If you'd like more information on how to set this up, please see this GitHub readme I put together this week. I've also included TLS termination with cert-manager, also installed with Helm.

helm not working on rancher with kubernetes

We followed the quick start guide for the Rancher with kubernetes environment, and followed all the steps and exercises from this ebook.
Everything was beautiful, with one exception: the Helm chart manager is not working.
We found this issue where a lot of people were talking about nginx configurations that apparently solved it, but it did not work for us.
When we run helm like:
> helm install --name prom-release stable/prometheus
It returns:
Error: forwarding ports: error upgrading connection: error dialing backend: dial tcp 35.227.80.81:10250: getsockopt: connection timed out
We appreciate the help!
http://rancher.com/docs/rancher/v1.6/en/kubernetes/addons/#helm
Using helm in the Rancher UI
Rancher provides shell access directly to a managed kubectl instance that can be used to manage Kubernetes clusters and applications. To start using this shell, navigate to Kubernetes -> CLI. This shell is automatically installed with a Helm client and commands for Helm can be used immediately.

kubernetes: a service is not accessible outside host

I am following the guide at http://kubernetes.io/docs/getting-started-guides/ubuntu/ to create a kubernetes cluster. Once the cluster is up, I can create pods and services using kubectl. Basically, I do the following:
kubectl run nginx --image=nginx --port=80
kubectl expose deployment/nginx
I see a pod and service running
# kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 192.168.3.1 <none> 443/TCP 2d
nginx 192.168.3.208 <none> 80/TCP 2d
When I try to access the service from the machine where the pod is running, I get back the nginx hello-world page. But if I try from another machine in the Kubernetes cluster, I get a timeout.
I thought all the services are accessible anywhere in the cluster. Why could it not be working that way?
Thanks
Yes, services should be accessible anywhere in the cluster. Is your "another machine" listed in the output of kubectl get nodes? Is the node Ready? Maybe the machine wasn't configured correctly.
If you want to reach the service from anywhere in the cluster, you must use a network plug-in such as Flannel or Open vSwitch.
http://kubernetes.io/docs/admin/networking/#flannel
https://github.com/coreos/flannel#flannel
I found my error by comparing the installation with another one where it worked. This installation was missing an iptables rule that forced everything going to the containers onto the flannel interface, so the traffic was reaching the target host on eth0, making it discard the packet. I do not know why the proxy didn't add that rule. Once I manually added it, it worked.
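As a rough way to do that kind of comparison between a working and a broken node (commands only; the exact missing rule depends on your pod CIDR and flannel setup):

# dump routes and firewall rules on each node, then diff the files between nodes
ip route show > /tmp/routes-$(hostname).txt
iptables-save > /tmp/iptables-$(hostname).txt
# look specifically for entries that send container traffic via the flannel interface
ip route | grep flannel
iptables-save | grep -i flannel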
