how to create a symlink for an existing PVC in a kubernetes pod? - kubernetes-pvc

I have mounted a volume with a PVC in a pod and would like to create a symlink to this path inside the pod. Please advise.
I tried to add multiple volumes with the same PVC and mount them, but it throws errors. My expectation is to have two mount paths in a pod pointing to the same shared NFS location.
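As a sketch of one way to get two paths onto the same PVC without a symlink: Kubernetes lets a single volume entry be mounted at several mountPaths, so the PVC only needs to be declared once (the pod, container, and claim names below are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: pvc-double-mount         # hypothetical pod name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: shared-data           # same volume mounted twice
      mountPath: /data/primary
    - name: shared-data
      mountPath: /data/alias      # second path backed by the same NFS PVC
  volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: my-nfs-pvc       # hypothetical existing PVC

Both paths end up on the same NFS share, which avoids creating a symlink inside the container image.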

Related

run kubernetes containers without minikube or etc

I want to just run an nginx-server on kubernetes with the help of
kubectl run nginx-image --image nginx
but the error was thrown:
error: Missing or incomplete configuration info. Please point to an existing, complete config file:
1. Via the command-line flag --kubeconfig
2. Via the KUBECONFIG environment variable
3. In your home directory as ~/.kube/config
I then ran
kubectl run nginx-image --kubeconfig ~/.kube/config --image nginx
again thrown:
error: Missing or incomplete configuration info. Please point to an existing, complete config file:
1. Via the command-line flag --kubeconfig
2. Via the KUBECONFIG environment variable
3. In your home directory as ~/.kube/config
minikube start solves the problem, but it takes up resources...
I just want to ask: how can I run kubectl without minikube (or other such solutions) being started? Please tell me if it is not possible.
When I run kubectl get pods, I get two pods, but I just want one; I know it is possible since I have seen it in some video tutorials.
Please help...
Kubectl is a command-line tool that communicates with the cluster minikube runs. It allows you to run commands against minikube: you can use it to deploy applications, inspect and manage resources, and view logs. When you execute this command
kubectl run nginx-image --image nginx
kubectl tries to connect to minikube and sends your request (run nginx) to it. If you stop minikube, kubectl has nothing to talk to. In short, minikube is responsible for running nginx, and kubectl is only responsible for telling minikube to run it.
You need to install Kubernetes in order to use it; it's not magic. If minikube isn't to your liking, there are many installers; try Docker Desktop or k3d.
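If you want a lighter-weight setup to try, here is a sketch with a recent k3d (assuming k3d and Docker are installed; the cluster name is hypothetical):

# create a small local cluster; k3d writes its context into ~/.kube/config
k3d cluster create dev
# kubectl now has an API server to talk to
kubectl run nginx-image --image nginx
kubectl get pods

Docker Desktop's built-in Kubernetes works the same way: once it is enabled, kubectl picks up the context from ~/.kube/config.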

How to fix Kubernetes Ingress Controller cutting off nodes from cluster

I'm having some trouble installing an Ingress Controller in my on-prem cluster (created with Kubespray, running MetalLB to provide LoadBalancer services).
I tried using nginx, traefik and kong but all got the same results.
I'm installing the nginx helm chart using the following values.yaml:
controller:
  kind: DaemonSet
  nodeSelector:
    node-role.kubernetes.io/master: ""
  image:
    tag: 0.23.0
rbac:
  create: true
With command:
helm install --name nginx stable/nginx-ingress --values values.yaml --namespace ingress-nginx
When I deploy the ingress controller in the cluster, a service is created (e.g. nginx-ingress-controller for nginx). This service is of the type LoadBalancer and gets an external IP.
When this external IP is assigned, the node that's linked to this external IP is lost (status NotReady). However, when I check this node, it's still running; it's just cut off from the other nodes and can't even ping them (no route found). When I remove the service (but not the rest of the nginx helm chart), everything works again and the Ingress keeps working. I also tried installing nginx/traefik/kong without a LoadBalancer, using NodePorts or external IPs on the service, but I get the same result.
Does anyone recognize this behaviour?
Why does the ingress still work, even when I remove the nginx-ingress-controller service?
After a long search, we finally found a working solution for this problem.
As mentioned by #A_Suh, the pool of IPs that MetalLB uses should contain IPs that are not currently used by any of the nodes in the cluster. By adding a new IP range that is also configured in the DHCP server, MetalLB can use ARP to link one of the IPs to one of the nodes.
For example, in my 5-node cluster (kube11-15): when MetalLB gets the range 10.4.5.200/31 and allocates 10.4.5.200 for my nginx-ingress-controller, 10.4.5.200 is linked to kube12. On ARP requests for 10.4.5.200, all 5 nodes answer with kube12, and traffic is routed to this node.
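For reference, a layer 2 address pool along those lines might look like the following sketch, using the ConfigMap-based MetalLB configuration that was current at the time (the pool name is hypothetical; the range is the one from the example above):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: ingress-pool          # hypothetical pool name
      protocol: layer2
      addresses:
      - 10.4.5.200/31             # range not used by any node, also set aside in the DHCP server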

What is the best way to organize a .net core app with nginx reverse proxy inside a kubernetes cluster?

I want to deploy a .NET Core app with NGINX reverse proxy on Azure Kubernetes Service. What is the best way to organize the pods and containers?
Two single-container pods, one pod for nginx and one pod for the app (.net-core/kestrel), so each one can scale independently of the other
One multi-container pod, this single pod with two containers (one for nginx and one for the app)
One single-container pod, a single container running both the nginx and the .net app
I would choose the 1st option, but I don't know if it is the right choice; it would be great to know the pros and cons of each option.
If I choose the 1st option, is it best to set affinity to put nginx pod in the same node that the app pod? Or anti-affinity so they deploy on different nodes? Or no affinity/anti-affinity at all?
The best practice for inbound traffic in Kubernetes is to use the Ingress resource. This requires a bit of extra setup in AKS because there's no built-in ingress controller. You definitely don't want to do #2 because it's not flexible, and #3 is not possible to my knowledge.
The Kubernetes Ingress resource is a configuration file that manages reverse proxy rules for inbound cluster traffic. This allows you to surface multiple services as if they were a combined API.
To set up ingress, start by creating a public IP address in your auto-generated MC resource group:
az network public-ip create `
-g MC_rg-name_cluster-name_centralus `
-n cluster-name-ingress-ip `
-l centralus `
--allocation-method static `
--dns-name cluster-name-ingress
Now create an ingress controller. This is required to actually handle the inbound traffic from your public IP. It sits and listens for Ingress updates from the Kubernetes API, and auto-generates an nginx.conf file.
# Note: you'll have to install Helm and its service account prior to running this. See my GitHub link below for more information
helm install stable/nginx-ingress `
--name nginx-ingress `
--namespace default `
--set controller.service.loadBalancerIP=ip.from.above.result `
--set controller.scope.enabled=true `
--set controller.scope.namespace="default" `
--set controller.replicaCount=3
kubectl get service nginx-ingress-controller -n default -w
Once that's provisioned, make sure to use this annotation on your Ingress resource: kubernetes.io/ingress.class: nginx
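For reference, a minimal Ingress manifest with that annotation might look like the sketch below (the names, host, and backend service are hypothetical; it uses the extensions/v1beta1 schema that was current at the time):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress                      # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx   # route through the nginx ingress controller installed above
spec:
  rules:
  - host: app.example.com                # hypothetical DNS name pointing at the ingress IP
    http:
      paths:
      - path: /
        backend:
          serviceName: dotnet-app        # hypothetical ClusterIP service for the Kestrel app
          servicePort: 80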
If you'd like more information on how to set this up, please see this GitHub readme I put together this week. I've also included TLS termination with cert-manager, also installed with Helm.

kubernetes liveness probe restarts the pod which ends in CrashLoopback

I have a deployment with 2 replicas of nginx with openconnect vpn proxy container (a pod has only one container).
They start without any problems and everything works, but once the connection crashes and my liveness probe fails, the nginx container is restarted, ending up in CrashLoopBackOff because the openconnect and nginx restarts fail with:
nginx:
host not found in upstream "example.server.org" in /etc/nginx/nginx.conf:11
openconnect:
getaddrinfo failed for host 'vpn.server.com': Temporary failure in name resolution
It seems like /etc/resolv.conf is edited by openconnect and stays the same after the pod restart (although it is not part of a persistent volume). I believe the whole container should be run from a clean docker image, where /etc/resolv.conf is not modified, right?
The only way to fix the CrashLoopBackOff is to delete the pod; the deployment's replica set then runs a new pod that works.
How is creating a new pod different from the container in the pod being restarted by the liveness probe (restartPolicy: Always)? Is the container restarted from a clean image?
restartPolicy applies to all Containers in the Pod, not the pod itself. Pods usually only get re-created when someone explicitly deletes them.
I think this explains why the restarted container with the bad resolv.conf fails but a new pod works.
A "restarted container" is just that, it is not spawned new from the downloaded docker image. It is like killing a process and starting it - the file system for the new process is the same one the old process was updating. But a new pod will create a new container with a local file system view identical to the one packaged in the downloaded docker image - fresh start.

kubernetes: a service is not accessible outside host

I am following the guide at http://kubernetes.io/docs/getting-started-guides/ubuntu/ to create a kubernetes cluster. Once the cluster is up, I can create pods and services using kubectl. Basically, I do the following:
kubectl run nginx --image=nginx --port=80
kubectl expose deployment/nginx
I see a pod and service running
# kubectl get services
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   192.168.3.1     <none>        443/TCP   2d
nginx        192.168.3.208   <none>        80/TCP    2d
When I try to access the service from the machine where the pod is running, I get back the nginx hello world page. But if I try it from another machine in the kubernetes cluster, I get a timeout.
I thought all the services are accessible anywhere in the cluster. Why could it not be working that way?
Thanks
Yes, services should be accessible anywhere in the cluster. Is your "another machine" listed in the output of kubectl get nodes? Is the node Ready? Maybe the machine wasn't configured correctly.
If you want to reach the service from anywhere in the cluster, you must use a network plug-in, such as Flannel or Open vSwitch.
http://kubernetes.io/docs/admin/networking/#flannel
https://github.com/coreos/flannel#flannel
Found my error by comparing it with another installation where it worked. This installation was missing an iptables rule that forced everything going to the containers onto the flannel interface, so the traffic was reaching the target host on eth0, making it discard the packet. I do not know why the proxy didn't add that rule. Once I added it manually, it worked.
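If you hit something similar, a quick way to compare a working node with a broken one is to look at the pod-network routes and the flannel-related iptables entries (a sketch; interface names depend on your setup):

# the pod subnet routes should point at the flannel interface, not eth0
ip route | grep flannel

# compare the NAT/forward rules that flannel and kube-proxy installed on both nodes
iptables-save | grep -i flannel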
