Kubernetes ingress - AKS - nginx

I have followed the steps in this nginx for kubernetes guide. To install it on Azure I ran the following:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml
I opened that file, and under the section # Source: ingress-nginx/templates/controller-deployment.yaml I could see the resources. Is there a way to override this and set the CPU and memory limits for that ingress? I would also like to know whether everything in there is customizable.

I would like to know whether everything in there is customizable.
Almost everything is customizable, but keep in mind that you must know exactly what you are changing, otherwise it can break your ingress.
Is there a way to override this and set the CPU and memory limits for that ingress?
Aside from downloading and editing the file before deploying it, here are three ways you can customize it on the fly:
Kubectl Edit:
The edit command allows you to directly edit any API resource you can retrieve via the command line tools.
It will open the editor defined by your KUBE_EDITOR, or EDITOR environment variables, or fall back to 'vi' for Linux or 'notepad' for Windows.
You can edit multiple objects, although changes are applied one at a time.
Example:
kubectl edit deployment ingress-nginx-controller -n ingress-nginx
This is the command that will open the deployment mentioned in the file. If you make an invalid change, it will not be applied; instead it will be saved to a temporary file. Keep that in mind: if your change is not applying, you probably changed something you shouldn't have, such as the structure.
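For reference, once the editor opens you would navigate down to the controller container and adjust its resources block. A minimal sketch of what that section could look like after the edit (the values here are purely illustrative):
spec:
  template:
    spec:
      containers:
      - name: controller
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
          limits:
            cpu: 200m
            memory: 180Mi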
Kubectl Patch using a yaml file:
Update field(s) of a resource using strategic merge patch, a JSON merge patch, or a JSON patch.
JSON and YAML formats are accepted.
Create a simple file called patch-nginx.yaml with the minimal following content (the parameter you wish to change and its surrounding structure):
spec:
  template:
    spec:
      containers:
      - name: controller
        resources:
          requests:
            cpu: 111m
            memory: 99Mi
The command structure is: kubectl patch <KIND> <OBJECT_NAME> -n <NAMESPACE> --patch "$(cat <FILE_TO_PATCH>)"
Here is a full example:
$ kubectl patch deployment ingress-nginx-controller -n ingress-nginx --patch "$(cat patch-nginx.yaml)"
deployment.apps/ingress-nginx-controller patched
$ kubectl describe deployment ingress-nginx-controller -n ingress-nginx | grep cpu
cpu: 111m
$ kubectl describe deployment ingress-nginx-controller -n ingress-nginx | grep memory
memory: 99Mi
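Since the question also asks about limits, the same patch structure covers them as well; a sketch with illustrative values:
spec:
  template:
    spec:
      containers:
      - name: controller
        resources:
          requests:
            cpu: 111m
            memory: 99Mi
          limits:
            cpu: 222m
            memory: 198Mi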
Kubectl Patch with JSON format:
This is the one-liner version. It follows the same structure as the YAML version, but we pass the parameters as a JSON structure instead:
$ kubectl patch deployment ingress-nginx-controller -n ingress-nginx --patch '{"spec":{"template":{"spec":{"containers":[{"name":"controller","resources":{"requests":{"cpu":"122m","memory":"88Mi"}}}]}}}}'
deployment.apps/ingress-nginx-controller patched
$ kubectl describe deployment ingress-nginx-controller -n ingress-nginx | grep cpu
cpu: 122m
$ kubectl describe deployment ingress-nginx-controller -n ingress-nginx | grep memory
memory: 88Mi
If you have any doubts, let me know in the comments.

Do what the comment suggests (download the file and manually override it, or use the Helm chart), or use kubectl edit deployment xxx and set those limits/requests.
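If you go the Helm route, the ingress-nginx chart exposes the controller's resources as chart values; a minimal sketch, assuming the standard chart repository and using illustrative values:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.resources.requests.cpu=100m \
  --set controller.resources.requests.memory=90Mi \
  --set controller.resources.limits.cpu=200m \
  --set controller.resources.limits.memory=180Mi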

Related

Airflow2 gitSync DAG works for airflow namespace, but not alternate namespace

I'm running minikube to develop with Apache Airflow2. I am trying to sync my DAGs from a private repo on GitLab, but have taken a few steps back just to get a basic example working. In the case of the default "airflow" namespace it works, but when using the exact same file in a non-default namespace, it doesn't.
I have a values.yaml file which has the following section:
dags:
  gitSync:
    enabled: true
    repo: "ssh://git@github.com/apache/airflow.git"
    branch: v2-1-stable
    rev: HEAD
    depth: 1
    maxFailures: 0
    subPath: "tests/dags"
    wait: 60
    containerName: git-sync
    uid: 65533
    extraVolumeMounts: []
    env: []
    resources: {}
If I run helm upgrade --install airflow apache-airflow/airflow -f values.yaml -n airflow, and then kubectl port-forward svc/airflow-webserver 8080:8080 --namespace airflow, I get a whole list of DAGs as expected at http://localhost:8080.
But if I run helm upgrade --install airflow apache-airflow/airflow -f values.yaml -n mynamespace, and then kubectl port-forward svc/airflow-webserver 8080:8080 --namespace mynamespace, I get no DAGs listed at http://localhost:8080.
This post would be ten times longer if I listed all the sites I hit trying to resolve this. What have I done wrong?
UPDATE: I created a new namespace, test01, in case some history was being held over and causing the problem. I ran helm upgrade --install airflow apache-airflow/airflow -f values.yaml -n test01. Starting the webserver and inspecting, I do not get a login screen; it goes straight to the usual web pages. It also does not show the DAGs list, but this time there is a notice at the top of the DAG page:
The scheduler does not appear to be running.
The DAGs list may not update, and new tasks will not be scheduled.
This is different behaviour yet again (although the same as with mynamespace insofar as it shows no DAGs via gitSync), even though it seems to suggest a reason why DAGs aren't being retrieved in this case. I don't understand why a scheduler isn't running if everything was spun up and initiated the same as before.
Curiously, helm show values apache-airflow/airflow --namespace test01 > values2.yaml gives the default dags.gitSync.enabled: false and dags.gitSync.repo: https://github.com/apache/airflow.git. I would have thought that should reflect what I upgraded/installed from values.yaml: enabled = true and the ssh repo fetch. I get no change in behaviour by editing values2.yaml to dags.gitSync.enabled: true and re-upgrading -- still the error note about the scheduler not running, and no DAGs.
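A few commands that may help narrow this down (the deployment name below assumes the release is called airflow, as in the helm commands above; adjust as needed):
# see whether a scheduler pod exists and is running in the namespace
kubectl get pods -n test01
# inspect the scheduler logs for gitSync or startup errors
kubectl logs deployment/airflow-scheduler -n test01
Also note that helm show values prints the chart's default values, not the release's settings; helm get values airflow -n test01 shows the values the release was actually installed with.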

How to change server in admin.conf file in kubernetes?

I have a single-node kubernetes cluster and I'm untainting the master node. I'm using the following commands to create my cluster.
#Only on the Master Node: On the master node initialize the cluster.
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=NumCPU
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
#Add pod network add-on
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
# Ref: https://github.com/calebhailey/homelab/issues/3
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl get nodes
kubectl get pods --all-namespaces
#for nginx
kubectl create deployment nginx --image=nginx
#for deployment nginx server
#https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
Since the admin.conf file is generated automatically, it has set the server to https://172.17.0.2:6443. I want it to use '0.0.0.0:3000' instead. How can I do that?
What would be the reason to restrict the cluster to only one node? Maybe you can give some more specific information; there might be another workaround.
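For what it's worth, the server field in a kubeconfig can be changed without editing the file by hand; a sketch, assuming the cluster entry is named kubernetes as kubeadm generates it. Note that this only changes where kubectl connects -- the address and port the API server actually listens on are set at kubeadm init time via flags such as --apiserver-advertise-address and --apiserver-bind-port:
# show the current cluster entries and their servers
kubectl config view
# point the kubernetes cluster entry at a different endpoint (illustrative address)
kubectl config set-cluster kubernetes --server=https://0.0.0.0:3000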

Deploy docker image to k8s cluster issue

I am trying to deploy a docker image to kubernetes but am hitting a strange issue. Below is the command I am using in my Jenkinsfile:
stage('Deploy to k8s') {
  steps {
    sshagent(['kops-machine']) {
      sh "scp -o StrictHostKeyChecking=no deployment.yml ubuntu@<ip>:/home/ubuntu/"
      sh "ssh ubuntu@<ip> kubectl apply -f ."
      sh 'kubectl set image deployment/nginx-deployment nginx=account/repo:${BUILD_NUMBER}'
    }
  }
}
I am getting this error message
kubectl set image deployment/nginx-deployment nginx=account/repo:69
error: the server doesn't have a resource type "deployment"
The strange thing is, if I copy and paste this command and execute it on the cluster, the image gets updated:
kubectl set image deployment/nginx-deployment nginx=account/repo:69
Can somebody please help? The image builds and pushes to Docker Hub successfully; I'm just stuck with pulling and deploying to the kubernetes cluster. If you have any other alternatives, please let me know. The deployment.yml file which gets copied to the server is as follows:
spec:
  containers:
  - name: nginx
    image: account/repo:3
    ports:
    - containerPort: 80
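(For reference, a complete Deployment that fragment could come from might look like the following; everything outside the quoted fragment is assumed for illustration, with the name taken from the kubectl set image command above:)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: account/repo:3
        ports:
        - containerPort: 80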
OK, so I found the workaround. If I change this line in my Jenkinsfile:
sh "ssh ubuntu@<ip> kubectl apply -f ."
to
sh "ssh ubuntu@<ip> kubectl set image deployment/nginx-deployment nginx=account/repo:${BUILD_NUMBER}"
it works. But if no deployment has been created at all, then I have to add both lines to make it work:
sh "ssh ubuntu@<ip> kubectl apply -f ."
sh "ssh ubuntu@<ip> kubectl set image deployment/nginx-deployment nginx=account/repo:${BUILD_NUMBER}"
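Putting that together, the whole stage could look like this (a sketch under the same assumptions as the original: the kops-machine credential and a placeholder <ip>; the original failure was likely because the bare kubectl set image ran on the Jenkins agent, which has no kubeconfig for the cluster):
stage('Deploy to k8s') {
  steps {
    sshagent(['kops-machine']) {
      // copy the manifest to the cluster host
      sh "scp -o StrictHostKeyChecking=no deployment.yml ubuntu@<ip>:/home/ubuntu/"
      // apply creates the deployment if it does not exist yet
      sh "ssh ubuntu@<ip> kubectl apply -f ."
      // then roll the image to the current build
      sh "ssh ubuntu@<ip> kubectl set image deployment/nginx-deployment nginx=account/repo:${BUILD_NUMBER}"
    }
  }
}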

How to delete networkpolicies using kubectl?

I've followed a Kubernetes tutorial similar to:
https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/ which created some basic networkpolicies as follows:
root@server:~# kubectl get netpol -n policy-demo
NAME           POD-SELECTOR   AGE
access-nginx   run=nginx      49m
default-deny   <none>         50m
I saw that I can delete the entire namespace (pods included) using a command like "kubectl delete ns policy-demo", but I can't see what command I need to use if I just want to delete a single policy (or even edit it).
How would I use kubectl to delete just the "access-nginx" policy above?
This should work. A similar command works at my end.
kubectl -n policy-demo delete networkpolicy access-nginx
PolicyName: access-nginx
Namespace: default
kubectl get netpol -n default
NAME           POD-SELECTOR   AGE
access-nginx   run=nginx      60m
Deleting Network Policy
kubectl delete networkpolicy access-nginx -n default
or
kubectl delete netpol access-nginx -n default
or - using the filename of the resource
kubectl delete -f access-nginx.yaml
Editing Network Policy
By default it will open in yaml format for editing
kubectl edit netpol access-nginx -n default
or - if you prefer in json format
kubectl edit netpol access-nginx -n default -o json
or - using the filename of the resource
kubectl edit -f access-nginx.yaml
Editor preference:
KUBE_EDITOR='nano' kubectl edit netpol access-nginx -n default
or - vim
KUBE_EDITOR='vim' kubectl edit netpol access-nginx -n default
For the default namespace it is not required to pass -n default, as kubectl assumes the default namespace when none is specified.

How to turn off autoscaling in Kubernetes with the kubectl command?

If I set to autoscale a deployment using the kubectl autoscale command (http://kubernetes.io/docs/user-guide/kubectl/kubectl_autoscale/), how can I turn it off and go back to manual scaling?
When you autoscale, it creates a HorizontalPodAutoscaler (HPA).
You can delete it by:
kubectl delete hpa NAME-OF-HPA.
You can get NAME-OF-HPA from:
kubectl get hpa.
kubectl delete hpa ${name of hpa}
Horizontal Pod Autoscaler, like every API resource, is supported in a standard way by kubectl. We can create a new autoscaler using the kubectl create command. We can list autoscalers by kubectl get hpa and get a detailed description by kubectl describe hpa. Finally, we can delete an autoscaler using kubectl delete hpa.
from the official docs
kubectl delete horizontalpodautoscaler name_autoscaler_deployment -n namespace
Instead of deleting the autoscaler, if possible set its min and max replicas to the same value (equal to the current pod count), so that the autoscaler won't do anything. If you want the autoscaler feature again, just update the min and max values.
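A sketch of how that could be done with kubectl patch (the HPA name, namespace, and replica count here are illustrative):
# pin the HPA so min == max == current replica count
kubectl patch hpa my-hpa -n my-namespace --patch '{"spec":{"minReplicas":3,"maxReplicas":3}}'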
Delete all of the HPAs within a namespace using the following command:
kubectl --namespace=MY_NAMESPACE get hpa | awk '{print $1}' | xargs kubectl --namespace=MY_NAMESPACE delete hpa
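Since kubectl delete also supports an --all flag scoped to a namespace, the same can likely be done without the awk/xargs pipeline:
kubectl delete hpa --all --namespace=MY_NAMESPACE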
If you follow this example and you are not able to terminate your load generator from the terminal (by typing Ctrl+C), then deleting only the HPA doesn't actually terminate your deployment. In that case, you have to delete your deployments as well. In this example, you have two deployments:
$ kubectl get deployment (run this command to see deployments)
NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
load-generator   1         1         1            1           1d
php-apache       1         1         1            1           1d
Then execute following commands to delete your deployments:
$ kubectl delete deployment load-generator
$ kubectl delete deployment php-apache
If you want to temporarily disable the effect of the cluster autoscaler (node level), try the following method. You can enable and disable the autoscaler's effect this way:
kubectl get deploy -n kube-system -> this will list the kube-system deployments.
Update the replicas of the coredns-autoscaler or autoscaler deployment from 1 to 0. The pod responsible for autoscaling will be terminated, which means you have turned off the effect of the autoscaler. The deployment is still there, though, and you can update the replicas back to 1 to re-enable the autoscaler effect on your cluster.
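A sketch of those two steps with kubectl scale, assuming the deployment is named cluster-autoscaler (the actual name varies with how the autoscaler was installed):
# disable the autoscaler by removing its pod
kubectl scale deployment cluster-autoscaler -n kube-system --replicas=0
# re-enable it later
kubectl scale deployment cluster-autoscaler -n kube-system --replicas=1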
