How to turn off autoscaling in Kubernetes with the kubectl command?

If I set a deployment to autoscale using the kubectl autoscale command (http://kubernetes.io/docs/user-guide/kubectl/kubectl_autoscale/), how can I turn it off and go back to manual scaling?

When you autoscale, it creates a HorizontalPodAutoscaler (HPA).
You can delete it by:
kubectl delete hpa NAME-OF-HPA.
You can get NAME-OF-HPA from:
kubectl get hpa.
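For example, a minimal sketch (my-deployment is a placeholder; by default kubectl autoscale names the HPA after the deployment it targets):
# list the autoscalers and find the one attached to your deployment
kubectl get hpa
# delete it; the deployment itself is untouched
kubectl delete hpa my-deployment
# from here on you are back to manual scaling
kubectl scale deployment my-deployment --replicas=3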

kubectl delete hpa ${name of hpa}
Horizontal Pod Autoscaler, like every API resource, is supported in a standard way by kubectl. We can create a new autoscaler using kubectl create command. We can list autoscalers by kubectl get hpa and get detailed description by kubectl describe hpa. Finally, we can delete an autoscaler using kubectl delete hpa.
— from the official docs

kubectl delete horizontalpodautoscaler name_autoscaler_deployment -n namespace

Instead of deleting the autoscaler, if possible set its min and max replica values to the same number (equal to the current pod count), so the autoscaler won't do anything. If you want the autoscaling feature again, just update the min and max values, as in the sketch below.
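A rough sketch of that approach (my-hpa and the replica counts are placeholders):
# freeze the autoscaler by making min and max equal to the current replica count
kubectl patch hpa my-hpa --patch '{"spec":{"minReplicas":3,"maxReplicas":3}}'
# later, restore a real range to re-enable autoscaling
kubectl patch hpa my-hpa --patch '{"spec":{"minReplicas":2,"maxReplicas":10}}'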

Delete all of the HPAs within a namespace using the following command (--no-headers keeps the NAME header line out of the pipeline):
kubectl --namespace=MY_NAMESPACE get hpa --no-headers | awk '{print $1}' | xargs kubectl --namespace=MY_NAMESPACE delete hpa

If you followed the HorizontalPodAutoscaler walkthrough and you are not able to terminate your load generator from the terminal (by typing Ctrl+C), then deleting only the HPA doesn't actually terminate your deployments. In that case, you have to delete your deployments as well. In this example, you have two deployments:
$ kubectl get deployment (run this command to see deployments)
NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
load-generator   1         1         1            1           1d
php-apache       1         1         1            1           1d
Then execute the following commands to delete your deployments:
$ kubectl delete deployment load-generator
$ kubectl delete deployment php-apache

If you want to temporarily disable the effect of the cluster autoscaler (node-level scaling), try the following method to switch its effect on and off.
kubectl get deploy -n kube-system -> this lists the kube-system deployments.
Update the replicas of the coredns-autoscaler or autoscaler deployment from 1 to 0. The pod responsible for autoscaling will be terminated, which means you have turned off the autoscaler's effect. The deployment is still there, and you can set the replicas back to 1 to re-enable the autoscaler on your cluster; see the sketch below.
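A minimal sketch of that on/off switch (the deployment name cluster-autoscaler is an assumption; use whatever autoscaler deployment kubectl get deploy -n kube-system shows on your cluster):
# turn the autoscaler's effect off
kubectl scale deployment cluster-autoscaler -n kube-system --replicas=0
# turn it back on
kubectl scale deployment cluster-autoscaler -n kube-system --replicas=1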

Related

Airflow2 gitSync DAG works for airflow namespace, but not alternate namespace

I'm running minikube to develop with Apache Airflow 2. I am trying to sync my DAGs from a private repo on GitLab, but have taken a few steps back just to get a basic example working. With the default "airflow" namespace it works, but with the exact same file in a non-default namespace it doesn't.
I have a values.yaml file which has the following section:
dags:
  gitSync:
    enabled: true
    repo: "ssh://git@github.com/apache/airflow.git"
    branch: v2-1-stable
    rev: HEAD
    depth: 1
    maxFailures: 0
    subPath: "tests/dags"
    wait: 60
    containerName: git-sync
    uid: 65533
    extraVolumeMounts: []
    env: []
    resources: {}
If I run helm upgrade --install airflow apache-airflow/airflow -f values.yaml -n airflow, and then kubectl port-forward svc/airflow-webserver 8080:8080 --namespace airflow, I get a whole list of DAGs as expected at http://localhost:8080.
But if I run helm upgrade --install airflow apache-airflow/airflow -f values.yaml -n mynamespace, and then kubectl port-forward svc/airflow-webserver 8080:8080 --namespace mynamespace, I get no DAGs listed at http://localhost:8080.
This post would be 10 times longer if I listed all the sites I hit trying to resolve this. What have I done wrong??
UPDATE: I created a new namespace, test01, in case there was some history being held over and causing the problem. I ran helm upgrade --install airflow apache-airflow/airflow -f values.yaml -n test01. Starting the webserver and inspecting, I do not get a login screen, but it goes straight to the usual web pages, also does not show the dags list, but this time has a notice at the top of the DAG page:
The scheduler does not appear to be running.
The DAGs list may not update, and new tasks will not be scheduled.
This is different behaviour yet again (although the same as with mynamespace insofar as showing no DAGs via gitSync), even though it seems to suggest a reason why DAGs aren't being retrieved in this case. I don't understand why a scheduler isn't running if everything was spun-up and initiated the same as before.
Curiously, helm show values apache-airflow/airflow --namespace test01 > values2.yaml gives the default dags.gitSync.enabled: false and dags.gitSync.repo: https://github.com/apache/airflow.git. I would have thought that should reflect what I upgraded/installed from values.yaml: enabled: true and the ssh repo fetch. I get no change in behaviour by editing values2.yaml to dags.gitSync.enabled: true and re-upgrading -- still the error note about the scheduler not running, and no DAGs.
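One thing worth noting here: helm show values prints the chart's default values, not what a given release was installed with. To check what your release actually got, a quick sketch (assuming the release is named airflow in namespace test01, as in the commands above):
# values you supplied at install/upgrade time for this release
helm get values airflow -n test01
# the same, with the chart defaults merged in
helm get values airflow -n test01 --all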

How to change server in admin.conf file in kubernetes?

I have a single-node Kubernetes cluster and I'm untainting the master node. I'm using the following commands to create my cluster.
#Only on the Master Node: On the master node initialize the cluster.
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=NumCPU
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
#Add pod network add-on
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
# Ref: https://github.com/calebhailey/homelab/issues/3
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl get nodes
kubectl get pods --all-namespaces
#for nginx
kubectl create deployment nginx --image=nginx
#for deployment nginx server
#https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
Since the admin.conf file is being automatically generated, it has set the server to https://172.17.0.2:6443. I want it to use '0.0.0.0:3000' instead. How can I do that?
What would be the reason to restrict a cluster to only one node? Maybe you can give some more specific information, and there may be another workaround.
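If the goal is only to change which address kubectl talks to (this does not change where the API server itself listens), a minimal sketch is to rewrite the server field in the copied kubeconfig, using the 0.0.0.0:3000 value from the question. The cluster entry in a kubeadm-generated admin.conf is normally named kubernetes; check with kubectl config get-clusters.
# point the kubeconfig's cluster entry at a different API server address
kubectl config set-cluster kubernetes --server=https://0.0.0.0:3000 --kubeconfig=$HOME/.kube/config
# verify the change
kubectl config view --kubeconfig=$HOME/.kube/config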

Kubernetes ingress - AKS

I have followed the steps mentioned in this NGINX for Kubernetes guide. For installing this in Azure I ran the following:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml
I opened that file and, under the section # Source: ingress-nginx/templates/controller-deployment.yaml, I could see the resources. Is there a way to override this and set the CPU and memory limits for that ingress? I would also like to know whether everything in there is customisable.
I would like to know whether everything in there is customizable.
Almost everything is customizable, but keep in mind that you must know exactly what you are changing, otherwise it can break your ingress.
Is there a way to override this and set the cpu and memory limit for that ingress?
Aside from downloading and editing the file before deploying it, here are three ways you can customize it on the fly:
Kubectl Edit:
The edit command allows you to directly edit any API resource you can retrieve via the command line tools.
It will open the editor defined by your KUBE_EDITOR, or EDITOR environment variables, or fall back to 'vi' for Linux or 'notepad' for Windows.
You can edit multiple objects, although changes are applied one at a time.
Example:
kubectl edit deployment ingress-nginx-controller -n ingress-nginx
This is the command that will open the deployment mentioned in the file. If you make an invalid change, it will not be applied and will be saved to a temporary file, so use it with that in mind: if a change isn't applying, you changed something you shouldn't have, such as the structure.
Kubectl Patch using a yaml file:
Update field(s) of a resource using strategic merge patch, a JSON merge patch, or a JSON patch.
JSON and YAML formats are accepted.
Create a simple file called patch-nginx.yaml with the minimal following content (the parameter you wish to change and its structure):
spec:
  template:
    spec:
      containers:
      - name: controller
        resources:
          requests:
            cpu: 111m
            memory: 99Mi
The command structure is: kubectl patch <KIND> <OBJECT_NAME> -n <NAMESPACE> --patch "$(cat <FILE_TO_PATCH>)"
Here is a full example:
$ kubectl patch deployment ingress-nginx-controller -n ingress-nginx --patch "$(cat patch-nginx.yaml)"
deployment.apps/ingress-nginx-controller patched
$ kubectl describe deployment ingress-nginx-controller -n ingress-nginx | grep cpu
cpu: 111m
$ kubectl describe deployment ingress-nginx-controller -n ingress-nginx | grep memory
memory: 99Mi
Kubectl Patch with JSON format:
This is the one-liner version and it follows the same structure as the yaml version, but we will pass the parameter in a json structure instead:
$ kubectl patch deployment ingress-nginx-controller -n ingress-nginx --patch '{"spec":{"template":{"spec":{"containers":[{"name":"controller","resources":{"requests":{"cpu":"122m","memory":"88Mi"}}}]}}}}'
deployment.apps/ingress-nginx-controller patched
$ kubectl describe deployment ingress-nginx-controller -n ingress-nginx | grep cpu
cpu: 122m
$ kubectl describe deployment ingress-nginx-controller -n ingress-nginx | grep memory
memory: 88Mi
If you have any doubts, let me know in the comments.
Do what the comment suggests (download the file and manually override it, or use the Helm chart), or use kubectl edit deployment xxx and set those limits/requests.
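If you go the Helm chart route instead of the raw manifest, a sketch of the same override (the value paths follow the ingress-nginx chart's controller.resources block; adjust them if your chart version differs):
# add the official chart repo if you haven't already
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# install/upgrade with explicit controller resource requests
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx \
  --set controller.resources.requests.cpu=111m \
  --set controller.resources.requests.memory=99Mi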

Unable to delete the Persistent Volumes (PVs) associated with the helm release of a microservice deployment in Jenkins X

Summary:
I have deployed a microservice in an OKD cluster through Jenkins X and am trying to delete the Persistent Volumes (PVs) associated with the helm release right after the deployment. I found the following command in the jx documentation:
jx step helm delete <release_name> -n <namespace>
Steps to reproduce the behavior:
Deploy a service using the jx preview command with a release name:
jx preview --app $APP_NAME --dir ../.. --release preview-$APP_NAME
Expected behavior:
The jx step helm delete should remove the Persistent Volumes (PVs) associated with the microservice deployment.
Actual behavior:
The above delete command is unable to delete the PVs, which makes the promotion-to-staging build fail with a port error.
Jx version:
The output of jx version is:
NAME                 VERSION
jx                   2.0.785
jenkins x platform   2.0.1973
Kubernetes cluster   v1.11.0+d4cacc0
kubectl              v1.11.0+d4cacc0
helm client          Client: v2.12.0+gd325d2a
git                  2.22.0
Operating System     CentOS Linux release 7.7.1908 (Core)
Jenkins type:
[ ] Serverless Jenkins X Pipelines (Tekton + Prow)
[*] Classic Jenkins
Kubernetes cluster:
Openstack cluster with 1 master and 2 worker nodes.
I need to delete the PVs through jx's Jenkinsfile, so I tried:
1. jx step helm delete <release_name> -n <namespace> ["unable to delete PVs"]
2. helm delete --purge <release_name> ["unable to list/delete the release created through jx helm"]
3. oc/kubectl commands are not working through the Jenkinsfile.
But nothing helps. So please suggest any way that I can delete the PVs through the Jenkinsfile of jx.
jx step helm delete doesn't remove a PV. helm delete also doesn't remove a PV, and that's expected behaviour.
You need to use the --purge option to completely delete the Helm release with all PVs associated with it, e.g. jx step helm delete <release_name> -n <namespace> --purge
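If PVs are still left behind after the purge (for example because of a Retain reclaim policy), a rough cleanup sketch with plain kubectl (the names in angle brackets are placeholders):
# find PVCs left over from the release and remove them
kubectl get pvc -n <namespace>
kubectl delete pvc <pvc-name> -n <namespace>
# a retained PV stays around in Released state; delete it explicitly
kubectl get pv
kubectl delete pv <pv-name>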

Can we create a Riak cluster with more than 2 nodes?

Can we set up a Riak cluster with only 2 nodes, like this:
node01
node02
Or can we add more nodes, or fewer than 2? If we can, please let me know how we can achieve that.
It will depend on whether you are running your cluster under Docker or not.
If you are under docker
You can in this case start a new riak node with the command
docker run -d -P -e COORDINATOR_NODE=172.17.0.3 --label cluster.name=<your main node name> basho/riak-kv
For more explanation about this line you can read the Basho post: Running Riak in Docker
If you are not under a docker container
As I haven't experimented with this case personally, I will only link the documentation for running a new node on a Riak server: Running a Cluster
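For completeness, a rough sketch of joining a second node outside Docker (assuming Riak is installed and started on both hosts and the node names are riak@node01 and riak@node02), run on node02:
# stage the join to the first node, review the plan, then commit it
riak-admin cluster join riak@node01
riak-admin cluster plan
riak-admin cluster commit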
Hope I understood the question correctly
