Unable to delete the Persistent Volumes (PVs) associated with the helm release of a microservice deployment in JenkinsX - okd

Summary:
I have deployed a microservice to an OKD cluster through Jenkins X and am trying to delete the Persistent Volumes (PVs) associated with a helm release right after the deployment. I found the following command in the jx documentation:
jx step helm delete <release_name> -n <namespace>
Steps to reproduce the behavior:
Deploy a service using the jx preview command with a release name:
jx preview --app $APP_NAME --dir ../.. --release preview-$APP_NAME
Expected behavior:
The jx step helm delete should remove the Persistent Volumes (PVs) associated with the microservice deployment.
Actual behavior:
The above delete command is unable to delete the PVs, which makes the promotion-to-staging build fail with a port error.
Jx version:
The output of jx version is:
NAME                VERSION
jx                  2.0.785
jenkins x platform  2.0.1973
Kubernetes cluster  v1.11.0+d4cacc0
kubectl             v1.11.0+d4cacc0
helm client         Client: v2.12.0+gd325d2a
git                 2.22.0
Operating System    "CentOS Linux release 7.7.1908 (Core)"
Jenkins type:
[ ] Serverless Jenkins X Pipelines (Tekton + Prow)
[*] Classic Jenkins
Kubernetes cluster:
Openstack cluster with 1 master and 2 worker nodes.
I need to delete the PVs through jx's Jenkinsfile, so I tried:
1. jx step helm delete <release_name> -n <namespace> ["Unable to delete PVs"]
2. helm delete --purge <release_name> ["unable to list/delete the release created through jx helm"]
3. oc/kubectl commands, which are not working through the Jenkinsfile.
But nothing helps. So please suggest any way I can delete the PVs through the jx Jenkinsfile.

jx step helm delete doesn't remove a PV. helm delete doesn't remove a PV either, and that is expected behaviour.
You need to use the --purge option to completely delete the Helm release along with all PVs associated with it, e.g. jx step helm delete <release_name> -n <namespace> --purge
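In a classic Jenkins pipeline, the purge could be wired into a cleanup stage; a minimal sketch (the stage name and the RELEASE_NAME/PREVIEW_NAMESPACE variables are illustrative placeholders, adjust to your own pipeline):

```groovy
// Hypothetical cleanup stage in the jx Jenkinsfile.
stage('Cleanup Preview') {
    steps {
        // --purge removes the Helm release together with its associated resources,
        // including the PVs left behind by the preview deployment.
        sh "jx step helm delete ${RELEASE_NAME} -n ${PREVIEW_NAMESPACE} --purge"
    }
}
```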

Related

Airflow2 gitSync DAG works for airflow namespace, but not alternate namespace

I'm running minikube to develop with Apache Airflow 2. I am trying to sync my DAGs from a private repo on GitLab, but have taken a few steps back just to get a basic example working. With the default "airflow" namespace it works, but when using the exact same file in a non-default namespace, it doesn't.
I have a values.yaml file which has the following section:
dags:
  gitSync:
    enabled: true
    repo: "ssh://git@github.com/apache/airflow.git"
    branch: v2-1-stable
    rev: HEAD
    depth: 1
    maxFailures: 0
    subPath: "tests/dags"
    wait: 60
    containerName: git-sync
    uid: 65533
    extraVolumeMounts: []
    env: []
    resources: {}
If I run helm upgrade --install airflow apache-airflow/airflow -f values.yaml -n airflow, and then kubectl port-forward svc/airflow-webserver 8080:8080 --namespace airflow, I get a whole list of DAGs as expected at http://localhost:8080.
But if I run helm upgrade --install airflow apache-airflow/airflow -f values.yaml -n mynamespace, and then kubectl port-forward svc/airflow-webserver 8080:8080 --namespace mynamespace, I get no DAGs listed at http://localhost:8080.
This post would be 10 times longer if I listed all the sites I hit trying to resolve this. What have I done wrong?
UPDATE: I created a new namespace, test01, in case there was some history being held over and causing the problem. I ran helm upgrade --install airflow apache-airflow/airflow -f values.yaml -n test01. Starting the webserver and inspecting, I do not get a login screen but go straight to the usual web pages; it also does not show the DAGs list, but this time there is a notice at the top of the DAG page:
The scheduler does not appear to be running.
The DAGs list may not update, and new tasks will not be scheduled.
This is different behaviour yet again (although the same as with mynamespace insofar as no DAGs show via gitSync), even though it seems to suggest a reason why DAGs aren't being retrieved in this case. I don't understand why the scheduler isn't running if everything was spun up and initiated the same as before.
Curiously, helm show values apache-airflow/airflow --namespace test01 > values2.yaml gives the default dags.gitSync.enabled: false and dags.gitSync.repo: https://github.com/apache/airflow.git. I would have thought that should reflect what I upgraded/installed from values.yaml: enabled = true and the SSH repo fetch. I get no change in behaviour by editing values2.yaml to dags.gitSync.enabled: true and re-upgrading -- still the error note about the scheduler not running, and no DAGs.
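On the last point: as far as I know, helm show values always prints the chart's packaged defaults and never reflects a release's overrides, so the output above is expected. To inspect what was actually applied to a release, helm get values is the command to reach for (a sketch, assuming Helm 3 and the release/namespace names used above):

```shell
# Chart defaults, independent of any release:
helm show values apache-airflow/airflow > defaults.yaml

# Values actually supplied to the release named "airflow" in namespace test01:
helm get values airflow -n test01

# Include the computed/merged values as well:
helm get values airflow -n test01 --all
```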

How to change server in admin.conf file in kubernetes?

I have a single-node Kubernetes cluster and I'm untainting the master node. I'm using the following commands to create my cluster.
#Only on the Master Node: On the master node initialize the cluster.
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=NumCPU
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
#Add pod network add-on
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
# Ref: https://github.com/calebhailey/homelab/issues/3
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl get nodes
kubectl get pods --all-namespaces
#for nginx
kubectl create deployment nginx --image=nginx
#for deployment nginx server
#https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
Since the admin.conf file is generated automatically, it has set the server to https://172.17.0.2:6443. I want it to use 0.0.0.0:3000 instead. How can I do that?
What would be the reason to restrict a multi-node cluster to only one node? Maybe you can give some more specific information; there may be another workaround.
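For experimenting with the kubeconfig itself, the server field can simply be edited. Note this only changes where kubectl connects; it does not make the API server listen on a different address (that is controlled at kubeadm init time, e.g. via --apiserver-advertise-address, and the secure port defaults to 6443). A minimal, self-contained sketch using a stand-in file:

```shell
# Create a stand-in for admin.conf (illustrative values only):
cat > admin.conf <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://172.17.0.2:6443
  name: kubernetes
EOF

# Rewrite the server endpoint in place:
sed -i 's|server: https://172.17.0.2:6443|server: https://0.0.0.0:3000|' admin.conf

# Verify the change:
grep 'server:' admin.conf
```

The same edit can also be made with kubectl itself: kubectl config set-cluster kubernetes --server=https://0.0.0.0:3000 --kubeconfig=admin.conf.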

How can I package and run a React Single Page App using Bitnami?

I have a React SPA (Single Page Application) and want to deploy it to a Kubernetes environment.
For the sake of keeping it simple, assume the SPA is stand alone.
I've been told Bitnami's repo for Helm Charts are a good place to start to solve this problem.
So my question is what Bitnami chart should I use to deploy a react SPA to a Kubernetes cluster? And where can I find the steps explained?
What I want
The desired solution should be a Helm chart that serves up static content (typically app.js, index.html, and other static files) and lets me specify the subdirectory to use as the contents of the website. In React, the build subdirectory holds the website.
What I currently do (my steps to deploy a SPA to K8S)
What I currently do is described below. I'm starting from a new app created by create-react-app so that others can follow along and do this if needed to help answer the question.
This assumes you have Docker, Kubernetes and helm installed (as well as node and npm for React).
The following commands do the following:
Create a new React application
Create a docker container for it.
Build and test the SPA running in a local Docker image.
Create a helm chart to deploy the image to K8S.
Configure the helm chart so it uses the docker image created in step 3.
Using the helm CLI deploy the SPA app to the k8s cluster.
Test the SPA running in k8s cluster.
1. Create a new React application
npx create-react-app spatok8s
cd spatok8s
npm run build
At this point the static SPA website is created and is in the build directory.
2. Create a docker container for it.
Next, create a Dockerfile with the following content (for example, vi Dockerfile and put the following in it). The Dockerfile is based on what is described at https://hub.docker.com/_/nginx.
FROM nginx
COPY build /usr/share/nginx/html
These lines say to use the NGINX Docker image (from Docker Hub) and copy my website into the image, so my website is self-contained within the image. When the container starts, nginx serves the content copied to /usr/share/nginx/html, including my index.html.
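If you would rather not require node/npm on the machine that builds the image, a multi-stage Dockerfile can run the React build inside Docker and copy only the static output into the nginx image. A sketch (the node:16 tag is an assumption; pick whatever matches your toolchain):

```dockerfile
# Stage 1: build the React app inside a node image (node:16 is illustrative)
FROM node:16 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: serve only the static build output with nginx
FROM nginx
COPY --from=builder /app/build /usr/share/nginx/html
```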
3. Build and test the SPA running in a local Docker image.
Next, build the Docker image spatok8s, run it locally, and open your browser to http://localhost:8082 (the port used in this example).
docker build -t spatok8s .
docker run -d -p8082:80 spatok8s
After you've verified it works, stop it using docker stop <container-id>, where the container ID comes from docker ps -q --filter ancestor=spatok8s.
4. Create a helm chart to deploy the image to K8S.
Now create a helm chart so I can deploy this docker image to Kubernetes:
helm create spatok8schart
5. Configure the helm chart so it uses the docker image created in step 3.
Update the helm chart for this application by editing the chart's values file: vi spatok8schart/values.yaml
The lines changed are included below:
# Update the repo to use the Docker image just built
repository: spatok8s
. . .
# Update the URL to use to access the SPA when it is deployed to Kubernetes
- host: spatok8s.local
. . .
serviceName: spatok8s.local
6. Using the helm CLI deploy the SPA app to the k8s cluster.
Deploy it
helm install spatok8s spatok8schart
The output for the last command is:
NAME: spatok8s
LAST DEPLOYED: Thu Apr 8 22:50:26 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
http://spatok8s.local/
7. Test the SPA running in k8s cluster.
Open the browser to http://spatok8s.local.
If you are doing local development and your Kubernetes environment is not automatically setting up your DNS names, then you'll have to manually set the hostname spatok8s.local to the IP address of the kubernetes cluster.
The file /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows) can be used to hold that mapping.
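For example, an entry mapping the hostname to the cluster's ingress IP might look like this (the IP is a placeholder for your own cluster's address):

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)
192.168.49.2   spatok8s.local
```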
Searching for a solution
So it works, but it isn't as easy as I've been told it could be, so I'm searching for the Bitnami chart that will make this easier.
I searched for "helm chart for deploying a single page app?" and found:
https://developer.ibm.com/depmodels/cloud/tutorials/convert-sample-web-app-to-helmchart/ - Which requires an IBM private cloud (a non-starter for me).
https://wkrzywiec.medium.com/how-to-deploy-application-on-kubernetes-with-helm-39f545ad33b8 - A medium article which looked overly complicated for what I want to do.
https://opensource.com/article/20/5/helm-charts - Good article but not what I'm looking for
A search for "What bitnami chart should I use to deploy a React SPA?" is what worked for me.
See https://docs.bitnami.com/tutorials/deploy-react-application-kubernetes-helm/.
I'll summarize the steps below but this website should be around for a while.
The Bitnami Approach
Step 1: Build and test a custom Docker image
Step 2: Publish the Docker image
Step 3: Deploy the application on Kubernetes
Step 1: Build and test a custom Docker image
The website provides a sample React app:
git clone https://github.com/pankajladhar/GFontsSpace.git
cd GFontsSpace
npm install
Create a Dockerfile with the following:
FROM bitnami/apache:latest
COPY build /app
Build and test it. Build the Docker image, replacing the USERNAME placeholder in the command below with your Docker Hub username:
docker build -t USERNAME/react-app .
Run it to verify it works:
docker run -p 8080:8080 USERNAME/react-app
Step 2: Publish the Docker image
docker login
docker push USERNAME/react-app
Again, use your Docker Hub username.
Step 3: Deploy the application on Kubernetes
Make sure that you can connect to your Kubernetes cluster by executing the command below:
kubectl cluster-info
Update your Helm repository list:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
Deploy the application by executing the following (replace the USERNAME placeholder with your Docker username):
helm install apache bitnami/apache \
--set image.repository=USERNAME/react-app \
--set image.tag=latest \
--set image.pullPolicy=Always
If you wish to access the application externally through a domain name and you have installed the NGINX Ingress controller, use this command instead and replace the DOMAIN placeholder with your domain name:
helm install apache bitnami/apache \
--set image.repository=USERNAME/react-app \
--set image.tag=latest \
--set image.pullPolicy=Always \
--set ingress.enabled=true \
--set ingress.hosts[0].name=DOMAIN
You were actually doing the same steps, so your manual approach was "spot on"!
Thanks again to Vikram Vaswani, and this website https://docs.bitnami.com/tutorials/deploy-react-application-kubernetes-helm that had this answer!

How to stop a Corda node running an example CorDapp?

I have cloned a cordapp example https://github.com/corda/samples/tree/release-V4/cordapp-example
cd /cordapp-example
./gradlew deployNodes
kotlin-source/build/nodes/runnodes
The example runs correctly.
How do I shut down the example CorDapp and the Corda node?
Type bye inside each node terminal.
For the above example, update the Gradle file to include the sshd option (sshdPort); this generates an sshd config in node.conf for each node.
sshd {
    port = 2222
}
Reference - https://docs.corda.net/docs/corda-os/4.4/generating-a-node.html
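In the CorDapp's build.gradle, the sshdPort option sits inside the relevant node block of the deployNodes task. A sketch (the node name and port numbers are illustrative; match them to your own deployNodes configuration):

```groovy
// Fragment of the deployNodes (Cordform) task in build.gradle
node {
    name "O=PartyA,L=London,C=GB"
    p2pPort 10002
    rpcSettings {
        address("localhost:10003")
        adminAddress("localhost:10043")
    }
    sshdPort 2222   // generates the sshd { port = 2222 } block in node.conf
}
```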
From your local desktop, log in via SSH to the remote Ubuntu machine:
ssh -p <port> <ipaddress> -l user1
Reference - https://docs.corda.net/docs/corda-os/4.4/shell.html#the-shell-via-the-local-terminal
Then, in the Corda Crash shell that launches, type the run command:
run gracefulShutdown
Reference - https://docs.corda.net/docs/corda-os/4.4/shell.html#shutting-down-the-node
This shuts down the Corda node on the remote Ubuntu machine.

How to use Gitlab CI/CD to deploy a meteor project?

As claimed on their website, GitLab can be used to auto-deploy projects after code is pushed to the repository, but I am not able to figure out how. There are plenty of Ruby tutorials out there, but none for Meteor or Node.
Basically I just need to rebuild a Docker container on my server after code is pushed to my master branch. Does anyone know how to achieve this? I am totally new to the .gitlab-ci.yml stuff and appreciate help very much.
Brief: I have been running a Meteor 1.3.2 app, hosted on DigitalOcean (Ubuntu 14.04), for 4 months. I am using GitLab v8.3.4 running on the same DigitalOcean droplet as the Meteor app. It is a 2 GB / 2 CPU droplet ($20 a month). I use the built-in GitLab CI for CI/CD. This setup has been running successfully until now. (We are currently not using Docker; however, this should not matter.)
Our CI/CD strategy:
We check out the master branch on our local laptop. The branch contains the whole Meteor project.
We use the git CLI tool on Windows to connect to our GitLab server (for pull, push, and similar regular git activities).
We open the checked-out project in the Atom editor. We have also integrated Atom with GitLab, which helps with quick git status/pull/push etc. within the Atom editor itself. We do regular Meteor work, viz. fixing bugs etc.
After testing on the local laptop, we commit and push to master. This triggers an auto build using GitLab CI, and the results (including build logs) can be seen in GitLab itself.
Please follow below steps:
Install Meteor on the DO droplet.
Install GitLab on DO (using 1-click deploy if possible) or install it manually. Ensure you are installing GitLab v8.3.4 or a newer version. I had done a DO one-click deploy on my droplet.
Start the GitLab server and log into GitLab from a browser. Open your project and go to Project Settings -> Runners in the left menu.
SSH to your DO server and configure a new upstart service on the droplet as root:
vi /etc/init/meteor-service.conf
Sample file:
#upstart service file at /etc/init/meteor-service.conf
description "Meteor.js (NodeJS) application for example.com:3000"
author "rohanray@gmail.com"
# When to start the service
start on runlevel [2345]
# When to stop the service
stop on shutdown
# Automatically restart process if crashed
respawn
respawn limit 10 5
script
export PORT=3000
# this allows Meteor to figure out correct IP address of visitors
export HTTP_FORWARDED_COUNT=1
export MONGO_URL=mongodb://xxxxxx:xxxxxx@example123123.mongolab.com:59672/meteor-db
export ROOT_URL=http://<droplet_ip>:3000
exec /home/gitlab-runner/.meteor/packages/meteor-tool/1.1.10/mt-os.linux.x86_64/dev_bundle/bin/node /home/gitlab-runner/erecaho-build/server-alpha-running/bundle/main.js >> /home/gitlab-runner/erecaho-build/server-alpha-running/meteor.log
end script
Install gitlab-ci-multi-runner from here: https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/install/linux-repository.md as per the instructions
Cheatsheet:
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-ci-multi-runner/script.deb.sh | sudo bash
sudo apt-get install gitlab-ci-multi-runner
sudo gitlab-ci-multi-runner register
Enter the details from step 2.
Now the new runner should show green; activate the runner if required.
Create .gitlab-ci.yml within the Meteor project directory.
Sample file:
before_script:
  - echo "======================================"
  - echo "==== START auto full script v0.1 ====="
  - echo "======================================"

types:
  - cleanup
  - build
  - test
  - deploy

job_cleanup:
  type: cleanup
  script:
    - cd /home/gitlab-runner/erecaho-build
    - echo "cleaning up existing bundle folder"
    - echo "cleaning up current server-running folder"
    - rm -fr ./server-alpha-running
    - mkdir ./server-alpha-running
  only:
    - master
  tags:
    - master

job_build:
  type: build
  script:
    - pwd
    - meteor build /home/gitlab-runner/erecaho-build/server-alpha-running --directory --server=http://example.org:3000 --verbose
  only:
    - master
  tags:
    - master

job_test:
  type: test
  script:
    - echo "testing ----"
    - cd /home/gitlab-runner/erecaho-build/server-alpha-running/bundle
    - ls -la main.js
  only:
    - master
  tags:
    - master

job_deploy:
  type: deploy
  script:
    - echo "deploying ----"
    - cd /home/gitlab-runner/erecaho-build/server-alpha-running/bundle/programs/server/ && /home/gitlab-runner/.meteor/packages/meteor-tool/1.1.10/mt-os.linux.x86_64/dev_bundle/bin/npm install
    - cd ../..
    - sudo restart meteor-service
    - sudo status meteor-service
  only:
    - master
  tags:
    - master
Check the above file into GitLab. This should trigger GitLab CI, and after the build process is complete, the new app will be available at example.net:3000.
Note: The app will not be available after checking in .gitlab-ci.yml for the first time, since restart meteor-service will result in "service not found". Manually run sudo start meteor-service once on the DO SSH console. After this, any new check-in to the GitLab master branch will trigger auto CI/CD, and the new version of the app will be available on example.com:3000 after the build completes successfully.
P.S.: The GitLab CI YAML docs can be found at http://doc.gitlab.com/ee/ci/yaml/README.html for your customization and to understand the sample YAML file above.
For the Docker-specific runner, please refer to https://gitlab.com/gitlab-org/gitlab-ci-multi-runner
