As claimed on their website, GitLab can be used to auto-deploy projects after code is pushed to the repository, but I am not able to figure out how. There are plenty of Ruby tutorials out there, but none for Meteor or Node.
Basically, I just need to rebuild a Docker container on my server after code is pushed to my master branch. Does anyone know how to achieve this? I am totally new to the .gitlab-ci.yml stuff and would appreciate any help.
Brief: I have been running a Meteor 1.3.2 app hosted on DigitalOcean (Ubuntu 14.04) for 4 months. I am using GitLab v8.3.4 running on the same DigitalOcean droplet as the Meteor app. It is a 2 GB / 2 CPU droplet ($20 a month). I use the built-in GitLab CI for CI/CD. This setup has been running successfully so far. (We are currently not using Docker; however, this should not matter.)
Our CI/CD strategy:
We check out the master branch on our local laptop. The branch contains the whole Meteor project.
We use the git CLI tool on Windows to connect to our GitLab server (for pull, push, and similar regular git activities).
We open the checked-out project in the Atom editor. We have also integrated Atom with GitLab, which allows quick git status/pull/push from within the editor. There we do regular Meteor work, e.g. fixing bugs.
After testing on the local laptop, we commit and push to master. This triggers an automatic build via GitLab CI, and the results (including build logs) can be seen in GitLab itself.
Please follow the steps below:
Install Meteor on the DO droplet.
Install GitLab on the droplet (using the DO 1-click deploy if possible) or install it manually. Ensure you are installing GitLab v8.3.4 or newer. I used the DO one-click deploy on my droplet.
Start the GitLab server and log in to GitLab from a browser. Open your project and go to Project Settings -> Runners in the left menu. Note the coordinator URL and registration token shown there; you will need them when registering the runner.
SSH to your DO server and configure a new upstart service on the droplet as root:
vi /etc/init/meteor-service.conf
Sample file:
#upstart service file at /etc/init/meteor-service.conf
description "Meteor.js (NodeJS) application for eaxmple.com:3000"
author "rohanray#gmail.com"
# When to start the service
start on runlevel [2345]
# When to stop the service
stop on shutdown
# Automatically restart process if crashed
respawn
respawn limit 10 5
script
export PORT=3000
# this allows Meteor to figure out correct IP address of visitors
export HTTP_FORWARDED_COUNT=1
export MONGO_URL=mongodb://xxxxxx:xxxxxx@example123123.mongolab.com:59672/meteor-db
export ROOT_URL=http://<droplet_ip>:3000
exec /home/gitlab-runner/.meteor/packages/meteor-tool/1.1.10/mt-os.linux.x86_64/dev_bundle/bin/node /home/gitlab-runner/erecaho-build/server-alpha-running/bundle/main.js >> /home/gitlab-runner/erecaho-build/server-alpha-running/meteor.log
end script
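After saving the file, you can load and test the service definition by hand; these are standard upstart commands on Ubuntu 14.04 (only start the service after the first successful build has produced bundle/main.js, as per the note near the end):
# pick up the new /etc/init/meteor-service.conf
sudo initctl reload-configuration
# start and verify the service
sudo start meteor-service
sudo status meteor-service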
Install gitlab-ci-multi-runner as per the instructions here: https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/install/linux-repository.md
Cheatsheet:
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-ci-multi-runner/script.deb.sh | sudo bash
sudo apt-get install gitlab-ci-multi-runner
sudo gitlab-ci-multi-runner register
Enter the details noted earlier from the project's Runners page (coordinator URL and registration token).
The new runner should now show as green; activate it if required.
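For reference, a register session prompts roughly like the following; the URL, token, and description values here are placeholders, with the real URL and token taken from the project's Runners page:
$ sudo gitlab-ci-multi-runner register
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/ci):
http://<your_gitlab_host>/ci
Please enter the gitlab-ci token for this runner:
<registration_token_from_runners_page>
Please enter the gitlab-ci description for this runner:
do-droplet-runner
Please enter the gitlab-ci tags for this runner (comma separated):
master
Please enter the executor (shell, docker, ssh, ...):
shell
The shell executor and the master tag match the sample .gitlab-ci.yml below, which runs meteor directly on the droplet and restricts jobs to runners tagged master.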
Create a .gitlab-ci.yml file within the Meteor project directory.
Sample file (this uses the older types/type keywords of that GitLab era; newer GitLab versions use stages/stage):
before_script:
  - echo "======================================"
  - echo "==== START auto full script v0.1 ====="
  - echo "======================================"

types:
  - cleanup
  - build
  - test
  - deploy

job_cleanup:
  type: cleanup
  script:
    - cd /home/gitlab-runner/erecaho-build
    - echo "cleaning up existing bundle folder"
    - echo "cleaning up current server-running folder"
    - rm -fr ./server-alpha-running
    - mkdir ./server-alpha-running
  only:
    - master
  tags:
    - master

job_build:
  type: build
  script:
    - pwd
    - meteor build /home/gitlab-runner/erecaho-build/server-alpha-running --directory --server=http://example.org:3000 --verbose
  only:
    - master
  tags:
    - master

job_test:
  type: test
  script:
    - echo "testing ----"
    - cd /home/gitlab-runner/erecaho-build/server-alpha-running/bundle
    - ls -la main.js
  only:
    - master
  tags:
    - master

job_deploy:
  type: deploy
  script:
    - echo "deploying ----"
    - cd /home/gitlab-runner/erecaho-build/server-alpha-running/bundle/programs/server/ && /home/gitlab-runner/.meteor/packages/meteor-tool/1.1.10/mt-os.linux.x86_64/dev_bundle/bin/npm install
    - cd ../..
    - sudo restart meteor-service
    - sudo status meteor-service
  only:
    - master
  tags:
    - master
Check in the above file to GitLab. This should trigger GitLab CI, and after the build process is complete, the new app will be available at example.net:3000.
Note: The app will not be available after checking in .gitlab-ci.yml for the first time, since restart meteor-service will fail with "service not found". Manually run sudo start meteor-service once on the DO SSH console. After this, any new check-in to the GitLab master branch will trigger auto CI/CD, and the new version of the app will be available at example.com:3000 once the build completes successfully.
P.S.: The GitLab CI YAML docs can be found at http://doc.gitlab.com/ee/ci/yaml/README.html for customization and for understanding the sample YAML file above.
For a Docker-specific runner, please refer to https://gitlab.com/gitlab-org/gitlab-ci-multi-runner
I have a React SPA (Single Page Application) and want to deploy it to a Kubernetes environment.
For the sake of keeping it simple, assume the SPA is standalone.
I've been told Bitnami's repo of Helm charts is a good place to start to solve this problem.
So my question is: which Bitnami chart should I use to deploy a React SPA to a Kubernetes cluster? And where can I find the steps explained?
What I want
The desired solution should be a Helm chart that serves up static content (typically app.js, index.html, and other static assets) and lets me specify the sub-directory to use as the contents of the website. In React, the build subdirectory holds the website.
What I currently do (How to deploy a SPA to K8S my steps)
What I currently do is described below. I'm starting from a new app created by create-react-app so that others can follow along and reproduce it if needed to help answer the question.
This assumes you have Docker, Kubernetes and helm installed (as well as node and npm for React).
The following commands do the following:
Create a new React application
Create a docker container for it.
Build and test the SPA running in a local docker image.
Create a helm chart to deploy the image to K8S.
Configure the helm chart so it uses the docker image created in step 3.
Using the helm CLI deploy the SPA app to the k8s cluster.
Test the SPA running in k8s cluster.
1. Create a new React application
npx create-react-app spatok8s
cd spatok8s
npm run build
At this point the static SPA website has been created and is in the build directory.
2. Create a docker container for it.
Next, create a Dockerfile (e.g. vi Dockerfile) and put the following in it. This Dockerfile follows what is described at https://hub.docker.com/_/nginx.
FROM nginx
COPY build /usr/share/nginx/html
These lines say to use the NGINX docker image (from Docker Hub) and copy my website into the image, so my website is self-contained within the image. When the container starts, nginx starts, and the only content served will be my website, starting with /usr/share/nginx/html/index.html.
3. Build and test the SPA running in a local docker image.
Next, build the docker image spatok8s, run it locally, and open your browser to http://localhost:8082 (the host port used in this example).
docker build -t spatok8s .
docker run -d -p 8082:80 spatok8s
After you've verified it works, stop it using docker stop <container-id>, where the container ID comes from docker ps -q --filter ancestor=spatok8s.
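If you prefer a one-liner, something like this should work:
docker stop $(docker ps -q --filter ancestor=spatok8s)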
4. Create a helm chart to deploy the image to K8S.
Now create a helm chart so I can deploy this docker image to Kubernetes:
helm create spatok8schart
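For orientation, helm create scaffolds a chart directory roughly like this (plus a few support files):
spatok8schart/
  Chart.yaml      # chart metadata
  values.yaml     # default configuration values (edited in the next step)
  charts/         # chart dependencies
  templates/      # Kubernetes manifest templates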
5. Configure the helm chart so it uses the docker image created in step 3.
Update the helm chart for this application, e.g. vi spatok8schart/values.yaml (the values shown below live there; adjust the templates too if needed).
The lines changed are included below:
# Update the repo to use the Docker image just built
repository: spatok8s
. . .
# Update the URL to use to access the SPA when it is deployed to Kubernetes
- host: spatok8s.local
. . .
serviceName: spatok8s.local
6. Using the helm CLI deploy the SPA app to the k8s cluster.
Deploy it
helm install spatok8s spatok8schart
The output for the last command is:
NAME: spatok8s
LAST DEPLOYED: Thu Apr 8 22:50:26 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
http://spatok8s.local/
7. Test the SPA running in k8s cluster.
Open the browser to http://spatok8s.local.
If you are doing local development and your Kubernetes environment does not automatically set up DNS names, then you'll have to manually map the hostname spatok8s.local to the IP address of the Kubernetes cluster.
The file /etc/hosts (or c:\Windows\System32\Drivers\etc\hosts on Windows) can hold that mapping, for example:
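A minimal hosts entry might look like this; the IP address below is a placeholder for your cluster's ingress IP:
192.168.49.2   spatok8s.local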
Searching for a solution
So it works but it isn't as easy as I've been told it could be, so I'm searching for the Bitnami chart that will make this easier.
I searched "helm chart for deploying a single page app?" and found:
https://developer.ibm.com/depmodels/cloud/tutorials/convert-sample-web-app-to-helmchart/ - Which requires an IBM private cloud (a non-starter for me).
https://wkrzywiec.medium.com/how-to-deploy-application-on-kubernetes-with-helm-39f545ad33b8 - A medium article which looked overly complicated for what I want to do.
https://opensource.com/article/20/5/helm-charts - Good article but not what I'm looking for
A search for "What bitnami chart should I use to deploy a React SPA?" is what worked for me.
See https://docs.bitnami.com/tutorials/deploy-react-application-kubernetes-helm/.
I'll summarize the steps below but this website should be around for a while.
The Bitnami Approach
Step 1: Build and test a custom Docker image
Step 2: Publish the Docker image
Step 3: Deploy the application on Kubernetes
Step 1: Build and test a custom Docker image
The website provides a sample React app:
git clone https://github.com/pankajladhar/GFontsSpace.git
cd GFontsSpace
npm install
Create a Dockerfile with the following:
FROM bitnami/apache:latest
COPY build /app
Build and test it. First run npm run build so the build directory exists, then build the Docker image, replacing the USERNAME placeholder in the command below with your Docker Hub username:
docker build -t USERNAME/react-app .
Run it to verify it works:
docker run -p 8080:8080 USERNAME/react-app
Step 2: Publish the Docker image
docker login
docker push USERNAME/react-app
Again, use your Docker Hub username.
Step 3: Deploy the application on Kubernetes
Make sure that you can connect to your Kubernetes cluster by executing the command below:
kubectl cluster-info
Update your Helm repository list:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
Deploy the application by executing the following (replace the USERNAME placeholder with your Docker username):
helm install apache bitnami/apache \
--set image.repository=USERNAME/react-app \
--set image.tag=latest \
--set image.pullPolicy=Always
If you wish to access the application externally through a domain name and you have installed the NGINX Ingress controller, use this command instead and replace the DOMAIN placeholder with your domain name:
helm install apache bitnami/apache \
--set image.repository=USERNAME/react-app \
--set image.tag=latest \
--set image.pullPolicy=Always \
--set ingress.enabled=true \
--set ingress.hosts[0].name=DOMAIN
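Either way, you can check the rollout with standard kubectl commands (the release name apache comes from the helm install commands above):
kubectl get pods
kubectl get svc,ingress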
You were actually doing the same steps, so your manual approach was "spot on"!
Thanks again to Vikram Vaswani and to https://docs.bitnami.com/tutorials/deploy-react-application-kubernetes-helm, which had this answer!
I'm testing WordPress for a personal project, and I would like to run my development WordPress website locally and install the final website on my personal production server.
To do that, I'm looking for a plugin or program to synchronize my development WordPress (new pages, templates, and configuration) with my production WordPress.
Is there a program or plugin to do that? What is the best way to work with WordPress?
Thanks :)
There are two approaches you can try:
1. Scheduled copying of files to production, e.g. using scp from the Linux CLI with a crontab entry (every minute):
* * * * * scp local_file remote_username@remote_ip:remote_file
But I don't recommend this way; it's only mentioned because it is easy to understand.
2. CI/CD. Here is a blog post explaining the concept, in case it is new to you:
https://thecodingmachine.io/continuous-delivery-on-a-dedicated-server
Briefly, you push your project to a private repo on GitLab or GitHub, then create development (= development server) and production (= production server) branches; an automated job deploys to the matching server whenever you git push.
Here's an example of the main part of the .gitlab-ci.yml file from that link:
deploy_staging:
  stage: deploy
  image: kroniak/ssh-client:3.6
  script:
    # add the server as a known host
    - mkdir ~/.ssh
    - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    # add ssh key stored in SSH_PRIVATE_KEY variable to the agent store
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$SSH_PRIVATE_KEY")
    # log into Docker registry
    - ssh deployer@thecodingmachine.io "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.thecodingmachine.com"
    # stop container, remove image.
    - ssh deployer@thecodingmachine.io "docker stop thecodingmachine.io_${CI_COMMIT_REF_SLUG}" || true
    - ssh deployer@thecodingmachine.io "docker rm thecodingmachine.io_${CI_COMMIT_REF_SLUG}" || true
    - ssh deployer@thecodingmachine.io "docker rmi registry.thecodingmachine.com/tcm-projects/thecodingmachine.io:${CI_COMMIT_REF_SLUG}" || true
    # start new container
    - ssh deployer@thecodingmachine.io "docker run --name thecodingmachine.io_${CI_COMMIT_REF_SLUG} --network=web -d registry.thecodingmachine.com/tcm-projects/thecodingmachine.io:${CI_COMMIT_REF_SLUG}"
  only:
    - branches
  except:
    - master
This may be hard to read at first, but it shows there is a workflow that does what you need; it may take some time to learn this part.
Hope it works for you.
Thanks to David Négrier for sharing.
We have built our first Node.js app, and I want to integrate Jenkins for continuous integration. We are running the node server behind Nginx as a proxy, with source control in GitLab. I need example configurations or steps.
Any doc or wiki link, or a pointer in the right direction, would be helpful.
I have a CentOS server and managed to install and configure Jenkins, but I haven't found the proper way to connect it to my GitLab server. I need to run npm commands after each build. If anyone has already done this, please let me know.
Thanks
Your question is still vague, but I will describe how I set up Jenkins/Node.js/GitLab integration. I did this on CentOS 6 and tested it.
Steps
OpenJDK (Java) should be installed beforehand.
sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
sudo yum install jenkins
sudo service jenkins start
Log in as the jenkins user:
sudo -s -H -u jenkins
Now generate an SSH key pair in the folder /var/lib/jenkins/.ssh and add the public key to GitLab.
ssh-keygen
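A minimal sketch, run as the jenkins user; the key path assumes the default Jenkins home on CentOS, so adjust if yours differs:
mkdir -p /var/lib/jenkins/.ssh
ssh-keygen -t rsa -f /var/lib/jenkins/.ssh/id_rsa -N ""
# add this public key in GitLab (Profile Settings -> SSH Keys)
cat /var/lib/jenkins/.ssh/id_rsa.pub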
Install the Gitlab Hook Plugin and the GitLab Plugin in Jenkins.
Create a project by accessing your Jenkins in the browser.
After creating the project, go to Configure (left-side menu) on the project page.
Lots of options there are self-explanatory: set up the Git repo URL, mail notifications, and the git browser URL.
Create a new item in Jenkins, add the Git repo URL, and under Build Triggers check the option "Build when a change is pushed to GitLab". The trigger displays a GitLab CI Service URL; paste that URL into your GitLab repo's webhooks under Settings.
To run npm commands after each build, there is a section called SSH Publishers (from the Publish Over SSH plugin). In its Exec command section, put your commands; mine are below as an example, and you can write your own:
cd project_dir
rm -rf public server package.json
tar -xvf projectname.tgz
ls
npm install --production
export NODE_ENV=production
forever restartall
jasmine-node spec/api/frisbyapi_spec.js
rm -rf projectname.tgz
I have written most of the steps I took to set up Jenkins, Node.js, and GitLab. I may have forgotten a step; if you face any error, please post it as well.
I'm a Docker newbie and I'm trying to set up my first project.
To learn how to play with it, I cloned a ready-to-go project and set it up (Project repo).
As the guide claims, if I access a specific URL I reach the homepage, more specifically a Symfony start page.
Moreover, with this command:
docker run -i -t testdocker_application /bin/bash
I'm able to log in to the container.
My problem is that if I go to the application folder through bash, the folder that I shared with my host is empty.
I tried with another project, but the result is the same.
Where am I going wrong?
Here is some info about my environment:
Ubuntu 12.04
Docker version 1.8.3, build f4bf5c7
Config:
application:
  build: code
  volumes:
    - ./symfony:/var/www/symfony
    - ./logs/symfony:/var/www/symfony/app/logs
  tty: true
Looks like you have a docker-compose.yml file but are running the image with docker. You don't actually need docker-compose to start a single container. If you just want to start the container, your command should look like this:
docker run -ti -v $(pwd)/symfony:/var/www/symfony -v $(pwd)/logs/symfony:/var/www/symfony/app/logs testdocker_application /bin/bash
To use your docker-compose.yml, start your container with docker-compose up. You would also need to add the following two lines to the service to drop into a shell (a full example follows below):
  stdin_open: true
  command: /bin/bash
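Putting it together, the application service in docker-compose.yml would look like this:
application:
  build: code
  volumes:
    - ./symfony:/var/www/symfony
    - ./logs/symfony:/var/www/symfony/app/logs
  tty: true
  stdin_open: true
  command: /bin/bash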
What is a workflow for deploying to Digital Ocean with Phusion Docker and Node/Meteor support?
I tried:
FROM phusion/passenger-nodejs:0.9.10
# Set correct environment variables.
ENV HOME /root
# Use baseimage-docker's init process.
CMD ["/sbin/my_init"]
# ssh
ADD private/keys/akey.pub /tmp/your_key
RUN cat /tmp/your_key >> /root/.ssh/authorized_keys && rm -f /tmp/your_key
## Download dependencies
RUN apt-get update
RUN apt-get install -qq -y python-software-properties software-properties-common curl git build-essential
RUN npm install fibers@1.0.1
# install meteor
RUN curl https://install.meteor.com | /bin/sh
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Enable nginx
# RUN rm -f /etc/service/nginx/down
#setup app
RUN mkdir /home/app/someapp
ADD . /home/app/someapp
WORKDIR /home/app/someapp
EXPOSE 4000
CMD passenger start -p 4000
But nothing is working, and I'm not sure how to manage updating, deploying, and running the app.
E.g., how would you handle updating the app without rebuilding the docker image?
Here is my suggested workflow:
Create an account on Docker Hub; you can get one private repository for free. If you want a completely private setup hosted on your own server, you can run your own docker registry and use it to host your images.
Create your image on your development machine (locally or on a server), then push the image to the repository using docker push.
Update the image when needed and commit your changes with docker commit, then push the updated image to your repository (you should properly version and tag all your images).
You can start a DigitalOcean droplet with docker pre-installed (from the Applications tab) and simply pull your image and run your container. Whenever you update and push your image from your development machine, simply pull it again from the droplet, as in the sketch below.
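As a rough sketch of that loop (the image name, tag, and host port are placeholders; container port 4000 matches the passenger start -p 4000 in your Dockerfile):
# on the development machine
docker build -t youruser/yourapp:1.0 .
docker push youruser/yourapp:1.0
# on the droplet
docker pull youruser/yourapp:1.0
docker run -d --name yourapp -p 80:4000 youruser/yourapp:1.0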
For large and complex infrastructure, I would recommend looking into Ansible to configure your docker containers and manage digital ocean droplet as well.
Be aware that data inside the container will be lost if the container is removed, so consider defining a volume in your container that is mapped to a shared folder on your host machine, for example:
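For example, assuming a hypothetical data directory inside the app (both paths are placeholders):
docker run -d --name yourapp -v /srv/yourapp/data:/home/app/someapp/data youruser/yourapp:1.0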
I suggest you test your Dockerfile in a local VirtualBox VM. I wrote a tutorial about deploying a node.js app with Docker. I built several images (layers) instead of just one, so when you update your app, you only need to rebuild the top layer. Hope it helps. http://vinceyuan.blogspot.com/2015/05/deploying-web-app-redis-postgres-and.html