How do you rsync build files from Gitlab CI to another server - rsync

It's unclear to me how to get my build files from the Gitlab CI (hosted on https://ci.gitlab.com) over to my personal server using rsync.
I have set up one test job and one deploy job.
Under the deploy tab I have entered the bash commands to:
update packages,
install rsync,
and finally run the rsync command to
transfer files over SSH to my personal server.
When I enter the SSH credentials (with verbose flag on) for my private personal server, it would appear that the SSH key is the issue. In Gitlab, I have already established the deploy key (for hooks - tested this and it works).
Where do I locate the public SSH key for the Gitlab deploy instance so that I can install that key on my server?
Below is the exact script entered in Gitlab CI deploy job script pane:
# Run as root
(
set -e
set -u
set -x
apt-get update -y
apt-get -y install rsync
)
git clone https://github.com/bla/deployments.git $HOME/deploy/deployments
SVR_WEB1_WEBSERVER="000.11.22.333"
USER1="franklin"
GROUP1="team1"
FROM_DIR="/gitlab-ci-runner/tmp/builds/myrepo-1/"
DEST1="subdomains/gitlab/myrepo"
EXCLUSIONS_LIST="${HOME}/deploy/deployments/exclusions/exclusions.txt"
ssh -v "$USER1@$SVR_WEB1_WEBSERVER"
/usr/bin/rsync -avzh --progress --delete -e ssh --group=$GROUP1 -p --exclude-from "$EXCLUSIONS_LIST" "$FROM_DIR" "$USER1@$SVR_WEB1_WEBSERVER:$DEST1"

Providing your private SSH key is dangerous unless you use your own GitLab CI runners for deployment. That's why it is better to use rsync modules.
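For reference, a minimal rsync daemon ("module") setup might look like the sketch below. The module name, path, and user are placeholders to adapt; the point is that the CI job authenticates against the rsync daemon with a throwaway password instead of your SSH private key.
# /etc/rsyncd.conf on the destination server (sketch; names and paths are assumptions)
[myrepo]
    path = /var/www/subdomains/gitlab/myrepo
    read only = false
    auth users = deploy
    secrets file = /etc/rsyncd.secrets
# /etc/rsyncd.secrets contains "deploy:somepassword" and must be chmod 600.
# In the CI job, push to the module instead of over SSH.
# RSYNC_PASSWORD should come from a protected CI variable, not be hard-coded.
export RSYNC_PASSWORD="somepassword"
rsync -avzh --delete --exclude-from "$EXCLUSIONS_LIST" "$FROM_DIR" rsync://deploy@000.11.22.333/myrepo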

Related

How to install JupyterHub with Docker on a local machine and in a sub domain

I want to run JupyterHub in a subdomain. Below are the Dockerfile, jupyterhub_config.py, and .gitlab-ci.yml.
My first question is how to configure jupyterhub_config.py: how can I load jupyterhub_config.py into the container during the build?
How do I start JupyterHub in the .gitlab-ci.yml for tests, and how do I copy the application into the subdomain? I wrote a README.md. I need a little help with JupyterHub. If it all works, I will write a complete HOWTO: Install JupyterHub on a local machine and in a subdomain at a provider.
FROM continuumio/miniconda3
# Updating packages
RUN apt-get update -y \
&& apt-get install -y --no-install-recommends \
git \
nano \
unzip \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Update conda and install the Jupyter packages
RUN conda update -y conda
RUN conda install -c conda-forge jupyter_nbextensions_configurator \
jupyterhub \
jupyterlab \
matplotlib \
pandas \
scipy
# Setup application
EXPOSE 8000
CMD ["jupyterhub", "--ip='*'", "--port=8000", "--no-browser", "--allow-root"]
The .gitlab-ci.yml
image: docker:latest

variables:
  CONTAINER_IMAGE: registry.gitlab.com/joklein
  DOCKER_IMAGE: jupyterhub
  TAG: 0.1.0

services:
  - docker:dind

stages:
  - build
  - test
  - release
  - deploy

before_script:
  - echo "$GITLAB_PASSWORD" | docker login registry.gitlab.com --username $GITLAB_USER --password-stdin

build:
  stage: build
  script:
    - docker build -t $CONTAINER_IMAGE/$DOCKER_IMAGE .
    - docker push $CONTAINER_IMAGE/$DOCKER_IMAGE

test:
  stage: test
  script:
    - docker pull $CONTAINER_IMAGE/$DOCKER_IMAGE
    # - docker run $CONTAINER_IMAGE/$DOCKER_IMAGE -dt -p 8000:8000 --name $DOCKER_IMAGE

release:
  stage: release
  script:
    - docker pull $CONTAINER_IMAGE/$DOCKER_IMAGE
    - docker tag $CONTAINER_IMAGE/$DOCKER_IMAGE:latest $CONTAINER_IMAGE/$DOCKER_IMAGE:$TAG
    - docker push $CONTAINER_IMAGE/$DOCKER_IMAGE:$TAG
  only:
    - master

deploy:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk update && apk add git openssh-client rsync
  script:
    - mkdir .public
    - cp -r * .public
    - mv .public public
    - mkdir "${HOME}/.ssh"
    - echo "${SSH_HOST_KEY}" > "${HOME}/.ssh/known_hosts"
    - echo "${SSH_PRIVATE_KEY}" > "${HOME}/.ssh/id_rsa"
    - chmod 700 "${HOME}/.ssh/id_rsa"
    - rsync -hrvz --delete --exclude=_ public/ user@example.com:www/jupyter/
  only:
    - master
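The deploy job above reads SSH_PRIVATE_KEY and SSH_HOST_KEY from CI/CD variables. A sketch of how those values might be produced (example.com stands in for your server):
# Generate a dedicated deploy keypair; the private key becomes the SSH_PRIVATE_KEY variable
ssh-keygen -t rsa -b 4096 -f deploy_key -N ""
# Capture the server's host key; the output becomes the SSH_HOST_KEY variable
ssh-keyscan example.com
# Finally, append deploy_key.pub to ~/.ssh/authorized_keys on the server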
The jupyterhub_config.py
c = get_config()
# Letsencrypt (https://letsencrypt.org/) to obtain a free, trusted SSL
# certificate.
c.JupyterHub.ssl_key = '/etc/letsencrypt/live/example.com/privkey.pem'
c.JupyterHub.ssl_cert = '/etc/letsencrypt/live/example.com/fullchain.pem'
c.JupyterHub.port = 443
#
# Change from JupyterHub to JupyterLab
c.Spawner.default_url = '/lab'
c.Spawner.debug = True
#
# Specify users and admin
c.Authenticator.whitelist = {"systemuser"}
c.Authenticator.admin_users = {"systemuser"}
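To make JupyterHub pick up this file, pass it explicitly; a minimal sketch, assuming the config was copied to /srv/jupyterhub in the image:
jupyterhub -f /srv/jupyterhub/jupyterhub_config.py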
Docker base image of JupyterHub and JupyterLab
JupyterHub is a multi-user server for Jupyter notebooks. JupyterLab is the
next-generation web-based user interface for Project Jupyter. This project is
a Docker base image for JupyterHub and JupyterLab
that works as a stand-alone application and in a (sub)domain.
Images derived from this image can either run as a stand-alone server, or
function as a volume image for your server. You can also use them in a CI/CD
system such as GitLab CI to build your content prior to bundling it into a
standalone server container.
Building your JupyterHub image
Based on this structure, you can easily build an image for your needs. There are two options for using the image you generated:
as a stand-alone image
as a volume image for your webserver
The simplest way to build your own image is with a Dockerfile. This is only an example; if you need more software packages, you can install them via this
Dockerfile and conda.
Build the container
docker build -t jupyterhub .
Your JupyterHub with JupyterLab is automatically generated during this build.
Run the container
docker run -p 8000:8000 -d --name jupyterhub jupyterhub jupyterhub
-p is used to map your local port 8000 to the container port 8000
-d is used to run the container in the background. JupyterHub will just write
logs, so there is no need to output them in your terminal unless you want to troubleshoot a server error.
--name jupyterhub names your container jupyterhub
the first jupyterhub is the image
the second jupyterhub is the command used to start the JupyterHub server
Your JupyterHub with JupyterLab is now available at http://localhost:8000.
Start / Stop JupyterHub
docker start jupyterhub / docker stop jupyterhub
Configure JupyterHub
Let's encrypt certificates for JupyterHub
To enable HTTPS on your website, you need to get a certificate (a type of file) from a Certificate Authority (CA). Let’s Encrypt is a CA. In order to get a certificate for your website’s domain from Let’s Encrypt, you have to
demonstrate control over the domain. With Let’s Encrypt, you do this using
software that uses the ACME protocol, which typically runs on your web host.
Go to zerossl.com and generate a certificate for your domain. As a
result you get four files: domain-key.txt, domain-crt.txt, domain-csr.txt, and account-key.txt. These files are Base64-encoded, so they are readable
ASCII, not binary. The certificates are already in PEM format; just
change the extension to *.pem.
For JupyterHub, only the files domain-key.txt and domain-crt.txt are needed.
cp domain-crt.txt fullchain.pem
cp domain-key.txt privkey.pem
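Since the config above references /etc/letsencrypt/live/example.com/ inside the container, the certificates have to be visible at that path. A sketch using a read-only volume mount (the host path and domain are assumptions):
docker run -p 443:443 -d --name jupyterhub \
    -v /path/to/certs:/etc/letsencrypt/live/example.com:ro \
    jupyterhub jupyterhub -f /srv/jupyterhub/jupyterhub_config.py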
Add a System user in the container
By default JupyterHub searches for users on the server. In order to be able to
log in to our new JupyterHub server we need to connect to the JupyterHub docker
container and create a new system user with a password.
docker exec -it jupyterhub bash
useradd --create-home systemuser
passwd systemuser
exit
The command docker exec -it jupyterhub bash will spawn a root shell in your
docker container. You can use the root shell to create system users in the
container. These accounts will be used for authentication in JupyterHub's
default configuration.
The first command, useradd, creates a new user named systemuser. The second
asks you to set a password.
The whole process might be simpler with GitLab 12.0 (June 2019) and its
Git integration for JupyterHub
Deploying JupyterHub via GitLab’s Kubernetes integration provides an easy way to get started with Jupyter notebooks, which can be used to create and share documents that contain live code, visualizations, and even runbooks.
Starting with GitLab 12.0, JupyterLab’s Git extension is automatically provisioned and configured when installing JupyterHub onto your Kubernetes cluster.
This integration enables full version control of your notebooks as well as issuance of Git commands within Jupyter. Git commands can be issued via the Git tab on the left panel or via Jupyter’s command line prompt.
See documentation and gitlab-ce issue 47138.
jupyterhub --generate-config
This is what the documentation shows.
It creates a jupyterhub_config.py file in /srv/jupyterhub.
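To write the file to a specific location instead of the current directory, the config path can be passed along with the flag; a sketch based on the standard JupyterHub CLI:
jupyterhub --generate-config -f /srv/jupyterhub/jupyterhub_config.py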

Setting Dokku environment variables

I'm trying to set some variables on Dokku for deployment. As far as I can see from the dev files, one should create a .env file in the directory and put the variables in there, but this is not updating anything.
.env file
DOKKU_NGINX_PORT=3000
MYSQL_URL=http://blabla
MYSQL_USER=mysqluser
I'm trying to map the port of the app to port 3000, and inject the mysql vars into the runtime environment.
I know I can set it with dokku config:set on the server, but I want to be able to automate it during deployment.
Any ideas? Or an example?
You'll need to install a Dokku client, or CLI, in order to interact locally with the remote application on your Dokku instance.
Here are a few options:
(node.js) dokku-toolbelt
Dokku toolbelt is a node-based CLI wrapper that proxies requests to
the Dokku command running on remote hosts.
You can install it via the following shell command (assuming you have node and npm installed):
$ npm install -g dokku-toolbelt
See documentation here for more information.
(python) dokku-client
Dokku client is an extensible Python-based CLI wrapper for remote
Dokku hosts.
You can install it via the following shell command (assuming you have python and pip installed):
$ pip install dokku-client
See documentation here for more information.
(ruby) Dokku CLI
Dokku CLI is a rubygem that acts as a client for your Dokku
installation.
You can install it via the following shell command (assuming you have ruby and rubygems installed):
$ gem install dokku-cli
See documentation here for more information.
After the Dokku client is installed locally, make sure that the dokku app remote is set inside the repository directory.
You can verify this by running $ git remote -v.
If the output doesn't show your dokku application instance, set it with the following command:
$ git remote add dokku dokku@example.com:your-app-name
Here's an example from my terminal with some information redacted for security purposes.
seth@linuxmint ~/repos/Adopt-a-Pet $ git remote -v
dokku dokku@example.com:adopt-a-pet (fetch)
dokku dokku@example.com:adopt-a-pet (push)
origin https://github.com/sethbergman/Adopt-a-Pet.git (fetch)
origin https://github.com/sethbergman/Adopt-a-Pet.git (push)
Then you can set environment variables with the following commands:
$ dokku config:set DOKKU_NGINX_PORT=3000
You can optionally set environment variables with the .env file:
$ dokku config:set:file <path/to/.env>
If the .env file is in the root directory of the repository, then the command would be:
$ dokku config:set:file .env
If you're using Ruby, you can use the gem 'dokku-cli'. With that, you can set config from any file by issuing the command
dokku config:set:file <path/to/file>
See the Ruby docs.
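Putting it together for the .env from the question, the automated step might look like this (a sketch; it assumes the client and the dokku remote are set up as described above):
$ dokku config:set:file .env
$ dokku config   # verify that DOKKU_NGINX_PORT, MYSQL_URL and MYSQL_USER are now set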

Jenkins CI integrate with NodeJS and Github problems in configuring build

We have built our first Node.js app and I want to integrate Jenkins as continuous integration. We are running the Node server behind Nginx as a proxy, with source control in GitLab. I need example configurations or steps.
I am looking for any doc or wiki link, or if you can point me in the right direction it would be helpful.
I have a CentOS server and managed to install and configure Jenkins, but I haven't found the proper way to connect to my GitLab server. I need to run npm commands after each build. If anyone has already done that, please let me know.
Thanks
Your question is still vague, but I will describe here how I set up Jenkins with Node.js and GitLab integration. I have CentOS 6 and have tested this there.
Steps
OpenJDK (Java) should be installed beforehand.
sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
sudo yum install jenkins
sudo service jenkins start
Login as jenkins
sudo -s -H -u jenkins
Now generate the SSH key in the folder /var/lib/jenkins/.ssh and copy that key to GitLab:
ssh-keygen
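To hand that key to GitLab, print the public half and paste it into the project's deploy keys (a sketch; menu names vary by GitLab version):
cat /var/lib/jenkins/.ssh/id_rsa.pub
# Paste the output under Settings -> Repository -> Deploy Keys in GitLab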
Install the Gitlab Hook Plugin and GitLab Plugin in Jenkins.
Create a project by accessing your Jenkins in the browser.
After creating the project, go to Configure (left-side menu) on the project page.
Lots of the options there are self-explanatory: set up the Git repo URL
and set up the git browser URL.
Create a new item in Jenkins, add the git repo URL, and under Build Triggers
check the option "Build when a change is pushed to GitLab". Note the GitLab CI Service URL displayed next to it.
Paste that URL into your GitLab repo's webhooks in Settings.
To run npm commands after the build, there is a section SSH Publishers.
In the exec commands section (I have put my example; you can write your own commands):
cd project_dir                            # go to the deployment directory on the server
rm -rf public server package.json         # remove the previous release's files
tar -xvf projectname.tgz                  # unpack the freshly uploaded build
ls
npm install --production                  # install runtime dependencies only
export NODE_ENV=production
forever restartall                        # restart the node processes managed by forever
jasmine-node spec/api/frisbyapi_spec.js   # run the API smoke tests
rm -rf projectname.tgz                    # clean up the uploaded archive
I have written down most of the steps that I took to set up Jenkins, Node.js and GitLab.
I might have forgotten a step; if you face any error, please post that as well.

How to deploy my meteor app on Digital Ocean Droplet

I have a simple Meteor 1.0 app that I want to deploy on my Digital Ocean Droplet. I can access this Droplet using SSH.
How can I deploy this app? Is there anything I should install and what are the settings I should use on my Droplet?
I've used arunoda's solution, meteor-up, to deploy to my DO Droplet:
https://github.com/arunoda/meteor-up
As per the docs, after installing the module you'll get the mup command.
You can find detailed documentation on how to deploy here:
https://meteorhacks.com/deploy-a-meteor-app-into-a-server-or-a-vm.html
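For reference, a classic meteor-up config (mup.json, generated by mup init) looked roughly like the sketch below; field names varied between mup versions, and every value here is a placeholder:
{
  "servers": [
    { "host": "your.droplet.ip", "username": "root", "password": "yourpassword" }
  ],
  "setupMongo": true,
  "setupNode": true,
  "appName": "myapp",
  "app": "/path/to/your/app",
  "env": { "ROOT_URL": "http://your.droplet.ip" }
}
After editing it, mup setup prepares the Droplet and mup deploy bundles and pushes the app.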
All the solutions I found did not work well with Ubuntu 10.04. An easy alternative is to simply write a bash script that sends the code to the remote server and reloads the Meteor application:
Share a public key between your development environment and the remote server (how-to here).
Create the following script file (myscript.sh) with the instructions below in it (make sure you edit the variables in the header!):
myscript.sh:
#!/bin/bash
#*************** ONLY EDIT THIS PART
SERVER='<SERVER_IP>'
PORT='22'
USERNAME="root"
PROJECT_NAME="<PROJECT_FOLDER_NAME>"
DESTINATION_PATH="</home/any_user/projects>"
ORIGIN_PATH="</home/any_user/projects/project_folder_name>"
COPY_METEOR_PACKAGES=FALSE
#******************
echo ""
echo "Deployment on $USERNAME#$SERVER:$PORT:$DESTINATION_PATH"
echo "Make sure to have a public key on the server! http://www.linuxproblem.org/art_9.html"
echo ""
#copy the files
if [ "$COPY_METEOR_PACKAGES" = "TRUE" ]; then
echo "Copy packages"
scp -P $PORT -r $ORIGIN_PATH $USERNAME@$SERVER:$DESTINATION_PATH
else
echo "Do not copy packages"
scp -P $PORT -r $ORIGIN_PATH/client $USERNAME@$SERVER:$DESTINATION_PATH
scp -P $PORT -r $ORIGIN_PATH/common $USERNAME@$SERVER:$DESTINATION_PATH
scp -P $PORT -r $ORIGIN_PATH/lib $USERNAME@$SERVER:$DESTINATION_PATH
scp -P $PORT -r $ORIGIN_PATH/public $USERNAME@$SERVER:$DESTINATION_PATH
scp -P $PORT -r $ORIGIN_PATH/server $USERNAME@$SERVER:$DESTINATION_PATH
fi
# reload meteor
ssh $USERNAME@$SERVER bash -c "'
cd $DESTINATION_PATH/$PROJECT_NAME
meteor
exit
'"
Useful info here:
Just run the script using the following command in your development console:
sh myscript.sh
Et voilà! When you run this script, it copies the files and the packages (no need to transfer them every time) to the remote server of your choice over SSH, and it restarts the server in case it has crashed (it shouldn't, but that was the case for me).

How do you point deployed Meteor app to a new version?

I am specifically talking about an app bundle running on my own server.
I have a Meteor app running using forever in ~/bundle and my git repo is at ~/project. I keep different release bundle tarballs in ~/release.
~/release
|-0.1.0.tar.gz
|-0.1.1.tar.gz
|-0.2.0.tar.gz
After pulling in changes from git and switching to the latest release, I want to bundle my new version and take advantage of hot-code reloading and (hopefully?) keeping client connections alive. What is the best way to do this?
Note: I am also using nginx; so will this affect the process in any way? i.e. will it kill open client connections? do I have to reload nginx after updating to newer app version?
Thanks.
You could use a script like this.
Make sure to define your server in your SSH config file, e.g.
Host yourserver
    User youruser
    Port 22
    Hostname yourapp.com
    IdentityFile ~/.ssh/yourkeyfile.pem
    TCPKeepAlive yes
    IdentitiesOnly yes
Then you could have a bash script like this:
#!/bin/bash
cd ~/Desktop/yourappdirectory
rm -f ~/Desktop/yourapp.tar.gz
meteor bundle ~/Desktop/yourapp.tar.gz
scp ~/Desktop/yourapp.tar.gz yourserver:~/yourapp.tar.gz
ssh yourserver <<'ENDSSH'
cd ~/
tar -xzf yourapp.tar.gz
sudo rm -rf yourapp
mv bundle yourapp
cd yourapp/programs/server/node_modules
rm -rf fibers
rm -rf bcrypt
sudo npm install fibers@1.0.1
sudo npm install bcrypt
cd ~/yourapp/programs/server/npm/mongo-livedata/main
rm -r mongodb
sudo npm install mongodb@1.4.1
cd ~/
sudo forever stop ~/yourapp/main.js
sudo MONGO_URL=mongodb://user:pass@ip:27017/meteor PORT=3000 ROOT_URL=https://yoursite.com forever start ~/yourapp/main.js
ENDSSH
Then just run the bash script and it will upload and deploy your app for you. Just a note: I couldn't fit a release version in, so everything uploads to ~/yourapp.tar.gz and then unbundles into ~/yourapp.
The Meteor app would then be hot-code-reloaded on any clients who are on the site.
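As a side note, the missing release versioning could be sketched in with a variable (VERSION is a placeholder), mirroring the ~/release layout from the question:
VERSION=0.2.0
meteor bundle ~/Desktop/yourapp-$VERSION.tar.gz
scp ~/Desktop/yourapp-$VERSION.tar.gz yourserver:~/release/$VERSION.tar.gz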
