GitLab CI: how to load vars from a JSON file - ASP.NET

I'm building ASP.NET Core applications using GitLab CI and docker-in-docker.
In the first stage I build the Dockerfiles and push the images to the registry; in the second stage I apply YAML manifests to deploy from the registry to the cluster.
Kubernetes doesn't allow containers to run on port 80 (in my configuration), so I need to expose other ports in my Dockerfiles.
What I want to do is load the correct port numbers from appsettings.json (the ASP.NET config file) in GitLab CI/CD - I don't want to hardcode those values in the GitLab job or in GitLab Variables.
At the moment my gitlab-ci.yml looks like this:
build:
  image: ${CI_REGISTRY}/dockerhub/library/docker:20.10.11
  stage: build
  services:
    - ${CI_REGISTRY}/dockerhub/library/docker:20.10.11-dind
  variables:
    DOCKER_HOST: tcp://localhost:2375
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker build
      -f ./MyApp.Api/Dockerfile
      --build-arg PORT=5076 .
    - docker push ${CI_REGISTRY_IMAGE}/myapp:${tag}
I hardcoded the --build-arg to expose the correct port.
How can I read its value from the JSON file?
The JSON file looks like this:
{
  "MyApp": {
    "ConnectionString": "Host=host;Port=5432;"
  },
  "ClientBaseUrls": {
    "MyApp": "http://my-serv:5076/api"
  }
}
I can also add an extra var to the JSON, for example "PORT": "5076", but I still can't figure out how to read this value in GitLab CI.
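For context, the question doesn't show the Dockerfile itself; a hypothetical excerpt that consumes such a build arg could look like this (assuming the app honors ASPNETCORE_URLS):

# hypothetical Dockerfile excerpt - PORT is passed in via --build-arg
ARG PORT=5076
# bind Kestrel to the chosen port and document it on the image
ENV ASPNETCORE_URLS=http://+:${PORT}
EXPOSE ${PORT}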

You could use jq in a previous job to read the values from appsettings.json, save them in a script that exports these variables, pass that script along as an artifact, and source it in your build job so that the environment variables are available in your build stage.
You stated that you can add additional vars, so let's assume your .json looks like this:
{
  "port": "5678",
  ...
}
Then your gitlab-ci.yml could look like this (note the job is not named variables, which is a reserved top-level keyword in GitLab CI, and jq is called with -r so the value comes out unquoted):
stages:
  - variables
  - build

load-variables:
  stage: variables
  image: ubuntu
  before_script:
    - apt-get update
    - apt-get install -y jq
  script:
    - export PORT=$(jq -r '.port' appsettings.json)
    - echo "export PORT=$PORT" >> vars.sh
  artifacts:
    paths:
      - vars.sh
build:
  image: ${CI_REGISTRY}/dockerhub/library/docker:20.10.11
  stage: build
  services:
    - ${CI_REGISTRY}/dockerhub/library/docker:20.10.11-dind
  variables:
    DOCKER_HOST: tcp://localhost:2375
    DOCKER_TLS_CERTDIR: ""
  before_script:
    - . ./vars.sh
  script:
    - docker build
      -f ./MyApp.Api/Dockerfile
      --build-arg PORT=$PORT .
    - docker push ${CI_REGISTRY_IMAGE}/myapp:${tag}
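An alternative, if your GitLab version supports it, is a dotenv report artifact: variables written to a KEY=value file are injected automatically into all later jobs, with no script to source in before_script. A minimal sketch of the first job under that approach:

load-variables:
  stage: variables
  image: ubuntu
  before_script:
    - apt-get update && apt-get install -y jq
  script:
    # write the port as KEY=value into a dotenv file
    - echo "PORT=$(jq -r '.port' appsettings.json)" >> vars.env
  artifacts:
    reports:
      dotenv: vars.env

The build job can then use $PORT directly and drop the before_script line.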

Related

How to set up buildspec.yml files for two different repositories?

I have set up a CI/CD pipeline using AWS CodePipeline and AWS CodeBuild. Right now my application's frontend and backend live in different repositories, and I have two source stages in my CodePipeline. Now I want to build both the frontend image and the backend image in CodeBuild using a buildspec.yml file.
Problem
I am very confused about how to set up buildspec.yml files for the frontend and backend repos. Do I have to put a separate buildspec.yml file in each repo, or only in the PrimarySource repository?
This is my buildspec.yml file:
version: 0.2
phases:
  install:
    runtime-versions:
      docker: 18
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $ECR_REPOSITORY_URI:latest .
      - docker tag $ECR_REPOSITORY_URI:latest $ECR_REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $ECR_REPOSITORY_URI:latest
      - docker push $ECR_REPOSITORY_URI:$IMAGE_TAG
      - printf '{"ImageURI":"%s:%s"}' $ECR_REPOSITORY_URI $IMAGE_TAG > imageDetail.json
artifacts:
  files:
    - imageDetail.json
I want to know how I can set up something similar for the backend too, so that CodeBuild first builds the frontend and then the backend.
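For what it's worth, when a CodeBuild project has multiple input sources, CodeBuild reads buildspec.yml from the primary source and exposes each secondary source's checkout directory via a CODEBUILD_SRC_DIR_<InputArtifactName> environment variable, so a single buildspec can build both images. A sketch, assuming the backend input artifact is named BackendSource and that $ECR_FRONTEND_URI / $ECR_BACKEND_URI are defined as project environment variables:

version: 0.2
phases:
  build:
    commands:
      # primary source checkout = frontend repo (current directory)
      - docker build -t $ECR_FRONTEND_URI:latest .
      # secondary source checkout, exposed by CodeBuild as an env variable
      - docker build -t $ECR_BACKEND_URI:latest $CODEBUILD_SRC_DIR_BackendSource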

Is it possible to run UI tests in Codeception in the background?

I'm new to Codeception and wonder: is it possible to run UI tests in the background, without opening a test web browser every time?
I suspect that I should change something in acceptance.suite.yml, but I'm not sure what.
I would appreciate any help.
You can use a headless browser. This will execute the whole test flow almost exactly as it would in regular UI mode, but without opening a visible browser window.
You can learn more about this in the Codeception documentation and similar resources.
You can use Docker to virtualize the WebDriver and Selenium.
Create two files in the project root directory. The Dockerfile will build a container with PHP and Composer to run your Codeception tests in.
Dockerfile
FROM php:8.0-cli-alpine
RUN apk -U upgrade --no-cache
# install composer
COPY --from=composer:2.2 /usr/bin/composer /usr/bin/composer
ENV COMPOSER_ALLOW_SUPERUSER=1
ENV PATH="${PATH}:/root/.composer/vendor/bin"
WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-scripts --no-progress \
    && composer clear-cache
COPY . /app
RUN composer dump-autoload --optimize --classmap-authoritative \
    && composer clear-cache
The second file is docker-compose.yml, which uses a preconfigured Selenium image and puts it on one network with your PHP Codeception tests, so that the containers can talk to each other over the needed ports (4444 and 7900).
docker-compose.yml
---
version: '3.4'
services:
  php:
    build: .
    depends_on:
      - selenium
    volumes:
      - ./:/usr/src/app:rw,cached
  selenium:
    image: selenium/standalone-chrome:4
    shm_size: 2gb
    container_name: selenium
    ports:
      - "4444:4444"
      - "7900:7900"
    environment:
      - VNC_NO_PASSWORD=1
      - SCREEN_WIDTH=1920
      - SCREEN_HEIGHT=1080
If you set up Docker and your Codeception project correctly, you can run these containers in the background:
docker-compose up -d
and execute your tests:
vendor/bin/codecept run
If you want to see what a test is doing, you can visit http://localhost:7900 to connect to the browser running in the container and watch what the test is executing.
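For the containers to find each other, the WebDriver module has to point at the selenium service rather than localhost. A minimal acceptance.suite.yml sketch under that assumption (the url is a placeholder for your application):

modules:
  enabled:
    - WebDriver:
        host: selenium      # service name from docker-compose.yml
        port: 4444          # Selenium's WebDriver port
        browser: chrome
        url: 'http://myapp.local'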
If you are using the WebDriver module to run your tests with Codeception, there is an option to configure your browser in headless mode.
It won't open any windows, and the tests will run in the background without bothering you.
Here is an example with Chrome:
modules:
  enabled:
    - WebDriver
  config:
    WebDriver:
      url: 'http://myapp.local'
      browser: chrome
      window_size: 1920x1080
      capabilities:
        chromeOptions:
          args: ["--headless", "--no-sandbox"]
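With this in place, running the suite as usual (for example vendor/bin/codecept run acceptance) starts Chrome without any visible window.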

How to organize your Postman collections and execute them with Newman in GitLab CI?

I am thinking of organizing my Postman API tests by creating one collection per feature. So, for example, with two features the collections would be saved as /test/FEAT-01/FEAT-01-test.postman_collection.json and /test/FEAT-02/FEAT-02-test.postman_collection.json. This way I can diff the collections separately in Git and selectively execute tests as I want.
However, I want my GitLab CI to execute all the tests under my root test folder using Newman. How can I achieve that?
One thing that worked for me was creating one job per collection in GitLab. It looks something like this:
variables:
  POSTMAN_COLLECTION1: <Your collection here>.json
  POSTMAN_COLLECTION2: <Your collection here>.json
  POSTMAN_ENVIRONMENT: <Your environment here>.json
  DATA: <Your iterator file>

stages:
  - test

postman_tests:
  stage: test
  image:
    name: postman/newman:alpine
    entrypoint: [""]
  script:
    - newman --version
    - npm install -g newman-reporter-html
    - newman run ${POSTMAN_COLLECTION1} -e ${POSTMAN_ENVIRONMENT} -d ${DATA} --reporters cli,html --reporter-html-export report.html
  artifacts:
    when: always
    paths:
      - report.html

postman_test2:
  stage: test
  image:
    name: postman/newman:alpine
    entrypoint: [""]
  script:
    - newman --version
    - npm install -g newman-reporter-html
    - newman run ${POSTMAN_COLLECTION2} -e ${POSTMAN_ENVIRONMENT} --reporters cli,html --reporter-html-export report.html
  artifacts:
    when: always
    paths:
      - report.html
You can also write a script that reads from your root folder and executes newman once per collection, something like this:
$ for collection in ./PostmanCollections/*; do newman run "$collection" --environment ./PostmanEnvironments/Test.postman_environment.json -r cli; done
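A sketch of the same idea adapted to the layout from the question (one collection per feature folder under /test), assuming a single shared environment file in ${POSTMAN_ENVIRONMENT}; the || exit 1 makes the CI job fail on the first failing collection:

for collection in ./test/*/*.postman_collection.json; do
  newman run "$collection" -e "${POSTMAN_ENVIRONMENT}" -r cli || exit 1
done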

Bitbucket Pipeline: Deploy jar artifact to FTP

I'm trying to build and then deploy the artifacts (a jar) with a Bitbucket pipeline. The build works, but the deployment of the artifacts doesn't work as I want.
When the pipeline is finished, I have all the code files (src/main/java etc.) on the FTP server instead of the jar.
Do you see where my mistake is? I also looked for another FTP approach but failed.
Pipeline:
# This is a sample build configuration for Java (Maven).
# Check our guides at https://confluence.atlassian.com/x/zd-5Mw for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: maven:3.3.9

pipelines:
  default:
    - step:
        name: Build
        caches:
          - maven
        script:
          - apt-get update
          - apt-get install -y openjfx
          - mvn install -DskipTests
        artifacts:
          - /opt/atlassian/pipelines/agent/build/target/**
          - target/**
          # - /**.jar
    - step:
        name: Deploy
        script:
          - apt-get update
          - apt-get -qq install git-ftp
          - git ftp init --user $user --passwd $pw -v sftp://$host:5544/$folder
To solve this problem I added the SSH key to Bitbucket. Then I could deploy over SFTP using lftp and Docker images. (git ftp uploads the files tracked in Git, i.e. the source tree, which is why the source files ended up on the server instead of the build output.)
pipelines:
  branches:
    master:
      - step:
          name: Build
          image: tgalopin/maven-javafx
          caches:
            - maven
          script:
            - mvn install
          artifacts:
            - target/**
      - step:
          name: Deploy
          image: alpacadb/docker-lftp
          script:
            - lftp sftp://$user:$pw@$host:$port -e "put /my-file; bye"
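As a variation (a sketch, assuming the jar produced into target/** by the Build step is what you want on the server and that $folder is the remote directory), you could upload the Maven output directly:

- lftp sftp://$user:$pw@$host:$port -e "cd $folder; mput target/*.jar; bye"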

Docker: How can I have SQLite db changes persist to the db file?

FROM golang:1.8
ADD . /go/src/beginnerapp
RUN go get -u github.com/gorilla/mux
RUN go get github.com/mattn/go-sqlite3
RUN go install beginnerapp/
VOLUME /go/src/beginnerapp/local-db
WORKDIR /go/src/beginnerapp
ENTRYPOINT /go/bin/beginnerapp
EXPOSE 8080
The SQLite db file is in the local-db directory, but I don't seem to be using the VOLUME instruction correctly. Any ideas how I can have changes to the SQLite db file persisted?
I don't mind if the volume is mounted before or after the build.
I also tried running the following command
user#cardboardlaptop:~/go/src/beginnerapp$ docker run -p 8080:8080 -v ./local-db:/go/src/beginnerapp/local-db beginnerapp
docker: Error response from daemon: create ./local-db: "./local-db" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
EDIT: It works when using /absolutepath/local-db instead of the relative path ./local-db.
You don't mount volumes in a Dockerfile.
VOLUME tells Docker that content in those directories can be mounted via docker run --volumes-from.
You're right, Docker doesn't allow relative paths for volumes on the command line.
Run your container using an absolute path:
docker run -v /host/db/local-db:/go/src/beginnerapp/local-db beginnerapp
Your db will be persisted in the host directory /host/db/local-db.
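If you'd rather not hardcode the host path, a common trick (a sketch, reusing the image name from the question) is to build the absolute path with $(pwd):

docker run -p 8080:8080 -v "$(pwd)/local-db:/go/src/beginnerapp/local-db" beginnerapp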
If you want to use relative paths, you can make it work with docker-compose and its volumes key:
volumes:
  - ./local-db:/go/src/beginnerapp/local-db
You can try this configuration:
Put the Dockerfile in a directory (e.g. /opt/docker/myproject)
Create a docker-compose.yml file in the same path like this:
version: "2.0"
services:
myproject:
build: .
volumes:
- "./local-db:/go/src/beginnerapp/local-db"
Execute docker-compose up -d myproject in the same path.
Your db should be stored in /opt/docker/myproject/local-db
Just a comment: the content of local-db in the image (if any) will be hidden by the content of the host's ./local-db path (initially empty). If the image contains any data (an initialized database), it would be a good idea to copy it out with docker cp, or to include init logic in an entrypoint or command shell script.
