Bitbucket Pipeline: Deploy jar artifact to ftp - sftp

I'm trying to build and then deploy the artifacts (a jar) with a Bitbucket pipeline. The build works, but the deploy of the artifacts doesn't do what I want: when the pipeline finishes, I have all the source files (src/main/java etc.) on the FTP server instead of the jar.
Do you see where my mistake is? I also looked for another FTP function but failed.
Pipeline:
# This is a sample build configuration for Java (Maven).
# Check our guides at https://confluence.atlassian.com/x/zd-5Mw for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: maven:3.3.9
pipelines:
  default:
    - step:
        name: Build
        caches:
          - maven
        script:
          - apt-get update
          - apt-get install -y openjfx
          - mvn install -DskipTests
        artifacts:
          - /opt/atlassian/pipelines/agent/build/target/**
          - target/**
          # - /**.jar
    - step:
        name: Deploy
        script:
          - apt-get update
          - apt-get -qq install git-ftp
          - git ftp init --user $user --passwd $pw -v sftp://$host:5544/$folder

To solve this problem I added the SSH key to Bitbucket. The mistake above was that git ftp uploads the files tracked in the git repository, not the build artifacts, which is why the source tree ended up on the server. I then deployed over SFTP using lftp and dedicated Docker images.
pipelines:
  branches:
    master:
      - step:
          name: Build
          image: tgalopin/maven-javafx
          caches:
            - maven
          script:
            - mvn install
          artifacts:
            - target/**
      - step:
          name: Deploy
          image: alpacadb/docker-lftp
          script:
            - lftp sftp://$user:$pw@$host:$port -e "put /my-file; bye"
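A note for anyone copying this: put uploads into the remote working directory by default; lftp's -o option names the remote target. A minimal sketch, where target/my-app.jar and /deploy are hypothetical local and remote paths:
lftp sftp://$user:$pw@$host:$port -e "put target/my-app.jar -o /deploy/my-app.jar; bye"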

Related

Is it possible to run ui tests in codeception in the background?

I'm new to Codeception and wonder: is it possible to run UI tests in the background, without opening a test web browser every time?
I suspect I should change something in acceptance.suite.yml, but I'm not sure what.
I would appreciate any help.
You can use a headless browser. This executes the whole test flow almost exactly as it would in regular UI mode, but without opening a visible browser window. There are plenty of resources on headless testing if you want to learn more.
You can use Docker to virtualize the WebDriver and Selenium.
Create two files in the project root directory. The Dockerfile generates a container with PHP and Composer to run your Codeception tests in:
Dockerfile
FROM php:8.0-cli-alpine
RUN apk -U upgrade --no-cache
# install composer
COPY --from=composer:2.2 /usr/bin/composer /usr/bin/composer
ENV COMPOSER_ALLOW_SUPERUSER=1
ENV PATH="${PATH}:/root/.composer/vendor/bin"
WORKDIR /app
COPY composer.json composer.lock ./
# keep dev dependencies, since codeception itself is usually one
RUN composer install --no-scripts --no-progress \
 && composer clear-cache
COPY . /app
RUN composer dump-autoload --optimize --classmap-authoritative \
&& composer clear-cache
The second file is docker-compose.yml, which uses a preconfigured Selenium image and puts your PHP Codeception container on the same network, so the containers can talk to each other over the needed ports (4444 and 7900).
docker-compose.yml
---
version: '3.4'
services:
  php:
    build: .
    depends_on:
      - selenium
    volumes:
      - ./:/usr/src/app:rw,cached
  selenium:
    image: selenium/standalone-chrome:4
    shm_size: 2gb
    container_name: selenium
    ports:
      - "4444:4444"
      - "7900:7900"
    environment:
      - VNC_NO_PASSWORD=1
      - SCREEN_WIDTH=1920
      - SCREEN_HEIGHT=1080
If you have set up Docker and your Codeception project correctly, you can run these containers in the background:
docker-compose up -d
and execute your tests:
vendor/bin/codecept run
If you want to see what a test is doing, you can visit http://localhost:7900 to connect to the browser inside the container and watch what the test is executing.
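For the compose file above, the WebDriver module must point at the selenium service instead of localhost. A minimal sketch of acceptance.suite.yml under that assumption (the url value is a placeholder for your application):
modules:
  enabled:
    - WebDriver:
        url: 'http://myapp.local'  # placeholder: your application URL
        host: selenium             # the compose service name
        port: 4444
        browser: chrome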
If you are using the WebDriver module to run your tests with Codeception, there is an option to run your browser in headless mode.
It won't open any windows and the tests will run in the background without bothering you.
Here is an example with Chrome:
modules:
  enabled:
    - WebDriver
  config:
    WebDriver:
      url: 'http://myapp.local'
      browser: chrome
      window_size: 1920x1080
      capabilities:
        chromeOptions:
          args: ["--headless", "--no-sandbox"]

Impossible to start Symfony 5 server on Docker container (symfony serve -d)

I'm trying to create a Docker container to containerize my Symfony 5 application.
First I created a Dockerfile:
FROM php:7.4-fpm-alpine
# Update
RUN apk --no-cache update
RUN apk --no-cache add bash git
# Install Node
RUN apk --no-cache add --update nodejs npm
RUN apk --no-cache add --update python3
RUN apk --no-cache add --update make
RUN apk --no-cache add --update g++
# Install pdo
RUN docker-php-ext-install pdo_mysql
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Symfony CLI
RUN curl -sS https://get.symfony.com/cli/installer | bash && mv /root/.symfony/bin/symfony /usr/local/bin/symfony
# WORK DIR
COPY . /var/www/html
WORKDIR /var/www/html
RUN composer update
RUN composer install
RUN npm install
# Start Symfony server on Port 8000
EXPOSE 8000
RUN symfony serve -d
Then I created a docker-compose.yml file (which simply maps port 8000 of the container to port 8080 on my machine).
version: '3.8'
services:
  php-fpm:
    container_name: infolea
    build: ./
    ports:
      - 8080:8000
    volumes:
      - ./:/var/www/html
Then I build my image with docker-compose build and run it with docker-compose up -d.
In my browser, the localhost:8080 link doesn't display anything.
If I then restart the Symfony server by typing symfony serve -d in my container's terminal, I can see my application working at localhost:8080.
What is weird is that when I checked whether the server was already started in my container's terminal, I got this:
(screenshot of the docker container terminal)
What I want is to start my Symfony server directly, without retyping symfony serve -d.
How can I do it?
Try using CMD instead of RUN. RUN executes only while the image is being built, so a server started there does not survive into the running container, whereas CMD defines the process launched when the container starts:
CMD ["/usr/local/bin/symfony", "local:server:start", "--port=8000", "--no-tls"]
See https://docs.docker.com/engine/reference/builder/#cmd
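For completeness, a sketch of how the end of the Dockerfile could look with that change (everything above it unchanged):
# Start Symfony server on port 8000 when the container runs
EXPOSE 8000
CMD ["/usr/local/bin/symfony", "local:server:start", "--port=8000", "--no-tls"]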

CircleCI permission denied opening firebase-tools.json for Firebase deployment

I'm using Firebase to host my personal website and wanted to integrate CircleCI for faster integration. However, I receive this error on the deployment step:
/home/circleci/project/node_modules/configstore/index.js:52
throw error;
^
Error: EACCES: permission denied, open '/home/circleci/.config/configstore/firebase-tools.json'
You don't have access to this file.
Note: adding sudo before the deploy command also causes the build to fail.
Below is my project's yaml configuration:
---
commands:
  restore_cache_cmd:
    description: "Restore cached npm install"
    steps:
      - restore_cache:
          key: 'dependency-cache-{{ checksum "package.json" }}'
  save_cache_cmd:
    description: "Saving npm install"
    steps:
      - save_cache:
          key: 'dependency-cache-{{ checksum "package.json" }}'
          paths:
            - "./node_modules"
  update:
    description: "Installing project's dependencies"
    steps:
      - checkout
      - restore_cache_cmd
      - run: sudo npm i -g npm@latest
      - run: sudo npm i
      - save_cache_cmd
  build_deploy:
    description: "Building project"
    steps:
      - run:
          name: Build
          command: sudo npm run build
      - run:
          name: Deploy
          command: ./node_modules/.bin/firebase deploy --token=$FIREBASE_DEPLOY_TOKEN --only hosting
executors:
  docker-executor:
    docker:
      - image: "cimg/node:12.14.1"
jobs:
  build_site:
    executor: docker-executor
    working_directory: ~/Darryls-Personal-Site
    steps:
      - update
      - build_deploy
version: 2.1
workflows:
  build_site:
    jobs:
      - build_site:
          filters:
            branches:
              only: master
Steps I have already completed from other questions:
Used firebase login:ci to obtain a refresh token and placed it in an environment variable in my CircleCI project settings
Ran npm install --save-dev firebase-tools
I think the problem is that you run all your npm commands with sudo except the firebase deploy command.
You should definitely run everything as the current user, not as the superuser.
You will see in official tutorials that nothing is run with sudo except in very specific cases.
Also, instead of ./node_modules/.bin/firebase deploy you could use npx firebase deploy, which looks first in the local node_modules and then in the global ones.
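As a minimal sketch of what the relevant commands could look like without sudo (assuming the cimg/node executor above, where the circleci user already owns its home and project directories):
update:
  description: "Installing project's dependencies"
  steps:
    - checkout
    - restore_cache_cmd
    - run: npm i
    - save_cache_cmd
build_deploy:
  description: "Building project"
  steps:
    - run:
        name: Build
        command: npm run build
    - run:
        name: Deploy
        command: npx firebase deploy --token=$FIREBASE_DEPLOY_TOKEN --only hosting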

Deploy Gatsby to Firebase using Circleci

I followed this blog post to deploy my Gatsby site to Firebase using CircleCI:
https://circleci.com/blog/automatically-deploy-a-gatsby-site-to-firebase-hosting/
The config.yml file is as follows:
# CircleCI Firebase Deployment Config
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:10
    working_directory: ~/gatsby-site
    steps:
      - checkout
      - restore_cache:
          keys:
            # Find a cache corresponding to this specific package-lock.json
            - v1-npm-deps-{{ checksum "package-lock.json" }}
            # Fallback cache to be used
            - v1-npm-deps-
      - run:
          name: Install Dependencies
          command: npm install
      - save_cache:
          key: v1-npm-deps-{{ checksum "package-lock.json" }}
          paths:
            - ./node_modules
      - run:
          name: Gatsby Build
          command: npm run build
      - run:
          name: Firebase Deploy
          command: ./node_modules/.bin/firebase deploy --token "$FIREBASE_TOKEN"
This caused an error
#!/bin/bash -eo pipefail
./node_modules/.bin/firebase deploy --token "$FIREBASE_TOKEN"
/bin/bash: ./node_modules/.bin/firebase: No such file or directory
Exited with code exit status 127
CircleCI received exit code 127
I haven't used YAML files or focused on DevOps before, so I did some digging around. I found a few other people with this issue, and there was a suggestion to use workspaces and workflows. So I amended my YAML file to support this:
# CircleCI Firebase Deployment Config
version: 2
jobs:
  # build jobs
  build:
    docker:
      - image: circleci/node:10
    working_directory: ~/gatsby-site
    steps:
      - checkout
      - restore_cache:
          keys:
            # Find a cache corresponding to this specific package-lock.json
            - v1-npm-deps-{{ checksum "package-lock.json" }}
            # Fallback cache to be used
            - v1-npm-deps-
      - run:
          name: Install Dependencies
          command: npm install
      - save_cache:
          key: v1-npm-deps-{{ checksum "package-lock.json" }}
          paths:
            - ./node_modules
      - persist_to_workspace:
          root: ./
          paths:
            - ./
      - run:
          name: Gatsby Build
          command: npm run build
      - persist_to_workspace:
          root: ./
          paths:
            - ./
  # deploy jobs
  deploy-production:
    docker:
      - image: circleci/node:10
    steps:
      - attach_workspace:
          at: ./
      - run:
          name: Firebase Deploy
          command: ./node_modules/.bin/firebase deploy --token "$FIREBASE_TOKEN"
workflows:
  version: 2
  build:
    jobs:
      # build
      - build
      # deploy
      - deploy-production:
          requires:
            - build
Same issue
#!/bin/bash -eo pipefail
./node_modules/.bin/firebase deploy --token "$FIREBASE_TOKEN"
/bin/bash: ./node_modules/.bin/firebase: No such file or directory
Exited with code exit status 127
CircleCI received exit code 127
I assume it must be something to do with the paths and that it's looking in the wrong directory? Any idea how I can get it to find the required module?
Apparently I can't read. The fix was in the instructions:
We'll also need to install the firebase-tools package locally to our project as a devDependency. This will come in handy later on when integrating with CircleCI, which does not allow installing packages globally by default. So let's install it right now:
npm install -D firebase-tools
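After the local install the binary lands in ./node_modules/.bin/, and since node_modules is persisted to the workspace, the existing deploy step finds it. A quick way to verify this in a run step (a sketch):
./node_modules/.bin/firebase --version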

Docker Compose Link Containers for Phpunit with Wordpress, MySQL

I would like to use a docker-compose app to run unit tests on a WordPress plugin.
Following (mostly) this tutorial, I have created four containers:
my-wpdb:
  image: mariadb
  ports:
    - "8081:3306"
  environment:
    MYSQL_ROOT_PASSWORD: dockerpass
my-wp:
  image: wordpress
  volumes:
    - ./:/var/www/html
  ports:
    - "8080:80"
  links:
    - my-wpdb:mysql
  environment:
    WORDPRESS_DB_PASSWORD: dockerpass
my-wpcli:
  image: tatemz/wp-cli
  volumes_from:
    - my-wp
  links:
    - my-wpdb:mysql
  entrypoint: wp
  command: "--info"
my-phpunit:
  image: phpunit/phpunit
  volumes_from:
    - my-wp
  links:
    - my-wpdb
This tutorial got me as far as creating the phpunit files (xml, tests, bin, .travis), with the exception that I had to install subversion manually:
docker exec wp_my-wp_1 apt-get update
docker exec wp_my-wp_1 apt-get install -y wget git curl zip vim
docker exec wp_my-wp_1 apt-get install -y apache2 subversion libapache2-svn libsvn-perl
And run the last part of bin/install-wp-tests.sh manually in the database container:
docker exec wp_my-wpdb_1 mysqladmin create wordpress_test --user=root --password=dockerpass --host=localhost --protocol=tcp
I can run phpunit: docker-compose run --rm my-wp phpunit --help.
I can specify the config xml file:
docker-compose run --rm my-wp phpunit --configuration /var/www/html/wp-content/plugins/my-plugin/phpunit.xml.dist
However, the test WordPress installation is installed in the my-wp container's /tmp directory: /tmp/wordpress-tests-lib/includes/functions.php
I think I have to link the my-phpunit container's /tmp to the one in my-wp?
This doesn't answer my question, but as a solution to the problem there is a GitHub repo for a Docker image that provides the wanted features: https://github.com/chriszarate/docker-wordpress, as well as a wrapper you can invoke it through: https://github.com/chriszarate/docker-wordpress-vip
I wonder if, for the initial setup noted in the question, it might make more sense to add PHPUnit to the WordPress container rather than making a separate container for it. If you do keep the separate container, a shared-volume sketch is below.
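One way to let both containers see the same test install (a sketch, untested, assuming the file is upgraded to the version 2 compose format, which named volumes require; wp-tests is a made-up volume name):
version: '2'
services:
  my-wp:
    image: wordpress
    volumes:
      - ./:/var/www/html
      - wp-tests:/tmp
  my-phpunit:
    image: phpunit/phpunit
    volumes:
      - ./:/var/www/html
      - wp-tests:/tmp
volumes:
  wp-tests: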
