Is it possible to run UI tests in Codeception in the background? - webdriver

I'm new to Codeception and wonder: is it possible to run UI tests in the background, without opening a test web browser every time?
I suspect that I should change something in acceptance.suite.yml, but I'm not sure what.
I would appreciate any help.

You can use a headless browser. This will execute the whole test flow almost exactly as it would in regular UI mode, except that no visible browser window is opened.
You can learn more about this here and in similar resources.
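As a minimal sketch of what that change looks like with the WebDriver module (the URL is a placeholder, and a fuller example appears in a later answer), the acceptance.suite.yml tweak amounts to passing a headless flag to the browser:
modules:
    enabled:
        - WebDriver
    config:
        WebDriver:
            url: 'http://myapp.local'   # placeholder, point at your app
            browser: chrome
            capabilities:
                chromeOptions:
                    args: ["--headless"]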

You can use Docker to virtualize the WebDriver and Selenium setup.
Create two files in the project's root directory. The Dockerfile builds an image with PHP and Composer so that your Codeception tests can run inside a container.
Dockerfile
FROM php:8.0-cli-alpine
RUN apk -U upgrade --no-cache
# install composer
COPY --from=composer:2.2 /usr/bin/composer /usr/bin/composer
ENV COMPOSER_ALLOW_SUPERUSER=1
ENV PATH="${PATH}:/root/.composer/vendor/bin"
WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-scripts --no-progress \
&& composer clear-cache
COPY . /app
RUN composer dump-autoload --optimize --classmap-authoritative \
&& composer clear-cache
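If you want to sanity-check the image on its own before wiring it into Compose, a quick build-and-run could look like the following (the tag codecept-php is just an example name, and this assumes Codeception is a regular, non-dev dependency, since the Dockerfile installs with --no-dev):
# build the test image and confirm Codeception is available
docker build -t codecept-php .
docker run --rm codecept-php vendor/bin/codecept --version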
The second file is docker-compose.yml, which uses a preconfigured Selenium image and puts your PHP Codeception tests on the same network, so that the containers can talk to each other over the needed ports (4444 and 7900).
docker-compose.yml
---
version: '3.4'
services:
  php:
    build: .
    depends_on:
      - selenium
    volumes:
      - ./:/usr/src/app:rw,cached
  selenium:
    image: selenium/standalone-chrome:4
    shm_size: 2gb
    container_name: selenium
    ports:
      - "4444:4444"
      - "7900:7900"
    environment:
      - VNC_NO_PASSWORD=1
      - SCREEN_WIDTH=1920
      - SCREEN_HEIGHT=1080
If you have set up Docker and your Codeception project correctly, you can run these containers in the background:
docker-compose up -d
and execute your tests:
vendor/bin/codecept run
If you want to see what the test is doing, you can visit http://localhost:7900 to connect to the browser inside the container and watch the test execute.
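For this to work, the WebDriver module has to be able to reach Selenium. A sketch of the suite config under the assumptions of the compose file above (the url value is a placeholder): when the tests run inside the php container, the host is the compose service name selenium; when you run them from the host machine, localhost works because port 4444 is published.
modules:
    enabled:
        - WebDriver
    config:
        WebDriver:
            url: 'http://myapp.local'   # placeholder, point at your app
            host: selenium              # or localhost when running from the host
            port: 4444
            browser: chrome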

If you are using the WebDriver module to run your tests with Codeception, there is an option to configure your browser in headless mode.
It won't open any windows, and the tests will run in the background without bothering you.
Here is an example with Chrome:
modules:
    enabled:
        - WebDriver
    config:
        WebDriver:
            url: 'http://myapp.local'
            browser: chrome
            window_size: 1920x1080
            capabilities:
                chromeOptions:
                    args: ["--headless", "--no-sandbox"]
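With that in place, the run command stays the same; assuming the suite is named acceptance:
vendor/bin/codecept run acceptance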

Related

Docker doesn't copy new images

I have a problem regarding Docker.
When I deploy a new version of my app image, the images I have added to the images folder in my wwwroot folder aren't copied.
My Dockerfile looks like this:
FROM microsoft/aspnetcore-build:1.0-projectjson
WORKDIR /app-src
COPY . .
RUN dotnet restore
RUN dotnet publish src/Test -o /app
EXPOSE 5000
WORKDIR /app
ENTRYPOINT ["dotnet", "Test.dll"]
And my docker-compose:
version: '3.8'
services:
  app:
    image: <dockeruser>/<imagename>:<tag>
    links:
      - db
    environment:
      ConnectionStrings__Dataconnection: "Host=db;Username=Username;Password=Password;Database=db"
    ports:
      - "5000:5000"
    volumes:
      - ~/data/images:/app/wwwroot/images
  db:
    image: postgres:9.5
    ports:
      - "31337:5432"
    volumes:
      - ~/data/db:/var/lib/postgresql/data/pgdata
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: db
      PGDATA: /var/lib/postgresql/data/pgdata
My current versions are Docker 18.09.7 (build 2d0083d) and docker-compose 1.26.2 (build eefe0d31).
The exact same files (except that the compose file version was set to 2 in docker-compose.yml) previously worked on Docker 17.03.0-ce (build 60ccb22) and docker-compose 1.9.0 (build 2585387).
I store my new images in my repo's wwwroot/images folder and push them to the repo, and Docker Hub then automatically builds an image from the new commit. On the server I pull the new image and run docker-compose down -v followed by docker-compose up -d, but the images are not available in the app afterwards.
Disclaimer: This is a project I have taken over, and I'm aware that some of the software versions are very old.
Your images may well be in your container image, but since you are doing a bind mount, whatever is in your server's ~/data/images directory will effectively override/replace what's in your image when the container is created.
Try removing the volume from the app service, basically remove this:
volumes:
  - ~/data/images:/app/wwwroot/images
The other thing you can try is to manually copy the images into the ~/data/images directory on the server.
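One way to do that copy, sketched with the placeholder image name from the compose file above: create a temporary container from the image, so the bind mount doesn't shadow the baked-in files, and copy the directory out with docker cp:
# create a stopped container from the image, copy the baked-in
# images out to the host directory, then clean up
id=$(docker create <dockeruser>/<imagename>:<tag>)
mkdir -p ~/data/images
docker cp "$id:/app/wwwroot/images/." ~/data/images/
docker rm "$id"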

Docker Compose Link Containers for Phpunit with Wordpress, MySQL

I would like to use a docker-compose app to run unit tests on a WordPress plugin.
Following (mostly) this tutorial, I have created four containers:
my-wpdb:
  image: mariadb
  ports:
    - "8081:3306"
  environment:
    MYSQL_ROOT_PASSWORD: dockerpass
my-wp:
  image: wordpress
  volumes:
    - ./:/var/www/html
  ports:
    - "8080:80"
  links:
    - my-wpdb:mysql
  environment:
    WORDPRESS_DB_PASSWORD: dockerpass
my-wpcli:
  image: tatemz/wp-cli
  volumes_from:
    - my-wp
  links:
    - my-wpdb:mysql
  entrypoint: wp
  command: "--info"
my-phpunit:
  image: phpunit/phpunit
  volumes_from:
    - my-wp
  links:
    - my-wpdb
This tutorial got me as far as creating the phpunit files (xml, tests, bin, .travis), with the exception that I had to install subversion manually:
docker exec wp_my-wp_1 apt-get update
docker exec wp_my-wp_1 apt-get install -y wget git curl zip vim
docker exec wp_my-wp_1 apt-get install -y apache2 subversion libapache2-svn libsvn-perl
And run the last part of bin/install-wp-tests.sh manually in the database container:
docker exec wp_my-wpdb_1 mysqladmin create wordpress_test --user=root --password=dockerpass --host=localhost --protocol=tcp
I can run phpunit: docker-compose run --rm my-wp phpunit --help.
I can specify the config xml file:
docker-compose run --rm my-wp phpunit --configuration /var/www/html/wp-content/plugins/my-plugin/phpunit.xml.dist
However, the test WordPress installation is installed in the my-wp container's /tmp directory: /tmp/wordpress-tests-lib/includes/functions.php
I think I have to link the my-phpunit container's /tmp to the one in my-wp?
This doesn't answer my question, but as a solution to the problem there is a GitHub repo for a Docker image that provides the wanted features: https://github.com/chriszarate/docker-wordpress, as well as a wrapper you can invoke it through: https://github.com/chriszarate/docker-wordpress-vip
I wonder if, for the initial set-up (noted in the question), it might make more sense to add PHPUnit to the WordPress container rather than making a separate container for it.
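On the original question of linking /tmp: since my-phpunit already uses volumes_from, one untested sketch (in the same v1 compose syntax as the question) is to declare /tmp as a volume on my-wp, so that volumes_from shares it:
my-wp:
  image: wordpress
  volumes:
    - ./:/var/www/html
    - /tmp            # declared as a volume so volumes_from picks it up
my-phpunit:
  image: phpunit/phpunit
  volumes_from:
    - my-wp           # now also inherits my-wp's /tmp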

How to use Gitlab CI/CD to deploy a meteor project?

As claimed on their website, GitLab can be used to auto-deploy projects after some code is pushed to the repository, but I am not able to figure out how. There are plenty of Ruby tutorials out there, but none for Meteor or Node.
Basically I just need to rebuild a Docker container on my server after code is pushed to my master branch. Does anyone know how to achieve that? I am totally new to the .gitlab-ci.yml stuff and appreciate help very much.
Brief: I have been running a Meteor 1.3.2 app, hosted on Digital Ocean (Ubuntu 14.04), for 4 months. I am using GitLab v8.3.4 running on the same Digital Ocean droplet as the Meteor app. It is a 2 GB / 2 CPU droplet ($20 a month), using the built-in GitLab CI for CI/CD. This setup has been running successfully till now. (We are currently not using Docker; however, this should not matter.)
Our CI/CD strategy:
We check out the master branch on our local laptop. The branch contains the whole Meteor project.
We use the git CLI tool on Windows to connect to our GitLab server (for pull, push, and similar regular git activities).
We open the checked-out project in the Atom editor. We have also integrated Atom with GitLab, which helps with quick git status/pull/push from within the Atom editor itself. We do the regular Meteor work, viz. fixing bugs etc.
After testing on the local laptop, we commit and push to master. This triggers an automatic build using GitLab CI, and the results (including build logs) can be seen in GitLab itself.
Please follow the steps below:
Install Meteor on the DO droplet.
Install GitLab on DO (using the 1-click deploy if possible) or install it manually. Ensure you are installing GitLab v8.3.4 or a newer version. I had done a DO one-click deploy on my droplet.
Start the GitLab server and log into GitLab from a browser. Open your project and go to Project Settings -> Runners in the left menu.
SSH into your DO server and configure a new upstart service on the droplet as root:
vi /etc/init/meteor-service.conf
Sample file:
# upstart service file at /etc/init/meteor-service.conf
description "Meteor.js (NodeJS) application for example.com:3000"
author "rohanray@gmail.com"

# When to start the service
start on runlevel [2345]

# When to stop the service
stop on shutdown

# Automatically restart process if crashed
respawn
respawn limit 10 5

script
    export PORT=3000
    # this allows Meteor to figure out correct IP address of visitors
    export HTTP_FORWARDED_COUNT=1
    export MONGO_URL=mongodb://xxxxxx:xxxxxx@example123123.mongolab.com:59672/meteor-db
    export ROOT_URL=http://<droplet_ip>:3000
    exec /home/gitlab-runner/.meteor/packages/meteor-tool/1.1.10/mt-os.linux.x86_64/dev_bundle/bin/node /home/gitlab-runner/erecaho-build/server-alpha-running/bundle/main.js >> /home/gitlab-runner/erecaho-build/server-alpha-running/meteor.log
end script
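For reference, the service is then controlled with the usual upstart commands (the same ones the deploy job below relies on):
sudo start meteor-service
sudo status meteor-service
sudo restart meteor-service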
Install gitlab-ci-multi-runner from here: https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/install/linux-repository.md as per the instructions
Cheatsheet:
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-ci-multi-runner/script.deb.sh | sudo bash
sudo apt-get install gitlab-ci-multi-runner
sudo gitlab-ci-multi-runner register
Enter details from step 2
The new runner should now show as green; activate the runner if required.
Create .gitlab-ci.yml within the meteor project directory
Sample file:
before_script:
  - echo "======================================"
  - echo "==== START auto full script v0.1 ====="
  - echo "======================================"

types:
  - cleanup
  - build
  - test
  - deploy

job_cleanup:
  type: cleanup
  script:
    - cd /home/gitlab-runner/erecaho-build
    - echo "cleaning up existing bundle folder"
    - echo "cleaning up current server-running folder"
    - rm -fr ./server-alpha-running
    - mkdir ./server-alpha-running
  only:
    - master
  tags:
    - master

job_build:
  type: build
  script:
    - pwd
    - meteor build /home/gitlab-runner/erecaho-build/server-alpha-running --directory --server=http://example.org:3000 --verbose
  only:
    - master
  tags:
    - master

job_test:
  type: test
  script:
    - echo "testing ----"
    - cd /home/gitlab-runner/erecaho-build/server-alpha-running/bundle
    - ls -la main.js
  only:
    - master
  tags:
    - master

job_deploy:
  type: deploy
  script:
    - echo "deploying ----"
    - cd /home/gitlab-runner/erecaho-build/server-alpha-running/bundle/programs/server/ && /home/gitlab-runner/.meteor/packages/meteor-tool/1.1.10/mt-os.linux.x86_64/dev_bundle/bin/npm install
    - cd ../..
    - sudo restart meteor-service
    - sudo status meteor-service
  only:
    - master
  tags:
    - master
Check the above file into GitLab. This should trigger GitLab CI, and after the build process is complete, the new app will be available at example.net:3000.
Note: The app will not be available after checking in .gitlab-ci.yml for the first time, since restart meteor-service will fail with a service-not-found error. Manually run sudo start meteor-service once on the DO SSH console. After this, any new check-in to the GitLab master branch will trigger the auto CI/CD, and the new version of the app will be available on example.com:3000 once the build completes successfully.
P.S.: The GitLab CI YAML docs can be found at http://doc.gitlab.com/ee/ci/yaml/README.html for customization and to understand the sample YAML file above.
For a Docker-specific runner, please refer to https://gitlab.com/gitlab-org/gitlab-ci-multi-runner
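As a sketch, a non-interactive registration of a runner with the Docker executor might look like this (the URL, token, and image are placeholders; the registration token comes from the project's Runners settings page):
sudo gitlab-ci-multi-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/ci" \
  --registration-token "YOUR_TOKEN" \
  --executor "docker" \
  --docker-image "node:4"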

Why can't I see my files inside a Docker container?

I'm a Docker newbie and I'm trying to set up my first project.
To test how to play with it, I just cloned a ready-to-go project and set it up (Project repo).
As the guide claims, if I access a specific URL I reach the homepage, or to be more specific, a Symfony start page.
Moreover, with this command
docker run -i -t testdocker_application /bin/bash
I'm able to log into the container.
My problem is that if I try to go to the application folder through bash, the folder I shared with my host is empty.
I tried with another project, but the result is the same.
Where am I going wrong?
Here some infos about my env:
Ubuntu 12.04
Docker version 1.8.3, build f4bf5c7
Config:
application:
    build: code
    volumes:
        - ./symfony:/var/www/symfony
        - ./logs/symfony:/var/www/symfony/app/logs
    tty: true
It looks like you have a docker-compose.yml file but are running the image with plain docker run. You don't actually need docker-compose to start a single container. If you just want to start the container, your command should look like this:
docker run -ti -v $(pwd)/symfony:/var/www/symfony -v $(pwd)/logs/symfony:/var/www/symfony/app/logs testdocker_application /bin/bash
To use your docker-compose.yml, start your container with docker-compose up. You would also need to add the following to drop into a shell:
stdin_open: true
command: /bin/bash
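Putting that together with the config from the question, the service definition would look roughly like this:
application:
    build: code
    volumes:
        - ./symfony:/var/www/symfony
        - ./logs/symfony:/var/www/symfony/app/logs
    tty: true
    stdin_open: true
    command: /bin/bash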

Docker permissions development environment using a host mounted volume

I'm using docker-compose to set up a portable development environment for a bunch of Symfony2 applications (though nothing I want to do is specific to Symfony). I've decided to have the source files on the local machine exposed as a data volume, with all the other dependencies in Docker. This way developers can edit on the local file-system.
Everything works great, except that after running the app, my cache and log files and the files created by Composer in the /vendor directory are owned by root.
I've read about this problem and some possible approaches here:
Changing permissions of added file to a Docker volume
But I can't quite tease out what changes I have to make to my docker-compose.yml file so that when my Symfony container starts with docker-compose up, any files that are created have the permissions of the user on the host machine.
I'm posting the file for reference; worker is where PHP etc. live:
source:
    image: symfony/worker-dev
    volumes:
        - $PWD:/var/www/app
mongodb:
    image: mongo:2.4
    ports:
        - "27017:27017"
    volumes_from:
        - source
worker:
    image: symfony/worker-dev
    ports:
        - "80:80"
    links:
        - mongodb
    volumes_from:
        - source
    volumes:
        - "tmp/:/var/log/nginx"
One of the solutions is to execute the commands inside your container. I've tried multiple workarounds for this same issue in the past, and I find executing the command inside the container the most user-friendly.
Example command: docker-compose run CONTAINER_NAME php bin/console cache:clear. You may use make, ant, or any modern build tool to keep the commands short.
Example with Makefile:
all: | build run test
build: | docker-compose-build
run: | composer-install clear-cache

############## docker compose
docker-compose-build:
	docker-compose build

############## composer
composer-install:
	docker-compose run app composer install

composer-update:
	docker-compose run app composer update

############## cache
clear-cache:
	docker-compose run app php bin/console cache:clear

docker-set-permissions:
	docker-compose run app chown -R www-data:www-data var/logs
	docker-compose run app chown -R www-data:www-data var/cache

############## test
test:
	docker-compose run app php bin/phpunit
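Typical usage would then be:
make build   # build the images
make run     # install dependencies and clear the cache
make test    # run phpunit inside the container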
Alternatively, you may introduce a .env file containing environment variables, and then use one of the variables to run the usermod command in the Docker container.
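A minimal sketch of that idea, assuming a reasonably recent docker-compose and a hypothetical HOST_UID variable in a .env file next to the compose file (docker-compose substitutes ${HOST_UID} when it parses the compose file):
# .env (hypothetical variable name)
HOST_UID=1000

# docker-compose.yml fragment: re-map www-data to the host UID before
# starting the main process (php-fpm here is an assumption)
worker:
    image: symfony/worker-dev
    command: sh -c "usermod -u ${HOST_UID} www-data && exec php-fpm"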
