How do I run playwright targeting localhost in github-actions? - next.js

I am trying to run playwright E2E tests for github-actions but have been unsuccessful so far.
- name: Run build and start
  run: |
    yarn build:e2e
    yarn start:e2e &
- name: Run e2e
  run: |
    yarn e2e
I don't think the server is running when playwright runs because all the e2e tests end up failing.
Run build and start
Done in 192.91s.
yarn run v1.22.19
$ env-cmd -f environments/.env.e2e next start
ready - started server on 0.0.0.0:3000, url: ***
Run e2e
Test timeout of 270000ms exceeded while running "beforeEach" hook.
I am pretty certain that playwright cannot connect to http://localhost:3000 from the previous step and that's why all the tests timeout.
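One way to confirm that diagnosis before changing anything else is to block until the port answers before the e2e step runs. This is only a sketch: wait-on is not part of the original setup, and the script names are taken from the question.
- name: Run build and start
  run: |
    yarn build:e2e
    yarn start:e2e &
    # block until the Next.js server responds on port 3000 (npx fetches wait-on if it is not installed)
    npx wait-on http://localhost:3000
- name: Run e2e
  run: |
    yarn e2e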

I had a similar problem and I fixed it by adding this to playwright.config.ts
const config: PlaywrightTestConfig = {
  // the rest of the options
  webServer: {
    command: 'yarn start',
    url: 'http://localhost:3000/',
    timeout: 120000,
  },
  use: {
    baseURL: 'http://localhost:3000/',
  },
  // the rest of the options
};
export default config;
Also, I didn't need to run yarn start in the GitHub Actions workflow; yarn build alone is enough, since Playwright's webServer option starts the app itself.
Source:
https://github.com/microsoft/playwright/issues/14814
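For illustration, with the webServer option in place the workflow only needs the build before the test step, since Playwright starts the command itself and waits for the URL to respond. A minimal sketch, reusing the script names from the question:
- name: Build
  run: yarn build:e2e
- name: Run e2e
  run: yarn e2e
Here Playwright runs the webServer command (yarn start, or yarn start:e2e if that is what it points at) on its own before the first test.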

Related

Github actions Runner listener exited with error code null

On my server, if I run
sudo ./svc.sh status
I get this:
It says the status is active, but "Runner listener exited with error code null".
In my GitHub account's Actions page, the runner is offline.
As far as I know, the runner should be Idle.
This is my workflow
name: Node.js CI

on:
  push:
    branches: [ dev ]

jobs:
  build:
    runs-on: self-hosted
    strategy:
      matrix:
        node-version: [ 12.x ]
        # See supported Node.js release schedule at https://nodejs.org/en/about/releases/
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
Here I don't have any build scripts because I push the build folder directly to the GitHub repo.
How do I fix this error?
I ran into this issue when I first installed the runner application as $USER1 but configured it as a service while I was $USER2.
First, I ran cd /home/$USER1/actions-runner && sudo ./svc.sh uninstall to remove the service.
Then I changed the owner of all files in /home/$USER1/actions-runner/ with
sudo chown -R $USER2 /home/$USER1/actions-runner
and ran ./config.sh remove --token $HASHNUMBER (you can get the token by following this page) to remove the runner application.
I also removed it from the GitHub settings runners page.
Finally, I installed everything again as the same user, and it worked out.
I ran into this same issue and it turned out to be a disk space problem. Make sure your server has enough allocated storage.

Codefresh allure report: Test reporter requires CF_BRANCH_TAG_NORMALIZED variable for upload files

Set up:
1. Upon merge to master, the Codefresh build job builds the image and pushes it to the Docker registry.
2. The Codefresh test-run job picks up the new image and runs the tests.
3. At the end of the test-run job, the Allure report-building step runs.
Results:
The 3rd step fails with the message in the title, but only when the job ran all the way through the pipeline.
It passes fine if I rerun the job manually (steps 1 and 2 are not executed in that case).
Notes:
Manually adding that tag does not help.
Test execution pipeline:
stages:
  - "clone"
  - "create"
  - "run"
  - "get_results"
  - "clean_up"

steps:
  clone:
    title: "Cloning repository"
    type: "git-clone"
    repo: "repo/repo"
    # CF_BRANCH value is auto set when pipeline is triggered
    revision: "${{CF_BRANCH}}"
    git: "github"
    stage: "clone"

  create:
    title: "Spin up ec2 server on aws"
    image: mesosphere/aws-cli
    working_directory: "${{clone}}" # Running command where code cloned
    commands:
      - export AWS_ACCESS_KEY_ID="${{AWS_ACCESS_KEY_ID}}"
      - export AWS_SECRET_ACCESS_KEY="${{AWS_SECRET_ACCESS_KEY}}"
      - export AWS_DEFAULT_REGION="${{AWS_REGION}}"
      - aws cloudformation create-stack --stack-name yourStackName --template-body file://cloudformation.yaml --parameters ParameterKey=keyName,ParameterValue=qaKeys
    stage: "create"

  run:
    title: "Wait for results"
    image: mesosphere/aws-cli
    working_directory: "${{clone}}" # Running command where code cloned
    commands:
      # wait for results in s3
      - apk update
      - apk upgrade
      - apk add bash
      - export AWS_ACCESS_KEY_ID="${{AWS_ACCESS_KEY_ID}}"
      - export AWS_SECRET_ACCESS_KEY="${{AWS_SECRET_ACCESS_KEY}}"
      - export AWS_DEFAULT_REGION="${{AWS_REGION}}"
      - chmod +x ./wait-for-aws.sh
      - ./wait-for-aws.sh
      # copy results objects from s3
      - aws s3 cp s3://${S3_BUCKETNAME}/ ./ --recursive
      - cp -r -f ./_result_/allure-raw $CF_VOLUME_PATH/allure-results
      - cat test-result.txt
    stage: "run"

  get_results:
    title: Generate test reporting
    image: codefresh/cf-docker-test-reporting
    tag: "${{CF_BRANCH_TAG_NORMALIZED}}"
    working_directory: '${{CF_VOLUME_PATH}}/'
    environment:
      - BUCKET_NAME=yourName
      - CF_STORAGE_INTEGRATION=integrationName
    stage: "get_results"

  clean_up:
    title: "Remove cf stack and files from s3"
    image: mesosphere/aws-cli
    working_directory: "${{clone}}" # Running command where code cloned
    commands:
      # wait for results in s3
      - apk update
      - apk upgrade
      - apk add bash
      - export AWS_ACCESS_KEY_ID="${{AWS_ACCESS_KEY_ID}}"
      - export AWS_SECRET_ACCESS_KEY="${{AWS_SECRET_ACCESS_KEY}}"
      - export AWS_DEFAULT_REGION="${{AWS_REGION}}"
      # delete stack
      - aws cloudformation delete-stack --stack-name stackName
      # remove all files from s3
      # - aws s3 rm s3://bucketName --recursive
    stage: "clean_up"
Adding CF_BRANCH_TAG_NORMALIZED as a tag won't help in that case.
CF_BRANCH_TAG_NORMALIZED needs to be set as an environment variable for this step.
Taking a look at the source code of codefresh/cf-docker-test-reporting,
https://github.com/codefresh-io/cf-docker-test-reporting/blob/master/config/index.js
env: {
    // bucketName - only bucket name, with out subdir path
    bucketName: ConfigUtils.getBucketName(),
    // bucketSubPath - parsed path to sub folder inside bucket
    bucketSubPath: ConfigUtils.getBucketSubPath(),
    // originBucketName - origin value that can contain subdir need to use it in some cases
    originBucketName: process.env.BUCKET_NAME,
    apiKey: process.env.CF_API_KEY,
    buildId: process.env.CF_BUILD_ID,
    volumePath: process.env.CF_VOLUME_PATH,
    branchNormalized: process.env.CF_BRANCH_TAG_NORMALIZED,
    storageIntegration: process.env.CF_STORAGE_INTEGRATION,
    logLevel: logLevelsMap[process.env.REPORT_LOGGING_LEVEL] || INFO,
    sourceReportFolderName: (allureDir || 'allure-results').trim(),
    reportDir: ((reportDir || '').trim()) || undefined,
    reportIndexFile: ((reportIndexFile || '').trim()) || undefined,
    reportWrapDir: _.isNumber(reportWrapDir) ? String(reportWrapDir) : '',
    reportType: _.isString(reportType) ? reportType.replace(/[<>]/g, 'hackDetected') : 'default',
    allureDir,
    clearTestReport
},
you can see that CF_BRANCH_TAG_NORMALIZED is taken directly from the environment.
My assumption is that whatever triggers your build normally does not set this environment variable. It is usually set automatically when you have a git trigger, e.g. from GitHub.
When you start your pipeline manually, you probably set the variable yourself, which is why it works in that case.
You should check how your pipelines are usually triggered and whether the variable is set (automatically or manually).
Here's some more documentation about these variables:
https://codefresh.io/docs/docs/codefresh-yaml/variables/#system-provided-variables
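If the variable does turn out to be missing for your trigger, one workaround is to set it explicitly on the reporting step instead of passing it as a tag. This is only a sketch; reusing CF_BRANCH for the value is an assumption and may need extra normalization if your branch names contain special characters:
get_results:
  title: Generate test reporting
  image: codefresh/cf-docker-test-reporting
  working_directory: '${{CF_VOLUME_PATH}}/'
  environment:
    - BUCKET_NAME=yourName
    - CF_STORAGE_INTEGRATION=integrationName
    # assumption: reuse the auto-set CF_BRANCH value as the normalized branch name
    - CF_BRANCH_TAG_NORMALIZED=${{CF_BRANCH}}
  stage: "get_results"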

CodeDeploy Bitbucket - How to Fail Bitbucket on CodeDeploy Failure

I have a successful Bitbucket pipeline calling out to AWS CodeDeploy, but I'm wondering if I can add a step that checks and waits for CodeDeploy success, and otherwise fails the pipeline. Would this be possible with a script that repeatedly calls CodeDeploy to monitor the status of the deployment? Any idea which CodeDeploy call that would be?
bitbucket-pipelines.yml
image: pitech/gradle-awscli

pipelines:
  branches:
    develop:
      - step:
          caches:
            - gradle
          script:
            - gradle build bootRepackage
            - mkdir tmp; cp appspec.yml tmp; cp build/libs/thejar*.jar tmp/the.jar; cp -r scripts/ ./tmp/
            - pip install awscli --upgrade --user
            - aws deploy push --s3-location s3://thebucket/the-deploy.zip --application-name my-staging-app --ignore-hidden-files --source tmp
            - aws deploy create-deployment --application-name server-staging --s3-location bucket=staging-codedeploy,key=the-deploy.zip,bundleType=zip --deployment-group-name the-staging --deployment-config-name CodeDeployDefault.AllAtOnce --file-exists-behavior=OVERWRITE
appspec.yml
version: 0.0
os: linux
files:
  - source: thejar.jar
    destination: /home/ec2-user/the-server/
permissions:
  - object: /
    pattern: "**"
    owner: ec2-user
    group: ec2-user
hooks:
  ApplicationStop:
    - location: scripts/server_stop.sh
      timeout: 60
      runas: ec2-user
  ApplicationStart:
    - location: scripts/server_start.sh
      timeout: 60
      runas: ec2-user
  ValidateService:
    - location: scripts/server_validate.sh
      timeout: 120
      runas: ec2-user
Unfortunately it doesn't seem like Bitbucket is waiting for the ValidateService to complete, so I'd need a way in Bitbucket to confirm before marking the build a success.
The AWS CLI already has a deployment-successful waiter, which checks the status of a deployment every 15 seconds. You just need to feed the output of create-deployment into aws deploy wait deployment-successful.
In your specific case, it should look like this:
image: pitech/gradle-awscli

pipelines:
  branches:
    develop:
      - step:
          caches:
            - gradle
          script:
            - gradle build bootRepackage
            - mkdir tmp; cp appspec.yml tmp; cp build/libs/thejar*.jar tmp/the.jar; cp -r scripts/ ./tmp/
            - pip install awscli --upgrade --user
            - aws deploy push --s3-location s3://thebucket/the-deploy.zip --application-name my-staging-app --ignore-hidden-files --source tmp
            - aws deploy create-deployment --application-name server-staging --s3-location bucket=staging-codedeploy,key=the-deploy.zip,bundleType=zip --deployment-group-name the-staging --deployment-config-name CodeDeployDefault.AllAtOnce --file-exists-behavior=OVERWRITE > deployment.json
            - aws deploy wait deployment-successful --cli-input-json file://deployment.json
aws deploy create-deployment is an asynchronous call, and BitBucket has no idea that it needs to know about the success of your deployment. Adding a script to your CodeDeploy application will have no effect on BitBucket knowing about your deployment.
You have one (maybe two) options to fix this issue.
#1 Include a script that waits for your deployment to finish
You need to add a script to your BitBucket pipeline that checks whether your deployment has finished. You can either use SNS notifications, or poll the CodeDeploy service directly.
The pseudocode would look something like this:
loop
check_if_deployment_complete
if false, wait and retry
if true && deployment successful, return 0 (success)
if true && deployment failed, return non-zero (failure)
You can use the AWS CLI or your favorite scripting language. Add it at the end of your bitbucket-pipelines.yml script, and make sure you wait between calls to CodeDeploy when checking the status, as in the sketch below.
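A minimal sketch of such a polling step, using the create-deployment flags from the question (the 15-second interval and the variable name are choices, not requirements):
script:
  # same create-deployment call as before, but capture the deployment id
  - DEPLOYMENT_ID=$(aws deploy create-deployment --application-name server-staging --s3-location bucket=staging-codedeploy,key=the-deploy.zip,bundleType=zip --deployment-group-name the-staging --deployment-config-name CodeDeployDefault.AllAtOnce --file-exists-behavior=OVERWRITE --query deploymentId --output text)
  # poll CodeDeploy until the deployment reaches a terminal state
  - |
    while true; do
      STATUS=$(aws deploy get-deployment --deployment-id "$DEPLOYMENT_ID" --query "deploymentInfo.status" --output text)
      case "$STATUS" in
        Succeeded) exit 0 ;;
        Failed|Stopped) echo "Deployment ended with status $STATUS"; exit 1 ;;
        *) sleep 15 ;;
      esac
    done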
#2 (the maybe) Use BitBucket AWS CodeDeploy integration directly
BitBucket integrates with AWS CodeDeploy directly, so you might be able to use that integration rather than your own script to integrate properly. I don't know if this is supported or not.

Travis start server and continue with scripts

For my tests I need a dummy server that serves sample/test code. I use a Node http server for that and start it before my scripts with node ./test/server.js.
I can start it, but the issue is that it blocks the build, so the tests never get to run.
So the question is: how can I run the server in the background (or in a separate instance) so it doesn't get in the way? I already stop the server in after_script, so I don't have to terminate it manually.
This is my travis-config so far:
language: node_js
node_js:
  - "6.1"
cache:
  directories:
    - node_modules
install:
  - npm install
before_script:
  - sh -e node ./test/server.js
script:
  - ./node_modules/mocha-phantomjs/bin/mocha-phantomjs ./test/browser/jelly.html
  - ./node_modules/mocha-phantomjs/bin/mocha-phantomjs ./test/browser/form.html
  - ./node_modules/mocha/bin/mocha ./test/node/jelly.js
after_script:
  - curl http://localhost:5555/close
You can background the process by appending a &:
before_script:
  - node ./test/server.js &
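Depending on how quickly the server binds its port, it may also be worth waiting for it to answer before the tests start. A small sketch; port 5555 is taken from the after_script curl in the question, and the retry loop is just one way to wait:
before_script:
  - node ./test/server.js &
  # wait up to ~10 seconds for the dummy server to answer before mocha-phantomjs runs
  - for i in $(seq 1 10); do curl -s http://localhost:5555/ > /dev/null && break; sleep 1; done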

Testing meteor js/mocha with wercker pipeline hangs

I've got an app I can test locally without issue using
meteor test-packages --velocity
// result
[[[[[ Tests ]]]]]
=> Started proxy.
=> Started MongoDB.
=> Started your app.
=> App running at: http://localhost:3000/
PASSED mocha : sanjo:jasmine on server => works
TESTS RAN SUCCESSFULLY
Each package.js in the app I'm testing has the following:
Package.onTest(function(api) {
  api.use(['mike:mocha-package@0.5.8', 'velocity:core@0.9.3']);
  api.addFiles('tests/server/example.js', 'server');
});
Now I'm trying to do the same via the Wercker pipeline using the following wercker.yml:
build:
  box: ubuntu
  steps:
    # have to install meteor to run the tests
    - script:
        name: meteor install
        code: |
          sudo apt-get update -y
          sudo apt-get -y install curl wget
          cd /tmp
          wget https://phantomjs.googlecode.com/files/phantomjs-1.9.1-linux-x86_64.tar.bz2
          tar xfj phantomjs-1.9.1-linux-x86_64.tar.bz2
          sudo cp /tmp/phantomjs-1.9.1-linux-x86_64/bin/phantomjs /usr/local/bin
          curl https://install.meteor.com | /bin/sh
    # run tests using meteor test cli
    - script:
        name: meteor test
        code: |
          meteor test-packages --velocity --settings config/settings.json
The meteor install step works fine, but then the pipeline just hangs here:
[[[[[ Tests ]]]]]
=> Started proxy.
=> Started MongoDB.
=> Started your app.
=> App running at: http://localhost:3000/
Any ideas? Am I not installing phantomjs correctly?
UPDATE:
After discovering the DEBUG=1 flags... I ran
DEBUG=1 VELOCITY_DEBUG=1 meteor test-packages --velocity
on both dev and in wercker.yml.
ON DEV:
I20150915-21:12:35.362(2)? [velocity] adding velocity core
I20150915-21:12:36.534(2)? [velocity] Register framework mocha with regex mocha/.+\.(js|coffee|litcoffee|coffee\.md)$
I20150915-21:12:36.782(2)? [velocity] Server startup
I20150915-21:12:36.785(2)? [velocity] app dir /private/var/folders/c3/hlsb9j0s0d3ck8trdcqscpzc0000gn/T/meteor-test-runyaqy6y
I20150915-21:12:36.785(2)? [velocity] config = {
I20150915-21:12:36.785(2)? "mocha": {
I20150915-21:12:36.785(2)? "regex": "mocha/.+\\.(js|coffee|litcoffee|coffee\\.md)$",
I20150915-21:12:36.785(2)? "name": "mocha",
I20150915-21:12:36.785(2)? "_regexp": {}
I20150915-21:12:36.785(2)? }
I20150915-21:12:36.785(2)? }
I20150915-21:12:36.787(2)? [velocity] resetting the world
I20150915-21:12:36.787(2)? [velocity] frameworks with disable auto reset: []
I20150915-21:12:36.797(2)? [velocity] Add paths to watcher [ '/private/var/folders/c3/hlsb9j0s0d3ck8trdcqscpzc0000gn/T/meteor-test-runyaqy6y/tests' ]
I20150915-21:12:36.811(2)? [velocity] File scan complete, now watching /tests
I20150915-21:12:36.811(2)? [velocity] Triggering queued startup functions
=> Started your app.
=> App running at: http://localhost:3000/
PASSED mocha : sanjo:jasmine on server => works
TESTS RAN SUCCESSFULLY
and ON WERCKER:
[[[[[ Tests ]]]]]
=> Started proxy.
=> Started MongoDB.
I20150915-19:03:24.207(0)? [velocity] adding velocity core
I20150915-19:03:24.299(0)? [velocity] Register framework mocha with regex mocha/.+\.(js|coffee|litcoffee|coffee\.md)$
I20150915-19:03:24.342(0)? [velocity] Server startup
I20150915-19:03:24.343(0)? [velocity] app dir /tmp/meteor-test-run1f61jb9
I20150915-19:03:24.343(0)? [velocity] config = {
I20150915-19:03:24.343(0)? "mocha": {
I20150915-19:03:24.344(0)? "regex": "mocha/.+\\.(js|coffee|litcoffee|coffee\\.md)$",
I20150915-19:03:24.344(0)? "name": "mocha",
I20150915-19:03:24.344(0)? "_regexp": {}
I20150915-19:03:24.344(0)? }
I20150915-19:03:24.344(0)? }
I20150915-19:03:24.346(0)? [velocity] resetting the world
I20150915-19:03:24.347(0)? [velocity] frameworks with disable auto reset: []
I20150915-19:03:24.354(0)? [velocity] Add paths to watcher [ '/tmp/meteor-test-run1f61jb9/tests' ]
=> Started your app.
=> App running at: http://localhost:3000/
I20150915-19:03:24.378(0)? [velocity] File scan complete, now watching /tests
I20150915-19:03:24.378(0)? [velocity] Triggering queued startup functions
Try adding the --once flag to your testing command.
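In the wercker.yml from the question, that would look roughly like this; only the --once flag is new, and it tells Meteor to run the tests a single time and exit instead of watching for changes:
- script:
    name: meteor test
    code: |
      meteor test-packages --velocity --once --settings config/settings.json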
I haven't quite figured out the implementation with Mocha, but I have found the implementation using 'TinyTest'. Since I thought this would be useful for other users, I've put together a minimal example of Meteor with a few CI providers (CircleCI, Travis, and Wercker).
Of course, you'll need NodeJS installed. This varies by CI provider, but in the case of Travis CI, you'll want a configuration like this:
sudo: required
language: node_js
node_js:
  - "0.10"
  - "0.12"
  - "4.0"
Then, assuming you're building a Meteor package, you'll effectively do the following steps in any CI environment:
# Install Meteor
meteor || curl https://install.meteor.com | /bin/sh
# Install spacejam
npm install -g spacejam
# Execute your tests
spacejam test-packages ./
Source Code is available: https://github.com/b-long/meteor-ci-example
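If you want to stay on Wercker rather than switch providers, those same steps could replace the hanging test step from the question. A sketch, assuming the earlier install step has already put meteor and npm on the PATH:
- script:
    name: meteor test via spacejam
    code: |
      npm install -g spacejam
      spacejam test-packages ./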
