Testing meteor js/mocha with wercker pipeline hangs - meteor

I've got an app I can test locally without issue using
meteor test-packages --velocity
// result
[[[[[ Tests ]]]]]
=> Started proxy.
=> Started MongoDB.
=> Started your app.
=> App running at: http://localhost:3000/
PASSED mocha : sanjo:jasmine on server => works
TESTS RAN SUCCESSFULLY
Each package.js in the app I'm testing has the following:
Package.onTest(function(api) {
  api.use(['mike:mocha-package#0.5.8', 'velocity:core#0.9.3']);
  api.addFiles('tests/server/example.js', 'server');
});
Now I'm trying to do the same via the Wercker pipeline using the following wercker.yml:
build:
  box: ubuntu
  steps:
    # have to install meteor to run the tests
    - script:
        name: meteor install
        code: |
          sudo apt-get update -y
          sudo apt-get -y install curl wget
          cd /tmp
          wget https://phantomjs.googlecode.com/files/phantomjs-1.9.1-linux-x86_64.tar.bz2
          tar xfj phantomjs-1.9.1-linux-x86_64.tar.bz2
          sudo cp /tmp/phantomjs-1.9.1-linux-x86_64/bin/phantomjs /usr/local/bin
          curl https://install.meteor.com | /bin/sh
    # run tests using meteor test cli
    - script:
        name: meteor test
        code: |
          meteor test-packages --velocity --settings config/settings.json
The meteor install step works fine, but then the pipeline just hangs here:
[[[[[ Tests ]]]]]
=> Started proxy.
=> Started MongoDB.
=> Started your app.
=> App running at: http://localhost:3000/
Any ideas? Am I not installing phantomjs correctly?
UPDATE:
After discovering the DEBUG=1 flags, I run
DEBUG=1 VELOCITY_DEBUG=1 meteor test-packages --velocity
both on dev and in wercker.yml.
ON DEV:
I20150915-21:12:35.362(2)? [velocity] adding velocity core
I20150915-21:12:36.534(2)? [velocity] Register framework mocha with regex mocha/.+\.(js|coffee|litcoffee|coffee\.md)$
I20150915-21:12:36.782(2)? [velocity] Server startup
I20150915-21:12:36.785(2)? [velocity] app dir /private/var/folders/c3/hlsb9j0s0d3ck8trdcqscpzc0000gn/T/meteor-test-runyaqy6y
I20150915-21:12:36.785(2)? [velocity] config = {
I20150915-21:12:36.785(2)? "mocha": {
I20150915-21:12:36.785(2)? "regex": "mocha/.+\\.(js|coffee|litcoffee|coffee\\.md)$",
I20150915-21:12:36.785(2)? "name": "mocha",
I20150915-21:12:36.785(2)? "_regexp": {}
I20150915-21:12:36.785(2)? }
I20150915-21:12:36.785(2)? }
I20150915-21:12:36.787(2)? [velocity] resetting the world
I20150915-21:12:36.787(2)? [velocity] frameworks with disable auto reset: []
I20150915-21:12:36.797(2)? [velocity] Add paths to watcher [ '/private/var/folders/c3/hlsb9j0s0d3ck8trdcqscpzc0000gn/T/meteor-test-runyaqy6y/tests' ]
I20150915-21:12:36.811(2)? [velocity] File scan complete, now watching /tests
I20150915-21:12:36.811(2)? [velocity] Triggering queued startup functions
=> Started your app.
=> App running at: http://localhost:3000/
PASSED mocha : sanjo:jasmine on server => works
TESTS RAN SUCCESSFULLY
and ON WERCKER:
[[[[[ Tests ]]]]]
=> Started proxy.
=> Started MongoDB.
I20150915-19:03:24.207(0)? [velocity] adding velocity core
I20150915-19:03:24.299(0)? [velocity] Register framework mocha with regex mocha/.+\.(js|coffee|litcoffee|coffee\.md)$
I20150915-19:03:24.342(0)? [velocity] Server startup
I20150915-19:03:24.343(0)? [velocity] app dir /tmp/meteor-test-run1f61jb9
I20150915-19:03:24.343(0)? [velocity] config = {
I20150915-19:03:24.343(0)? "mocha": {
I20150915-19:03:24.344(0)? "regex": "mocha/.+\\.(js|coffee|litcoffee|coffee\\.md)$",
I20150915-19:03:24.344(0)? "name": "mocha",
I20150915-19:03:24.344(0)? "_regexp": {}
I20150915-19:03:24.344(0)? }
I20150915-19:03:24.344(0)? }
I20150915-19:03:24.346(0)? [velocity] resetting the world
I20150915-19:03:24.347(0)? [velocity] frameworks with disable auto reset: []
I20150915-19:03:24.354(0)? [velocity] Add paths to watcher [ '/tmp/meteor-test-run1f61jb9/tests' ]
=> Started your app.
=> App running at: http://localhost:3000/
I20150915-19:03:24.378(0)? [velocity] File scan complete, now watching /tests
I20150915-19:03:24.378(0)? [velocity] Triggering queued startup functions

Try adding the --once flag to your testing command.
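For example, the test step from the question would become (a sketch; this assumes the Velocity version in use supports --once, which runs the tests once and exits instead of watching for changes):
meteor test-packages --velocity --once --settings config/settings.json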

I haven't quite figured out the setup with Mocha, but I have found a working one using 'TinyTest'. Since I thought this would be useful for other users, I've put together a minimal example of Meteor with a few CI providers (CircleCI, Travis, and Wercker).
Of course, you'll need Node.js installed. This varies by CI provider, but in the case of Travis CI, you'll want a configuration like this:
sudo: required
language: node_js
node_js:
  - "0.10"
  - "0.12"
  - "4.0"
Then, assuming you're building a Meteor package, you'll effectively do the following steps in any CI environment:
# Install Meteor
meteor || curl https://install.meteor.com | /bin/sh
# Install spacejam
npm install -g spacejam
# Execute your tests
spacejam test-packages ./
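Applied to the original wercker.yml, the test step could look something like this (a sketch, not verified on Wercker; it assumes Node.js and npm are already available in the box so that the global spacejam install works):
- script:
    name: meteor test
    code: |
      meteor || curl https://install.meteor.com | /bin/sh
      npm install -g spacejam
      spacejam test-packages ./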
The source code is available at https://github.com/b-long/meteor-ci-example.

Related

How do I run playwright targeting localhost in github-actions?

I am trying to run playwright E2E tests for github-actions but have been unsuccessful so far.
- name: Run build and start
  run: |
    yarn build:e2e
    yarn start:e2e &
- name: Run e2e
  run: |
    yarn e2e
I don't think the server is running when playwright runs because all the e2e tests end up failing.
Run build and start
Done in 192.91s.
yarn run v1.22.19
$ env-cmd -f environments/.env.e2e next start
ready - started server on 0.0.0.0:3000, url: ***
Run e2e
Test timeout of 270000ms exceeded while running "beforeEach" hook.
I am pretty certain that playwright cannot connect to http://localhost:3000 from the previous step and that's why all the tests timeout.
I had a similar problem, and I fixed it by adding this to playwright.config.ts:
import type { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  // the rest of the options
  webServer: {
    command: 'yarn start',
    url: 'http://localhost:3000/',
    timeout: 120000,
  },
  use: {
    baseURL: 'http://localhost:3000/',
  },
  // the rest of the options
};

export default config;
Also, I didn't need to run yarn start in the GitHub Actions workflow; yarn build alone is enough, since the webServer option starts the server for the tests.
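With that in place, the workflow steps from the question reduce to something like this (a sketch; it assumes the same yarn scripts as in the question):
- name: Run build
  run: |
    yarn build:e2e
- name: Run e2e
  run: |
    yarn e2e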
Source:
https://github.com/microsoft/playwright/issues/14814

CodeDeploy Bitbucket - How to Fail Bitbucket on CodeDeploy Failure

I have a successful Bitbucket pipeline calling out to AWS CodeDeploy, but I'm wondering if I can add a step that checks and waits for CodeDeploy success, and otherwise fails the pipeline. Would this be possible with a script that loops over a CodeDeploy call to monitor the status of the deployment? Any idea which CodeDeploy call that would be?
bitbucket-pipelines.yml
image: pitech/gradle-awscli

pipelines:
  branches:
    develop:
      - step:
          caches:
            - gradle
          script:
            - gradle build bootRepackage
            - mkdir tmp; cp appspec.yml tmp; cp build/libs/thejar*.jar tmp/the.jar; cp -r scripts/ ./tmp/
            - pip install awscli --upgrade --user
            - aws deploy push --s3-location s3://thebucket/the-deploy.zip --application-name my-staging-app --ignore-hidden-files --source tmp
            - aws deploy create-deployment --application-name server-staging --s3-location bucket=staging-codedeploy,key=the-deploy.zip,bundleType=zip --deployment-group-name the-staging --deployment-config-name CodeDeployDefault.AllAtOnce --file-exists-behavior=OVERWRITE
appspec.yml
version: 0.0
os: linux
files:
  - source: thejar.jar
    destination: /home/ec2-user/the-server/
permissions:
  - object: /
    pattern: "**"
    owner: ec2-user
    group: ec2-user
hooks:
  ApplicationStop:
    - location: scripts/server_stop.sh
      timeout: 60
      runas: ec2-user
  ApplicationStart:
    - location: scripts/server_start.sh
      timeout: 60
      runas: ec2-user
  ValidateService:
    - location: scripts/server_validate.sh
      timeout: 120
      runas: ec2-user
Unfortunately, it doesn't seem like Bitbucket waits for the ValidateService hook to complete, so I'd need a way in Bitbucket to confirm the deployment succeeded before marking the build a success.
The AWS CLI already has a deployment-successful waiter, which checks the status of a deployment every 15 seconds. You just need to feed the output of create-deployment into it.
In your specific case, it should look like this:
image: pitech/gradle-awscli

pipelines:
  branches:
    develop:
      - step:
          caches:
            - gradle
          script:
            - gradle build bootRepackage
            - mkdir tmp; cp appspec.yml tmp; cp build/libs/thejar*.jar tmp/the.jar; cp -r scripts/ ./tmp/
            - pip install awscli --upgrade --user
            - aws deploy push --s3-location s3://thebucket/the-deploy.zip --application-name my-staging-app --ignore-hidden-files --source tmp
            - aws deploy create-deployment --application-name server-staging --s3-location bucket=staging-codedeploy,key=the-deploy.zip,bundleType=zip --deployment-group-name the-staging --deployment-config-name CodeDeployDefault.AllAtOnce --file-exists-behavior=OVERWRITE > deployment.json
            - aws deploy wait deployment-successful --cli-input-json file://deployment.json
aws deploy create-deployment is an asynchronous call, and Bitbucket has no way of knowing whether your deployment eventually succeeds. Adding a script to your CodeDeploy application will have no effect on what Bitbucket knows about your deployment.
You have one (maybe two) options to fix this issue.
#1 Include a script that waits for your deployment to finish
You need to add a script to your Bitbucket pipeline that polls the status of your deployment until it finishes. You can either use SNS notifications, or poll the CodeDeploy service directly.
The pseudocode would look something like this:
loop
  check_if_deployment_complete
  if false, wait and retry
  if true && deployment successful, return 0 (success)
  if true && deployment failed, return non-zero (failure)
You can use the AWS CLI or your favorite scripting language. Add it at the end of your bitbucket-pipelines.yml script, and make sure you wait between calls to CodeDeploy when checking the status, as in the sketch below.
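Concretely, such a loop might look like this with the AWS CLI (a sketch; aws deploy get-deployment is the polling call, the deployment parameters are copied from the question, and the 15-second interval is an arbitrary choice):
#!/bin/bash
# Create the deployment and capture its ID.
DEPLOYMENT_ID=$(aws deploy create-deployment \
  --application-name server-staging \
  --s3-location bucket=staging-codedeploy,key=the-deploy.zip,bundleType=zip \
  --deployment-group-name the-staging \
  --deployment-config-name CodeDeployDefault.AllAtOnce \
  --file-exists-behavior OVERWRITE \
  --query deploymentId --output text)
# Poll until the deployment reaches a terminal state.
while true; do
  STATUS=$(aws deploy get-deployment --deployment-id "$DEPLOYMENT_ID" \
    --query 'deploymentInfo.status' --output text)
  case "$STATUS" in
    Succeeded)      exit 0 ;;   # deployment succeeded: the pipeline step passes
    Failed|Stopped) exit 1 ;;   # deployment failed: the pipeline step fails
    *)              sleep 15 ;; # Created/Queued/InProgress: keep waiting
  esac
done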
#2 (the maybe) Use BitBucket AWS CodeDeploy integration directly
BitBucket integrates with AWS CodeDeploy directly, so you might be able to use that integration, rather than your own script, to do this properly. I don't know if this is supported or not.

Meteor-Up failures on 2 different servers

I am experiencing failure every time I try to deploy to either an AWS/EC2 instance or a Digital Ocean droplet. With the EC2 instance I am using a pem file and with the droplet I am using password access. There were a lot of hoops to jump through to get through the mup setup, but that is finally successful on both instances. It's the mup deploy that fails on the Invoking deployment process: step.
Here's my mup.json for the droplet:
{
  // Server authentication info
  "servers": [
    {
      "host": "xx.xx.xx.xx",
      "username": "root",
      "password": "notmypassword"
      // or pem file (ssh based authentication)
      //"pem": "~/.ssh/id_rsa"
      //"pem": "/users/alex/dropbox/awspems/projectmanager.pem"
    }
  ],
  // Install MongoDB in the server, does not destroy local MongoDB on future setup
  "setupMongo": true,
  // WARNING: Node.js is required! Only skip if you already have Node.js installed on server.
  "setupNode": true,
  // WARNING: If nodeVersion omitted will setup 0.10.33 by default. Do not use v, only version number.
  "nodeVersion": "0.10.35",
  // Install PhantomJS in the server
  "setupPhantom": true,
  // Application name (No spaces)
  "appName": "projectmanager",
  // Location of app (local directory)
  "app": "/users/alex/dropbox/projectmanager",
  // Configure environment
  "env": {
    "ROOT_URL": "http://xx.xx.xx.xx"
  },
  // Meteor Up checks if the app comes online just after the deployment
  // before mup checks that, it will wait for no. of seconds configured below
  "deployCheckWaitTime": 30
}
The following messages appear whether I deploy to the EC2 instance or to the droplet:
Claire-MacAir-7:projectmanager alex$ mup deploy
Meteor Up: Production Quality Meteor Deployments
------------------------------------------------
“ Checkout Kadira!
It's the best way to monitor performance of your app.
Visit: https://kadira.io/mup ”
Building Started: /users/notmyusername/dropbox/projectmanager
Started TaskList: Deploy app 'projectmanager' (linux)
[xx.xx.xx.xx] - Uploading bundle
[xx.xx.xx.xx] ✔ Uploading bundle: SUCCESS
[xx.xx.xx.xx] - Setting up Environment Variables
[xx.xx.xx.xx] ✔ Setting up Environment Variables: SUCCESS
[xx.xx.xx.xx] - Invoking deployment process
[xx.xx.xx.xx] ✘ Invoking deployment process: FAILED
-----------------------------------STDERR-----------------------------------
ir=/root/.node-gyp/0.10.35',
gyp info spawn args '-Dmodule_root_dir=/opt/projectmanager/tmp/bundle/programs/server/npm/npm-bcrypt/node_modules/bcrypt',
gyp info spawn args '--depth=.',
gyp info spawn args '--no-parallel',
gyp info spawn args '--generator-output',
gyp info spawn args 'build',
gyp info spawn args '-Goutput_dir=.' ]
gyp info spawn make
gyp info spawn args [ 'BUILDTYPE=Release', '-C', 'build' ]
gyp info ok
npm WARN package.json meteor-dev-bundle#0.0.0 No description
npm WARN package.json meteor-dev-bundle#0.0.0 No repository field.
npm WARN package.json meteor-dev-bundle#0.0.0 No README data
stop: Unknown instance:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 80: Connection refused
App did not pick up! Please check app logs.
-----------------------------------STDOUT-----------------------------------
LE(target) Release/obj.target/bcrypt_lib.node: Finished
COPY Release/bcrypt_lib.node
make: Leaving directory `/opt/projectmanager/tmp/bundle/programs/server/npm/npm-bcrypt/node_modules/bcrypt/build'
> fibers#1.0.1 install /opt/projectmanager/tmp/bundle/programs/server/node_modules/fibers
> node ./build.js
`linux-x64-v8-3.14` exists; testing
Binary is fine; exiting
underscore#1.5.2 node_modules/underscore
semver#4.1.0 node_modules/semver
chalk#0.5.1 node_modules/chalk
├── escape-string-regexp#1.0.2
├── ansi-styles#1.1.0
├── supports-color#0.2.0
├── strip-ansi#0.3.0 (ansi-regex#0.2.1)
└── has-ansi#0.1.0 (ansi-regex#0.2.1)
eachline#2.3.3 node_modules/eachline
└── type-of#2.0.1
source-map-support#0.2.8 node_modules/source-map-support
└── source-map#0.1.32 (amdefine#0.1.0)
fibers#1.0.1 node_modules/fibers
Waiting for MongoDB to initialize. (5 minutes)
connected
projectmanager start/running, process 11786
Waiting for 30 seconds while app is booting up
Checking is app booted or not?
----------------------------------------------------------------------------
Completed TaskList: Deploy app 'projectmanager' (linux)
The project works perfectly with no errors on my localhost.
Any ideas?
Thanks in advance, Alex Adams
And the solution is:
The mup logs showed that it didn't like a path designated for a folder to hold uploaded files. I changed the app to use GridFS, thus storing the files in the database. The app deploys correctly now.
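For reference, a minimal sketch of that change using the CollectionFS packages of the era (it assumes cfs:standard-packages and cfs:gridfs are added to the app; "uploads" is a hypothetical store name):
// Shared code (client and server). Files are stored inside MongoDB via
// GridFS, so the app no longer needs a writable upload folder on the server.
Uploads = new FS.Collection("uploads", {
  stores: [new FS.Store.GridFS("uploads")]
});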

Exit Meteor Tinytest after after all tests have been completed

When running meteor test-packages ./ from automated tests (e.g. grunt files), it would help if meteor exited after the tests were run. Is there a way to do that? The command line help doesn't suggest anything of that sort and this issue suggests it's not possible.
Tinytest is designed to run continuously and reactively test a set of packages.
For continuous integration scenarios, there's a tool called spacejam, which calls meteor test-packages, waits for the tests to complete, then sends a SIGTERM signal to meteor.
$ npm install -g spacejam
$ spacejam test-packages ./
spacejam: spawning meteor
[[[[[ Tests ]]]]]
=> Started proxy.
=> Started MongoDB.
spacejam: meteor mongodb is ready
I20141129-21:12:34.361(-8)? test-in-console listening
=> Started your app.
=> App running at: http://localhost:4096/
spacejam: meteor is ready
spacejam: spawning phantomjs
phantomjs: Running tests at http://localhost:4096/ using test-in-console
S: tinytest - Moment.is : OK
C: tinytest - Moment.is : OK
passed/expected/failed/total 2 / 0 / 0 / 2
##_meteor_magic##state: done
spacejam: phantomjs exited with code: 0
spacejam: killing meteor
spacejam: meteor killed with signal: SIGTERM

Authentication failed with capifony

I'm trying to build a web app, based on capifony and Symfony2, that deploys Symfony 2 projects.
It uses Process to trigger my "cap deploy" task and display the output in a web browser.
From a shell, if I run "cap deploy" as user www-data (the same user Process uses), my deployment works fine, so there's nothing wrong with either my deploy task or my authentication keys.
However, when I call my task from my web app, capifony tells me it can't authenticate on the remote server.
triggering start callbacks for `deploy'
* executing `deploy:setdomain'
* executing `deploy'
* executing `deploy:update'
** transaction: start
* executing `deploy:update_code'
triggering before callbacks for `deploy:update_code'
--> Updating code base with checkout strategy
executing locally: "git ls-remote [ myrepo ]"
command finished in 2068ms
* executing "git clone -q -o [ remote server ] [ my repo ]
/var/www/spinfony/releases/20121211100449 && cd /var/www/spinfony/releases/20121211100449 && git checkout -q -b deploy be53233e51a4c542c3bc8603b424e57f988898a4 && (echo be53233e51a4c542c3bc8603b424e57f988898a4 > /var/www/spinfony/releases/20121211100449/REVISION)"
servers: ["[ remote server ]"]
Password: stty: standard input: Invalid argument
stty: standard input: Invalid argument
stty: standard input: Invalid argument
*** [deploy:update_code] rolling back
* executing "rm -rf /var/www/spinfony/releases/20121211100449; true"
servers: ["[ remote server ]"]
** [deploy:update_code] exception while rolling back: Capistrano::ConnectionError, connection failed for: [ remote server ] (Net::SSH::AuthenticationFailed: [ user ])
connection failed for: [ remote server ] (Net::SSH::AuthenticationFailed: [ user ])
I'm trying to figure out why capifony seems to expect a password I can't provide, since I'm not running it from a shell, whereas when I do run it from a shell, it works fine without asking me anything.
Once again, the same file is called from the same user.
This is a known "bug".
You need to tell Capistrano which key to use. Try adding this to your deploy.rb:
ssh_options[:keys] = %w(/what/ever/.ssh/id_rsa)
Source: http://adam.goucher.ca/?p=1253
When you call your task from a web app, you need to tell it what user to use.
set :user, "www-data"
set :domain, "webserverdomainname.com"
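Putting both answers together, the relevant part of deploy.rb might look like this (a sketch; the key path and domain are placeholders):
# Run as the same user the web app runs under.
set :user, "www-data"
set :domain, "webserverdomainname.com"
# Point Net::SSH at the key explicitly: a process spawned from the web app
# has no TTY and no ssh-agent, so Capistrano otherwise falls back to
# prompting for a password (hence the stty errors in the log above).
ssh_options[:keys] = %w(/var/www/.ssh/id_rsa)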
