When running meteor test-packages ./ from automated builds (e.g. Grunt files), it would help if Meteor exited after the tests finished. Is there a way to do that? The command-line help doesn't suggest anything of the sort, and this issue suggests it's not possible.
Tinytest is designed to run continuously, reactively re-running tests for a set of packages as they change.
For continuous-integration scenarios, there's a tool called spacejam, which calls meteor test-packages, waits for the tests to complete, and then sends a SIGTERM signal to Meteor.
$ npm install -g spacejam
$ spacejam test-packages ./
spacejam: spawning meteor
[[[[[ Tests ]]]]]
=> Started proxy.
=> Started MongoDB.
spacejam: meteor mongodb is ready
I20141129-21:12:34.361(-8)? test-in-console listening
=> Started your app.
=> App running at: http://localhost:4096/
spacejam: meteor is ready
spacejam: spawning phantomjs
phantomjs: Running tests at http://localhost:4096/ using test-in-console
S: tinytest - Moment.is : OK
C: tinytest - Moment.is : OK
passed/expected/failed/total 2 / 0 / 0 / 2
##_meteor_magic##state: done
spacejam: phantomjs exited with code: 0
spacejam: killing meteor
spacejam: meteor killed with signal: SIGTERM
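spacejam is meant for CI, so it also reports the result through its exit code (non-zero on test failure or timeout, as I understand it), which means an automated build can gate on it directly. A minimal sketch for a CI or Grunt shell step:

# fail the build when any package test fails or times out
spacejam test-packages ./ || exit 1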
I have a successful Bitbucket pipeline calling out to AWS CodeDeploy, but I'm wondering if I can add a step that waits for CodeDeploy to succeed and otherwise fails the pipeline. Would this be possible with a script that loops over a CodeDeploy call to monitor the status of the push? Any idea which CodeDeploy call that would be?
bitbucket-pipelines.yml
image: pitech/gradle-awscli

pipelines:
  branches:
    develop:
      - step:
          caches:
            - gradle
          script:
            - gradle build bootRepackage
            - mkdir tmp; cp appspec.yml tmp; cp build/libs/thejar*.jar tmp/the.jar; cp -r scripts/ ./tmp/
            - pip install awscli --upgrade --user
            - aws deploy push --s3-location s3://thebucket/the-deploy.zip --application-name my-staging-app --ignore-hidden-files --source tmp
            - aws deploy create-deployment --application-name server-staging --s3-location bucket=staging-codedeploy,key=the-deploy.zip,bundleType=zip --deployment-group-name the-staging --deployment-config-name CodeDeployDefault.AllAtOnce --file-exists-behavior=OVERWRITE
appspec.yml
version: 0.0
os: linux
files:
  - source: thejar.jar
    destination: /home/ec2-user/the-server/
permissions:
  - object: /
    pattern: "**"
    owner: ec2-user
    group: ec2-user
hooks:
  ApplicationStop:
    - location: scripts/server_stop.sh
      timeout: 60
      runas: ec2-user
  ApplicationStart:
    - location: scripts/server_start.sh
      timeout: 60
      runas: ec2-user
  ValidateService:
    - location: scripts/server_validate.sh
      timeout: 120
      runas: ec2-user
Unfortunately it doesn't seem like Bitbucket waits for the ValidateService hook to complete, so I need a way for Bitbucket to confirm the deployment before marking the build a success.
The AWS CLI already has a deployment-successful wait command, which checks the status of a deployment every 15 seconds. You just need to feed the output of create-deployment to it: create-deployment prints a JSON document containing the deploymentId, which is exactly the input that aws deploy wait deployment-successful accepts via --cli-input-json.
In your specific case, it should look like this:
image: pitech/gradle-awscli

pipelines:
  branches:
    develop:
      - step:
          caches:
            - gradle
          script:
            - gradle build bootRepackage
            - mkdir tmp; cp appspec.yml tmp; cp build/libs/thejar*.jar tmp/the.jar; cp -r scripts/ ./tmp/
            - pip install awscli --upgrade --user
            - aws deploy push --s3-location s3://thebucket/the-deploy.zip --application-name my-staging-app --ignore-hidden-files --source tmp
            - aws deploy create-deployment --application-name server-staging --s3-location bucket=staging-codedeploy,key=the-deploy.zip,bundleType=zip --deployment-group-name the-staging --deployment-config-name CodeDeployDefault.AllAtOnce --file-exists-behavior=OVERWRITE > deployment.json
            - aws deploy wait deployment-successful --cli-input-json file://deployment.json
aws deploy create-deployment is an asynchronous call: it returns as soon as the deployment is created, and BitBucket has no idea that it should wait for the deployment to finish. Adding a hook script to your CodeDeploy application therefore has no effect on what BitBucket knows about your deployment.
You have one (maybe two) options to fix this issue.
#1 Include a script that waits for your deployment to finish
You need to add a script to your BitBucket pipeline that waits for your deployment to finish. You can either use SNS notifications or poll the CodeDeploy service directly.
The pseudocode would look something like this:
loop
  check_if_deployment_complete
  if false, wait and retry
  if true && deployment successful, return 0 (success)
  if true && deployment failed, return non-zero (failure)
You can use the AWS CLI or your favorite scripting language. Add it at the end of your bitbucket-pipelines.yml script, and make sure you wait between calls to CodeDeploy when checking the status.
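As a rough sketch in shell, using the AWS CLI's get-deployment call (the script name and the convention of passing the deployment ID as the first argument are my own; the status values come from the CodeDeploy API):

#!/bin/sh
# wait_for_deployment.sh <deployment-id> -- hypothetical helper script
DEPLOYMENT_ID="$1"
while true; do
  STATUS=$(aws deploy get-deployment --deployment-id "$DEPLOYMENT_ID" \
    --query 'deploymentInfo.status' --output text)
  case "$STATUS" in
    Succeeded) exit 0 ;;          # deployment finished OK
    Failed|Stopped) exit 1 ;;     # deployment failed: fail the pipeline
    *) sleep 15 ;;                # Created/Queued/InProgress: wait and retry
  esac
done

The deployment ID itself can be captured from the output of create-deployment, e.g. with --query 'deploymentId' --output text.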
#2 (the maybe) Use the BitBucket AWS CodeDeploy integration directly
BitBucket integrates with AWS CodeDeploy directly, so you might be able to use that integration instead of your own script. I don't know whether it supports waiting for deployment status or not.
When I run my meteor app located in /path/to/app, it correctly builds and starts, but after about 45 seconds to 1 minute it will always crash with an error like
=> Started proxy.
=> Started MongoDB.
=> Started your app.
=> App running at: http://localhost:3000/
/Users/alex/.meteor/packages/meteor-tool/.1.1.10.1j76dru++os.osx.x86_64+web.browser+web.cordova/mt-os.osx.x86_64/dev_bundle/lib/node_modules/fibers/future.js:278
throw(ex);
^
Error: UNKNOWN, readdir '/path/to/node_modules/sjcl/jsdoc_toolkit-2.3.3-beta/app/test'
at Object.Future.wait (/Users/alex/.meteor/packages/meteor-tool/.1.1.10.1j76dru++os.osx.x86_64+web.browser+web.cordova/mt-os.osx.x86_64/dev_bundle/lib/node_modules/fibers/future.js:398:15)
at /tools/fs/files.js:1331:28
at Object.wrapper (/tools/fs/files.js:1334:20)
at readDirectory (/tools/fs/watch.js:265:26)
at Watcher._fireIfDirectoryChanged (/tools/fs/watch.js:409:23)
at /tools/fs/watch.js:670:12
at Array.forEach (native)
at Function._.each._.forEach (/Users/alex/.meteor/packages/meteor-tool/.1.1.10.1j76dru++os.osx.x86_64+web.browser+web.cordova/mt-os.osx.x86_64/dev_bundle/lib/node_modules/underscore/underscore.js:79:11)
at Watcher._checkDirectories (/tools/fs/watch.js:659:7)
at new Watcher (/tools/fs/watch.js:356:10)
at [object Object]._.extend._runOnce (/tools/runners/run-app.js:746:23)
at [object Object]._.extend._fiber (/tools/runners/run-app.js:858:28)
at /tools/runners/run-app.js:396:12
- - - - -
If I try to reproduce this error a couple of times, I will always see the same error at future.js:278 and the Object.Future.wait at 398:15, but the directory that readdir fails to read is a different node_modules package each time. I have all the permissions correct for this project.
It might be useful to know that prior to this problem I was experiencing an issue with too many open files (an EMFILE error), and I added this line to my bashrc file to increase the number of files a process could have open:
sudo launchctl limit maxfiles 16384 16384 && ulimit -n 16384
which got rid of the EMFILE error, but now I'm stuck with this UNKNOWN error.
I've also tried the solution here:
https://github.com/meteor/meteor/issues/4660
with sudo purge, but it didn't work. Any solutions to this problem?
Finding and then killing any other running instances of Meteor seemed to fix this for me:
ps -x | grep meteor
# find [pid] of meteor instance
kill [pid]
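Alternatively, pkill can do the find-and-kill in one step (assuming nothing else running on the machine matches the pattern):

# send SIGTERM to every process whose command line matches "meteor"
pkill -f meteor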
Run sudo purge. If that doesn't work, restart your Mac, then run meteor again. That worked for me.
I've found that raising the maxfiles limit much higher solves the problem.
For example:
sudo launchctl limit maxfiles 100000 100000
I've got an app I can test locally without issue using
meteor test-packages --velocity
// result
[[[[[ Tests ]]]]]
=> Started proxy.
=> Started MongoDB.
=> Started your app.
=> App running at: http://localhost:3000/
PASSED mocha : sanjo:jasmine on server => works
TESTS RAN SUCCESSFULLY
Each package.js in the app I'm testing has the following:
Package.onTest(function(api) {
  api.use(['mike:mocha-package#0.5.8', 'velocity:core#0.9.3']);
  api.addFiles('tests/server/example.js', 'server');
});
Now I'm trying to do the same via the Wercker pipeline, using the following wercker.yml:
build:
  box: ubuntu
  steps:
    # have to install meteor to run the tests
    - script:
        name: meteor install
        code: |
          sudo apt-get update -y
          sudo apt-get -y install curl wget
          cd /tmp
          wget https://phantomjs.googlecode.com/files/phantomjs-1.9.1-linux-x86_64.tar.bz2
          tar xfj phantomjs-1.9.1-linux-x86_64.tar.bz2
          sudo cp /tmp/phantomjs-1.9.1-linux-x86_64/bin/phantomjs /usr/local/bin
          curl https://install.meteor.com | /bin/sh
    # run tests using meteor test cli
    - script:
        name: meteor test
        code: |
          meteor test-packages --velocity --settings config/settings.json
The meteor install step works fine, but then the pipeline just hangs here:
[[[[[ Tests ]]]]]
=> Started proxy.
=> Started MongoDB.
=> Started your app.
=> App running at: http://localhost:3000/
Any ideas? Am I not installing PhantomJS correctly?
UPDATE:
After discovering the DEBUG flags, I ran
DEBUG=1 VELOCITY_DEBUG=1 meteor test-packages --velocity
on both dev and on Wercker.
ON DEV:
I20150915-21:12:35.362(2)? [velocity] adding velocity core
I20150915-21:12:36.534(2)? [velocity] Register framework mocha with regex mocha/.+\.(js|coffee|litcoffee|coffee\.md)$
I20150915-21:12:36.782(2)? [velocity] Server startup
I20150915-21:12:36.785(2)? [velocity] app dir /private/var/folders/c3/hlsb9j0s0d3ck8trdcqscpzc0000gn/T/meteor-test-runyaqy6y
I20150915-21:12:36.785(2)? [velocity] config = {
I20150915-21:12:36.785(2)? "mocha": {
I20150915-21:12:36.785(2)? "regex": "mocha/.+\\.(js|coffee|litcoffee|coffee\\.md)$",
I20150915-21:12:36.785(2)? "name": "mocha",
I20150915-21:12:36.785(2)? "_regexp": {}
I20150915-21:12:36.785(2)? }
I20150915-21:12:36.785(2)? }
I20150915-21:12:36.787(2)? [velocity] resetting the world
I20150915-21:12:36.787(2)? [velocity] frameworks with disable auto reset: []
I20150915-21:12:36.797(2)? [velocity] Add paths to watcher [ '/private/var/folders/c3/hlsb9j0s0d3ck8trdcqscpzc0000gn/T/meteor-test-runyaqy6y/tests' ]
I20150915-21:12:36.811(2)? [velocity] File scan complete, now watching /tests
I20150915-21:12:36.811(2)? [velocity] Triggering queued startup functions
=> Started your app.
=> App running at: http://localhost:3000/
PASSED mocha : sanjo:jasmine on server => works
TESTS RAN SUCCESSFULLY
and ON WERCKER:
[[[[[ Tests ]]]]]
=> Started proxy.
=> Started MongoDB.
I20150915-19:03:24.207(0)? [velocity] adding velocity core
I20150915-19:03:24.299(0)? [velocity] Register framework mocha with regex mocha/.+\.(js|coffee|litcoffee|coffee\.md)$
I20150915-19:03:24.342(0)? [velocity] Server startup
I20150915-19:03:24.343(0)? [velocity] app dir /tmp/meteor-test-run1f61jb9
I20150915-19:03:24.343(0)? [velocity] config = {
I20150915-19:03:24.343(0)? "mocha": {
I20150915-19:03:24.344(0)? "regex": "mocha/.+\\.(js|coffee|litcoffee|coffee\\.md)$",
I20150915-19:03:24.344(0)? "name": "mocha",
I20150915-19:03:24.344(0)? "_regexp": {}
I20150915-19:03:24.344(0)? }
I20150915-19:03:24.344(0)? }
I20150915-19:03:24.346(0)? [velocity] resetting the world
I20150915-19:03:24.347(0)? [velocity] frameworks with disable auto reset: []
I20150915-19:03:24.354(0)? [velocity] Add paths to watcher [ '/tmp/meteor-test-run1f61jb9/tests' ]
=> Started your app.
=> App running at: http://localhost:3000/
I20150915-19:03:24.378(0)? [velocity] File scan complete, now watching /tests
I20150915-19:03:24.378(0)? [velocity] Triggering queued startup functions
Try adding the --once flag to your testing command.
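For example (--once makes Meteor run once and exit instead of watching for file changes; the --settings flag is carried over from the command in the question):

meteor test-packages --velocity --once --settings config/settings.json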
I haven't quite figured out the implementation with Mocha, but I have found an implementation using TinyTest. Since I thought this would be useful to other users, I've put together a minimal example of Meteor with a few CI providers (CircleCI, Travis, and Wercker).
Of course, you'll need Node.js installed. This varies by CI provider, but in the case of Travis CI you'll want a configuration like this:
sudo: required
language: node_js
node_js:
- "0.10"
- "0.12"
- "4.0"
Then, assuming you're building a Meteor package, you'll effectively do the following steps in any CI environment:
# Install Meteor
meteor || curl https://install.meteor.com | /bin/sh
# Install spacejam
npm install -g spacejam
# Execute your tests
spacejam test-packages ./
Source Code is available: https://github.com/b-long/meteor-ci-example
I am experiencing a failure every time I try to deploy to either an AWS EC2 instance or a DigitalOcean droplet. With the EC2 instance I am using a pem file, and with the droplet I am using password access. There were a lot of hoops to jump through to get through mup setup, but that now succeeds on both instances. It's mup deploy that fails, at the "Invoking deployment process" step.
Here's my mup.json for the droplet:
{
  // Server authentication info
  "servers": [
    {
      "host": "xx.xx.xx.xx",
      "username": "root",
      "password": "notmypassword"
      // or pem file (ssh based authentication)
      //"pem": "~/.ssh/id_rsa"
      //"pem": "/users/alex/dropbox/awspems/projectmanager.pem"
    }
  ],

  // Install MongoDB in the server, does not destroy local MongoDB on future setup
  "setupMongo": true,

  // WARNING: Node.js is required! Only skip if you already have Node.js installed on server.
  "setupNode": true,

  // WARNING: If nodeVersion omitted will setup 0.10.33 by default. Do not use v, only version number.
  "nodeVersion": "0.10.35",

  // Install PhantomJS in the server
  "setupPhantom": true,

  // Application name (No spaces)
  "appName": "projectmanager",

  // Location of app (local directory)
  "app": "/users/alex/dropbox/projectmanager",

  // Configure environment
  "env": {
    "ROOT_URL": "http://xx.xx.xx.xx"
  },

  // Meteor Up checks if the app comes online just after the deployment
  // before mup checks that, it will wait for no. of seconds configured below
  "deployCheckWaitTime": 30
}
The following messages appear whether I deploy to the EC2 instance or to the droplet:
Claire-MacAir-7:projectmanager alex$ mup deploy
Meteor Up: Production Quality Meteor Deployments
------------------------------------------------
“ Checkout Kadira!
It's the best way to monitor performance of your app.
Visit: https://kadira.io/mup ”
Building Started: /users/notmyusername/dropbox/projectmanager
Started TaskList: Deploy app 'projectmanager' (linux)
[xx.xx.xx.xx] - Uploading bundle
[xx.xx.xx.xx] ✔ Uploading bundle: SUCCESS
[xx.xx.xx.xx] - Setting up Environment Variables
[xx.xx.xx.xx] ✔ Setting up Environment Variables: SUCCESS
[xx.xx.xx.xx] - Invoking deployment process
[xx.xx.xx.xx] ✘ Invoking deployment process: FAILED
-----------------------------------STDERR-----------------------------------
ir=/root/.node-gyp/0.10.35',
gyp info spawn args '-Dmodule_root_dir=/opt/projectmanager/tmp/bundle/programs/server/npm/npm-bcrypt/node_modules/bcrypt',
gyp info spawn args '--depth=.',
gyp info spawn args '--no-parallel',
gyp info spawn args '--generator-output',
gyp info spawn args 'build',
gyp info spawn args '-Goutput_dir=.' ]
gyp info spawn make
gyp info spawn args [ 'BUILDTYPE=Release', '-C', 'build' ]
gyp info ok
npm WARN package.json meteor-dev-bundle#0.0.0 No description
npm WARN package.json meteor-dev-bundle#0.0.0 No repository field.
npm WARN package.json meteor-dev-bundle#0.0.0 No README data
stop: Unknown instance:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 80: Connection refused
App did not pick up! Please check app logs.
-----------------------------------STDOUT-----------------------------------
LE(target) Release/obj.target/bcrypt_lib.node: Finished
COPY Release/bcrypt_lib.node
make: Leaving directory `/opt/projectmanager/tmp/bundle/programs/server/npm/npm-bcrypt/node_modules/bcrypt/build'
> fibers#1.0.1 install /opt/projectmanager/tmp/bundle/programs/server/node_modules/fibers
> node ./build.js
`linux-x64-v8-3.14` exists; testing
Binary is fine; exiting
underscore#1.5.2 node_modules/underscore
semver#4.1.0 node_modules/semver
chalk#0.5.1 node_modules/chalk
├── escape-string-regexp#1.0.2
├── ansi-styles#1.1.0
├── supports-color#0.2.0
├── strip-ansi#0.3.0 (ansi-regex#0.2.1)
└── has-ansi#0.1.0 (ansi-regex#0.2.1)
eachline#2.3.3 node_modules/eachline
└── type-of#2.0.1
source-map-support#0.2.8 node_modules/source-map-support
└── source-map#0.1.32 (amdefine#0.1.0)
fibers#1.0.1 node_modules/fibers
Waiting for MongoDB to initialize. (5 minutes)
connected
projectmanager start/running, process 11786
Waiting for 30 seconds while app is booting up
Checking is app booted or not?
----------------------------------------------------------------------------
Completed TaskList: Deploy app 'projectmanager' (linux)
The project works perfectly with no errors on my localhost.
Any ideas?
Thanks in advance, Alex Adams
And the solution is:
The mup logs showed that it didn't like the path designated for a folder to hold uploaded files. I changed the app to use GridFS, storing the files in the database instead. The app deploys correctly now.
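For anyone else debugging this: the server-side logs referred to above can be tailed with meteor-up's log command (a sketch; run it from the directory containing mup.json, and the -f option may depend on your mup version):

mup logs -f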
I am learning Chef + Test Kitchen on a CentOS VM at the moment and it seems that every time I run kitchen converge, some packages fail and throw the same error:
Chef::Exceptions::Exec
----------------------
returned 1, expected 0
And ALL of the errors are located in the package resource. For example:
Compiled Resource:
------------------
# Declared in /tmp/kitchen/cookbooks/nginx/recipes/package.rb:39:in `from_file'
package("nginx") do
action :install
retries 0
retry_delay 2
guard_interpreter :default
package_name "nginx"
version "1.0.15-5.el6"
cookbook_name :nginx
recipe_name "package"
end
However, when I log in to the VM using kitchen login and manually run
yum install nginx
it runs just fine. Also, sometimes the install succeeds when I run kitchen converge a second time.
My recipe file is:
# create vtapp user
user node.default['railsapp']['user'] do
  supports :manage_home => true
  system true
  home "/home/#{node.default['railsapp']['user']}"
  shell '/bin/bash'
end

# install git
package 'git'

# install mysql and run the service
mysql_service 'default'

# install redis and run the service
include_recipe 'redis::server'

# install rbenv to vtapp user, and install ruby 2.1.0 along with bundler
include_recipe "ruby_build"

node.default['rbenv']['user_installs'] = [
  {
    'user' => node.default['railsapp']['user'],
    'rubies' => ['2.1.0'],
    'gems' => {
      '2.1.0' => [
        { 'name' => 'bundler' }
      ]
    }
  }
]

include_recipe "rbenv::user"

# install monit
include_recipe "monit"

# install nginx
include_recipe "nginx"
Did I miss something?
Well, as crazy as it seems, after I increased the memory allocation for the Vagrant VM to 1024 MB, as described in the link below:
https://github.com/test-kitchen/kitchen-vagrant/issues/22
the intermittent issue above was suddenly gone...
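For reference, here's roughly where that memory bump goes with the kitchen-vagrant driver (a sketch of the relevant .kitchen.yml section; key names per the kitchen-vagrant documentation):

driver:
  name: vagrant
  customize:
    memory: 1024
    cpus: 2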
update:
I have repeatedly run the full kitchen test command with success after I increased the memory allocation :-)
update (2):
I have delved deeper into Chef, and another possible cause is the timeout Chef sets for executing an action (15 minutes, if I recall correctly). Possible workarounds I have used are: 1) installing a proxy server to speed up download times, 2) increasing internet bandwidth, and 3) allowing Vagrant to allocate more CPU cores to the VM.
You must also pay attention to the minimum memory the application itself requires. For example, I installed ZenOSS with Chef, which requires at least 3 GB of memory, and it kept failing with the error above whenever I allocated less than that.