Long story short, I have a Meteor app passing in a continuous-integration pipeline which runs all tests before deploying. My tests run with chimp, and I used to install chimp globally in my CI environment on each build, at the latest version, before running the tests.
Recently, chimp made a somewhat major update which resulted in my chimp execution reporting '1..0' (zero tests) in its TAP output. After finding out it was because of the chimp version, I changed the installation of chimp to be local and locked at a specific version.
The problem is that my pipeline was passing, because hey, 0 tests passing is still 0 tests failing!
I'm trying to make chimp fail if it runs no tests at all. What would be a good way to do it?
I tried grepping the output, matching it against '1..0' and exiting with exit /b 1 on a match, with no luck. The best solution would involve only the chimp command.
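Roughly, the idea was something along these lines (a POSIX-shell sketch; chimp's exact TAP plan line may differ):
set -o pipefail            # keep chimp's own exit status through the pipe
chimp | tee chimp.log
if grep -q '^1\.\.0' chimp.log; then
    echo 'chimp executed zero tests' >&2
    exit 1
fi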
Thanks to anyone who can give me a hint on this.
Use the after test hook
You can put the code which runs after all tests in the after test hook (example below). I'm not sure how to check the number of tests run, but I think you can find it out the same way the test reporter does. So, I'd suggest going through the source of one of the popular/simpler reporters, like dot or spec, and figuring it out from there.
describe('test', function() {
  after(function() { console.log('after'); });
  afterEach(function() { console.log('afterEach'); });

  it('fails sync', function(done) {
    after(function() { console.log('inner after 1'); });
    throw new Error('failed');
  });

  it('fails async', function(done) {
    after(function() { console.log('inner after 2'); });
    process.nextTick(function() {
      throw new Error('failed');
    });
  });
});
which produces the following output with mocha 1.1.12:
․afterEach
․afterEach
after
inner after 1
inner after 2
0 passing (5 ms)
2 failing
1) test fails sync:
   Error: failed
    at Context.<anonymous> (/private/tmp/so/test/main.js:7:11)
    at Test.Runnable.run (/private/tmp/so/node_modules/mocha/lib/runnable.js:194:15)
    at Runner.runTest (/private/tmp/so/node_modules/mocha/lib/runner.js:355:10)
    at /private/tmp/so/node_modules/mocha/lib/runner.js:401:12
    at next (/private/tmp/so/node_modules/mocha/lib/runner.js:281:14)
    at /private/tmp/so/node_modules/mocha/lib/runner.js:290:7
    at next (/private/tmp/so/node_modules/mocha/lib/runner.js:234:23)
    at Object._onImmediate (/private/tmp/so/node_modules/mocha/lib/runner.js:258:5)
    at processImmediate [as _immediateCallback] (timers.js:330:15)

2) test fails async:
   Error: failed
    at /private/tmp/so/test/main.js:13:12
    at process._tickCallback (node.js:415:13)
Credit for the example code goes to SO user Miroslav Bajtoš.
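To make the counting idea concrete, here is a minimal sketch of a custom mocha reporter that counts 'test end' events and forces a non-zero exit when nothing ran (the runner events are standard mocha; how to plug a custom reporter into chimp specifically is an assumption left to the reader):

// fail-on-empty-reporter.js - a sketch, not a drop-in chimp option
module.exports = function(runner) {
  var testsRun = 0;
  runner.on('test end', function() {
    testsRun += 1; // fires once per executed test
  });
  runner.on('end', function() {
    if (testsRun === 0) {
      console.error('No tests were executed (TAP reported 1..0)');
      process.exit(1); // make CI treat an empty run as a failure
    }
  });
};

Note that if zero tests are collected, mocha may skip before/after hooks entirely, so a reporter-level (or shell-level) check is more reliable than an after hook for this particular case.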
Related
When run, the XrayImportBuilder step prints a lot of useful stuff to the log, but I can't see any simple way of getting at this information so it can be used from the Jenkinsfile script code. Specifically, this appears in the log:
XRAY_TEST_EXECS: ENT-8327
and I'm hoping to add this info to the current build description. Ideally the info would be returned from the call, but the result is empty. Alternatives might be to scan the log, or to use a curl call and handle all the output myself; the latter feels like a step backwards.
I managed to extract that information from the generated logs.
After the Xray import results stage I added:
stage('Extract Variable from log') {
    steps {
        script {
            def logContent = Jenkins.getInstance().getItemByFullName(env.JOB_NAME).getBuildByNumber(Integer.parseInt(env.BUILD_NUMBER)).logFile.text
            env.testExecs = (logContent =~ /XRAY_TEST_EXECS:.*/).findAll().first()
            echo env.testExecs
        }
    }
}
stage('Using variable from another stage') {
    steps {
        script {
            echo "${env.testExecs}"
        }
    }
}
You can change the regex to fit your specific case. I've added the extracted value to an environment variable so that it can be used in other stages.
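Since the goal was to surface this in the build description, the extracted value can then be assigned to the standard currentBuild.description property, e.g. (the stage name is just an example):

stage('Publish to build description') {
    steps {
        script {
            // show the extracted Xray test execution key on the build page
            currentBuild.description = env.testExecs
        }
    }
}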
Current situation:
We are using cypress for test automation. We have a folder named 'integration' which contains several 'spec' files. These spec files can contain one or more tests related to each other.
Problem:
I want to organize the cypress test automation on Bamboo properly. What I want to do is have test suites, e.g.:
Playground_suite contains:
1) slide_tests_spec.js
2) teeter_totters_tests_spec.js
...
Road_suite contains:
1) car_tests_spec.js
2) truck_tests_spec.js
...
Then I have the option of running Playground_suite, which will only run the spec files defined in that suite.
Is this possible in Cypress, and if so, how? Please help.
We faced this same type of issue. What we came up with to solve it was the following:
00_suite.example.js:
import Test01 from './e2e_test01.example.js';
import Test02 from './e2e_test02.example.js';
import Test03 from './e2e_test03.example.js';
describe('Cypress_PreTest_Configuration', function() {
  console.log(Cypress.config());
});

// This is an example suite running tests in a specified order -
// all tests contained in each of these files will be run before the next
// file is processed.
describe('Example_E2E_Test_Suite', function() {
  Test01();
  Test02();
  Test03();
});

describe('Example_Reverse_Ordered_E2E_Test_Suite', function() {
  Test03();
  Test02();
  Test01();
});
The key in the actual test files is that they wrap the describe suite definition(s) in an "export default function() {}":
e2e_test01.example.js:
export default function() {
  describe('Example_Tests_01', function() {
    it('TC01 - Example Tiger Tests', function() {
      doNothingOne();
      console.log(this.test.parent.parent.title);
      cy.visit(this.commonURIs.loginURI);
    });
  });
}
When attempting to run the e2e_test*.example.js files within the Cypress UI, you will find that the UI reports that no tests were found. You have to execute the tests through the suite definition files. We approached this limitation by using the 'suite' approach only for E2E tests, while we use the standard spec files for regression and minimum acceptance testing.
I hope that this example is helpful for you, and perhaps someone else may have another solution.
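If it helps, each suite file can then be wired to its own Bamboo job via Cypress's --spec option (a sketch; the path is an assumption based on the example above):

# run only the suite file; it pulls in its member specs via the imports
npx cypress run --spec "cypress/integration/suites/00_suite.example.js"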
I'm writing an OpenEmbedded/BitBake recipe for openembedded-classic. My recipe RDEPENDS on keyutils, and everything seems to work, except one thing:
I want to append a single line to the /etc/request-key.conf file installed by the keyutils package. So I added the following to my recipe:
pkg_postinst_${PN} () {
    echo 'create ... more stuff ..' >> ${sysconfdir}/request-key.conf
}
However, the intended added line is missing in my resulting image.
My recipe inherits update-rc.d, if that makes any difference.
My main question is: how do I debug this? Currently I am constructing an entire rootfs image and then poking around in it to see if the changes show up. Surely there is a better way?
UPDATE:
Changed recipe to:
pkg_postinst_${PN} () {
    echo 'create ... more stuff ...' >> ${D}${sysconfdir}/request-key.conf
}
But still no luck.
As far as I know, postinst scripts run at rootfs creation time, and are deferred to first boot only if they fail at rootfs time.
So there is an easy way to execute something only at first boot: just check for $D, like this:
pkg_postinst_stuff() {
    #!/bin/sh -e
    if [ x"$D" = "x" ]; then
        : # do something at first boot here (we are running on the target)
    else
        # rootfs time: exit non-zero so the script is deferred to first boot
        exit 1
    fi
}
postinst scripts are run at rootfs time, so ${sysconfdir} is /etc on your host. Use $D${sysconfdir} to write to the file inside the rootfs being generated.
OE-Classic is pretty ancient so you really should update to oe-core.
That said, do postinsts run at first boot? I'm not sure. Also look in the recipe's work directory, under temp, and read the log and run files to see if there are any clues there.
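For the log/run inspection, something along these lines usually works (a sketch; the exact tmp/work path layout varies between OE versions and machine/arch settings):

# each task leaves its output (log.do_<task>) and the exact script it ran (run.do_<task>)
cd tmp/work/<arch>/<your-recipe>-<version>/temp
ls log.do_* run.do_*
# rootfs-time postinst output usually ends up in the *image* recipe's log.do_rootfs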
One more thing: if foo RDEPENDS on bar, that just means "when foo is installed, bar is also installed". I'm not sure it makes any assertions about what is installed during the install phase, when your postinst is running.
If using $D doesn't fix the problem, try editing your postinst to copy the existing file you're trying to edit somewhere else, so you can verify that it exists in the first place. It's possible that you're appending to a file that doesn't exist yet, and that the package that installs the file replaces it.
I am trying to run the grunt command grunt test:e2e, but it does not seem to work; I get the warning mentioned in the title. I don't want to post the entire Gruntfile.js, so I have supplied the gist link. I would really appreciate it if someone could point me in the right direction.
Gruntfile.js
The error should be your clue here; there are no task targets named "livereload-start". If you want to point to a specific task config given this structure:
connect: {
  target1: {
    // opts
  },
  target2: {
    // opts
  }
}
You would run connect:target1 instead of connect-target1. If you remove livereload-start from your task configuration (line 379), what happens?
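For reference, the task:target form also works inside alias tasks, so a sketch of wiring one up would be (task and target names assumed):

// in Gruntfile.js: an alias task referring to a named target
grunt.registerTask('serve', ['connect:target1']);
// on the command line, the same target is addressed with a colon:
//   grunt connect:target1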
It is difficult for me to visually catch the boundary between test runs.
Is it possible to clear the console on each run of Testacular/Karma + Jasmine, or at least put something there that is easily caught by the eye, for example a series of newlines?
Note
This is currently an abandoned question, as I am no longer trying to perform the tasks described in it. Please do not ask for additional info; write only if you know for sure what to do. It will help other people.
Write your own reporter, and do whatever you want with it.
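As a minimal sketch of that (the plugin and reporter names here are made up; the onRunStart hook and the 'reporter:' DI convention are Karma's):

// separator-reporter.js: prints a loud banner before each test run
var SeparatorReporter = function() {
  this.onRunStart = function() {
    process.stdout.write('\n\n========== NEW TEST RUN ==========\n\n');
  };
};

module.exports = {
  'reporter:separator': ['type', SeparatorReporter]
};

// karma.conf.js (excerpt):
//   plugins: [require('./separator-reporter'), 'karma-jasmine'],
//   reporters: ['separator', 'progress'],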
Also, if you're on a Mac and use Growl, take a look at karma-growl-reporter
I am not sure I fully understand your need, but karma-spec-reporter can give you a detailed review of your test execution. Output example from karma-spec-reporter-example:
array:
  push:
    PASSED - should add an element
    PASSED - should remove an element
    FAILED - should do magic (this test will fail) expected [] to include 'magic'
      at /home/michael/development/codecentric/karma-spec-reporter-example/node_modules/chai/chai.js:401
...
PhantomJS 1.8.1 (Linux): Executed 3 of 3 (1 FAILED) (0.086 secs / NaN secs)
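Enabling it is a small config change; a sketch of the relevant karma.conf.js excerpt (Karma auto-loads karma-* plugins from node_modules, so listing plugins explicitly is usually unnecessary):

// karma.conf.js (excerpt)
module.exports = function(config) {
  config.set({
    // 'spec' replaces the default 'progress' reporter
    reporters: ['spec']
  });
};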
There's now a reporter available for this: https://github.com/arthurc/karma-clear-screen-reporter
It's working for me on OS X.