Can a Grunt task be configured to run a single Karma test? - gruntjs

When developing a test I would like to be able to run a single Karma unit test only. Is there a way to run one test only from the command line using Karma, or perhaps configure a simple task in Grunt that runs one test only?

Assuming that you're using Karma with Jasmine, simply change one or more describe() calls to fdescribe() calls, or it() calls to fit() calls. Only the f-prefixed tests will be run. This is documented in the focused_specs.js file of the Jasmine documentation.
(In older versions of Jasmine, the 'focused' versions of describe and it were instead called ddescribe and iit; try them if you're on an old version and can't upgrade.)
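For example (a minimal sketch; the suite and spec names are made up):

// Option 1: focus a whole suite. Only specs inside fdescribe() blocks are run.
fdescribe('shopping cart', function () {
  it('adds an item', function () {
    expect(1 + 1).toBe(2);
  });
});

// Option 2: focus a single spec. Only fit() specs are run; plain it() specs are skipped.
describe('checkout', function () {
  fit('calculates the total', function () {
    expect(2 * 3).toBe(6);
  });

  it('applies discounts', function () {
    // skipped while a focused spec exists
  });
});

Remember to remove the f prefixes again before committing, otherwise the rest of the suite stays skipped.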

Related

Is it possible to turn "ERROR No tests to run" into a Warning?

I'm running TestCafe via CircleCI as part of my CI/CD process, using "Smoke" test-meta tags (in order to run a subset of our regression tests on every build deploy).
As part of the run, CircleCI splits the test suites/specs to run on different containers in parallel, resulting in:
testcafe chrome:headless tests/someFolder/someTestSuite.js --test-meta smoke=true
Not every suite will contain a "Smoke" test, however, so those will fail with 'ERROR No tests to run. Either the test files contain no tests or the filter function is too restrictive'.
Is there a way to switch this to a warning, rather than a failure? I've tried using the --disable-test-syntax-validation flag, but this understandably doesn't help.
You cannot do this via a public API. You can consider defining a custom filter or adding some empty tests with meta='smoke' to avoid this error.
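If you take the empty-test route, a placeholder along these lines in each suite that has no real smoke tests should keep the filter from coming up empty (a sketch only; the fixture and test names are made up):

fixture('Smoke placeholder');

// Matches --test-meta smoke=true, so the run always finds at least one test.
test.meta('smoke', 'true')('no smoke tests in this suite', async t => {
  // intentionally empty
});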

How can I run a script after PHPUnit tests have finished

Can I run a PHP script automatically after all PHPUnit tests are finished?
I'd like to report some non-fatal issues (i.e. correct but suboptimal test results) after all tests are finished.
Assuming you can detect such sub-optimal results, PHPUnit (v9 and before) has a 'TestListener' facility, which can be extended with custom code and enabled in the phpunit.xml file. It can run code before and after tests, and for each potential result (pass, fail, error, etc.).
PHPUnit 10 replaces the TestListener with a new events system.
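For PHPUnit 9 and earlier, registering such a listener in phpunit.xml looks roughly like this (the class and file names are hypothetical; the class would implement PHPUnit\Framework\TestListener):

<phpunit bootstrap="vendor/autoload.php">
    <listeners>
        <!-- runs custom code on test events, e.g. to collect sub-optimal results -->
        <listener class="App\Tests\SubOptimalResultListener" file="tests/SubOptimalResultListener.php"/>
    </listeners>
</phpunit>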

Protractor implicit waiting not working when using grunt-protractor-runner

I am writing e2e tests for a JS application at the moment. Since I am not a JS developer I investigated this topic for a while and ended up with the following setup:
Jasmine2 as testing framework
grunt as "build-tool"
protractor as test runner
Jenkins as CI server (already in use for plenty of Java projects)
Although the application under test is not written in Angular, I decided to go for Protractor, following a nice guide on how to make Protractor run nicely even without Angular.
Writing some simple tests and running them locally worked like a charm. In order to implicitly wait for some elements to show up in the DOM I used the following code in my conf.js:
onPrepare: function() {
  browser.driver.manage().timeouts().implicitlyWait(5000);
}
All my tests were running as expected and so I decided to go to the next step, i.e. installation in the CI server.
The development team of the application I want to test was already using Grunt to build their application, so I decided to just hook into that. The goal of my new Grunt task is to:
assemble the application
start a local webserver running the application
run my protractor test
write some test reports
Finally I accomplished all of the above steps, but I am now dealing with a problem I cannot solve and could not find any help for by googling. In order to run the Protractor tests from Grunt I installed the grunt-protractor-runner.
The tests are running, BUT the implicit wait is not working, causing some tests to fail. When I add some explicit waits (browser.sleep(...)) everything is OK again, but that is not what I want.
Is there any chance to get implicit waiting to work when using the grunt-protractor-runner?
UPDATE:
The problem does not have anything to do with the grunt-protractor-runner. When using a different webserver that I start up during my task, it works again. To be more precise: with the plugin "grunt-contrib-connect" the tests pass; with the plugin "grunt-php" they fail. So I am now looking for another PHP server for Grunt. I will keep updating this question.
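For illustration, wiring grunt-contrib-connect and grunt-protractor-runner together in a Gruntfile typically looks something like this (a sketch only; the port, base directory and task names are placeholders):

module.exports = function (grunt) {
  grunt.initConfig({
    // serve the assembled app on a local port for the duration of the grunt run
    connect: {
      test: {
        options: {
          port: 9000,
          base: 'dist'
        }
      }
    },
    // run the Protractor suite against that server
    protractor: {
      e2e: {
        options: {
          configFile: 'conf.js',
          keepAlive: false
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-connect');
  grunt.loadNpmTasks('grunt-protractor-runner');

  grunt.registerTask('e2e', ['connect:test', 'protractor:e2e']);
};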
UPDATE 2:
While looking for some alternatives, I finally decided to mock the PHP part of the app.

How do I ONLY run Meteor tests, and how do I NOT run Meteor tests

I'm trying to set up my integration flow and I have some tests, using the velocity-cucumber package, that are quite destructive.
The first issue I found is that these tests are run against the standard Meteor DB, which is fine on localhost and dev but not so great for production. As far as I can tell, velocity-cucumber doesn't do anything with mirrors yet.
Because of this I have two cases where I need Meteor to launch in a specific way.
1) On the CI server I need JUST the tests to run and then exit (hopefully with the correct exit code).
2) On the production server I need Meteor to skip all tests and just launch.
Is this currently possible with Meteor command line arguments? I'm contemplating making demeteorize a part of the process, and then using standard Node.js testing frameworks.
To run velocity tests and then exit, you can allegedly run meteor with the --test option:
meteor run --test
This isn't working for me, but that's what the documentation says it is supposed to do.
To disable velocity tests, run meteor with the environment variable VELOCITY set to 0. This will skip setting up the mirror, remove the red/green dot, etc.:
VELOCITY=0 meteor run

Geb: Moving on from a failed test rather than stopping the run

I'm (learning how to) write and run Geb tests in IntelliJ. How do I configure Geb so that it runs all my tests rather than stopping at the first failure and leaving the rest unrun?
When using Spock's @Stepwise, all feature methods are run in the order of their declaration in the spec, and if one feature method fails then all the following feature methods are skipped.
I created a @StepThrough annotation by subclassing @Stepwise and taking out the line of code that fails the entire test suite after a single failure. You can grab the code here
