Geb: Moving on from a failed test rather than stopping the run

I'm learning how to write and run Geb tests in IntelliJ. How do I configure Geb so that it runs all my tests rather than stopping at the first failure and leaving the rest unrun?

When using Spock's @Stepwise annotation, all feature methods are run in the order of their declaration in the spec, and if one feature method fails, all the following feature methods are skipped.
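For context, a minimal Spock spec using @Stepwise might look like this (the class and feature names are made up for illustration):

import spock.lang.Specification
import spock.lang.Stepwise

@Stepwise
class CheckoutSpec extends Specification {

    def "step 1: add item to cart"() {
        expect:
        true
    }

    def "step 2: pay"() {
        // If this feature fails, "step 3" below is skipped because of @Stepwise.
        expect:
        true
    }

    def "step 3: confirm order"() {
        expect:
        true
    }
}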

I created a @StepThrough annotation by subclassing @Stepwise and taking out the line of code that fails the entire test suite after a single failure. You can grab the code here.

Related

Importance of the do_build function in BitBake

I am new to BitBake and I am trying to learn it.
I am going through the link below to get a basic understanding:
https://a4z.gitlab.io/docs/BitBake/guide.html
There I see that, either through inheritance or directly, a do_build task is always executed.
This might seem like a basic question, but your answer would really help me.
I read a line like this: "build is the task BitBake runs per default if no other task is specified".
Does it mean build is mandatory in BitBake? Is it like the main function in C and C++ code?
Thanks
When no task is specified on the command line to bitbake, "build" is assumed. This can be overridden with the BB_DEFAULT_TASK variable to some other task. If a task isn't defined and you try to run it, an error will occur saying the task can't be found. As such, a build task isn't absolutely required, but you will likely run into errors without it in most common usages.
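For illustration, a hedged sketch of overriding the default task (the task name "compile" and the recipe name are just examples):

# In a global configuration file such as conf/local.conf:
BB_DEFAULT_TASK = "compile"

# With that override, these two invocations are equivalent:
#   bitbake myrecipe
#   bitbake -c compile myrecipe
# Without the override, "bitbake myrecipe" runs do_build.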

Is it possible to turn "ERROR No tests to run" into a Warning?

I'm running TestCafe via CircleCI as part of my CI/CD process, using "Smoke" test-meta tags (in order to run a subset of our regression tests on every build deploy).
As part of the run, CircleCI splits the test suites/specs to run on different containers in parallel, resulting in:
testcafe chrome:headless tests/someFolder/someTestSuite.js --test-meta smoke=true
Not every suite will contain a "Smoke" test, however, so those will fail with 'ERROR No tests to run. Either the test files contain no tests or the filter function is too restrictive'.
Is there a way to switch this to a warning, rather than a failure? I've tried using the --disable-test-syntax-validation flag, but this understandably doesn't help.
You cannot do this via a public API. You could consider defining a custom filter or adding some empty tests with the smoke meta tag to avoid this error.
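A hedged sketch of the custom-filter idea using TestCafe's programmatic runner (the file name, paths, meta check, and error-message match are illustrative, and the exact "no tests" behaviour may vary between TestCafe versions):

// run-smoke.js
const createTestCafe = require('testcafe');

(async () => {
    const testcafe = await createTestCafe('localhost');
    try {
        const failed = await testcafe
            .createRunner()
            .src('tests/someFolder/someTestSuite.js')
            .browsers('chrome:headless')
            // Keep only tests tagged as smoke; adjust the check to how your tests set the meta value.
            .filter((testName, fixtureName, fixturePath, testMeta) => testMeta.smoke === 'true')
            .run();
        process.exitCode = failed > 0 ? 1 : 0;
    } catch (err) {
        // If nothing matched the filter, downgrade the "No tests to run" error to a warning.
        if (/No tests to run/i.test(String(err))) {
            console.warn('WARNING: no smoke tests in this suite, skipping.');
            process.exitCode = 0;
        } else {
            console.error(err);
            process.exitCode = 1;
        }
    } finally {
        await testcafe.close();
    }
})();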

How to stop PHPUnit in the middle of tests but still get a list of failures / reports?

We have a test suite that runs for 30 minutes or more, and it is not uncommon for me to see a situation like this:
I don't generally want to break on a first failure (--stop-on-failure) but in this specific example, I also don't want to wait another 10-15 minutes for the test suite to finish. If I do Ctrl+C, the process stops immediately and I won't see any messages.
I'm specifically interested in the format that PHPUnit uses in the console, which I find very useful. For example, logging to a file using --testdox-text produces a nice but not very detailed list. Logging with --log-teamcity is verbose and quite technical.
Is there a way to see "console-like" PHPUnit messages before the test suite fully finishes?
Update: I've also opened an issue in the official repo.
Maybe you could register a listener in your phpunit.xml file and implement its addFailure() method to display some info in the console or a file...
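A hedged sketch of that idea (the class and file names are illustrative; the TestListener API shown here exists up to PHPUnit 9 and was removed in PHPUnit 10):

<?php
// tests/ConsoleFailureListener.php
use PHPUnit\Framework\AssertionFailedError;
use PHPUnit\Framework\Test;
use PHPUnit\Framework\TestCase;
use PHPUnit\Framework\TestListener;
use PHPUnit\Framework\TestListenerDefaultImplementation;

final class ConsoleFailureListener implements TestListener
{
    use TestListenerDefaultImplementation;

    public function addFailure(Test $test, AssertionFailedError $e, float $time): void
    {
        // Print each failure as soon as it happens, so the information is not lost
        // if the run is aborted before the final report.
        $name = $test instanceof TestCase ? get_class($test) . '::' . $test->getName() : get_class($test);
        fwrite(STDERR, "FAILURE: {$name}\n{$e->getMessage()}\n\n");
    }
}

It would be registered in phpunit.xml roughly like this:

<listeners>
    <listener class="ConsoleFailureListener" file="tests/ConsoleFailureListener.php"/>
</listeners>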

Can a Grunt task be configured to run a single Karma test?

When developing a test I would like to be able to run a single Karma unit test only. Is there a way to run one test only from the command line using Karma, or perhaps configure a simple task in Grunt that runs one test only?
Assuming that you're using Karma with Jasmine, simply change one or more describe() calls to fdescribe() calls, or it() calls to fit() calls. Only the f-prefixed tests will be run. This is documented in the focused_specs.js file of the Jasmine documentation.
(In older versions of Jasmine, the 'focused' versions of describe and it were instead called ddescribe and iit; try them if you're on an old version and can't upgrade.)
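For illustration (the spec names are made up), assuming Jasmine 2 or later:

describe('calculator', function () {
    // While an fit() (or fdescribe()) exists anywhere in the suite, only focused specs run.
    fit('adds two numbers', function () {
        expect(1 + 2).toBe(3);
    });

    // Skipped as long as a focused spec is present.
    it('subtracts two numbers', function () {
        expect(3 - 2).toBe(1);
    });
});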

Selenium tests sometimes fail, sometimes pass

I have created tests using Selenium 2, and I'm using the Selenium standalone server to run them.
The problem is that if I run one test, it works. If I run multiple tests, some of them fail. If I try then to run a failed test, it works.
Could the tests be running on threads?
I've used the NUnit GUI and TeamCity to run the tests; both give the same results: different tests fail, and when I run again, other tests fail.
Any thoughts?
EDIT
The tests shouldn't depend on one another. The database is emptied and repopulated for every test.
I guess the only problem could be that the database is not emptied correctly ... but then if I run the same test multiple times it should also fail.
EDIT2
The tests fail with "element not found".
I'll try to add a "WaitForElement" that retries every few milliseconds and maybe that will fix it.
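A hedged sketch of such a helper in C#, assuming the Selenium WebDriver .NET bindings with the Support UI package (the names, locator, and timeout are illustrative):

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public static class WaitHelpers
{
    // Polls for the element until it appears in the DOM or the timeout expires,
    // instead of failing immediately with "element not found".
    public static IWebElement WaitForElement(IWebDriver driver, By locator, int timeoutSeconds = 10)
    {
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(timeoutSeconds));
        wait.IgnoreExceptionTypes(typeof(NoSuchElementException));
        return wait.Until(d => d.FindElement(locator));
    }
}

// Usage: WaitHelpers.WaitForElement(driver, By.Id("submit")).Click();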
Without knowing the exact errors that are thrown, it's hard to say. The usual causes of flakiness are waits that aren't set to a decent time, or a web server that can't handle that many requests.
If the DB is on the same machine as the web server (and why shouldn't it be on a build box), it can be intensive to clear it out.
I would recommend going through each of the errors, making the test bulletproof against that one, and then moving on to the next. I know people who run their tests all the time without flakiness, so it's definitely an environmental thing that can be sorted.
I know I'm a bit late to the party here, but are you using a single window to run your tests? I had a similar issue: the site I'm testing has only one page load event, so waiting for elements or pausing the test became very dodgy and different tests passed each time. Adding a ton of wait times didn't help at all until I just opened a new "clean" browser for each test. Testing does get slower, but it worked.
