How can I run a script after PHPUnit tests have finished - phpunit

Can I run a PHP script automatically after all PHPUnit tests are finished?
I'd like to report some non-fatal issues (i.e. correct but suboptimal test results) after all tests are finished.

Assuming you can detect such suboptimal results, PHPUnit (v9 and earlier) has a 'TestListener' facility that can be extended with custom code and enabled in the phpunit.xml file. It can run code before and after tests, and for each possible result (pass, failure, error, etc.).
PHPUnit 10 replaces the TestListener with a new events system.
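As a minimal sketch for PHPUnit 9 (where the TestListener interface and the TestListenerDefaultImplementation trait are still available), a listener can track suite nesting and report once the outermost suite has ended, i.e. after every test has run. The class and file names below are illustrative, not part of PHPUnit:

<?php
// tests/SuboptimalResultListener.php - hypothetical example listener.
use PHPUnit\Framework\TestListener;
use PHPUnit\Framework\TestListenerDefaultImplementation;
use PHPUnit\Framework\TestSuite;

final class SuboptimalResultListener implements TestListener
{
    use TestListenerDefaultImplementation;

    private $depth = 0;

    public function startTestSuite(TestSuite $suite): void
    {
        $this->depth++;
    }

    public function endTestSuite(TestSuite $suite): void
    {
        $this->depth--;

        // The outermost suite finishes last, so this block runs once, after all tests.
        if ($this->depth === 0) {
            print PHP_EOL . 'All tests finished - report suboptimal results (or call another script) here.' . PHP_EOL;
        }
    }
}

It is enabled in phpunit.xml:

<listeners>
    <listener class="SuboptimalResultListener" file="tests/SuboptimalResultListener.php"/>
</listeners>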

Related

How to stop PHPUnit in the middle of tests but still get a list of failures / reports?

We have a test suite that runs for 30 minutes or more, and it is not uncommon for me to see a situation where some failures have already been reported partway through the run.
I don't generally want to break on the first failure (--stop-on-failure), but in a case like this I also don't want to wait another 10-15 minutes for the test suite to finish. If I do Ctrl+C, the process stops immediately and I won't see any messages.
I'm specifically interested in the format that PHPUnit uses in the console, which I find very useful. For example, logging to a file using --testdox-text produces a nice but not very detailed list; logging with --log-teamcity is verbose and quite technical.
Is there a way to see "console-like" PHPUnit messages before the test suite fully finishes?
Update: I've also opened an issue in the official repo.
Maybe you could register a listener in your phpunit.xml file and implement its addFailure() method to display some info in the console or a file...
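A rough sketch along those lines for PHPUnit 9 (the class name and output format are just placeholders):

<?php
// Hypothetical listener that prints each failure as soon as it happens,
// instead of waiting for the end-of-run summary.
use PHPUnit\Framework\AssertionFailedError;
use PHPUnit\Framework\Test;
use PHPUnit\Framework\TestListener;
use PHPUnit\Framework\TestListenerDefaultImplementation;

final class ImmediateFailureListener implements TestListener
{
    use TestListenerDefaultImplementation;

    public function addFailure(Test $test, AssertionFailedError $e, float $time): void
    {
        $name = method_exists($test, 'getName') ? $test->getName() : get_class($test);
        fwrite(STDERR, PHP_EOL . 'FAILURE: ' . $name . PHP_EOL . $e->getMessage() . PHP_EOL);
    }
}

Registered in phpunit.xml like any other listener, it gives you a running list of failures without stopping the suite.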

TFS release management test result view fails with JSON error

I have a TFS (on premises version 15.105.25910.0) server with build and release management definitions. One of the definitions deploys a web site, the test assemblies and then runs my MSTest based Selenium tests. Most pass, some are not run, and a few fail.
When I attempt to view the test results in the TFS web portal the view of "failed" test results fails and it shows the following error message:
can't run your query: bad json escape sequence: \p. path 'build.branchname', line 1, position 182.
Can anyone explain how this fault arises? Or, more to the point, what steps might I take to either diagnose this further or correct the fault?
The troublesome environment and its "Run Functional Tests" task are shown below.
Attempted diagnostics
As suggested by Patrick-MSFT, I added the requisite three steps to a build (the one that builds the Selenium tests):
Windows machine file copy (copy the MSTest assembly containing the Selenium tests to c:\tests on a test machine)
Visual Studio test agent deploy (to the same machine)
Run functional tests (the assembly shipped in step 1)
The tests run (and have the same mix of pass, fail, and skipped results), but the test results can be browsed just fine with the web page's test links.
Results after hammering the same test into a different environment to see how that behaves...
Well, the same 3 steps (targeting the same test machine) in a different environment work as expected - same mix of results, but the view shows the results without errors.
To be clear, this is a different (pre-existing) environment in the same release definition, targeting the same test PC. It would seem the issue is somehow tied to that specific environment. So how do I fix that?
So next step, clone the failing environment and see what happens. Back later with the results.
Try to run the tests with the same settings in a build definition instead of a release. This could narrow down whether the issue is related to your tests or to the task configuration.
Double-check that you have used the right settings for the related tasks. You could refer to the related tutorial for Selenium testing on MSDN: Get started with Selenium testing in a continuous integration pipeline
Try to run the same release in another environment.
Also go through your log files to see if there is any related info for troubleshooting.

Protractor E2E test run in Sauce Labs is not running all tests listed in the config

We use grunt-protractor-runner and have 49 specs to run.
When I run them in Sauce Labs, there are times it just runs some of the tests but not all of them. Any idea why? Are there any Sauce settings to be passed apart from user and key in my protractor conf.js?
Using SauceLabs selenium server at http://ondemand.saucelabs.com:80/wd/hub
[launcher] Running 1 instances of WebDriver
Started
.....
Ran 5 of 49 specs
5 specs, 0 failures
This kind of output is usually produced when there are "focused" tests present in the codebase. Check if there are fdescribe or fit calls in your tests.
As a side note, to avoid focused tests being committed to the repository, we've used static code analysis - ESLint with the eslint-plugin-jasmine plugin. Then we've added a pre-commit git hook with the help of the pre-git package that runs the ESLint task before every commit, ultimately preventing any code-style violations from being committed to the repository.
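The relevant configuration might look roughly like this (a sketch assuming ESLint, eslint-plugin-jasmine and pre-git are installed; the "lint" script name is arbitrary, and the exact pre-git key names are taken from its README).

In .eslintrc.json:

{
  "plugins": ["jasmine"],
  "env": { "jasmine": true },
  "rules": {
    "jasmine/no-focused-tests": "error"
  }
}

And in package.json, pre-git is pointed at the lint script:

{
  "scripts": { "lint": "eslint ." },
  "config": {
    "pre-git": {
      "pre-commit": ["npm run lint"]
    }
  }
}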

Protractor implicit waiting not working when using grunt-protractor-runner

I am writing E2E tests for a JS application at the moment. Since I am not a JS developer, I have been investigating this topic for a while and ended up with the following setup:
Jasmine2 as testing framework
grunt as "build-tool"
protractor as test runner
jenkins as CI server (already in use for plenty of Java projects)
Although the application under test is not written in Angular, I decided to go for Protractor, following a nice guide on how to make Protractor run nicely even without Angular.
Writing some simple tests and running them locally worked like a charm. In order to implicitly wait for some elements to show up in the DOM, I used the following code in my conf.js:
onPrepare: function () {
  // Implicit wait: element lookups retry for up to 5 seconds before failing.
  browser.driver.manage().timeouts().implicitlyWait(5000);
}
All my tests were running as expected and so I decided to go to the next step, i.e. installation in the CI server.
The development team of the application I want to test was already using grunt to build their application, so I decided to just hook myself into that. The goal of my new grunt task (sketched below) is to:
assemble the application
start a local webserver running the application
run my protractor test
write some test reports
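A minimal sketch of that kind of Gruntfile, assuming grunt-contrib-connect serves the assembled app and grunt-protractor-runner drives the tests (ports, paths and task names are illustrative, not the actual project configuration):

module.exports = function (grunt) {
  grunt.initConfig({
    // Serve the assembled application for the duration of the grunt run.
    connect: {
      test: {
        options: { port: 9000, base: 'dist' }
      }
    },
    // Run the Protractor suite against that server.
    protractor: {
      e2e: {
        options: { configFile: 'conf.js', keepAlive: false }
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-connect');
  grunt.loadNpmTasks('grunt-protractor-runner');

  // Start the webserver, then run the E2E tests.
  grunt.registerTask('e2e', ['connect:test', 'protractor:e2e']);
};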
Finally I accomplished all of the above steps, but I am now dealing with a problem I cannot solve and did not find any help googling it. In order to run the Protractor tests from grunt I installed the grunt-protractor-runner.
The tests are running, BUT the implicit wait is not working, causing some tests to fail. When I added some explicit waits (browser.sleep(...)) everything was OK again, but that is not what I want.
Is there any chance to get implicitly waiting to work when using the grunt-protractor-runner?
UPDATE:
The problem does not have anything to do with the grunt-protractor-runner. When using a different webserver that I start up during my task, it works again. To be more precise: using the plugin "grunt-contrib-connect" the tests work; using the plugin "grunt-php" the tests fail. So I am looking for another PHP server for grunt now. I will be updating this question.
UPDATE 2:
While looking for some alternatives, I considered my options and finally decided to mock the PHP part of the app.

Can a Grunt task be configured to run a single Karma test?

When developing a test I would like to be able to run a single Karma unit test only. Is there a way to run one test only from the command line using Karma, or perhaps configure a simple task in Grunt that runs one test only?
Assuming that you're using Karma with Jasmine, simply change one or more describe() calls to fdescribe() calls, or it() calls to fit() calls. Only the f-prefixed tests will be run. This is documented in the focused_specs.js file of the Jasmine documentation.
(In older versions of Jasmine, the 'focused' versions of describe and it were instead called ddescribe and iit; try them if you're on an old version and can't upgrade.)
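For instance (Jasmine; the suite and spec contents are just placeholders):

// Only the f-prefixed suite/spec runs; everything else is reported as excluded.
fdescribe('checkout page', function () {
  fit('shows the order total', function () {
    expect(1 + 1).toBe(2);
  });
});

// Remove the f prefixes again when you are done, otherwise the rest of the
// suite keeps being excluded from every run.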
