How can I get failure details from PHPUnit results when running through Phing?

[phpunit] Tests run: 415, Failures: 13, Errors: 19, Incomplete: 0, Skipped: 0, Time elapsed: 45.19617 s
I want to find the details of those failures and errors. How can I do that?

Can you post your build file?
In any case, to have the phpunit task output additional information, use the 'plain' formatter in the task call:
<phpunit ...>
    <formatter usefile="false" type="plain"/>
    <batchtest>
    ...
    </batchtest>
</phpunit>
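For context, here is a minimal sketch of how such a formatter might sit inside a complete Phing target. The target name, directory, and file pattern are placeholders, not taken from the question:

```xml
<!-- Sketch of a Phing build target; "test" and the tests/ dir are assumptions -->
<target name="test">
  <phpunit haltonfailure="false" haltonerror="false">
    <!-- type="plain" prints each failure/error with its message;
         usefile="false" sends the report to the console instead of a file -->
    <formatter usefile="false" type="plain"/>
    <batchtest>
      <fileset dir="tests">
        <include name="**/*Test.php"/>
      </fileset>
    </batchtest>
  </phpunit>
</target>
```

With `haltonfailure`/`haltonerror` set to false, the build continues past failing tests so the plain formatter can report all of them at once.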

Related

phpunit 9 does not print error stack trace

I recently updated my unit-test environment from PHPUnit v8.5.13 to v9.5.1. Same config file, same PHP version, everything the same except the PHPUnit PHAR file used on the command line.
All works fine, but when an unexpected exception arises, the stack trace is no longer printed (as it was up to v8):
PHPUnit 9.5.1 by Sebastian Bergmann and contributors.
E
Time: 00:00.973, Memory: 42.00 MB
There was 1 error:
1) moduleTmEmployeeTest::testAdd with data set "default" (array(), array())
mobEx: this is my exception
ERRORS!
Tests: 1, Assertions: 0, Errors: 1.
For comparison, the same run under PHPUnit 8:
PHPUnit 8.5.13 by Sebastian Bergmann and contributors.
E
Time: 1.36 seconds, Memory: 36.00 MB
There was 1 error:
1) moduleTmEmployeeTest::testAdd with data set "default" (array(), array())
mobEx: this is my exception
[path]\unittests\phpunit\tm\moduleTmEmployeeTest.php:54
ERRORS!
Tests: 1, Assertions: 0, Errors: 1.
I checked the docs and the migration guide but got no clue which new flag to set or what else I could do. Does anyone know?
As referenced by #sebastian in his comment, it's a bug that was fixed in the current release, v9.5.2.

How do I get the dotnet test command to display individual test run times?

I am switching from JavaScript's Mocha to the MSTest framework on .NET Core 3.x.
Currently, running dotnet test displays:
Passed! - Failed: 0, Passed: 6, Skipped: 0, Total: 6, Duration: 521 ms
Is there a way I can get it to display similar to Mocha?
TestClass1
TestMethod1 (51ms)
TestMethod2 (100ms)
TestMethod3 (24ms)
TestClass2
TestMethod1 (115ms)
TestMethod2 (55ms)
You can try the following to get similar output:
dotnet test -l "console;verbosity=detailed"
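As a hedged aside, the `-l`/`--logger` switch accepts other loggers too; for example, the built-in TRX logger records per-test durations in a results file (the file name below is just an illustration):

```
# Per-test names and timings on the console
dotnet test -l "console;verbosity=detailed"

# Or capture per-test results (including durations) in a TRX file
dotnet test -l "trx;LogFileName=results.trx"
```

The detailed console verbosity is usually the closest match to Mocha's per-test listing.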

What is the purpose of Targets in the .NET Core xUnit test-runner output, and how do I use them?

I'm building a Web API using .Net Core. I'm building and running my unit tests as I go. When I run my tests, I get something like this:
The-Monarch:MyAwesomeService.Tests homr$ dotnet test
Project MyAwesomeService (.NETCoreApp,Version=v1.0) was previously compiled. Skipping compilation.
Project MyAwesomeService.Tests (.NETCoreApp,Version=v1.0) will be compiled because inputs were modified
Compiling MyAwesomeService.Tests for .NETCoreApp,Version=v1.0
Compilation succeeded.
0 Warning(s)
0 Error(s)
Time elapsed 00:00:01.2618204
xUnit.net .NET CLI test runner (64-bit osx.10.11-x64)
Discovering: MyAwesomeService.Tests
Discovered: MyAwesomeService.Tests
Starting: MyAwesomeService.Tests
Finished: MyAwesomeService.Tests
=== TEST EXECUTION SUMMARY ===
MyAwesomeService.Tests Total: 20, Errors: 0, Failed: 0, Skipped: 0, Time: 3.522s
SUMMARY: Total: 1 targets, Passed: 1, Failed: 0.
The-Monarch:MyAwesomeService.Tests homr$
I'm curious about the line that says SUMMARY: Total: 1 targets, Passed: 1, Failed: 0. So far, I've never seen a total higher than 1 here. How do you get more than one "test target" in a single output, and what is their purpose?

Phpunit.xml.dist Symfony2

I am trying to follow this guide: http://welcometothebundle.com/symfony2-rest-api-the-best-2013-way/. When I run this command:
bin/phpunit -c app
I get this error:
Configuration read from /home/ismail/NetBeansProjects/tuto/blog-rest-
symfony2/app/phpunit.xml.dist
E.
Time: 1.87 seconds, Memory: 15.75Mb
FAILURES!
Tests: 2, Assertions: 3, Errors: 1.
Any ideas, please?
The output states that one of your tests produced an error. PHPUnit prints a '.' for every passing test, an 'F' for an assertion failure, and an 'E' for an error (e.g. an uncaught exception). Check your test code and fix it.
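To see which test the 'E' belongs to and why it errored, PHPUnit's CLI offers a few switches; the ones below existed in the PHPUnit versions of that era, though exact availability depends on your version:

```
# More context on each failure/error (test names, extra detail)
bin/phpunit -c app --verbose

# Print each test's name as it starts, so the E is easy to attribute
bin/phpunit -c app --debug

# Stop at the first error so its trace is the last thing printed
bin/phpunit -c app --stop-on-error
```

PHPUnit also prints a numbered "There was 1 error:" section with the exception message and trace after the progress line, so scrolling past the dots is often enough.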

Plone test-runner errors

Can anyone please explain what could be wrong here?
I have a fresh Plone 4.1.4 installation via buildout and a fresh out-of-the-box Plone site (no work has been done on the site). Running the full test suite with ./bin/test --all (just out of curiosity) gives lots of errors like the following:
Mik#S-linux:/Plone414/PLONE414/zinstance>
./bin/test --all
./bin/test:239: DeprecationWarning: zope.testing.testrunner is deprecated in favour of zope.testrunner.
/Plone414/PLONE414/buildout-cache/eggs/zope.testing-3.9.7-py2.6.egg/zope/testing/testrunner/formatter.py:28: DeprecationWarning: zope.testing.exceptions is deprecated in favour of zope.testrunner.exceptions
  from zope.testing.exceptions import DocTestFailureException
Running Testing.ZopeTestCase.layer.ZopeLite tests:
  Set up Testing.ZopeTestCase.layer.ZopeLite in 0.071 seconds.
  Running: 8/44 (18.2%)

Failure in test testDateTime (Products.DocFinderTab.tests.testAnalyse.TestAnalyse)
Traceback (most recent call last):
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 279, in run
    testMethod()
  File "/Plone414/PLONE414/buildout-cache/eggs/Products.DocFinderTab-1.0.5-py2.6.egg/Products/DocFinderTab/tests/testAnalyse.py", line 198, in testDateTime
    self.assertEqual(self.ob.getdoc('_DateTime').Type(), 'DateTime')
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 350, in failUnlessEqual
    (msg or '%r != %r' % (first, second))
AssertionError: 'DateTime instance' != 'DateTime'

Ran 44 tests with 1 failures and 0 errors in 1.376 seconds.
Running zope.testing.testrunner.layer.UnitTests tests:
  Tear down Testing.ZopeTestCase.layer.ZopeLite in 0.000 seconds.
  Set up zope.testing.testrunner.layer.UnitTests in 0.000 seconds.
  Running: 2/47 (4.3%)

Failure in test test_search_modules (plone.reload.tests.test_code.TestSearch)
Traceback (most recent call last):
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 279, in run
    testMethod()
  File "/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/tests/test_code.py", line 33, in test_search_modules
    self.assertTrue(found)
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 325, in failUnless
    if not expr: raise self.failureException, msg
AssertionError
  5/47 (10.6%)

Error in test test_check_mod_times_change (plone.reload.tests.test_code.TestTimes)
Traceback (most recent call last):
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 279, in run
    testMethod()
  File "/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/tests/test_code.py", line 82, in test_check_mod_times_change
    our_entry = MOD_TIMES[our_package]
KeyError: '/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/__init__.pyc'
  8/47 (17.0%)

Failure in test test_get_mod_times (plone.reload.tests.test_code.TestTimes)
Traceback (most recent call last):
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 279, in run
    testMethod()
  File "/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/tests/test_code.py", line 70, in test_get_mod_times
    self.assertTrue(our_package in times)
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 325, in failUnless
    if not expr: raise self.failureException, msg
AssertionError
  10/47 (21.3%)

Error in test test_reload_code_change (plone.reload.tests.test_code.TestTimes)
Traceback (most recent call last):
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 279, in run
    testMethod()
  File "/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/tests/test_code.py", line 98, in test_reload_code_change
    our_entry = MOD_TIMES[our_package]
KeyError: '/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/__init__.pyc'

Ran 47 tests with 2 failures and 2 errors in 0.102 seconds.
Tearing down left over layers:
  Tear down zope.testing.testrunner.layer.UnitTests in 0.000 seconds.
Total: 91 tests, 3 failures, 2 errors in 1.682 seconds.
This isn't a supported way to run the tests. Some of the tests for the components of Plone change global state and then do not clean up after themselves, causing failures in tests that run later which depended on that state being a certain way. The environment we use to develop Plone, buildout.coredev, uses the plone.recipe.alltests buildout recipe to set up a script that can run all the tests successfully by isolating some packages from others.
This is of course not ideal, but it's a pragmatic solution until someone does the work to find and solve the test isolation problems.
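As a rough illustration only, wiring plone.recipe.alltests into a buildout might look like the fragment below. The part name and the eggs expression are assumptions; consult the recipe's own documentation and buildout.coredev for the real configuration:

```
[buildout]
parts += alltests

# Hypothetical part: generates a bin/alltests script that runs the
# test suites of the listed eggs in isolated groups
[alltests]
recipe = plone.recipe.alltests
eggs = ${instance:eggs}
```

The generated script runs packages in separate groups precisely to work around the test-isolation problems described above.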
