How do I suppress console errors when testing a class that catches an error in a dependency with PHPUnit 9.5 and PHP 8.0?

I have a class that takes in a dependency. One of the things the class does is catch errors thrown by that dependency (it does something useful to log this, and goes on to return a value the rest of the code finds useful).
In PHPUnit, I can mock up the dependency and test as follows:
$stubqueryrunner = $this->createStub(QueryRunner::class);
$stubqueryrunner->method('getArray')->will($this->throwException(new \Error));
$inner = new myClass("mark", $stubqueryrunner);
$this->assertEquals(["what" => "", "why" => ""], $inner->getReasons());
The class I'm testing catches any error raised in its dependencies (and returns some default values) for the getReasons method.
My code passes my tests (i.e. the method I'm testing has indeed caught the error and substituted the default values).
However, the act of raising the error writes many lines of stack trace to the output of PHPUnit.
I can't use the @ operator at the point where I call it, because I'm on PHP 8, where the @ ("stfu") operator is deprecated for errors.
I can't use
/**
 * @expectedException PHPUnit\Framework\Error
 */
... because that annotation is no longer supported in PHPUnit 9.
So how do I actually suppress all the stack trace stuff from appearing in the PHPUnit console?
Update: Adding
$this->expectException(\Error::class); just creates an additional assertion, which then immediately fails, precisely because the class I'm testing has successfully caught the error.
The code I'm testing never actually produces errors. The only thing producing an error is the stub I've written in the unit test, when it runs throwException().
Adding error_reporting(0); in the test doesn't help.

I can't use the @ operator at the point where I call it, because I'm on PHP 8, where the @ ("stfu") operator is deprecated for errors.
The @ operator may be deprecated, but what it does is not.
Suppress the errors by setting an error handler (or blanking it) around the lines of code where you previously used the @ operator, and you should get it to work (functions that come to mind: error_reporting(), set_error_handler()).
This should then already suppress the error. Even if the PHPUnit run configuration is strict and would mark the test as failed on an error (or at least warn, which is what you normally want in testing), the error has been suppressed before PHPUnit's test runner could notice it (PHPUnit's error handler is pushed aside while your own handler is set, and becomes active again after you unset it), so there is no additional handling of it by PHPUnit any longer.
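For illustration, here is a minimal sketch of that idea applied to the test from the question. The class and method names are taken from the question; the no-op handler only helps if the console noise is actually routed through PHP's error handler:

use PHPUnit\Framework\TestCase;

final class MyClassTest extends TestCase
{
    public function testGetReasonsFallsBackToDefaults(): void
    {
        $stubqueryrunner = $this->createStub(QueryRunner::class);
        $stubqueryrunner->method('getArray')->willThrowException(new \Error());

        $inner = new myClass("mark", $stubqueryrunner);

        // Install a no-op error handler so nothing reaches the console
        // while the code under test runs; PHPUnit's own handler is
        // pushed aside for exactly this block.
        set_error_handler(static function (int $errno, string $errstr): bool {
            return true; // swallow the diagnostic
        });

        try {
            $this->assertEquals(["what" => "", "why" => ""], $inner->getReasons());
        } finally {
            restore_error_handler(); // hand control back to PHPUnit's handler
        }
    }
}

If the stack trace still shows up after this, it is most likely not going through the error handler at all, which is where the next point about Xdebug comes in.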
So how do I actually suppress all the stack trace stuff from appearing in the PHPUnit console?
The stack trace may not be fully gone after this change if some internal handling is still adding it. One extension I know of that does this is Xdebug, in which case the stack trace is caused by runtime configuration.
Take care that you're using the right settings in testing, or at least don't worry if you still see a stack trace you can't explain.
For Xdebug it's the develop mode: remove it from the mode settings and the stack traces Xdebug prints to aid development are gone.
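For example, with Xdebug 3 and a Composer-installed PHPUnit (the exact paths and values here are assumptions for illustration), the develop helpers can be switched off just for the test run:

; php.ini: drop "develop" from the mode list, e.g. keep only coverage
xdebug.mode=coverage

# or override the mode for a single run via the environment
XDEBUG_MODE=off vendor/bin/phpunit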
Getting it to work at the level of the test would be the first milestone. Let's assume you are already there and add some more context:
You now have two tests:
The original test (which is not sufficient for you under PHP 8, and maybe even fails).
The new (regression) test, where you take care of the error handling.
I'd say the (original) test shows that the code under test itself is not forward compatible with PHP 8. So what has been patched into the (regression) test to get it working should ideally become part of the implementation (a sketch follows below).
The previous test should then run again, regardless of the PHP version. So you would have the old test, plus the new test that guards against the regression.
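Purely as a hypothetical sketch of what "part of the implementation" could look like - only the names myClass, QueryRunner and getReasons() come from the question, everything else is assumed:

final class myClass
{
    public function __construct(
        private string $name,
        private QueryRunner $queryRunner
    ) {
    }

    public function getReasons(): array
    {
        // Silence diagnostics while the dependency runs, then restore
        // whatever handler (e.g. PHPUnit's) was active before.
        set_error_handler(static function (int $errno, string $errstr): bool {
            return true;
        });

        try {
            $rows = $this->queryRunner->getArray();
            return ["what" => $rows["what"] ?? "", "why" => $rows["why"] ?? ""];
        } catch (\Throwable $e) {
            // Log $e somewhere useful, then fall back to the defaults
            // the rest of the code expects.
            return ["what" => "", "why" => ""];
        } finally {
            restore_error_handler();
        }
    }
}

With the handler installed and restored inside the class itself, the original test no longer needs any special handling to stay quiet.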
I'd consider this a bit of a luxury problem. Not because it's superfluous, but because, going by the description of the error handling, this is not straightforward code; it needs extra effort in development, testing and maintenance, and should have decent staffing to take care of it.
See, even just in testing it's hard. Even simple things like what the @ operator actually does, while already identified as the cause of the change, remain puzzling - why?
It might be because error handling is hard, and testing for error handling is maybe even harder.
Q/A Reference:
How to simulate and properly write test with error_get_last() function in phpunit

Related

Symfony phpunit test 'Killed'

I get my repository using the entityManager, but if I try to access it in any way, it seems that the test starts an endless cycle and then kills itself. It does not matter whether I try to find an element inside it or just print the whole variable; the test just dies.
public function testStuff()
{
$moduleRepository = $this->entityManager->getRepository(Module::class);
fwrite(STDERR, print_r("asd", TRUE));
//$result = $moduleRepository->find(1);
fwrite(STDERR, print_r($moduleRepository, TRUE));
}
Output:
PHPUnit 9.5.5 by Sebastian Bergmann and contributors.
Testing
asdKilled
Without accessing the variable:
public function testStuff()
{
$moduleRepository = $this->entityManager->getRepository(Module::class);
fwrite(STDERR, print_r("asd", TRUE));
//$result = $moduleRepository->find(1);
//fwrite(STDERR, print_r($moduleRepository, TRUE));
}
Output:
PHPUnit 9.5.5 by Sebastian Bergmann and contributors.
Testing
asdmodule (App\Tests\module)
☢ Stuff
Time: 00:00.266, Memory: 22.00 MB
OK, but incomplete, skipped, or risky tests!
Tests: 1, Assertions: 0, Risky: 1.
I understand that by not accessing the variable, getRepository() might not even be exercised, but from the first test you can see that the method call itself completes without any errors; the problem only appears when you try to access the result.
This does not specifically answer your question, but it might be important, also for future visitors, to understand the "Killed" part.
This is the Unix error message "Killed", and other users are also seeing it in tests (there, also database related).
It is not PHPUnit that outputs it; it is the kernel killing the PHP process that runs PHPUnit, most likely the Out Of Memory Killer (OOM killer). As you can imagine, being out of memory (OOM) is not a state you'd like to be in - and neither does your kernel.
Similar to what you describe:
it seems that the test starts endless cycle
and likely one that consumes more and more memory until the kernel kills the process.
Then the "Killed" message is printed in the terminal.
Most likely, in your scenario, $this->entityManager is not properly initialized/configured, given where you locate the failing point within the test method.
Sometimes it also happens with clean-up code that goes wrong, but that is less likely; more often it is the configuration of the test fixture.
Now this may not give you the exact place and solution, but hopefully enough pointers. As these kinds of errors can be very hard to debug, it can help to create a new test case in isolation that tests for this behaviour only, and to reproduce it with as little test code as possible.
Run the test suite with only that test case, and once it's easy to reproduce, check with a debugger what kicks off the endless loop. Often it is a specific condition, e.g. missing configuration or initialization.
As the case is already isolated, debugging it in isolation often leads to better and faster results. And in case you run into a similar problem in the future, you already have a test case that shows how it can be done (or improved).
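A rough sketch of such an isolated test case (assuming a Symfony 5.3+ KernelTestCase and the default doctrine service wiring; the entity namespace below is a guess based on the question):

use App\Entity\Module;
use Symfony\Bundle\FrameworkBundle\Test\KernelTestCase;

final class ModuleRepositoryIsolationTest extends KernelTestCase
{
    public function testRepositoryCanBeQueried(): void
    {
        self::bootKernel();

        // Pull the entity manager from the test container instead of
        // relying on a fixture property that may not be initialized.
        $entityManager = self::getContainer()->get('doctrine')->getManager();

        $moduleRepository = $entityManager->getRepository(Module::class);

        // Assert on a small, concrete result rather than print_r()-ing
        // the whole repository object.
        $this->assertNotNull($moduleRepository->find(1));
    }
}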
$this->assertEquals(25, count($moduleRepository));
After I tried to assert on my results instead, it worked.

Why doesn't RStudio clear its Global Environment when something goes wrong?

Using R Version 4.0.2 and RStudio Version 1.3.1056
This is honestly one of the strangest features I've seen in RStudio, and I suppose there's probably a good reason for it to be there, but I'm currently not seeing it and I feel that it can lead to a lot of issues of misleading data.
Basically, to my understanding, when you create and open an R project in RStudio, RStudio creates a Session with a Global Environment.
Every time you run something, this is added to the Global Environment, I assume it's done as a cached value.
However, I've encountered situations where this feature leads to either:
Outdated/wrong values being shown in my tests.
Cases where a function stops working altogether after changing 1 piece of code, executing the new code, then undoing the change.
functions "bleeding into" other files without importing/sourcing them.
Cases 1 and 2 obviously lead to a lot of issues while testing. If you try to run a test like
test <- someFunction()
test #to display the value of the test
If the code is correct, the test will execute and the results of test will be stored in the Global Environment.
However, if you then break the code and run the test again, since test already has a value stored in the Global Environment, that old value will print, even though the function failed and thus didn't return anything. Of course, if you scroll up the console feed, you might find a line after test <- someFunction() saying "someFunction failed for X reason", but I still think this is both misleading and unintuitive. Sometimes the result of a function is really large and it's tedious to scroll all the way up the console to see whether the code exited with an error, whereas other IDEs would simply tell you at the end of the console output that the code failed, and not print the old, outdated value.
Example: running the proper code works; running it again after having changed is.na to the non-existent is.not.na still prints the old value belonging to the previous version of the function.
The third case can also lead to misleading scenarios.
If you execute a function in any file within your session, the function is stored in the Global Environment. This allows you to call the function from any other file, even if you haven't added a source statement at the top to load the file containing that function.
Once again, this can lead to cases where you inadvertently change or add a function in file B without running it, then try to invoke it from file A and get unexpected results, because you were actually invoking the old/outdated function, and the Global Environment has no idea about the changes or the new function.
All of these issues are rather easy to fix, but I think that's a bit beside the point. Why is this a feature in general? Why isn't the Global Environment emptied upon errors in execution? I know that you can manually empty the GE whenever you want, but it seems odd to me that the IDE doesn't do it on its own or, to my knowledge, even provide an option to do so.
I can imagine that it provides some benefit at runtime, but is it really that significant that it can justify these behaviors?

Handling Test Workflows in R testthat

I have two files, test_utils.r and test_core.r; they contain tests for various utilities and some core functions, separated into different 'contexts'. I can control the flow of tests within each file by moving my test_that() statements around.
But I am looking for a way to create different workflows, say ensuring that at run time the tests from context A_utils run first, followed by the tests from context B_Core, followed by context B_Utils.
Any ideas on how this can be achieved?
BrajeshS,
I have an idea. Have you tried the skip() function, available in version 0.9 or later? See the testthat documentation on page 7:
Description
This function allows you to skip a test if it's not currently available. This will produce an informative message, but will not cause the test suite to fail.
It was introduced to skip tests if an internet connection or an API is not available. Depending on your workflow, you could then jump over tests.
To see example code using skip_on_cran(), look at wibeasley's answer, where he provides test code in Rappster's reply: https://stackoverflow.com/a/26068397/4606130
I am still getting to grips with testthat. Hope this helps you.

Can I compile without lifecycle methods in EJB 2.0?

I am a beginner with EJBs and have a doubt about EJB 2.0. For session beans, I call create() without arguments on the EJBHome, but I didn't define the corresponding methods, i.e. ejbCreate() and ejbRemove(), in the bean. So, can I compile or run this code without those methods in the bean?
You can compile it but cannot run it. You must have a matching ejbCreate() method in your bean class.
If you are very new to EJB, I recommend testing your code with OpenEJB (here's a getting-started video). Not because I work on the project (which I do), but because we aggressively check code for mistakes and will print clear errors about what you might have done wrong.
The output comes in 3 levels of verbosity. At the most verbose level, the output reads more like an email response, and error messages include information like "put code like this -code-sample- in your bean." The code samples even try to use your method names and parameter names where possible.
It is also compiler-style, meaning that if you made the same mistake in 10 places, you will see all 10 on the first run and have the opportunity to fix them all at once, rather than the traditional cycle of: fix one issue, compile, test, get the same error elsewhere in the code, repeat N times.
And of course you can still deploy into another EJB container. It sounds like you are stuck with a pretty old one if you have to use EJB 2.0.
Here's a list of some of the mistakes that are checked

Any other alternative similar to @covers to mark that a particular method of a class is covered in a unit test

Please note that I am not instantiating my code in my unit test; rather, I am using curl to test a web service operation and then asserting the actual result against the expected value. I have no issues with my testing. I just want a way to show that the class is covered in the 'CodeCoverage' report of PHPUnderControl. I tried @covers - it just puts the class in the code coverage list of classes but reports its coverage as 0, pulling down my overall coverage. I wonder if there is a way to indicate explicitly that a unit test covers a few methods in a class.
Marking a test with @covers tells PHPUnit to use Xdebug to track code coverage in that area during that test. It does not by itself state that the code is covered. I assume that the code under test is running in a separate PHP process, invoked via Apache/httpd or some other HTTP method.
If you can find a way to bypass curl from within your test and call the code directly, the coverage would get tracked. That would leave only the request layer uncovered, which gets you closer. Otherwise you need to find a way to simulate the same calls that come in to your web service.
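As an illustration of calling the code directly so that coverage gets attributed, here is a hedged sketch - every class, method and array key below is invented for the example:

use PHPUnit\Framework\TestCase;

final class WidgetServiceTest extends TestCase
{
    /**
     * @covers \App\Service\WidgetService::create
     */
    public function testCreateReturnsExpectedPayload(): void
    {
        // Call the service class in-process instead of going through curl,
        // so the coverage driver can see which lines were executed.
        $service = new \App\Service\WidgetService();

        $result = $service->create('example');

        $this->assertSame('example', $result['name']);
    }
}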
