I know that PHPUnit reports memory usage and execution time during a test, but is there a way to use this data within an assert statement?
Say, for example, I want to assert that the memory consumed or the running time is greater than or equal to a specified value. I have searched the net as well as the PHPUnit manual and can't find exact information.
Thanks for any tips.
To run asserts on execution time and used memory you can use the phpunit_stopwatch_annotations package. It provides its own TestCase class, which adds support for special annotations (#executionTime and #memoryUsage) on your test methods.
How to use phpunit_stopwatch_annotations:
Extend your TestCase class from StopwatchAnnotations\TestCase
class ExampleTest extends \StopwatchAnnotations\TestCase
Start using annotations by writing
#executionTime time_in_milliseconds
or
#memoryUsage memory_in_bytes
in the docblock before your test methods. Execution time and memory usage will be asserted automatically after each test.
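A minimal sketch of what such a test might look like (values are illustrative; check the package documentation for the exact semantics):
class ExampleTest extends \StopwatchAnnotations\TestCase
{
    /**
     * #executionTime 500
     * #memoryUsage 4194304
     */
    public function testHeavyOperation()
    {
        // ... code under test; after the test finishes, the annotations
        // above are checked automatically (here: 500 ms and 4 MiB).
    }
}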
Maybe I'm thinking in the wrong direction, but why not try something like this?
class memTest extends PHPUnit_Framework_TestCase {
    public function testMemory() {
        $this->assertGreaterThanOrEqual(4194304, memory_get_usage());
    }
}
Just use your desired assertion method (here: assertGreaterThanOrEqual) and check your desired value against memory_get_usage().
In my case the output looks like this:
>phpunit unittests\memtest.php
PHPUnit 3.7.15
F
Time: 0 seconds, Memory: 1.75Mb
There was 1 failure:
1) memTest::testMemory
Failed asserting that 1503768 is equal to 4194304 or is greater than 4194304.
mypath\memtest.php:5
FAILURES!
Tests: 1, Assertions: 2, Failures: 1.
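The same idea works for execution time: measure it yourself around the code under test and assert on the elapsed value. A minimal sketch (the 2-second limit is arbitrary):
class timeTest extends PHPUnit_Framework_TestCase {
    public function testExecutionTime() {
        $start = microtime(true);
        // ... code under test ...
        $elapsed = microtime(true) - $start;
        // Fail if the code took 2 seconds or longer.
        $this->assertLessThan(2.0, $elapsed);
    }
}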
If anyone is interested, I rewrote the StopwatchAnnotations into a Listener to allow assertions on memory usage and execution time, here:
https://github.com/jclaveau/phpunit-profile-asserts
public function test_usages()
{
    ...
    $this->assertExecutionTimeBelow(100); // seconds
    $this->assertMemoryUsageBelow('1M');
}
Cheers
Related
I am a beginner to programming. I am trying to run a simulation of a combustion chamber using reactingFoam.
I have modified the counterflow2D tutorial.
For those who maybe don't know OpenFOAM, it is a program built in C++, but it does not require C++ programming, just correctly defining the variables in the required files.
In one of my first tries I made a very simple model, but since I wanted to check it thoroughly I set it to 60 seconds with a 1e-6 timestep.
My computer is not very powerful, so it took approximately a day (by this I mean I'd like to find a solution rather than repeat the simulation).
I executed the solver reactingFoam using 4 processors in parallel with
mpirun -np 4 reactingFoam -parallel > log
The log does not show any evidence of an error.
The problem is that reconstructPar works perfectly, but when I then try to view the results with paraFoam this error is shown:
From function bool Foam::IOobject::readHeader(Foam::Istream&)
in file db/IOobject/IOobjectReadHeader.C at line 88
Reading "mypath/constant/reactions" at line 1
First token could not be read or is not the keyword 'FoamFile'
I have read that maybe some files are empty when they are not supposed to be, but I have not found that problem.
My 'reactions' file has not been modified from the tutorial and has always worked.
edit:
Sorry for the vague question. I have modified it a bit.
A typical OpenFOAM dictionary file always begins with a header entry named FoamFile, which is read through a Foam::Istream. An example from a typical system/controlDict file can be seen below:
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      controlDict;
}
During the construction of the dictionary header, if this entry is absent, OpenFOAM stops and raises exactly the error message you have experienced:
First token could not be read or is not the keyword 'FoamFile'
The benefit of the header is presumably to support OpenFOAM's abstraction mechanisms, which would be difficult to implement otherwise.
As mentioned in the comments, adding the header entity almost always solves this problem.
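For the file in question, that means prepending something like this (a sketch assuming the standard constant/reactions dictionary from the tutorial):
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "constant";
    object      reactions;
}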
I have a large number of test cases, in which several test cases are interdependent. Is it possible, while a later test case is executing, to find out the status of a previously executed test case?
In my case, the 99th test case depends on the status of some prior test cases and thus, if either the 24th or the 38th fails I would like the 99th test case NOT to get executed at all and thus save me a lot of time.
Kindly, explain with some example if possible. Thanks in advance!
Robot is very extensible, and a feature that was introduced in version 2.8.5 makes it easy to write a keyword that will fail if another test has failed. This feature is the ability for a library to act as a listener. With this, a library can keep track of the pass/fail status of each test. With that knowledge, you can create a keyword that fails immediately if some other test fails.
The basic idea is, cache the pass/fail status as each test finishes (via the special _end_test method). Then, use this value to determine whether to fail immediately or not.
Here's an example of how to use such a keyword:
*** Settings ***
Library    /path/to/DependencyLibrary.py

*** Test Cases ***
Example of a failing test
    fail    this test has failed

Example of a dependent test
    [Setup]    Require test case    Example of a failing test
    log    hello, world
Here is the library definition:
from robot.libraries.BuiltIn import BuiltIn

class DependencyLibrary(object):
    ROBOT_LISTENER_API_VERSION = 2
    ROBOT_LIBRARY_SCOPE = "GLOBAL"

    def __init__(self):
        self.ROBOT_LIBRARY_LISTENER = self
        self.test_status = {}

    def require_test_case(self, name):
        key = name.lower()
        if key not in self.test_status:
            BuiltIn().fail("required test case can't be found: '%s'" % name)
        if self.test_status[key] != "PASS":
            BuiltIn().fail("required test case failed: '%s'" % name)
        return True

    def _end_test(self, name, attrs):
        self.test_status[name.lower()] = attrs["status"]
To solve this problem I'm using something like this:
Run Keyword If    '${PREV TEST STATUS}' == 'PASS'    myKeyword
so maybe this will also be usable for you.
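Note that ${PREV TEST STATUS} only refers to the test that ran immediately before, so this is a sketch of the pattern rather than a general dependency mechanism (keyword names are placeholders):
*** Test Cases ***
First Test
    Some Keyword

Second Test
    Run Keyword If    '${PREV TEST STATUS}' == 'PASS'    My Keyword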
I'm getting this on the console in a QML app:
QFont::setPointSizeF: Point size <= 0 (0.000000), must be greater than 0
The app is not crashing, so I can't use the debugger to get a backtrace. How do I see where the warning originates?
If you know the function the warning occurs in (in this case, QFont::setPointSizeF()), you can put a breakpoint there. Following the stack trace will lead you to the code that calls that function.
If the warning doesn't include the name of the function and you have the source code available, use git grep with part of the warning to get an idea of where it comes from. This approach can be a bit of trial and error, as the code may span more than one line, etc, and so you might have to try different parts of the string.
If the warning doesn't include the name of the function, you don't have the source code available and/or you don't like the previous approach, use the QT_MESSAGE_PATTERN environment variable:
QT_MESSAGE_PATTERN="%{function}: %{message}"
For the full list of variables at your disposal, see the qSetMessagePattern() docs:
%{appname} - QCoreApplication::applicationName()
%{category} - Logging category
%{file} - Path to source file
%{function} - Function
%{line} - Line in source file
%{message} - The actual message
%{pid} - QCoreApplication::applicationPid()
%{threadid} - The system-wide ID of current thread (if it can be obtained)
%{qthreadptr} - A pointer to the current QThread (result of QThread::currentThread())
%{type} - "debug", "warning", "critical" or "fatal"
%{time process} - time of the message, in seconds since the process started (the token "process" is literal)
%{time boot} - the time of the message, in seconds since the system boot if that can be determined (the token "boot" is literal). If the time since boot could not be obtained, the output is indeterminate (see QElapsedTimer::msecsSinceReference()).
%{time [format]} - system time when the message occurred, formatted by passing the format to QDateTime::toString(). If the format is not specified, the format of Qt::ISODate is used.
%{backtrace [depth=N] [separator="..."]} - A backtrace with the number of frames specified by the optional depth parameter (defaults to 5), and separated by the optional separator parameter (defaults to "|"). This expansion is available only on some platforms (currently only platforms using glibc). Names are only known for exported functions. If you want to see the name of every function in your application, use QMAKE_LFLAGS += -rdynamic. When reading backtraces, take into account that frames might be missing due to inlining or tail call optimization.
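For example, to see where each message originates (note that %{file}, %{line} and %{function} come from Qt's logging context, which is compiled in by default only for debug builds; ./myapp is a placeholder for your binary):
QT_MESSAGE_PATTERN="%{function} (%{file}:%{line}): %{message}" ./myapp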
On an unrelated note, the %{time [format]} placeholder is quite useful to quickly "profile" code by qDebug()ing before and after it.
I think you can use qInstallMessageHandler (Qt5) or qInstallMsgHandler (Qt4) to specify a callback which will intercept all qDebug() / qInfo() / etc. messages (example code is in the link). Then you can just add a breakpoint in this callback function and get a nice callstack.
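As a sketch (Qt 5 API; trapping with qFatal() is just one way to force a stop you can backtrace, and the matched string is taken from the warning above):
#include <cstdio>
#include <QtGlobal>
#include <QString>

// Custom Qt 5 message handler: forward everything to stderr, but abort
// via qFatal() when the offending warning appears, so the debugger (or a
// core dump) shows the call stack at exactly that point.
static void handler(QtMsgType type, const QMessageLogContext &, const QString &msg)
{
    fprintf(stderr, "%s\n", qPrintable(msg));
    if (type == QtWarningMsg && msg.contains(QLatin1String("QFont::setPointSizeF")))
        qFatal("QFont::setPointSizeF warning trapped");
}

// Install it early in main(), before the warning can occur:
//     qInstallMessageHandler(handler);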
Aside from the obvious, searching your code for calls to setPointSize[F], you can try the following depending on your environment (which you didn't disclose):
If you have the debugging symbols of the Qt libs installed and are using a decent debugger, you can set a conditional breakpoint on the first line in QFont::setPointSizeF() with the condition set to pointSize <= 0. Even if conditional breakpoints don't work you should still be able to set one and step through every call until you've found the culprit.
On Linux there's the tool ltrace which displays all calls of a binary into shared libs, and I suppose there's something similar in the M$ VS toolbox. You can grep the output for calls to setPointSize directly, but of course this won't work for calls within the lib itself (which I guess could be the case when it handles the QML internally).
When an out-of-memory error is raised in a parfor, is there any way to kill only one Matlab slave to free some memory instead of having the entire script terminate?
Here is what happens by default when an out-of-memory error occurs in a parfor: the entire script terminates.
I wish there was a way to just kill one slave (i.e. remove a worker from the parpool), or to stop using it, to release as much memory as possible from it.
If you get an out-of-memory error in the master process, there is no way to fix this. For an out-of-memory error on a slave, this should do it:
The simple idea of the code: restart the parfor again and again with the missing data until you have all results. If one iteration fails, a flag (file) is written, which lets all remaining iterations throw an error as soon as the first error has occurred. This way we get "out of the loop" without wasting time producing further out-of-memory errors.
%Your intended iterator
iterator=1:10;
%flags which indicate which iterations succeeded
succeeded=false(size(iterator));
%result array
result=nan(size(iterator));
FLAG='ANY_WORKER_CRASHED';
while ~all(succeeded)
    fprintf('Another try\n')
    %determine which iterations still need to be done
    todo=iterator(~succeeded);
    %initialize array for the remaining results
    partresult=nan(size(todo));
    %initialize flags which indicate which iterations succeeded (we cannot
    %simply throw errors, as that throws away the results)
    partsucceeded=false(size(todo));
    %flag file indicates that a worker crashed. Have to use a file-based
    %solution because workers cannot share ordinary variables.
    if exist(FLAG,'file')
        delete(FLAG);
    end
    try
        parfor falseindex=1:sum(~succeeded)
            realindex=todo(falseindex);
            try
                % The flag is used to let all other workers jump out of the
                % loop as soon as one calculation has crashed.
                if exist(FLAG,'file')
                    error('some other worker crashed');
                end
                % insert your code here
                %dummy code which randomly throws an exception
                if rand<.5
                    error('hit out of memory')
                end
                partresult(falseindex)=realindex*2;
                % End of user code
                partsucceeded(falseindex)=true;
                fprintf('trying to run %d and succeeded\n',realindex)
            catch ME
                % catch errors within workers to preserve work
                partresult(falseindex)=nan;
                partsucceeded(falseindex)=false;
                fprintf('trying to run %d but it failed\n',realindex)
                fclose(fopen(FLAG,'w'));
            end
        end
    catch
        %reduce poolsize by 1 (matlabpool is the pre-R2013b API;
        %use parpool/delete(gcp) on newer releases)
        newsize = matlabpool('size')-1;
        matlabpool close
        matlabpool(newsize)
    end
    %put the results of the current pass into the full result
    result(~succeeded)=partresult;
    succeeded(~succeeded)=partsucceeded;
end
After quite a bit of research, and a lot of trial and error, I think I may have a decent, compact answer. What you're going to do is:
Declare some max memory value. You can set it dynamically using the MATLAB function memory, but I like to set it directly.
Call memory inside your parfor loop, which returns the memory information for that particular worker.
If the memory used by the worker exceeds the threshold, cancel the task that worker was working on. Now, here it gets a bit tricky. Depending on the way you're using parfor, you'll need to either cancel the task or delete the worker. I've verified that it works with the code below when there is one task per worker, on a remote cluster.
Insert the following code at the beginning of your parfor contents. Tweak as necessary.
memLimit = 280000000; %// This doesn't have to be in the parfor. Everything else does.
memData = memory;     %// note: memory() is only available on Windows
if memData.MemUsedMATLAB > memLimit
    task = getCurrentTask();
    cancel(task);
end
Enjoy! (Fun question, by the way.)
One other option to consider: since R2013b, you can open a parallel pool with 'SpmdEnabled' set to false. This allows MATLAB worker processes to die without the whole pool being shut down; see the doc here: http://www.mathworks.co.uk/help/distcomp/parpool.html . Of course, you still need to arrange somehow to shut down the workers.
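For example (a minimal sketch; the pool size and profile name are placeholders):
% Open a pool whose workers are allowed to die individually (R2013b+).
pool = parpool('local', 4, 'SpmdEnabled', false);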
It is difficult for me to visually catch the boundary between test runs.
Is it possible to clear the console for each run of Testacular/Karma + Jasmine, or at least to put there something easily caught by the eye, for example a series of newlines?
Note
Currently this is an abandoned question, because I am no longer trying to perform the tasks described in it. Please do not ask for additional info. Write only if you know for sure what to do. It will help other people.
Write your own reporter, and do whatever you want with it.
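A minimal reporter sketch (all names here are illustrative, not an existing plugin) could emit a clear-screen sequence and a banner before each run:
// clear-reporter.js -- illustrative sketch of a custom Karma reporter
var ClearReporter = function (baseReporterDecorator) {
  baseReporterDecorator(this);

  // Called once per run, before any results arrive: clear the terminal
  // and print a banner so runs are easy to tell apart.
  this.onRunStart = function () {
    this.write('\u001b[2J\u001b[1;1H========== new test run ==========\n');
  };
};

ClearReporter.$inject = ['baseReporterDecorator'];

module.exports = {
  'reporter:clear': ['type', ClearReporter]
};
You would then load the file via the plugins setting and add 'clear' to reporters in karma.conf.js.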
Also, if you're on a Mac and use Growl, take a look at karma-growl-reporter
I am not sure I fully understand your need, but karma-spec-reporter can give you a detailed view of your test execution. Output example from karma-spec-reporter-example:
array:
  push:
    PASSED - should add an element
    PASSED - should remove an element
    FAILED - should do magic (this test will fail) expected [] to include 'magic'
      at /home/michael/development/codecentric/karma-spec-reporter-example/node_modules/chai/chai.js:401
...
PhantomJS 1.8.1 (Linux): Executed 3 of 3 (1 FAILED) (0.086 secs / NaN secs)
There's now a reporter available for this: https://github.com/arthurc/karma-clear-screen-reporter
It's working for me on OSX.