Skipping one test if another failed - python-unittest

I'm writing some tests which rely on an external binary being present. Having a test case that ensures the binary is present and functioning seems useful. I want multiple tests, expressing:
Is the binary found by subprocess.Popen?
Can it do basic X?
Can it do basic Y?
I want a pattern to skip the basic tests X and Y if the first one fails. It would also be good to test several binaries in the same TestCase, with different kinds of X and Y.
Some approaches come to mind:
Decorate test_X() and test_Y() with @unittest.skipIf(basic failed), either through checking TestResults or setting a flag
Have test_basic() call TestResults.addSkip(test_binary_X) upon failure (but this doesn't appear to be exposed)
Put test_basic() into one TestCase and the others into another
Make test_basic() part of setUp(), so it's run for each of X and Y
Unittest doesn't run tests in the order they are written (by default it runs them in alphabetical order of method name), so it may be hard to ensure the sequence.
# Something like:
class TestBasics(unittest.TestCase):
    def test_basic(self):
        try:
            subprocess.Popen(["binary"], stdout=None, stderr=None)
        except OSError as e:
            # fail depending on the error
            ...

    @unittest.skipIf( test binary in path failed )   # pseudocode condition
    def test_X(self):
        pass  # assert ...

    @unittest.skipIf( test binary in path failed )   # pseudocode condition
    def test_Y(self):
        pass  # assert ...
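
For illustration, here is a minimal sketch of the setUp()-based approach from the list above (the binary name "mybinary" is just a placeholder, and probing with a bare Popen call is only one possible check):

import subprocess
import unittest

class TestMyBinary(unittest.TestCase):
    binary = "mybinary"      # placeholder -- substitute the real executable
    binary_error = None      # set by setUpClass() if the probe fails

    @classmethod
    def setUpClass(cls):
        # Probe the binary once for the whole TestCase.
        try:
            proc = subprocess.Popen([cls.binary],
                                    stdout=subprocess.DEVNULL,
                                    stderr=subprocess.DEVNULL)
            proc.kill()
            proc.wait()
        except OSError as e:
            cls.binary_error = e

    def setUp(self):
        # Skip every test except test_basic itself when the probe failed.
        if self.binary_error is not None and self._testMethodName != "test_basic":
            self.skipTest("binary %r is not usable: %s"
                          % (self.binary, self.binary_error))

    def test_basic(self):
        self.assertIsNone(self.binary_error,
                          "could not run %r: %s" % (self.binary, self.binary_error))

    def test_X(self):
        pass  # exercise basic feature X of the binary here

    def test_Y(self):
        pass  # exercise basic feature Y of the binary here

if __name__ == "__main__":
    unittest.main()

Run as usual with python -m unittest; test_X and test_Y are then reported as skipped rather than failed when the binary cannot be started.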

Related

Julia Distributed, redundant iterations appearing

I ran
mpiexec -n $nprocs julia --project myfile.jl
on a cluster, where myfile.jl has the following form
using Distributed; using Dates; using JLD2; using LaTeXStrings
@everywhere begin
using SharedArrays; using QuantumOptics; using LinearAlgebra; using Plots; using Statistics; using DifferentialEquations; using StaticArrays
#Defining some other functions and SharedArrays to be used later e.g.
MySharedArray=SharedArray{SVector{Nt,Float64}}(Np,Np)
end
@sync @distributed for pp in 1:Np^2
for jj in 1:Nj
#do some stuff with local variables
for tt in 1:Nt
#do some stuff with local variables
end
end
MySharedArray[pp]=... #using linear indexing
println("$pp finished")
end
timestr=Dates.format(Dates.now(), "yyyy-mm-dd-HH:MM:SS")
filename="MyName"*timestr
@save filename*".jld2"
#later on, some other small stuff like making and saving a figure. (This does give an error "no method matching heatmap_edges(::Surface{Array{Float64,2}}, ::Symbol)" but I think that this is a technical thing about Plots so not very related to the bigger issue here)
However, when looking at the output, there are a few issues that make me conclude that something is wrong
The "$pp finished" output is repeated many times for each value of pp. It seems that this amount is actually equal to 32=$nprocs
Despite the code not being finished, "MyName" files are generated. There should be only one, but I get a dozen of them with different timestr components
EDIT: two more things that I can add
the output of the different "MyName" files is not identical, but this is expected since random numbers are used in the inner loops. There are 28 of them, a number that I don't easily recognize except that it's again close to the 32 $nprocs
earlier, I wrote that the walltime was exceeded, but this turns out not to be true. The .o file ends with "BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES ... EXIT CODE :9", pretty shortly after the last output file.
$nprocs is obtained in the PBS script through
#PBS -l select=1:ncpus=32:mpiprocs=32
nprocs=`cat $PBS_NODEFILE | wc -l`
When invoked like this, mpiexec simply launches $nprocs independent Julia processes that each run the entire script (and therefore the whole loop and the save), which explains the repeated "$pp finished" lines and the multiple output files. As pointed out by adamslc on the Julia discourse, the proper way to use Julia on a cluster is to either
start a session with a single process from the job script and add more workers with addprocs() in the Julia script itself, or
use more specialized Julia packages.
https://discourse.julialang.org/t/julia-distributed-redundant-iterations-appearing/57682/3

RED Robot Editor - One TestCase Dependent To Other [duplicate]

I have a large number of test cases, in which several test cases are interdependent. Is it possible that while a later test case is getting executed you can find out the status of a previously executed test case?
In my case, the 99th test case depends on the status of some prior test cases and thus, if either the 24th or the 38th fails I would like the 99th test case NOT to get executed at all and thus save me a lot of time.
Kindly, explain with some example if possible. Thanks in advance!
Robot is very extensible, and a feature that was introduced in version 2.8.5 makes it easy to write a keyword that will fail if another test has failed. This feature is the ability for a library to act as a listener. With this, a library can keep track of the pass/fail status of each test. With that knowledge, you can create a keyword that fails immediately if some other test fails.
The basic idea is to cache the pass/fail status as each test finishes (via the special _end_test listener method), and then use this value to determine whether to fail immediately or not.
Here's an example of how to use such a keyword:
*** Settings ***
Library    /path/to/DependencyLibrary.py

*** Test Cases ***
Example of a failing test
    fail    this test has failed

Example of a dependent test
    [Setup]    Require test case    Example of a failing test
    log    hello, world
Here is the library definition:
from robot.libraries.BuiltIn import BuiltIn

class DependencyLibrary(object):
    ROBOT_LISTENER_API_VERSION = 2
    ROBOT_LIBRARY_SCOPE = "GLOBAL"

    def __init__(self):
        self.ROBOT_LIBRARY_LISTENER = self
        self.test_status = {}

    def require_test_case(self, name):
        key = name.lower()
        if key not in self.test_status:
            BuiltIn().fail("required test case can't be found: '%s'" % name)
        if self.test_status[key] != "PASS":
            BuiltIn().fail("required test case failed: '%s'" % name)
        return True

    def _end_test(self, name, attrs):
        self.test_status[name.lower()] = attrs["status"]
To solve this problem I'm using something like this:
Run Keyword If    '${PREV TEST STATUS}' == 'PASS'    myKeyword
so maybe this will also be usable for you.

Qt error is printed on the console; how to see where it originates from?

I'm getting this on the console in a QML app:
QFont::setPointSizeF: Point size <= 0 (0.000000), must be greater than 0
The app is not crashing so I can't use the debugger to get a backtrace for the exception. How do I see where the error originates from?
If you know the function the warning occurs in (in this case, QFont::setPointSizeF()), you can put a breakpoint there. Following the stack trace will lead you to the code that calls that function.
If the warning doesn't include the name of the function and you have the source code available, use git grep with part of the warning to get an idea of where it comes from. This approach can be a bit of trial and error, as the code may span more than one line, etc, and so you might have to try different parts of the string.
If the warning doesn't include the name of the function, you don't have the source code available and/or you don't like the previous approach, use the QT_MESSAGE_PATTERN environment variable:
QT_MESSAGE_PATTERN="%{function}: %{message}"
For the full list of variables at your disposal, see the qSetMessagePattern() docs:
%{appname} - QCoreApplication::applicationName()
%{category} - Logging category
%{file} - Path to source file
%{function} - Function
%{line} - Line in source file
%{message} - The actual message
%{pid} - QCoreApplication::applicationPid()
%{threadid} - The system-wide ID of current thread (if it can be obtained)
%{qthreadptr} - A pointer to the current QThread (result of QThread::currentThread())
%{type} - "debug", "warning", "critical" or "fatal"
%{time process} - time of the message, in seconds since the process started (the token "process" is literal)
%{time boot} - the time of the message, in seconds since the system boot if that can be determined (the token "boot" is literal). If the time since boot could not be obtained, the output is indeterminate (see QElapsedTimer::msecsSinceReference()).
%{time [format]} - system time when the message occurred, formatted by passing the format to QDateTime::toString(). If the format is not specified, the format of Qt::ISODate is used.
%{backtrace [depth=N] [separator="..."]} - A backtrace with the number of frames specified by the optional depth parameter (defaults to 5), and separated by the optional separator parameter (defaults to "|"). This expansion is available only on some platforms (currently only platforms using glibc). Names are only known for exported functions. If you want to see the name of every function in your application, use QMAKE_LFLAGS += -rdynamic. When reading backtraces, take into account that frames might be missing due to inlining or tail call optimization.
On an unrelated note, the %{time [format]} placeholder is quite useful to quickly "profile" code by qDebug()ing before and after it.
I think you can use qInstallMessageHandler (Qt5) or qInstallMsgHandler (Qt4) to specify a callback which will intercept all qDebug() / qInfo() / etc. messages (example code is in the link). Then you can just add a breakpoint in this callback function and get a nice callstack.
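For example, a minimal sketch of that idea in Python, assuming a PyQt5 application (the C++ qInstallMessageHandler callback works analogously):

from PyQt5.QtCore import qInstallMessageHandler

def message_handler(msg_type, context, message):
    # context.file / context.line / context.function are only populated when the
    # emitting code was built with QT_MESSAGELOGCONTEXT (the default in debug builds).
    print("%s (%s:%s, %s)" % (message, context.file, context.line, context.function))
    if "Point size <= 0" in message:
        pass  # put a breakpoint here (or raise) and inspect the call stack

qInstallMessageHandler(message_handler)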
Aside from the obvious, searching your code for calls to setPointSize[F], you can try the following depending on your environment (which you didn't disclose):
If you have the debugging symbols of the Qt libs installed and are using a decent debugger, you can set a conditional breakpoint on the first line in QFont::setPointSizeF() with the condition set to pointSize <= 0. Even if conditional breakpoints don't work you should still be able to set one and step through every call until you've found the culprit.
On Linux there's the tool ltrace which displays all calls of a binary into shared libs, and I suppose there's something similar in the M$ VS toolbox. You can grep the output for calls to setPointSize directly, but of course this won't work for calls within the lib itself (which I guess could be the case when it handles the QML internally).

Is it possible to write Robot Framework tests (not keywords) in Python?

Is it possible to write Robot Framework tests in Python instead of the .txt format?
Behind the scenes it looks like the .txt tests get converted into Python by pybot, so I'm hoping that this is simply a matter of importing the right library and inheriting from the right class, but I haven't been able to figure out how to do that.
(We already have a bunch of suites and have keywords written in both formats but sometimes the RF syntax makes it very difficult to do things that are simple in Python. I understand it would be possible to just write a Python keyword for each test plus 'wrap' setup and teardown functions the same way, but that seems cumbersome.)
Robot does not convert your test cases to python behind the scenes before running them. Instead, it parses the test cases, then iterates over each keyword, calling the code that implements the keyword. There isn't ever a stage where there's a completely pure python representation of a test case.
It is not possible to write tests in python, and have those tests run alongside traditional robot tests by the provided test runner. Like you said in your question, your only option is to put all of your logic for a single test case in a single keyword, and call that keyword from a test case.
It is possible to create and execute tests in python solely via the published API. This might not be what you're really asking for, because ultimately you're still creating keywords, you're just creating them via python.
from robot.api import TestSuite
suite = TestSuite('Activate Skynet')
suite.imports.library('OperatingSystem')
test = suite.tests.create('Should Activate Skynet', tags=['smoke'])
test.keywords.create('Set Environment Variable', args=['SKYNET', 'activated'], type='setup')
test.keywords.create('Environment Variable Should Be Set', args=['SKYNET'])
The above example was taken from here:
http://robot-framework.readthedocs.org/en/2.8.1/autodoc/robot.running.html
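To actually execute a suite built this way, something along these lines should work (a sketch following the same robot.api documentation; the file names are arbitrary):

from robot.api import ResultWriter

# Run the suite created above and write the results to an output XML file,
# then generate an HTML report from it (the log is suppressed here).
result = suite.run(output='skynet.xml')
ResultWriter('skynet.xml').write_results(report='skynet.html', log=None)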
Well, you should not care if your python code represents tests or keywords as long as you code the logic of the tests in python.
The best you can do is to keep some html tables in robot format. Each line would be a call for a keyword. The keyword could be implemented in python, and, logically, represents a whole test (although in robot terminology it is still a "keyword").
This post shows how you can have access to the robot context from your python code.
robot variables
BuiltIn().get_variable_value("${USERNAME}")
java keywords
from com.mycompany.myproject.testtools import LoginRobotKeyword
LoginRobotKeyword().login(user, pwd)
robot keywords
BuiltIn().run_keyword("check user connected", user)
Robot Framework does not support writing test cases in Python directly. I've submitted an enhancement PR, check it here:
https://github.com/robotframework/robotframework/issues/3128
But I've tried to do that by moving all the test case logic to Python code, and making the RF test cases just entry points to it.
Here is an example.
We could create a python file to include all testing logic and setup/teardown logic, like this
# *** case0001.py *****
from SchoolClass import SchoolClass

schCla = SchoolClass()

class case0001:

    def steps(self):
        print('''\n\n***** step 1 **** add school class \n''')
        self.ret1 = schCla.add_school_class('grade#1', 'class#1', 60)
        assert self.ret1['retcode'] == 0

        print('''\n\n***** step 2 **** list school class to check\n''')
        ret = schCla.list_school_class(1)
        schCla.classlist_should_contain(ret['retlist'],
                                        'grade#1',
                                        'class#1',
                                        60,
                                        self.ret1['id'])

    def setup(self):
        pass

    def teardown(self):
        schCla.delete_school_class(self.ret1['id'])
And then we create a Robot file, in which all RF test cases have the same form and just work as entry points to the Python test cases above, like this:
*** Settings ***
Library    cases/case0001.py    WITH NAME    C000001
Library    cases/case0002.py    WITH NAME    C000002

*** Test Cases ***
add class - tc000001
    [Setup]    C000001.setup
    C000001.steps
    [Teardown]    C000001.teardown

add class - tc000002
    [Setup]    C000002.setup
    C000002.steps
    [Teardown]    C000002.teardown
You can see that, written this way, the RF test cases all look the same. We could even create a tool to auto-generate them by scanning the Python test cases.

Kill a calculation programme after user defined time in R

Say my executable is c:\my directory\myfile.exe and my R script calls this executable with system("myfile.exe").
The R script gives parameters to the executable programme, which uses them to do numerical calculations. From the output of the executable, the R script then tests whether the parameters are good or not. If they are not good, the parameters are changed and the executable is rerun with updated parameters.
Now, as this executable carries out mathematical calculations and solutions may converge only slowly, I wish to be able to kill the executable once it has taken too long to carry out the calculations (say 5 seconds).
How do I do this time-dependent kill?
PS:
My question is a little related to this one (a non-time-dependent kill):
how to run an executable file and then later kill or terminate the same process with R in Windows
You can add code to your R function which issued the executable call:
setTimeLimit(elapsed = 5, transient = TRUE)
This will kill the calling function, returning control to the parent environment (which could well be a function as well). Then use the examples in the question you linked to for further work.
Alternatively, set up a loop which examines Sys.time() and, if the expected update to the parameter set has not taken place after 5 seconds, break the loop and issue the system kill command to terminate myfile.exe.
There might possibly be nicer ways but it is a solution.
The assumption here is that myfile.exe successfully finishes its calculation within 5 seconds:
try.wtl <- function(timeout = 5)
{
    # evalWithTimeout() comes from the R.utils package
    y <- evalWithTimeout(system("myfile.exe"), timeout = timeout, onTimeout = "warning")
    if (inherits(y, "try-error")) NA else y
}
case 1 (myfile.exe is closed after successful calculation)
g <- try.wtl(5)
case 2 (myfile.exe is not closed after successful calculation)
g <- try.wtl(0.1)
An MS-DOS taskkill is required for case 2 to recommence from the beginning:
if (class(g) == "NULL") { system('taskkill /im "myfile.exe" /f', show.output.on.console = FALSE) }
PS: inspiration came from Time out an R command via something like try()

Resources