nose.run returns blank file when used with xunit arguments - python-unittest

I am using nose to run my tests in the following way:

import nose
import unittest

if __name__ == "__main__":
    test_cases = unittest.TestLoader().discover(<path_to_test_files>)
    suite = unittest.TestSuite([test_cases])
    xunit_report = True
    log_file = 'my_report.xml'
    arguments = ["nosetest", '--verbosity=2']
    if xunit_report:
        arguments += ['--with-xunit', '--xunit-file', log_file]
    nose.run(suite=suite, argv=arguments)
The suite variable is populated with all the discovered test cases, and the console log confirms that all the tests were executed.
However, the xml result file always contains
<?xml version="1.0" encoding="UTF-8"?><testsuite name="nosetests" tests="0" errors="0" failures="0" skip="0"></testsuite>
I am on Python 2.7.14.
What do I need to change to get the actual results in my xml file?

If you change the discover() call to provide a path, like . for the current directory:
test_cases = unittest.TestLoader().discover('.')
Then the loader will find files matching the pattern 'test*.py' in the working directory that you are executing the script from. If I add your script to a file run.py, and a test next to it in a file named test_example.py, with the following unittest test:
import unittest

class TestStringMethods(unittest.TestCase):
    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')
Then the output xml file contains the expected test results.
So: make sure you're running the script from the same directory your tests are in (or change .discover('.') to whatever directory your tests are in), and that your test files match the test*.py pattern.
Also note that nose.run(..) has an argument for just a module name to find tests in, which you may find useful:
nose.run(module=".")
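
Putting the answer together with the original script, a minimal sketch could look like the following. This is only an illustration based on the answer above; the "tests" directory name is a placeholder for wherever your test*.py files actually live.

import nose
import unittest

if __name__ == "__main__":
    # discover() must point at a directory that actually contains
    # files matching the test*.py pattern
    test_cases = unittest.TestLoader().discover("tests")  # placeholder path
    suite = unittest.TestSuite([test_cases])

    arguments = ["nosetests", "--verbosity=2",
                 "--with-xunit", "--xunit-file=my_report.xml"]
    nose.run(suite=suite, argv=arguments)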

Related

Pytest failing on file open command string assert - what's the best way to test this?

I am constructing a command to pass to the requests library to POST an attachment, as in:
files= attachment = {"attachment": ("image.png", open("C:\tmp\sensor.png", "rb"), "image/png")}
The code is working, but I cannot get PyTest to test it as-is because the open command is executed when it is evaluated. Here is simplified code of the problem:
import pytest

def openfile():
    cmd = {"cmd": open(r"C:\tmp\sensor.png")}
    return cmd

def test_openfile():
    cmd = openfile()
    # assert str(cmd) == str({"cmd": open(r"C:\tmp\sensor.png")})  # this works
    assert cmd == {"cmd": open(r"C:\tmp\sensor.png")}  # this does not
PyTest complains that the two sides are different but then confirms they are the same in the diff panel!
Expected :{'cmd': <_io.TextIOWrapper name='C:\tmp\sensor.png' mode='r' encoding='cp1252'>}
Actual :{'cmd': <_io.TextIOWrapper name='C:\tmp\sensor.png' mode='r' encoding='cp1252'>}
'Click to see difference' - opening the diff panel reports 'Contents are identical'!
I can just stick with comparing the generated string with expected string but am wondering if there is a better way to do this.
Ideas?
You need to test the properties of the actual file buffer that is returned by the open call, instead of comparing the references to that buffer. For example:
def test_openfile():
    cmd = openfile()
    expected_filename = r"C:\tmp\sensor.png"

    assert "cmd" in cmd
    file_cmd = cmd["cmd"]
    assert file_cmd.name == expected_filename

    with open(expected_filename) as f:
        contents = f.read()
    assert file_cmd.read() == contents
Note that in a test you may not have the file contents, or may have them somewhere else such as a fixture, so the check of the file contents may have to be adapted, or may not be needed, depending on what you want to test.
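
If you do want the content check, one way to keep the expected contents in a fixture could look like the sketch below. This is purely illustrative; it assumes openfile() from the question is defined in (or importable into) the same test module, and the test name is made up.

import pytest

EXPECTED_FILENAME = r"C:\tmp\sensor.png"

@pytest.fixture
def expected_contents():
    # illustrative fixture: read the reference file once, outside the test body
    with open(EXPECTED_FILENAME) as f:
        return f.read()

def test_openfile_with_fixture(expected_contents):
    file_cmd = openfile()["cmd"]  # openfile() as defined in the question
    assert file_cmd.name == EXPECTED_FILENAME
    assert file_cmd.read() == expected_contents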
After talking this through with a friend, I think my original approach is perfectly valid. For anyone who trips over this question, here's why:
I am trying to use pytest to check the building of an executable parameter that is passed to another library for execution. The execution of the parameter is not relevant, just that it is correctly formatted. The test is to compare what is generated with the expected parameter (as if I had typed it).
Therefore casting to string or json and comparing is appropriate, since that is what a human does to manually check the code!
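
As a concrete sketch of that approach, reusing the names from the question, the commented-out assertion simply becomes the test (the test name here is made up):

def test_openfile_formatting():
    cmd = openfile()
    expected = {"cmd": open(r"C:\tmp\sensor.png")}
    # the repr of a file object includes its name, mode and encoding but not
    # its identity, so comparing the string forms checks the formatting only
    assert str(cmd) == str(expected)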

OpenMDAO adding command line args for ExternalCodeComp that won't result in a runtime error

In OpenMDAO V3.1 I am using an ExternalCodeComp to execute a CFD code. Typically, I would call it as such:
mpirun nodet_mpi --design_run
If the above call is made in the appropriate directory, then it will find the appropriate run file and execute the CFD run. I have tried command args for the ExternalCodeComp:
execute = ['mpirun', 'nodet_mpi', '--design_run']
execute = ['mpirun', 'nodet_mpi --design_run']
execute = ['mpirun nodet_mpi --design_run']
I either get an error such as:
RunTimeError: 255, execvp error on file "nodet_mpi --design_run" (No such file or directory)
Or that the command cannot be found.
Is there any way to set up the execute statement to include command line args for the flow solver when an input file is not defined?
Thanks in advance!
One detail in your question seems incorrect: you state that you have tried execute = "...". The ExternalCodeComp uses an option called command, so I will assume that you are using the correct option in your code.
The most correct form to use is the list with all arguments as single entries in the list:
self.options['command'] = ['mpirun', 'nodet_mpi', '--design_run']
Your error msg seems to indicate that the directory that OpenMDAO is running in is not the same as the directory you would like to execute the CFD code from. The absolute simplest solution would be to make sure that you are in the correct directory via cd in the terminal window before executing your python script.
However, there is likely a reason that your python script is in a different place, so there are two other options I can suggest:
Option 1: You can use a combination of os.getcwd() and os.chdir() inside the compute method that you have implemented to make sure you switch into and out of the working directory for the CFD code (see the sketch after the option 2 code below).
Option 2: If you would like to, you can modify the entries of the list you've assigned to the self.options['command'] option on the fly within your compute method. You would again be relying on some of the methods in the os module for help. os.path.exists can be used to test whether the specific input files you need exist, and you can modify the command option accordingly.
For option 2, code would look something like this:
def compute(self, inputs, outputs):
    if os.path.exists('some_input.file'):
        self.options['command'] = ['mpirun', 'nodet_mpi', '--design_run']
    else:
        self.options['command'] = ['mpirun', 'nodet_mpi', '--design_run', '--other_options']

    # the parent compute method actually runs the external code
    super().compute(inputs, outputs)
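
For option 1, a hedged sketch could look like the following. The component name and run-directory path are placeholders and not part of the original answer; only the command list comes from the question.

import os

import openmdao.api as om


class CFDComp(om.ExternalCodeComp):  # hypothetical component name
    def setup(self):
        self.options['command'] = ['mpirun', 'nodet_mpi', '--design_run']

    def compute(self, inputs, outputs):
        start_dir = os.getcwd()
        os.chdir('/path/to/cfd/run/dir')  # placeholder: directory holding the run files
        try:
            # the parent compute method actually runs the external code
            super().compute(inputs, outputs)
        finally:
            # switch back so the rest of the model is unaffected
            os.chdir(start_dir)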

How do I determine whether a Julia script is included as a module or run as a script?

I would like to know how, in the Julia language, I can determine whether a file.jl is run as a script, as in the call:
bash$ julia file.jl
Only in this case should it start a function main, for example; that way I could use include('file.jl') without actually executing the function.
To be specific, I am looking for something similar to what was already answered in a Python question:

def main():
    pass  # does something

if __name__ == '__main__':
    main()
Edit:
To be more specific, Base.isinteractive (see the Julia docs) does not solve the problem when using include('file.jl') from within a non-interactive (e.g. script) environment.
The global constant PROGRAM_FILE contains the script name passed to Julia from the command line (it does not change when include is called).
On the other hand, the @__FILE__ macro gives you the name of the file in which it appears.
For instance, if you have the files:

a.jl:

println(PROGRAM_FILE)
println(@__FILE__)
include("b.jl")

b.jl:

println(PROGRAM_FILE)
println(@__FILE__)
You have the following behavior:
$ julia a.jl
a.jl
D:\a.jl
a.jl
D:\b.jl
$ julia b.jl
b.jl
D:\b.jl
In summary:
PROGRAM_FILE tells you the name of the file that Julia was started with;
@__FILE__ tells you in which file the macro was actually called.
tl;dr version:
if !isdefined(:__init__) || Base.function_module(__init__) != MyModule
    main()
end
Explanation:
There seems to be some confusion. Python and Julia work very differently in terms of their "modules" (even though the two use the same term, in principle they are different).
In Python, a source file is either a module or a script, depending on how you chose to "load" / "run" it: the boilerplate exists to detect the environment in which the source code was run, by querying the __name__ of the embedding module at the time of execution. E.g. if you have a file called mymodule.py and you import it normally, then within the module definition the variable __name__ automatically gets set to the value mymodule; but if you run it as a standalone script (effectively "dumping" the code into the "main" module), the __name__ variable is that of the global scope, namely __main__. This difference gives you the ability to detect how a Python file was run, so you can act slightly differently in each case, and this is exactly what the boilerplate does.
In Julia, however, a module is defined explicitly in code. Running a file that contains a module declaration will load that module regardless of whether you did using or include; however, in the former case the module will not be reloaded if it's already in the workspace, whereas in the latter case it's as if you "redefined" it.
Modules can have initialisation code via the special __init__() function, whose job is to only run the first time a module is loaded (e.g. when imported via a using statement). So one thing you could do is have a standalone script, which you could either include directly to run as a standalone script, or include it within the scope of a module definition, and have it detect the presence of module-specific variables such that it behaves differently in each case. But it would still have to be a standalone file, separate from the main module definition.
If you want the module to do stuff that the standalone script shouldn't, this is easy: you just have something like this:

module MyModule
__init__() = nothing  # do module-specific initialisation stuff here
include("MyModule_Implementation.jl")
end
If you want the reverse situation, you need a way to detect whether you're running inside the module or not. You could do this, e.g. by detecting the presence of a suitable __init__() function, belonging to that particular module. For example:
### in file "MyModule.jl"
module MyModule
export fun1, fun2;
__init__() = print("Initialising module ...");
include("MyModuleImplementation.jl");
end

### in file "MyModuleImplementation.jl"
fun1(a,b) = a + b;
fun2(a,b) = a * b;

main() = print("Demo of fun1 and fun2. \n" *
               " fun1(1,2) = $(fun1(1,2)) \n" *
               " fun2(1,2) = $(fun2(1,2)) \n");

if !isdefined(:__init__) || Base.function_module(__init__) != MyModule
    main()
end
If MyModule is loaded as a module, the main function in MyModuleImplementation.jl will not run.
If you run MyModuleImplementation.jl as a standalone script, the main function will run.
So this is a way to achieve something close to the effect you want; but it's very different from running a module-defining file as either a module or a standalone script. I don't think you can simply "strip" the module instruction from the code and run the module's "contents" in that manner in Julia.
The answer is available at the official Julia docs FAQ. I am copy/pasting it here because this question comes up as the first hit on some search engines. It would be nice if people found the answer on the first-hit site.
How do I check if the current file is being run as the main script?
When a file is run as the main script using julia file.jl, one might want to activate extra functionality like command line argument handling. A way to determine that a file is run in this fashion is to check whether abspath(PROGRAM_FILE) == @__FILE__ is true.

Python Robot Framework: How to convert Robot Framework txt code into Python code

Please tell me how to convert the following Robot Framework (txt) code into Python code. Robot code:
*** Settings ***
Library    OperatingSystem

*** Keywords ***
nik_key_1
    [Arguments]    ${arg1_str}
    log to console    ${arg1_str}

*** Variables ***
${var1}    "variable1"

*** Test Cases ***
First Test Case
    ${output}=    run    "hostname"
    log to console    ${output}
    ${str1}=    catenate    "nikhil"    "gupta"
    nik_key_1    "NikArg1"
    log to console    ${var1}
    log to console    ${str1}
Following is the code that I tried:
from robot.api import TestSuite
from robot.running.model import Keyword
from robot.libraries.BuiltIn import BuiltIn
from robot.api.deco import keyword

bi = BuiltIn()

@keyword(name='nik_key_1')
def nik_key_1(username):
    bi.log_to_console(message=username, stream='STDOUT', no_newline=False)

suite = TestSuite('Activate Skynet')
suite.resource.imports.library("OperatingSystem")
keyword1 = Keyword(name="nik_key_1", type='kw', doc="nik_key_doc1", args=["nikusername"])
suite.keywords.append(keyword1)
test = suite.tests.create(name='nik_test_case1', tags=['smoke'])
test.doc = "nik doc"
print(dir(test.keywords))
test.keywords.create('nik_key_1', args=['nikusername'], type='kw')
result = suite.run(critical='smoke', output='skynet.xml')
Following is the error I am getting:
No keyword with name 'nik_key_1' found.
Your code doesn't work because robot doesn't look at your script for context, and thus doesn't know about nik_key_1. Since your suite doesn't import this script, it can't access any of its functions. You'll need to move nik_key_1 to a file, and import that file in the suite.
For example, create a file named keywords.py, and put this in it:
# keywords.py
from robot.api.deco import keyword
from robot.libraries.BuiltIn import BuiltIn

bi = BuiltIn()

@keyword(name='nik_key_1')
def nik_key_1(username):
    bi.log_to_console(message=username, stream='STDOUT', no_newline=False)
Next, modify your test to include this library:
suite.resource.imports.library('keywords.py')
You can then call the keyword from your test.
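
For example, the driver script then only needs to import the new library file before creating the keyword call. The following is a minimal sketch that simply reuses the names and API calls already shown in the question and answer:

from robot.api import TestSuite

suite = TestSuite('Activate Skynet')
suite.resource.imports.library('keywords.py')  # the file created above

test = suite.tests.create(name='nik_test_case1', tags=['smoke'])
test.keywords.create('nik_key_1', args=['nikusername'], type='kw')

result = suite.run(critical='smoke', output='skynet.xml')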
Using your script as a library
It's possible to combine your script and the keywords in a single file, but that involves making your script importable by protecting the executable code from running when the file is imported.
For example, you can rewrite your script to be like the following. Notice how the executable portion of the script is inside a block that tests whether the file is imported or not. Also notice that the script itself is added as a library (suite.resource.imports.library(__file__)).
from robot.api import TestSuite
from robot.running.model import Keyword
from robot.libraries.BuiltIn import BuiltIn
from robot.api.deco import keyword

bi = BuiltIn()

@keyword(name='nik_key_1')
def nik_key_1(username):
    bi.log_to_console(message=username, stream='STDOUT', no_newline=False)

if __name__ == "__main__":
    suite = TestSuite('Activate Skynet')
    suite.resource.imports.library('OperatingSystem')
    suite.resource.imports.library(__file__)
    test = suite.tests.create('Should Activate Skynet', tags=['smoke'])
    test.keywords.create('Set Environment Variable', args=['SKYNET', 'activated'], type='setup')
    test.keywords.create('Environment Variable Should Be Set', args=['SKYNET'])
    test.keywords.create('nik_key_1', args=['nikusername'], type='kw')
    result = suite.run(critical='smoke', output='skynet.xml')

PyQt - QProcess cannot run the command "chcp" directly, but from a batch file it's fine

The following code works for executing the batch file:
def GetCMD_Encoding(self):
    self.CMD = QProcess(self)
    self.CMD.setProcessChannelMode(QProcess.MergedChannels)
    self.CMD.readyReadStandardOutput.connect(self.EventDataForGetCMDEncoding)
    self.CMD.start("chcp.bat")

def EventDataForGetCMDEncoding(self):
    output = bytearray(self.CMD.readAllStandardOutput())
    output = output.decode("ascii")
    print(output)
The content of the .bat file is only:
chcp
But if I want to exclude the bat file and only execute a simple command like:
self.CMD.start("chcp")
it does not work and no signal is emitted.
Other commands work, like:
self.CMD.start("ipconfig")
self.CMD.start("help")
You should try to use the full path of the file chcp.bat, or add the path of the file chcp.bat to your system PATH, and maybe ensure that the file is executable.
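
A minimal sketch of the first suggestion, pointing QProcess at an absolute path: the path below is a placeholder, and PyQt5 is assumed here rather than stated in the question.

import os
from PyQt5.QtCore import QProcess

bat_path = os.path.abspath(r"C:\scripts\chcp.bat")  # placeholder location of chcp.bat

proc = QProcess()
proc.setProcessChannelMode(QProcess.MergedChannels)
proc.start(bat_path)
proc.waitForFinished()
# read the merged output of the batch file
print(bytes(proc.readAllStandardOutput()).decode("ascii"))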
