Django unittest: required mock patch dotted path varies depending on how one calls/runs the tests

It took me hours to figure out how to patch the code below; the path to it was very much unexpected.
Depending on how I run the tests, and which directory I am in, the dotted path to the module I need to patch changes. This is really bad for unit testing, which makes me think I am doing it wrong.
The file structure related to the code is:
loaders.py <-- Has a load_palette() func required to be patched
typers.py <-- Has `from . loaders import load_palette`, and calls load_palette()
render.py <-- Has a func that calls the typers func
tests/test_render.py <-- Tests for render: each test calls a func in render, which calls a func in typers, which calls load_palette()
In the code below, __package__.replace('.tests', '.typers.load_palette') takes the dotted path of the current package, which could be:
bar.tests or
foo.bar.tests
or something else
and builds the dotted path relative to it so that it is correct. This seems very hackish. How is one supposed to safeguard against these kinds of issues?
Ideally the dotted path would be ..typers.load_palette, but mock.patch did not accept the relative dotted path.
Here's the actual code:
# file: test_render.py
# Depending where one runs the test from, the path is different, so generate it dynamically
@mock.patch(__package__.replace('.tests', '.typers.load_palette'), return_value=mocks.palette)
class render_rule_Tests(SimpleTestCase):
    def test_render_preset_rule(self, _):  # _ = mocked_load_palette
        ...

The file layout is as follows:
$ tree issue
issue
├── __init__.py
├── loaders.py
├── renders.py
├── tests
│   ├── __init__.py
│   └── test_render.py
├── run_tests.sh
└── typers.py
1 directory, 7 files
The root package is issue, so you should always import modules from issue and patch issue.xxx.yyy.
Then run pytest (or some other unittest tool) from the same path where the tests reside.
For example, run_tests.sh is a shell script that runs all the test cases under tests,
and test_render.py may look like this:
# file: test_render.py
# With imports rooted at the issue package, the patch target is the same no matter where the tests are run from
@mock.patch('issue.typers.load_palette', return_value=mocks.palette)
class render_rule_Tests(SimpleTestCase):
    def test_render_preset_rule(self, _):  # _ = mocked_load_palette
        ...
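For completeness, a fuller version of that test module might look like the sketch below. The render function name (render_preset_rule), the mocks module location, and its palette value are assumptions for illustration, not names taken from the original code:
# file: tests/test_render.py (sketch; render_preset_rule and issue.tests.mocks are hypothetical)
from unittest import mock

from django.test import SimpleTestCase

from issue import renders            # always import via the root package...
from issue.tests import mocks        # ...never via a relative/implicit path


@mock.patch('issue.typers.load_palette', return_value=mocks.palette)
class render_rule_Tests(SimpleTestCase):
    def test_render_preset_rule(self, mocked_load_palette):
        result = renders.render_preset_rule('some-rule')
        mocked_load_palette.assert_called_once()
        self.assertIsNotNone(result)
Because the patch target is written against the root package, the same dotted path works no matter which directory the test runner is started from, as long as issue itself is importable.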

You can add the path of the "tests" directory using sys.path.insert.
At the top of "tests/test_render.py" add:
import sys
sys.path.insert(0, "<path/to/the/folder/tests/>")
# Depending where one runs the test from, the path is different, so generate it dynamically
@mock.patch(__package__.replace('.tests', '.typers.load_palette'), return_value=mocks.palette)
class render_rule_Tests(SimpleTestCase):
    def test_render_preset_rule(self, _):  # _ = mocked_load_palette
        ...
This adds the path to the list of paths the Python interpreter searches; from there, the interpreter can locate the relative imports.
Note: The safest option would be to add the absolute path to the tests folder. However, if it's not possible, add the shortest relative path possible.

Related

Passing multiple config groups

In my config.yaml, how can I pass two datasets, e.g. cifar and cinic, at once? Can I pass multiple config groups to the defaults list?
This is for the case when I want to train my model on a mix of datasets, but I do not want to create a config group for every possible combination.
├── config.yaml
└── dataset
    ├── cifar.yaml
    ├── imagenet.yaml
    └── cinic.yaml
What I tried is as follows:
dataset:
- cifar
- cinic
which resulted in the following error:
Could not load train_dataset/['cifar', 'cinic']. Available options 'cifar ... '
Currently config groups are mutually exclusive.
Support for this is planned for Hydra 1.1. See issue 499.
One possible workaround is to put everything in the config and to use interpolation:
all_datasets:
  imagenet:
    name: imagenet
  cifar10:
    name: cifar10
datasets:
  - ${all_datasets.imagenet}
  - ${all_datasets.cifar10}
This way you can override dataset to a different list of datasets from the command line (with the interpolation).
If you want to simplify the usage at the expense of some additional code, you can do something like:
all_datasets:
  ...
datasets_list:
  - imagenet
  - cifar10
datasets: []
import hydra
from omegaconf import DictConfig

@hydra.main(config_path="conf", config_name="config")
def my_app(cfg: DictConfig) -> None:
    for ds in cfg.datasets_list:
        cfg.datasets.append(cfg.all_datasets[ds])

if __name__ == "__main__":
    my_app()
I didn't test this but I hope you get the idea.
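If it helps, a quick way to sanity-check that idea (again just a sketch, assuming the config above is saved as conf/config.yaml) is to dump the assembled list at the end of my_app:
from omegaconf import DictConfig, OmegaConf

def dump_datasets(cfg: DictConfig) -> None:
    # Call this after the append loop; resolve=True expands any remaining
    # ${...} interpolations so you can verify that the full imagenet/cifar10
    # entries were copied into cfg.datasets.
    print(OmegaConf.to_yaml(cfg.datasets, resolve=True))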

Scons positively refuses to build into a variant_dir

I have been porting a project that used Make to SCons. Generally, I am pleased by how easy SCons is to use relative to Make. However, there is one thing that has resisted several hours of attempts.
The files in my project are contained in a tree which starts at ProjectHome. The sources are in several subdirectories under ProjectHome/src/.
I have a SConstruct file in ProjectHome which defines the build environment and then calls a SConscript (also in ProjectHome) which builds the object files; these are then put into a library in ProjectHome/lib by SConstruct.
Everything works fine, except that I would like to separate where the .o files are kept from
where the source files are.
So here's what I have
# SConstruct.py
...
# The environment is defined above, no issues
cppobj, chfobj = SConscript('./SConscript.py', 'env', variant_dir='build', src_dir='.', duplicate=False)
env.Install('lib/' + str(Dim) + 'D', env.SharedLibrary(target='Grade' + str(n), source=cppobj + chfobj))
and this is SConscript.py:
# SConscript.py
import platform
import os
import sys

def getSubdirs(abs_path_dir):
    """Returns a sorted list of the subdirectories in abs_path_dir."""
    lst = [x[0] for x in os.walk(abs_path_dir)]
    lst.sort()
    return lst

Dirs = getSubdirs(os.getcwd() + '/src')  # gives me the list of the directories in src
CppNodes = []
ChFNodes = []
Import('env')
for directory in Dirs[2:3]:
    CppNodes += Glob(directory + '/*.cpp')
    ChFNodes += Glob(directory + '/*.ChF')
# env.Object can work on lists
ChFobj = env.SharedObject(ChFNodes)
# This builder likes to work one at a time.
# It builds an internal representation of the _F.H headers
# so that when an #include is encountered, scons looks
# at this list too, and not just at what is specified by the IncDirs
if len(ChFNodes) == 1:  # this is ridiculous, but having only one ChF file causes trouble
    os.system('touch dummyF.ChF')
    ChFNodes.append('dummyF.ChF')
ChFHeader = []
for file in ChFNodes:
    ChFHeader += env._H(source=file)
Cppobj = env.SharedObject(CppNodes)
Return('Cppobj ChFobj')
However, for the life of me, build is ignored completely. I have tried different combinations,
even placing SConscript.py in the build dir and calling SConscript('build/SConscript.py', 'env', ...); you name it: SCons stubbornly refuses to do anything with build. Any help is appreciated. To be clear, it works in creating the libraries; it's just that it places the intermediate object files in the src dirs.

Submodule intra-dependencies in Julia

I'm trying to create a package with the following layout:
MyPkg
├── MyPkg.jl
├── Foo
│   ├── Foo.jl
│   └── another_file.jl
└── Bar
    ├── Bar.jl
    └── yet_another_file.jl
My main package module looks something like this:
# MyPkg.jl
module MyPkg
include("./Foo/Foo.jl")
using .Foo: FooStuffA, FooStuffB
export FooStuffA, FooStuffB
include("./Bar/Bar.jl")
using .Bar: BarStruct, BarStuffC, BarStuffD
export BarStruct, BarStuffC, BarStuffD
end
The problem arises when Foo needs a type (specifically a struct) defined in Bar in some function arguments. I'm not sure how to import this type. I've tried seemingly all combinations of include("../Bar/Bar.jl"), using Bar/.Bar/..Bar, inside the Foo submodule, outside the submodule, etc.
# Foo.jl
module Foo
# what am I missing here?
function print_bar_struct(bar::BarStruct)
    @show bar
end
end
Any advice?
This should work
# MyPkg.jl
module MyPkg
include("./Bar/Bar.jl")
using .Bar: BarStruct, BarStuffC, BarStuffD
export BarStruct, BarStuffC, BarStuffD
include("./Foo/Foo.jl")
using .Foo: FooStuffA, FooStuffB
export FooStuffA, FooStuffB
end
# Foo.jl
module Foo
using ..MyPkg: BarStruct
function print_bar_struct(bar::BarStruct)
    @show bar
end
end
Explanation: Remember that include statements are essentially copying+pasting the code from the source file into the module at the given line. So by the time the compiler is looking at the references for all of the symbols (reading from the top of the file to the bottom), at the point where include("./Foo/Foo.jl") occurs, it needs to know that BarStruct exists and is accessible in the current module (i.e., MyPkg), which it is in this rearranged layout.
So by looking just at this first half of MyPkg
# MyPkg.jl
module MyPkg
include("./Bar/Bar.jl")
using .Bar: BarStruct, BarStuffC, BarStuffD
export BarStruct, BarStuffC, BarStuffD
by the time the compiler reaches the last line here, BarStruct, BarStuffC, BarStuffD are the symbols brought into the MyPkg namespace (https://docs.julialang.org/en/v1/manual/modules/#Summary-of-module-usage-1).
When we reach the include("./Foo/Foo.jl") line (aka copying + pasting this source file into the current module at this point), we need to reference BarStruct in the parent namespace of this module, i.e., ..BarStruct
Have you tried referencing the struct with its module.struct full name, as in Bar.BarStruct?
With structs and enums, exporting does not always seem to work as smoothly as with function names, but using the Module.Struct type syntax often works.

nose.run returns blank file when used with xunit arguments

I am using nose to run my tests in the following way:
import nose
import unittest

if __name__ == "__main__":
    test_cases = unittest.TestLoader().discover(<path_to_test_files>)
    suite = unittest.TestSuite([test_cases])
    xunit_report = True
    log_file = 'my_report.xml'
    arguments = ["nosetest", '--verbosity=2']
    if xunit_report:
        arguments += ['--with-xunit', '--xunit-file', log_file]
    nose.run(suite=suite, argv=arguments)
The suite variable is updated with all the test cases discovered. The console log also validates that all the tests got executed.
However, the xml result file always contains
<?xml version="1.0" encoding="UTF-8"?><testsuite name="nosetests" tests="0" errors="0" failures="0" skip="0"></testsuite>
Am on Python 2.7.14.
What do I need to change to get the actual results in my xml file?
If you change the discover() call to provide a path, like . for the current directory:
test_cases = unittest.TestLoader().discover('.')
then the loader will find files in the working directory you are executing the script from that match the pattern 'test*.py'. If I put your script in a file run.py, and a test next to it in a file named test_example.py, with the following unittest test:
import unittest

class TestStringMethods(unittest.TestCase):
    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')
Then the output xml file contains the expected test results.
So: make sure you're running the script from the same directory your tests are in (or change .discover('.') to whatever directory your tests are in), and that your test files match the test*.py pattern.
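If running from the tests' directory is not convenient, a sketch of an explicit discover() call could look like the following (the tests/ directory name next to the script is an assumption):
import os
import unittest

# Point discover() at the tests directory explicitly instead of relying on
# the current working directory; 'test*.py' is also the default pattern.
tests_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "tests")
test_cases = unittest.TestLoader().discover(tests_dir, pattern="test*.py")
suite = unittest.TestSuite([test_cases])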
Also note that nose.run(..) has a module argument for telling it where to find tests, which you may find useful:
nose.run(module=".")

Absolute path of the project root directory in Julia

The project root directory of a file located in PROJECT_ROOT/lib/code.jl can be accessed with this code:
root = dirname(dirname(@__FILE__))
Using dirname() twice seems pretty ugly. Is there a better way to do this? With Ruby, I would use this code:
root = File.expand_path('../', File.dirname(__FILE__))
Thanks for making me find out about:
"/"*relpath((#__FILE__)*"/../..","/")
According to ?relpath, it gives a path from the location of the second argument in the file-system, to the first argument. Is this better than the double dirname solution?
A variant of the same niceness is:
normpath(joinpath(@__FILE__,"..",".."))
Closest to Ruby equivalent might be:
realpath(dirname(@__FILE__)*"/..")
I like to use
module Foo
const PROJECT_ROOT = pkgdir(Foo)
end # module
where the definition of PROJECT_ROOT can also be replaced by
const PROJECT_ROOT = dirname(dirname(pathof(Foo)))
Or, you could use
const PROJECT_ROOT = pkgdir(@__MODULE__)
I just use
const PROJECT_ROOT = @__DIR__
from inside my _init.jl file, which resides in the project root directory (next to the src directory) and gives you a canonical path.
I get my _init.jl files automatically executed when opening a Julia session from inside those directories by having
isfile("_init.jl") && include(joinpath(pwd(), "_init.jl"))
in my ~/.julia/config/startup.jl file. If you started Julia elsewhere, you have to include("_init.jl") it (or respective relative path) manually.
