How to test that a method gets called twice with different arguments in Python unittest

Say I have a method that looks like this:
from otherModule import B

def A():
    for pair in [[1, 2], [3, 4]]:
        B(*pair)
and I have a test that looks like:
class TestA(unittest.TestCase):
    @patch("moduleA.B")
    def test_A(self, mockB):
        A()
        mockB.assert_has_calls([
            call(1, 2),
            call(3, 4)
        ])
For some reason I get AssertionError: Calls not found. because the mock only registers the call with (3, 4), twice. Does what I'm doing look right?

It works fine; I can't reproduce the error.
E.g.
moduleA.py:
from otherModule import B

def A():
    for pair in [[1, 2], [3, 4]]:
        B(*pair)
otherModule.py:
def B(x, y):
    print(x, y)
test_moduleA.py:
import unittest
from unittest.mock import patch, call
from moduleA import A

class TestA(unittest.TestCase):
    @patch("moduleA.B")
    def test_A(self, mockB):
        A()
        mockB.assert_has_calls([
            call(1, 2),
            call(3, 4)
        ])

if __name__ == '__main__':
    unittest.main()
Unit test results with coverage report:
.
----------------------------------------------------------------------
Ran 1 test in 0.001s

OK

Name                                         Stmts   Miss  Cover   Missing
--------------------------------------------------------------------------
src/stackoverflow/59179990/moduleA.py            4      0   100%
src/stackoverflow/59179990/otherModule.py        2      1    50%   2
src/stackoverflow/59179990/test_moduleA.py       9      0   100%
--------------------------------------------------------------------------
TOTAL                                           15      1    93%
Python version: Python 3.7.5
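For completeness, if someone does see the mismatch, a quick way to debug it is to inspect the mock's recorded calls before asserting. This is a minimal sketch reusing the moduleA layout above; the print is only a debugging aid, not part of the original answer:
import unittest
from unittest.mock import patch, call
from moduleA import A

class TestA(unittest.TestCase):
    @patch("moduleA.B")
    def test_A(self, mockB):
        A()
        # Show exactly what the mock recorded; if both entries read
        # call(3, 4), the wrong target was patched or the arguments
        # were mutated between calls.
        print(mockB.mock_calls)
        mockB.assert_has_calls([call(1, 2), call(3, 4)])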

Related

Call Python from Julia

I am new to Julia and I have a Python function that I want to use in Julia. Basically what the function does is to accept a dataframe (passed as a numpy ndarray), a filter value and a list of column indices (from the array) and run a logistic regression using the statsmodels package in Python. So far I have tried this:
using PyCall

py"""
import pandas as pd
import numpy as np
import random
import statsmodels.api as sm
import itertools

def reg_frac(state, ind_vars):
    rows = 2000
    total_rows = rows*13
    data = pd.DataFrame({
        'state': ['a', 'b', 'c','d','e','f','g','h','i','j','k','l','m']*rows, \
        'y_var': [random.uniform(0,1) for i in range(total_rows)], \
        'school': [random.uniform(0,10) for i in range(total_rows)], \
        'church': [random.uniform(11,20) for i in range(total_rows)]}).to_numpy()
    try:
        X, y = sm.add_constant(np.array(data[(data[:,0] == state)][:,ind_vars], dtype=float)), np.array(data[(data[:,0] == state), 1], dtype=float)
        model = sm.Logit(y, X).fit(cov_type='HC0', disp=False)
        rmse = np.sqrt(np.square(np.subtract(y, model.predict(X))).mean())
    except:
        rmse = np.nan
    return [state, ind_vars, rmse]
"""
reg_frac(state, ind_vars) = (py"reg_frac"(state::Char, ind_vars::Array{Any}))
However, when I run this, the result is NaN, which I don't expect. I think it is mostly working, but I am missing something.
reg_frac('b', Any[i for i in 2:3])
0.000244 seconds (249 allocations: 7.953 KiB)
3-element Array{Any,1}:
 'b'
 [2, 3]
 NaN
Any help is appreciated.
In the Python code you have strs, while in the Julia code you have Chars; they are not the same.
Python:
>>> type('a')
<class 'str'>
Julia:
julia> typeof('a')
Char
Hence your comparisons do not work.
Your function could look like this:
reg_frac(state, ind_vars) = (py"reg_frac"(state::String, ind_vars::Array{Any}))
And now:
julia> reg_frac("b", Any[i for i in 2:3])
3-element Array{Any,1}:
 "b"
 [2, 3]
 0.2853707270515166
However, I recommend using Vector{Float64}, which PyCall converts in-flight into a numpy vector, rather than Vector{Any}, so it looks like your code could still be improved (depending on what you are actually planning to do).
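A side note, not from the original answer: the bare except: in reg_frac is also what masked the type mismatch as a NaN instead of an error. A minimal sketch of a narrower handler that would have surfaced the real failure:
import numpy as np
import statsmodels.api as sm

def rmse_or_nan(y, X):
    """Fit the logit and return the RMSE, reporting why a failure
    happened instead of silently swallowing it with a bare `except:`."""
    try:
        model = sm.Logit(y, X).fit(cov_type='HC0', disp=False)
        return np.sqrt(np.square(np.subtract(y, model.predict(X))).mean())
    except Exception as exc:
        # e.g. the empty selection caused by the Char/str mismatch
        print("fit failed:", repr(exc))
        return np.nan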

How to ignore the whole output of an instruction using python doctest?

Ellipsis does not seem to work for ignoring a whole output line.
I'd like to ignore everything that is output by foo:
def foo():
    """
    >>> foo() # doctest: +ELLIPSIS
    ...
    """
    print("IGNORE ME")

if __name__ == '__main__':
    import doctest
    doctest.testmod()
Running with python3 gives:
Failed example:
    foo() # doctest: +ELLIPSIS
Expected nothing
Got:
    IGNORE ME
**********************************************************************
1 items had failures:
   1 of   1 in __main__.foo
***Test Failed*** 1 failures.
Note that ignoring only part of the output works, e.g. when adding a character before the ... (here a "-"):
def foo():
    """
    >>> foo() # doctest: +ELLIPSIS
    -...
    """
    print("-IGNORE ME")

if __name__ == '__main__':
    import doctest
    doctest.testmod()
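The likely reason the bare ... fails: doctest treats a line beginning with ... immediately after the >>> line as a PS2 continuation of the source statement rather than as expected output, so the expected output becomes empty, hence "Expected nothing". One possible workaround, sketched here under the assumption that changing the module-level doctest.ELLIPSIS_MARKER constant is acceptable for your test run, is to use a marker that cannot be confused with a continuation prompt:
import doctest

# Replace the default '...' marker with one that doctest cannot
# mistake for a PS2 continuation prompt.
doctest.ELLIPSIS_MARKER = '-ignore-'

def foo():
    """
    >>> foo() # doctest: +ELLIPSIS
    -ignore-
    """
    print("IGNORE ME")

if __name__ == '__main__':
    doctest.testmod()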

How to update connection sizes in a reconfigurable model in OpenMDAO 2.5.0?

With reconfigurable model execution it is possible to resize inputs and outputs of components. How are the connections updated when reconfigured outputs and inputs are connected?
In the example below, the output c2.y and the input c3.y are resized at each model run. This input and output are supposed to be connected, as shown in the N2 chart. However, after the reconfiguration the connection size does not seem to be updated automatically, and the run throws the following error:
ValueError: The source and target shapes do not match or are ambiguous for the connection 'c2.y' to 'c3.y'. Expected (1,) but got (2,).
I included below 3 tests, with promoted connection, absolute connection, and the last one with reconfiguration but without the connection (which works).
A last resort would be to declare the connection in the parent group of the components, which I have not tried yet.
The tests:
Promoted connection
Absolute connection
No connection
Reconfigurable component classes and tests:
from __future__ import division
import logging
import numpy as np
import unittest
from openmdao.api import Problem, Group, IndepVarComp, ExplicitComponent
from openmdao.utils.assert_utils import assert_rel_error

class ReconfComp(ExplicitComponent):

    def initialize(self):
        self.size = 1
        self.counter = 0

    def reconfigure(self):
        logging.info('reconf started {}'.format(self.pathname))
        self.counter += 1
        logging.info('reconf ended {}'.format(self.pathname))
        if self.counter % 2 == 0:
            self.size += 1
            return True
        else:
            return False

    def setup(self):
        logging.info('setup started {}'.format(self.pathname))
        self.add_input('x', val=1.0)
        self.add_output('y', val=np.zeros(self.size))
        # All derivatives are defined.
        self.declare_partials(of='*', wrt='*')
        logging.info('setup ended {}'.format(self.pathname))

    def compute(self, inputs, outputs):
        logging.info('compute started {}'.format(self.pathname))
        outputs['y'] = 2 * inputs['x']
        logging.info('compute ended {}'.format(self.pathname))

    def compute_partials(self, inputs, jacobian):
        jacobian['y', 'x'] = 2 * np.ones((self.size, 1))

class ReconfComp2(ReconfComp):
    """The size of the y input changes in the same way as in ReconfComp."""

    def setup(self):
        logging.info('setup started {}'.format(self.pathname))
        self.add_input('y', val=np.zeros(self.size))
        self.add_output('f', val=np.zeros(self.size))
        # All derivatives are defined.
        self.declare_partials(of='*', wrt='*')
        logging.info('setup ended {}'.format(self.pathname))

    def compute(self, inputs, outputs):
        logging.info('compute started {}'.format(self.pathname))
        outputs['f'] = 2 * inputs['y']
        logging.info('compute ended {}'.format(self.pathname))

    def compute_partials(self, inputs, jacobian):
        jacobian['f', 'y'] = 2 * np.ones((self.size, 1))

class TestReconfConnections(unittest.TestCase):

    def test_reconf_comp_promoted_connections(self):
        p = Problem()
        p.model = Group()
        p.model.add_subsystem('c1', IndepVarComp('x', 1.0), promotes_outputs=['x'])
        p.model.add_subsystem('c2', ReconfComp(), promotes_inputs=['x'], promotes_outputs=['y'])
        p.model.add_subsystem('c3', ReconfComp2(), promotes_inputs=['y'],
                              promotes_outputs=['f'])
        p.setup()
        p['x'] = 3.
        # First run the model once; counter = 1, size of y = 1
        p.run_model()
        totals = p.compute_totals(wrt=['x'], of=['y'])
        assert_rel_error(self, p['x'], 3.0)
        assert_rel_error(self, p['y'], 6.0)
        assert_rel_error(self, totals['y', 'x'], [[2.0]])
        print(p['x'], p['y'], totals['y', 'x'].flatten())
        # Run the model again, which will trigger reconfiguration; counter = 2, size of y = 2
        p.run_model()  # FIXME Fails with ValueError

    def test_reconf_comp_connections(self):
        p = Problem()
        p.model = Group()
        p.model.add_subsystem('c1', IndepVarComp('x', 1.0), promotes_outputs=['x'])
        p.model.add_subsystem('c2', ReconfComp(), promotes_inputs=['x'])
        p.model.add_subsystem('c3', ReconfComp2(), promotes_outputs=['f'])
        p.model.connect('c2.y', 'c3.y')
        p.setup()
        p['x'] = 3.
        # First run the model once; counter = 1, size of y = 1
        p.run_model()
        # Run the model again, which will trigger reconfiguration; counter = 2, size of y = 2
        p.run_model()  # FIXME Fails with ValueError

    def test_reconf_comp_not_connected(self):
        p = Problem()
        p.model = Group()
        p.model.add_subsystem('c1', IndepVarComp('x', 1.0), promotes_outputs=['x'])
        p.model.add_subsystem('c2', ReconfComp(), promotes_inputs=['x'])
        p.model.add_subsystem('c3', ReconfComp2(), promotes_outputs=['f'])
        # c2.y not connected to c3.y
        p.setup()
        p['x'] = 3.
        # First run the model once; counter = 1, size of y = 1
        p.run_model()
        # Run the model again, which will trigger reconfiguration; counter = 2, size of y = 2
        fail, _, _ = p.run_model()
        self.assertFalse(fail)

if __name__ == '__main__':
    unittest.main()
UPDATE:
It seems that in Group._var_abs2meta only the source's size is updated, but not the target's. The setup of the connections starts before the setup of the parent group or of the other component is called.
UPDATE 2:
This happens with the default NonlinearRunOnce solver; with a NewtonSolver or NonlinearBlockGS there is no error, but the variable sizes also don't change.
As of OpenMDAO V2.5, reconfigurable model variables are not an officially supported feature in the framework. The bare bones of the capability have been in the code since that research was done, but it wasn't something that was high-priority enough for us to finalize. A recent major refactor in V2.4 reworked some underlying data structures and must have broken this functionality.
It is on our development priority list to get this working again, but it's not super high on that list. We focus development mainly on features that have direct in-house applications, and we don't have one of those yet.
If you could provide a decently complete set of tests for it, we could take a look at getting the functionality working.

parallel DoE with distributed components in OpenMDAO

I'm trying to run a DoE in parallel on a distributed code, which doesn't seem to work. Below is a simplified example that raises the same error as for the real code.
import numpy as np

from openmdao.api import IndepVarComp, Group, Problem, Component
from openmdao.core.mpi_wrap import MPI
from openmdao.drivers.latinhypercube_driver import LatinHypercubeDriver

if MPI:
    from openmdao.core.petsc_impl import PetscImpl as impl
    rank = MPI.COMM_WORLD.rank
else:
    from openmdao.api import BasicImpl as impl
    rank = 0

class DistribCompSimple(Component):
    """Uses 2 procs but takes full input vars"""

    def __init__(self, arr_size=2):
        super(DistribCompSimple, self).__init__()
        self._arr_size = arr_size
        self.add_param('invar', 0.)
        self.add_output('outvec', np.ones(arr_size, float))

    def solve_nonlinear(self, params, unknowns, resids):
        if rank == 0:
            unknowns['outvec'] = params['invar'] * np.ones(self._arr_size) * 0.25
        elif rank == 1:
            unknowns['outvec'] = params['invar'] * np.ones(self._arr_size) * 0.5
        print 'hello from rank', rank, unknowns['outvec']

    def get_req_procs(self):
        return (2, 2)

if __name__ == '__main__':
    N_PROCS = 4
    prob = Problem(impl=impl)
    root = prob.root = Group()

    root.add('p1', IndepVarComp('invar', 0.), promotes=['*'])
    root.add('comp', DistribCompSimple(2), promotes=['*'])

    prob.driver = LatinHypercubeDriver(4, num_par_doe=N_PROCS/2)
    prob.driver.add_desvar('invar', lower=-5.0, upper=5.0)
    prob.driver.add_objective('outvec')

    prob.setup(check=False)
    prob.run()
I run this with
mpirun -np 4 python lhc_driver.py
and get this error:
Traceback (most recent call last):
  File "lhc_driver.py", line 60, in <module>
    prob.run()
  File "/Users/frza/git/OpenMDAO/openmdao/core/problem.py", line 1064, in run
    self.driver.run(self)
  File "/Users/frza/git/OpenMDAO/openmdao/drivers/predeterminedruns_driver.py", line 157, in run
    self._run_par_doe(problem.root)
  File "/Users/frza/git/OpenMDAO/openmdao/drivers/predeterminedruns_driver.py", line 221, in _run_par_doe
    for case in self._get_case_w_nones(self._distrib_build_runlist()):
  File "/Users/frza/git/OpenMDAO/openmdao/drivers/predeterminedruns_driver.py", line 283, in _get_case_w_nones
    case = next(it)
  File "/Users/frza/git/OpenMDAO/openmdao/drivers/latinhypercube_driver.py", line 119, in _distrib_build_runlist
    run_list = comm.scatter(job_list, root=0)
  File "MPI/Comm.pyx", line 1286, in mpi4py.MPI.Comm.scatter (src/mpi4py.MPI.c:109079)
  File "MPI/msgpickle.pxi", line 707, in mpi4py.MPI.PyMPI_scatter (src/mpi4py.MPI.c:48114)
  File "MPI/msgpickle.pxi", line 161, in mpi4py.MPI.Pickle.dumpv (src/mpi4py.MPI.c:41605)
ValueError: expecting 4 items, got 2
I don't see a test for this use case in the latest master, so does that mean you don't yet support it or is it a bug?
Thanks for submitting a simple test case for this. I added the parallel DOE stuff fairly recently and forgot to test it with distributed components. I'll add a story to our bug tracker for this and hopefully get it fixed soon.

How should you use argparse to choose which action to perform and pass arguments to it?

I want to use the argparse library to parse some arguments, but I'm struggling to work out which, of the myriad ways you can specify arguments, is the simplest way to choose between a few actions. Different actions require different numbers of arguments.
Given the following calls I'd expect the following outputs:
> python MyClass.py action1 foo
Action 1: 12345 - foo
> python MyClass.py action2 20 30
Action 2: 12345 - 20 30
The following seems to work:
import argparse

class MyClass:
    def __init__(self, someVar):
        self.someVar = someVar

    def Action1(self, intToPrint):
        print("Print 1: %d - %s" % (self.someVar, intToPrint))

    def Action2(self, firstNum, firstString):
        print("Print 2: %d - %d %s" % (self.someVar, firstNum, firstString))

def CallAction1(mc, args):
    mc.Action1(args.intToPrint)

def CallAction2(mc, args):
    mc.Action2(args.firstNum, args.firstString)

def Main():
    parser = argparse.ArgumentParser(prog='PythonArgumentParsing.py')
    subparsers = parser.add_subparsers(help='commands')

    action1Group = subparsers.add_parser('action1', help='action 1 help')
    action1Group.add_argument('intToPrint', type=str)
    action1Group.set_defaults(func=CallAction1)

    action2Group = subparsers.add_parser('action2', help='action 2 help')
    action2Group.add_argument('firstNum', type=int)
    action2Group.add_argument('firstString', type=str)
    action2Group.set_defaults(func=CallAction2)

    args = parser.parse_args()
    someVar = 12345
    mc = MyClass(someVar)
    args.func(mc, args)

if __name__ == "__main__":
    Main()
...but it seems a little clunky to have to create a CallAction to pass arguments from the parser.
Is there any way to clean this up?
I gather that you are just bothered by needing to write the CallAction... functions, which convert the args namespace into positional parameters for the method calls.
Using keyword parameters might eliminate this need. The following hasn't been tested yet:
def Action1(self, intToPrint=None, **kwargs):
    print("Print 1: %d - %s" % (self.someVar, intToPrint))

def Action2(self, firstNum=None, firstString=None, **kwargs):
    print("Print 2: %d - %d %s" % (self.someVar, firstNum, firstString))
...
action1Group.set_defaults(func=MyClass.Action1)
...
args.func(mc, **vars(args))
If I've done this right, I can pass the whole vars(args) dictionary to the method. It will use the parameters that it needs and ignore the rest.
argparse makes extensive use of this **kwargs style of passing parameters.
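To make this concrete, here is a minimal self-contained sketch assembling the pieces above; this is my own combination in the same spirit, not the answerer's verified code:
import argparse

class MyClass:
    def __init__(self, someVar):
        self.someVar = someVar

    def Action1(self, intToPrint=None, **kwargs):
        # **kwargs swallows the unused entries of vars(args), e.g. 'func'.
        print("Action 1: %d - %s" % (self.someVar, intToPrint))

    def Action2(self, firstNum=None, firstString=None, **kwargs):
        print("Action 2: %d - %d %s" % (self.someVar, firstNum, firstString))

def Main():
    parser = argparse.ArgumentParser(prog='PythonArgumentParsing.py')
    subparsers = parser.add_subparsers(help='commands')

    action1Group = subparsers.add_parser('action1', help='action 1 help')
    action1Group.add_argument('intToPrint', type=str)
    action1Group.set_defaults(func=MyClass.Action1)

    action2Group = subparsers.add_parser('action2', help='action 2 help')
    action2Group.add_argument('firstNum', type=int)
    action2Group.add_argument('firstString', type=str)
    action2Group.set_defaults(func=MyClass.Action2)

    args = parser.parse_args()
    mc = MyClass(12345)
    # Plain function from the class, called with the instance plus keywords.
    args.func(mc, **vars(args))

if __name__ == "__main__":
    Main()
Run as python MyClass.py action1 foo, this prints Action 1: 12345 - foo, matching the desired output from the question.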
