compute_partials call order per iteration - openmdao

To minimize the number of repeated computations per iteration, I have been using some extra class attributes that are set in compute() and reused in compute_partials()
(see the code snippet below; it should make clear what I mean).
The questions are:
Is there any case in which compute_partials() is called before compute()?
Is there any risk in using compute() and compute_partials() as in the code below (see those two methods)?
import numpy as np
from openmdao.api import ExplicitComponent

class MomentOfInertiaComp(ExplicitComponent):

    def initialize(self):
        self.options.declare('num_elements', types=int)
        self.options.declare('b')
        self.compcou = 0
        self.partcou = 0

    def setup(self):
        num_elements = self.options['num_elements']
        self.add_input('h', shape=num_elements)
        self.add_output('I', shape=num_elements)
        rows = np.arange(num_elements)
        cols = np.arange(num_elements)
        self.declare_partials('I', 'h', rows=rows, cols=cols)

    def compute(self, inputs, outputs):
        b = self.options['b']
        # Instead of this line
        # outputs['I'] = 1./12. * b * inputs['h'] ** 3
        # these 2 lines are used
        self.var = inputs['h'] ** 2
        outputs['I'] = 1./12. * b * inputs['h'] * self.var
        self.compcou += 1

    def compute_partials(self, inputs, partials):
        b = self.options['b']
        self.partcou += 1
        # instead of this
        # partials['I', 'h'] = 1./4. * b * inputs['h'] ** 2
        # this is used
        partials['I', 'h'] = 1./4. * b * self.var

OpenMDAO does not guarantee that compute() is called before compute_partials(). You need to assume that the two calls are totally independent.
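The safe pattern, then, is to recompute anything the derivatives need directly from inputs inside compute_partials(), exactly as the commented-out line in the question already does. A minimal sketch of the two methods of the component above (method bodies only, no cached state):

    def compute(self, inputs, outputs):
        b = self.options['b']
        outputs['I'] = 1./12. * b * inputs['h'] ** 3

    def compute_partials(self, inputs, partials):
        b = self.options['b']
        # Recompute h**2 directly from inputs instead of reading a value cached by compute();
        # this stays correct even if compute_partials() ever runs without a preceding compute().
        partials['I', 'h'] = 1./4. * b * inputs['h'] ** 2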

Related

Cannot convert Cython memoryviewslice to ndarray

I am trying to write an explicit successive over-relaxation (SOR) function over a 2D matrix, in this case for an electrostatic potential.
When trying to optimize this in Cython, I seem to get an error that I am not quite sure I understand.
%%cython
cimport cython
import numpy as np
cimport numpy as np
from libc.math cimport pi

#SOR function
#cython.boundscheck(False)
#cython.wraparound(False)
#cython.initializedcheck(False)
#cython.nonecheck(False)
def SOR_potential(np.float64_t[:, :] potential, mask, int max_iter, float error_threshold, float alpha):
    #the ints
    cdef int height = potential.shape[0]
    cdef int width = potential.shape[1] #more general non quadratic
    cdef int it = 0
    #the floats
    cdef float error = 0.0
    cdef float sor_adjustment
    #the copy array we will iterate over and return
    cdef np.ndarray[np.float64_t, ndim=2] input_matrix = potential.copy()
    #set the ideal alpha if user input is 0.0
    if alpha == 0.0:
        alpha = 2/(1+(pi/((height+width)*0.5)))
    #start the SOR loop. The for loops omit the 0 and -1 index
    #because they are *shadow points* used for neuman boundary conditions
    cdef int row, col
    #iteration loop
    while True:
        #2-stencil loop
        for row in range(1, height-1):
            for col in range(1, width-1):
                if not(mask[row][col]):
                    potential[row][col] = 0.25*(input_matrix[row-1][col] + \
                                                input_matrix[row+1][col] + \
                                                input_matrix[row][col-1] + \
                                                input_matrix[row][col+1])
                    sor_adjustment = alpha * (potential[row][col] - input_matrix[row][col])
                    input_matrix[row][col] = sor_adjustment + input_matrix[row][col]
                    error += np.abs(input_matrix[row][col] - potential[row][col])
        #by the end of this loop input_matrix and potential have diff values
        if error < error_threshold:
            break
        elif it > max_iter:
            break
        else:
            error = 0
            it = it + 1
    return input_matrix, error, it
and I used a very simple example for an array to see if it would give an error output.
test = [[True, False], [True, False]]
pot = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float64)
SOR_potential(pot, test, 50, 0.1, 0.0)
Gives out this error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In [30], line 1
----> 1 SOR_potential(pot, test, 50, 0.1, 0.0)
File _cython_magic_6c09a5060df996862b8e35adacc0e25c.pyx:21, in _cython_magic_6c09a5060df996862b8e35adacc0e25c.SOR_potential()
TypeError: Cannot convert _cython_magic_6c09a5060df996862b8e35adacc0e25c._memoryviewslice to numpy.ndarray
But when I delete the np.float64_t[:, :] part from
def SOR_potential(np.float64_t[:, :] potential,...)
the code works. Of course, the simple 2x2 matrix will not converge but it gives no errors. Where is the mistake here?
I also tried importing the modules differently as suggested here
Cython: how to resolve TypeError: Cannot convert memoryviewslice to numpy.ndarray?
but I got 2 errors instead of 1 where there were type mismatches.
Note: I would also like to ask how I would define a numpy array of booleans to pass as the "mask" input of the function.
A minimal reproducible example of your error message would look like this:
def foo(np.float64_t[:, :] A):
    cdef np.ndarray[np.float64_t, ndim=2] B = A.copy()
    # ... do something with B ...
    return B
The problem is that A is a memoryview while B is a np.ndarray: calling .copy() on a typed memoryview gives you another memoryview, and that cannot be implicitly converted to a np.ndarray, hence the error. If both A and B are memoryviews, i.e.
def foo(np.float64_t[:, :] A):
    cdef np.float64_t[:, :] B = A.copy()
    # ... do something with B ...
    return np.asarray(B)
your example will compile without errors. Note that you then need to call np.asarray if you want to return a np.ndarray.
Regarding your second question: You could use a memoryview with dtype np.uint8_t
def foo(np.float64_t[:, :] A, np.uint8_t[:, :] mask):
    cdef np.float64_t[:, :] B = A.copy()
    # ... do something with B and mask ...
    return np.asarray(B)
and call it like this from Python:
mask = np.array([[True, True], [False, False]], dtype=bool)
A = np.ones((2,2), dtype=np.float64)
foo(A, mask)
PS: If your array's buffers are guaranteed to be C-Contiguous, you can use contiguous memoryviews for better performance:
def foo(np.float64_t[:, ::1] A, np.uint8_t[:, ::1] mask):
    cdef np.float64_t[:, ::1] B = A.copy()
    # ... do something with B and mask ...
    return np.asarray(B)
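If you are not sure the incoming arrays are C-contiguous with the expected dtypes, one option (my addition, not part of the answer above) is to normalize them on the Python side before calling the contiguous-memoryview version of foo:

import numpy as np

mask = np.array([[True, True], [False, False]], dtype=bool)
A = np.ones((2, 2), dtype=np.float64)

# Force C-contiguous buffers with the expected dtypes;
# np.ascontiguousarray makes no copy if the array already matches.
A_c = np.ascontiguousarray(A, dtype=np.float64)
mask_c = np.ascontiguousarray(mask, dtype=np.uint8)

foo(A_c, mask_c)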

Optimizing Distributed I/O with serial output

I am having trouble understanding how to optimize a distributed component with a serial output. This is my attempt with an example problem given in the openmdao docs.
import numpy as np
import openmdao.api as om
from openmdao.utils.array_utils import evenly_distrib_idxs
from openmdao.utils.mpi import MPI

class MixedDistrib2(om.ExplicitComponent):

    def setup(self):
        # Distributed Input
        self.add_input('in_dist', shape_by_conn=True, distributed=True)
        # Serial Input
        self.add_input('in_serial', val=1)
        # Distributed Output
        self.add_output('out_dist', copy_shape='in_dist', distributed=True)
        # Serial Output
        self.add_output('out_serial', copy_shape='in_serial')
        #self.declare_partials('*','*', method='cs')

    def compute(self, inputs, outputs):
        x = inputs['in_dist']
        y = inputs['in_serial']
        # "Computationally Intensive" operation that we wish to parallelize.
        f_x = x**2 - 2.0*x + 4.0
        # These operations are repeated on all procs.
        f_y = y ** 0.5
        g_y = y**2 + 3.0*y - 5.0
        # Compute square root of our portion of the distributed input.
        g_x = x ** 0.5
        # Distributed output
        outputs['out_dist'] = f_x + f_y
        # Serial output
        if MPI and comm.size > 1:
            # We need to gather the summed values to compute the total sum over all procs.
            local_sum = np.array(np.sum(g_x))
            total_sum = local_sum.copy()
            self.comm.Allreduce(local_sum, total_sum, op=MPI.SUM)
            outputs['out_serial'] = g_y * total_sum
        else:
            # Recommended to make sure your code can run in serial too, for testing.
            outputs['out_serial'] = g_y * np.sum(g_x)

size = 7

if MPI:
    comm = MPI.COMM_WORLD
    rank = comm.rank
    sizes, offsets = evenly_distrib_idxs(comm.size, size)
else:
    # When running in serial, the entire variable is on rank 0.
    rank = 0
    sizes = {rank : size}
    offsets = {rank : 0}

prob = om.Problem()
model = prob.model

# Create a distributed source for the distributed input.
ivc = om.IndepVarComp()
ivc.add_output('x_dist', np.zeros(sizes[rank]), distributed=True)
ivc.add_output('x_serial', val=1)

model.add_subsystem("indep", ivc)
model.add_subsystem("D1", MixedDistrib2())
model.add_subsystem('con_cmp1', om.ExecComp('con1 = y**2'), promotes=['con1', 'y'])

model.connect('indep.x_dist', 'D1.in_dist')
model.connect('indep.x_serial', ['D1.in_serial','y'])

prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'

model.add_design_var('indep.x_serial', lower=5, upper=10)
model.add_constraint('con1', upper=90)
model.add_objective('D1.out_serial')

prob.setup(force_alloc_complex=True)
#prob.setup()

# Set initial values of distributed variable.
x_dist_init = [1,1,1,1,1,1,1]
prob.set_val('indep.x_dist', x_dist_init)

# Set initial values of serial variable.
prob.set_val('indep.x_serial', 10)

#prob.run_model()
prob.run_driver()

print('x_dist', prob.get_val('indep.x_dist', get_remote=True))
print('x_serial', prob.get_val('indep.x_serial'))
print('Obj', prob.get_val('D1.out_serial'))
The problem is with defining partials with 'fd' or 'cs'. I cannot define partials of a serial output w.r.t. a distributed input, so I used prob.setup(force_alloc_complex=True) to use complex step. But it gives me this warning: DerivativesWarning: Constraints or objectives [('D1.out_serial', inds=[0])] cannot be impacted by the design variables of the problem. I understand the warning is raised because the total derivative is 0, but I don't understand the reason; clearly the total derivative should not be 0 here. I guess this is because I didn't explicitly declare_partials in the component. I tried removing the distributed components and ran it again with declare_partials, and this works correctly (code below).
import numpy as np
import openmdao.api as om

class MixedDistrib2(om.ExplicitComponent):

    def setup(self):
        self.add_input('in_dist', np.zeros(7))
        self.add_input('in_serial', val=1)
        self.add_output('out_serial', val=0)
        self.declare_partials('*','*', method='cs')

    def compute(self, inputs, outputs):
        x = inputs['in_dist']
        y = inputs['in_serial']
        g_y = y**2 + 3.0*y - 5.0
        g_x = x ** 0.5
        outputs['out_serial'] = g_y * np.sum(g_x)

prob = om.Problem()
model = prob.model

model.add_subsystem("D1", MixedDistrib2(), promotes_inputs=['in_dist', 'in_serial'], promotes_outputs=['out_serial'])
model.add_subsystem('con_cmp1', om.ExecComp('con1 = in_serial**2'), promotes=['con1', 'in_serial'])

prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'

model.add_design_var('in_serial', lower=5, upper=10)
model.add_constraint('con1', upper=90)
model.add_objective('out_serial')

prob.setup(force_alloc_complex=True)

prob.set_val('in_dist', [1,1,1,1,1,1,1])
prob.set_val('in_serial', 10)

prob.run_model()
prob.check_totals()
prob.run_driver()

print('x_dist', prob.get_val('in_dist', get_remote=True))
print('x_serial', prob.get_val('in_serial'))
print('Obj', prob.get_val('out_serial'))
What I am trying to understand is:
How do I use 'fd' or 'cs' in a distributed component with a serial output?
What is the meaning of prob.setup(force_alloc_complex=True)? Isn't it forcing the use of 'cs' in all the components in the problem? If so, why does the total derivative become 0?
When I run your code in OpenMDAO V 3.11.0 (after uncommenting the declare_partials call) I get the following error:
RuntimeError: 'D1' <class MixedDistrib2>: component has defined partial ('out_serial', 'in_dist') which is a serial output wrt a distributed input. This is only supported using the matrix free API.
As the error indicates, you can't use the matrix-based API for derivatives in this situation. The reasons why are a bit subtle, and probably outside the scope of what needs to be dealt with to answer your question here. It boils down to OpenMDAO not knowing what kind of distributed operations are being done in the compute and having no way to manage those details when you propagate things in reverse.
So you need to use the matrix-free derivative APIs in this situation. When you use the matrix-free APIs you DO NOT declare any partials, because you don't want OpenMDAO to allocate any memory for you to store partials in (and you wouldn't use that memory even if it did).
I've coded them for your example here, but I need to note a few important details:
Your example has a distributed IVC, but as of OpenMDAO V3.11.0 you can't get total derivatives with respect to distributed design variables. I assume you just made it that way to make your simple test case, but in case your real problem was set up this way, you need to note this and not do it this way. Instead, make the IVC serial, and use src indices to distribute the correct parts to each proc.
In the example below, the derivatives are correct. However, there seems to be a bug in the check_partials output when running in parallel, so the reverse mode partials look like they are off by a factor of the comm size... this will have to get fixed in later releases.
I only did the derivatives for out_serial. out_dist will work similarly and is left as an exercise for the reader :)
You'll notice that I duplicated some code between the compute and compute_jacvec_product methods. You can abstract this duplicated code out into its own method (or call compute from within compute_jacvec_product by providing your own output dictionary). However, you might be asking why the duplicate call is needed at all. Why can't you store the values from the compute call? The answer is, in large part, that OpenMDAO does not guarantee that compute is always called before compute_jacvec_product. However, I'll also point out that this kind of code duplication is very AD-like. Any AD code will have the same kind of duplication built in, even though you don't see it.
import numpy as np
import openmdao.api as om
from openmdao.utils.array_utils import evenly_distrib_idxs
from openmdao.utils.mpi import MPI

class MixedDistrib2(om.ExplicitComponent):

    def setup(self):
        # Distributed Input
        self.add_input('in_dist', shape_by_conn=True, distributed=True)
        # Serial Input
        self.add_input('in_serial', val=1)
        # Distributed Output
        self.add_output('out_dist', copy_shape='in_dist', distributed=True)
        # Serial Output
        self.add_output('out_serial', copy_shape='in_serial')
        # self.declare_partials('*','*', method='fd')

    def compute(self, inputs, outputs):
        x = inputs['in_dist']
        y = inputs['in_serial']
        # "Computationally Intensive" operation that we wish to parallelize.
        f_x = x**2 - 2.0*x + 4.0
        # These operations are repeated on all procs.
        f_y = y ** 0.5
        g_y = y**2 + 3.0*y - 5.0
        # Compute square root of our portion of the distributed input.
        g_x = x ** 0.5
        # Distributed output
        outputs['out_dist'] = f_x + f_y
        # Serial output
        if MPI and comm.size > 1:
            # We need to gather the summed values to compute the total sum over all procs.
            local_sum = np.array(np.sum(g_x))
            total_sum = local_sum.copy()
            self.comm.Allreduce(local_sum, total_sum, op=MPI.SUM)
            outputs['out_serial'] = g_y * total_sum
        else:
            # Recommended to make sure your code can run in serial too, for testing.
            outputs['out_serial'] = g_y * np.sum(g_x)

    def compute_jacvec_product(self, inputs, d_inputs, d_outputs, mode):
        x = inputs['in_dist']
        y = inputs['in_serial']
        g_y = y**2 + 3.0*y - 5.0
        # "Computationally Intensive" operation that we wish to parallelize.
        f_x = x**2 - 2.0*x + 4.0
        # These operations are repeated on all procs.
        f_y = y ** 0.5
        g_y = y**2 + 3.0*y - 5.0
        # Compute square root of our portion of the distributed input.
        g_x = x ** 0.5
        # Distributed output
        out_dist = f_x + f_y
        # Serial output
        if MPI and comm.size > 1:
            # We need to gather the summed values to compute the total sum over all procs.
            local_sum = np.array(np.sum(g_x))
            total_sum = local_sum.copy()
            self.comm.Allreduce(local_sum, total_sum, op=MPI.SUM)
            # total_sum
        else:
            # Recommended to make sure your code can run in serial too, for testing.
            total_sum = np.sum(g_x)

        num_x = len(x)

        d_f_x__d_x = np.diag(2*x - 2.)
        d_f_y__d_y = np.ones(num_x)*0.5*y**-0.5
        d_g_y__d_y = 2*y + 3.
        d_g_x__d_x = 0.5*x**-0.5

        d_out_dist__d_x = d_f_x__d_x # square matrix
        d_out_dist__d_y = d_f_y__d_y # num_x,1
        d_out_serial__d_y = d_g_y__d_y # scalar
        d_out_serial__d_x = g_y*d_g_x__d_x.reshape((1,num_x))

        if mode == 'fwd':
            if 'out_serial' in d_outputs:
                if 'in_dist' in d_inputs:
                    d_outputs['out_serial'] += d_out_serial__d_x.dot(d_inputs['in_dist'])
                if 'in_serial' in d_inputs:
                    d_outputs['out_serial'] += d_out_serial__d_y.dot(d_inputs['in_serial'])
        elif mode == 'rev':
            if 'out_serial' in d_outputs:
                if 'in_dist' in d_inputs:
                    d_inputs['in_dist'] += d_out_serial__d_x.T.dot(d_outputs['out_serial'])
                if 'in_serial' in d_inputs:
                    d_inputs['in_serial'] += total_sum*d_out_serial__d_y.T.dot(d_outputs['out_serial'])

size = 7

if MPI:
    comm = MPI.COMM_WORLD
    rank = comm.rank
    sizes, offsets = evenly_distrib_idxs(comm.size, size)
else:
    # When running in serial, the entire variable is on rank 0.
    rank = 0
    sizes = {rank : size}
    offsets = {rank : 0}

prob = om.Problem()
model = prob.model

# Create a distributed source for the distributed input.
ivc = om.IndepVarComp()
ivc.add_output('x_dist', np.zeros(sizes[rank]), distributed=True)
ivc.add_output('x_serial', val=1)

model.add_subsystem("indep", ivc)
model.add_subsystem("D1", MixedDistrib2())
model.add_subsystem('con_cmp1', om.ExecComp('con1 = y**2'), promotes=['con1', 'y'])

model.connect('indep.x_dist', 'D1.in_dist')
model.connect('indep.x_serial', ['D1.in_serial','y'])

prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'

model.add_design_var('indep.x_serial', lower=5, upper=10)
model.add_constraint('con1', upper=90)
model.add_objective('D1.out_serial')

prob.setup(force_alloc_complex=True)
#prob.setup()

# Set initial values of distributed variable.
x_dist_init = np.ones(sizes[rank])
prob.set_val('indep.x_dist', x_dist_init)

# Set initial values of serial variable.
prob.set_val('indep.x_serial', 10)

prob.run_model()
prob.check_partials()
# prob.run_driver()

print('x_dist', prob.get_val('indep.x_dist', get_remote=True))
print('x_serial', prob.get_val('indep.x_serial'))
print('Obj', prob.get_val('D1.out_serial'))
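One practical note: the distributed branches (if MPI and comm.size > 1:) only run when the script is launched under MPI, for example with something like mpirun -n 2 python your_script.py (the file name here is just a placeholder). Run as a plain Python process, comm.size is 1 and execution falls through to the serial else branches, which is exactly why keeping those branches around for testing is recommended in the comments above.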

Abstract typing and multiple dispatch for functions in julia

I want to have objects interact with specific interactions depending on their type.
Example problem: I have four particles, two of type A and two of type B. When two type A's interact I want to use the function
function interaction(parm1, parm2)
    return parm1 + parm2
end
When two type B's interact I want to use the function
function interaction(parm1, parm2)
    return parm1 * parm2
end
When a type A interacts with a type B I want to use the function
function interaction(parm1, parm2)
    return parm1 - parm2
end
These functions are purposefully over simple.
I want to calculate a simple summation that depends on pairwise interactions:
struct part
    parm::Float64
end

# part I need help with:
# initialize a list of length 4, where the entries are `struct part`, and the abstract types
# are `typeA` for the first two and `typeB` for the second two. The values for the parm can be
# -1.0, 3, 4, 1.5 respectively

energy = 0.0
for i in range(length(particles)-1)
    for j = i+1:length(particles)
        energy += interaction(particles[i].parm, particles[j].parm)
    end
end
println(energy)
Assuming the parameter values particle[1].parm = -1, particle[2].parm = 3, particle[3].parm = 4, particle[4].parm = 1.5, the energy should account for the interactions of
(1,2) = -1 + 3 = 2
(1,3) = -1 - 4 = -5
(1,4) = -1 - 1.5 = -2.5
(2,3) = 3 - 4 = -1
(2,4) = 3 - 1.5 = 1.5
(3,4) = 4 * 1.5 = 6
energy = 1
Doing this with if statements is almost trivial but not extensible. I am after a clean, tidy Julia approach...
You can do this (I use the simplest form of the implementation, as in this case it is enough and, I hope, it is explicit what happens):
struct A
    parm::Float64
end

struct B
    parm::Float64
end

interaction(p1::A, p2::A) = p1.parm + p2.parm
interaction(p1::B, p2::B) = p1.parm * p2.parm
interaction(p1::A, p2::B) = p1.parm - p2.parm
interaction(p1::B, p2::A) = p1.parm - p2.parm # I added this rule, but you can leave it out and get MethodError if such case happens

function total_energy(particles)
    energy = 0.0
    for i in 1:length(particles)-1
        for j = i+1:length(particles)
            energy += interaction(particles[i], particles[j])
        end
    end
    return energy
end

particles = Union{A, B}[A(-1), A(3), B(4), B(1.5)] # Union makes sure things are compiled to be fast

total_energy(particles)
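For the parameter values from the question (-1.0, 3, 4, 1.5), this call returns 1.0, matching the hand-computed sum of pairwise interactions listed in the question.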
I have no idea how to do this in your language, but what you need is an analogue to what we call the strategy pattern in object-oriented programming. A strategy is a pluggable, reusable algorithm. In Java I’d make an interface like:
interface Interaction<A, B>
{
    double interact(A a, B b);
}
Then implement this three times and reuse those parts wherever you need things to interact. Another method can take an Interaction and use it without knowing how it’s implemented. I think this is the effect you’re after. Sorry I don’t know how to translate into your dialect.

vectorize complex slicing with pandas dataframe

I'd like to be able to vectorize, for speed purposes, this piece of code. The purpose is to calculate a function, in this case a standard deviation, over pairs of dates that are contained in two separate arrays.
import pandas as pd
import numpy as np
asd_1 = pd.Series(0.01 * np.random.randn(252), index=pd.date_range('2011-1-1', periods=252))
index_1 = pd.to_datetime(['2011-2-2', '2011-4-3', '2011-5-1',])
index_2 = pd.to_datetime(['2011-2-15', '2011-4-16', '2011-5-17',])
index_tot = list(zip(index_1,index_2))
aux_learning_std = pd.DataFrame([np.nanstd(asd_1.loc[i:j]) for i, j in index_tot], index=index_1)
The solution, which works, is performed through a loop, but I'd rather be able to vectorize it through numpy/pandas, which is much faster. Initially I thought about using something like:
df_aux = pd.concat([asd_1 for _ in range(len(index_1))], axis=1)
results = df_aux.apply(lambda x: np.nanstd(x.loc[i,j]), axis = 0)
but here I fail to put the vectors together into one operation.
Any and all advice is welcome.
P.S.: below there is an image for explanatory purposes.
Vectorized standard deviation across ranges in an array
def get_ranges_arr(starts,ends):
    # Taken from http://stackoverflow.com/a/37626057/3293881
    counts = ends - starts
    counts_csum = counts.cumsum()
    id_arr = np.ones(counts_csum[-1],dtype=int)
    id_arr[0] = starts[0]
    id_arr[counts_csum[:-1]] = starts[1:] - ends[:-1] + 1
    return id_arr.cumsum()
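# A quick illustration of what get_ranges_arr produces (hypothetical inputs, not from the post):
# get_ranges_arr(np.array([2, 6]), np.array([5, 9])) returns array([2, 3, 4, 6, 7, 8]),
# i.e. the concatenation of range(2, 5) and range(6, 9), so every per-range index is
# materialized in one flat array that ranged_std can slice with in a single shot.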
def ranged_std(arr,starts,ends):
    # Get all indices and the IDs corresponding to same groups
    idx = get_ranges_arr(starts,ends)
    id_arr = np.repeat(np.arange(starts.size),ends-starts)

    # Extract relevant data
    slice_arr = arr[idx]

    # Simulate standard deviation implementation for a number of groups
    # using id_arr as the basis to perform various mathematical operations
    # within each group. Since std. deviation performs sum/mean reduction,
    # we can simply use np.bincount for an efficient implementation.
    # Std. deviation formula used:
    # https://github.com/numpy/numpy/blob/v1.11.0/numpy/core/fromnumeric.py#L2939
    grp_counts = np.bincount(id_arr)
    mean_vals = np.bincount(id_arr,slice_arr)/grp_counts
    abs_vals = np.abs(slice_arr - mean_vals[id_arr])**2
    return np.sqrt(np.bincount(id_arr,abs_vals)/grp_counts)
Sample run (verify against a loopy version)
In [173]: arr = np.random.randint(0,9,(20))
In [174]: starts = np.array([2,6,11])
In [175]: ends = np.array([8,9,15])
In [176]: [np.std(arr[i:j]) for i,j in zip(starts,ends)]
Out[176]: [1.9720265943665387, 0.81649658092772603, 0.82915619758884995]
In [177]: ranged_std(arr,starts,ends)
Out[177]: array([ 1.97202659, 0.81649658, 0.8291562 ])
Runtime test
Case #1 : Very small number of ranges 3
In [21]: arr = np.random.randint(0,9,(20))
In [22]: starts = np.array([2,6,11])
In [23]: ends = np.array([8,9,15])
In [24]: %timeit [np.std(arr[i:j]) for i,j in zip(starts,ends)]
10000 loops, best of 3: 146 µs per loop
In [25]: %timeit ranged_std(arr,starts,ends)
10000 loops, best of 3: 45 µs per loop
Case #2 : Decent number of ranges 1000
In [32]: arr = np.random.randint(0,9,(1010))
In [33]: starts = np.random.randint(0,9,(1000))
In [34]: ends = starts + np.random.randint(0,9,(1000))
In [35]: %timeit [np.std(arr[i:j]) for i,j in zip(starts,ends)]
10 loops, best of 3: 47.5 ms per loop
In [36]: %timeit ranged_std(arr,starts,ends)
1000 loops, best of 3: 217 µs per loop
Case #3 : Large number of ranges 10000
In [60]: arr = np.random.randint(0,9,(1010))
In [61]: arr = np.random.randint(0,9,(10010))
In [62]: starts = np.random.randint(0,9,(10000))
In [63]: ends = starts + np.random.randint(0,9,(10000))
In [64]: %timeit [np.std(arr[i:j]) for i,j in zip(starts,ends)]
1 loops, best of 3: 474 ms per loop
In [65]: %timeit ranged_std(arr,starts,ends)
100 loops, best of 3: 2.17 ms per loop
Really amazing speedups of 200x+!
Using ranged_std to solve our case
# Get start, stop numeric indices as needed for getting ranges array later on
starts = asd_1.index.searchsorted(index_1)
ends = asd_1.index.searchsorted(index_2)
# Create final dataframe output using ranged_std func
df = pd.DataFrame(ranged_std(asd_1.values,starts,ends+1),index=index_1)
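Note that ends+1 is needed because label-based .loc[i:j] slicing in pandas includes the end date, while the positional ranges built by get_ranges_arr are end-exclusive, so the searchsorted end position has to be pushed one past the matching date.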
Sample run for verification -
In [17]: asd_1 = pd.Series(0.01 * np.random.randn(252), index=\
...: pd.date_range('2011-1-1', periods=252))
...:
...: index_1 = pd.to_datetime(['2011-2-2', '2011-4-3', '2011-5-1',])
...: index_2 = pd.to_datetime(['2011-2-15', '2011-4-16', '2011-5-17',])
...:
...: index_tot = list(zip(index_1,index_2))
...: aux_learning_std = pd.DataFrame([np.nanstd(asd_1.loc[i:j]) for i, j in \
...: index_tot], index=index_1)
...:
In [18]: starts = asd_1.index.searchsorted(index_1)
...: ends = asd_1.index.searchsorted(index_2)
...: df = pd.DataFrame(ranged_std(asd_1.values,starts,ends+1),index=index_1)
...:
In [19]: aux_learning_std
Out[19]:
0
2011-02-02 0.007244
2011-04-03 0.012862
2011-05-01 0.010155
In [20]: df
Out[20]:
0
2011-02-02 0.007244
2011-04-03 0.012862
2011-05-01 0.010155

How to call numerical results to integrate an ODE using Runge-Kutta-4 in Python 3?

I'm trying to solve (for m_0) numerically the following ordinary differential equation:
dm0/dx=(((1-x)*(x*(2-x))**(1.5))/(k+x)**2)*(((x*(2-x))/3.0)*(dw/dx)**2 + ((8*(k+1))/(3*(k+x)))*w**2)
The values of w and dw/dx have already been found numerically using the Runge-Kutta 4th order method, and k is a fixed factor. I wrote a code where I call the values for w and dw/dx from an external file, then I organize them in an array, then I call the array in the function, and then I run the integration. My outcome is not what is expected :( and I don't know what is wrong. If anyone could give me a hand, it would be highly appreciated. Thank you!
from math import sqrt
from numpy import array,zeros,loadtxt
from printSoln import *
from run_kut4 import *

m = 1.15                       # Just a constant.
k = 3.0*sqrt(1.0-(1.0/m))-1.0  # k in terms of m.

omegas = loadtxt("omega.txt",float)    # Import values of w
domegas = loadtxt("domega.txt",float)  # Import values of dw/dx

w = []   # Defines the array w to store the values w^2
s = 0.0
for s in omegas:
    w.append(s**2)         # Calculates the values w**2
omeg = array(w,float)      # Array to store the value of w**2

dw = []  # Defines the array dw to store the values dw**2
t = 0.0
for t in domegas:
    dw.append(t**2)        # Calculates the values for dw**2
domeg = array(dw,float)    # Array to store the values of dw**2

x = 1.0e-12            # Starting point of integration
xStop = (2.0 - k)/3.0  # Final point of integration

def F(x,y):          # Define function to be integrated
    F = zeros(1)
    for i in domeg:  # Loop to call w^2, (dw/dx)^2
        for j in omeg:
            F[0] = (((1.0-x)*(x*(2.0-x))**(1.5))/(k+x)**2)*((1.0/3.0)*x*(2.0-x)*domeg[i] + (8.0*(k+1.0)*omeg[j])/(3.0*(k+x)))
    return F

y = array([((32.0*sqrt(2.0)*(k+1.0)*(x**2.5))/(15.0*(k**3)))])  # Initial condition for m_{0}
h = 1.0e-5  # Integration step
freq = 0    # Prints only initial and final values

X,Y = integrate(F,x,y,xStop,h)  # Calls Runge-Kutta 4
printSoln(X,Y,freq)             # Prints solution
Interpreting your verbal description, there is an ODE for omega, w' = F(x, w), and a coupled ODE for m0, m' = G(x, m, w, w'). The almost always optimal way to solve this is to treat it as a system of ODEs,
import numpy as np

def ODEfunc(x, y):
    w, m = y
    dw = F(x, w)
    dm = G(x, m, w, dw)
    return np.array([dw, dm])
which you can then insert in the ODE solver of your choice, e.g., the fictitious
ODEintegrate(ODEfunc, xsamples, y0)
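As a concrete (non-fictitious) sketch of the same idea, the coupled system can be handed to scipy.integrate.solve_ivp. Here F and G are simple placeholder right-hand sides standing in for the actual w and m0 equations from the question, and the interval and initial values are illustrative assumptions only:

import numpy as np
from scipy.integrate import solve_ivp

def F(x, w):
    # placeholder right-hand side for w' = F(x, w)
    return -w

def G(x, m, w, dw):
    # placeholder right-hand side for m' = G(x, m, w, w')
    return w**2 + dw**2

def ODEfunc(x, y):
    # unpack the state [w, m], evaluate both coupled derivatives
    w, m = y
    dw = F(x, w)
    dm = G(x, m, w, dw)
    return [dw, dm]

x0, x_stop = 1.0e-12, 0.5   # illustrative integration interval
y0 = [1.0, 0.0]             # illustrative initial values [w(x0), m0(x0)]

sol = solve_ivp(ODEfunc, (x0, x_stop), y0, method='RK45', dense_output=True)
print(sol.y[1, -1])         # m0 at the end of the interval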

Resources