I have an independent variable x, a vector stored as a numpy array. I'd like to be able to constrain some of the values in that vector based on other values in the vector, i.e. x[k] < x[k+1]. I've tried:
root.add('p1', IndepVarComp('x', x=np.ones(10, dtype=float)))
root.add('con', ExecComp('c0=x[1]-x[0]'))
root.connect('p1.x','con.x')
That gives me errors about the variable not existing and about arrays being connected to floats. What is the correct syntax to connect a particular value from an output array to a scalar input?
The main thing you were missing was an extra argument to ExecComp so that it knows how to size the incoming x:
import numpy as np
from openmdao.api import Problem, Group, IndepVarComp, ExecComp
prob = Problem()
prob.root = root = Group()
root.add('p1', IndepVarComp('x', np.array([3, 7, 5], dtype=float)))
root.add('con', ExecComp('co = x[1] - x[0]', x=np.zeros(3)))
root.connect('p1.x', 'con.x')
prob.setup()
prob.run()
print('con', prob['con.co'])
When I run this, I get the expected output:
##############################################
Setup: Checking root problem for potential issues...
No recorders have been specified, so no data will be saved.
Setup: Check of root problem complete.
##############################################
('con', 4.0)
An alternate way to do this is to use a scalar expression and then issue connections from a single index of 'p1.x':
root.add('con2', ExecComp('co = b - a'))
root.connect('p1.x', 'con2.a', src_indices=[0])
root.connect('p1.x', 'con2.b', src_indices=[1])
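For completeness, here is that alternate approach as a full script (a minimal sketch assembled from the two snippets above; the data and names are unchanged):

import numpy as np
from openmdao.api import Problem, Group, IndepVarComp, ExecComp

prob = Problem()
prob.root = root = Group()
root.add('p1', IndepVarComp('x', np.array([3, 7, 5], dtype=float)))

# Each scalar input of con2 is fed from one index of p1.x
root.add('con2', ExecComp('co = b - a'))
root.connect('p1.x', 'con2.a', src_indices=[0])
root.connect('p1.x', 'con2.b', src_indices=[1])

prob.setup()
prob.run()
print('con2', prob['con2.co'])  # should print 4.0, matching the array version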
I need the tensor mode-n product.
The definition of the tensor mode-n product can be seen here:
https://www.alexejgossmann.com/tensor_decomposition_tucker/
I found Python code that implements it.
I would like to convert this code into Julia.
import numpy as np

def mode_n_product(x, m, mode):
    x = np.asarray(x)
    m = np.asarray(m)
    if mode <= 0 or mode % 1 != 0:
        raise ValueError('`mode` must be a positive integer')
    if x.ndim < mode:
        raise ValueError('Invalid shape of X for mode = {}: {}'.format(mode, x.shape))
    if m.ndim != 2:
        raise ValueError('Invalid shape of M: {}'.format(m.shape))
    # Move the mode axis to the end, multiply by m.T, then move it back
    return np.swapaxes(np.swapaxes(x, mode - 1, -1).dot(m.T), mode - 1, -1)
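For reference, a quick shape check of what the Python version computes (a small usage sketch with arbitrary sizes):

x = np.random.rand(5, 4, 3)  # an order-3 tensor
m = np.random.rand(2, 5)     # a matrix applied along mode 1

y = mode_n_product(x, m, 1)
print(y.shape)  # (2, 4, 3): the mode-1 dimension 5 is replaced by 2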
I have found another answer using TensorToolbox.jl:
using TensorToolbox
X = rand(5, 4, 3);
A = rand(2, 5);
ttm(X, A, 1)  # X times A by mode 1; size(A, 2) must match size(X, 1)
One way is:
using TensorOperations
@tensor y[i1, i2, i3, out, i5] := x[i1, i2, i3, s, i5] * a[out, s]
This is literally the formula given at your link to define this, except that I changed the name of the summed index to s; you can use any index names you like, they are just markers. The sum is implicit, because s does not appear on the left.
There is nothing very special about putting the index out back in the same place. Like your Python code, @tensor permutes the dimensions of x in order to use ordinary matrix multiplication, and then permutes again to give y the requested order. The fewer permutations needed, the faster this will be.
Alternatively, you can try using LoopVectorization, Tullio; @tullio y[i1, i2, ...] works with the same notation. Instead of permuting in order to call a library matrix multiplication function, this writes a pure-Julia version which works with the array as it arrives.
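For intuition, the same contraction can be checked against numpy's einsum, whose index notation mirrors the @tensor expression above (a Python sketch reusing mode_n_product from the question, here with a mode-2 product and arbitrary sizes):

import numpy as np

x = np.random.rand(5, 4, 3)
a = np.random.rand(2, 4)

# y[i1, out, i3] = sum over s of x[i1, s, i3] * a[out, s]  (mode-2 product)
y = np.einsum('isj,os->ioj', x, a)
assert np.allclose(y, mode_n_product(x, a, 2))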
I have an R list (actually a list of lists of lists) that contains data I want to run a convex optimization procedure on in Sage.
The code goes like this:
sage_list = [None] * 2
for k in range(2):
    x = r('my_r_list[[1]][[1]][[k+1]]')
    sage_list[k] = x._sage_()
First, the x assignment is not consistent: if I execute the same code several times, I obtain different data. But more importantly, the sage_list[k] assignment gives an error:
NameError: name 'structure' is not defined
Doing the following, however, works:
sage_list = [None] * 2
x = r('my_r_list[[1]][[1]][[1]]')
y = r('my_r_list[[1]][[1]][[2]]')
sage_list[0] = x._sage_()
sage_list[1] = y._sage_()
Any idea why? (Of course, in reality I have many more than 2 iterations.)
This solution actually works:
x = r('my_r_list')[[1]][[1]][[k+1]]
r() yields an RElement object, on which R-style indexing works. Note that in your original loop the k inside the quoted string is never substituted: the string is passed verbatim to R, so R evaluates whatever k happens to mean in the R session, which would explain the inconsistent results.
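Put together as a loop (a minimal sketch based on the indexing above; my_r_list and the two iterations are from the question):

sage_list = [None] * 2
nested = r('my_r_list')  # fetch the RElement once
for k in range(2):
    # the [[...]] indexing now happens on the Python side, so k is the loop variable
    sage_list[k] = nested[[1]][[1]][[k + 1]]._sage_()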
I have a five-dimensional rootfinding problem I'd like to solve from within a Sage notebook, but the functions I wish to solve depend on other parameters that shouldn't be varied during the rootfinding. Figuring out how to set up a call to, say, scipy.optimize.newton_krylov has got me stumped. So let's say I have the following (with a, b, c, d, e the parameters I want to vary; F1, F2, F3, F4, F5 the five expressions I wish to make equal to F1Val, F2Val, F3Val, F4Val, F5Val, values I already know; and posVal another known parameter):
def func(a, b, c, d, e, F1Val, F2Val, F3Val, F4Val, F5Val, posVal):
    # subs returns a new expression, so bind the results
    f1 = F1.subs(x1=a, x2=b, x3=c, x4=d, x5=e, position=posVal)
    f2 = F2.subs(x1=a, x2=b, x3=c, x4=d, x5=e, position=posVal)
    f3 = F3.subs(x1=a, x2=b, x3=c, x4=d, x5=e, position=posVal)
    f4 = F4.subs(x1=a, x2=b, x3=c, x4=d, x5=e, position=posVal)
    f5 = F5.subs(x1=a, x2=b, x3=c, x4=d, x5=e, position=posVal)
    return (f1 - F1Val, f2 - F2Val, f3 - F3Val, f4 - F4Val, f5 - F5Val)
and now I want to pass this to a rootfinding function so that func = (0, 0, 0, 0, 0). I want to pass an initial-guess vector (a0, b0, c0, d0, e0) and a set of arguments (F1Val, F2Val, F3Val, F4Val, F5Val, posVal) for the evaluation, but I can't figure out how to do this. Is there a standard technique for this sort of thing? The multidimensional rootfinders in scipy seem to be lacking the args=() argument that the 1D rootfinders offer.
Best,
-user2275987
Well, I'm still not sure how to actually employ the Newton-Raphson method here, but using fsolve works for functions that accept a vector of variables and a vector of constant arguments. I'm reproducing my proof of concept here:
from scipy.optimize import fsolve

def tstfunc(xIn, constIn):
    x = xIn[0]
    y = xIn[1]
    a = constIn[0]
    b = constIn[1]
    out = [x + 2*y + a]
    out.append(a*x*y + b)
    return out

# Wrap the constant list in a tuple so it arrives whole as constIn
ans = fsolve(tstfunc, x0=[1, 1], args=([0.3, 2.1],))
print(ans)
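For rootfinders that lack args=(), such as the scipy.optimize.newton_krylov mentioned in the question, a standard workaround is to bind the constant arguments with functools.partial (or a lambda) so the solver only sees a function of the unknowns. A minimal sketch reusing tstfunc and the constants from above:

from functools import partial
from scipy.optimize import newton_krylov

# Fix constIn in advance, leaving a function of the unknown vector only
f = partial(tstfunc, constIn=[0.3, 2.1])
ans = newton_krylov(f, [1.0, 1.0])  # second argument is the initial guess
print(ans)

Alternatively, the scipy.optimize.root interface accepts args=() and exposes the same solver via method='krylov'.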
I have defined a couple of functions of arity 1, say func1(-) and func2(-). I have tested them and seen that they actually do what they are supposed to.
I wish to define a third function, say func3(-), that outputs the difference of func1(-) and func2(-). This is what I do:
func3(k) = {j=func1(k)-func2(k); print(j)}
Nevertheless, it doesn't return what it ought to. Let us suppose that func1(5) outputs 10 and func2(5) outputs 2. Then func3(5) ought to output 8, right? Instead, it returns the output of func1(5) in one row, the output of func2(5) in another row, and then a zero (even though the difference of the corresponding outputs is not 0).
Do you know what's wrong with the definition of func3(-)?
A GP user function returns the last evaluated value. Here, that is the result of the 'print(j)' command, which prints j (a side effect) and returns 'void', which is typecast to 0 when it must be given a value, as here.
f1(x) = 10
f2(x) = 2
f3(x) = f1(x) - f2(x)
correctly returns 8. You didn't give the code for your func1 / func2
functions, but I expect you included a 'print' statement, maybe expecting it
to return a value. That's why you get outputs on different rows, before the 0.
If you don't like this 'return-last-evaluation-result' behaviour, you can use
explicit 'return (result)' statements.
I have a binary matrix and would like to get the indices of the non-zero elements, preferably as a vector of cv::Points. There is a function that counts non-zero elements, but that's not what I need.
In Matlab, the equivalent call would be simply find().
I could search through the entire matrix and save the indices, but that is not classy!
If you don't mind using the numpy module, see NumPy for Matlab Users. There is a nonzero function which is equivalent to Matlab's find:
>>> import cv
>>> import numpy
>>> m = cv.CreateMat(2, 2, cv.CV_16SC1)
>>> a = numpy.asarray(m)
>>> a.nonzero()
(array([1, 1]), array([0, 1]))
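To get point-style (x, y) coordinates, as the question asks, you can pair the row and column arrays returned by nonzero (a sketch; nonzero reports indices in (row, col) order, so they are swapped to get x = col, y = row):

rows, cols = a.nonzero()
points = list(zip(cols, rows))  # (x, y) pairs, matching the cv::Point convention
print(points)  # [(0, 1), (1, 1)] for the matrix above

In C++, newer OpenCV versions also provide cv::findNonZero, which fills a vector of cv::Point with the locations of non-zero pixels directly.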