RuntimeError: You should not call `__bool__` / `__nonzero__` on `Formula` - jupyter-notebook

I was trying to write least-time control code using the Drake toolbox, but partway through I ran into an error message I cannot understand:
'''python
from pydrake.all import MathematicalProgram, Solve
import numpy as np

def g(x):
    if abs(x) < 1e-7:
        return 0.
    else:
        return 1.

mp = MathematicalProgram()
state_initial = np.asarray([1., 0])
position_goal = np.asarray([0, 0])
N = 100
dt = 0.01

u_over_time = mp.NewContinuousVariables(1, "u_0")
states_over_time = np.asarray([state_initial])
for k in range(1, N):
    u = mp.NewContinuousVariables(1, "u_%d" % k)
    state = mp.NewContinuousVariables(2, "state_%d" % k)
    u_over_time = np.vstack((u_over_time, u))
    states_over_time = np.vstack((states_over_time, state))
print "Number of decision vars", mp.num_vars()

for i in range(N-1):
    state_next0 = states_over_time[i,0] + dt*states_over_time[i,1]
    state_next1 = states_over_time[i,1] + dt*u_over_time[i]
    mp.AddLinearConstraint(states_over_time[i+1,0] >= state_next0)
    mp.AddLinearConstraint(states_over_time[i+1,1] >= state_next1)
    mp.AddLinearConstraint(states_over_time[i+1,0] <= state_next0)
    mp.AddLinearConstraint(states_over_time[i+1,1] <= state_next1)
    mp.AddLinearConstraint(u_over_time[i] <= 1.)
    mp.AddLinearConstraint(u_over_time[i] >= -1.)
'''
And the error info is:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-2-be1aa565be42> in <module>()
29 state_next1 = states_over_time[i,1]+ dt*u_over_time[i]
30 mp.AddLinearConstraint(states_over_time[i+1,0]>=state_next0)
---> 31 mp.AddLinearConstraint(states_over_time[i+1,1]>=state_next1)
32 mp.AddLinearConstraint(states_over_time[i+1,0]<=state_next0)
33 mp.AddLinearConstraint(states_over_time[i+1,1]<=state_next1)
RuntimeError: You should not call `__bool__` / `__nonzero__` on `Formula`. If you are trying to make a map with `Variable`, `Expression`, or `Polynomial` as keys (and then access the map in Python), please use pydrake.common.containers.EqualToDict`.
May I know what's happening here? Thanks
----------------update line-----------------
I modified the code as suggested. The code now becomes:
'''python
from pydrake.all import MathematicalProgram, Solve
import numpy as np

def g(x):
    if abs(x) < 1e-7:
        return 0.
    else:
        return 1.

mp = MathematicalProgram()
state_initial = np.asarray([1., 0])
position_goal = np.asarray([0, 0])
N = 100
dt = 0.01

u_over_time = mp.NewContinuousVariables(1, "u_0")
states_over_time = np.asarray([state_initial])
for k in range(1, N):
    u = mp.NewContinuousVariables(1, "u_%d" % k)
    state = mp.NewContinuousVariables(2, "state_%d" % k)
    u_over_time = np.vstack((u_over_time, u))
    states_over_time = np.vstack((states_over_time, state))
print "Number of decision vars", mp.num_vars()

for i in range(N-1):
    state_next0 = states_over_time[i,0] + dt*states_over_time[i,1]
    state_next1 = states_over_time[i,1] + dt*u_over_time[i,0]
    mp.AddLinearConstraint(states_over_time[i+1,0] >= state_next0[0])
    mp.AddLinearConstraint(states_over_time[i+1,1] >= state_next1[0])
    mp.AddLinearConstraint(states_over_time[i+1,0] <= state_next0[0])
    mp.AddLinearConstraint(states_over_time[i+1,1] <= state_next1[0])
    mp.AddLinearConstraint(u_over_time[i,0] <= 1.)
    mp.AddLinearConstraint(u_over_time[i,0] >= -1.)
'''
And the error info is:
TypeError Traceback (most recent call last)
<ipython-input-7-82e68c2ebfaa> in <module>()
27 state_next0 = states_over_time[i,0]+ dt*states_over_time[i,1]
28 state_next1 = states_over_time[i,1]+ dt*u_over_time[i,0]
---> 29 mp.AddLinearConstraint(states_over_time[i+1,0]>=state_next0[0])
30 mp.AddLinearConstraint(states_over_time[i+1,1]>=state_next1[0])
31 mp.AddLinearConstraint(states_over_time[i+1,0]<=state_next0[0])
TypeError: 'float' object has no attribute '__getitem__'
What's the problem this time? Thanks.
(By the way, one complaint: the error messages are often not very effective at hinting where the problem actually is...)
-----------------update 2nd time line--------------------
Now a similar problem happens with g(x). The code:
'''
from pydrake.all import MathematicalProgram, Solve
import numpy as np

def g(x):
    print 'x=', x
    print 'x[0]=', x[0]
    if x[0]*x[0] + x[1]*x[1] < 1e-7: # x.dot(x)
        return 0.
    else:
        return 1.

mp = MathematicalProgram()
state_initial = np.asarray([1., 0])
#position_goal = np.asarray([0, 0]) # already in g(x)
N = 100
dt = 0.01

u_over_time = mp.NewContinuousVariables(1, "u_0")
states_over_time = np.asarray([state_initial])
for k in range(1, N):
    u = mp.NewContinuousVariables(1, "u_%d" % k)
    state = mp.NewContinuousVariables(2, "state_%d" % k)
    u_over_time = np.vstack((u_over_time, u))
    states_over_time = np.vstack((states_over_time, state))
print "Number of decision vars", mp.num_vars()

for i in range(N-1):
    state_next0 = states_over_time[i,0] + dt*states_over_time[i,1]
    state_next1 = states_over_time[i,1] + dt*u_over_time[i,0]
    mp.AddLinearConstraint(states_over_time[i+1,0] >= state_next0)
    mp.AddLinearConstraint(states_over_time[i+1,1] >= state_next1)
    mp.AddLinearConstraint(states_over_time[i+1,0] <= state_next0)
    mp.AddLinearConstraint(states_over_time[i+1,1] <= state_next1)
    mp.AddLinearConstraint(u_over_time[i,0] <= 1.)
    mp.AddLinearConstraint(u_over_time[i,0] >= -1.)

reward = np.zeros((N,1))
for i in range(N):
    reward[i] = g(states_over_time[i,:])

mp.AddQuadraticCost(reward.dot(reward))
result = Solve(mp)
'''
This time neither x nor x[0] solves the problem. The output is:
Number of decision vars 298
x= [1.0 0.0]
x[0]= 1.0
x= [Variable('state_1(0)', Continuous) Variable('state_1(1)', Continuous)]
x[0]= state_1(0)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-8-08d1cd75397e> in <module>()
37 reward=np.zeros((N,1))
38 for i in range(N):
---> 39 reward[i]=g(states_over_time[i,:])
40
41 mp.AddQuadraticCost(reward.dot(reward))
<ipython-input-8-08d1cd75397e> in g(x)
5 print 'x=',x
6 print 'x[0]=',x[0]
----> 7 if x[0]*x[0]+x[1]*x[1]<1e-7: # x.dot(x)
8 return 0.
9 else:
RuntimeError: You should not call `__bool__` / `__nonzero__` on `Formula`. If you are trying to make a map with `Variable`, `Expression`, or `Polynomial` as keys (and then access the map in Python), please use pydrake.common.containers.EqualToDict`.
What can I do this time? Thanks
By the way, in the code I print x and x[0] only once, but I get two different outputs. Why is this?

state_next1 is not a symbolic expression; it is a numpy array of symbolic expressions, so you need to do state_next1[0]. Similarly you will need to change u_over_time[i] <= 1 to u_over_time[i, 0] <= 1.
The other way to solve the problem is to compute state_next1 using u_over_time[i, 0] instead of u_over_time[i]. After this modification, the for loop in your code should be:
for i in range(N-1):
    state_next0 = states_over_time[i,0] + dt*states_over_time[i,1]
    state_next1 = states_over_time[i,1] + dt*u_over_time[i, 0]
    mp.AddLinearConstraint(states_over_time[i+1,0] >= state_next0)
    mp.AddLinearConstraint(states_over_time[i+1,1] >= state_next1)
    mp.AddLinearConstraint(states_over_time[i+1,0] <= state_next0)
    mp.AddLinearConstraint(states_over_time[i+1,1] <= state_next1)
    mp.AddLinearConstraint(u_over_time[i, 0] <= 1.)
    mp.AddLinearConstraint(u_over_time[i, 0] >= -1.)
I changed u_over_time[i] to u_over_time[i, 0] where you define state_next1.

The error thrown in the lines
if x[0]*x[0]+x[1]*x[1]<1e-7: # x.dot(x)
    return 0.
is because you called AddQuadraticCost, but your cost is not quadratic. Drake tries to parse the symbolic expression as a quadratic expression and fails. Specifically, Drake fails when you check whether x[0] * x[0] + x[1] * x[1] < 1e-7: no quadratic cost can contain this kind of "if" statement.
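For contrast, here is a minimal illustration (my own addition, not part of the original program) of an expression that AddQuadraticCost can parse:
'''python
from pydrake.all import MathematicalProgram

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
# A genuinely quadratic symbolic expression parses fine:
prog.AddQuadraticCost(x.dot(x))
# Branching on a symbolic comparison, as g(x) does, cannot be parsed as a quadratic cost.
'''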
What is the mathematical formulation of your cost? Do you really want to impose the cost defined in your g(x) function, i.e. g(x) = 0 if x'*x < 1e-7 and g(x) = 1 otherwise? This is a pretty bad cost (it is almost constant everywhere, but has a discrete jump from 1 to 0 near the origin).
Since you want to solve a least-time optimal control problem, I would suggest changing your formulation and making dt a decision variable in the problem. Namely, you will have the dynamic constraint
x[n+1] = x[n] + f(x[n], u[n]) * dt[n]
the final state constraint
x[N] = x_desired
the initial state constraint
x[0] = x_initial
and the cost function is to minimize the total time
min sum_i dt[i]
Then you will have a smooth cost and smooth constraints.
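A rough sketch of that formulation in pydrake might look like the following. This is my own illustration, not code from the question or answer; the horizon length, the per-step bound, and the variable names are assumptions, and the bilinear dynamics require a nonlinear solver:
'''python
from pydrake.all import MathematicalProgram, Solve
import numpy as np

N = 50
mp = MathematicalProgram()
x = mp.NewContinuousVariables(N + 1, 2, "x")   # states (position, velocity)
u = mp.NewContinuousVariables(N, 1, "u")       # control inputs
h = mp.NewContinuousVariables(N, 1, "h")       # per-step durations dt[n], now decision variables

for n in range(N):
    # dynamic constraint x[n+1] = x[n] + f(x[n], u[n]) * dt[n]; bilinear, so use AddConstraint
    mp.AddConstraint(x[n + 1, 0] == x[n, 0] + x[n, 1] * h[n, 0])
    mp.AddConstraint(x[n + 1, 1] == x[n, 1] + u[n, 0] * h[n, 0])
    mp.AddLinearConstraint(u[n, 0] <= 1.)
    mp.AddLinearConstraint(u[n, 0] >= -1.)
    mp.AddLinearConstraint(h[n, 0] >= 0.)
    mp.AddLinearConstraint(h[n, 0] <= 0.1)    # assumed upper bound on each time step

# initial and final state constraints
mp.AddLinearConstraint(x[0, 0] == 1.)
mp.AddLinearConstraint(x[0, 1] == 0.)
mp.AddLinearConstraint(x[N, 0] == 0.)
mp.AddLinearConstraint(x[N, 1] == 0.)

# cost: minimize the total time sum_i dt[i]
mp.AddCost(np.sum(h))
result = Solve(mp)
'''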

Here is a piece of code that doesn't throw a syntax error:
from pydrake.all import MathematicalProgram, Solve
import numpy as np

def g(x):
    x_squared_norm = np.power(x.reshape((2, -1)), 2)
    return np.sum(x_squared_norm > 1e-7)

mp = MathematicalProgram()
state_initial = np.asarray([1., 0])
#position_goal = np.asarray([0, 0]) # already in g(x)
N = 100
dt = 0.01

u_over_time = mp.NewContinuousVariables(1, "u_0")
states_over_time = np.asarray([state_initial])
for k in range(1, N):
    u = mp.NewContinuousVariables(1, "u_%d" % k)
    state = mp.NewContinuousVariables(2, "state_%d" % k)
    u_over_time = np.vstack((u_over_time, u))
    states_over_time = np.vstack((states_over_time, state))
print "Number of decision vars", mp.num_vars()

for i in range(N-1):
    state_next0 = states_over_time[i,0] + dt*states_over_time[i,1]
    state_next1 = states_over_time[i,1] + dt*u_over_time[i,0]
    mp.AddLinearConstraint(states_over_time[i+1,0] >= state_next0)
    mp.AddLinearConstraint(states_over_time[i+1,1] >= state_next1)
    mp.AddLinearConstraint(states_over_time[i+1,0] <= state_next0)
    mp.AddLinearConstraint(states_over_time[i+1,1] <= state_next1)
    mp.AddLinearConstraint(u_over_time[i,0] <= 1.)
    mp.AddLinearConstraint(u_over_time[i,0] >= -1.)

mp.AddCost(g, vars=states_over_time[1:,:].reshape((1, -1)).squeeze())
result = Solve(mp)
Notice that I changed the definition of g and called mp.AddCost instead of mp.AddQuadraticCost. mp.AddQuadraticCost expects a quadratic symbolic expression, and the expression in your code is not quadratic (it has an "if" statement in the cost, and a quadratic cost doesn't allow "if" statements).
This code should run without error, but I don't know if it can find a solution. Again, this cost is not differentiable, so any gradient-based nonlinear solver will have trouble.
If you really don't want to solve the problem as a nonlinear optimization problem, you can consider reformulating it as a mixed-integer program. Namely, your cost is the sum of a set of binary variables b[i], where b[i] = 1 if |x[i, 0]| > epsilon or |x[i, 1]| > epsilon and b[i] = 0 otherwise; this can be formulated with mixed-integer linear constraints.
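A rough big-M sketch of that idea, reusing mp, N, and states_over_time from the code above (this is my own construction, not from the answer; epsilon, the big-M constant, and the availability of a mixed-integer solver such as Gurobi or Mosek are assumptions):
'''python
eps = 1e-4   # tolerance below which a state counts as "at the origin" (assumed)
M = 10.      # big-M constant, assumed to upper-bound |x| over the horizon
b = mp.NewBinaryVariables(N, "b")
for i in range(N):
    # b[i] = 0 is only feasible when |x[i,0]| <= eps and |x[i,1]| <= eps
    mp.AddLinearConstraint(states_over_time[i, 0] <= eps + M * b[i])
    mp.AddLinearConstraint(-states_over_time[i, 0] <= eps + M * b[i])
    mp.AddLinearConstraint(states_over_time[i, 1] <= eps + M * b[i])
    mp.AddLinearConstraint(-states_over_time[i, 1] <= eps + M * b[i])
mp.AddLinearCost(np.sum(b))   # minimize the number of steps spent away from the origin
'''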

I wrote an answer using the bisection method, which was also recommended by Tedrake in class, but I don't like this method: too many iterations. I'll just put it here; when I have a mixed-integer version, I will come back.
'''
from pydrake.all import MathematicalProgram, Solve
import numpy as np
import matplotlib.pyplot as plt

'''
def g(x):
    print 'x=', x
    print 'x[0]=', x[0]
    if x[0]*x[0] + x[1]*x[1] < 1e-7: # x.dot(x)
        return 0.
    else:
        return 1.
'''

#mp = MathematicalProgram()
state_initial = np.asarray([1., 0])
#position_goal = np.asarray([0, 0]) # already in g(x)
#N=201
dt = 0.01
upper = 1000; lower = 1
N = upper
while upper - lower > 1:
    print '---------------------'
    print 'N=', N
    mp = MathematicalProgram()
    u_over_time = mp.NewContinuousVariables(1, "u_0")
    states_over_time = mp.NewContinuousVariables(2, "state intial")
    mp.AddLinearConstraint(states_over_time[0] == state_initial[0])
    mp.AddLinearConstraint(states_over_time[1] == state_initial[1])
    #states_over_time = np.asarray([state_initial])
    for k in range(1, N):
        u = mp.NewContinuousVariables(1, "u_%d" % k)
        state = mp.NewContinuousVariables(2, "state_%d" % k)
        u_over_time = np.vstack((u_over_time, u))
        states_over_time = np.vstack((states_over_time, state))
    print "Number of decision vars", mp.num_vars()
    for i in range(N-1):
        state_next0 = states_over_time[i,0] + dt*states_over_time[i,1]
        state_next1 = states_over_time[i,1] + dt*u_over_time[i,0]
        mp.AddLinearConstraint(states_over_time[i+1,0] >= state_next0)
        mp.AddLinearConstraint(states_over_time[i+1,1] >= state_next1)
        mp.AddLinearConstraint(states_over_time[i+1,0] <= state_next0)
        mp.AddLinearConstraint(states_over_time[i+1,1] <= state_next1)
        mp.AddLinearConstraint(u_over_time[i,0] <= 1.)
        mp.AddLinearConstraint(u_over_time[i,0] >= -1.)
    '''
    reward = np.zeros((N,1))
    for i in range(N):
        reward[i] = g(states_over_time[i,:])
    '''
    # final state must be within 1e-7 of the origin
    mp.AddLinearConstraint(states_over_time[-1,0] <= 1e-7)
    mp.AddLinearConstraint(states_over_time[-1,1] <= 1e-7)
    mp.AddLinearConstraint(states_over_time[-1,0] >= -1e-7)
    mp.AddLinearConstraint(states_over_time[-1,1] >= -1e-7)
    #mp.AddQuadraticCost(reward.dot(reward))
    result = Solve(mp)
    print result.is_success()
    if result.is_success():
        upper = N
    else:
        lower = N
    N = lower + int((upper-lower)/2.0)
N = upper
#print result.is_success()
print 'least time=', dt*N
u_over_time = result.GetSolution(u_over_time)
states_over_time = result.GetSolution(states_over_time)
#print 'u=',u_over_time
#print 'last state=',states_over_time[-1,:]

fig, ax = plt.subplots(2, 1)
plt.subplot(2, 1, 1); plt.plot(np.arange(dt, dt*N, dt), u_over_time)
plt.legend(["u against t"])
plt.subplot(2, 1, 2); plt.plot(states_over_time[:,0], states_over_time[:,1])
plt.legend(["phase portrait"])
'''

Related

ST-HOSVD in Julia

I am trying to implement the ST-HOSVD algorithm in Julia because I could not find a library that contains it.
See Algorithm 1 on page 7 of this paper:
https://people.cs.kuleuven.be/~nick.vannieuwenhoven/papers/01-STHOSVD.pdf
I cannot reproduce the (4,4,4,4) input tensor from the approximated tensor whose Tucker rank is (2,2,2,2).
I think I have a mistake in the indices of the matrix or tensor elements, but I could not locate it.
How can I fix it?
If you know of a library for ST-HOSVD, please let me know.
ST-HOSVD is a really common way to reduce information. I hope the question helps many Julia users.
using TensorToolbox
using LinearAlgebra  # for svd and diagm

function STHOSVD(A, reqrank)
    N = ndims(A)
    S = copy(A)
    Sk = undef
    Uk = []
    for k = 1:N
        if k == 1
            Sk = tenmat(S, k)
        end
        Sk_svd = svd(Sk)
        U1 = Sk_svd.U[:, 1:reqrank[k]]
        V1t = Sk_svd.V[1:reqrank[k], :]
        Sigma1 = diagm(Sk_svd.S[1:reqrank[k]])
        Sk = Sigma1 * V1t
        push!(Uk, U1)
    end
    X = ttm(Sk, Uk[1], 1)
    for k = 2:N
        X = ttm(X, Uk[k], k)
    end
    return X
end

A = rand(4,4,4,4)
X = STHOSVD(A, [2,2,2,2])
EDIT
Here, Sk = tenmat(S, k) is the mode-k matricization of the tensor S: for S ∈ R^{I_1×I_2×…×I_N}, S_k ∈ R^{I_k×(∏_{m≠k} I_m)}.
The function is contained in TensorToolbox.jl; see "Basis" in the README.
The definition of mode-k matricization can be seen on page 460 of the paper.
It works. I referred to page 26 of this slide deck.
using TensorToolbox
using LinearAlgebra
using Arpack

function STHOSVD(T, reqrank)
    N = ndims(T)
    tensor_shape = size(T)
    for i = 1:N
        T_i = tenmat(T, i)
        if reqrank[i] == tensor_shape[i]
            USV = svd(T_i)
        else
            USV = svds(T_i; nsv=reqrank[i])[1]
        end
        T = ttm(T, USV.U * USV.U', i)
    end
    return T
end

Custom Evaluation Function based on F1 for use in xgboost - Python API

I have written the following custom evaluation function to use with xgboost, in order to optimize F1. Unfortunately, it raises an exception when run with xgboost.
The evaluation function is the following:
import numpy as np

def F1_eval(preds, labels):
    t = np.arange(0, 1, 0.005)
    f = np.repeat(0, 200)
    Results = np.vstack([t, f]).T
    P = sum(labels == 1)
    for i in range(200):
        m = (preds >= Results[i, 0])
        TP = sum(labels[m] == 1)
        FP = sum(labels[m] == 0)
        if (FP + TP) > 0:
            Precision = TP/(FP + TP)
            Recall = TP/P
            if (Precision + Recall > 0):
                F1 = 2 * Precision * Recall / (Precision + Recall)
            else:
                F1 = 0
            Results[i, 1] = F1
    return(max(Results[:, 1]))
Below I provide a reproducible example along with the error message:
from sklearn import datasets
from sklearn.model_selection import train_test_split
import xgboost as xgb

Wine = datasets.load_wine()
X_wine = Wine.data
y_wine = Wine.target
y_wine[y_wine == 2] = 1

X_wine_train, X_wine_test, y_wine_train, y_wine_test = train_test_split(X_wine, y_wine, test_size=0.2)

clf_wine = xgb.XGBClassifier(max_depth=6, learning_rate=0.1, silent=False, objective='binary:logistic', \
                             booster='gbtree', n_jobs=8, nthread=None, gamma=0, min_child_weight=1, max_delta_step=0, \
                             subsample=0.8, colsample_bytree=0.8, colsample_bylevel=1, reg_alpha=0, reg_lambda=1)

clf_wine.fit(X_wine_train, y_wine_train, \
             eval_set=[(X_wine_train, y_wine_train), (X_wine_test, y_wine_test)], eval_metric=F1_eval, early_stopping_rounds=10, verbose=True)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-453-452852658dd8> in <module>()
12 clf_wine = xgb.XGBClassifier(max_depth=6, learning_rate=0.1,silent=False, objective='binary:logistic', booster='gbtree', n_jobs=8, nthread=None, gamma=0, min_child_weight=1, max_delta_step=0, subsample=0.8, colsample_bytree=0.8, colsample_bylevel=1, reg_alpha=0, reg_lambda=1)
13
---> 14 clf_wine.fit(X_wine_train, y_wine_train,eval_set=[(X_wine_train, y_wine_train), (X_wine_test, y_wine_test)], eval_metric=F1_eval, early_stopping_rounds=10, verbose=True)
15
C:\ProgramData\Anaconda3\lib\site-packages\xgboost\sklearn.py in fit(self, X, y, sample_weight, eval_set, eval_metric, early_stopping_rounds, verbose, xgb_model, sample_weight_eval_set)
519 early_stopping_rounds=early_stopping_rounds,
520 evals_result=evals_result, obj=obj, feval=feval,
--> 521 verbose_eval=verbose, xgb_model=None)
522
523 self.objective = xgb_options["objective"]
C:\ProgramData\Anaconda3\lib\site-packages\xgboost\training.py in train(params, dtrain, num_boost_round, evals, obj, feval, maximize, early_stopping_rounds, evals_result, verbose_eval, xgb_model, callbacks, learning_rates)
202 evals=evals,
203 obj=obj, feval=feval,
--> 204 xgb_model=xgb_model, callbacks=callbacks)
205
206
C:\ProgramData\Anaconda3\lib\site-packages\xgboost\training.py in _train_internal(params, dtrain, num_boost_round, evals, obj, feval, xgb_model, callbacks)
82 # check evaluation result.
83 if len(evals) != 0:
---> 84 bst_eval_set = bst.eval_set(evals, i, feval)
85 if isinstance(bst_eval_set, STRING_TYPES):
86 msg = bst_eval_set
C:\ProgramData\Anaconda3\lib\site-packages\xgboost\core.py in eval_set(self, evals, iteration, feval)
957 if feval is not None:
958 for dmat, evname in evals:
--> 959 feval_ret = feval(self.predict(dmat), dmat)
960 if isinstance(feval_ret, list):
961 for name, val in feval_ret:
<ipython-input-383-dfb8d5181b18> in F1_eval(preds, labels)
11
12
---> 13 P = sum(labels == 1)
14
15
TypeError: 'bool' object is not iterable
I do not understand why the function is not working. I have followed the examples here: https://github.com/dmlc/xgboost/blob/master/demo/guide-python/custom_objective.py
I would like to understand where I err.
When doing sum(labels == 1), Python evaluates labels == 1 as a single Boolean object, so you get TypeError: 'bool' object is not iterable.
The built-in sum expects an iterable object, like a list. Here's an example of your error:
In[32]: sum(True)
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2963, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-32-6eb8f80b7f2e>", line 1, in <module>
sum(True)
TypeError: 'bool' object is not iterable
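As a small aside (my own illustration, not part of the original answer): once the labels are pulled out of the DMatrix as a numpy array, the elementwise comparison sums as intended:
'''python
import numpy as np

def count_positives(preds, dtrain):
    # signature matches xgboost's feval convention (predictions, DMatrix);
    # get_label() returns a numpy array, so labels == 1 is a boolean array
    # and np.sum counts the positive examples instead of raising a TypeError.
    labels = dtrain.get_label()
    return int(np.sum(labels == 1))
'''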
If you want to use scikit-learn's f1_score, you can implement the following wrapper:
from sklearn.metrics import f1_score
import numpy as np

def f1_eval(y_pred, dtrain):
    y_true = dtrain.get_label()
    err = 1 - f1_score(y_true, np.round(y_pred))
    return 'f1_err', err
The parameters of the wrapper are a list (of predictions) and a DMatrix, and it returns a (string, float) pair:
# Setting your classifier
clf_wine = xgb.XGBClassifier(max_depth=6, learning_rate=0.1, silent=False, objective='binary:logistic', \
                             booster='gbtree', n_jobs=8, nthread=None, gamma=0, min_child_weight=1, max_delta_step=0, \
                             subsample=0.8, colsample_bytree=0.8, colsample_bylevel=1, reg_alpha=0, reg_lambda=1)

# When you fit, add eval_metric=f1_eval
# Please don't forget to insert all the .fit arguments required
clf_wine.fit(eval_metric=f1_eval)
Here you can see an example of how to implement a custom objective function and a custom evaluation metric. The example contains the following code:
# user defined evaluation function, return a pair metric_name, result
# NOTE: when you do customized loss function, the default prediction value is margin
# this may make builtin evaluation metric not function properly
# for example, we are doing logistic loss, the prediction is score before logistic transformation
# the builtin evaluation error assumes input is after logistic transformation
# Take this in mind when you use the customization, and maybe you need write customized evaluation function
def evalerror(preds, dtrain):
    labels = dtrain.get_label()
    # return a pair metric_name, result
    # since preds are margin(before logistic transformation, cutoff at 0)
    return 'error', float(sum(labels != (preds > 0.0))) / len(labels)
This specifies that an evaluation function takes (predictions, dtrain) as arguments, where dtrain is of type DMatrix, and returns a (string, float) pair: the name of the metric and the error.
Adding a working Python code example:
import numpy as np

def _F1_eval(preds, labels):
    t = np.arange(0, 1, 0.005)
    f = np.repeat(0, 200)
    results = np.vstack([t, f]).T

    # assuming labels only containing 0's and 1's
    n_pos_examples = sum(labels)
    if n_pos_examples == 0:
        raise ValueError("labels not containing positive examples")

    for i in range(200):
        pred_indexes = (preds >= results[i, 0])
        TP = sum(labels[pred_indexes])
        FP = len(labels[pred_indexes]) - TP
        precision = 0
        recall = TP / n_pos_examples

        if (FP + TP) > 0:
            precision = TP / (FP + TP)

        if (precision + recall > 0):
            F1 = 2 * precision * recall / (precision + recall)
        else:
            F1 = 0
        results[i, 1] = F1

    return (max(results[:, 1]))

if __name__ == '__main__':
    labels = np.random.binomial(1, 0.75, 100)
    preds = np.random.random_sample(100)
    print(_F1_eval(preds, labels))
And if you want _F1_eval to work specifically with xgboost's evaluation methods, add this:
def F1_eval(preds, dtrain):
    res = _F1_eval(preds, dtrain.get_label())
    return 'f1_err', 1 - res

Tensorflow: 6 layer CNN: OOM (use 10Gb GPU memory)

I am using the following code to run a 6-layer CNN with 2 FC layers on top (on a Tesla K-80 GPU).
Somehow it consumes the entire 10 GB of GPU memory and dies out of memory. I know that I can reduce the batch_size and then run it, but I also want to run with 15 or 20 CNN layers. What is wrong with the following code, and why does it take all the memory? How should I run the code for a 15-layer CNN?
Code:
import os
import tensorflow as tf
import model

with tf.Graph().as_default() as g_train:
    filenames = tf.train.match_filenames_once(FLAGS.train_dir + '*.tfrecords')
    filename_queue = tf.train.string_input_producer(filenames, shuffle=True, num_epochs=FLAGS.num_epochs)
    feats, labels = get_batch_input(filename_queue, batch_size=FLAGS.batch_size)
    ### feats size=(batch_size, 100, 50)
    logits = model.inference(feats, FLAGS.batch_size)
    loss = model.loss(logits, labels, feats)
    tvars = tf.trainable_variables()
    global_step = tf.Variable(0, name='global_step', trainable=False)

    # Add to the Graph operations that train the model.
    train_op = model.training(loss, tvars, global_step, FLAGS.learning_rate, FLAGS.clip_gradients)

    # Add the Op to compare the logits to the labels during evaluation.
    eval_correct = model.evaluation(logits, labels, feats)
    summary_op = tf.merge_all_summaries()
    saver = tf.train.Saver(tf.all_variables(), max_to_keep=15)

    # The op for initializing the variables.
    init_op = tf.initialize_all_variables()
    sess = tf.Session()
    sess.run(init_op)
    summary_writer = tf.train.SummaryWriter(FLAGS.model_dir, graph=sess.graph)

    # Start input enqueue threads.
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        step = 0
        while not coord.should_stop():
            _, loss_value = sess.run([train_op, loss])
            if step % 100 == 0:
                print('Step %d: loss = %.2f' % (step, loss_value))
                # Update the events file.
                summary_str = sess.run(summary_op)
                summary_writer.add_summary(summary_str, step)
            if (step == 0) or (step + 1) % 1000 == 0 or (step + 1) == FLAGS.max_steps:
                ckpt_model = os.path.join(FLAGS.model_dir, 'model.ckpt')
                saver.save(sess, ckpt_model, global_step=step)
                #saver.save(sess, FLAGS.model_dir, global_step=step)
            step += 1
    except tf.errors.OutOfRangeError:
        print('Done training for %d epochs, %d steps.' % (FLAGS.num_epochs, step))
    finally:
        coord.join(threads)
    sess.close()
###################### File model.py ####################

def conv2d(x, W, b, strides=1):
    # Conv2D wrapper, with bias and relu activation
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

def maxpool2d(x, k=2, s=2):
    # MaxPool2D wrapper
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, s, s, 1], padding='SAME')

def inference(feats, batch_size):
    #feats size (batch_size,100,50,1) #batch_size=256
    conv1_w = tf.get_variable("conv1_w", [filter_size, filter_size, 1, 256],
                              initializer=tf.uniform_unit_scaling_initializer())
    conv1_b = tf.get_variable("conv1_b", [256])
    conv1 = conv2d(feats, conv1_w, conv1_b, 2)
    conv1 = maxpool2d(conv1, k=2, s=2)
    ### This was replicated for 6 layers and the 2 FC connected layers are added
    return logits

def training(loss, train_vars, global_step, learning_rate, clip_gradients):
    # Add a scalar summary for the snapshot loss.
    tf.scalar_summary(loss.op.name, loss)
    grads, _ = tf.clip_by_global_norm(tf.gradients(loss, train_vars, aggregation_method=1), clip_gradients)
    optimizer = tf.train.AdamOptimizer(learning_rate)
    train_op = optimizer.apply_gradients(zip(grads, train_vars), global_step=global_step)
    return train_op
I am not too sure what the model Python library is. If it is something you wrote and you can change the settings in the optimizer, I would suggest the following, which I use in my own code:
train_step = tf.train.AdamOptimizer(learning_rate).minimize(cost, aggregation_method = tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N)
By default the aggregation_method is ADD_N, but if you change it to EXPERIMENTAL_ACCUMULATE_N or EXPERIMENTAL_TREE it can greatly reduce memory use. The main memory hog in these programs is that TensorFlow must save the output values at every neuron so that it can compute the gradients. Changing the aggregation_method helps a lot in my experience.
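If the optimizer call itself cannot be changed, the same idea can be applied where the gradients are computed. Here is a rough sketch of the asker's training() function with the aggregation method passed to tf.gradients; this is my own adaptation and is untested:
'''python
import tensorflow as tf

def training(loss, train_vars, global_step, learning_rate, clip_gradients):
    # compute gradients with a memory-friendlier aggregation method
    grads = tf.gradients(loss, train_vars,
                         aggregation_method=tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N)
    grads, _ = tf.clip_by_global_norm(grads, clip_gradients)
    optimizer = tf.train.AdamOptimizer(learning_rate)
    return optimizer.apply_gradients(zip(grads, train_vars), global_step=global_step)
'''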
Also, by the way, I don't think there is anything wrong with your code; I can run out of memory on small conv-nets as well.

Minimising log function in cvxpy

I am trying to simulate an exact line search experiment using CVXPY.
s = cvx.Variable()
objective = cvx.Minimize(func(x + s*grad(x)))
constraints = [s >= 0]
prob = cvx.Problem(objective, constraints)
obj = cvx.Minimize(prob)
(Boyd's cvxbook, p. 472)
The expression above is my objective function.
def func(x):
    np.random.seed(1235813)
    A = np.asmatrix(np.random.randint(-1, 1, size=(n, m)))
    b = np.asmatrix(np.random.randint(50, 100, size=(m, 1)))
    c = np.asmatrix(np.random.randint(1, 50, size=(n, 1)))
    fx = c.transpose()*x - sum(np.log((b - A.transpose()*x)))
    return fx
Gradient Function
def grad(x):
    np.random.seed(1235813)
    A = np.asmatrix(np.random.randint(-1, 1, size=(n, m)))
    b = np.asmatrix(np.random.randint(50, 100, size=(m, 1)))
    c = np.asmatrix(np.random.randint(1, 50, size=(n, 1)))
    gradient = A * (1.0/(b - A.transpose()*x)) + c
    return gradient
Using this to find the step size t by minimising the objective function results in the error 'AddExpression' object has no attribute 'log'.
I am new to CVXPY and optimization. I would be grateful if someone could advise me on how to fix the error.
Thanks
You need to use CVXPY functions, not NumPy functions. Something like this should work:
def func(x):
    np.random.seed(1235813)
    A = np.asmatrix(np.random.randint(-1, 1, size=(n, m)))
    b = np.asmatrix(np.random.randint(50, 100, size=(m, 1)))
    c = np.asmatrix(np.random.randint(1, 50, size=(n, 1)))
    fx = c.transpose()*x - cvxpy.sum_entries(cvxpy.log((b - A.transpose()*x)))
    return fx
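For completeness, here is a sketch of how the line-search subproblem might then be assembled and solved. This is my own illustration, assuming n, m, and a feasible current iterate x are defined and that func and grad are the versions above; it uses the pre-1.0 CVXPY API implied by sum_entries:
'''python
import cvxpy

# assuming n, m and a feasible iterate x (an n-by-1 matrix) are defined,
# and func/grad are the functions shown above
s = cvxpy.Variable()
objective = cvxpy.Minimize(func(x + s * grad(x)))
prob = cvxpy.Problem(objective, [s >= 0])
prob.solve()
print(prob.status, s.value)
'''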

function for computing bicoherence

Dear all,
I'm looking for a numpy/scipy function to compute the bicoherence and auto-bicoherence for the study of 3-wave interactions.
Thank you for any help,
nicola
The best package for this in python land is http://pypi.python.org/pypi/nitime
It has several coherence estimators, but I didn't look very carefully at those. It is a package for neuroimaging, but the algorithms only use numpy and scipy, intentionally, so it can be used by other applications.
Perhaps this Matlab toolbox will help; it's quite easy to translate Matlab into Python, generally.
Here is a function that relies on scipy.signal.spectrogram (scipy version > 0.17) and computes the bicoherence between two signals.
The definition is from Hagihira 2001 and Hayashi 2007; see the Wikipedia article on bicoherence.
Hope this helps.
Regards,
def compute_bicoherence(s1, s2, rate, nperseg=1024, noverlap=512):
    """ Compute the bicoherence between two signals of the same lengths s1 and s2
    using the function scipy.signal.spectrogram
    """
    from scipy import signal
    import numpy

    # compute the stft
    f1, t1, spec_s1 = signal.spectrogram(s1, fs=rate, nperseg=nperseg, noverlap=noverlap, mode='complex')
    f2, t2, spec_s2 = signal.spectrogram(s2, fs=rate, nperseg=nperseg, noverlap=noverlap, mode='complex')

    # transpose (f, t) -> (t, f)
    spec_s1 = numpy.transpose(spec_s1, [1, 0])
    spec_s2 = numpy.transpose(spec_s2, [1, 0])

    # compute the bicoherence
    arg = numpy.arange(f1.size / 2)
    sumarg = arg[:, None] + arg[None, :]
    num = numpy.abs(
        numpy.mean(spec_s1[:, arg, None] * spec_s1[:, None, arg] * numpy.conjugate(spec_s2[:, sumarg]),
                   axis=0)
    ) ** 2
    denum = numpy.mean(
        numpy.abs(spec_s1[:, arg, None] * spec_s1[:, None, arg]) ** 2, axis=0) * numpy.mean(
        numpy.abs(numpy.conjugate(spec_s2[:, sumarg])) ** 2,
        axis=0)
    bicoh = num / denum
    return f1[arg], bicoh
# example of use and display
freqs, bicoh = compute_bicoherence(s1, s2, rate)

f = plt.figure(figsize=(9, 9))
plt.pcolormesh(freqs, freqs, bicoh,
               # cmap='inferno'
               )
plt.colorbar()
plt.clim(0, 0.5)
plt.show()
If you refer to normalized cross spectral density (as defined in wikipedia) then matplotlib.mlab.cohere would do the trick.
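For example (a sketch of my own; s1, s2 and the sampling rate are assumed to be defined):
'''python
import matplotlib.pyplot as plt
from matplotlib import mlab

# magnitude-squared coherence between s1 and s2, sampled at `rate` Hz
Cxy, freqs = mlab.cohere(s1, s2, NFFT=1024, Fs=rate, noverlap=512)
plt.plot(freqs, Cxy)
plt.xlabel("frequency [Hz]")
plt.ylabel("coherence")
plt.show()
'''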
