I am trying to simulate an exact line search experiment using CVXPY.
s = cvx.Variable()
objective = cvx.Minimize(func(x + s*grad(x)))
constraints = [s >= 0]
prob = cvx.Problem(objective, constraints)
(Boyd's cvxbook, p. 472)
The above is my objective function.
def func(x):
    np.random.seed(1235813)
    A = np.asmatrix(np.random.randint(-1, 1, size=(n, m)))
    b = np.asmatrix(np.random.randint(50, 100, size=(m, 1)))
    c = np.asmatrix(np.random.randint(1, 50, size=(n, 1)))
    fx = c.transpose()*x - sum(np.log(b - A.transpose()*x))
    return fx
The gradient function:
def grad(x):
    np.random.seed(1235813)
    A = np.asmatrix(np.random.randint(-1, 1, size=(n, m)))
    b = np.asmatrix(np.random.randint(50, 100, size=(m, 1)))
    c = np.asmatrix(np.random.randint(1, 50, size=(n, 1)))
    gradient = A * (1.0/(b - A.transpose()*x)) + c
    return gradient
Using this to find the step size s by minimising the objective function results in the error: 'AddExpression' object has no attribute 'log'.
I am new to CVXPY and optimization. I would be grateful if someone could guide me on how to fix the error.
Thanks
You need to use CVXPY functions, not NumPy functions. Something like this should work:
def func(x):
    np.random.seed(1235813)
    A = np.asmatrix(np.random.randint(-1, 1, size=(n, m)))
    b = np.asmatrix(np.random.randint(50, 100, size=(m, 1)))
    c = np.asmatrix(np.random.randint(1, 50, size=(n, 1)))
    fx = c.transpose()*x - cvxpy.sum_entries(cvxpy.log(b - A.transpose()*x))
    return fx
(Note: in CVXPY 1.0 and later, sum_entries was renamed to cvxpy.sum.)
I am trying to implement the ST-HOSVD algorithm in Julia, because I could not find a library that contains ST-HOSVD.
See Algorithm 1 on page 7 of this paper:
https://people.cs.kuleuven.be/~nick.vannieuwenhoven/papers/01-STHOSVD.pdf
I cannot reproduce the (4,4,4,4) input tensor from the approximated tensor of Tucker rank (2,2,2,2).
I think I have a mistake in the indices of the matrix or tensor elements, but I could not locate it.
How can I fix it?
If you know of a library for ST-HOSVD, let me know.
ST-HOSVD is a really common way to reduce information. I hope this question helps many Julia users.
using TensorToolbox
using LinearAlgebra
function STHOSVD(A, reqrank)
    N = ndims(A)
    S = copy(A)
    Sk = undef
    Uk = []
    for k = 1:N
        if k == 1
            Sk = tenmat(S, k)
        end
        Sk_svd = svd(Sk)
        U1 = Sk_svd.U[:, 1:reqrank[k]]
        V1t = Sk_svd.V[1:reqrank[k], :]
        Sigma1 = diagm(Sk_svd.S[1:reqrank[k]])
        Sk = Sigma1 * V1t
        push!(Uk, U1)
    end
    X = ttm(Sk, Uk[1], 1)
    for k = 2:N
        X = ttm(X, Uk[k], k)
    end
    return X
end
A = rand(4,4,4,4)
X = STHOSVD(A, [2,2,2,2])
EDIT
Here, Sk = tenmat(S, k) is the mode-k matricization of the tensor S:
S ∈ R^{I_1×I_2×…×I_N}, S_k ∈ R^{I_k×(∏_{m≠k} I_m)}
The function is contained in TensorToolbox.jl; see the README.
The definition of mode-k matricization can be seen on page 460 of the paper.
It works.
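For example, a quick shape check of this matricization convention with TensorToolbox.jl:
using TensorToolbox
S = rand(2, 3, 4)
size(tenmat(S, 2))   # (3, 8): the mode-2 index becomes the rows, the remaining modes are flattened into the columns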
I have seen page 26 in this slide deck.
using TensorToolbox
using LinearAlgebra
using Arpack
function STHOSVD(T, reqrank)
    N = ndims(T)
    tensor_shape = size(T)
    for i = 1:N
        T_i = tenmat(T, i)                       # mode-i matricization
        if reqrank[i] == tensor_shape[i]
            USV = svd(T_i)                       # full SVD when no truncation is needed
        else
            USV = svds(T_i; nsv=reqrank[i])[1]   # truncated SVD via Arpack
        end
        T = ttm(T, USV.U * USV.U', i)            # project mode i onto the leading left singular vectors
    end
    return T
end
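A minimal usage sketch checking the reconstruction quality (note that for a random tensor the relative error will not be small, since it has no genuine low-rank structure):
A = rand(4,4,4,4)
X = STHOSVD(A, [2,2,2,2])
norm(A - X) / norm(A)   # relative Frobenius error of the rank-(2,2,2,2) approximation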
I'm new to Julia programming. I have managed to solve some first-order DDEs (delay differential equations) and ODEs. I now need to solve a second-order delay differential equation, but I didn't manage to find documentation about that (I previously used DifferentialEquations.jl).
The equation (where F is a function and τ the delay): u''(t) = -u(t) + F(u(t-τ), u(t))
How can I do this?
Here is my code using the given information; it seems that the system stays at rest, which is incorrect. I probably did something wrong.
function bc_model(du, u, h, p, t)
    # [ u'(t), u''(t) ] = [ u[1], -u[1] + F(ud[0], u[0]) ]  (indices shift by one in Julia: A[0] -> A[1])
    γ, σ, Q = p
    ud = h(p, t - σ)[1]                          # delayed position u(t-σ)
    du[1] = u[2]                                 # mutate du in place; `du = [...]` would only rebind the local name
    du[2] = Q^2*(γ/Q*tanh(ud) - u[1]) - u[2]
end
u0 = [0.1, 0]
h(p, t) = u0                                     # constant history function
lags = [σ]                                       # γ, σ, Q must be defined beforehand
tspan = (0.0, σ*100.0)
alg = MethodOfSteps(Tsit5())
p = (γ, σ, Q)
prob = DDEProblem(bc_model, u0, h, tspan, p; constant_lags=lags)
sol = solve(prob, alg)
plot(sol)
The code is in fact working! It turned out that my normalization constants were not consistent. Thank you!
You get a state space of dimension 2, containing u = [u(t), u'(t)]. Consequently, the return vector of the right-side function is [u'(t), u''(t)]. Then, if ud is the delayed state [u(t-τ), u'(t-τ)], the right-side function can be formulated as
[ u'(t), u''(t) ] = [ u[1], -u[1] + F(ud[0], u[0]) ]
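As a minimal sketch of that formulation in DifferentialEquations.jl's in-place form (with 1-based indices, and assuming F and the delay τ are supplied via the parameters):
function rhs!(du, u, h, p, t)
    F, τ = p
    ud = h(p, t - τ)                 # delayed state [u(t-τ), u'(t-τ)]
    du[1] = u[2]                     # u'(t)
    du[2] = -u[1] + F(ud[1], u[1])   # u''(t) = -u(t) + F(u(t-τ), u(t))
end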
I am trying to write documentation for NeuralNetDiffEq.jl. I found an ODE solution with Flux.jl in this amazing tutorial: https://mitmath.github.io/18S096SciML/lecture2/ml
using Flux
using DifferentialEquations
using LinearAlgebra
using Plots
using Statistics
NNODE = Chain(x -> [x],            # wrap the scalar input in a vector for Dense
              Dense(1, 32, tanh),
              Dense(32, 1),
              first)               # unwrap back to a scalar
NNODE(1.0)
g(t) = t * NNODE(t) + 1f0          # trial solution; satisfies the initial condition g(0) = 1 by construction
ϵ = sqrt(eps(Float32))
loss() = mean(abs2(((g(t + ϵ) - g(t)) / ϵ) - cos(2π * t)) for t in 0:1f-2:1f0)   # match g'(t) ≈ cos(2πt) via finite differences
opt = Flux.Descent(0.01)
data = Iterators.repeated((), 5000)
iter = 0
cb = function () # callback function to observe training
global iter += 1
if iter % 500 == 0
display(loss())
end
end
display(loss())
Flux.train!(loss, Flux.params(NNODE), data, opt; cb=cb)
t = 0:0.001:1.0
plot(t,g.(t),label="NN")
plot!(t,1.0 .+ sin.(2π .* t) / 2π, label="True")
I am having trouble understanding the parameters involved in invoking the training process for NeuralNetDiffEq.jl, as in:
function DiffEqBase.solve(
    prob::DiffEqBase.AbstractODEProblem,
    alg::NeuralNetDiffEqAlgorithm,
    args...;
    dt,
    timeseries_errors = true,
    save_everystep = true,
    adaptive = false,
    abstol = 1f-6,
    verbose = false,
    maxiters = 100)
What would be a valid input for alg parameter? What would be the equivalent code in NeuralNetDiffEq.jl for the above ODE example?
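For reference, a sketch of what a valid alg might look like, assuming the package still exports the NNODE algorithm type (wrapping a Flux chain and optimizer) as in its README at the time; names and signatures may have changed:
using NeuralNetDiffEq, Flux, DifferentialEquations
# the same ODE as above: u' = cos(2πt), u(0) = 1
f(u, p, t) = cos(2π * t)
prob = ODEProblem(f, 1f0, (0f0, 1f0))
chain = Chain(Dense(1, 32, tanh), Dense(32, 1))
opt = Flux.ADAM(0.1)
sol = solve(prob, NeuralNetDiffEq.NNODE(chain, opt), dt=1f-2, verbose=true, maxiters=5000)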
Not sure what I'm doing wrong here: I'm trying to get a cross-validation score for a mixture-of-two-gammas model.
llikGammaMix2 = function(param, x) {
  if (any(param < 0) || param["p1"] > 1) {
    return(-Inf)
  } else {
    return(sum(log(
      dgamma(x, shape = param["k1"], scale = param["theta1"]) *
        param["p1"] + dgamma(x, shape = param["k2"], scale = param["theta2"]) *
        1
      (1 - param["p1"])
    )))
  }
}
initialParams = list(
  theta1 = 1,
  k1 = 1.1,
  p1 = 0.5,
  theta2 = 10,
  k2 = 2
)
for (i in 1:nrow(cichlids)) {
  SWS1_training <- cichlids$SWS1 - cichlids$SWS1[i]
  SWS1_test <- cichlids$SWS1[i]
  MLE_training2 <-
    optim(
      par = initialParams,
      fn = llikGammaMix2,
      x = SWS1_training,
      control = list(fnscale = -1)
    )$par
  LL_test2 <-
    optim(
      par = MLE_training2,
      fn = llikGammaMix2,
      x = SWS1_test,
      control = list(fnscale = -1)
    )$value
}
print(LL_test2)
This runs until it gets to the first optim(), then spits out Error in fn(par, ...) : attempt to apply non-function.
My first thought was a silly spelling error somewhere, but that doesn't seem to be the case. Any help is appreciated.
I believe the issue is in the return statement. It's unclear whether you meant to multiply or add the last quantity, (1 - param["p1"]), in the return value; since this is a mixture, I'm guessing you meant for it to be multiplied. Instead, it hangs at the end on its own line, and because the expression continues inside the enclosing parentheses, R parses 1 (1 - param["p1"]) as a call to the non-function 1, which throws the error:
return(sum(log(
  dgamma(x, shape = param["k1"], scale = param["theta1"]) *
    param["p1"] +
    dgamma(x, shape = param["k2"], scale = param["theta2"]) *
    (1 - param["p1"])
)))  ## ISSUE HERE: Is this what you meant?
There could be other issues with the code. I would double-check that the function you are optimizing is what you think it ought to be. It's also hard to tell unless you give a reproducible example we might be able to use. Try to clear up the above issue and let us know if there are still problems.
I have come across an implementation of the SG (Savitzky-Golay) filter in Julia at this link. When I execute the function apply_filter, an error is returned:
UndefVarError: apply_filter not defined
I think this is an implementation for a previous version of Julia (?). I am executing this in Julia 1.0 as of now. I couldn't find documentation about the defined types, which is where my guess concerning the error lies.
I would like to forewarn the user about using the function savitzkyGolay in Julia. There is a mismatch with the result from the SciPy implementation (which must have undergone several iterations of checking by the community):
@pyimport scipy.signal as ss
x=[1,2,3,4,5,6,7,8,9,10]
savitzkyGolay(x,5,1)
10-element Array{Float64,1}:
1.6000000000000003
2.200000000000001
3.0
4.0
5.000000000000001
6.000000000000001
7.0
8.0
8.8
9.400000000000002
#Python's scipy implementation
ss.savgol_filter(x,5,1)
10-element Array{Float64,1}:
1.0000000000000007
2.0000000000000004
2.9999999999999996
3.999999999999999
4.999999999999999
5.999999999999999
6.999999999999998
7.999999999999998
8.999999999999996
9.999999999999995
Note that the interior points agree up to floating-point error; the mismatch is at the boundaries. SciPy's default mode="interp" fits a polynomial to the edge segments, whereas this Julia implementation pads the signal with its end values, so the two handle the borders differently.
In case it can help, I have simplified the code.
using LinearAlgebra, DSP, Plots
function vandermonde(halfWindow, polyDeg)
    x = [1.0*i for i in -halfWindow:halfWindow]
    n = polyDeg + 1
    m = length(x)
    V = zeros(m, n)                 # Vandermonde matrix of the window positions
    for i = 1:m
        V[i, 1] = 1.0
    end
    for j = 2:n
        for i = 1:m
            V[i, j] = x[i] * V[i, j-1]
        end
    end
    return V
end
function SG(halfWindow, polyDeg)
    V = vandermonde(halfWindow, polyDeg)
    Q, R = qr(V)
    n = polyDeg + 1
    m = 2*halfWindow + 1
    R1 = vcat(R, zeros(m - n, n))
    sg = R1 \ Q'                               # pseudo-inverse of V via its QR factorization
    for i in 1:(polyDeg + 1)
        sg[i, :] = sg[i, :] * factorial(i-1)   # scale row i to yield the (i-1)-th derivative coefficients
    end
    return sg'
end
function apply_filter(filter, signal)
    halfWindow = round(Int, (length(filter) - 1)/2)
    # pad with the end values so the convolution is defined at the borders
    padded_signal = [signal[1]*ones(halfWindow); signal; signal[end]*ones(halfWindow)]
    filter_cross_signal = conv(filter[end:-1:1], padded_signal)   # correlation via a flipped-kernel convolution (needs DSP)
    return filter_cross_signal[2*halfWindow+1:end-2*halfWindow]
end
Here is how I use it:
using DelimitedFiles
mean_speed_unfiltered = readdlm("mean_speeds_raw_-2.txt")
sg = SG(500, 2)                  # half-window, polynomial degree
t = 10*10^(-3)                   # s, duration of the simulation
dt = 0.1/γ                       # time step (γ comes from the simulation)
Nt = convert(Int, round(t/dt))   # number of iterations
# Smooth the mean speed curve:
mean_speeds_smoothed = apply_filter(sg[:,1], mean_speed_unfiltered)
png(plot([j*dt for j=0:Nt], mean_speeds_smoothed, title = "Smoothed mean speed over time", xlabel = "t (s)"), "Mean_speed_filtered_SG")
derivative_mean_speeds_smoothed = apply_filter(sg[:,2], mean_speed_unfiltered)
plt1 = plot(mean_speeds_smoothed, derivative_mean_speeds_smoothed, title = "Derivative of mean speed over speed", xlabel = "<v>(t) (s)", ylabel = "d<v(t)>/dt")
png(plt1, "Force_SG_1D2Lasers")
However, it seems to me that the code presented at https://gist.github.com/lnacquaroli/c97fbc9a15488607e236b3472bcdf097#file-savitzkygolay-jl-L34 is faster.
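If you want to compare the two versions yourself, here is a rough timing sketch using BenchmarkTools.jl (the input is just a synthetic signal):
using BenchmarkTools
signal = randn(10_000)
sg = SG(500, 2)
@btime apply_filter($(sg[:, 1]), $signal)   # time the smoothing pass; swap in the other implementation to compare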