I have a constraint in OPL in which I need to use a dvar as an index into another dvar, but CPLEX gives me an error. I tried to avoid this error using a logical constraint, as explained in https://www.ibm.com/developerworks/community/forums/html/threadTopic?id=2be2ec22-db4b-4a2c-b164-615b9f735dc9&ps=25. But now I receive this error:
Error 5002: Q is not positive semi-definite
This is the constraint:
forall(j in pat,k in gior,w in slotp)
vinc4: (k==t[j])*y[j,k,w] ==
sum(g in giorni)(r[j,g,w+1]) +
sum(g in giorni)(l[j,g,w-1]);
If t[j] is a variable, the expression k == t[j] is not a constant, but it is the truth value of a constraint. This truth value is equivalent to a variable that is 1 if the constraint is true, and 0 if not.
It seems you are multiplying this 'variable' by another one, y[j,k,w], so you end up with a quadratically constrained model: a model in which some of the constraints contain quadratic terms. CPLEX can solve such models only if they are convex, and, judging from the error, that is not the case here.
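One common way around this, if logical constraints are acceptable for your model, is to replace the product by a pair of implications, roughly in the spirit of the thread you linked. A rough, untested sketch keeping your names (the second implication covers the case where the original equality forces the right-hand side to zero):
forall(j in pat, k in gior, w in slotp) {
  // when k is the day t[j], y must equal the sums
  (k == t[j]) => (y[j,k,w] == sum(g in giorni)(r[j,g,w+1]) + sum(g in giorni)(l[j,g,w-1]));
  // otherwise the original equality forces the sums to 0
  (k != t[j]) => (sum(g in giorni)(r[j,g,w+1]) + sum(g in giorni)(l[j,g,w-1]) == 0);
}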
I have derived a survival function for a system of components (ignore the details of how this system is set up) and I am trying to maximize its expected value, i.e. to maximize the expected value of the function:
surv_func <- function(x, mu) {
  (exp(-(x/mu)^(1/3)) * ((1 - exp(-(4/3)*x^(3/2))) + exp(-(-(4/3)*x^(3/2))))) * exp(-(x/(3-mu))^(1/3))
}
and I am supposed (since the PDF with my tasks hints at it) to use the function
optimize()
and the expected value for a function can be computed with
# Computes the expected value of the function f
E <- integrate(f, 0, Inf)
but my function depends on both x and mu. The expected value could (obviously) be computed if the integrand had no mu and depended only on x. For those interested, the mu comes from the fact that one of the components has a Weibull distribution with parameters (1/3, mu), and the 3 - mu comes from the fact that another component has a Weibull distribution with parameters (1/3, lambda). In the task there is a constraint mu + lambda = 3, so I thought that substituting lambda = 3 - mu into the second Weibull distribution and maximizing this problem would yield not only mu but also lambda.
If I try, just for the sake of learning about R, to compute the expected value using the code below (in the console window), it just gives me the following:
> E <- integrate(surv_func,0,Inf)
Error in (function (x, mu) : argument "mu" is missing, with no default
I am new to R and seem to be a little bit "slow" at learning. How can I approach this problem?
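From the documentation it looks like integrate() can forward extra arguments to the integrand, so maybe the expected value for a fixed mu can be wrapped in a one-argument function and handed to optimize(). Is the sketch below the right direction? (The names are mine, and I am assuming the integral converges and that mu must stay in (0, 3) because lambda = 3 - mu.)
# expected value of the survival function for a given mu (sketch)
expected_value <- function(mu) {
  integrate(surv_func, lower = 0, upper = Inf, mu = mu)$value
}
# maximize over mu; lambda is then 3 - mu
optimize(expected_value, interval = c(0, 3), maximum = TRUE)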
I have tried to follow the approach mentioned here: JuMP constraints involving matrix inverse. But I am still not able to get my code to run.
My code is as follows:
using JuMP, Ipopt, LinearAlgebra
FP = Model(solver=IpoptSolver())
@variable(FP, x[1:2,1:2] >= 0)
@objective(FP, Max, 0)
@NLconstraint(FP, inv(x) <= 0.5*I)
status = solve(FP)
I get the following error:
ERROR: LoadError: Unexpected object x[i,j] >= 0 for all i in {1,2}, j in {1,2} in nonlinear expression.
I am not sure what is going wrong. I am using JuMP 0.18.6. Could you please help? Thanks.
I have a variable Y with a Beta distribution: Beta(alpha, 1/3).
I have to find the value of alpha such that
P(Y <= 0.416) = 0.2
For more details, see my question on Math StackExchange:
https://math.stackexchange.com/questions/3038125/beta-distribution-find-the-parameter-alpha-of-mathcalbe-alpha-frac1
I wrote this function (I supposed that x is alpha, the root of the function):
f=function(x){
pbeta(0.416,x,1/3)
}
and I tried to use uniroot:
uniroot(f,interval=c(0,5),tol=1e-5)
I didn't understand this message: Error in uniroot(f, interval = c(0, 5), tol = 1e-05) : f() values at end points not of opposite sign.
I read here (Uniroot solution in R) that this method needs a stronger assumption to ensure the existence of a root, namely f(lower) * f(upper) < 0; but my function is positive, so I couldn't use it that way!
Does an alternative function exist in R?
Can anyone suggest how to find alpha with R?
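Following the sign-change idea, I guess the function handed to uniroot should be shifted by the target probability 0.2, so that it crosses zero at the alpha I am looking for; something like the sketch below (keeping the lower bound above 0 since alpha must be positive). Is that the right approach?
f <- function(x) pbeta(0.416, x, 1/3) - 0.2   # crosses zero where P(Y <= 0.416) = 0.2
uniroot(f, interval = c(1e-6, 5), tol = 1e-5)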
Thanks for the help in advance!!
I've been trying for a whole day to get my code to work but it fails despite the inputs and outputs being consistent.
Someone mentioned somewhere that ClassNLLCriterion does not accept values less than or equal to zero.
How am I supposed to go about training this network?
Here is part of my code; I suppose it fails in backward, since the model's output may contain negative values.
However, when I switch to the mean squared error criterion, the code works just fine.
ninputs = 22; noutputs = 3
hidden = 22
model = nn.Sequential()
model:add(nn.Linear(ninputs, hidden)) -- input layer -> hidden layer
model:add(nn.Tanh())
model:add(nn.Linear(hidden, noutputs))
model:add(nn.LogSoftMax())
----------------------------------------------------------------------
-- 3. Define a loss function, to be minimized.
-- Here we minimize the negative log-likelihood of the class predictions
-- (ClassNLLCriterion), which expects log-probabilities from LogSoftMax
-- and an integer class index as the target.
-- Torch provides many common criterions to train neural networks.
criterion = nn.ClassNLLCriterion()
----------------------------------------------------------------------
-- 4. Train the model
i=1
mean = {}
std = {}
-- To minimize the loss defined above, using the linear model defined
-- in 'model', we follow a stochastic gradient descent procedure (SGD).
-- SGD is a good optimization algorithm when the amount of training data
-- is large, and estimating the gradient of the loss function over the
-- entire training set is too costly.
-- Given an arbitrarily complex model, we can retrieve its trainable
-- parameters, and the gradients of our loss function wrt these
-- parameters by doing so:
x, dl_dx = model:getParameters()
-- In the following code, we define a closure, feval, which computes
-- the value of the loss function at a given point x, and the gradient of
-- that function with respect to x. x is the vector of trainable weights,
-- which, in this example, are all the weights of the linear matrix of
-- our model, plus one bias.
feval = function(x_new)
-- set x to x_new, if different
-- (in this simple example, x_new will typically always point to x,
-- so the copy is really useless)
if x ~= x_new then
x:copy(x_new)
end
-- select a new training sample
_nidx_ = (_nidx_ or 0) + 1
if _nidx_ > (#csv_tensor)[1] then _nidx_ = 1 end
local sample = csv_tensor[_nidx_]
local target = sample[{ {23,25} }]
local inputs = sample[{ {1,22} }] -- slicing of arrays.
-- reset gradients (gradients are always accumulated, to accommodate
-- batch methods)
dl_dx:zero()
-- evaluate the loss function and its derivative wrt x, for that sample
local loss_x = criterion:forward(model:forward(inputs), target)
model:backward(inputs, criterion:backward(model.output, target))
-- return loss(x) and dloss/dx
return loss_x, dl_dx
end
The error received is
/home/stormy/torch/install/bin/luajit:
/home/stormy/torch/install/share/lua/5.1/nn/THNN.lua:110: Assertion
`cur_target >= 0 && cur_target < n_classes' failed. at
/home/stormy/torch/extra/nn/lib/THNN/generic/ClassNLLCriterion.c:45
stack traceback: [C]: in function 'v'
/home/stormy/torch/install/share/lua/5.1/nn/THNN.lua:110: in function
'ClassNLLCriterion_updateOutput'
...rmy/torch/install/share/lua/5.1/nn/ClassNLLCriterion.lua:43: in
function 'forward' nn.lua:178: in function 'opfunc'
/home/stormy/torch/install/share/lua/5.1/optim/sgd.lua:44: in
function 'sgd' nn.lua:222: in main chunk [C]: in function 'dofile'
...ormy/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in
main chunk [C]: at 0x00405d50
The error message results from passing in targets that are out of bounds: ClassNLLCriterion expects each target to be an integer class index between 1 and nClasses, not a vector of scores.
For example:
m = nn.ClassNLLCriterion()
nClasses = 3
nBatch = 10
net_output = torch.randn(nBatch, nClasses)
targets = torch.Tensor(10):random(1,3) -- targets are between 1 and 3
m:forward(net_output, targets)
m:backward(net_output, targets)
Now, see a bad example (the situation you are running into):
targets[5] = 13 -- an out-of-bounds class index (too large)
targets[4] = 0 -- an out-of-bounds class index (too small)
-- these lines below will error
m:forward(net_output, targets)
m:backward(net_output, targets)
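If, as in the question, the target slice sample[{ {23,25} }] is a one-hot vector (that is an assumption about the CSV layout), one way to turn it into the class index that ClassNLLCriterion expects is, roughly:
local onehot = sample[{ {23,25} }]    -- e.g. a tensor like {0, 1, 0}
local _, idx = torch.max(onehot, 1)   -- position of the largest (the "1") entry
local target = idx[1]                 -- scalar class index in {1, 2, 3}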
I am trying to evaluate hierarchical models from R using the R2OpenBUGS library.
Relevant variables are:
N = 191,
p = 4,
k = 1,
x = N * p matrix (i.e. 191 * 4) of values,
t0 = k * (x' * x),
y = vector of continuous data with length N,
mu0 = vector of 4 zeros (i.e. c(0,0,0,0)),
prob = vector of 4 probabilities at 0.5 (i.e. c(0.5,0.5,0.5,0.5)),
indimodel = vector of 4 parameter groupings (i.e. c(1,2,4,8)).
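(These are assumed to be collected into the data list handed to R2OpenBUGS; the exact call is not shown here, so the sketch below is only how I imagine it, with names assumed to match the model file.)
data <- list(N = N, p = p, x = x, y = y, t0 = t0,
             mu0 = mu0, prob = prob, indimodel = indimodel)  # sketch; names are assumptions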
Initial values for tau and gama are generated using the following function in R:
inits<-function()
{
list(tau=runif(1,0,10),gama=c(1,1,1,1))
}
Thus, BUGS should just generate initial values for relevant variables missing from the list in inits().
However, when I attempt to run the following BUGS model:
model
{
for (i in 1:N)
{
mu[i]<-inprod(x[i,],nu[])
y[i]~dnorm(mu[i],tau)
}
for (i in 1:p)
{
gama[i]~dbern(prob[i])
nu[i]<-beta[i]*gama[i]
}
for (i in 1:p)
{
beta[i]~dnorm(mu0[i],t0[i])
}
tau~dgamma(0.00001,0.00001)
model<-inprod(gama[],indimodel[])
sigma<-sqrt(1/tau)
}
...I get the following error:
"expected the collection operator c error pos 13018"
"variable N is not defined"
...described in the log as:
model is syntactically correct
expected the collection operator c error pos 13018
variable N is not defined
model must have been compiled but not updated to be able to change RN generator
BugsCmds:NoCompileInits
model must be compiled before generating initial values
model must be initialized before updating
model must be initialized before monitors used
model must be initialized before monitors used
model must be initialized before monitors used
model must be initialized before monitors used
model must be initialized before monitors used
model must be initialized before monitors used
model must be initialized before monitors used
model must be initialized before DIC can be monitored
model must be initialized before updating
model must be initialized before monitors used
DIC monitor not set
I have a feeling that this issue stems from a missing declaration for some variable's (or variables') initial value(s).
I found the bug. In the BUGS model I mistakenly indexed t0 as a vector (t0[i]) when I should have specified it as a matrix. From R, t0 is passed in as a matrix (see the list of variables above), so WinBUGS throws the collection-operator error because the model, as written, expects t0 to be a vector.
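For reference, a matrix-consistent form of that prior could look like the sketch below; this is my reading of the fix rather than the exact line from the final model (dmnorm takes the full precision matrix, while the dnorm variant just picks out the diagonal entry):
beta[1:p] ~ dmnorm(mu0[], t0[,])
# or, keeping independent univariate priors inside the loop:
# beta[i] ~ dnorm(mu0[i], t0[i,i])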