Please read the following scenario and tell me what the problem is:
I have two Content Packages (CP1 and CP2).
CP2 has objectivesGlobalToSystem = true.
CP2 has a global objective with readSatisfiedStatus = true and targetObjectiveID = g-obj.
CP1 has a local objective with writeSatisfiedStatus = true and targetObjectiveID = g-obj.
The Objective Progress Status and Objective Satisfied Status of CP1's local objective are both true.
CP1 is launched, and after an Exit All navigation request the Objective Satisfied Status of its local objective is written to the global objective of CP2.
Control then returns to the LMS.
CP1 is launched again. What are the values of the Objective Progress Status and Objective Satisfied Status of CP1's local objective?
CP1 will not be affected, because it does not read the global objective's values. Only CP2 has access to the global objective's values, since it has readSatisfiedStatus set to true. writeSatisfiedStatus is like assigning a value to a variable, while readSatisfiedStatus reads the value assigned to that variable.
I'm using R + deSolve to model seaweed growth, and I wish to implement a function in which seaweed is harvested when an internal variable of the model (i.e. not a state variable, so not accessed via the y of (t, y, parms)) reaches a threshold value. I would also like the threshold value to be controlled by a variable that I can pass to the model in parms. Some (simplified) example code:
Seaweed_model <- function(t, y, parms, ...) {
  with(as.list(c(y, parms)), {
    # (lots of internal variables hidden here for brevity)
    I_top <- PAR(t) * exp(-K_d * (z - h_MA)) # calculate incident irradiance at the top of the farm
    I_av <- (I_top / (K_d * z + N_f * a_cs)) * (1 - exp(-(K_d * z + N_f * a_cs))) # calculate the average irradiance throughout the height of the farm
    g_E <- I_av / (I_s + I_av) # light limitation scaling function
    # (state variables not relevant to this problem omitted)
    dN_f <- mu_g_EQT * N_s - (d_m * N_f) # change in fixed nitrogen
    list(c(dNH4, dNO3, dN_s, dN_f, dD),
         g_E = g_E)
  })
}
Then I would like to trigger an event function when g_E reaches a threshold value
rootfunc <- function(t, y, parms, ...) {
  return(g_E - threshold_value) # how do I access these values from the ode solver call?
}
And the event
eventfunc <- function(t, y, parms, ...) {
  c(y[1:2], y[3] * harvest_fraction, y[4] * harvest_fraction, y[5]) # reduces biomass state variables fractionally
}
The harvest parameters are added to the parms passed to the ode call:
parms_harvest <- c(
  harvest_threshold = 0.2,
  harvest_fraction = 0.75
)
lim_harvest <- ode(times = times, func = Seaweed_model, y = y0,
                   parms = c(parms_porphyra, parms_farm, parms_harvest),
                   events = list(func = eventfunc, root = TRUE), rootfun = rootfunc)
All of the examples for deSolve events do a root evaluation on a state variable of the model (e.g. y['N_s'] is accessible in the example above) rather than on internal variables of the model. g_E as written in the above rootfunc clearly will not work, but I have no idea how to access it, if that is even possible. Substituting a state variable of the model for g_E (not useful scientifically, but good for testing purposes), I get the error object 'threshold_value' not found, so the root function does not even seem to be able to access parms.
I'm sure I've missed something fundamental here. Thanks so much in advance for your help!
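One way to approach this, sketched below and assuming the model function is cheap enough to evaluate an extra time per root check: functions passed to ode() only ever receive (t, y, parms), so the root function can recompute g_E by calling the model function itself and picking the auxiliary output out of the returned list, and both the threshold and the harvest fraction can be read out of parms by name (harvest_threshold and harvest_fraction as defined above).
# A minimal sketch, not a tested drop-in: re-evaluate the model inside the
# root function to recover the internal variable g_E, and pull the threshold
# out of parms by name.
rootfunc <- function(t, y, parms, ...) {
  g_E <- Seaweed_model(t, y, parms)$g_E # auxiliary output returned by the model
  g_E - parms[["harvest_threshold"]]    # root (zero crossing) triggers the event
}

eventfunc <- function(t, y, parms, ...) {
  f <- parms[["harvest_fraction"]]
  c(y[1:2], y[3] * f, y[4] * f, y[5])   # harvest: scale the biomass states
}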
Issue 1
I have an objective function, gFun(modelOutput,l,u), which returns 0 if the simulated output is in the interval [l,u]; otherwise it returns a positive(!) number.
OFfun <- function(params) {
  out <- simulate(params)
  OF <- gFun(out, 0, 5)
  return(OF)
}
The objective function is called from the optim function with some tolerance settings.
fitval <- optim(par = parms, fn = OFfun, method = "SANN", control = list(abstol = 1e-2))
summary(fitval)
My issue is that the optimization doesn't stop when OFfun returns 0.
I have tried adding the condition below:
if (OF == 0) {
  opt <- options(show.error.messages = FALSE)
  on.exit(options(opt))
  stop()
}
It works, but it doesn't return the OF back to optim, and therefore I don't get the fitval info with the estimated parameters.
Issue 2
Another issue is that the solver sometimes crashes and aborts the entire optimisation. I would like to harvest many solution sets for different initial guesses, so I need to handle failed simulations. This is probably related to Issue 1.
Any advice would be very appreciated.
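Two observations, plus a sketch. First, per R's documentation for optim, with method "SANN" maxit gives the total number of function evaluations and there is no other stopping criterion, so abstol will not stop the run early. Second, both issues can be handled by wrapping the objective: keep the best point seen so far in a closure, so that an early stop() still leaves usable results, and trap failed simulations with tryCatch, so a solver crash becomes a large penalty value instead of aborting the optimisation. A minimal sketch reusing simulate() and gFun() from above (make_OFfun and the 1e10 penalty are illustrative choices, not from the original code):
make_OFfun <- function() {
  best <- list(par = NULL, value = Inf)  # best point seen so far
  fn <- function(params) {
    OF <- tryCatch({
      out <- simulate(params)
      gFun(out, 0, 5)
    }, error = function(e) 1e10)         # failed simulation -> large penalty
    if (OF < best$value) best <<- list(par = params, value = OF)
    if (OF == 0) stop("target reached")  # abort optim; `best` is kept in the closure
    OF
  }
  list(fn = fn, best = function() best)
}

wrapper <- make_OFfun()
fitval <- tryCatch(
  optim(par = parms, fn = wrapper$fn, method = "SANN"),
  error = function(e) wrapper$best()     # early stop: recover best par/value
)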
I am trying to use the NOMAD technique for blackbox optimisation from the crs package (a C++ implementation), which is called via the snomadr function. The method works for straight numerical optimisation, but errors when categorical features are included. However, categorical optimisation is not well documented in the help, so I am struggling to see where I am going wrong. Reproducible code below:
library(crs)
library(randomForest)
Illustrating this on randomForest & the iris dataset.
Creating the randomForest model (leaving the last row out, to use as the starting point for the optimizer):
rfIris <- randomForest(x=iris[-150,-c(1)], y=unlist(iris[-150,1]))
The objective function (functions we want to optimize)
objFn <- function(x0, model){
  preds <- predict(object = model, newdata = x0)
  as.numeric(preds)
}
Test to see if the objective function works (should return ~6.37)
objOut <- objFn(x0=unlist(iris[150,-c(1)]),model = rfIris)
Creating initial conditions, options list, and upper/lower bounds for Nomad
x0 <- iris[150,-c(1)]
x0 <- unlist(x0)
options <- list("MAX_BB_EVAL"=10000,
"MIN_MESH_SIZE"=0.001,
"INITIAL_MESH_SIZE"=1,
"MIN_POLL_SIZE"=0.001,
"NEIGHBORS_EXE" = c(1,2,3),
"EXTENDED_POLL_ENABLED" = 'yes',
"EXTENDED_POLL_TRIGGER" = 'r0.01',
"VNS_SEARCH" = '1')
up <- c(10,10,10,10)
low <- c(0,0,0,0)
Calling the optimizer
opt <- snomadr(eval.f = objFn, n = 4, bbin = c(0,0,0,2), bbout = 0, x0= x0 ,model = rfIris, opts=options,
ub = up, lb = low)
and I get an error about the NEIGHBORS_EXE parameter in the options list. It seems as if I need to supply NEIGHBORS_EXE with a file corresponding to a set of 'extended poll' coordinates, but it is not clear what exactly these are.
The method works if "EXTENDED_POLL_ENABLED" = 'no' is set in the options list, as it then ignores the categorical variables and defaults to numerical optimisation, but this is not what I want.
I also managed to pull up some additional information for NEIGHBORS_EXE using
snomadr(information=list("help"="-h NEIGHBORS_EXE"))
and again, I do not understand what the 'neighbours.exe' is meant to be.
Any help would be much appreciated!
This is the response from Zhenghua, who coded the R interface:
The issue is that he did not configure the parameter "NEIGHBORS_EXE" properly. He needs to prepare an executable file for defining the neighbours, put the executable file in the folder from which R is called, and then set the parameter "NEIGHBORS_EXE" to the executable's file name.
You can contact us at nomad@gerad.ca if you wish to continue the discussion.
For the NEIGHBORS_EXE parameter, you can refer to section 7.1 of the NOMAD user guide:
https://www.gerad.ca/nomad/Downloads/user_guide.pdf
I've been trying for a whole day to get my code to work, but it fails even though the inputs and outputs are consistent.
Someone mentioned somewhere that ClassNLLCriterion does not accept values less than or equal to zero.
How am I supposed to go about training this network?
Here is part of my code; I suppose it fails in backward, where the model's output may contain negative values.
However, when I switch to the mean squared error criterion, the code works just fine.
ninputs = 22; noutputs = 3
hidden = 22
model = nn.Sequential()
model:add(nn.Linear(ninputs, hidden)) -- hidden layer
model:add(nn.Tanh())
model:add(nn.Linear(hidden, noutputs))
model:add(nn.LogSoftMax())
----------------------------------------------------------------------
-- 3. Define a loss function, to be minimized.
-- In this example, we minimize the negative log-likelihood between the
-- predictions of our model and the ground-truth class labels available
-- in the dataset.
-- Torch provides many common criterions to train neural networks.
criterion = nn.ClassNLLCriterion()
----------------------------------------------------------------------
-- 4. Train the model
i=1
mean = {}
std = {}
-- To minimize the loss defined above, using the linear model defined
-- in 'model', we follow a stochastic gradient descent procedure (SGD).
-- SGD is a good optimization algorithm when the amount of training data
-- is large, and estimating the gradient of the loss function over the
-- entire training set is too costly.
-- Given an arbitrarily complex model, we can retrieve its trainable
-- parameters, and the gradients of our loss function wrt these
-- parameters by doing so:
x, dl_dx = model:getParameters()
-- In the following code, we define a closure, feval, which computes
-- the value of the loss function at a given point x, and the gradient of
-- that function with respect to x. x is the vector of trainable weights,
-- which, in this example, are all the weights and biases of the linear
-- layers of our model.
feval = function(x_new)
-- set x to x_new, if different
-- (in this simple example, x_new will typically always point to x,
-- so the copy is really useless)
if x ~= x_new then
x:copy(x_new)
end
-- select a new training sample
_nidx_ = (_nidx_ or 0) + 1
if _nidx_ > (#csv_tensor)[1] then _nidx_ = 1 end
local sample = csv_tensor[_nidx_]
local target = sample[{ {23,25} }]
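-- NOTE (likely the bug, see the answer below): this slice is a 3-element
-- vector, but nn.ClassNLLCriterion expects the target to be a single class
-- index in {1, ..., noutputs}; a target value of 0 in this vector trips the
-- bounds assertion in the error below.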
local inputs = sample[{ {1,22} }] -- slicing of arrays.
-- reset gradients (gradients are always accumulated, to accommodate
-- batch methods)
dl_dx:zero()
-- evaluate the loss function and its derivative wrt x, for that sample
local loss_x = criterion:forward(model:forward(inputs), target)
model:backward(inputs, criterion:backward(model.output, target))
-- return loss(x) and dloss/dx
return loss_x, dl_dx
end
The error received is
/home/stormy/torch/install/bin/luajit: /home/stormy/torch/install/share/lua/5.1/nn/THNN.lua:110:
Assertion `cur_target >= 0 && cur_target < n_classes' failed.
at /home/stormy/torch/extra/nn/lib/THNN/generic/ClassNLLCriterion.c:45
stack traceback:
  [C]: in function 'v'
  /home/stormy/torch/install/share/lua/5.1/nn/THNN.lua:110: in function 'ClassNLLCriterion_updateOutput'
  ...rmy/torch/install/share/lua/5.1/nn/ClassNLLCriterion.lua:43: in function 'forward'
  nn.lua:178: in function 'opfunc'
  /home/stormy/torch/install/share/lua/5.1/optim/sgd.lua:44: in function 'sgd'
  nn.lua:222: in main chunk
  [C]: in function 'dofile'
  ...ormy/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
  [C]: at 0x00405d50
The error message results from passing in targets that are out of bounds.
For example:
m = nn.ClassNLLCriterion()
nClasses = 3
nBatch = 10
net_output = torch.randn(nBatch, nClasses)
targets = torch.Tensor(10):random(1,3) -- targets are between 1 and 3
m:forward(net_output, targets)
m:backward(net_output, targets)
Now see a bad example (the one you are suffering from):
targets[5] = 13 -- an out-of-bounds class (greater than nClasses)
targets[4] = 0  -- an out-of-bounds class (targets are 1-indexed)
-- these lines below will error
m:forward(net_output, targets)
m:backward(net_output, targets)
I'm working with Matlab's Neural Network Toolbox, and I have generated a neural network function with genFunction.
I would like to know what the mapminmax_apply function does, what these variables are used for, and what they mean in the neural network:
% Input 1
x1_step1_xoffset = [0.151979470539401;-89.4008362047824;0.387909026651698;0.201508462422352];
x1_step1_gain = [2.67439342164766;0.0112020512930696;3.56055585104964;4.09080417195814];
x1_step1_ymin = -1;
Here is the mapminmax_apply function:
% Map Minimum and Maximum Input Processing Function
function y = mapminmax_apply(x,settings_gain,settings_xoffset,settings_ymin)
  y = bsxfun(@minus,x,settings_xoffset);
  y = bsxfun(@times,y,settings_gain);
  y = bsxfun(@plus,y,settings_ymin);
end
And here is the call to the function with the above variables:
% Input 1
Xp1 = mapminmax_apply(X{1,ts},x1_step1_gain,x1_step1_xoffset,x1_step1_ymin);
I think:
The mapminmax function can also return the settings it uses (amongst others: offset, gain and ymin). For some reason, in the code spat out by genFunction, these settings are given at the beginning of the file, under Input 1, in the form of x1_step1_xoffset, etc.
mapminmax('apply',X,PS) will apply the settings in PS to the mapminmax algorithm.
So I think the code generated here has more steps than you necessarily need. You could get rid of the Input 1 steps and just use a simple Xp1 = mapminmax(x1') instead of the mapminmax_apply.
Cheers
Matlab's NN toolbox automatically normalizes the features of the dataset.
The functions mapminmax_apply and mapminmax_reverse are related to normalizing the features.
The function mapminmax_apply normalizes the input range to [-1, 1].
Since the output also comes out as a normalized vector/value (between -1 and 1), it needs to be reverse-normalized using the function mapminmax_reverse.
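Concretely, reading off the generated function above, and assuming the settings came from mapminmax with its defaults ymin = -1 and ymax = +1 (an assumption; only ymin appears in the generated code), each feature x is transformed as
y = (x - xoffset) .* gain + ymin, with xoffset = min(x) and gain = (ymax - ymin) ./ (max(x) - min(x)),
so each feature's observed [min, max] range is mapped linearly onto [-1, +1]. mapminmax_reverse just inverts this: x = (y - ymin) ./ gain + xoffset.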
Cheers