What are x1_step1_xoffset, x1_step1_gain and x1_step1_ymin in a neural network generated by genFunction in Matlab?

I'm working with Matlab's Neural Network toolbox and I have generated a neural network function with genFunction.
I would like to know what the mapminmax_apply function does, what these variables are used for, and what they mean in the neural network:
% Input 1
x1_step1_xoffset = [0.151979470539401;-89.4008362047824;0.387909026651698;0.201508462422352];
x1_step1_gain = [2.67439342164766;0.0112020512930696;3.56055585104964;4.09080417195814];
x1_step1_ymin = -1;
Here is the mapminmax_apply function:
% Map Minimum and Maximum Input Processing Function
function y = mapminmax_apply(x,settings_gain,settings_xoffset,settings_ymin)
y = bsxfun(@minus,x,settings_xoffset);
y = bsxfun(@times,y,settings_gain);
y = bsxfun(@plus,y,settings_ymin);
end
And here is the call to the function with the above variables:
% Input 1
Xp1 = mapminmax_apply(X{1,ts},x1_step1_gain,x1_step1_xoffset,x1_step1_ymin);

I think:
The mapminmax function can also return the settings it uses (amongst others: offset, gain and ymin). For some reason, in the code spat out by genFunction, these settings are given at the beginning of the file, under Input 1, in the form of x1_step1_xoffset, etc.
mapminmax('apply',X,PS) will apply the settings in PS to the mapminmax algorithm.
So the generated code has more steps than you necessarily need: you could get rid of the Input 1 variables and call mapminmax('apply',x1,PS) with the stored settings instead of the standalone mapminmax_apply.
Cheers

The Matlab NN toolbox automatically normalizes the features of the dataset.
The functions mapminmax_apply and mapminmax_reverse handle this feature normalization.
mapminmax_apply normalizes each input feature to the range -1 to 1.
Since the network's output also comes out normalized (between -1 and 1), it needs to be reverse-normalized using mapminmax_reverse.
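In code form, the normalization and the role of each x1_step1_* setting can be sketched like this. This is a minimal NumPy illustration of the standard mapminmax arithmetic, y = (x - xoffset) * gain + ymin with gain = (ymax - ymin)/(xmax - xmin) for the default targets [-1, 1]; the toy data is made up:
import numpy as np

def mapminmax_apply(x, gain, xoffset, ymin):
    # y = (x - xoffset) * gain + ymin, applied per feature
    return (x - xoffset) * gain + ymin

# Settings as genFunction would emit them, derived from the training data:
x_train = np.array([0.5, 1.0, 2.0])            # one feature
xoffset = x_train.min()                        # x1_step1_xoffset = per-feature minimum
gain = 2.0 / (x_train.max() - x_train.min())   # x1_step1_gain = (1 - (-1)) / (xmax - xmin)
ymin = -1.0                                    # x1_step1_ymin
print(mapminmax_apply(x_train, gain, xoffset, ymin))  # [-1. -0.333...  1.]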
Cheers

Related

How to optimize with differential evolution using julia package Evolutionary.jl?

I encountered this problem after I specified a differential evolution algorithm and an initial population of multilayer perceptron (MLP) networks: the task is to evolve a population of MLPs by DE. I tried to use the Evolutionary package, but failed at this problem. I am just a beginner in Julia. Can anyone help me with this? Or is there any other way to implement DE to evolve MLPs? I can't find any Julia example of evolving an MLP by DE, and without a similar example to follow I don't really know how to adapt existing code. The code snippets are attached below.
# Here are the snippets of code
begin
    features = Iris.features();
    slabels = Iris.labels();
    classes = unique(slabels)   # unique classes in the dataset
    nclasses = length(classes)  # number of classes
    d, n = size(features)       # dimension and size of the dataset
end
# define the MLP
model = Chain(Dense(d, 15, relu), Dense(15, nclasses))
# rewrite initial_population to generate a group of MLPs
begin
    import Evolutionary.initial_population
    function initial_population(method::M, individual::Chain;
                                rng::Random.AbstractRNG=Random.default_rng(),
                                kwargs...) where {M<:Evolutionary.AbstractOptimizer}
        θ, re = Flux.destructure(individual);
        [re(randn(rng, length(θ))) for i in 1:Evolutionary.population_size(method)]
    end
end
# define the DE algorithm (I just used random parameters)
algo2 = DE(
    populationSize = 150,
    F = 0.9,
    n = 1,
    K = 0.5*(1.9),
    selection = rouletteinv
)
popu = initial_population(algo2, model)
# In the source code of Evolutionary.jl, it seems that to use the optimize() function I need to pass a constraint? I am not sure. I have tried every method of optimize(), but it still reported an error. Worse, I am not sure how to use a box constraint: I don't know how to set its upper and lower bounds in this case. I tried a no-constraint option, and I also tried a random box constraint just to get optimize() to run, but both failed. The reported error is in the attached picture.
cnst = BoxConstraints([0.5, 0.5], [2.0, 2.0])
res2 = Evolutionary.optimize(fitness,cnst,algo2,popu,opts)
# So far, all I do is define a DE algorithm, an initial population, and an MLP network. There is also a uniform_mlp(), which deconstructs an MLP into a vector, performs the crossover operator, and reconstructs new MLPs from the results:
function uniform_mlp(m1::T, m2::T; rng::Random.AbstractRNG=Random.default_rng()) where {T <: Chain}
    θ1, re1 = Flux.destructure(m1);
    θ2, re2 = Flux.destructure(m2);
    c1, c2 = UX(θ1, θ2; rng=rng)
    return re1(c1), re2(c2)
end
# There is also a mutation function:
function gaussian_mlp(σ::Real = 1.0)
    vop = gaussian(σ)
    function mutation(recombinant::T; rng::Random.AbstractRNG=Random.default_rng()) where {T <: Chain}
        θ, re = Flux.destructure(recombinant)
        return re(convert(Vector{Float32}, vop(θ; rng=rng)))
    end
    return mutation
end
The easiest way to use this is through Optimization.jl. There is an Evolutionary.jl wrapper that makes it use the standardized Optimization.jl interface. This looks like:
using Optimization, OptimizationEvolutionary
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]
f = OptimizationFunction(rosenbrock)
prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0,-1.0], ub = [1.0,1.0])
sol = solve(prob, Evolutionary.DE())
Though, given previous measurements of global optimizer performance, we would recommend BlackBoxOptim's methods as well; switching is as simple as changing the optimizer dispatch:
using Optimization, OptimizationBBO
sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited(), maxiters=100000, maxtime=1000.0)
This is also a DE method, but one with an adaptive radius etc. that performs much better (on average).
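For intuition, the flatten-evaluate-rebuild pattern that both the question (via Flux.destructure) and these wrappers rely on can be sketched in a language-neutral way. The following Python sketch runs DE over the flattened weights of a tiny MLP using scipy.optimize.differential_evolution; the network shape, bounds, and dummy data are illustrative assumptions, not part of the original question:
import numpy as np
from scipy.optimize import differential_evolution

# Tiny MLP, 4 -> 5 -> 3 (Iris-like), stored as one flat parameter vector
d, h, c = 4, 5, 3
sizes = [(h, d), (h,), (c, h), (c,)]
n_params = sum(int(np.prod(s)) for s in sizes)

def restructure(theta):
    # Rebuild weight matrices/biases from the flat vector (analogue of destructure's `re`)
    params, i = [], 0
    for shape in sizes:
        n = int(np.prod(shape))
        params.append(theta[i:i + n].reshape(shape))
        i += n
    return params

def fitness(theta, X, y):
    W1, b1, W2, b2 = restructure(theta)
    hidden = np.maximum(0.0, X @ W1.T + b1)        # relu layer
    logits = hidden @ W2.T + b2
    logits -= logits.max(axis=1, keepdims=True)    # numerically stable softmax
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()  # cross-entropy

# Dummy data standing in for the Iris features/labels
rng = np.random.default_rng(0)
X = rng.normal(size=(30, d))
y = rng.integers(0, c, size=30)

bounds = [(-3.0, 3.0)] * n_params  # the box constraint: one (lb, ub) pair per weight
res = differential_evolution(lambda t: fitness(t, X, y), bounds, maxiter=50, seed=0)
print(res.fun)  # best cross-entropy found
Note that the box constraint is over the flattened parameter vector, which is why it needs one lower and one upper bound per entry of θ, not one pair per layer.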

zfit straight line fitting for 2 dim dataset

I would like to fit a 2-dim plot with a straight line (a*x+b) using zfit, like in the following figure.
That was very easy with the probfit package, but it has been deprecated by scikit-hep. https://nbviewer.jupyter.org/github/scikit-hep/probfit/blob/master/tutorial/tutorial.ipynb
How can I fit such 2-dim plots with an arbitrary function?
I've checked the zfit examples, but they seem to assume some distribution (histogram), so zfit expects a dataset like a 1-d array, and I couldn't work out how to pass 2-d data to zfit.
There is currently no direct way in zfit to do this out of the box (in one line), since a corresponding loss simply has not been added yet.
However, SimpleLoss (zfit.loss.SimpleLoss) allows you to construct any loss you can think of (have a look at the example in its docstring as well). In your case, it would look something like this:
x = your_data
y = your_targets # y-value
obs = zfit.Space('x', (lower, upper))
param1 = zfit.Parameter(...)
param2 = zfit.Parameter(...)
...
model = Func(...) # a function is the way to go here
data = zfit.Data.from_numpy(array=x, obs=obs)
def mse():
    prediction = model.func(data)
    value = tf.reduce_mean((prediction - y) ** 2)  # or whatever you want to have
    return value
loss = zfit.loss.SimpleLoss(mse, [param1, param2])
# etc.
On another note, it would be a good idea to add such a loss. If you're interested to contribute I recommend to get in contact with the authors and they will gladly help you and guide you to it.
UPDATE
The loss function itself presumably consists of three or four things: x, y, a model, and maybe an uncertainty on y. The chi2 loss looks like this:
def chi2():
    y_pred = model.func(x)
    return tf.reduce_sum(((y_pred - y) / y_error) ** 2)
loss = zfit.loss.SimpleLoss(chi2, model.get_params())
That's all, four lines of code. x is a zfit.Data object, and model is in this case a Func.
Does that work?
That's all.
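Putting the pieces of the answer together, a self-contained sketch of the straight-line chi2 fit could look as follows. It sidesteps zfit.Data and Func by evaluating the model directly on NumPy arrays; the data are toy values, and the exact SimpleLoss signature (in particular errordef) may differ between zfit versions:
import numpy as np
import tensorflow as tf
import zfit

# Toy (x, y) data standing in for the 2-dim dataset
x = np.linspace(0.0, 10.0, 50)
y_error = np.full(50, 0.5)
y = 2.0 * x + 1.0 + np.random.normal(0.0, 0.5, size=50)

a = zfit.Parameter("a", 1.0)  # slope
b = zfit.Parameter("b", 0.0)  # intercept

def chi2():
    y_pred = a * x + b  # straight-line model a*x + b
    return tf.reduce_sum(((y_pred - y) / y_error) ** 2)

loss = zfit.loss.SimpleLoss(chi2, [a, b], errordef=1.0)
result = zfit.minimize.Minuit().minimize(loss)
print(result.params)  # fitted a and b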

Split output of LSTM to do computation on each vector in keras/tensorflow

I'm trying to implement a custom layer in Keras. In the end it should implement an attention layer. So what I want to do is take the output of an LSTM and run a computation on every vector of that output.
I've got an LSTM with return_sequences=True, so I get an output with a shape like (batch_size, num_vectors, dim_vector).
How can I access a single vector in the call function of the custom layer? Or better, how can I split the input tensor into a list of tensors of shape (dim_vector)?
This list should then contain batch_size * num_vectors vectors/tensors.
What I want to do looks kind of like this:
for i in range(num_vectors):
    x_i = list_of_vectors[i]
    W = self.W.eval().transpose()
    W1 = self.W1.eval()
    b = self.b.eval()
    b1 = self.b1.eval()
    activated = self.kernel_activation(W1.dot(x_i) + b1)
    score = W.dot(activated) + b
    scores.append(score)
What looks kind of promising but is poorly documented is K.gather(). Maybe someone could explain how it works, or has a better idea of how to deal with my problem.
I also tried tf.unstack() to get a list, but this doesn't work because the dimensions of my input tensor are unknown, except for dim_vector.
I'm working with Keras on the TensorFlow backend.
Thanks in advance
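One common way around per-vector loops in a custom layer is to express the score computation as whole-tensor operations: K.dot broadcasts a (batch, num_vectors, dim) tensor against a 2-D weight matrix, so every time step is processed at once, without eval() or unstacking. A minimal sketch along those lines (the layer name, the tanh activation, and the weight shapes are illustrative assumptions, not the asker's code):
import tensorflow as tf
from tensorflow import keras
K = keras.backend

class ScoreLayer(keras.layers.Layer):
    # Computes one scalar score per time step of an LSTM output
    def build(self, input_shape):
        dim = int(input_shape[-1])
        self.W1 = self.add_weight(name="W1", shape=(dim, dim), initializer="glorot_uniform")
        self.b1 = self.add_weight(name="b1", shape=(dim,), initializer="zeros")
        self.W = self.add_weight(name="W", shape=(dim, 1), initializer="glorot_uniform")
        self.b = self.add_weight(name="b", shape=(1,), initializer="zeros")
        super().build(input_shape)

    def call(self, x):
        # x: (batch_size, num_vectors, dim_vector)
        activated = K.tanh(K.dot(x, self.W1) + self.b1)  # (batch, num_vectors, dim)
        return K.dot(activated, self.W) + self.b          # (batch, num_vectors, 1)

inputs = keras.Input(shape=(None, 8))
seq = keras.layers.LSTM(8, return_sequences=True)(inputs)
scores = ScoreLayer()(seq)  # scores for all vectors and all batches at once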

r Nomad categorical optimisation (snomadr)

I am trying to use the NOMAD technique for blackbox optimisation from the crs package (a C implementation), which is called via the snomadr function. The method works for straight numerical optimisation, but errors when categorical features are included. The help for categorical optimisation is not very well documented, so I am struggling to see where I am going wrong. Reproducible code below:
library(crs)
library(randomForest)
Illustrating this on randomForest & the iris dataset.
Creating the randomForest model (leaving the last row out as starting points for the optimizer)
rfIris <- randomForest(x=iris[-150,-c(1)], y=unlist(iris[-150,1]))
The objective function (functions we want to optimize)
objFn <- function(x0, model){
    preds <- predict(object = model, newdata = x0)
    as.numeric(preds)
}
Test to see if the objective function works (should return ~6.37)
objOut <- objFn(x0=unlist(iris[150,-c(1)]),model = rfIris)
Creating initial conditions, options list, and upper/lower bounds for Nomad
x0 <- iris[150,-c(1)]
x0 <- unlist(x0)
options <- list("MAX_BB_EVAL"=10000,
"MIN_MESH_SIZE"=0.001,
"INITIAL_MESH_SIZE"=1,
"MIN_POLL_SIZE"=0.001,
"NEIGHBORS_EXE" = c(1,2,3),
"EXTENDED_POLL_ENABLED" = 'yes',
"EXTENDED_POLL_TRIGGER" = 'r0.01',
"VNS_SEARCH" = '1')
up <- c(10,10,10,10)
low <- c(0,0,0,0)
Calling the optimizer
opt <- snomadr(eval.f = objFn, n = 4, bbin = c(0,0,0,2), bbout = 0, x0 = x0, model = rfIris,
               opts = options, ub = up, lb = low)
and I get an error about the NEIGHBORS_EXE parameter in the options list. It seems as if I need to supply NEIGHBORS_EXE with a file corresponding to a set of 'extended poll' coordinates; however, it is not clear what exactly these are.
The method works by setting "EXTENDED_POLL_ENABLED" = 'no' in the options list, as it then ignores the categorical variables and defaults to numerical optimisation, but this is not what I want.
I also managed to pull up some additional information for NEIGHBORS_EXE using
snomadr(information=list("help"="-h NEIGHBORS_EXE"))
and again, I do not understand what this neighbours executable is meant to be.
Any help would be much appreciated!
This is the response from Zhenghua, who coded the R interface:
The issue is that the parameter "NEIGHBORS_EXE" was not configured properly. You need to prepare an executable file that defines the neighbours, put that executable in the folder where R is called, and then set the parameter "NEIGHBORS_EXE" to the executable's file name.
You can contact us at nomad@gerad.ca if you wish to continue the discussion.
About the NEIGHBORS_EXE parameter, you can refer to section 7.1 of the NOMAD user guide:
https://www.gerad.ca/nomad/Downloads/user_guide.pdf

Estimate parameters of Frechet distribution using mmedist or fitdist(with mme) error

I'm relatively new to R and I would appreciate it if you could take a look at the following code. I'm trying to estimate the shape parameter of the Frechet distribution (or inverse Weibull) using mmedist (I also tried fitdist, which calls mmedist), but I get the following error:
Error in mmedist(data, distname, start = start, fix.arg = fix.arg, ...) :
the empirical moment function must be defined.
The code that I use is the below:
require(actuar)
library(fitdistrplus)
library(MASS)
#values
n=100
scale = 1
shape=3
# simulate a sample
data_fre = rinvweibull(n, shape, scale)
memp=minvweibull(c(1,2), shape=3, rate=1, scale=1)
# estimating the parameters
para_lm = mmedist(data_fre,"invweibull",start=c(shape=3,scale=1),order=c(1,2),memp = "memp")
Please note that I tried changing the code many times to see if my mistake was in the syntax, but I always get the same error.
I'm aware of the example in the documentation. I've tried that as well, but with no luck. Please note that for the method to work, the order of the moment must be smaller than the shape parameter (i.e. shape).
The example is the following:
require(actuar)
#simulate a sample
x4 <- rpareto(1000, 6, 2)
#empirical raw moment
memp <- function(x, order)
    ifelse(order == 1, mean(x), sum(x^order)/length(x))
#fit
mmedist(x4, "pareto", order=c(1, 2), memp="memp",
start=c(shape=10, scale=10), lower=1, upper=Inf)
Thank you in advance for any help.
You will need to make non-trivial changes to the source of mmedist -- I recommend that you copy out the code, and make your own function foo_mmedist.
The first change you need to make is on line 94 of mmedist:
if (!exists("memp", mode = "function"))
That line checks whether "memp" is a function that exists, as opposed to whether the argument that you have actually passed exists as a function.
if (!exists(as.character(expression(memp)), mode = "function"))
The second, as I have already noted, relates to the fact that the optim routine actually calls funobj which calls DIFF2, which calls (see line 112) the user-supplied memp function, minvweibull in your case with two arguments -- obs, which resolves to data and order, but since minvweibull does not take data as the first argument, this fails.
This is expected, as the help page tells you:
memp: A function implementing empirical moments, raw or centered, but it has to be consistent with the distr argument. This function must have two arguments: as the first, the numeric vector of the data, and as the second, the order of the moment returned by the function.
How can you fix this? Pass the function moment from the moments package. Here is the complete code (assuming that you have made the change above and created a new function called foo_mmedist):
# values
n = 100
scale = 1
shape = 3
# simulate a sample
data_fre = rinvweibull(n, shape, scale)
# estimating the parameters
para_lm = foo_mmedist(data_fre, "invweibull",
                      start = c(shape=5, scale=2), order = c(1, 2), memp = moment)
You can check that optimization has occurred as expected:
> para_lm$estimate
shape scale
2.490816 1.004128
Note, however, that this actually reduces to a crude way of doing overdetermined method of moments, and I am not sure that it is theoretically appropriate.
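For intuition about what matching order=c(1, 2) means here, the underlying computation can be sketched directly. This is a minimal Python illustration of method of moments for the inverse Weibull (Frechet), not the fitdistrplus code; it relies on the standard moment formula E[X^k] = scale^k * Gamma(1 - k/shape), valid for shape > k:
import numpy as np
from scipy.special import gamma
from scipy.optimize import minimize

rng = np.random.default_rng(0)
shape_true, scale_true = 3.0, 1.0
# Inverse-Weibull sample via inverse transform: X = scale * (-ln U)^(-1/shape)
data = scale_true * (-np.log(rng.uniform(size=100))) ** (-1.0 / shape_true)

def raw_moment_invweibull(k, shape, scale):
    # E[X^k] = scale^k * Gamma(1 - k/shape), valid for shape > k
    return scale**k * gamma(1.0 - k / shape)

emp = [np.mean(data), np.mean(data**2)]  # empirical raw moments of order 1 and 2

def moment_gap(params):
    shape, scale = params
    if shape <= 2.0 or scale <= 0.0:  # need shape > 2 for the 2nd moment to exist
        return np.inf
    theo = [raw_moment_invweibull(k, shape, scale) for k in (1, 2)]
    return sum((t - e) ** 2 for t, e in zip(theo, emp))

res = minimize(moment_gap, x0=[5.0, 2.0], method="Nelder-Mead")
print(res.x)  # estimated (shape, scale)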
