Find a variable value with a Solver-like function in R

I'm trying to set up a "Solver" function to find the normal depth of a channel (yn). The parameters are given in the code below, where I can compute one side of the equation; all other parameters are functions of yn. I need to find the yn that solves the equation A*R^(2/3) = nQSo.
So=0.001
n=0.013
Q=30
B=10
nQSo=(n*Q)/(So^(1/2))
A=B*yn
P=B+2*yn
R=A/P
A*(R^(2/3)) = nQSo

You can take a look at optimize():
So=0.001
n=0.013
Q=30
B=10
nQSo=(n*Q)/(So^(1/2))
error = function(yn, nQSo){
  # absolute difference between the two sides of A*R^(2/3) = nQSo
  A = B*yn
  P = B + 2*yn
  R = A/P
  return(abs(A*(R^(2/3)) - nQSo))
}
optimize(error, interval = c(0, 2), nQSo = nQSo)
The result, as you can see, is yn = 1.239066.
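Since the residual changes sign at the solution, a root finder is an alternative to minimizing the absolute error. Below is a minimal sketch of my own (not from the original answer) using the same parameters; the name residual is just illustrative:

# Sketch: solve A*R^(2/3) - nQSo = 0 directly with uniroot
So <- 0.001; n <- 0.013; Q <- 30; B <- 10
nQSo <- (n * Q) / sqrt(So)

residual <- function(yn) {
  A <- B * yn
  P <- B + 2 * yn
  R <- A / P
  A * R^(2/3) - nQSo
}

uniroot(residual, interval = c(1e-6, 2))$root  # should agree with optimize(), about 1.24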


How can I optimize the expected value of a function in R?

I have derived a survival function for a system of components (ignore the details of how this system is set up) and I am trying to maximize its expected value, or more specifically, to maximize the expected value of the function:
surv_func = function(x,mu) {(exp(-(x/(mu))^(1/3))*((1-exp(-(4/3)*x^(3/2)))+exp(-(-(4/3)*x^(3/2)))))*exp(-(x/(3-mu))^(1/3))}
and I am supposed (since the pdf including my tasks gives a hint about it) to use the function
optimize()
and the expected value for a function can be computed with
# Computes the expected value of the function "function"
E <- integrate(function, 0, Inf)
but my function depends on x and mu. The expected value could (obviously) be computed if the integrand had no mu and instead depended only on x. For those interested, the mu comes from the fact that one of the components has a Weibull distribution with parameters (1/3, mu), and the 3-mu comes from the fact that another component has a Weibull distribution with parameters (1/3, lambda). In the task there is a constraint mu + lambda = 3, so I thought that substituting the lambda parameter in the second Weibull distribution with lambda = 3 - mu and maximizing this problem would yield not only mu but also lambda.
If I try, just for the sake of learning about R, to compute the expected value using the code below (in the console window), it just gives me the following:
> E <- integrate(surv_func,0,Inf)
Error in (function (x, mu) : argument "mu" is missing, with no default
I am new to R and seem to be a little bit "slow" at learning. How can I approach this problem?
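No answer was given in the thread; the following is a minimal sketch of one possible approach (my own, hedged). integrate() needs an integrand of x only, so fix mu first, then let optimize() search over mu on (0, 3). This assumes surv_func is defined as above and that the integral is actually finite for the mu values tried; expected_value is just an illustrative name:

# Sketch (assumes surv_func(x, mu) is defined as in the question and is integrable)
expected_value <- function(mu) {
  integrate(function(x) surv_func(x, mu), lower = 0, upper = Inf)$value
}

# Search for the mu in (0, 3) that maximizes the expected value
optimize(expected_value, interval = c(0.01, 2.99), maximum = TRUE)

For a single fixed value, integrate(surv_func, 0, Inf, mu = 0.5) also works, because extra arguments are passed on to the integrand through integrate's "..." argument.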

zfit straight line fitting for 2 dim dataset

I would like to fit a 2-dim plot with a straight line (a*x+b) using zfit, as in the following figure.
That is very easy to do with the probfit package, but it has been deprecated by scikit-hep. https://nbviewer.jupyter.org/github/scikit-hep/probfit/blob/master/tutorial/tutorial.ipynb
How can I fit such 2-dim plots with an arbitrary function?
I've checked the zfit examples, but they seem to assume some distribution (histogram), so zfit expects a dataset like a 1-d array, and I couldn't work out how to pass 2-d data to zfit.
There is no direct way in zfit currently to implement this out-of-the-box (with one line), since a corresponding loss simply has not been added yet.
However, the SimpleLoss (zfit.loss.SimpleLoss) allows you to construct any loss that you can think of (have a look at the example in the docstring as well). In your case, this would look something like this:
x = your_data
y = your_targets  # y-value
obs = zfit.Space('x', (lower, upper))

param1 = zfit.Parameter(...)
param2 = zfit.Parameter(...)
...
model = Func(...)  # a function is the way to go here

data = zfit.Data.from_numpy(array=x, obs=obs)

def mse():
    prediction = model.func(data)
    value = tf.reduce_mean((prediction - y) ** 2)  # or whatever you want to have
    return value

loss = zfit.loss.SimpleLoss(mse, [param1, param2])
# etc.
On another note, it would be a good idea to add such a loss. If you're interested in contributing, I recommend getting in contact with the authors; they will gladly help you and guide you through it.
UPDATE
The loss function itself presumably consists of three to four things: x, y, a model, and maybe an uncertainty on y. The chi2 loss looks like this:
def chi2():
    y_pred = model.func(x)
    return tf.reduce_sum(((y_pred - y) / y_error) ** 2)

loss = zfit.loss.SimpleLoss(chi2, model.get_params())
That's all, four lines of code. x is a zfit.Data object, and model is in this case a Func.
Does that work?

R stats::step function with forward direction param is not optimizing the LR model (AIC)

I've used AIC and the step function for variable selection before, but for some reason I am not able to get it to work here.
library(ISLR)
d = data("Caravan")
train_data = Caravan[-c(1:500),]
m0 <- glm(Purchase ~ 1, data = train_data, family = "binomial")
stats::step(m0, direction = "forward", trace = 1 )
Note: I tried the stepAIC function and tried passing the scope as scope = Purchase ~ ., but neither change solves the issue.
The output of the step function is a model that is the same as the base model (m0).
The step function uses update internally. The . has a different meaning in the update function than in lm: in update it indicates that you want to KEEP that part of the formula as it was, rather than INCLUDE ALL THE VARIABLES as it does in lm. Thus, if your model is m <- lm(y ~ x), then update(m, log(.) ~ .) simply means "change the left-hand side to its log, i.e. log(y), while keeping the right-hand side as it is, i.e. x". The period does not pull in any variables other than the ones already in the model.
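A tiny illustration of that point (toy data of my own, not from the thread):

set.seed(1)
x <- rnorm(20)
y <- exp(1 + 2 * x + rnorm(20, sd = 0.1))

m <- lm(y ~ x)
update(m, log(.) ~ .)  # refits log(y) ~ x; the dot keeps each side as it already was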
WHAT YOU SHOULD DO:
scopef <- reformulate(grep("Purchase", names(Caravan), value = TRUE, invert = TRUE), "Purchase")
step(m0, scope = scopef, direction = "forward")
This is how I solved the issue. As Onyambu mentioned in his reply, the dot doesn't work in step/update the way it does in lm. Instead of concatenating the 84 predictors manually, I used the paste function with collapse = "+".
glmnet( formula(paste0("Y~", paste(names(Caravan)[1:85], collapse="+"))),
....)
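The same paste trick can also be fed to step as the scope. A sketch of my own (scope_formula is an illustrative name; the column indices assume the Caravan data, where Purchase is the last column):

# Build "Purchase ~ <all predictors>" and use it as the forward-selection scope
scope_formula <- formula(paste0("Purchase ~ ",
                                paste(names(Caravan)[1:85], collapse = "+")))
step(m0, scope = scope_formula, direction = "forward", trace = 1)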

Estimate parameters of Frechet distribution using mmedist or fitdist(with mme) error

I'm relatively new to R and I would appreciate it if you could take a look at the following code. I'm trying to estimate the shape parameter of the Frechet distribution (or inverse Weibull) using mmedist (I also tried fitdist, which calls mmedist), but I get the following error:
Error in mmedist(data, distname, start = start, fix.arg = fix.arg, ...) :
the empirical moment function must be defined.
The code that I use is below:
require(actuar)
library(fitdistrplus)
library(MASS)
#values
n=100
scale = 1
shape=3
# simulate a sample
data_fre = rinvweibull(n, shape, scale)
memp=minvweibull(c(1,2), shape=3, rate=1, scale=1)
# estimating the parameters
para_lm = mmedist(data_fre,"invweibull",start=c(shape=3,scale=1),order=c(1,2),memp = "memp")
Please note that I tried changing the code many times in order to see if my mistake was in the syntax, but I always get the same error.
I'm aware of the example in the documentation. I've tried that as well, but with no luck. Please note that in order for the method to work, the order of the moment must be smaller than the shape parameter (i.e. shape).
The example is the following:
require(actuar)
#simulate a sample
x4 <- rpareto(1000, 6, 2)
#empirical raw moment
memp <- function(x, order)
  ifelse(order == 1, mean(x), sum(x^order)/length(x))
#fit
mmedist(x4, "pareto", order=c(1, 2), memp="memp",
start=c(shape=10, scale=10), lower=1, upper=Inf)
Thank you in advance for any help.
You will need to make non-trivial changes to the source of mmedist -- I recommend that you copy out the code, and make your own function foo_mmedist.
The first change you need to make is on line 94 of mmedist:
if (!exists("memp", mode = "function"))
That line checks whether "memp" is a function that exists, as opposed to whether the argument that you actually passed exists as a function. Change it to:
if (!exists(as.character(expression(memp)), mode = "function"))
The second change, as I have already noted, relates to the fact that the optim routine actually calls funobj, which calls DIFF2, which in turn (see line 112) calls the user-supplied memp function (minvweibull in your case) with two arguments: obs, which resolves to data, and order. Since minvweibull does not take the data as its first argument, this fails.
This is expected, as the help page tells you:
memp A function implementing empirical moments, raw or centered but
has to be consistent with distr argument. This function must have
two arguments : as a first one the numeric vector of the data and as a
second the order of the moment returned by the function.
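For reference, an empirical raw-moment function with that signature can be as short as the sketch below (my own, analogous to the memp in the documentation example above; memp_raw is an illustrative name):

# Data vector first, moment order second, as mmedist expects
memp_raw <- function(x, order) mean(x^order)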
How can you fix this? Pass the function moment from the moments package. Here is complete code (assuming that you have made the change above, and created a new function called foo_mmedist):
# values
n = 100
scale = 1
shape = 3
# simulate a sample
data_fre = rinvweibull(n, shape, scale)
# estimating the parameters
para_lm = foo_mmedist(data_fre, "invweibull",
                      start = c(shape=5, scale=2), order = c(1, 2), memp = moment)
You can check that optimization has occurred as expected:
> para_lm$estimate
shape scale
2.490816 1.004128
Note, however, that this actually reduces to a crude way of doing overdetermined method of moments, and I am not sure that this is theoretically appropriate.

What are x1_step1_xoffset, x1_step1_gain and x1_step1_ymin in a neural network generated by genFunction in Matlab?

I'm working with Matlab's Neural Network toolbox and I have generated a neural network function with genFunction.
I would like to know what the mapminmax_apply function does, what these variables are used for, and their meaning in the neural network:
% Input 1
x1_step1_xoffset = [0.151979470539401;-89.4008362047824;0.387909026651698;0.201508462422352];
x1_step1_gain = [2.67439342164766;0.0112020512930696;3.56055585104964;4.09080417195814];
x1_step1_ymin = -1;
Here is the mapminmax_apply function:
% Map Minimum and Maximum Input Processing Function
function y = mapminmax_apply(x,settings_gain,settings_xoffset,settings_ymin)
y = bsxfun(@minus,x,settings_xoffset);
y = bsxfun(@times,y,settings_gain);
y = bsxfun(@plus,y,settings_ymin);
end
And here is the call to the function with the above variables:
% Input 1
Xp1 = mapminmax_apply(X{1,ts},x1_step1_gain,x1_step1_xoffset,x1_step1_ymin);
I think:
the mapminmax function can also return the settings it uses (amongst others, offset, gain and ymin). For some reason, in the code spat out by the NN function, these settings are given at the beginning of the file, under Input 1, in the form of x1_step1_xoffset, etc.
mapminmax('apply',X,PS) will apply the settings in PS to the mapminmax algorithm.
So, I think the code generated here has more steps than you necessarily need. You could get rid of the Input 1 steps and just use a simple xp1 = mapminmax(x1') instead of the mapminmax_apply call.
Cheers
The Matlab NN toolbox automatically normalizes the features of the dataset.
The functions mapminmax_apply and mapminmax_reverse are related to normalizing the features.
The function mapminmax_apply converts/normalizes the input range to [-1, 1].
Since the output also comes out as a normalized vector/value (between -1 and 1), it needs to be reverse-normalized using the function mapminmax_reverse.
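Concretely, reading the generated code above (my interpretation, not from the original answers): each input column is transformed as y = (x - xoffset) .* gain + ymin, where xoffset is the column minimum seen during training, gain is 2 ./ (xmax - xmin), and ymin is -1, so every training feature is mapped onto the range [-1, 1].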
Cheers
