Mode of a density function using optimize in R

I want to find the mode (x-value) of a univariate density function using R's optimize function.
E.g. for a normal density f(x) ~ N(3, 1), the mode should be the mean, i.e. x = 3.
I tried the following:
# Define the function
g <- function(x) dnorm(x = x, mean = 3, sd = 1)
Dvec <- c(-1000, 1000)
# First get the gradient of the function (grad() is from the numDeriv package)
library(numDeriv)
gradfun <- function(x) {grad(g, x)}
# Find the maximum value
x_mode <- optimize(f = g, interval = Dvec, maximum = TRUE)
x_mode
This gives the incorrect value of the mode as:
$maximum
[1] 999.9999
$objective
[1] 0
Which is incorrect, i.e. it returns the upper end of the (-1000, 1000) interval rather than x = 3.
Could anyone please help me fix the optimisation code?
It will be used with more generic functions of x once this simple test case works.

I would use optim for this, which avoids having to specify the interval. You can tailor the starting value by taking the maximum of the function over the originally guessed interval:
guessedInterval <- min(Dvec):max(Dvec)
superStarSeed <- guessedInterval[which.max(g(guessedInterval))]
optim(par = superStarSeed, fn = function(y) -g(y))
#$par
#[1] 3
#$value
#[1] -0.3989423
#$counts
#function gradient
# 24 NA
#$convergence
#[1] 0
#$message
#NULL
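For what it's worth, the original optimize call fails because dnorm(x, mean = 3, sd = 1) underflows to exactly 0 over almost all of (-1000, 1000), so the golden-section search sees a flat function and drifts to an endpoint. A minimal sketch (reusing the g defined above) showing that a sensibly sized interval also works:
optimize(f = g, interval = c(-10, 10), maximum = TRUE)
# $maximum should come out at (very nearly) 3,
# with $objective near dnorm(3, 3, 1) = 0.3989423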

Related

Estimate a global load reduction using optimization

I am trying to find a global reduction (global_reduct) for a time series of monthly loads.
The goal is to make sure that only 50% (or any other target) of all loads exceed a specific reference load.
global_reduct <- c(50) ## initial value
load_ref <- 450.5 ## reference load
tobject <- 50 ## target: 50% of loads exceeding the reference
A data example, which is a subset of 20+ years of data:
df <- data.frame(
  Date = seq(as.Date('2010-01-01'), as.Date('2012-04-01'), by='months'),
  load = c(1.496169954, 1.29147009, 1.964195241, 1.14352707, 1.319144304,
           0.773288093, 0.65175612, 0.685340958, 0.416934849,
           0.769853258, 1.104639594, 0.92213209, 1.685588986,
           1.972510175, 2.6882446, 2.153314503, 1.324735759,
           1.027755411, 0.610207197, 0.674642831, 0.721971375,
           1.13233884, 0.739325423, 0.90031817, 1.366597449,
           1.928098735, 1.216538229, 1.514353244)
)
In this case the reduction would be around 62% at a target of 50% of the reference load.
I tried to set up a function that can be called by optim to estimate the new reduct value.
optfuc <- function(reduct, ttarget=50){
  reduct_eq <- df$load * (1 - (reduct/100))
  tt_exceed <- ifelse((reduct_eq * 1000) > load_ref, 1, 0)
  ave_ref <- sum(tt_exceed)/length(tt_exceed)*100 - ttarget
  # ave_ref in this case should be = ttarget
  # ave_ref
  reduct
}
optim(c(30), optfuc, method ="L-BFGS-B", lower=0, upper=100)
How can I get the correct new reduct value?
Is there a different package that I can use?
It might be better to use proportions, i.e. values within [0, 1], instead of percentages.
Then minimizing the absolute difference between the exceedance proportion of the reduced load load - load*reduct and the target tobject over the interval [0, 1] gives the desired minimum, i.e. the reduction factor.
I use optimize directly here.
load_ref <- mean(df$load) ## for example
tobject <- .25 ## 25%
optfuc <- \(reduct, ref=load_ref, tol=tobject, data=df) {
  load1 <- with(data, load - load*reduct)
  abs(tol - mean(load1 > ref))
}
(o <- optimize(optfuc, c(0, 1)))
# $minimum
# [1] 0.1935267
#
# $objective
# [1] 0
reduct <- o$minimum
cat(sprintf('reduction: %s%% at target of %s%%',
            formatC(reduct*100, digits=2),
            formatC(tobject*100, digits=2)))
# reduction: 19% at target of 25%
Check:
(with(df, load - load*reduct) > load_ref) |> table() |> proportions()
# FALSE TRUE
# 0.75 0.25
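The same idea carries over to the question's original setup, where the loads are scaled by 1000 and compared against load_ref = 450.5 with a 50% target; a sketch under those assumptions:
load_ref <- 450.5
tobject <- .5
optfuc2 <- \(reduct, ref=load_ref, tol=tobject, data=df) {
  load1 <- with(data, load*1000*(1 - reduct))  # the question works on loads*1000
  abs(tol - mean(load1 > ref))
}
optimize(optfuc2, c(0, 1))$minimum
# should land around 0.60, i.e. roughly the ~62% reduction the question estimates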

How do I optimize a function in R for all values of y?

So I'm trying to plot a function, and ultimately I landed on the easiest option being to supply alpha values and optimize velocity for the x_position values. The problem is, I'm doing something wrong with the optimization.
Here's what I've got thus far:
y <- seq(0, 55, 0.1)
x_position <- function(alpha, velo) {
  velo*cos(alpha)*((velo*sin(alpha)+sqrt((velo^2*sin(alpha))^2+2*16.5*9.81)/9.81))
}
x <- optimize(x_position, c(1,1000), alpha=y, maximum=TRUE)$objective
Basically, I'm trying to make "y" a vector for the angle and "x" a vector of maximum function value for each angle value so that I could then plot the x,y vector for the function. The problem is, I can't get the x-vector right. For whatever reason it just keeps telling me "invalid function value in 'optimize'". Changing the optimization interval doesn't seem to accomplish anything and I'm out of ideas. The function seems to work just fine when I tested it with e.g. alpha 55 and velocity 10.
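optimize() expects the objective to return a single value for each candidate point; arguments passed through ... (alpha here) are held fixed, so handing it the whole vector y makes the function return a vector and triggers "invalid function value in 'optimize'". Putting velo first and optimizing one angle at a time works: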
y <- seq(0, 55, 0.1)
x_position <- function(velo, alpha){
  velo*cos(alpha)*((velo*sin(alpha)+sqrt((velo^2*sin(alpha))^2+2*16.5*9.81)/9.81))
}
optimize(f = x_position, interval = c(1,1000), maximum=TRUE, alpha = y[1])
#> $maximum
#> [1] 999.9999
#>
#> $objective
#> [1] 1834.098
a <- sapply(y, function(y) optimize(f = x_position, interval = c(1,1000),
                                    maximum = TRUE, alpha = y)$objective)
head(a)
#> [1] 1834.098 10225190.493 20042734.667 29061238.316 36921162.118
#> [6] 43309155.705
Created on 2021-09-29 by the reprex package (v2.0.1)
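Note that for the angles shown, the maximisation hits the upper end of the velocity interval (999.9999), because x_position keeps growing with velo there; the choice of interval therefore drives the result. To get the plot the asker was after (assuming y and a from above):
plot(y, a, type = "l", xlab = "alpha", ylab = "max x_position")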

Gamma Distribution in R (qgamma)

This may be a very stupid question, but does anyone know why I am not getting the second result equal to the mean (100)?
#beta=4, alpha=5, mean=20
qgamma(0.5, 5, 1/4)
# 18.68364
#beta=2500, alpha=0.04, mean=100
qgamma(0.5,0.04,1/2500)
# 0.00004320412
It is because you are using the quantile function: qgamma(0.5, ...) gives the median, not the mean you are expecting. (Note also that the third positional argument of qgamma is the rate, i.e. 1/scale, so your calls correspond to scale 4 and scale 2500 as intended.)
See the example below:
x <- rgamma(50000, shape = 0.04, scale = 2500)
mean(x)
# [1] 98.82911
median(x)
# [1] 3.700012e-05
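For intuition: a gamma distribution with shape 0.04 is extremely right-skewed, so its median sits far below its mean (which equals shape * scale). A quick check, assuming the same parameters:
0.04 * 2500                              # theoretical mean = 100
pgamma(100, shape = 0.04, scale = 2500)  # P(X <= mean) is well above 0.5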

A strange error from an algorithm

I have written this function that numerically computes the MLE of the location parameter of a Cauchy distribution using the Newton-Raphson algorithm:
mlec <- function(x, theta0=median(x), numstp=100, eps=0.01){
  numfin <- numstp
  ic <- 0
  istop <- 0
  while(istop==0){
    ic <- ic+1
    ltheta <- -2*sum((x-theta0)/(1+(x-theta0)^2))
    lprimetheta <- -2*(sum(2*(x-theta0)^2/(1+(x-theta0)^2)^2 - 1/(1+(x-theta0)^2)^2))
    theta1 <- theta0-(ltheta/lprimetheta)
    check <- abs((theta1-theta0)/theta1)
    if(check < eps) { istop <- 1 }
    theta0 <- theta1
  }
  list(theta1=theta1, check=check, realnumstps=ic)
}
The goal is then to generate observations from a Cauchy distribution with location parameter 2 and see how the MLE performs. The problem is that while for some samples the MLE runs wonderfully, for others I get the strange error
Error in if (check < eps) { : missing value where TRUE/FALSE needed
What is going on here? I have defined what "check" is so that shouldn't happen.
Thank you.
I've added a little bit of instrumentation (see the cat() statement in the middle) and an option to use a corrected second-derivative expression (fixed=TRUE: see below):
mlec <- function(x, theta0=median(x), numstp=100, eps=0.01,
                 debug=TRUE, fixed=FALSE){
  numfin <- numstp
  ic <- 0
  istop <- 0
  while(istop==0){
    ic <- ic+1
    ltheta <- -2*sum((x-theta0)/(1+(x-theta0)^2))
    lprimetheta <- -2*(sum(2*(x-theta0)^2/(1+(x-theta0)^2)^2 - 1/(1+(x-theta0)^2)^2))
    if (!fixed) {
      theta1 <- theta0-(ltheta/lprimetheta)
    } else theta1 <- theta0-ltheta/ff(theta0)
    check <- abs((theta1-theta0)/theta1)
    if (debug) cat(ic,ltheta,lprimetheta,theta0,theta1,check,"\n")
    if(check < eps) { istop <- 1 }
    theta0 <- theta1
  }
  list(theta1=theta1, check=check, realnumstps=ic)
}
set.seed(1)
x <- rcauchy(100,2)
mlec(x)
Here's the tail end of the output:
## ic ltheta lprimetheta theta0 theta1 check
## 427 -4.48838e-75 -2.014555e-151 -4.455951e+76 -6.683926e+76 0.3333333
## 428 -2.992253e-75 -8.953579e-152 -6.683926e+76 -1.002589e+77 0.3333333
## 429 -1.994835e-75 -3.979368e-152 -1.002589e+77 -1.503883e+77 0.3333333
## 430 -1.32989e-75 0 -1.503883e+77 -Inf NaN
Now, why is it happening? Either you've got a bug somewhere, or the algorithm is unstable. tl;dr it turns out the answer is actually "both": your second-derivative expression seems wrong, and even if it were correct, the N-R algorithm is extremely unstable for this problem.
Here are your derivative and second-derivative functions (I'm wrapping them with Vectorize() for convenience so I can evaluate these at multiple theta values simultaneously):
lthetafun <- Vectorize(function(theta) {
  -2*sum((x-theta)/(1+(x-theta)^2))
})
lprimethetafun <- Vectorize(function(theta) {
  -2*(sum(2*(x-theta)^2/(1+(x-theta)^2)^2 - 1/(1+(x-theta)^2)^2))
})
A negative log-likelihood function based on the built-in dcauchy function:
thetafun <- Vectorize(function(theta) -sum(dcauchy(x,theta,log=TRUE)))
Check differentiation (at an arbitrarily chosen point):
library("numDeriv")
all.equal(grad(thetafun,2),lthetafun(2)) ## TRUE
Check second derivative:
hessian(thetafun,2) ## 36.13297
lprimethetafun(2) ## 8.609859: ???
I think your second-derivative expression is wrong.
The following alternative second-derivative function is based on lazily cheating with Wolfram Alpha, differentiating your gradient function (it matches the finite-difference approximation):
ff <- Vectorize(function(theta)
2*sum(((x-theta)^2-1)/((x-theta)^2+1)^2))
ff(2) ## matches hessian() above.
But it looks like you may have further problems.
The negative log-likelihood surface looks OK:
curve(thetafun, from=-10,to=10,n=501)
But trouble is on the horizon:
curve(lthetafun, from=-10,to=10, n=501)
This looks irregular, and going up one level to the second derivative shows that it is:
curve(ff, from=-10, to=10, n=501)
Here's the curve of N-R updates:
ff2 <- function(x) x-lthetafun(x)/ff(x)
curve(ff2, from=-10, to=10, n=501,ylim=c(-100,100))
Yikes! This indicates why the Newton-Raphson method could go wrong unless you start close enough to the minimum (any time the likelihood surface has an inflection point, the N-R updating curve will have a pole ...). Further analysis of the problem would probably tell you why the second derivative of the Cauchy is so scary.
If you just want to find the MLE, you can use a more robust 1-D method:
library("bbmle")
mle2(x~dcauchy(location=m),
data=data.frame(x),
start=list(m=median(x)),
method="Brent",
lower=-100,upper=100)
##
## Call:
## mle2(minuslogl = x ~ dcauchy(location = m), start = list(m = median(x)),
## method = "Brent", data = data.frame(x), lower = -100, upper = 100)
##
## Coefficients:
## m
## 1.90179
##
## Log-likelihood: -262.96
##
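As a cross-check that needs no extra packages, optimize applied directly to the negative log-likelihood (the thetafun defined above) should agree with mle2; a minimal sketch:
optimize(thetafun, interval = c(-100, 100))
# $minimum should be close to the mle2 estimate (~1.90)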
If you start close enough, N-R seems to work OK:
mlec(x,1.85,debug=FALSE,fixed=TRUE,eps=0.0001)
## $theta1
## [1] 1.901592
##
## $check
## [1] 5.214763e-05
##
## $realnumstps
## [1] 37079

Create a function with multiple parameters in R

I want to compute the following functions: f(n, x) = n * g(x) * (1 - G(x))^(n-1), and h(n) = the integral of x * f(n, x) over [0, Inf).
Here, g(x) is the density function of a distribution and G(x) its cumulative distribution function. I want to compute these functions for several distributions. In addition, I use the library fitdistrplus.
To create g, I use the function do.call this way:
g<-function(x) {do.call(paste("d",i,sep=""),c(list(x=x),fti$estimate))}
fti$estimate contains the parameters of the distribution i.
G(x) is the cumulative distribution function, computed this way:
G<-function(x) {do.call(paste("p",i,sep=""),c(list(q=x),fti$estimate))}
I compute f(n, x) this way:
f<function(n,x) {n*g(x)*(1-G(x))^(n-1)
Finally, I compute h(n) this way:
h<- function(n) {integrate(function(x) {x*f(n,x)},0,Inf)}
However, I can't plot these functions; I get the following errors:
1: In n*g(x):
Longer object length is not a multiple of shorter object length
2: In (1-G(x))^(n-1):
Longer object length is not a multiple of shorter object length
3: In x*f(n,x) :
Longer object length is not a multiple of shorter object length
Besides, if I just want to plot f(n, x), I get this error:
Error in list(x=x) : 'x' is missing
The minimal snippet I have is the following:
#i can be "exp", "lnorm", "norm", etc.
for (i in functionsName) {
  png(paste(fileBase,"_",i,"_","graphics.png",sep=""))
  plot.new()
  fti <- fitdist(data, i)
  plotdist(data, i, para=as.list(fti[[1]]))
  #fti is a data table or data frame
  #fti$estimate looks like this:
  # meanlog sdlog
  #8.475449 1.204958
  #g
  pdf <- function(x) {do.call(paste("d",i,sep=""), c(list(x=x),fti$estimate))}
  #G
  cdf <- function(x) do.call(paste("p",i,sep=""), c(list(q=x),fti$estimate))
  #f
  minLaw <- function(n,x) {n*pdf(x)*(1-cdf(x))^(n-1)}
  #h
  minExpectedValue <- function(n) {integrate(function(x) {x*minLaw(n,x)},0,Inf)}
  #these two lines give an error
  plot(minExpectedValue)
  plot(minLaw)
  dev.off()
}
I had to do some reverse engineering to figure out your d1, q1, etc. calls, but I think this is how you do it. Perhaps the original problem lies in a function call like f(n=2:3, x=1:9); in such a call, n should be a single value, not a vector of values.
Even if the length of x were a multiple of the length of n, the output would most likely not be what you really wanted.
If you give n a vector form, you may end up with recycled (false) output:
> print(data.frame(n=2:3, x=1:6))
  n x
1 2 1
2 3 2
3 2 3
4 3 4
5 2 5
6 3 6
where x would be evaluated with n=2 at point x=1, n=3 at point x=2, etc. What you really would have wanted is something along the lines of:
> print(expand.grid(x=1:5, n=2:3))
   x n
1  1 2
2  2 2
3  3 2
4  4 2
5  5 2
6  1 3
7  2 3
8  3 3
9  4 3
10 5 3
You could do this by calling separately for each n value:
lapply(2:3, FUN=function(n) (f(n, x=1:5)))
#[[1]]
#[1] 0.0004981910 0.0006066275 0.0007328627 0.0008786344 0.0010456478
#
#[[2]]
#[1] 0.0007464956 0.0009087272 0.0010974595 0.0013152213 0.0015644676
Did you use the same fti for all the distribution fits, even though it should have been different for each? Or does the i in fti refer to index i, i.e. was it a list of fits of the form ft[[i]]?
Below is a wrapper function, which is called separately for each n-value (and distribution i):
wrapper <- function(i, x, n, fti){
  # As provided by the OP
  g <- function(x) {do.call(paste("d",i,sep=""), c(list(x=x),fti$estimate))}
  G <- function(x) {do.call(paste("p",i,sep=""), c(list(q=x),fti$estimate))}
  # does the i in fti refer to the fit of the i:th distribution, i.e. should fti be the i:th element of a list ft of fits?
  f <- function(n,x) {n*g(x)*(1-G(x))^(n-1)}
  # (the question's f was missing a '-' and a '}')
  h <- function(n) {integrate(function(x) {x*f(n,x)},0,Inf)}
  list(gres = g(x), Gres = G(x), fres = f(n,x), hres = h(n))
}
# Example data
require("fitdistrplus")
data(groundbeef)
serving <- groundbeef$serving
# Gumbel distribution
d1 <- function(x, a, b) 1/b*exp((a-x)/b)*exp(-exp((a-x)/b))
p1 <- function(q, a, b) exp(-exp((a-q)/b))
q1 <- function(p, a, b) a-b*log(-log(p))
fti1 <- fitdist(serving, "1", start=list(a=10, b=10))
#> fti1$estimate
# a b
#56.95893 29.07871
# Normal distribution
# dnorm, pnorm and qnorm are available in the default environment
d2 <- dnorm
p2 <- pnorm
q2 <- qnorm
fti2 <- fitdist(serving, "2", start=list(mean=0, sd=1))
#> fti2$estimate
# mean sd
#73.67743 35.92581
# Sequence of x-values
xs <- seq(-100, 100, by=1)
print((resultdist1n2 <- wrapper(i=1, x=xs, n=2, fti=fti1))$hres)
print((resultdist1n3 <- wrapper(i=1, x=xs, n=3, fti=fti1))$hres)
print((resultdist2n2 <- wrapper(i=2, x=xs, n=2, fti=fti2))$hres)
print((resultdist2n3 <- wrapper(i=2, x=xs, n=3, fti=fti2))$hres)
plot(xs, resultdist1n2$fres, col=1, type="l", ylim=c(0,0.025), xlab="x", ylab="f(n, x)")
points(xs, resultdist1n3$fres, col=2, type="l")
points(xs, resultdist2n2$fres, col=3, type="l")
points(xs, resultdist2n3$fres, col=4, type="l")
legend("topleft", legend=c("Gumbel (i=1) n=2", "Gumbel (i=1) n=3", "Normal (i=2) n=2", "Normal (i=2) n=3"), col=1:4, lty=1)
And the results of your desired h as found in resultdist1n2$hres etc:
h(n=2) for distribution i=1:
53.59385 with absolute error < 0.00022
h(n=3) for distribution i=1:
45.23146 with absolute error < 4.5e-05
h(n=2) for distribution i=2:
53.93748 with absolute error < 1.1e-05
h(n=3) for distribution i=2:
44.06331 with absolute error < 2e-05
EDIT: Here's how one uses lapply to call the wrapper for each n in the vector 0 <= n <= 256:
ns <- 0:256
res1 <- lapply(ns, FUN=function(nseq) wrapper(i=1, x=xs, n=nseq, fti=fti1))
par(mfrow=c(1,2))
plot.new()
plot.window(xlim=c(-100,100), ylim=c(0, 0.05))
box(); axis(1); axis(2); title(xlab="x", ylab="f(n,x)", main="f(n,x) for Gumbel (i=1), n=0:256")
for(i in 1:length(ns)) points(xs, res1[[i]]$fres, col=rainbow(257)[i], type="l")
# perform similarly for the other distributions by calling with i=2, fti=fti2
# h as a function of n for dist i=1
plot(ns, unlist(lapply(res1, FUN=function(x) x$hres$value)), col=rainbow(257), xlab="n", ylab="h(n)", main="h(n) for Gumbel (i=1), n=0:256")
I would plot each distribution i separately like this.
The problem is that the plot method for a function expects the function to be vectorised: given an argument of length N, it should return a vector of results, also of length N.
Your minExpectedValue doesn't satisfy this; it expects n to be a scalar and returns a scalar. You can quickly fix this with Vectorize. You also need to specify the name of the argument to plot over, in this case n.
minExpectedValue <- Vectorize(minExpectedValue)
plot(minExpectedValue, xname="n")
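One caveat (my assumption, not part of the answer above): minExpectedValue as written returns the full integrate object rather than a number, so it can help to extract $value before vectorising; a sketch assuming the minLaw defined earlier:
minExpectedValue <- function(n) integrate(function(x) x*minLaw(n, x), 0, Inf)$value
minExpectedValue <- Vectorize(minExpectedValue)
plot(minExpectedValue, from = 1, to = 10, xname = "n")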
