I am trying to compute this integral:

integral from -Inf to A of PHI( (PHI^(-1)(P(tau <= t)) - beta*x) / sqrt(1 - beta^2) ) * phi(x) dx

where A and beta are constants, PHI (capital) is the cumulative distribution function of the standard normal N(0,1), phi is the density of N(0,1), and P(tau <= t) = 1/2.
Here is my implementation :
integral <- function(A, beta) {
  f <- function(x) {
    # We have P(tau <= t) = 1/2
    pnorm(qnorm(1/2,0,1) - beta*x / (sqrt(1-(beta^2))), 0, 1) * (1/sqrt(2*pi)) * exp(-x^2 / 2)
  }
  integrate(f, lower = -Inf, upper = A)$value
}
I am not really sure about the qnorm function. Is there a better way to do the computation?
Note that the "/" operator has higher precedence than "-". That means in this line

pnorm(qnorm(1/2,0,1) - beta*x / (sqrt(1-(beta^2))), 0, 1)

you made a mistake. The correct numerator is

(qnorm(1/2,0,1) - beta*x)

so the call should be pnorm((qnorm(1/2,0,1) - beta*x) / ..., 0, 1), not pnorm(qnorm(1/2,0,1) - beta*x / ..., 0, 1).
Using this code, I get an answer:
integral <- function(A, beta) {
  f <- function(x) {
    temp <- (qnorm(1/2,0,1) - beta*x) / (sqrt(1-(beta^2)))
    pnorm(temp, 0, 1) * dnorm(x, 0, 1)
  }
  integrate(f, lower = -Inf, upper = A)$value
}
integral(0,0) ##.25
integral(10,.9) ##.5
Also, if you want another way to evaluate a complicated integral like this, you can use Monte Carlo methods, among others.
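For example, here is a minimal Monte Carlo sketch of the same integral (the function name integral_mc is my own, not from the original post): draw x from N(0,1), which accounts for the phi(x) weight, and average the integrand over the draws with x <= A.

integral_mc <- function(A, beta, n = 1e6) {
  x <- rnorm(n)                                   # samples from the phi(x) weight
  z <- (qnorm(1/2) - beta * x) / sqrt(1 - beta^2) # same argument as temp above
  mean(pnorm(z) * (x <= A))                       # the indicator restricts to x <= A
}

integral_mc(0, 0)     # ~0.25, matching integrate above
integral_mc(10, 0.9)  # ~0.5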
Summary of problem
My objective is to create a function called newton.raphson to implement the Newton-Raphson root-finding algorithm.
Root-finding update rule: x_{n+1} = x_n - f(x_n) / f'(x_n)
I have 2 arguments:
iter = the number of iterations (value = 10^5)
epsilon = the tolerance (value = 10^-10)
The function cannot depend on variables defined outside of it.
newton.raphson <- function(f, x0, iter = 1e5, epsilon = 1e-10) {
  x <- x0
  h <- 1e-5
  for (t in 1:iter) {
    drvt <- f((x+h)) - f((x-h)) / (2 * h)
    update <- x - f(x) / drvt
    if (abs(update) < epsilon) {
      break
    }
    x <- update
  }
  root <- x
  return(root)
}

# Define some function to test
f <- function(x) {
  x^2 - 4 * x - 7
}
I get the following results:
> newton.raphson(f, 0)
[1] 2.000045
> newton.raphson(f, 3)
[1] 5.000024
But the results should be:
-1.316625
5.316625
Your derivative calculation is a little bit broken: you forgot the parentheses around the difference between f(x+h) and f(x-h):

drvt <- ( f(x+h) - f(x-h) ) / (2 * h)

Also, you should compare the difference between the old and new root approximations to the tolerance. To make this clearer, rename the misleading update variable to something like new.x. Then check if (abs(new.x - x) < epsilon).
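Putting both fixes together, a corrected sketch (my consolidation, not verbatim from the answer) looks like this:

newton.raphson <- function(f, x0, iter = 1e5, epsilon = 1e-10) {
  x <- x0
  h <- 1e-5
  for (t in 1:iter) {
    # central-difference derivative, with the parentheses in place
    drvt <- (f(x + h) - f(x - h)) / (2 * h)
    new.x <- x - f(x) / drvt
    # stop once successive approximations are closer than the tolerance
    if (abs(new.x - x) < epsilon) {
      x <- new.x
      break
    }
    x <- new.x
  }
  return(x)
}

newton.raphson(f, 0)  # -1.316625
newton.raphson(f, 3)  # 5.316625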
I have two equations. They are as follows:
(1 - 0.25^(1/alpha)) * lambda = 85
(1 - 0.75^(1/alpha)) * lambda = 11
I would like to compute the values of alpha and lambda by solving the above two equations. How do I do this using R?
One approach is to translate it into an optimization problem by introducing a loss function:
loss <- function(X) {
  L = X[1]
  a = X[2]
  return(sum(c(
    (1 - 0.25^(1/a))*L - 85,
    (1 - 0.75^(1/a))*L - 11
  )^2))
}
nlm(loss, c(-1,-1))
If the result returned from nlm() has a minimum component near zero, then the estimate component will be a vector containing lambda and alpha. When I tried this, I got an answer that passed the sniff test:
> a = -1.28799
> L = -43.95321
> (1 - 0.25^(1/a))*L
[1] 84.99999
> (1 - 0.75^(1/a))*L
[1] 11.00005
@olooney's answer is best.
Another way to solve these equations is to use the uniroot function: divide one equation by the other to cancel lambda, use uniroot to find alpha, and then substitute back to find lambda.
f <- function(x) {
  (11/85) - ((1 - (0.75)^(1/x)) / (1 - (0.25)^(1/x)))
}
f_alpha <- uniroot(f, lower = -10, upper = -1, extendInt = "yes")
f_lambda <- function(x) {
  11 - ((1 - (0.75)^(1/f_alpha$root)) * x)
}
lambda = uniroot(f_lambda, lower = -10, upper = -2, extendInt = "yes")$root
sprintf("Alpha equals %f", f_alpha$root)
sprintf("Lambda equals %f", lambda)
results in
[1] "Alpha equals -1.287978"
[1] "Lambda equals -43.952544"
I am having problems implementing gradient descent in R for an exponential function.
Let's say
foo <- function(x) {
  y <- -2 + 2.5 * exp(0.1*x^2 - 0.7*x)
  return(y)
}
is my exponential function. Then
grad <- function(x) {
  y <- 2.5 * exp(0.1*x^2 - 0.7*x) * (0.2*x - 0.7)
  return(y)
}
is the gradient function of foo(x).
The task is to implement a function called
gdescent <- function(x0, fc, grd, diff, step) {}
where
x0 is a random initial value,
fc is the exponential function foo that I want to minimize,
grd is the gradient function (the derivative of foo(x)),
diff is the difference that terminates the algorithm, and
step is the size of the steps (i.e. the learning rate).
The result of the function shall be a list containing
par - the value of x in the minimum;
value - the corresponding value of foo(par) at the minimum;
and iter - the number of iterations it took the algo to find the minimum.
The update rule after every iteration shall be:

x_{i+1} = x_i - step * grd(x_i)

(I didn't understand this at all.)
How do I implement this in R?
My understanding of the gradient descent method so far:

1) pick a random initial value x0;
2) insert x0 into the gradient function grd(x);
3) if grd(x0) < 0, reduce x0 by "step" (x1 = x0 - step); if grd(x0) > 0, increase x0 by "step" (x1 = x0 + step);
4) go back to 2) with x1 as the new value.
My solution so far:

gdescent <- function(x0, fc, grd, diff, step) {
  x <- x0
  x_history <- vector("numeric", iter)
  for (i in 1:iter) {
    x <- x - step * grad(x)
    if (x > diff) {  # not sure how to proceed here
    }
  }
}
How can I solve this without a fixed number of iterations, i.e. without initialising iter?
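A minimal sketch of one possible solution (my own, not from the original post): replace the for loop with a repeat loop that stops once the change in x falls below diff, so no iter argument is needed.

gdescent <- function(x0, fc, grd, diff, step) {
  x <- x0
  i <- 0
  repeat {
    i <- i + 1
    new.x <- x - step * grd(x)    # the update rule x_{i+1} = x_i - step * grd(x_i)
    if (abs(new.x - x) < diff) {  # terminate when the step is smaller than diff
      x <- new.x
      break
    }
    x <- new.x
  }
  list(par = x, value = fc(x), iter = i)
}

gdescent(x0 = 1, fc = foo, grd = grad, diff = 1e-8, step = 0.1)  # par approaches 3.5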
I'm trying to replicate the Excel Solver in R, which is basically a constrained optimization problem.
I'm trying to minimize the cost per action (total spend / total actions), which equals the function below, subject to a few constraints.
CPA function:
(a + b + c + d) / ((const_a + B_a*ln(a)) + (const_b + B_b*ln(b)) + (const_c + B_c*ln(c)) + (const_d + B_d*ln(d)))
where the unknown variables are a, b, c, d; each const_* is a constant from a regression and each B_* is a coefficient from a regression (so they are values that I have).
Here is the simplified, filled-in function that I'm trying to minimize:

(a + b + c + d) / (69.31*ln(a) + 14.885*ln(b) + 21.089*ln(c) + 9.934*ln(d) - 852.93)
Constraints:

a + b + c + d >= 0
a + b + c + d <= 130000 (total spend)
a <= 119000 (max_a)
a >= 272.56 (min_a)
b <= 11000 (max_b)
b >= 2.04 (min_b)
c <= 2900 (max_c)
c >= 408.16 (min_c)
d <= 136800 (max_d)
d >= 55.02 (min_d)
I'm doing this using the constrOptim function. My code is below:

g <- function(a, b, c, d) {
  (a + b + c + d) / ((consta + Ba*log(a)) + (constb + Bb*log(b)) + (constc + Bc*log(c)) + (constd + Bd*log(d)))
}
gb <- function(a) g(a[1], a[2], a[3], a[4])
A <- matrix(c(1,0,0,0, -1,0,0,0, 0,1,0,0, 0,-1,0,0, 0,0,1,0, 0,0,-1,0, 0,0,0,1, 0,0,0,-1, -1,-1,-1,-1, 1,1,1,1), 4, 10)
B <- c(mina, -maxa, minb, -maxb, minc, -maxc, mind, -maxd, -totalspend, 0)
constrOptim(c(273, 6, 409, 56), g, gb, A, B)
When I run constrOptim, it fails with Error in ui %*% theta : non-conformable arguments. I think the gradient of the function is coded wrong, but I'm not sure. Any help is appreciated.
You can consider the following approach
library(DEoptim)
fn_Opt <- function(param) {
  a <- param[1]
  b <- param[2]
  c <- param[3]
  d <- param[4]
  # penalize any point that violates the total-spend constraint
  bool_Cond <- a + b + c + d <= 130000
  if (!bool_Cond) {
    return(10^30)
  } else {
    val <- (a + b + c + d) / (69.31*log(a) + 14.885*log(b) + 21.089*log(c) + 9.934*log(d) - 852.93)
    return(val)
  }
}
obj_DEoptim <- DEoptim(fn = fn_Opt, lower = c(272.56, 2.04, 408.16, 55.02),
                       upper = c(119000, 11000, 2900, 136800),
                       control = list(itermax = 10000))
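The optimized spend levels and the minimized CPA can then be read from the returned object:

obj_DEoptim$optim$bestmem  # best values found for a, b, c, d
obj_DEoptim$optim$bestval  # the corresponding minimized objective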
What is behind approx and approxfun? I know that these two functions perform linear interpolation, but I didn't find any reference on how they do it. I guess they use a least-squares regression model, but I am not sure.
Finally, if it is true that they use a least-squares regression model, what is the difference between them and lm + predict?
As commented, you should read the source code. These functions solve the interpolation problem:

Find y(v), given (x,y)[i], i = 0,...,n-1

For example, approxfun uses this simple algorithm for linear interpolation:

1) find the correct interval (i,j) by bisection;
2) use i and j for linear interpolation.

(So there is no least-squares regression involved: the interpolant passes exactly through the given points, whereas lm + predict fits a single line through all of them.)
Here is R code that paraphrases the C function approx1:
approx1 <- function(v, x, y) {
  ## Approximate y(v), given (x,y)[i], i = 0,...,n-1
  i <- 1
  j <- length(x)
  ij <- 0
  ## find the correct interval by bisection
  while (i < (j-1)) {
    ij <- floor((i + j)/2)
    if (v < x[ij])
      j <- ij
    else
      i <- ij
  }
  ## linear interpolation
  if (v == x[j]) return(y[j])
  if (v == x[i]) return(y[i])
  return(y[i] + (y[j] - y[i]) * ((v - x[i])/(x[j] - x[i])))
}
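A quick sanity check against R's built-in approx, on made-up data, shows the paraphrase agrees:

x <- c(1, 2, 4, 8)
y <- c(0, 3, 5, 6)
approx1(3, x, y)          # 4
approx(x, y, xout = 3)$y  # 4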