Summary of problem
My objective is to create a function called newton.raphson to implement the Newton-Raphson root-finding algorithm.
Root-finding update rule: x1 = x0 - f(x0) / f'(x0)
I have two additional arguments:
iter = the number of iterations (default value = 1e5)
epsilon = the tolerance (default value = 1e-10)
The function cannot depend on variables outside of it.
newton.raphson <- function(f, x0, iter=1e5, epsilon=1e-10) {
  x <- x0
  h <- 1e-5
  for (t in 1:iter) {
    drvt <- f((x+h)) - f((x-h)) / (2 * h)
    update <- x - f(x) / drvt
    if (abs(update) < epsilon) {
      break
    }
    x <- update
  }
  root <- x
  return(root)
}
# Define some function to test
f <- function(x) {
  x^2 - 4 * x - 7
}
I get the following results:
> newton.raphson(f, 0)
[1] 2.000045
> newton.raphson(f, 3)
[1] 5.000024
But results should be:
-1.316625
5.316625
Your derivative calculation is a little bit broken: you forgot parentheses around the difference between f(x+h) and f(x-h):
drvt <- ( f(x+h) - f(x-h) ) / (2 * h)
Also, you should compare the difference between the old and the new root approximation to the tolerance. To make things clearer, rename your misleading update variable to something like new.x; then check if (abs(new.x - x) < epsilon).
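Putting both fixes together, the corrected function might look like this (same central-difference derivative as before):

newton.raphson <- function(f, x0, iter=1e5, epsilon=1e-10) {
  x <- x0
  h <- 1e-5
  for (t in 1:iter) {
    drvt <- ( f(x+h) - f(x-h) ) / (2 * h)  # parenthesized central difference
    new.x <- x - f(x) / drvt
    if (abs(new.x - x) < epsilon) {        # compare successive approximations
      break
    }
    x <- new.x
  }
  return(x)
}
newton.raphson(f, 0)  # [1] -1.316625
newton.raphson(f, 3)  # [1] 5.316625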
I have two equations. They are as follows:
( 1 - 0.25 ^ {1/alpha} ) * lambda = 85
( 1 - 0.75 ^ {1/alpha} ) * lambda = 11
I would like to compute the values of alpha and lambda by solving the above two equations. How do I do this using R?
One approach is to translate it into an optimization problem by introducing a loss function:
loss <- function(X) {
  # sum of squared residuals of the two equations
  L = X[1]
  a = X[2]
  return(sum(c(
    (1 - 0.25^(1/a))*L - 85,
    (1 - 0.75^(1/a))*L - 11
  )^2))
}
nlm(loss, c(-1,-1))
If the result returned from nlm() has a minimum near zero, then estimate will be a vector containing lambda and alpha. When I tried this, I got an answer that passed the sniff test:
> a = -1.28799
> L = -43.95321
> (1 - 0.25^(1/a))*L
[1] 84.99999
> (1 - 0.75^(1/a))*L
[1] 11.00005
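To pull those values out of the nlm() result directly (a usage sketch of the call above):

res <- nlm(loss, c(-1, -1))
res$minimum   # should be near zero if the equations were solved
res$estimate  # c(lambda, alpha), approximately c(-43.95, -1.29)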
@olooney's answer is best.
Another way to solve these equations is to use the uniroot function. Dividing one equation by the other cancels lambda, so uniroot can find the value of alpha from the resulting single-variable equation; lambda is then found by substituting back.
f <- function(x) {
  (11/85) - ((1 - (0.75)^(1/x)) / (1 - (0.25)^(1/x)))
}
f_alpha <- uniroot(f, lower = -10, upper = -1, extendInt = "yes")
f_lambda <- function(x) {
  11 - ((1 - (0.75)^(1/f_alpha$root)) * x)
}
lambda = uniroot(f_lambda, lower = -10, upper = -2, extendInt = "yes")$root
sprintf("Alpha equals %f", f_alpha$root)
sprintf("Lambda equals %f", lambda)
results in
[1] "Alpha equals -1.287978"
[1] "Lambda equals -43.952544"
I am having problems implementing gradient descent in R for an exponential function.
Let's say
foo <- function(x) {
  y = -2 + 2.5 * exp(0.1*x^2 - 0.7*x)
  return(y)
}
is my exponential function; then
grad <- function(x) {
  y = 2.5 * exp(0.1*x^2 - 0.7*x) * (0.2*x - 0.7)
  return(y)
}
is the gradient function of foo(x).
The task is to implement a function called
gdescent <- function(x0, fc, grd, diff, step) {}
where
x0 is a random initial value,
fc is the exponential function I want to minimize (foo above),
grd is its gradient function (grad above),
diff is the difference that terminates the algorithm, and
step is the size of the steps (i.e. the learning rate).
The result of the function shall be a list containing
par - the value of x in the minimum;
value - the corresponding value of foo(par) at the minimum;
and iter - the number of iterations it took the algo to find the minimum.
The update rule after every iteration shall be:
x_{i+1} = x_i - step * grd(x_i)   # I didn't understand this at all
How do I implement this in R?
My understanding of the gradient descent method so far:
1) pick a random initial value x0
2) insert x0 into the gradient function grd(x)
3) if grd(x0) < 0, reduce x0 by step (x1 = x0 - step); if grd(x0) > 0, increase x0 by step (x1 = x0 + step)
4) go back to 2) with x1 as the new initial value
My solution so far:
gdescent <- function(x0, fc, grd, diff, step) {
  x = x0
  x_history = vector("numeric", iter)
  for (i in 1:iter) {
    x = x - step * grd(x)
    if (x > diff) {  # not sure how to proceed here
    }
  }
}
How can I solve this without a fixed number of iterations, i.e. without initialising iter?
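One way to drop the fixed iteration count is to loop until the update itself is smaller than diff. A minimal sketch along those lines (assuming diff is the termination tolerance and the list output described above):

gdescent <- function(x0, fc, grd, diff, step) {
  x <- x0
  i <- 0
  repeat {
    new.x <- x - step * grd(x)    # the update rule x_{i+1} = x_i - step * grd(x_i)
    i <- i + 1
    if (abs(new.x - x) < diff) {  # stop once the step is smaller than diff
      x <- new.x
      break
    }
    x <- new.x
  }
  list(par = x, value = fc(x), iter = i)
}
gdescent(4, foo, grad, 1e-9, 0.1)  # $par should be near 3.5, where grad(x) = 0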
I'm trying to write a square root function in R. The function is supposed to behave like sqrt() but not use that function of course. I'm supposed to use Newton's method for computing the square root, which is:
y[a+1] = (y[a] + x / y[a]) / 2
Here x is the number I'm trying to calculate the square root of, and y[0] would be the initial guess of the square root of x.
The function is supposed to take in four arguments: x (the number I'm trying to compute the square root of), eps (the difference between successive iterations below which they are considered equal), iter (the max number of iterations), and verbose (whether I want to output intermediate results).
My issue is that I am not very well versed in writing functions in R. I have experience in C++, but they are slightly different in R.
I believe I'm supposed to write something that goes like this.
Ask the user to input a number as a guess for the square root. Then make a for loop from 1 to iter with two if statements: 1) stop the function and output the y value if the max number of iterations has been reached; 2) stop the function and output the y value if the difference between successive iterations is less than eps.
Here is the code I have so far:
MySqrt <- function (x, eps = 1e-6, iter = 100, verbose = TRUE) {
  for (i in 0:itmax) {
    y[0] <- readline(prompt="Please enter your initial square root guess: ")
    y[i + 1] = (y[i] + x / y[i])/2
    if (i == 100) {
      stop (return(y[i + 1]))
    }
    if (abs(y[i + 1] - y[i]) < eps) {
      stop (return(y[i + 1]))
    }
  }
  return(y[i + 1])
}
Here is the error I receive after entering the initial square root guess: Error in y[0] <- readline(prompt = "Please enter your initial square root guess: ") :
object 'y' not found
Honestly, I didn't expect the code to work because I'm sure there are more than one errors.
You should use iter instead of itmax.
I initialized y within the function, and the input for y has to be converted to a number, since readline() returns a character string. You could also combine the two if statements into one using || (or).
I also added a cat() call so you can see on which iteration i the function returns the square root value.
MySqrt <- function (x, eps = 1e-6, iter = 100, verbose = TRUE) {
  y <- 0
  y[1] <- as.numeric(readline(prompt="Please enter your initial square root guess: "))
  for (i in 1:iter) {
    y[i+1] <- (y[i] + x/y[i]) / 2
    if (i == iter || abs(y[i+1] - y[i]) < eps) {
      cat("This is try", i, "\n")
      return(y[i+1])
    }
  }
}
Try this simple approach:
newton.raphson <- function(x, start, epsilon=0.0001, maxiter=100) {
  y <- c(start)  # initial guess
  a <- 1         # number of iterations
  while (TRUE) {
    y <- c(y, (y[a] + x / y[a]) / 2)
    if (abs(y[a+1] - y[a]) < epsilon || a > maxiter) {  # converged or exceeded maxiter
      return(y[a+1])
    }
    a <- a + 1
  }
}
newton.raphson(2, 0.5, 0.01)
# [1] 1.414234
newton.raphson(3, 0.5, 0.01)
# [1] 1.732051
Since sqrt(y) < y/2 for y > 4, y/2 works as a starting guess; then iterate to a tolerance of 1e-10:
sqrnt <- function(y) {
  x <- y/2
  while (abs(x*x - y) > 1e-10) {
    x <- (x + y/x) / 2
  }
  x
}
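For example:

sqrnt(2)  # [1] 1.414214
sqrnt(9)  # [1] 3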
In Newton's method, if you want to know the square root of a, you can start by estimating a number x (for example a/2), and then compute a better estimate with the following formula:
y = (x + a / x) / 2
If y != x, you set x = y and repeat until y == x. Then y is the square root of a. Please see the code below:
square_root <- function(a) {
  x <- a/2
  while (TRUE) {
    y <- (x + a / x) / 2
    if (y == x) break
    x <- y
  }
  return(y)
}
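For example:

square_root(2)   # [1] 1.414214
square_root(16)  # [1] 4

The exact equality test y == x usually terminates in double precision because the iteration settles on a fixed point, but comparing abs(y - x) against a small tolerance, as in the other answers, is the safer choice.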
What is behind approx and approxfun? I know that these two functions perform linear interpolation, but I couldn't find any reference on how they do it. I guess they use a least squares regression model, but I am not sure.
Finally, if it is true that they use a least squares regression model, what is the difference between them and lm + predict?
As noted in the comments, you should read the source code. The underlying C routine solves the interpolation problem: find y(v), given (x,y)[i], i = 0,..,n-1. For linear approximation, approx1 uses this simple algorithm:
1) find the correct interval (i,j) by bisection
2) use i and j for linear interpolation
Here is R code that paraphrases the C function approx1:
approx1 <- function(v, x, y) {
  ## Approximate y(v), given (x,y)[i], i = 1,..,n
  i <- 1
  j <- length(x)
  ij <- 0
  ## find the correct interval by bisection
  while (i < (j-1)) {
    ij <- floor((i + j)/2)
    if (v < x[ij])
      j <- ij
    else
      i <- ij
  }
  ## linear interpolation
  if (v == x[j]) return(y[j])
  if (v == x[i]) return(y[i])
  return(y[i] + (y[j] - y[i]) * ((v - x[i]) / (x[j] - x[i])))
}
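A quick check against the built-in approx() (assuming, as the C code does, that x is sorted and v lies inside its range):

x <- c(1, 2, 4, 8)
y <- c(1, 4, 16, 64)
approx1(3, x, y)          # [1] 10
approx(x, y, xout = 3)$y  # [1] 10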
I'm writing a function for Gaussian mixture models with spherical covariance structures, i.e. $\Sigma_k = \sigma_k^2 I$. This particular function is similar to the mclust package with identifier VII.
http://en.wikipedia.org/wiki/Mixture_model
Anyway, the problem I'm having is running into infinite values for the weight matrix. Definition: let W be an n x m matrix, where the rows i = 1, ..., n index the observations and the columns index the m mixture components. Each element of W (i.e. w_im) essentially has the form:
w_{im} = \frac{a / b \cdot \exp(c)}{\sum_{i=1}^{m} [a_i / b_i \cdot \exp(c_i)]}
Computing this numerically is giving me infinite values, so I'm trying to use the log identity log(x+y) = log(x) + log(1 + y/x). But the issue is that it's not as simple as log(x+y) but rather log(\sum_{i=1}^{m} [a_i / b_i * exp(c_i)]).
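For example, the identity lets you evaluate sums whose terms overflow exp():

log(exp(710) + exp(709))       # Inf, because exp(710) overflows double precision
710 + log(1 + exp(709 - 710))  # 710.3133, the finite correct value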
Here are the definitions used in the code below:
n_im = a / b * exp(c) (the numerator of w_im);
d_.m = \sum_{i=1}^{m} [a_i / b_i * exp(c_i)] (the denominator); and
c_mat[i,j] is the value of the exponent for the [i,j]th term.
n_mat[, i] <- log(a[i]) - log(b[i]) + c_mat[, i]      # log numerator of w_im
internal_vec1[i] <- (a[i] * b[1]) / (a[1] * b[i])     # an internal for the step below
c_mat2 <- cbind(rep(1, n), c_mat[, 1] - c_mat[, -1])  # since e^a / e^b = e^(a-b)
for (i in 1:n) {
  d_vec[i] <- n_mat[i, 1] + log(sum(internal_vec1 * exp(c_mat2[i, ])))
}  ## still getting infinite values
I've tried to state the problem as briefly as possible; the entire function is obviously much larger than this. But since the problem I'm running into deals specifically with infinite (and 1/infinity) values, I hope this snippet is sufficient. Does anyone have a coding trick for this?
Here is the solution!! (I've spent way too damn long on this.)
The first function, log_plus(), solves the simple problem where you want log(\sum_{i=1}^{n} x_i).
The second function, log_plus2(), solves the more complicated problem described above, where you want log(\sum_{i=1}^{n} [a_i / b_i * exp(c_i)]).
log_plus <- function(xvec) {
  m <- length(xvec)
  x <- log(xvec[1])
  for (j in 2:m) {
    sum_j <- sum(xvec[1:(j-1)])      # sum of the first j-1 terms
    x <- x + log(1 + xvec[j]/sum_j)  # log(x+y) = log(x) + log(1 + y/x)
  }
  return(x)
}
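A quick sanity check (assuming all x_i > 0):

log_plus(c(1, 2, 3))  # [1] 1.791759
log(sum(c(1, 2, 3)))  # [1] 1.791759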
log_plus2 <- function(a, b, c) {
  # assumes intended input of the form sum(a/b * e^c)
  if ((length(a) != length(b)) || (length(a) != length(c))) {
    stop("Input equal length vectors")
  }
  if (!(all(c > 0) || all(c < 0))) {
    stop("All values of c must be either > 0 or < 0.")
  }
  m <- length(a)
  # initialize the log sum with the first term: log(a1/b1 * e^c1)
  x <- log(a[1]) - log(b[1]) + c[1]
  # aggregate: after step j, x is the log of the sum of the first j terms
  for (j in 2:m) {
    b2 <- b[1:(j-1)]
    # shift all exponents by a reference value so exp() stays finite;
    # the shift cancels in the num/den ratio below
    c_ref <- if (all(c > 0)) min(c[1:j]) else max(c[1:j])
    # denominator: the sum of the first j-1 terms, cleared of fractions
    # by multiplying through by prod(b2)
    d1 <- 0
    for (i in 1:(j-1)) {
      d1 <- d1 + a[i] * prod(b2[-i]) * exp(c[i] - c_ref)
    }
    den <- b[j] * d1
    # numerator: the j-th term, cleared of the same fractions
    num <- a[j] * prod(b2) * exp(c[j] - c_ref)
    x <- x + log(1 + num / den)
  }
  return(x)
}
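And a check of log_plus2() against the direct computation, using values small enough not to overflow:

a <- c(1, 2, 3); b <- c(4, 5, 6); cc <- c(7, 8, 9)
log_plus2(a, b, cc)      # both print the same value, about 8.6158
log(sum(a/b * exp(cc)))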