Least squares regression model - R

What is behind approx and approxfun? I know that these two functions perform linear interpolation, but I couldn't find any reference on how they do it. I guess they use a least squares regression model, but I am not sure.
Finally, if it is true that they use a least squares regression model, what is the difference between them and lm + predict?

As commented, you should read the source code. approx and approxfun do not fit a least squares model; they solve an interpolation problem: approximate y(v), given points (x,y)[i], i = 0,..,n-1. For linear approximation, approxfun uses a simple algorithm:

find the correct interval (i,j) by bisection
use i,j for linear interpolation

Here is R code that paraphrases the C function approx1:
approx1 <- function(v, x, y) {
  ## Approximate y(v), given (x,y)[i], i = 0,..,n-1
  i <- 1
  j <- length(x)
  ## find the correct interval by bisection
  while (i < (j - 1)) {
    ij <- floor((i + j) / 2)
    if (v < x[ij])
      j <- ij
    else
      i <- ij
  }
  ## linear interpolation
  if (v == x[j]) return(y[j])
  if (v == x[i]) return(y[i])
  return(y[i] + (y[j] - y[i]) * ((v - x[i]) / (x[j] - x[i])))
}
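A quick check that this paraphrase agrees with the built-in functions (v must lie within the range of x; the example data are made up):

x <- c(1, 2, 4, 8)
y <- c(1, 4, 16, 64)
approx1(3, x, y)          # 10: halfway between y = 4 and y = 16
approx(x, y, xout = 3)$y  # 10
approxfun(x, y)(3)        # 10

So the difference from lm + predict is fundamental: lm fits one global least squares line, which generally does not pass through the data points, while approxfun connects the points exactly, piece by piece.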


Newton-Raphson Root Finding Algorithm

Summary of problem
My objective is to create a function called newton.raphson to implement the Newton-Raphson root-finding algorithm.
Root-finding update rule: x_{n+1} = x_n - f(x_n) / f'(x_n)
I have 2 arguments:
iter = number of iterations (value = 10^5)
epsilon = the tolerance (value = 10^-10)
The function cannot depend on variables outside of it.
newton.raphson <- function(f, x0, iter = 1e5, epsilon = 1e-10) {
  x <- x0
  h <- 1e-5
  for (t in 1:iter) {
    drvt <- f((x + h)) - f((x - h)) / (2 * h)
    update <- x - f(x) / drvt
    if (abs(update) < epsilon) {
      break
    }
    x <- update
  }
  root <- x
  return(root)
}
# Define some function to test
f <- function(x) {
  x^2 - 4 * x - 7
}
I get the following results:
> newton.raphson(f, 0)
[1] 2.000045
> newton.raphson(f, 3)
[1] 5.000024
But results should be:
-1.316625
5.316625
Your derivative calculation is a little bit broken: you forgot parentheses around the difference between f(x+h) and f(x-h):
drvt <- (f(x + h) - f(x - h)) / (2 * h)
Also, you should compare the difference between the old and new root approximations to the tolerance, not the new approximation itself. To make things clearer, rename the misleading update variable to something like new.x. Then check if (abs(new.x - x) < epsilon).
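Putting both fixes together, a corrected sketch (same interface as above):

newton.raphson <- function(f, x0, iter = 1e5, epsilon = 1e-10) {
  x <- x0
  h <- 1e-5
  for (t in 1:iter) {
    # central difference approximation of f'(x)
    drvt <- (f(x + h) - f(x - h)) / (2 * h)
    new.x <- x - f(x) / drvt
    # stop when successive approximations agree to within the tolerance
    if (abs(new.x - x) < epsilon) {
      x <- new.x
      break
    }
    x <- new.x
  }
  return(x)
}

newton.raphson(f, 0)  # -1.316625
newton.raphson(f, 3)  #  5.316625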

Issue when implementing gradient descent in R

I am having problems implementing gradient descent in R for an exponential function.
Let's say
foo <- function(x) {
  y <- -2 + 2.5 * exp(0.1 * x^2 - 0.7 * x)
  return(y)
}

is my exponential function, then

grad <- function(x) {
  y <- 2.5 * exp(0.1 * x^2 - 0.7 * x) * (0.2 * x - 0.7)
  return(y)
}
is the gradient function of foo(x).
The task is to implement a function called
gdescent <- function(x0, fc, grd, diff, step) {}
where
x0 is a random initial value
fc is the exponential function I want to minimize (foo)
grd is the gradient function (the derivative of foo(x))
diff is the difference that terminates the algorithm
step is the size of the steps (i.e. the learning rate)
The result of the function shall be a list containing
par - the value of x in the minimum;
value - the corresponding value of foo(par) at the minimum;
and iter - the number of iterations it took the algo to find the minimum.
The update rule after every iteration shall be:
x_{i+1} = x_i - step * grd(x_i) # I didn't understand this at all
How do I implement this in R?
My understanding of the gradient descent method so far:
1) pick a random initial value x0
2) insert x0 into the gradient function grd(x)
3) if grd(x0) < 0, reduce x0 by step; if grd(x0) > 0, increase x0 by step; then go back to 2) with the updated value as the new starting point
my solution so far:
gdescent <- function(x0, fc, grd, diff, step) {
  x <- x0
  x_history <- vector("numeric", iter)
  for (i in 1:iter) {
    x <- x - step * grd(x)
    if (x > diff) { # not sure how to proceed here
    }
  }
}

How can I solve this without a fixed number of iterations, i.e. without initialising iter?
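One way, as a minimal sketch: drop the for loop in favour of a repeat loop, use the change in x between iterations as the stopping criterion diff, and count iterations as you go (assuming foo and grad from above):

gdescent <- function(x0, fc, grd, diff, step) {
  x <- x0
  iter <- 0
  repeat {
    iter <- iter + 1
    x_new <- x - step * grd(x)        # gradient descent update
    if (abs(x_new - x) < diff) break  # stop once the update is small enough
    x <- x_new
  }
  list(par = x_new, value = fc(x_new), iter = iter)
}

gdescent(0, foo, grad, 1e-8, 0.1)
# converges near par = 3.5, where the factor (0.2*x - 0.7) in grad(x) vanishes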

Numerical integration in R

I am trying to compute this integral (reconstructed from the implementation below):

\int_{-\infty}^{A} \Phi\left( \frac{\Phi^{-1}(P(\tau \le t)) - \beta x}{\sqrt{1-\beta^2}} \right) \phi(x) \, dx

where A and \beta are constants, \Phi (capital phi) is the cumulative distribution function of the normal law N(0,1), \phi is the density of the normal law N(0,1), and P(\tau \le t) = 1/2.
Here is my implementation:

integral <- function(A, beta) {
  f <- function(x) {
    # We have P(tau <= t) = 1/2
    pnorm(qnorm(1/2, 0, 1) - beta*x / (sqrt(1 - (beta^2))), 0, 1) * (1/sqrt(2*pi)) * exp(-x^2 / 2)
  }
  integrate(f, lower = -Inf, upper = A)$value
}
I am not really sure about the qnorm function... Is there a better way to do the computation?
Note that the "/" operator has higher precedence than "-". That means that in the line

pnorm(qnorm(1/2,0,1) - beta*x / (sqrt(1-(beta^2))), 0, 1)

you made a mistake: the difference must be wrapped in parentheses before dividing, i.e.

pnorm((qnorm(1/2,0,1) - beta*x) / sqrt(1-(beta^2)), 0, 1)

not pnorm(qnorm(1/2,0,1) - beta*x / ...).
I used this code and got the answer:

integral <- function(A, beta) {
  f <- function(x) {
    temp <- (qnorm(1/2, 0, 1) - beta*x) / sqrt(1 - (beta^2))
    pnorm(temp, 0, 1) * dnorm(x, 0, 1)
  }
  integrate(f, lower = -Inf, upper = A)$value
}

integral(0, 0)    ## 0.25
integral(10, .9)  ## 0.5
Also, if you want another way to calculate a complicated integral, you can use Monte Carlo methods, among others.
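For instance, a minimal Monte Carlo sketch of the same integral (the helper mc_integral is a hypothetical name): since the integrand is pnorm(...) times the N(0,1) density, you can draw x from N(0,1) and average the pnorm term over the draws that fall at or below A.

mc_integral <- function(A, beta, n = 1e6) {
  x <- rnorm(n)  # samples from the N(0,1) density factor
  g <- pnorm((qnorm(1/2) - beta * x) / sqrt(1 - beta^2))
  mean(g * (x <= A))  # the indicator restricts the domain to (-Inf, A]
}

mc_integral(0, 0)    # close to 0.25
mc_integral(10, .9)  # close to 0.5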

How does the R 'density' function use specified weights?

How does the density function in R incorporate weights if they are specified (assume weights sum to 1, which is what the function wants)? I mean mathematically, how does it work? I know how to look at the underlying R code for a function but not when it just returns a generic method like this:
> density
function (x, ...)
UseMethod("density")
<bytecode: 0x00000000079ee728>
<environment: namespace:stats>
The reason I'm asking is that I have made some nice side-by-side empirical density plots using unweighted and weighted samples which shows the benefits of weighting one's sample to make the distribution of covariates more balanced between groups. These were all continuous covariates. Now I want to do the same thing with dichotomous variables, but the density function isn't great for this. I want to see if I can apply the same weighting method to generate side-by-side box plots for the dichotomous covariates that I have.
This is an exercise in source code hunting, but here goes:
In density.default, the relevant part (besides checking the weights are valid) is only the line:
y <- .Call(C_BinDist, x, weights, lo, up, n) * totMass
In the relevant source file, massdist.c we find (comments my own):
for(R_xlen_t i = 0; i < XLENGTH(sx) ; i++) {
    if(R_FINITE(x[i])) {
        double xpos = (x[i] - xlo) / xdelta; // position of x[i] on the output grid
        int ix = (int) floor(xpos);          // grid index at or below x[i]
        double fx = xpos - ix;               // fractional distance to the next grid point
        double wi = w[i];                    // w: weights vector
        if(ixmin <= ix && ix <= ixmax) {
            // linear binning: split the observation's weight wi between
            // the two nearest grid points, proportionally to proximity
            y[ix] += (1 - fx) * wi;
            y[ix + 1] += fx * wi;
        }
        else if(ix == -1) y[0] += fx * wi;               // just below the grid
        else if(ix == ixmax + 1) y[ix] += (1 - fx) * wi; // just above the grid
    }
}
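So each observation contributes its own weight w[i] (instead of the unweighted default 1/n), split linearly between the two grid points nearest to it; the binned totals are then smoothed with the chosen kernel. A rough R paraphrase of just this binning step (the name bin_dist and the grid conventions are assumptions for illustration):

bin_dist <- function(x, w, lo, up, n) {
  y <- numeric(n)
  xdelta <- (up - lo) / (n - 1)
  for (i in seq_along(x)) {
    xpos <- (x[i] - lo) / xdelta
    ix <- floor(xpos)  # 0-based grid index, as in the C code
    fx <- xpos - ix
    if (ix >= 0 && ix < n - 1) {
      y[ix + 1] <- y[ix + 1] + (1 - fx) * w[i]  # +1 because R is 1-indexed
      y[ix + 2] <- y[ix + 2] + fx * w[i]
    }
  }
  y
}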

Mixture modeling - trouble with infinite values from exp() and log()

I'm writing a function for Gaussian mixture models with spherical covariance structures, i.e. $\Sigma_k = \sigma_k^2 I$. This particular function is similar to the mclust package with identifier VII.
http://en.wikipedia.org/wiki/Mixture_model
Anyway, the problem I'm having is running into infinite values in the weight matrix. Definition: let W be an n x m matrix, where i = 1, ..., n indexes the observations and m indexes the mixture components. Each element of W (i.e. w_im) can essentially be defined as a specific form of:

w_{im} = \frac{a / b \cdot \exp(c)}{\sum_{i=1}^{m} a_i / b_i \cdot \exp(c_i)}

Computing this numerically gives me infinite values. So I'm trying to use the log identity log(x+y) = log(x) + log(1 + y/x). But the issue is that it's not as simple as log(x+y), but rather log(\sum_{i=1}^{m} a_i / b_i \cdot \exp(c_i)).
Here's some code. Define:

n_im = a / b * exp(c);
d_m = \sum_{i=1}^{m} a_i / b_i * exp(c_i); and
c_mat[i,j] as the value of the exponent for the [i,j]th term.

n_mat[, i] <- log(a[i]) - log(b[i]) - c[, i]          # numerator of w_im
internal_vec1[i] <- (a[i] * b[1]) / (a[1] * b[i])     # an internal for the step below
c_mat2 <- cbind(rep(1, n), c_mat[, 1] - c_mat[, -1])  # since e^a / e^b = e^(a-b)
for (i in 1:n) {
  d_vec[i] <- n_mat[i, 1] + log(sum(internal_vec1 * exp(c_mat2[i, ])))
}  ## still getting infinite values

I'm trying to define the problem as briefly as possible; the entire function is obviously much larger than this. But since the problem I'm running into specifically deals with infinite (and 1/infinity) values, I'm hoping this snippet is sufficient. Anyone with a coding trick here?
Here is the solution!! (I've spent way too damn long on this)

The first function, log_plus(), solves the simple problem where you want log(\sum_{i=1}^{n} x_i).
The second function, log_plus2(), solves the more complicated problem described above, where you want log(\sum_{i=1}^{n} a_i / b_i \cdot \exp(c_i)).
log_plus <- function(xvec) {
  m <- length(xvec)
  x <- log(xvec[1])
  for (j in 2:m) {
    # running sum of the first j-1 elements
    # (note: 1:(j-1), not 1:j-1, which parses as (1:j)-1)
    sum_j <- sum(xvec[1:(j - 1)])
    # log(S + x_j) = log(S) + log(1 + x_j/S)
    x <- x + log(1 + xvec[j] / sum_j)
  }
  return(x)
}
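A quick sanity check:

log_plus(c(1, 2, 3))  # 1.791759
log(6)                # 1.791759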
log_plus2 <- function(a, b, c) {
  # assumes intended input of the form sum(a/b * e^c)
  if ((length(a) != length(b)) || (length(a) != length(c))) {
    stop("Input equal length vectors")
  }
  if (!(all(c > 0) || all(c < 0))) {
    stop("All values of c must be either > 0 or < 0.")
  }
  m <- length(a)
  # initialize log sum with the first term
  x <- log(a[1]) - log(b[1]) + c[1]
  # aggregate / loop log sum: log(S + t_j) = log(S) + log(1 + t_j/S)
  for (j in 2:m) {
    # build the denominator: the partial sum of the first j-1 terms over a
    # common denominator, with the extreme exponent c_min factored out
    b2 <- b[1:(j - 1)]
    d1 <- 0
    for (i in 1:(j - 1)) {
      c2 <- c[1:(j - 1)]
      if (all(c2 > 0)) {
        c_min <- min(c2)
      } else if (all(c2 < 0)) {
        c_min <- max(c2)
      }
      c2 <- c2 - c_min
      d1 <- d1 + a[i] * prod(b2[-i]) * exp(c2[i])
    }
    den <- b[j] * d1
    num <- a[j] * prod(b[1:(j - 1)]) * exp(c[j] - c_min)
    x <- x + log(1 + num / den)
  }
  return(x)
}
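For comparison, a more standard alternative is the log-sum-exp trick: put every term on the log scale and subtract the largest exponent before exponentiating, so nothing overflows. A minimal sketch (log_plus3 is a hypothetical name; it assumes a, b > 0):

log_plus3 <- function(a, b, c) {
  v <- log(a) - log(b) + c  # log of each term a_i/b_i * exp(c_i)
  v_max <- max(v)
  # log(sum(exp(v))) = v_max + log(sum(exp(v - v_max)))
  v_max + log(sum(exp(v - v_max)))
}

log_plus3(a = c(1, 1), b = c(1, 1), c = c(1000, 1001))  # 1001.3133; exp(1000) alone would overflow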
