How can I use cubic splines for extrapolation?

I am looking to use natural cubic splines to interpolate between some data points using stats::splinefun(). The documentation states:
"These interpolation splines can also be used for extrapolation, that is prediction at points outside the range of ‘x’. Extrapolation makes little sense for ‘method = "fmm"’; for natural splines it is linear using the slope of the interpolating curve at the nearest data point."
I have attempted to replicate the spline function in Excel as a check. This works fine, except that I can't replicate the extrapolation approach. Example data and code below:
library(stats)
# Example data
x <- c(1,2,3,4,5,6,7,8,9,10,12,15,20,25,30,40,50)
y <- c(7.1119,5.862,5.4432,5.1458,4.97,4.8484,4.7726,4.6673,4.5477,4.437,4.3163,4.1755,4.0421,3.9031,3.808,3.6594,3.663)
df <- data.frame(x,y)
# Create spline functions
splinetest <- splinefun(x = df$x, y = df$y, method = "natural")
# Create dataframe of coefficients
splinetest_coef <- environment(splinetest)$z
splinetest_coefdf <- data.frame(i = 0:16, x = splinetest_coef$x, a = splinetest_coef$y, b = splinetest_coef$b, c = splinetest_coef$c, d = splinetest_coef$d)
# Calculate extrapolated value at 51
splinetest(51)
# Result:
# [1] 3.667414
Question: How is this result calculated?
Expected result using linear extrapolation from x = 40 and x = 50 is 3.663 + (51 - 50) * (3.663 - 3.6594) / (50 - 40) = 3.66336.
The spline coefficients at the final knot (x = 50) are a = 3.663 and b = 0.00441355..., so splinetest(51) is calculated as 3.663 + 0.00441355 = 3.667414.
How is 0.00441355 calculated in this function?

Linear extrapolation is not done by computing the slope between a particular pair of points, but by using the estimated derivative at the boundary (the "nearest data point" in R's documentation). The derivative at any point can be calculated directly from the spline function, e.g. to calculate the estimated first derivative at the upper boundary:
splinetest(max(df$x), deriv = 1)
[1] 0.004413552
This agrees with your manual back-calculation of the slope used to do the extrapolation.
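As a sanity check (using the objects defined above), the extrapolated value can be reconstructed by hand as the boundary value plus the boundary derivative times the distance beyond the boundary:
# y(50) + (51 - 50) * y'(50), i.e. linear extrapolation from the last knot
splinetest(max(df$x)) + (51 - max(df$x)) * splinetest(max(df$x), deriv = 1)
# [1] 3.667414, matching splinetest(51)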
As pointed out in the comments, plotting the end of the curve/data set with curve(splinetest, from = 30, to = 60); points(x, y) clearly illustrates the difference between the derivative at the boundary (x = 50) and the slope of the line through the last two data points, i.e. (y(50) - y(40))/10.

Related

Estimating PDF with monotonically declining density at tails

tldr: I am numerically estimating a PDF from simulated data, and I need the density to decrease monotonically outside of the 'main' density region (as x -> infinity). What I have yields a density close to zero there, but it does not decrease monotonically.
Detailed Problem
I am estimating a simulated maximum likelihood model, which requires me to numerically evaluate the probability distribution function of some random variable (the probability of which cannot be analytically derived) at some (observed) value x. The goal is to maximize the log-likelihood of these densities, which requires them to not have spurious local maxima.
Since I do not have an analytic likelihood function, I numerically simulate the random variable by drawing the random component from some known distribution function and applying some non-linear transformation to it. I save the results of this simulation in a dataset named Simulated_stats_.
I then use density() to approximate the PDF and approxfun() to evaluate the PDF at x:
#some example simulation
Simulated_stats_ <- runif(n = 500, 10, 15) + rnorm(n = 500, mean = 15, sd = 3)
#approximation for x
approxfun(density(Simulated_stats_))(x)
This works well within the range of Simulated_stats_, see image:
Example PDF. The problem is that I need to be able to evaluate the PDF far from the range of the simulated data.
So in the image above, I would need to evaluate the PDF at, say, x=50:
approxfun(density(Simulated_stats_))(50)
> [1] NA
So instead I use the from and to arguments of the density function, which correctly approximate the near-zero tails:
approxfun(
  density(Simulated_stats_, from = 0, to = max(Simulated_stats_) * 10)
)(50)
[1] 1.924343e-18
This is great, under one condition: the density must keep going to zero the further x is from the simulated range. That is, if I evaluated at x = 51 the result must be strictly smaller. (Otherwise my estimator may find local maxima far from the 'true' region, since the likelihood function is not monotonic very far from the 'main' density mass, i.e. in the extrapolated region.)
To test this I evaluated the approximated PDF at fixed intervals, took logs, and plotted. The result is discouraging: far from the main density mass the probability 'jumps' up and down. Always very close to zero, but NOT monotonically decreasing.
a <- sapply(X = seq(from = 0, to = 100, by = 0.5), FUN = function(x){
  approxfun(
    density(Simulated_stats_, from = 0, to = max(Simulated_stats_) * 10)
  )(x)
})
aa <- cbind( seq(from = 0, to = 100, by = 0.5), a)
plot(aa[,1],log(aa[,2]))
Result:
Non-monotonic log density far from density mass
My question
Does this happen because of the kernel estimation in density() or is it inaccuracies in approxfun()? (or something else?)
What alternative methods can I use that will deliver a monotonically declining PDF far from the simulated density mass?
Or - how can I manually change the approximated PDF to monotonically decline the further I am from the density mass? I would happily stick some linear trend that goes to zero...
Thanks!
One possibility is to estimate the CDF using a beta regression model; a numerical estimate of the derivative of this model can then be used to estimate the PDF at any point. Here's an example of what I was thinking. I'm not sure if it helps you at all.
Import libraries
library(mgcv)
library(data.table)
library(ggplot2)
Generate your data
set.seed(123)
Simulated_stats_ <- runif(n = 5000, 10, 15) + rnorm(n = 5000, mean = 15, sd = 3)
Function to estimate CDF using gam beta regression model
get_mod <- function(ss, p = seq(0.02, 0.98, 0.02)) {
  qp = quantile(ss, probs = p)
  betamod = mgcv::gam(p ~ s(qp, bs = "cs"), family = mgcv::betar())
  return(betamod)
}
betamod <- get_mod(Simulated_stats_)
Very basic estimate of PDF at val given model that estimates CDF
est_pdf <- function(val, betamod, tol = 0.001) {
  # finite-difference estimate of the slope of the fitted CDF at val
  xvals = c(val, val + tol)
  yvals = predict(betamod, newdata = data.frame(qp = xvals), type = "response")
  as.numeric((yvals[1] - yvals[2]) / (xvals[1] - xvals[2]))
}
Let's check whether the estimate is monotonically increasing below the minimum of Simulated_stats_:
test_x = seq(0,min(Simulated_stats_), length.out=1000)
pdf = sapply(test_x, est_pdf, betamod=betamod)
all(pdf == cummax(pdf))
[1] TRUE
Let's check whether it is monotonically decreasing above the maximum of Simulated_stats_:
test_x = seq(max(Simulated_stats_), 60, length.out=1000)
pdf = sapply(test_x, est_pdf, betamod=betamod)
all(pdf == cummin(pdf))
[1] TRUE
Additional thoughts 3/5/22
As discussed in comments, using the betamod to predict might slow down the estimator. While this could be resolved to a great extent by writing your own predict function directly, there is another possible shortcut.
Generate estimates from the betamod over the range of X, including the extremes
k <- sapply(seq(0,max(Simulated_stats_)*10, length.out=5000), est_pdf, betamod=betamod)
Use the approach you were initially using, i.e. a linear interpolation, but rather than applying it to the density() output, apply it to k (i.e. to the above estimates from the beta model):
lin_int = approxfun(x=seq(0,max(Simulated_stats_)*10, length.out=5000),y=k)
You can use the lin_int() function for prediction in the estimator, and it will be lightning fast. Note that it produces virtually the same value for a given x:
c(est_pdf(38,betamod), lin_int(38))
[1] 0.001245894 0.001245968
and it is very fast
microbenchmark::microbenchmark(
  list = alist("betamod" = est_pdf(38, betamod), "lin_int" = lin_int(38)), times = 100
)
Unit: microseconds
    expr    min      lq     mean  median      uq    max neval
 betamod 1157.0 1170.20 1223.304 1188.25 1211.05 2799.8   100
 lin_int    1.7    2.25    3.503    4.35    4.50   10.5   100
Finally, let's check the same plot you made before, but using lin_int() instead of approxfun(density(....)):
a <- sapply(X = seq(from = 0, to = 100, by = 0.5), lin_int)
aa <- cbind( seq(from = 0, to = 100, by = 0.5), a)
plot(aa[,1],log(aa[,2]))
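If you want to double-check monotonicity in the extrapolated region with lin_int() as well, here is a quick sketch; it should return TRUE whenever the underlying estimates from the beta model are monotone over that range:
xs <- seq(max(Simulated_stats_), 100, by = 0.5)
all(diff(lin_int(xs)) <= 0)  # TRUE if the tail is monotonically decreasing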

GAM with "gp" smoother: how to retrieve the variogram parameters?

I am using the following geoadditive model
library(gamair)
library(mgcv)
data(mack)
mack$log.net.area <- log(mack$net.area)
gm2 <- gam(egg.count ~ s(lon, lat, bs = "gp", k = 100, m = c(2, 10, 1)) +
             s(I(b.depth^.5)) +
             s(c.dist) +
             s(temp.20m) +
             offset(log.net.area),
           data = mack, family = tw, method = "REML")
Here I am using an exponential covariance function with range = 10 and power = 1 (m=c(2,10,1)). How can I retrieve from the results the variogram parameters (nugget, sill)? I couldn't find anything in the model output.
In the smoothing approach the correlation matrix is fixed in advance, so you only estimate the variance parameter, i.e. the sill. For example, you've passed m = c(2, 10, 1) to s(, bs = 'gp'), giving an exponential correlation function with range parameter phi = 10. Note that phi is not identical to the range, except for the spherical correlation model; for many correlation models the actual range is a function of phi.
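To illustrate that last point (a hedged sketch, not output from the model above): for the exponential model the correlation at distance d is exp(-d / phi), so the "practical range" at which the correlation drops to 0.05 is about 3 * phi, not phi itself.
phi <- 10
d <- seq(0, 60, length.out = 200)
plot(d, exp(-d / phi), type = "l", ylab = "correlation")
abline(v = -phi * log(0.05), lty = 2)  # practical range: -phi * log(0.05), roughly 30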
The variance/sill parameter is closely related to the smoothing parameter in penalized regression, and you can obtain it by dividing the scale parameter by the smoothing parameter:
with(gm2, scale / sp["s(lon,lat)"])
#s(lon,lat)
# 26.20877
Is this right? No. There is a trap here: the smoothing parameters returned in $sp are not the real ones, and we need the following rescaling:
gm2_sill <- with(gm2, scale / sp["s(lon,lat)"] * smooth[[1]]$S.scale)
#s(lon,lat)
# 7.7772
And we copy in the range parameter you've specified:
gm2_phi <- 10
The nugget must be zero, since a smooth function is continuous. Using the lines.variomodel function from the geoR package, you can visualize the semivariogram of the latent Gaussian spatial random field modeled by s(lon,lat).
library(geoR)
lines.variomodel(cov.model = "exponential", cov.pars = c(gm2_sill, gm2_phi),
                 nugget = 0, max.dist = 60)
abline(h = gm2_sill, lty = 2)
However, be skeptical of this variogram; mgcv does not make geostatistical interpretation easy. The use of low-rank smoothers means that the above variance parameter applies to the coefficients in the new, reduced parameter space rather than the original one. For example, there are 630 unique spatial locations in the mack dataset, so the correlation matrix should be 630 x 630 and the full random-effects vector of length 630. But setting k = 100 in s(, bs = 'gp') triggers a truncated eigendecomposition and a low-rank approximation that reduce the random effects to length 100. The variance parameter really applies to this reduced vector, not the original one. This may explain why the sill and the actual range do not agree with the data and the predicted s(lon,lat).
## unique locations
loc <- unique(mack[, c("lon", "lat")])
max(dist(loc))
#[1] 15.98
The maximum distance between two spatial locations in the dataset is 15.98, but the actual range from the variogram seems to be somewhere between 40 and 60, which is too large.
## predict `s(lon, lat)`, using the method I told you in your last question
## https://stackoverflow.com/q/51634953/4891738
sp <- predict(gm2,
              data.frame(loc, b.depth = 0, c.dist = 0, temp.20m = 0,
                         log.net.area = 0),
              type = "terms", terms = "s(lon,lat)")
c(var(sp))
#[1] 1.587126
The predicted s(lon,lat) only has variance 1.587, while the sill at 7.77 is much higher.

Sample from a custom likelihood function

I have the following likelihood function which I used in a rather complex model (in practice on a log scale):
library(plyr)
dcustom = function(x, sd, L, R){
  R. = (log(R) - log(x)) / sd
  L. = (log(L) - log(x)) / sd
  ll = pnorm(R.) - pnorm(L.)
  return(ll)
}
df = data.frame(Range = seq(100, 500), sd = rep(0.1, 401), L = 200, R = 400)
df = mutate(df, Likelihood = dcustom(Range, sd, L, R))
with(df,plot(Range,Likelihood,type='l'))
abline(v=200)
abline(v=400)
In this function, sd is predetermined and L and R are "observations" (very much like the endpoints of a uniform distribution), so all three are given. The function yields a likelihood close to 1 if the model estimate x (a derived parameter) is between L and R, a smooth decrease (between 0 and 1) near the bounds (whose sharpness depends on sd), and 0 if x is too far outside.
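A quick sanity check of that behaviour (values below are approximate):
dcustom(c(100, 300, 500), sd = 0.1, L = 200, R = 400)
# roughly 0 (far below L), 0.998 (inside), 0.013 (just above R)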
This function works very well to obtain estimates of x, but now I would like to do the inverse: draw a random x from the above function. If I would do this many times, I would generate a histogram that follows the shape of the curve plotted above.
The ultimate goal is to do this in C++, but I think it would be easier for me if I could first figure out how to do this in R.
There's some useful information online that helps me start (http://matlabtricks.com/post-44/generate-random-numbers-with-a-given-distribution, https://stats.stackexchange.com/questions/88697/sample-from-a-custom-continuous-distribution-in-r) but I'm still not entirely sure how to do it and how to code it.
I presume (not sure at all!) the steps are:
transform likelihood function into probability distribution
calculate the cumulative distribution function
inverse transform sampling
Is this correct and if so, how do I code this? Thank you.
One idea might be to use the Metropolis-Hastings algorithm to obtain a sample from the distribution, given all the other parameters and your likelihood.
# Metropolis-Hastings algorithm
set.seed(2018)
n_sample <- 100000
posterior_sample <- rep(NA, n_sample)
x <- 300 # starting value: I chose 300 based on your likelihood plot
for (i in 1:n_sample){
  lik <- dcustom(x = x, sd = 0.1, L = 200, R = 400)
  # propose a value for x (you can adjust the step size with the sd)
  x.proposed <- x + rnorm(1, 0, sd = 20)
  lik.proposed <- dcustom(x = x.proposed, sd = 0.1, L = 200, R = 400)
  r <- lik.proposed / lik # acceptance ratio (the proposal is symmetric)
  # accept the new value with probability min(1, r)
  if (runif(1) < r) {
    x <- x.proposed
  }
  # record the current state every iteration; on rejection the chain stays
  # at x (recording only accepted draws would bias the sample)
  posterior_sample[i] <- x
}
# plotting the density
approximate_distr <- na.omit(posterior_sample)
d <- density(approximate_distr)
plot(d, main = "Sample from distribution")
abline(v=200)
abline(v=400)
# If you now want to sample just a few values (for example, 5) you could use
sample(approximate_distr,5)
#[1] 281.7310 371.2317 378.0504 342.5199 412.3302
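Alternatively, following the steps you outlined (normalize the likelihood, build the CDF, then invert it), a minimal inverse-transform-sampling sketch might look like the following. The grid bounds are an assumption based on your plot, since the likelihood is essentially zero outside them:
grid <- seq(100, 500, by = 0.1)
step <- 0.1
dens <- dcustom(grid, sd = 0.1, L = 200, R = 400)
dens <- dens / sum(dens * step)   # normalize so the density integrates to ~1
cdf  <- cumsum(dens * step)       # numerical CDF on the grid
keep <- c(TRUE, diff(cdf) > 0)    # drop flat spots so the CDF is invertible
inv_cdf <- approxfun(cdf[keep], grid[keep], rule = 2)
samples <- inv_cdf(runif(10000))  # inverse transform sampling
hist(samples, breaks = 100)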

How to find confidence interval in 3d plots in cran R?

I have this 3d plot that I drew in R:
rb = rep(seq(0.1, 1, 0.1), 10)
ro = sort(rb)
lods = runif(100) #create a random LOD score
library(scatterplot3d)
lodsplot<- scatterplot3d(rb, ro, lods)
I found the maximum of the LOD score using max(lods) and thus found the corresponding rb and ro. Now I want to find the 95% CI of rb and ro. Assuming max(lods) = 0.8 and the corresponding rb and ro are 0.2 and 0.3, I thought of drawing a plane using:
lodsplot$plane3d(c(0.2, 0.3, 0.8))
and then find points above the plane (which I don't know how to do). Am I thinking correctly? Thank you!
Note:
If I just do a 2d plot, this is how I would do it:
plot(rb, lods, type = "l")
which(lods == max(lods))
limit = max(lods) - 1.92
abline(h = limit)
# Find intersect points:
above <- lods > limit
intersect.points <- which(diff(above) != 0)
You need to find the points that are above your plane defining the hypothesized 95% upper bound which you are suggesting has the equation:
lods = 0.2+ 0.3*rb+ 0.8*ro
So calculate the item numbers for the points satisfying the implicit inequality:
high <- which(lods > 0.2+ 0.3*rb+ 0.8*ro)
And plot:
png()
lodsplot <- scatterplot3d(rb, ro, lods)
high <- which(lods > 0.2 + 0.3*rb + 0.8*ro)
lodsplot$plane3d(c(0.2, 0.3, 0.8))
lodsplot$points3d(rb[high], ro[high], lods[high], col = "red")
dev.off()
Notice that the plane3d function in scatterplot3d also accepts a result from lm or glm, so you could first fit a model lods ~ rb + ro, compute a 95% prediction surface (for lm, via predict(..., interval = "prediction")), and color the points using this method. See: predict and multiplicative variables / interaction terms in probit regressions for a worked example of an equivalent procedure on an admittedly more complex model.
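A minimal sketch of that idea, assuming a plain linear model is appropriate for your LOD scores:
fit <- lm(lods ~ rb + ro)
# upper bound of the 95% prediction interval at each observed point
pred <- predict(fit, newdata = data.frame(rb = rb, ro = ro),
                interval = "prediction", level = 0.95)
high <- which(lods > pred[, "upr"])
lodsplot <- scatterplot3d(rb, ro, lods)
lodsplot$plane3d(fit)  # plane3d accepts the lm fit directly
lodsplot$points3d(rb[high], ro[high], lods[high], col = "red")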
You can also do a search on [r] prediction surface and find other potentially useful answers such as this BenBolker suggestion to use rgl: "A: scatterplot3d for Response Surface in R"

How to perform nls for three matrices?

If we have these matrices as an example:
x = matrix(runif(25), 5, 5)
z = matrix(runif(25), 5, 5)
y = exp(0.5*x + 0.2*z + 0.3)^1
The values in these three matrices change from one day to another, so (speaking of my real data) I have two years of daily matrices (x1, x2, x3, etc.). I want to fit the equation y = exp(s1*x + s2*z + s3)^s4, using normal nls as:
fit <- nls(y ~ exp(s1*x + s2*z + s3)^s4,
           data = data,
           start = list(s1 = 0.5, s2 = 0.2, s3 = 0.3, s4 = 1))
but my problem is that I want to do this fit for each set of corresponding cells across the three matrices, for example starting with:
> all values of x1[1,1], x2[1,1], ... etc.
[1] 0.3617776 .......etc
> and the corresponding z1[1,1], z2[1,1], ... etc.
[1] 0.5544851 .......etc
> y1[1,1], y2[1,1], etc. ...
[1] 1.807213 .......etc
Find the fit parameters, then do the same for all other corresponding cells. Finally I would get one matrix with, for example, the values of s1, another matrix for s2, and so on.
Any help is appreciated. (Note: I have to use the equation as it is, not linearise it or use lm.fit or lm.)
I think the x, y, z as matrices are throwing you off. As stated by MrFlick in the comments, you need multiple observations to perform the regression. Here is an example of nls applied to a simple 3d normal surface with some noise.
You need an n-by-3 matrix of observations in 3d space. Here I make some noisy points along a Gaussian curve and rotate it to form a surface.
This example uses the rgl package.
library(rgl)
z_rotate <- function(mat, rads){
  # rotation matrix about the z-axis
  rot.mat <- matrix(c(cos(rads), -sin(rads), 0,
                      sin(rads),  cos(rads), 0,
                      0,          0,         1), nrow = 3, ncol = 3)
  mat %*% rot.mat
}
x <- seq(-2, 2, 0.1)
x <- x + rnorm(length(x), sd = 0.5)
y <- seq(-2, 2, 0.1)
#y <- y + rnorm(length(y), sd = 0.5)
# note: rnorm(1) adds a single constant offset to every z value, which the
# intercept s4 can absorb exactly (hence the near-zero residuals later)
z <- 0.5*exp(-x^2/2 + (-y^2/2)) + rnorm(1, sd = 0.6)
m <- as.matrix(cbind(x, y, z))
Now the matrix m just holds the initial points along the normal curve.
points <- Reduce(rbind,lapply(1:8,function(n){z_rotate(m,n*(pi/8))}))
colnames(points) <- c("x","y","z")
The previous command just calls the rotate function then appends the results as new rows in the matrix.
Now the points matrix represents the observations to be approximated with our nls fit.
> dim(points)
[1] 328 3
Now we can view these points in 3d with RGL.
plot3d(points[,1],points[,2],points[,3],type='s',size=0.3)
Use nls to fit a function that approximates this data.
fit <- nls(z ~ s1*exp(-s2*x^2 + (-s3*y^2)) + s4,
           data = data.frame(points),
           start = list(s1 = 0.3, s2 = 0.3, s3 = 0.3, s4 = 0.3),
           #trace=TRUE,
           control = list(warnOnly = TRUE))
It produces this fit:
> fit
Nonlinear regression model
  model: z ~ s1 * exp(-s2 * x^2 + (-s3 * y^2)) + s4
   data: data.frame(points)
     s1      s2      s3      s4
 0.5000  0.5000  0.5000 -0.1084
 residual sum-of-squares: 1.493e-31
Number of iterations till stop: 50
Achieved convergence tolerance: 0.05046
Reason stopped: number of iterations exceeded maximum of 50
We can see that it recovers the values we used, namely 0.5 for the coefficients, when we compare the two formulas:
z <- 0.5*exp(-x^2/2 + (-y^2/2)) + rnorm(1,sd=0.6)
z ~ s1*exp(-s2*x^2 + (-s3*y^2)) +s4
I know it's a toy example, but it might give you some ideas as to how to set up your data. I think you will need to structure your data as an n-by-3 observation matrix to be able to do your regression with nls.
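To connect this back to your per-cell question: note that since exp(a)^s4 = exp(s4*a), s4 cannot be estimated separately from s1, s2 and s3 in your equation (the gradient is always singular), so the sketch below absorbs it into the other parameters. Assuming your daily matrices are stacked into 5 x 5 x n_days arrays (xa, za and ya are hypothetical names), you could loop over the cells, fit one nls per cell, and store each parameter in its own result matrix:
# hypothetical arrays: one 5 x 5 slice per day
set.seed(1)
n_days <- 730
xa <- array(runif(25 * n_days), dim = c(5, 5, n_days))
za <- array(runif(25 * n_days), dim = c(5, 5, n_days))
ya <- exp(0.5*xa + 0.2*za + 0.3) * exp(rnorm(25 * n_days, sd = 0.01))
s1 <- s2 <- s3 <- matrix(NA, 5, 5)  # one result matrix per parameter
for (i in 1:5) {
  for (j in 1:5) {
    d <- data.frame(x = xa[i, j, ], z = za[i, j, ], y = ya[i, j, ])
    cellfit <- try(nls(y ~ exp(s1*x + s2*z + s3), data = d,
                       start = list(s1 = 0.5, s2 = 0.2, s3 = 0.3)),
                   silent = TRUE)
    if (!inherits(cellfit, "try-error")) {
      s1[i, j] <- coef(cellfit)["s1"]
      s2[i, j] <- coef(cellfit)["s2"]
      s3[i, j] <- coef(cellfit)["s3"]
    }
  }
}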
