Grid too small for kernelUD / getverticeshr / adehabitatHR home range estimation in R

The documentation for adehabitatHR recommends the following code for calculating a 95% kernel home range after creating the UD object:
## Calculation of the 95 percent home range
ver <- getverticeshr(ud, 95)
For some of my data, the following error appears:
Error in getverticeshr.estUD(x[[i]], percent, ida = names(x)[i], unin, :
The grid is too small to allow the estimation of home-range.
You should rerun kernelUD with a larger extent parameter
On a Nabble forum people recommended changing the "grid" and "extent" inputs, but I wasn't able to get any better results after trying numerous combinations of these two parameters. Any suggestions?

This is a common issue I've found in some forums, but the answer is simple and is exactly in the error message: you need to extend your grid. The error occurs because, when you apply getverticeshr(ud, 95), part of the polygon falls outside the grid, so its area cannot be computed.
For example, in the code below, a KDE is estimated for two hypothetical animals. I use random points from 0 to 100, so I have defined a 100x100 grid (the domain).
#"""
# Language: R script
# This is a temporary script file.
#"""
# 1. Packages
library(adehabitatHR) # Package for spatial analysis
# 2. Empty Dataframe
points <- data.frame(ID = double())
XY_cor <- data.frame(X = double(),
                     Y = double())
# 3. Assigning values (this will be our spatial coordinates)
set.seed(17)
for(i in c(1:100)){
  if(i >= 50){points[i, 1] <- 1}
  else {points[i, 1] <- 2}
  XY_cor[i, 1] <- runif(1, 0, 100)
  XY_cor[i, 2] <- runif(1, 0, 100)}
# 4. Transform to a SpatialPointsDataFrame
coordinates(points) <- XY_cor[, c("X", "Y")]
class(points)
# 5. Domain
x <- seq(0, 100, by=1.) # resolution is the pixel size you desire
y <- seq(0, 100, by=1.)
xy <- expand.grid(x=x,y=y)
coordinates(xy) <- ~x+y
gridded(xy) <- TRUE
class(xy)
# 6. Kernel Density
kud_points <- kernelUD(points, h = "href", grid = xy)
image(kud_points)
# 7. Get the volume
vud_points <- getvolumeUD(kud_points)
# 8. Get contour
levels <- c(50, 75, 95)
list <- vector(mode="list", length = 2)
list[[1]] <- as.image.SpatialGridDataFrame(vud_points[[1]])
list[[2]] <- as.image.SpatialGridDataFrame(vud_points[[2]])
# 9. Plot
par(mfrow = c(2, 1))
image(vud_points[[1]])
contour(list[[1]], add=TRUE, levels=levels)
image(vud_points[[2]])
contour(list[[2]], add=TRUE, levels=levels)
The plot shows that the 50% contour lies inside the grid, but the 75% contour is cut off, which means part of it falls outside.
If you try to estimate the KDE vertices at the 50% level you will obtain a valid result:
# 10. Get vertices (It will be fine)
vkde_points <- getverticeshr(kud_points, percent = 50,
unin = 'm', unout='m2')
plot(vkde_points)
But if you try the 75% level you will obtain the classic error: Error in getverticeshr.estUD(x[[i]], percent, ida = names(x)[i], unin, :
The grid is too small to allow the estimation of home-range.
You should rerun kernelUD with a larger extent parameter
# 10. Get vertices (Will be an Error)
vkde_points <- getverticeshr(kud_points, percent = 75,
unin = 'm', unout='m2')
plot(vkde_points)
Now you can see clearly what is happening: R can't estimate the 75% vertices because they fall outside the grid, so you need to enlarge the domain (grid). Here I enlarge the domain by 50 units on each side of x and y (see # 5. Domain).
# 5. Domain: here the grid is extended by 50 in x and y
x <- seq(-50, 150, by=1.) # resolution is the pixel size you desire
y <- seq(-50, 150, by=1.)
xy <- expand.grid(x=x,y=y)
coordinates(xy) <- ~x+y
gridded(xy) <- TRUE
class(xy)
# 6. Kernel Density
kud_points <- kernelUD(points, h = "href", grid = xy)
image(kud_points)
# 7. Get the volume
vud_points <- getvolumeUD(kud_points)
# 8. Get contour
levels <- c(50, 75, 95)
list <- vector(mode="list", length = 2)
list[[1]] <- as.image.SpatialGridDataFrame(vud_points[[1]])
list[[2]] <- as.image.SpatialGridDataFrame(vud_points[[2]])
# 9. Plot
par(mfrow = c(2, 1))
image(vud_points[[1]])
contour(list[[1]], add=TRUE, levels=levels)
image(vud_points[[2]])
contour(list[[2]], add=TRUE, levels=levels)
You can see that all the contours are now inside the grid (domain), so you will be able to estimate the vertices.
# 10. Get vertices
vkde_points <- getverticeshr(kud_points, percent = 75,
unin = 'm', unout='m2')
plot(vkde_points)
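As an aside, instead of building the grid by hand you can often achieve the same thing by letting kernelUD construct it, via the grid and extent parameters the error message refers to (grid as a number of pixels per side, extent as a multiplier that pads the bounding box of the relocations). A minimal sketch with illustrative values; tune them to your data:
# 6-alt. Kernel density with an automatically built, enlarged grid
kud_points2 <- kernelUD(points, h = "href", grid = 200, extent = 2)
vkde_points2 <- getverticeshr(kud_points2, percent = 75, unin = 'm', unout = 'm2')
plot(vkde_points2)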

Related

Calculate the average of 2 'image's in R when the resolution is not the same

I have an image in R that is formed by X and Y plus a Z value in matrix format, and I use image to display it. I'd like to calculate the average of the two images where they overlap so I can join them together. There might be a way to use spatial packages for that, but I can't figure out how. The major problem is that the two Z matrices do not have the same number of columns, which makes comparing them difficult.
par(mfrow=c(1,1))
x1 = seq(1,10, by =1)
y1 = seq(1,10, by =1)
z1 = outer(x1,y1)
x2 = seq(7,12, by =.01)
y2 = seq(5,12, by =.01)
z2 = outer(x2,y2, FUN = "*")
image(x1,y1,z1, xlim = range(c(x1,x2)), ylim = range(c(y1,y2)), asp = 1)
image(x2,y2,z2, add =TRUE)
get.x.1 = x1 > min(x2)
get.y.1 = y1 > min(y2)
get.x.2 = x2 < max(x1)
get.y.2 = y2 < max(y1)
segments(min(x2),min(y2),max(x1),min(y2), lwd = 2)
segments(max(x1),min(y2),max(x1),max(y1), lwd = 2)
segments(max(x1),max(y1),min(x2),max(y1), lwd = 2)
segments(min(x2),max(y1),min(x2),min(y2), lwd = 2)
I'd like to average the two images below where they overlap (in the black rectangle added using the segments):
1) first without a buffer, and
2) with a buffer (so that the buffer takes into account more surrounding values as a way to smooth the averaging process).
The end result should be a combined Z matrix with the averaged part but also the parts of the images that do not overlap.
Perhaps a terra approach, with glaring deviations from your presented data that might not serve your workflow, and in this case the result is only an approximation of 'buffered'. In either case below, the difference in dim must be addressed:
library(terra)
x1a = seq(1,10, by =.01) # first glaring deviation
y1a = seq(1,10, by =.01) # and second
z1a = outer(x1a,y1a)
x1b = seq(1,10,by = 1)
y1b = seq(1,10,by = 1)
z1b = outer(x1b,y1b)
x2 = seq(7,12, by =.01)
y2 = seq(5,12, by =.01)
z2 = outer(x2,y2, FUN = "*")
z1a_apply <- apply(z1a, 2, FUN = 'rev') # get value gradients reversed
z2_apply <- apply(z2, 2, FUN = 'rev') # get value gradients reversed
z1a_rast <- rast(z1a_apply)
z2_rast <- rast(z2_apply)
# these leave origins at (0,0) which is not the case
ext(z2_rast) <- c(701, 1402, 501, 1002)
z1a_z2_crop_ext <- ext(crop(z1a_rast, z2_rast))
z1a_crop <- crop(z1a_rast, z1a_z2_crop_ext)
z2_crop <- crop(z2_rast, z1a_z2_crop_ext)
z1a_z2_mean <- app(c(z1a_crop, z2_crop), mean)
z_sprc <- sprc(z1a_z2_mean, z1a_rast, z2_rast)
z_merge <- merge(z_sprc)
plot(z_merge)
And as will be seen, my fontconfig is broken. So, a partial approach so far.
# picking up with z1b
x1b = seq(1,10,by = 1)
y1b = seq(1,10,by = 1)
z1b = outer(x1b,y1b)
z1b_rast <- rast(z1b)
dim(z1b_rast) <- c(1000, 1000)
# z1b_rast has lost all values
values(z1b_rast) <- outer(c(1:1000),c(1:1000))
# but here the gradient is wrong with high values lower right
z1b_flip <- flip(z1b_rast)
# a picture of a cat might not survive this treatment
# extent, resolution, and origin also have to be adjusted
ext(z1b_flip) <- c(0,1000,0,1000)
res(z1b_flip) <- c(1,1)
origin(z1b_flip) <- c(0.5, 0.5)# should have been done on z2
The subtleties of the 'polygon' approach via line segments are addressed next. To better approximate your plot above, points are inset from your segments by -50.
library(sf)
# using your segments
pts1 <- matrix(c(min(x2),min(y2),max(x1a),min(y2)), nrow = 2, byrow = TRUE)*100
pts2 <- matrix(c(max(x1a),min(y2),max(x1a),max(y1a)), nrow = 2, byrow = TRUE)*100
pts3 <- matrix(c(max(x1a),max(y1a),min(x2),max(y1a)),nrow =2, byrow = TRUE)*100
pts4 <- matrix(c(min(x2),max(y1a),min(x2),min(y2)),nrow = 2, byrow=TRUE)*100
# one point from each line, all inset by 50
pts1_2 <- pts1[2, ] -50
pts2_2 <- pts2[2, ] -50
pts3_2 <- pts3[2, ] -50
pts4_2 <- pts4[2, ] -50
We later find this isn't actually useful compensation, as the resulting poly falls outside our z2_rast; to avoid general madness it is better to have a poly that sits within both rast(s) to pull our mean from.
And so adjust our points...
pts1_2[2] <- pts1_2[2] + 75
pts4_2 <- pts4_2 + 75
pts3_2[1] <- pts3_2[1] + 75
poly4 <- st_cast(c(st_point(pts1_2), st_point(pts2_2), st_point(pts3_2), st_point(pts4_2)), 'POLYGON')
#make poly4 a SpatVector
poly4_vect <- vect(poly4)
z1b_poly4_crop <- crop(z1b_flip, poly4_vect)
z2_poly4_crop <- crop(z2_rast, poly4_vect)
#check for damages
all.equal(dim(z1b_poly4_crop), dim(z2_poly4_crop))
[1] TRUE
# Oh, hurray!, But
z1b_z2_poly4_mean <- app(c(z1b_poly4_crop, z2_poly4_crop), mean)
Error in h(simpleError(msg, call)) :
error in evaluating the argument 'x' in selecting a method for function 'app': [rast] extents do not match
ext(z1b_poly4_crop)
SpatExtent : 725.5, 950.5, 525.5, 950.5 (xmin, xmax, ymin, ymax)
ext(z2_poly4_crop)
SpatExtent : 725, 950, 525, 950 (xmin, xmax, ymin, ymax)
# so adjust origin on z2_poly4_crop, or better, on z2 before...
origin(z2_rast)
[1] 0 0
origin(z2_rast) <- c(0.5, 0.5)
z2_poly4_crop <- crop(z2_rast, poly4_vect)
ext(z2_poly4_crop)
SpatExtent : 725.5, 950.5, 525.5, 950.5 (xmin, xmax, ymin, ymax)
# and now can pull mean
z1b_z2_poly4_mean <- app(c(z1b_poly4_crop, z2_poly4_crop), mean)
###
mean_z1b_flip_z2 <- sprc(z1b_z2_poly4_mean, z1b_flip, z2_rast)
mean_z1b_flip_z2_sprc <- sprc(z1b_z2_poly4_mean, z1b_flip, z2_rast)
mean_flip_z2_merge <- merge(mean_z1b_flip_z2_sprc)
# or mosaic - mean goes last and plot
Lots of good fun and rabbit holes to stub one's toes on. There are likely much more compact approaches that others might offer. I imagine that much of this could also be approached through magick.
It appears that magick is not agnostic on the importance of 'dim'[s], albeit more compact:
library(magick)
Linking to ImageMagick 7.1.0.46
Enabled features: cairo, fontconfig, freetype, fftw, heic, lcms, pango, raw, rsvg, webp, x11
Disabled features: ghostscript
Using 4 threads
# again mangling your z1 z2
z1 <- z1/100
z1_rast <- as.raster(z1) # here grDevices::as.raster
z1_mgk <- image_read(z1_rast)
z2_1k <- z2/1000
z2_1k <- z2_1k + 0.5
z2_1k_rast <- as.raster(z2_1k)
z2_mgk <- image_read(z2_1k_rast)
happy_z12_2 <- image_composite(z2_mgk, z1_mgk, operator = 'blend', offset = '+500+150', compose_args = '50')
plot(happy_z12_2)
# a discerning eye can just make out the 10x10 'atop' and trust to 50%
Someone more experienced with magick will likely present a better approach. I was happy to get ImageMagick-7 built.
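Coming back to the earlier remark about more compact approaches: one hedged terra-only sketch on toy rasters (r1 and r2 here are illustrative stand-ins, not your z1/z2) is to resample one raster onto the other's grid and let mosaic() average the overlap:
library(terra)
r1 <- rast(nrows = 10, ncols = 10, xmin = 0, xmax = 10, ymin = 0, ymax = 10)
values(r1) <- 1
r2 <- rast(nrows = 50, ncols = 50, xmin = 5, xmax = 15, ymin = 5, ymax = 15)
values(r2) <- 3
tmpl <- extend(r1, ext(r2))                    # r1's grid enlarged so it also covers r2
r2_on_grid <- resample(r2, tmpl)               # put r2 on that grid (bilinear by default)
z_avg <- mosaic(r1, r2_on_grid, fun = "mean")  # mean where both have values, originals elsewhere
plot(z_avg)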

Some issues related to simulation from rLGCP and point process model fitting

I am trying to generate points with the rLGCP function. I assume that the presence of these points in the window is governed by two covariates, ras1 and ras2, hence I need to compute log-lambda.
rm(list= ls(all=T))
#Libraries
library(spatstat)
library(raster)
library(maptools)
library(fields)
Creating the domain D and two rasters
D <- c(300, 300) # Square domain D of side 300
Win <- owin(xrange =c(0, D[1]), yrange =c(0,D[2]))
spatstat.options(npixel=c(D[1],D[2]))
ext <- extent(Win$xrange, Win$yrange) # Extent of the rasters
# First raster ras1
par(mfrow=c(1,1))
ras1 <- raster()
extent(ras1) <- ext
res(ras1) <- 10
names(ras1) <- 'Radiation sim'
crs(ras1) <- "+proj=lcc +lat_1=48 +lat_2=33 +lon_0=-100 +datum=WGS84"
values(ras1) <- matrix(c(seq(from =0, to =50, length.out=200), seq(from=50, to=100, length.out = 100), seq(from=100, to=150, length.out = 200), seq(from=150, to=200, length.out = 200), seq(from=200, to=290, length.out = 200)), nrow = 30, ncol = 30)
ras1
plot(ras1, asp=1)
# Second Raster ras2
ras2 <- raster()
extent(ras2) <- ext
res(ras2) <- 10
names(ras2) <- 'Precipitation sim'
crs(ras2) <- "+proj=lcc +lat_1=48 +lat_2=33 +lon_0=-100 +datum=WGS84"
values(ras2) <- matrix(c(seq(from =-0, to =200, length.out=500), seq(from=400, to=893, length.out = 20), seq(from=200, to=300, length.out = 300),seq(from=300, to = 400, length.out=80)))
ras2
plot(ras2, asp=1)
Rasters.group <- stack(ras1, ras2)
plot(Rasters.group)
graphics.off()
From Rasters to im. objects
im.ras1 <- as.im.RasterLayer(ras1); summary(im.ras1)
im.ras2 <- as.im.RasterLayer(ras2); summary(im.ras2)
covar.list <- list(Radiation.sim=im.ras1, Precipitation.sim=im.ras2)
# plot .im object
par(mfrow=c(1,2))
image.plot(list(x=im.ras1$xcol, y=im.ras1$yrow, z=t(im.ras1$v)), main= "Radiation sim", asp=1)
image.plot(list(x=im.ras2$xcol, y=im.ras2$yrow, z=t(im.ras2$v)), main= "Precipitation sim", asp=1)
Now I can compute log-Lambda
#normalization
norm.im.ras1 <- (im.ras1- summary(im.ras1)$mean)/sd(im.ras1) ; summary(norm.im.ras1)
norm.im.ras2 <- (im.ras2- summary(im.ras2)$mean)/sd(im.ras2) ; summary(norm.im.ras2)
#Compute log-lambda
log.lambda <- norm.im.ras1 + 2*norm.im.ras2
summary(log.lambda)
The result displays very weak values:
Pixel values
range = [-4.657923, 10.94624]
integral = -9.678445e-12
mean = -1.075383e-16
When I try to simulate from rLGCP
gen.lgcp <- rLGCP("matern", mu=log.lambda, var=0.5, scale=0.05, nu=1)
Error: could not allocate a vector of size 181.9 MB
I tried to get around that with
log.lambda0 <- as.im(solutionset(log.lambda>0))
gen.lgcp <- rLGCP("matern", mu=log.lambda0, var=0.5, scale=0.05, nu=1)
summary(gen.lgcp)
I can move forward, but further on I did not get relevant results.
#Thinning
image.plot(list(x=log.lambda$xcol, y=log.lambda$yrow, z=t(log.lambda$v)), main= "log.lambda", asp=1)
samp.lgcp <- rthin(gen.lgcp, P=seq(from=0.02, to=0.2, length.out = gen.lgcp$n)); points(samp.lgcp$x, samp.lgcp$y, type = 'p', cex=0.2, lwd=1, col='white')
#point pattern
pts.locations <- as.data.frame(cbind(longitude=samp.lgcp$x, latitude=samp.lgcp$y))
ppp.lgcp <- ppp(pts.locations$longitude, pts.locations$latitude, window = owin(xrange=c(min(pts.locations [,1]),max(pts.locations [,1])), yrange = c(min(pts.locations[,2]),max(pts.locations[,2]))))
plot(ppp.lgcp)
#Extract value of each sampled point covariate
cov.value <- extract(Rasters.group, pts.locations)
cov.value <- as.data.frame(cov.value )
presence.data <- data.frame(pts.locations, cov.value, presence=rep(1, nrow(cov.value)))
### Choosing absence point pattern
abs.region <- crop(Virtual.species.domaine, extent(25.28486 , 162.2897 ,181.7417 , 280.7651 ))
im.abs.region <- as.im.RasterLayer(abs.region)
abs.points <- rasterToPoints(abs.region)
ppp.abs.points <- ppp(abs.points[,1], abs.points[,2], window = owin(xrange = c(min(abs.points[,1]), max(abs.points[,1])), yrange =c(min(abs.points[,2]), max(abs.points[,2]))))
plot(ppp.abs.points)
cov.value.abs <- extract(Rasters.group, abs.points[,1:2])
absence.data <- data.frame(abs.points[,1:2], cov.value.abs, presence=rep(0, nrow(abs.points)))
colnames(absence.data)[1:2] <- c("longitude", "latitude")
head(absence.data)
# Get database for LGCP
LGCP.Data.Set <- rbind(presence.data, absence.data)
#' Model
#' we will use non-stationary formula
covar.formula <- as.formula(paste("~", paste(names(LGCP.Data.Set[,3:4]), collapse = "+")))
#Quadrature scheme
Q.lgcp <- quadscheme(ppp.lgcp, ppp.abs.points, method = 'grid')
plot(Q.lgcp)
Warning message:
In countingweights(id, areas) :
some tiles with positive area do not contain any quadrature points: relative error = 94.2%
# Inhomogenous poisson process Model
fit.ipp <- ppm(Q.lgcp, trend = covar.formula, covariates = LGCP.Data.Set[,3:4])
summary(fit.ipp)
Warning message:
glm.fit: algorithm did not converge
What is going wrong?
My goal is to evaluate the model and then predict with
prediction.ipp <- predict.ppm(fit.ipp, log.lambda, type = 'intensity')
This is a very long and unfocused question, but I will try to help.
After constructing the image log.lambda, you say "result shows very weak values". What do you mean? The image values were assigned as a sequence of values ranging from 0 to 200, and then standardised to have mean zero and standard deviation 1. How is this "weak"?
You then call rLGCP using this image as the mean log intensity. The values of log.lambda range from about -4 to +10. This means that the desired intensity will range from exp(-4) to exp(+10), that is, about 0.01 to 20 000 points per square unit. The image dimensions are 30 by 30 units. Thus, a very large number of random points must be generated, and this fails because of memory limits. (The expected number of points is integral(exp(log.lambda)).)
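A quick, hedged way to see this before calling rLGCP is to compute that integral yourself with spatstat's integral() method for pixel images, using the log.lambda image built above:
mu.im <- exp(log.lambda)  # intensity surface implied by log.lambda
integral(mu.im)           # expected number of points; a huge value means rLGCP will exhaust memory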
You then change log.lambda to another image which takes only the values 0 and 1.
The next body of code appears to take a raster image (of "absence" pixels) and try to construct a quadrature scheme using the "absence" pixels as dummy points. This is not the purpose for which quadscheme is designed (for quadscheme the dummy points should be sparse).
You don't need to construct a quadrature scheme to use ppm. You could just do something like
D <- solist(A = im.ras1, B = im.ras2)
ppm(ppp.lgcp ~ A + B, data = D)
If you really want to construct a quadrature scheme, I suggest you use the function pixelquad instead. Just do pixelquad(ppp.lgcp, im.abs.region) or similar. Then use ppm.
Since the data were generated by a Cox process, it would be more appropriate to use kppm rather than ppm.
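A hedged sketch of that last suggestion, reusing the point pattern and the covariate list D from the ppm example above (the model form is illustrative, not the definitive analysis):
fit.lgcp <- kppm(ppp.lgcp ~ A + B, data = D, clusters = "LGCP")
summary(fit.lgcp)
plot(predict(fit.lgcp))  # fitted intensity surface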
See the spatstat book for further information.

r deSolve - plotting time evolution pde

Suppose we have a PDE that describes the evolution of a variable y(t,x) over time t and space x, and I would like to plot its evolution on a three-dimensional diagram (t, x, y). With deSolve I can solve the PDE, but I have no idea how to obtain this kind of diagram.
The example in the deSolve package documentation is the following, where y is aphid density, t = 0,...,200 and x = 1,...,60:
library(deSolve)
Aphid <- function(t, APHIDS, parameters) {
  deltax <- c(0.5, rep(1, numboxes - 1), 0.5)
  Flux <- -D * diff(c(0, APHIDS, 0)) / deltax
  dAPHIDS <- -diff(Flux) / delx + APHIDS * r
  list(dAPHIDS)
}
D <- 0.3 # m2/day diffusion rate
r <- 0.01 # /day net growth rate
delx <- 1 # m thickness of boxes
numboxes <- 60
Distance <- seq(from = 0.5, by = delx, length.out = numboxes)
APHIDS <- rep(0, times = numboxes)
APHIDS[30:31] <- 1
state <- c(APHIDS = APHIDS) # initialise state variables
times <-seq(0, 200, by = 1)
out <- ode.1D(state, times, Aphid, parms = 0, nspec = 1, names = "Aphid")
"out" produces a matrix containing all the data that we need, t, y(x1), y(x2), ... y(x60). How can I produce a surface plot to show the evolution and variability of y in (t,x)?
The ways change a bit depending on using package. But you can do it with little cost because out[,-1] is an ideal matrix form to draw surface. I showed two examples using rgl and plot3D package.
out2 <- out[,-1]
AphID <- 1:ncol(out2)
library(rgl)
persp3d(times, AphID, out2, col="gray50", zlab="y")
# If you want to change color with value of Z-axis
# persp3d(times, AphID, out2, zlab="y", col=topo.colors(256)[cut(c(out2), 256)])
library(plot3D)
mat <- mesh(times, AphID)
surf3D(mat$x, mat$y, out2, bty="f", ticktype="detailed", xlab="times", ylab="AphID", zlab="y")
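deSolve also provides, if I recall correctly, its own image() method for ode.1D output, which can draw this surface (and 2D views of it) directly; a hedged sketch reusing out and Distance from the question:
image(out, method = "persp", grid = Distance, xlab = "time, days", ylab = "Distance, m", main = "Aphid density")
# method = "filled.contour" or "contour" gives a 2D view of the same evolution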

Change axis and mirror graph to complete non-linear surface in R

I hope you can help me solve this issue; I've been trying different things but nothing has worked so far.
I have a 3D graph that uses a squared term (x2) on the x axis (values go from 0 to 100). The original x has positive and negative values (from -10 to 10), so in x2, and therefore on the X axis of my 3D graph, the values are all positive: x2 = 100 is obtained from x = (-10)^2 and x = 10^2, x2 = 25 comes from x = (-5)^2 and x = 5^2, and so on. I have only "half" of the graph and I would like to:
1) Have the graph with the original scale going from -10 to 10 on the X axis.
2) Complete the other half of the graph to have the non-linear relationship (i.e. to complete the surface that corresponds from -10 to 0, which I assume should be a mirror of the one I have right now).
Using different colours you can see the nonlinear relationship better, but I didn't include them here to simplify the code.
Since it is not possible to recover the negative values of x from x2 (the square root is always positive), I duplicated the data in Excel and added negative values (values now go from -100 to 100), then made an R list again. This is not a solution because it still has the same scale as x2, and in any case it doesn't work.
This is how I plot the graph:
Data: https://www.dropbox.com/s/fv943jf35eqtkd8/NSSH.csv?dl=0
link function code:
logexp <- function(days = 1)
{
  linkfun <- function(mu) qlogis(mu^(1/days))
  linkinv <- function(eta) plogis(eta)^days
  mu.eta <- function(eta) days * plogis(eta)^(days-1) *
    .Call("logit_mu_eta", eta, PACKAGE = "stats")
  valideta <- function(eta) TRUE
  link <- paste("logexp(", days, ")", sep="")
  structure(list(linkfun = linkfun, linkinv = linkinv,
                 mu.eta = mu.eta, valideta = valideta, name = link),
            class = "link-glm")
}
The 3D graph:
library(akima)
x <- NSSH$reLDM
x2<- x^2
y <- NSSH$yr
y2 <-y^2
n <-NSSH$AgeDay1
z <- NSSH$survive
m <- glm(z~x2+y+y2+x2:y+n,family=binomial(link=logexp(NSSH$exposure)))
# interaction
i <- 25
xtemp <- seq(min(x),max(x),length.out=i)
xrange <- rep(xtemp,times=i)
x2temp <- seq(min(0),max(100),length.out=i)
x2range <- rep(x2temp,times=i)
ytemp <- seq(min(y),max(y),length.out=i)
yrange <- rep(ytemp,each=i)
y2temp <- seq(min(y2),max(y2),length.out=i)
y2range <- rep(y2temp,each=i)
ntemp <- rep(mean(n),times=i)
nrange <- rep(ntemp,times=i)
newdata <- data.frame(x2=x2range,y=yrange,y2=y2range,n=nrange)
zhat <- predict(m,newdata=newdata)
NS <- zhat^27
xyz <- interp(x2range,yrange,NS)
quartz()
persp(xyz,
theta = 35, phi = 50,col="blue", border="grey40", ticktype = "detailed", zlim=c(0,1)) -> res2
Is there a way I can copy the "half" graph I have as a "mirror" and put it next to the part I already have, using the original scale of x?
Thanks a lot for your help!
UPDATE:
The 3D graph is perfect!
But when I use the "half graph" to make a contour plot it looks like this:
And now with the new graph it looks like this. I wonder why the region around 0, next to the value 0.7 (the area in the red circle), doesn't look the same as in the first contour plot. Do you have any idea? Is it possible to fix it? Thanks again.
this is the code of the contour plot:
image(xyz2,col = "white")
contour(xyz2,add=T)
I think you don't have to worry about the small details, with the exception that X and Y must be increasing and dim(Z) must equal c(length(X), length(Y)).
xyz2 <- interp(sqrt(x2range), yrange, NS) # change scale before interpolate
xyz2$x <- c(rev(xyz2$x)*-1, xyz2$x) # reverse and combine
xyz2$x[41] <- 1.0E-8 # because [40] = [41] = 0 (40 is interp's nx value)
xyz2$z <- rbind(apply(xyz2$z, 2, rev), xyz2$z) # reverse and combine
persp(xyz2,xlab="Relative laying date",ylab="Year",zlab="Nest success",
theta = 35, phi = 50,col="blue", border="grey40", ticktype = "detailed")
[EDITED]
I can't reproduce your additional question.
origin <- list(x = unique(x2range),
y = unique(yrange),
z = matrix(NS, ncol=length(unique(yrange))))
xyz <- interp(x2range,yrange,NS) # OP's code
image(origin, col = "white", xlim=c(-10,10), ylim=c(7, 24))
contour(origin, add=T, lwd=1.5, drawlabels=F) # no interp : black
contour(xyz, add=T, col=2, drawlabels=F) # OP's code : red
contour(x=sqrt(xyz$x), y=xyz$y, z=xyz$z, add=T, col=3, drawlabels=F) # only scale change : green
contour(xyz2, add=T, col=4, drawlabels=F) # my code : blue

Is it possible to sample from a conditional density in R given some conditional data?

In R, using the np package, I have created the bandwidths for a conditional density. What I would like to do is, given some new conditional vector, sample from the resulting distribution.
Current code:
library('np')
# Generate some test data.
somedata = data.frame(replicate(10,runif(100, 0, 1)))
# Conditional variables.
X <- data.frame(somedata[, c('X1', 'X2', 'X3')])
# Dependent variables.
Y <- data.frame(somedata[, c('X4', 'X5', 'X6')])
# Warning, this can be slow (but shouldn't be too bad).
bwsome = npcdensbw(xdat=X, ydat=Y)
# TODO: Given some vector t of conditional data, how can I sample from the resulting distribution?
I am quite new to R, so while I did read the package documentation, I haven't been able to figure out whether what I envision makes sense or is possible. If necessary, I would happily use a different package.
Here is Example 2.49 from https://cran.r-project.org/web/packages/np/vignettes/np_faq.pdf, which gives the following solution for 2 variables:
###
library(np)
data(faithful)
n <- nrow(faithful)
x1 <- faithful$eruptions
x2 <- faithful$waiting
## First compute the bandwidth vector
bw <- npudensbw(~x1 + x2, ckertype = "gaussian")
plot(bw, view = "fixed", ylim = c(0, 3))
## Next generate draws from the kernel density (Gaussian)
n.boot <- 1000
i.boot <- sample(1:n, n.boot, replace = TRUE)
x1.boot <- rnorm(n.boot,x1[i.boot],bw$bw[1])
x2.boot <- rnorm(n.boot,x2[i.boot],bw$bw[2])
## Plot the density for the bootstrap sample using the original
## bandwidths
plot(npudens(~x1.boot+x2.boot,bws=bw$bw), view = "fixed")
Following this hint from @coffeejunky, the following is a possible solution to your problem with 6 variables:
## Generate some test data.
somedata = data.frame(replicate(10, runif(100, 0, 1)))
## Conditional variables.
X <- data.frame(somedata[, c('X1', 'X2', 'X3')])
## Dependent variables.
Y <- data.frame(somedata[, c('X4', 'X5', 'X6')])
## First compute the bandwidth vector
n <- nrow(somedata)
bw <- npudensbw(~X$X1 + X$X2 + X$X3 + Y$X4 + Y$X5 + Y$X6, ckertype = "gaussian")
plot(bw, view = "fixed", ylim = c(0, 3))
## Next generate draws from the kernel density (Gaussian)
n.boot <- 1000
i.boot <- sample(1:n, n.boot, replace=TRUE)
x1.boot <- rnorm(n.boot, X$X1[i.boot], bw$bw[1])
x2.boot <- rnorm(n.boot, X$X2[i.boot], bw$bw[2])
x3.boot <- rnorm(n.boot, X$X3[i.boot], bw$bw[3])
x4.boot <- rnorm(n.boot, Y$X4[i.boot], bw$bw[4])
x5.boot <- rnorm(n.boot, Y$X5[i.boot], bw$bw[5])
x6.boot <- rnorm(n.boot, Y$X6[i.boot], bw$bw[6])
## Plot the density for the bootstrap sample using the original
## bandwidths
ob1 <- npudens(~x1.boot + x2.boot + x3.boot + x4.boot + x5.boot + x6.boot, bws = bw$bw)
plot(ob1, view = "fixed", ylim = c(0, 3))
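To address the original TODO (drawing Y given a specific conditioning vector t rather than from the joint), one option is to view the conditional KDE as a mixture: weight each observation by its Gaussian kernel weight at t in the X-space, resample rows with those weights, and jitter the selected Y rows with the Y bandwidths. This is a hedged sketch, not an np API call; it assumes Gaussian kernels and that the bandwidth object from npcdensbw exposes xbw and ybw components (which I believe the conbandwidth object does):
## Hypothetical sketch: sample from Y | X = t using the bandwidths in bwsome
t <- c(0.5, 0.5, 0.5)             # new conditioning vector for X1, X2, X3
bx <- bwsome$xbw                  # bandwidths for the conditioning variables
by <- bwsome$ybw                  # bandwidths for the dependent variables
## kernel weight of each observation at t (product Gaussian kernel in X-space)
w <- apply(X, 1, function(xi) prod(dnorm((t - xi) / bx)))
w <- w / sum(w)
n.boot <- 1000
i.boot <- sample(seq_len(nrow(Y)), n.boot, replace = TRUE, prob = w)
## jitter the resampled Y rows with the Y bandwidths
Y.boot <- Y[i.boot, ] + sapply(by, function(b) rnorm(n.boot, mean = 0, sd = b))
head(Y.boot)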
