Error when trying to image krig values - r
While trying to krig benzene values from the following data:
WELL.ID X Y BENZENE
1 MW-02 268.8155 282.83 0.00150
2 IW-06 271.6961 377.01 0.00050
3 IW-07 251.0236 300.41 0.01040
4 IW-08 278.9238 300.37 0.03190
5 MW-10 281.4008 414.15 2.04000
6 MW-12 391.3973 449.40 0.01350
7 MW-13 309.5307 335.55 0.01940
8 MW-15 372.8967 370.04 0.01620
9 MW-17 250.0000 428.04 0.01900
10 MW-24 424.4025 295.69 0.00780
11 MW-28 419.3205 250.00 0.00100
12 MW-29 352.9197 277.27 0.00031
13 MW-31 309.3174 370.92 0.17900
I generate a grid (for the property these wells reside on) like so:
setwd("C:/.....")
getwd()
require(geoR)
require(ggplot2)
a <- read.table("krigbenz_loc.csv", sep = ",", header = TRUE)
b <- data.matrix(a)
c <- as.geodata(b, coords.col = 2:3, data.col = 4)
ggplot(a, aes(x = X, y = Y, label = WELL.ID)) + geom_point(colour = "green") + geom_text(hjust = 0, vjust = 0)
x.range <- as.integer(range(a[,2]))
y.range <- as.integer(range(a[,3]))
x = seq(from=x.range[1], to=x.range[2], by=1)
y = seq(from=y.range[1], to=y.range[2], by=1)
length(x)
length(y)
xv <- rep(x,length(y))
yv <- rep(y, each=length(x))
in_mat <- as.matrix(cbind(xv, yv))
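As an aside, the same prediction grid can be built in one step with expand.grid, which varies its first argument fastest, matching the rep() calls above (a sketch, equivalent to xv/yv/in_mat):

```r
# expand.grid cycles x fastest, so this reproduces
# cbind(rep(x, length(y)), rep(y, each = length(x)))
in_mat <- as.matrix(expand.grid(xv = x, yv = y))
```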
Look at the variogram (not very pretty, but I'm working on it):
### variogram ###
## on geo-object
v1 <- variog(c)
length(v1$n)
v1.summary <- cbind(c(1:11), v1$v, v1$n)
colnames(v1.summary) <- c("lag", "semi-variance", "# of pairs")
v1.summary
plot(v1, type = "b", main = "Variogram: BENZENE at CRAIG BP")
Use ksline to generate krig values:
## variance of benzene readings = sd^2
sd <- sd(a$BENZENE)
var = sd^2
fitted_model <- variofit(vario=v1, ini.cov.pars=c(var, .29), cov.model='exp')
q <- ksline(c, cov.model=fitted_model$cov.model, cov.pars=fitted_model$cov.pars,
nugget=fitted_model$nugget, locations=in_mat)
But then it's hold the phones: an error when I try to image the results!
> image(q, val = q$predict)
Error in eval(x$call$geodata, envir = attr(x, "parent.env"))$borders :
object of type 'builtin' is not subsettable
This seems completely out of left field, as I have gone over this several times. I googled the error; it seems I am trying to subset a function, and 90% of the time the answer is that my syntax is wrong somewhere, but I have checked everything and I cannot figure it out. Any help would be greatly appreciated.
thanks
ZR
This looks like a bad evaluation situation in geoR. I mean, a bug!
If you rename your c object to something else, it works:
ccc <- c
q <- ksline(ccc, cov.model=fitted_model$cov.model, cov.pars=fitted_model$cov.pars,
nugget=fitted_model$nugget, locations=in_mat)
image(q) # now works
This would be because image.kriging is trying to get something from the original c object, but it's not evaluating it in the right context, so it gets base R's c function instead (the word "builtin" in the error was my clue here).
The ksline help also says:

     The function ‘krige.conv’ should be preferred, unless moving
     neighborhood is to be used.

so maybe you should try that - it might not have the same problem! Note that it takes a different set of arguments from ksline.
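For reference, a minimal sketch of the krige.conv route, reusing the renamed geodata object and the fitted variogram model from the question (untested here; argument names per geoR's krige.control):

```r
library(geoR)
# conventional kriging over the same prediction grid;
# the variogram parameters are passed via krige.control()
kc <- krige.conv(ccc, locations = in_mat,
                 krige = krige.control(cov.model = fitted_model$cov.model,
                                       cov.pars  = fitted_model$cov.pars,
                                       nugget    = fitted_model$nugget))
image(kc)  # kc$predict holds the kriged values
```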
Related
fill NA raster cells using focal defined by boundary
I have a raster and a shapefile. The raster contains NA and I am filling the NAs using the focal function:

library(terra)
v <- vect(system.file("ex/lux.shp", package="terra"))
r <- rast(system.file("ex/elev.tif", package="terra"))
r[45:60, 45:60] <- NA
r_fill <- terra::focal(r, 5, mean, na.policy="only", na.rm=TRUE)

However, there are some NAs still left, so I do this:

na_count <- terra::freq(r_fill, value = NA)
while(na_count$count != 0){
  r_fill <- terra::focal(r_fill, 5, mean, na.policy="only", na.rm=TRUE)
  na_count <- terra::freq(r_fill, value = NA)
}

Once all NAs are filled, I clip the raster again using the shapefile:

r_fill <- terra::crop(r_fill, v, mask = TRUE, touches = TRUE)

This is what my before and after looks like: I wondered whether the while loop is an efficient way to fill the NAs, or basically how to determine how many times I have to run focal to fill all the NAs in the raster.
Perhaps we can, or want to, dispense with the while( altogether by making a better estimate of focal('s w= arg in a world where r, as ground truth, isn't available. Were it available, we could readily derive a direct value of w:

r <- rast(system.file("ex/elev.tif", package="terra"))
# and its variants
r2 <- r
r2[45:60, 45:60] <- NA
freq(r2, value=NA) - freq(r, value=NA)
  layer value count
1     0    NA   256
sqrt((freq(r2, value=NA) - freq(r, value=NA))$count)
[1] 16

which might be a good value for w=. Introducing another variant:

r3 <- r
r3[40:47, 40:47] <- NA
r3[60:67, 60:67] <- NA
r3[30:37, 30:37] <- NA
r3[70:77, 40:47] <- NA
rm(r)

We no longer have our ground truth. How might we estimate an edge for w=? Turning to boundaries( with its default values (inner):

r2_bi <- boundaries(r2)
r3_bi <- boundaries(r3)
# examining some properties of r2_bi, r3_bi
freq(r2_bi, value=1)$count
[1] 503
freq(r3_bi, value=1)$count
[1] 579
freq(r2_bi, value=1)$count/freq(r2_bi, value = 0)$count
[1] 0.1306833
freq(r3_bi, value=1)$count/freq(r3_bi, value = 0)$count
[1] 0.1534588
sum(freq(r2_bi, value=1)$count, freq(r2_bi, value = 0)$count)
[1] 4352
sum(freq(r3_bi, value=1)$count, freq(r3_bi, value = 0)$count)
[1] 4352

Taken in reverse order, the sum[s] and freq[s] suggest that while the total area of (let's call them) holes is the same, the holes differ in number, and r2's are generally larger than r3's. This is also clear from the first pair of freq[s]. Now we drift into some voodoo, hocus pocus, in pursuit of a better edge estimate:

sum(freq(r2)$count) - sum(freq(r2, value = NA)$count)
[1] 154
sum(freq(r3)$count) - sum(freq(r3, value = NA)$count)
[1] 154
sqrt(sum(freq(r3)$count) - sum(freq(r3, value = NA)$count))
[1] 12.40967
freq(r2_bi, value=1)$count/freq(r2_bi, value = 0)$count
[1] 0.1306833
freq(r2_bi, value=0)$count/freq(r2_bi, value = 1)$count
[1] 7.652087
freq(r3_bi, value=1)$count/freq(r3_bi, value = 0)$count
[1] 0.1534588

Taking the larger, i.e. 7.652087:

7.652087/0.1306833
[1] 58.55444
154+58
[1] 212
sqrt(212)
[1] 14.56022
round(sqrt(212)+1)
[1] 16

Well, except for that +1 part, maybe still a decent estimate for w=, to be used on both r2 and r3 if called upon to find a better w, and perhaps obviate the need for while(. Another approach to looking for squares and their edges:

wtf3 <- values(r3_bi$elevation)
wtf2 <- values(r2_bi$elevation)
wtf2_tbl_df2 <- as.data.frame(table(rle(as.vector(is.na(wtf2)))$lengths))
wtf3_tbl_df2 <- as.data.frame(table(rle(as.vector(is.na(wtf3)))$lengths))
names(wtf2_tbl_df2)
[1] "Var1" "Freq"
wtf2_tbl_df2[which(wtf2_tbl_df2$Var1 == wtf2_tbl_df2$Freq), ]
   Var1 Freq
14   16   16
wtf3_tbl_df2[which(wtf3_tbl_df2$Freq == max(wtf3_tbl_df2$Freq)), ]
  Var1 Freq
7    8   35
35/8
[1] 4.375  # 4 squares of 8, with 3 8-length vectors

Bringing in v finally, and filling:

v <- vect(system.file("ex/lux.shp", package="terra"))
r2_fill_17 <- focal(r2, 16 + 1, mean, na.policy='only', na.rm = TRUE)
r3_fill_9  <- focal(r3, 8 + 1, mean, na.policy='only', na.rm = TRUE)
r2_fill_17_cropv <- crop(r2_fill_17, v, mask = TRUE, touches = TRUE)
r3_fill_9_cropv  <- crop(r3_fill_9, v, mask = TRUE, touches = TRUE)

And I now appreciate your while( approach, as your r2 looks better, more naturally transitioned, though the r3 looks fine. In my few, brief experiments with smaller-than-hole windows, i.e. focal(r2, 9, I got the sense it would take 2 passes to fill, which suggests focal(r2, 5 would take 4. I guess further determining the proportion of fill:hole:rast for when to deploy a while( would be worthwhile.
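Coming back to the original while( loop itself: the remaining NA count can also be checked with terra's global(), so each pass costs a single focal() call (a sketch; behaviourally equivalent to the loop in the question):

```r
library(terra)
v <- vect(system.file("ex/lux.shp", package = "terra"))
r <- rast(system.file("ex/elev.tif", package = "terra"))
r[45:60, 45:60] <- NA

r_fill <- r
# repeat 5x5 focal means until no NA cells remain
while (global(r_fill, fun = "isNA")[1, 1] > 0) {
  r_fill <- focal(r_fill, 5, mean, na.policy = "only", na.rm = TRUE)
}
r_fill <- crop(r_fill, v, mask = TRUE, touches = TRUE)
```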
Building a life table in R with for loops
I'm new to R and programming in general, and I'm struggling with a for loop for building the lx function in a life table. I have the age function x, the death function qx (the probability that someone aged exactly x will die before reaching age x+1), and the survival function px = 1 - qx. I want to write a function that returns a vector with all the lx values from the first to the last age in my table. The function is simple. I've defined cohort = 1000000. The first age in my table is x = 5, so, considering x = 5:

l_(x) = cohort

And, from then on:

l_(x+n) = l_(x+n-1)*p_(x+n-1)

I've read about for loops, but I can only get my code working for lx[1] and lx[2], and I get nothing for lx[n] if n > 2. I wrote this function:

living_x <- function(px, cohort){
  result <- vector("double", length(px))
  l_x <- vector("double", length(px))
  for (i in 1:length(px)){
    if (i == 1){
      l_x[i] = cohort
    } else
      l_x[i] = l_x[i-1]*px[i-1]
    result[i] = l_x
    print(result)
  }
}

When I run it, I get several outputs (more than length(px)) and "There were 50 or more warnings (use warnings() to see the first 50)". When I run warnings(), I get "In result[i] <- l_x : number of items to replace is not a multiple of replacement length" for every number. Everything else I try gives me different errors or only calculates lx for lx[1] and lx[2]. I know there's something really wrong with my code, but I still couldn't identify it. I'd be glad if someone could give me a hint to find out what to change. Thank you!
Here's an approach using dplyr from the tidyverse packages, to use px to calculate lx. This can be done similarly in "Base R" using excerpt$lx = 100000 * cumprod(1 - lag(excerpt$qx)). lx is provided in the babynames package, so we can check our work:

library(tidyverse)
library(babynames)

# Get excerpt with age, qx, and lx.
excerpt <- lifetables %>%
  filter(year == 2010, sex == "F") %>%
  select(x, qx_given = qx, lx_given = lx)
excerpt

# A tibble: 120 x 3
       x qx_given lx_given
   <dbl>    <dbl>    <dbl>
 1     0  0.00495   100000
 2     1  0.00035    99505
 3     2  0.00022    99471
 4     3  0.00016    99449
 5     4  0.00012    99433
 6     5  0.00011    99421
 7     6  0.00011    99410
 8     7  0.0001     99399
 9     8  0.0001     99389
10     9  0.00009    99379
# ... with 110 more rows

Using that data to estimate lx_calc:

est_lx <- excerpt %>%
  mutate(px = 1 - qx_given,
         cuml_px = cumprod(lag(px, default = 1)),
         lx_calc = cuml_px * 100000)

And finally, comparing visually the given lx with the one calculated from px. They match exactly.

est_lx %>%
  gather(version, val, c(lx_given, lx_calc)) %>%
  ggplot(aes(x, val, color = version)) + geom_line()
I could do it in a very simple way after thinking for some minutes more:

lx = c()
for (i in 2:length(px)){
  lx[1] = 10**6
  lx[i] = lx[i-1]*px[i-1]
}
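For what it's worth, the same recurrence can be written without a loop, since l_x is the cohort times the cumulative product of the survival probabilities shifted by one (a base-R sketch with made-up example px values):

```r
px <- c(0.99, 0.98, 0.97, 0.95)   # example survival probabilities

# loop version, as in the accepted answer
lx_loop <- c()
for (i in 2:length(px)) {
  lx_loop[1] <- 10**6
  lx_loop[i] <- lx_loop[i-1] * px[i-1]
}

# vectorised equivalent: cumprod of px, shifted so lx[1] = cohort
lx_vec <- 10**6 * cumprod(c(1, px[-length(px)]))

all.equal(lx_loop, lx_vec)   # TRUE
```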
root values of simultaneous nonlinear equations in R
I've been trying to code this problem: https://sg.answers.yahoo.com/question/index?qid=20110127015240AA9RjyZ

I believe there is an R function somewhere to solve for the root values of the following equations:

(x+3)^2 + (y-50)^2 = 1681
(x-11)^2 + (y+2)^2 = 169
(x-13)^2 + (y-34)^2 = 625

I tried using the 'solve' function, but that's only for linear equations(?). I also tried 'nls':

dt = data.frame(a = c(-3, 11, 13), b = c(50, -2, 34), c = c(1681, 169, 625))
nls(c ~ (x-a)^2 + (y-b)^2, data = dt, start = list(x = 1, y = 1))

but I get an error every time (and yes, I already tried changing the max iterations):

Error in nls(c ~ (x - a)^2 + (y - b)^2, data = dt, start = list(x = 1,  :
  number of iterations exceeded maximum of 50

How do you solve for the root values in R?
nls does not work with zero-residual data -- see ?nls, where this is mentioned. nlxb in the nlmrt package is mostly similar to nls in terms of input arguments and does support zero-residual data. Using dt from the question, just replace nls with nlxb:

library(nlmrt)
nlxb(c ~ (x-a)^2 + (y-b)^2, data = dt, start = list(x = 1, y = 1))

giving:

nlmrt class object: x
residual sumsquares =  2.6535e-20  on  3 observations
    after  5    Jacobian and  6 function evaluations
  name     coeff        SE         tstat       pval      gradient    JSingval
x            6       7.21e-12   8.322e+11  7.649e-13  -1.594e-09     96.93
y           10       1.864e-12  5.366e+12  1.186e-13  -1.05e-08      22.45
You cannot always solve three equations for two variables. You can solve two equations for two variables and test whether the solution satisfies the third equation. Use package nleqslv as follows:

library(nleqslv)

f1 <- function(z) {
  f <- numeric(2)
  x <- z[1]
  y <- z[2]
  f[1] <- (x+3)^2 + (y-50)^2 - 1681
  f[2] <- (x-11)^2 + (y+2)^2 - 169
  f
}

f2 <- function(z) {
  x <- z[1]
  y <- z[2]
  (x-13)^2 + (y-34)^2 - 625
}

zstart <- c(0,0)
z1 <- nleqslv(zstart, f1)
z1
f2(z1$x)

which gives you the following output:

> z1
$x
[1]  6 10

$fvec
[1] 7.779818e-09 7.779505e-09

$termcd
[1] 1

$message
[1] "Function criterion near zero"

$scalex
[1] 1 1

$nfcnt
[1] 9

$njcnt
[1] 1

$iter
[1] 9

> f2(z1$x)
[1] 5.919242e-08

So a solution has been found, and it is given in the vector z1$x. Inserting z1$x into function f2 also gives almost zero, so the third equation is satisfied too. You could also try package BB.
Just go through the rootSolve package and you will be done: https://cran.r-project.org/web/packages/rootSolve/vignettes/rootSolve.pdf
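For completeness, the rootSolve route looks much like the nleqslv one: multiroot() also expects as many equations as unknowns, so solve two and check the third (a sketch; the starting point c(0, 0) is an assumption):

```r
library(rootSolve)

# first two circle equations as a square system in (x, y)
f <- function(z) c((z[1]+3)^2 + (z[2]-50)^2 - 1681,
                   (z[1]-11)^2 + (z[2]+2)^2 - 169)

s <- multiroot(f, start = c(0, 0))
s$root                                      # should be close to c(6, 10)
(s$root[1]-13)^2 + (s$root[2]-34)^2 - 625   # check the third equation, ~0
```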
3D with value interpolation in R (X, Y, Z, V)
Is there an R package that does X, Y, Z, V interpolation? I see that Akima does X, Y, V but I need one more dimension. Basically I have X,Y,Z coordinates plus the value (V) that I want to interpolate. This is all GIS data but my GIS does not do voxel interpolation So if I have a point cloud of XYZ coordinates with a value of V, how can I interpolate what V would be at XYZ coordinate (15,15,-12) ? Some test data would look like this: X <-rbind(10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50) Y <- 
rbind(10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50) Z <- 
rbind(-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29) V <- 
rbind(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,25,35,75,25,50,0,0,0,0,0,10,12,17,22,27,32,37,25,13,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,50,125,130,105,110,115,165,180,120,100,80,60,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)
I had the same question and was hoping for an answer in R. My question was: How do I perform 3D (trilinear) interpolation using regular gridded coordinate/value data (x,y,z,v)? For example, CT images, where each image has pixel centers (x, y) and greyscale value (v) and there are multiple image "slices" (z) along the thing being imaged (e.g., head, torso, leg, ...). There is a slight problem with the given example data. # original example data (reformatted) X <- rep( rep( seq(10, 50, by=10), each=25), 3) Y <- rep( rep( seq(10, 50, by=10), each=5), 15) Z <- rep(c(-5, -17, -29), each=125) V <- rbind(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,25,35,75,25,50,0,0,0,0,0,10,12,17,22,27,32,37,25,13,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,50,125,130,105,110,115,165,180,120,100,80,60,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0) # the dimensions of the 3D grid described do not match the number of values (length(unique(X))*length(unique(Y))*length(unique(Z))) == length(V) ## [1] FALSE ## which makes sense since 75 != 375 # visualize this: library(rgl) plot3d(x=X, y=Y, z=Z, col=terrain.colors(181)[V]) # examine the example data real quick... df <- data.frame(x=X,y=Y,z=Z,v=V); head(df); table(df$x, df$y, df$z); # there are 5 V values at each X,Y,Z coordinate... duplicates! 
# redefine Z so there are 15 unique values
# making 375 unique coordinate points
# and matching the length of the given value vector, V
df$z <- seq(-5, -29, length.out=15)
head(df)
table(df$x, df$y, df$z)  # there is now 1 V value at each X,Y,Z coordinate

# that was for testing, now actually redefine the Z vector.
Z <- rep(seq(-5, -29, length.out = 15), 25)

# plot it.
library(rgl)
plot3d(x=X, y=Y, z=Z, col=terrain.colors(181)[V])

I couldn't find any 4D interpolation functions in the usual R packages, so I wrote a quick and dirty one. The following implements (without ANY error checking... caveat emptor!) the technique described at: https://en.wikipedia.org/wiki/Trilinear_interpolation

# convenience function #1:
# define a function that takes a vector of lookup values and a value to look up
# and returns the two lookup values that the value falls between
between = function(vec, value) {
  # extract list of unique lookup values
  u = unique(vec)
  # difference vector
  dvec = u - value
  vals = c(u[dvec==max(dvec[dvec<0])], u[dvec==min(dvec[dvec>0])])
  return(vals)
}

# convenience function #2:
# return the value (v) from a grid data.frame for given point (x, y, z)
get_value = function(df, xi, yi, zi) {
  # assumes df is a data.frame with column names: x, y, z, v
  subset(df, x==xi & y==yi & z==zi)$v
}

# inputs: df (x,y,z,v), point to look up (x, y, z)
interp3 = function(dfin, xin, yin, zin) {
  # TODO: check if all(xin, yin, zin) equals a grid point, if so just return the point value
  # TODO: check if any(xin, yin, zin) equals a grid point, if so then do bilinear or linear interp
  # find the two values in each dimension that the lookup value falls within
  cube_x <- between(dfin$x, xin)
  cube_y <- between(dfin$y, yin)
  cube_z <- between(dfin$z, zin)
  # and extract the cube of 8 points
  tmp <- subset(dfin, x %in% cube_x & y %in% cube_y & z %in% cube_z)
  stopifnot(nrow(tmp)==8)
  # define points in a periodic and cubic lattice
  x0 = min(cube_x); x1 = max(cube_x)
  y0 = min(cube_y); y1 = max(cube_y)
  z0 = min(cube_z); z1 = max(cube_z)
  # define differences in each dimension
  xd = (xin-x0)/(x1-x0)  # 0.5
  yd = (yin-y0)/(y1-y0)  # 0.5
  zd = (zin-z0)/(z1-z0)  # 0.9166666
  # interpolate along x:
  v00 = get_value(tmp, x0, y0, z0)*(1-xd) + get_value(tmp, x1, y0, z0)*xd  # 2.5
  v01 = get_value(tmp, x0, y0, z1)*(1-xd) + get_value(tmp, x1, y0, z1)*xd  # 0
  v10 = get_value(tmp, x0, y1, z0)*(1-xd) + get_value(tmp, x1, y1, z0)*xd  # 0
  v11 = get_value(tmp, x0, y1, z1)*(1-xd) + get_value(tmp, x1, y1, z1)*xd  # 65
  # interpolate along y:
  v0 = v00*(1-yd) + v10*yd  # 1.25
  v1 = v01*(1-yd) + v11*yd  # 32.5
  # interpolate along z:
  return(v0*(1-zd) + v1*zd)  # 29.89583 (~91.7% between v0 and v1)
}

> interp3(df, 15, 15, -12)
[1] 29.89583

Testing that same source's assertion that trilinear is simply linear(bilinear(), bilinear()), we can use the base R linear interpolation function, approx(), and the akima package's bilinear interpolation function, interp(), as follows:

library(akima)
approx(x=c(-11.857143, -13.571429),
       y=c(interp(x=df[round(df$z,1)==-11.9,"x"],
                  y=df[round(df$z,1)==-11.9,"y"],
                  z=df[round(df$z,1)==-11.9,"v"],
                  xo=15, yo=15)$z,
           interp(x=df[round(df$z,1)==-13.6,"x"],
                  y=df[round(df$z,1)==-13.6,"y"],
                  z=df[round(df$z,1)==-13.6,"v"],
                  xo=15, yo=15)$z),
       xout=-12)$y
# [1] 0.2083331

Checked another package to triangulate:

library(oce)
Vmat <- array(data = V, dim = c(length(unique(X)), length(unique(Y)), length(unique(Z))))
approx3d(x=unique(X), y=unique(Y), z=unique(Z), f=Vmat, xout=15, yout=15, zout=-12)
[1] 1.666667

So 'oce', 'akima' and my function all give pretty different answers. This is either a mistake in my code somewhere, or due to differences in the underlying Fortran code in the akima interp() and whatever is in the oce approx3d function, which we'll leave for another day. I'm not sure what the correct answer is, because the MWE is not exactly "minimum" or simple. But I tested the functions with some really simple grids and they seem to give 'correct' answers.
Here's one simple 2x2x2 example:

# really, really simple example:
# answer is always the z-coordinate value
sdf <- expand.grid(x=seq(0,1), y=seq(0,1), z=seq(0,1))
sdf$v <- rep(seq(0,1), each=4)

> interp3(sdf, 0.25, 0.25, .99)
[1] 0.99
> interp3(sdf, 0.25, 0.25, .4)
[1] 0.4

Trying akima on the simple example, we get the same answer (phew!):

library(akima)
approx(x=unique(sdf$z),
       y=c(interp(x=sdf[sdf$z==0,"x"], y=sdf[sdf$z==0,"y"], z=sdf[sdf$z==0,"v"], xo=.25, yo=.25)$z,
           interp(x=sdf[sdf$z==1,"x"], y=sdf[sdf$z==1,"y"], z=sdf[sdf$z==1,"v"], xo=.25, yo=.25)$z),
       xout=.4)$y
# [1] 0.4

The new example data in the OP's own, accepted answer was not possible to interpolate with my simple interp3() function above because: (a) the grid coordinates are not regularly spaced, and (b) the coordinates to look up (x1, y1, z1) lie outside the grid.

# for completeness, here's the attempt:
options(scipen = 999)
XCoor=c(78121.6235,78121.6235,78121.6235,78121.6235,78136.723,78136.723,78136.723,78136.8969,78136.8969,78136.8969,78137.4595,78137.4595,78137.4595,78125.061,78125.061,78125.061,78092.4696,78092.4696,78092.4696,78092.7683,78092.7683,78092.7683,78092.7683,78075.1171,78075.1171,78064.7462,78064.7462,78064.7462,78052.771,78052.771,78052.771,78032.1179,78032.1179,78032.1179)
YCoor=c(5213642.173,523642.173,523642.173,523642.173,523594.495,523594.495,523594.495,523547.475,523547.475,523547.475,523503.462,523503.462,523503.462,523426.33,523426.33,523426.33,523656.953,523656.953,523656.953,523607.157,523607.157,523607.157,523607.157,523514.671,523514.671,523656.81,523656.81,523656.81,523585.232,523585.232,523585.232,523657.091,523657.091,523657.091)
ZCoor=c(-3.0,-5.0,-10.0,-13.0,-3.5,-6.5,-10.5,-3.5,-6.5,-9.5,-3.5,-5.5,-10.5,-3.5,-5.5,-7.5,-3.5,-6.5,-11.5,-3.0,-5.0,-9.0,-12.0,-6.5,-10.5,-2.5,-3.5,-8.0,-3.5,-6.5,-9.5,-2.5,-6.5,-8.5)
V=c(2.4000,30.0,620.0,590.0,61.0,480.0,0.3700,0.0,0.3800,0.1600,0.1600,0.9000,0.4100,0.0,0.0,0.0061,6.0,52.0,0.3400,33.0,235.0,350.0,9300.0,31.0,2100.0,0.0,0.0,10.5000,3.8000,0.9000,310.0,0.2800,8.3000,18.0)
adf = data.frame(x=XCoor, y=YCoor, z=ZCoor, v=V)

# the first y value looks like a typo?
> head(adf)
         x         y     z     v
1 78121.62 5213642.2  -3.0   2.4
2 78121.62  523642.2  -5.0  30.0
3 78121.62  523642.2 -10.0 620.0
4 78121.62  523642.2 -13.0 590.0
5 78136.72  523594.5  -3.5  61.0
6 78136.72  523594.5  -6.5 480.0

x1=198130.000
y1=1913590.000
z1=-8

> interp3(adf, x1, y1, z1)
numeric(0)
Warning message:
In min(dvec[dvec > 0]) : no non-missing arguments to min; returning Inf
Whether or not the test data made sense, I still needed an algorithm. Test data is just that, something to fiddle with, and as test data it was fine. I wound up programming it in Python; the following code takes X, Y, Z, V and does a 3D inverse-distance-weighted (IDW) interpolation, where you can set the number of points used in the interpolation. This Python recipe only interpolates to one point (x1, y1, z1), but it is easy enough to extend.

import numpy as np
import math

# 34 points
XCoor=np.array([78121.6235,78121.6235,78121.6235,78121.6235,78136.723,78136.723,78136.723,78136.8969,78136.8969,78136.8969,78137.4595,78137.4595,78137.4595,78125.061,78125.061,78125.061,78092.4696,78092.4696,78092.4696,78092.7683,78092.7683,78092.7683,78092.7683,78075.1171,78075.1171,78064.7462,78064.7462,78064.7462,78052.771,78052.771,78052.771,78032.1179,78032.1179,78032.1179])
YCoor=np.array([5213642.173,523642.173,523642.173,523642.173,523594.495,523594.495,523594.495,523547.475,523547.475,523547.475,523503.462,523503.462,523503.462,523426.33,523426.33,523426.33,523656.953,523656.953,523656.953,523607.157,523607.157,523607.157,523607.157,523514.671,523514.671,523656.81,523656.81,523656.81,523585.232,523585.232,523585.232,523657.091,523657.091,523657.091])
ZCoor=np.array([-3.0,-5.0,-10.0,-13.0,-3.5,-6.5,-10.5,-3.5,-6.5,-9.5,-3.5,-5.5,-10.5,-3.5,-5.5,-7.5,-3.5,-6.5,-11.5,-3.0,-5.0,-9.0,-12.0,-6.5,-10.5,-2.5,-3.5,-8.0,-3.5,-6.5,-9.5,-2.5,-6.5,-8.5])
V=np.array([2.4000,30.0,620.0,590.0,61.0,480.0,0.3700,0.0,0.3800,0.1600,0.1600,0.9000,0.4100,0.0,0.0,0.0061,6.0,52.0,0.3400,33.0,235.0,350.0,9300.0,31.0,2100.0,0.0,0.0,10.5000,3.8000,0.9000,310.0,0.2800,8.3000,18.0])

def Distance(x1, y1, z1, Npoints):
    d = []
    # loop over all 34 points (the original `while i < 33` skipped the last one)
    for i in range(len(XCoor)):
        d.append(math.sqrt((x1-XCoor[i])**2 + (y1-YCoor[i])**2 + (z1-ZCoor[i])**2))
    distance = np.array(d)
    # indices of the Npoints nearest neighbours
    myIndex = distance.argsort()[:Npoints]
    weightedNum = 0
    weightedDen = 0
    for i in myIndex:
        weightedNum = weightedNum + (V[i]/(distance[i]*distance[i]))
        weightedDen = weightedDen + (1/(distance[i]*distance[i]))
    InterpValue = weightedNum/weightedDen
    return InterpValue

x1=198130.000
y1=1913590.000
z1=-8
print(Distance(x1, y1, z1, 12))
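Since the rest of the thread is in R, here is a sketch of the same IDW idea (Npoints nearest neighbours, 1/d^2 weights) translated back to R, written against the adf data frame built in the previous answer (idw3d is a name I made up; assumes no lookup point coincides exactly with a data point, which would give a zero distance):

```r
# inverse-distance-weighted interpolation at one point,
# using the Npoints nearest neighbours and 1/d^2 weights
idw3d <- function(df, x1, y1, z1, Npoints = 12) {
  d2 <- (x1 - df$x)^2 + (y1 - df$y)^2 + (z1 - df$z)^2
  idx <- order(d2)[seq_len(Npoints)]
  sum(df$v[idx] / d2[idx]) / sum(1 / d2[idx])
}

# e.g. idw3d(adf, 198130, 1913590, -8)
```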
Overlap plots in R - from zoo package
Using the following code:

library("ggplot2")
require(zoo)

args <- commandArgs(TRUE)
input <- read.csv(args[1], header=F, col.names=c("POS","ATT"))
id <- args[2]
prot_len <- nrow(input)
manual <- prot_len/100  # 4.3
att_name <- "Entropy"

att_zoo <- zoo(input$ATT)
att_avg <- rollapply(att_zoo, width = manual, by = manual, FUN = mean, align = "left")
autoplot(att_avg, col="att1") + labs(x = "Positions", y = att_name, title="")

With data:

> str(input)
'data.frame':   431 obs. of  2 variables:
 $ POS: int  1 2 3 4 5 6 7 8 9 10 ...
 $ ATT: num  0.652 0.733 0.815 1.079 0.885 ...

I would like to load input2, which has a different length (and therefore a different x-axis), and overlay the two curves in the same plot (I say overlay because I want the two curves in the same plot area, so I will "ignore" the overlapping axis labels and titles). I want to compare the shapes regardless of the lengths of the inputs. First I tried generating a toy input2 by changing the manual value, so that I have att_avg2 in which manual equals e.g. 7. Between the original autoplot and the new one I added par(new=TRUE), but this is not my expected output. Any hint on how to do this? Maybe it's better to save att_avg from the zoo series to a data.frame and not use autoplot? Thanks

UPDATE, response to G. Grothendieck:

If I do:

[...]
att_zoo <- zoo(input$ATT)
att_avg <- rollapply(att_zoo, width = manual, by = manual, FUN = mean, align = "left")  # manual=4.3
att_avg2 <- rollapply(att_zoo, width = 7, by = 7, FUN = mean, align = "left")
autoplot(cbind(att_avg, att_avg2), facet=NULL) + labs(x = "Positions", y = att_name, title="")

I get a warning message:

Removed 1 rows containing missing values (geom_path).
par is used with classic graphics, not with ggplot2. If you have two zoo series, just cbind or merge the series together and autoplot them using facet=NULL:

library(zoo)
library(ggplot2)

z1 <- zoo(1:3)  # length 3
z2 <- zoo(5:1)  # length 5
autoplot(cbind(z1, z2), facet = NULL)

Note: The question omitted input2, so there could be some additional considerations from aspects not shown.