fill NA raster cells using focal defined by boundary - r
I have a raster and a shapefile. The raster contains NAs, and I am filling them using the focal function:
library(terra)
v <- vect(system.file("ex/lux.shp", package="terra"))
r <- rast(system.file("ex/elev.tif", package="terra"))
r[45:60, 45:60] <- NA
r_fill <- terra::focal(r, 5, mean, na.policy="only", na.rm=TRUE)
However, there are some NAs still left, so I do this:
na_count <- terra::freq(r_fill, value = NA)
while(na_count$count != 0){
r_fill <- terra::focal(r_fill, 5, mean, na.policy="only", na.rm=TRUE)
na_count <- terra::freq(r_fill, value = NA)
}
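As an aside, a minimal variant of that loop (a sketch, not a drop-in replacement) can track how many passes are needed, using terra's global() and is.na() instead of freq() for the NA tally:
# sketch: same repeated-focal idea, starting from the same r as above,
# but also counting how many passes are required
r_fill <- terra::focal(r, 5, mean, na.policy = "only", na.rm = TRUE)
passes <- 1
while (terra::global(is.na(r_fill), "sum")[1, 1] > 0) {
  r_fill <- terra::focal(r_fill, 5, mean, na.policy = "only", na.rm = TRUE)
  passes <- passes + 1
}
passes  # number of focal() passes that were needed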
Once all NAs are filled, I clip the raster again using the shapefile:
r_fill <- terra::crop(r_fill, v, mask = T, touches = T)
This is what my before and after look like (before/after plots not included here):
I wonder whether the while loop is an efficient way to fill the NAs, or, put differently, how I can determine in advance how many times I have to run focal to fill all the NAs in the raster.
Perhaps we can, or want to, dispense with the while() altogether by making a better estimate of focal()'s w= argument in a world where r, as ground truth, isn't available. Were it available, we could readily derive a direct value for w:
r <- rast(system.file("ex/elev.tif", package="terra"))
# and its variants
r2 <- r
r2[45:60, 45:60] <- NA
freq(r2, value=NA) - freq(r, value=NA)
layer value count
1 0 NA 256
sqrt((freq(r2, value=NA) - freq(r, value=NA))$count)
[1] 16
which might be a good value for w=, and introducing another variant
r3 <- r
r3[40:47, 40:47] <- NA
r3[60:67, 60:67] <- NA
r3[30:37, 30:37] <- NA
r3[70:77, 40:47] <- NA
rm(r)
We no longer have our ground truth. How might we estimate an edge length for w=? Turning to boundaries() with its default values (inner):
r2_bi <- boundaries(r2)
r3_bi <- boundaries(r3)
# examining some properties of r2_bi, r3_bi
freq(r2_bi, value=1)$count
[1] 503
freq(r3_bi, value=1)$count
[1] 579
freq(r2_bi, value=1)$count/freq(r2_bi, value = 0)$count
[1] 0.1306833
freq(r3_bi, value=1)$count/freq(r3_bi, value = 0)$count
[1] 0.1534588
sum(freq(r2_bi, value=1)$count,freq(r2_bi, value = 0)$count)
[1] 4352
sum(freq(r3_bi, value=1)$count,freq(r3_bi, value = 0)$count)
[1] 4352
Taken in reverse order, the sum[s] and freq[s] suggest that while the total area of the NA regions (let's call them holes) is the same, the two rasters differ in the number of holes, and the holes in r2 are generally larger than those in r3. This is also clear from the first pair of freq[s].
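If we want to measure the holes directly rather than infer them, terra::patches() offers a more literal route. The following is just a sketch: it assumes the holes sit inside the polygon v from the question (so the outer NA collar is masked away first), and the object names m and p are only illustrative.
# sketch: label each connected region of NA cells inside the polygon and size it
m <- mask(ifel(is.na(r2), 1, NA), v)   # 1 where r2 is NA, restricted to inside v
p <- patches(m)                        # one integer label per connected hole
freq(p)                                # one row per hole; count = hole size in cells
sqrt(max(freq(p)$count))               # side of the largest (square) hole, a candidate w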
Now we drift into some voodoo, hocus pocus in pursuit of a better edge estimate
sum(freq(r2)$count) - sum(freq(r2, value = NA)$count)
[1] 154
sum(freq(r3)$count) - sum(freq(r3, value = NA)$count)
[1] 154
sqrt(sum(freq(r3)$count) - sum(freq(r3, value = NA)$count))
[1] 12.40967
freq(r2_bi, value=1)$count/freq(r2_bi, value = 0)$count
[1] 0.1306833
freq(r2_bi, value=0)$count/freq(r2_bi, value = 1)$count
[1] 7.652087
freq(r3_bi, value=1)$count/freq(r3_bi, value = 0)$count
[1] 0.1534588
Taking the larger, i.e. the 7.652087 from freq(r2_bi, value=0)/freq(r2_bi, value=1), and dividing it by the smallest of the ratios:
7.652087/0.1306833
[1] 58.55444
154+58
[1] 212
sqrt(212)
[1] 14.56022
round(sqrt(212)+1)
[1] 16
Well, except for that +1 part, maybe still a decent estimate for w=, to be used on both r2 and r3 if called upon to find a better w, perhaps obviating the need for while().
Another approach to looking for squares and their edges:
wtf3 <- values(r3_bi$elevation)
wtf2 <- values(r2_bi$elevation)
wtf2_tbl_df2 <- as.data.frame(table(rle(as.vector(is.na(wtf2)))$lengths))
wtf3_tbl_df2 <- as.data.frame(table(rle(as.vector(is.na(wtf3)))$lengths))
names(wtf2_tbl_df2)
[1] "Var1" "Freq"
wtf2_tbl_df2[which(wtf2_tbl_df2$Var1 == wtf2_tbl_df2$Freq), ]
Var1 Freq
14 16 16
wtf3_tbl_df2[which(wtf3_tbl_df2$Freq == max(wtf3_tbl_df2$Freq)), ]
Var1 Freq
7 8 35
35/8
[1] 4.375 # 4 squares of side 8, plus 3 more runs of length 8
bringing in v finally and filling
v <- vect(system.file("ex/lux.shp", package="terra"))
r2_fill_17 <- focal(r2, 16 + 1 , mean, na.policy='only', na.rm = TRUE)
r3_fill_9 <- focal(r3, 8 + 1 , mean, na.policy='only', na.rm = TRUE)
r2_fill_17_cropv <- crop(r2_fill_17, v, mask = TRUE, touches = TRUE)
r3_fill_9_cropv <- crop(r3_fill_9, v, mask = TRUE, touches = TRUE)
And I now appreciate your while() approach, as your r2 looks better, more naturally transitioned, though the r3 looks fine. In my few, brief experiments with windows smaller than the 'hole', e.g. focal(r2, 9), I got the sense it would take 2 passes to fill, which suggests focal(r2, 5) would take 4.
I guess it would be worthwhile to further determine the fill:hole:raster proportions at which deploying a while() becomes the better choice.
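For the "how many passes" half of the question, a back-of-the-envelope rule seems consistent with the experiments above: with na.policy="only", each pass can only fill a rim of floor(w/2) cells around a hole, so a square hole of side s needs roughly ceiling(s / (2 * floor(w/2))) passes. A small sketch (the helper name and the rule are mine, not anything built into terra):
# sketch: rough number of focal() passes needed to fill a square hole of side s
# with window width w and na.policy = "only"
passes_needed <- function(s, w) ceiling(s / (2 * (w %/% 2)))

passes_needed(16, 5)   # 4, matching the focal(r2, 5) observation above
passes_needed(16, 9)   # 2
passes_needed(16, 17)  # 1, i.e. no while() needed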
Related
Combinatorial optimization with discrete options in R
I have a function with five variables that I want to maximize using only a specific set of parameters for each variable. Are there any methods in R that can do this, other than by brute force (e.g. Particle Swarm Optimization, Genetic Algorithm, Greedy, etc.)? I have read about a few packages, but they seem to create their own set of parameters from within a given range. I am only interested in optimizing over the set of options provided. Here is a simplified version of the problem:

#Example of 5 variable function to optimize
Fn <- function(x){
  a = x[1]
  b = x[2]
  c = x[3]
  d = x[4]
  e = x[5]
  SUM = a + b + c + d + e
  return(SUM)
}

#Parameters for variables to optimize
Vars = list(
  As = c(seq(1.5, 3, by = 0.3)),         #float
  Bs = c(1, 2),                          #Binary
  Cs = c(seq(1, 60, by = 10)),           #Integer
  Ds = c(seq(60, -60, length.out = 5)),  #Negative
  Es = c(1, 2, 3)
)

#Full combination
FullCombn = expand.grid(Vars)

Results = data.frame(I = as.numeric(), Sum = as.numeric())
for (i in 1:nrow(FullCombn)){
  ParsI = FullCombn[i, ]
  ResultI = Fn(ParsI)
  Results = rbind(Results, c(I = i, Sum = ResultI))
}

#Best iteration (Largest result)
Best = Results[Results[, 2] == max(Results[, 2]), ]
#Best parameters
FullCombn[Best$I, ]
Two more possibilities. Both minimize by default, so I flip the sign in your objective function (i.e. return -SUM). #Example of 5 variable function to optimize Fn<-function(x, ...){ a=x[1] b=x[2] c=x[3] d=x[4] e=x[5] SUM=a+b+c+d+e return(-SUM) } #Parameters for variables to optimize Vars=list( As=c(seq(1.5,3, by = 0.3)), #float Bs=c(1,2), #Binary Cs=c(seq(1,60, by=10)), #Integer Ds=c(seq(60,-60, length.out=5)), #Negtive Es=c(1,2,3) ) First, a grid search. Exactly what you did, just convenient. And the implementation allows you to distribute the evaluations of the objective function. library("NMOF") gridSearch(fun = Fn, levels = Vars)[c("minfun", "minlevels")] ## 5 variables with 6, 2, 6, 5, ... levels: 1080 function evaluations required. ## $minfun ## [1] -119 ## ## $minlevels ## [1] 3 2 51 60 3 An alternative: a simple Local Search. You start with a valid initial guess, and then move randomly through possible feasible solutions. The key ingredient is the neighbourhood function. It picks one element randomly and then, again randomly, sets this element to one allowed value. nb <- function(x, levels, ...) { i <- sample(length(levels), 1) x[i] <- sample(levels[[i]], 1) x } (There would be better algorithms for neighbourhood functions; but this one is simple and so demonstrates the idea well.) LSopt(Fn, list(x0 = c(1.8, 2, 11, 30, 2), ## a feasible initial solution neighbour = nb, nI = 200 ## iterations ), levels = Vars)$xbest ## Local Search. ## ##... ## Best solution overall: -119 ## [1] 3 2 51 60 3 (Disclosure: I am the maintainer of package NMOF, which provides functions gridSearch and LSopt.) In response to the comment, a few remarks on Local Search and the neighbourhood function above (nb). Local Search, as implemented in LSopt, will start with an arbitrary solution, and then change that solution slightly. This new solution, called a neighbour, will be compared (by its objective-function value) to the old solution. If the new solution is better, it becomes the current solution; otherwise it is rejected and the old solution remains the current one. Then the algorithm repeats, for a number of iterations. So, in short, Local Search is not random sampling, but a guided random-walk through the search space. It's guided because only better solutions get accepted, worse one's get rejected. In this sense, LSopt will narrow down on good parameter values. The implementation of the neighbourhood is not ideal for two reasons. The first is that a solution may not be changed at all, since I sample from feasible values. But for a small set of possible values as here, it might often happen that the same element is selected again. However, for larger search spaces, this inefficiency is typically negligible, since the probability of sampling the same value becomes smaller. Often so small, that the additional code for testing if the solution has changed becomes more expensive that the occasionally-wasted iteration. A second thing could be improved, albeit through a more complicated function. And again, for this small problem it does not matter. In the current neighbourhood, an element is picked and then set to any feasible value. But that means that changes from one solution to the next might be large. Instead of picking any feasible values of the As, in realistic problems it will often be better to pick a value close to the current value. For example, when you are at 2.1, either move to 1.8 or 2.4, but not to 3.0. 
(This reasoning is only relevant, of course, if the variable in question is on a numeric or at least ordinal scale.) Ultimately, what implementation works well can be tested only empirically. Many more details are in this tutorial. Here is one alternative implementation. A solution is now a vector of positions for the original values, e.g. if x[1] is 2, it "points" to 1.8, if x[2] is 2, it points to 1, and so on. ## precompute lengths of vectors in Vars lens <- lengths(Vars) nb2 <- function(x, lens, ...) { i <- sample(length(lens), 1) if (x[i] == 1L) { x[i] <- 2 } else if (x[i] == lens[i]) { x[i] <- lens[i] - 1 } else x[i] <- x[i] + sample(c(1, -1), 1) x } ## the objective function now needs to map the ## indices in x back to the levels in Vars Fn2 <- function(x, levels, ...){ y <- mapply(`[`, levels, x) ## => same as ## y <- numeric(length(x)) ## y[1] <- Vars[[1]][x[1]] ## y[2] <- Vars[[2]][x[2]] ## .... SUM <- sum(y) return(-SUM) } xbest <- LSopt(Fn2, list(x0 = c(1, 1, 1, 1, 1), ## an initial solution neighbour = nb2, nI = 200 ## iterations ), levels = Vars, lens = lens)$xbest ## Local Search. ## .... ## Best solution overall: -119 ## map the solution back to the values mapply(`[`, Vars, xbest) ## As Bs Cs Ds Es ## 3 2 51 60 3
Here is a genetic algorithm solution with package GA. The key is to write a function decode enforcing the constraints, see the package vignette. library(GA) #> Loading required package: foreach #> Loading required package: iterators #> Package 'GA' version 3.2.2 #> Type 'citation("GA")' for citing this R package in publications. #> #> Attaching package: 'GA' #> The following object is masked from 'package:utils': #> #> de decode <- function(x) { As <- Vars$As Bs <- Vars$Bs Cs <- Vars$Cs Ds <- rev(Vars$Ds) # fix real variable As i <- findInterval(x[1], As) if(x[1L] - As[i] < As[i + 1L] - x[1L]) x[1L] <- As[i] else x[1L] <- As[i + 1L] # fix binary variable Bs if(x[2L] - Bs[1L] < Bs[2L] - x[2L]) x[2L] <- Bs[1L] else x[2L] <- Bs[2L] # fix integer variable Cs i <- findInterval(x[3L], Cs) if(x[3L] - Cs[i] < Cs[i + 1L] - x[3L]) x[3L] <- Cs[i] else x[3L] <- Cs[i + 1L] # fix integer variable Ds i <- findInterval(x[4L], Ds) if(x[4L] - Ds[i] < Ds[i + 1L] - x[4L]) x[4L] <- Ds[i] else x[4L] <- Ds[i + 1L] # fix the other, integer variable x[5L] <- round(x[5L]) setNames(x , c("As", "Bs", "Cs", "Ds", "Es")) } Fn <- function(x){ x <- decode(x) # a <- x[1] # b <- x[2] # c <- x[3] # d <- x[4] # e <- x[5] # SUM <- a + b + c + d + e SUM <- sum(x, na.rm = TRUE) return(SUM) } #Parameters for variables to optimize Vars <- list( As = seq(1.5, 3, by = 0.3), # Float Bs = c(1, 2), # Binary Cs = seq(1, 60, by = 10), # Integer Ds = seq(60, -60, length.out = 5), # Negative Es = c(1, 2, 3) ) res <- ga(type = "real-valued", fitness = Fn, lower = c(1.5, 1, 1, -60, 1), upper = c(3, 2, 51, 60, 3), popSize = 1000, seed = 123) summary(res) #> ── Genetic Algorithm ─────────────────── #> #> GA settings: #> Type = real-valued #> Population size = 1000 #> Number of generations = 100 #> Elitism = 50 #> Crossover probability = 0.8 #> Mutation probability = 0.1 #> Search domain = #> x1 x2 x3 x4 x5 #> lower 1.5 1 1 -60 1 #> upper 3.0 2 51 60 3 #> #> GA results: #> Iterations = 100 #> Fitness function value = 119 #> Solutions = #> x1 x2 x3 x4 x5 #> [1,] 2.854089 1.556080 46.11389 49.31045 2.532682 #> [2,] 2.869408 1.638266 46.12966 48.71106 2.559620 #> [3,] 2.865254 1.665405 46.21684 49.04667 2.528606 #> [4,] 2.866494 1.630416 46.12736 48.78017 2.530454 #> [5,] 2.860940 1.650015 46.31773 48.92642 2.521276 #> [6,] 2.851644 1.660358 46.09504 48.81425 2.525504 #> [7,] 2.855078 1.611837 46.13855 48.62022 2.575492 #> [8,] 2.857066 1.588893 46.15918 48.60505 2.588992 #> [9,] 2.862644 1.637806 46.20663 48.92781 2.579260 #> [10,] 2.861573 1.630762 46.23494 48.90927 2.555612 #> ... #> [59,] 2.853788 1.640810 46.35649 48.87381 2.536682 #> [60,] 2.859090 1.658127 46.15508 48.85404 2.590679 apply(res#solution, 1, decode) |> t() |> unique() #> As Bs Cs Ds Es #> [1,] 3 2 51 60 3 Created on 2022-10-24 with reprex v2.0.2
Finding the peak of a mountain
So I've combined those 2 rasters and made them into one dem raster which contains elevation values:

dem1 = read_stars("srtm_43_06.tif")
dem2 = read_stars("srtm_44_06.tif")
pol = st_read("israel_borders.shp")
dem = st_mosaic(dem1, dem2)
dem = dem[, 5687:6287, 2348:2948]
names(dem) = "elevation"
dem = st_warp(src = dem, crs = 32636, method = "near", cellsize = 90)

Now I need to calculate a point geometry of the peak of the mountain by finding the centroid of the pixel that has the highest elevation in the image. Does anyone know which functions I can use?
Building on Grzegorz Sapijaszko's example, here is an alternative path to the top of the mountain.

library(terra)
f <- system.file("ex/elev.tif", package="terra")
x <- rast(f)

If there is a single maximum, you can do

g <- global(x, which.max)
xyFromCell(x, g[,1])
#             x        y
#[1,] 6.020833 50.17917

Now, consider a situation with multiple maxima. I add three more cells with the maximum value.

x[c(1000, 2500, 5000)] <- 547

We can find the four highest peaks with:

g <- global(x, which.max)[[1]]
v <- x[g] |> unlist()
y <- ifel(x == v, v, NA)
p <- as.points(y)
crds(p)
#[1,] 6.020833 50.17917
#[2,] 6.154167 50.10417
#[3,] 5.987500 49.97083
#[4,] 6.237500 49.75417

You should not warp (project with terra) the raster data first, because that changes the cell values and potentially the location of the highest peak. You should find the peaks with the original data, but then you can transform the results like this:

pp <- project(p, "EPSG:32636")
crds(pp)
#            x       y
#[1,] -1411008 5916157
#[2,] -1404896 5904422
#[3,] -1422145 5894509
#[4,] -1413735 5864236

With your files, you could start with something like

ff <- c("srtm_43_06.tif", "srtm_44_06.tif")
x <- vrt(ff)
g <- global(x, which.max)

and then continue as in the examples above.
Let's use terra; a similar approach can, however, be applied with the raster package as well. For testing purposes we will use the raster supplied with the terra package.

library(terra)
#> terra 1.5.12
f <- system.file("ex/elev.tif", package="terra")
v <- rast(f)
plot(v)

You can check the details of your raster by just typing the raster object name and pressing enter; you can check the min and max values with the minmax() function from terra:

minmax(v)
#>      elevation
#> [1,]       141
#> [2,]       547

Let's create another raster by copying the original one, however checking if the value is the max value of elevation:

w <- v == minmax(v)[2]
plot(w)

Let's create a substitution matrix, and substitute all FALSE with NA and TRUE with 1:

mx <- matrix(c(FALSE, NA, TRUE, 1), ncol = 2, byrow = TRUE)
w <- classify(w, mx)
plot(v)
plot(as.polygons(w), add=TRUE)

Let's find the centroids of those polygon(s):

pts <- centroids(as.polygons(w))
plot(pts, add=TRUE)

Let's see our coordinates:

as.data.frame(pts, geom = "WKT")
#>   elevation                   geometry
#> 1         1 POINT (6.020833 50.179167)

Created on 2022-01-29 by the reprex package (v2.0.1)
cosine similarity (patient similarity metric) between 48k patients' data with predictive variables
I have to calculate cosine similarity (a patient similarity metric) in R between 48k patients' data with some predictive variables. Here is the equation: PSM(P1,P2) = P1.P2 / (||P1|| ||P2||), where P1 and P2 are the predictor vectors corresponding to two different patients; for example, P1 is the index patient and P2 will be compared with the index (P1), and finally the pairwise patient similarity metric PSM(P1,P2) will be calculated. This process will go on for all 48k patients. I have added a sample data set for 300 patients in a .csv file. Please find the sample data set here: https://1drv.ms/u/s!AhoddsPPvdj3hVTSbosv2KcPIx5a
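For what it's worth, the equation as written translates almost directly into an R function; this is only a sketch of the stated formula (the name psm is mine), not the full 48k-by-48k computation:
# sketch: cosine similarity between two predictor vectors P1 and P2
psm <- function(P1, P2) {
  sum(P1 * P2) / (sqrt(sum(P1^2)) * sqrt(sum(P2^2)))
}
# e.g. psm(c(1, 0, 2), c(2, 1, 2))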
First things first: you can find more rigorous treatments of cosine similarity at either of these posts: Find cosine similarity between two arrays, and Creating co-occurrence matrix.

Now, you clearly have a mixture of data types in your input, at least: decimal, integer, and categorical. I suspect that some of the integer values are Booleans or additional categoricals. Generally, it will be up to you to transform these into continuous numerical vectors if you want to use them as input into the similarity calculation. For example, what's the distance between admission types ELECTIVE and EMERGENCY? Is it a nominal or ordinal variable? I will only be modelling the columns that I trust to be numerical dependent variables.

Also, what have you done to ensure that some of your columns don't correlate with others? Using just a little awareness of data science and biomedical terminology, it seems likely that the following are all correlated: diasbp_max, diasbp_min, meanbp_max, meanbp_min, sysbp_max and sysbp_min. I suggest going to a print shop and ordering a poster-size printout of psm_pairs.pdf. :-) Your eyes are better at detecting meaningful (but non-linear) dependencies between variables. Including multiple measurements of the same fundamental phenomenon may over-weight that phenomenon in your similarity calculation. Don't forget that you can derive variables like

diasbp_range <- diasbp_max - diasbp_min

Now, I'm not especially good at linear algebra, so I'm importing a cosine similarity function from the lsa text analysis package. I'd love to see you write out the formula in your question as an R function. I would write it to compare one row to another, and use two nested apply loops to get all comparisons. Hopefully we'll get the same results!

After calculating the similarity, I try to find two different patients with the most dissimilar encounters.

Since you're working with a number of rows that's relatively large, you'll want to compare various algorithmic methodologies for efficiency. In addition, you could use SparkR or some other Hadoop solution on a cluster, or the parallel package on a single computer with multiple cores and lots of RAM. I have no idea whether the solution I provided is thread-safe. Come to think of it, the transposition alone (as I implemented it) is likely to be computationally costly for a set of 1 million patient-encounters. Overall, (if I remember my computational complexity correctly) as the number of rows in your input increases, the performance could degrade exponentially.
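As an aside on the correlation point above, a quick base-R screen for highly correlated numeric columns could look like the sketch below; psm_numerics is the numeric subset built in the listing that follows, and the 0.9 cutoff is an arbitrary choice of mine.
# sketch: flag pairs of numeric columns whose absolute correlation exceeds a cutoff
cm <- cor(psm_numerics, use = "pairwise.complete.obs")
cm[upper.tri(cm, diag = TRUE)] <- NA          # keep each pair only once
high <- which(abs(cm) > 0.9, arr.ind = TRUE)  # 0.9 is arbitrary
data.frame(var1 = rownames(cm)[high[, 1]],
           var2 = colnames(cm)[high[, 2]],
           r    = cm[high])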
library(lsa) library(reshape2) psm_sample <- read.csv("psm_sample.csv") row.names(psm_sample) <- make.names(paste0("patid.", as.character(psm_sample$subject_id)), unique = TRUE) temp <- sapply(psm_sample, class) temp <- cbind.data.frame(names(temp), as.character(temp)) names(temp) <- c("variable", "possible.type") numeric.cols <- (temp$possible.type %in% c("factor", "integer") & (!(grepl( pattern = "_id$", x = temp$variable ))) & (!( grepl(pattern = "_code$", x = temp$variable) )) & (!( grepl(pattern = "_type$", x = temp$variable) ))) | temp$possible.type == "numeric" psm_numerics <- psm_sample[, numeric.cols] row.names(psm_numerics) <- row.names(psm_sample) psm_numerics$gender <- as.integer(psm_numerics$gender) psm_scaled <- scale(psm_numerics) pair.these.up <- psm_scaled # checking for independence of variables # if the following PDF pair plot is too big for your computer to open, # try pair-plotting some random subset of columns # keep.frac <- 0.5 # keep.flag <- runif(ncol(psm_scaled)) < keep.frac # pair.these.up <- psm_scaled[, keep.flag] # pdf device sizes are in inches dev <- pdf( file = "psm_pairs.pdf", width = 50, height = 50, paper = "special" ) pairs(pair.these.up) dev.off() #transpose the dataframe to get the #similarity between patients cs <- lsa::cosine(t(psm_scaled)) # this is super inefficnet, because cs contains # two identical triangular matrices cs.melt <- melt(cs) cs.melt <- as.data.frame(cs.melt) names(cs.melt) <- c("enc.A", "enc.B", "similarity") extract.pat <- function(enc.col) { my.patients <- sapply(enc.col, function(one.pat) { temp <- (strsplit(as.character(one.pat), ".", fixed = TRUE)) return(temp[[1]][[2]]) }) return(my.patients) } cs.melt$pat.A <- extract.pat(cs.melt$enc.A) cs.melt$pat.B <- extract.pat(cs.melt$enc.B) same.pat <- cs.melt[cs.melt$pat.A == cs.melt$pat.B ,] different.pat <- cs.melt[cs.melt$pat.A != cs.melt$pat.B ,] most.dissimilar <- different.pat[which.min(different.pat$similarity),] dissimilar.pat.frame <- rbind(psm_numerics[rownames(psm_numerics) == as.character(most.dissimilar$enc.A) ,], psm_numerics[rownames(psm_numerics) == as.character(most.dissimilar$enc.B) ,]) print(t(dissimilar.pat.frame)) which gives patid.68.49 patid.9 gender 1.00000 2.00000 age 41.85000 41.79000 sysbp_min 72.00000 106.00000 sysbp_max 95.00000 217.00000 diasbp_min 42.00000 53.00000 diasbp_max 61.00000 107.00000 meanbp_min 52.00000 67.00000 meanbp_max 72.00000 132.00000 resprate_min 20.00000 14.00000 resprate_max 35.00000 19.00000 tempc_min 36.00000 35.50000 tempc_max 37.55555 37.88889 spo2_min 90.00000 95.00000 spo2_max 100.00000 100.00000 bicarbonate_min 22.00000 26.00000 bicarbonate_max 22.00000 30.00000 creatinine_min 2.50000 1.20000 creatinine_max 2.50000 1.40000 glucose_min 82.00000 129.00000 glucose_max 82.00000 178.00000 hematocrit_min 28.10000 37.40000 hematocrit_max 28.10000 45.20000 potassium_min 5.50000 2.80000 potassium_max 5.50000 3.00000 sodium_min 138.00000 136.00000 sodium_max 138.00000 140.00000 bun_min 28.00000 16.00000 bun_max 28.00000 17.00000 wbc_min 2.50000 7.50000 wbc_max 2.50000 13.70000 mingcs 15.00000 15.00000 gcsmotor 6.00000 5.00000 gcsverbal 5.00000 0.00000 gcseyes 4.00000 1.00000 endotrachflag 0.00000 1.00000 urineoutput 1674.00000 887.00000 vasopressor 0.00000 0.00000 vent 0.00000 1.00000 los_hospital 19.09310 4.88130 los_icu 3.53680 5.32310 sofa 3.00000 5.00000 saps 17.00000 18.00000 posthospmort30day 1.00000 0.00000
Usually I wouldn't add a second answer, but that might be the best solution here. Don't worry about voting on it.

Here's the same algorithm as in my first answer, applied to the iris data set. Each row contains four spatial measurements of flowers from three different varieties of iris plants. Below that you will find the iris analysis, written out as nested loops so you can see the equivalence. But that's not recommended for production with large data sets.

Please familiarize yourself with the starting data and all of the intermediate dataframes:

the input iris data
psm_scaled (the spatial measurements, scaled to mean=0, SD=1)
cs (the matrix of pairwise similarities)
cs.melt (the pairwise similarities in long format)

At the end I have aggregated the mean similarities for all comparisons between one variety and another. You will see that comparisons between individuals of the same variety have mean similarities approaching 1, and comparisons between individuals of different varieties have mean similarities approaching negative 1.

library(lsa)
library(reshape2)

temp <- iris[, 1:4]
iris.names <- paste0(iris$Species, '.', rownames(iris))
psm_scaled <- scale(temp)
rownames(psm_scaled) <- iris.names

cs <- lsa::cosine(t(psm_scaled))

# this is super inefficient, because cs contains
# two identical triangular matrices
cs.melt <- melt(cs)
cs.melt <- as.data.frame(cs.melt)
names(cs.melt) <- c("enc.A", "enc.B", "similarity")
names(cs.melt) <- c("flower.A", "flower.B", "similarity")

class.A <- strsplit(as.character(cs.melt$flower.A), '.', fixed = TRUE)
cs.melt$class.A <- sapply(class.A, function(one.split) {
  return(one.split[1])
})
class.B <- strsplit(as.character(cs.melt$flower.B), '.', fixed = TRUE)
cs.melt$class.B <- sapply(class.B, function(one.split) {
  return(one.split[1])
})
cs.melt$comparison <- paste0(cs.melt$class.A, '_vs_', cs.melt$class.B)

cs.agg <- aggregate(cs.melt$similarity, by = list(cs.melt$comparison), mean)
print(cs.agg[order(cs.agg$x),])

which gives

#                    Group.1          x
# 3      setosa_vs_virginica -0.7945321
# 7      virginica_vs_setosa -0.7945321
# 2     setosa_vs_versicolor -0.4868352
# 4     versicolor_vs_setosa -0.4868352
# 6  versicolor_vs_virginica  0.3774612
# 8  virginica_vs_versicolor  0.3774612
# 5 versicolor_vs_versicolor  0.4134413
# 9   virginica_vs_virginica  0.7622797
# 1         setosa_vs_setosa  0.8698189

If you're still not comfortable with performing lsa::cosine() on a scaled, numerical dataframe, we can certainly do explicit pairwise calculations.

The formula you gave for PSM, or cosine similarity of patients, is expressed in two formats at Wikipedia. Remembering that vectors A and B represent the ordered list of attributes for PatientA and PatientB, the PSM is the dot product of A and B, divided by (the scalar product of [the magnitude of A] and [the magnitude of B]).

The terse way of saying that in R is

cosine.sim <- function(A, B) { A %*% B / sqrt(A %*% A * B %*% B) }

But we can rewrite that to look more similar to your post as

cosine.sim <- function(A, B) { A %*% B / (sqrt(A %*% A) * sqrt(B %*% B)) }

I guess you could even re-write that (the calculations of similarity between a single pair of individuals) as a bunch of nested loops, but in the case of a manageable amount of data, please don't. R is highly optimized for operations on vectors and matrices. If you're new to R, don't second guess it. By the way, what happened to your millions of rows? This will certainly be less stressful now that you're down to tens of thousands.

Anyway, let's say that each individual only has two elements.
individual.1 <- c(1, 0)
individual.2 <- c(1, 1)

So you can think of individual.1 as a line that passes between the origin (0,0) and (1, 0), and individual.2 as a line that passes between the origin and (1, 1).

some.data <- rbind.data.frame(individual.1, individual.2)
names(some.data) <- c('element.i', 'element.j')
rownames(some.data) <- c('individual.1', 'individual.2')
plot(some.data, xlim = c(-0.5, 2), ylim = c(-0.5, 2))
text(
  some.data,
  rownames(some.data),
  xlim = c(-0.5, 2),
  ylim = c(-0.5, 2),
  adj = c(0, 0)
)
segments(0, 0, x1 = some.data[1, 1], y1 = some.data[1, 2])
segments(0, 0, x1 = some.data[2, 1], y1 = some.data[2, 2])

So what's the angle between vector individual.1 and vector individual.2? You guessed it, 0.785 radians, or 45 degrees.

cosine.sim <- function(A, B) { A %*% B / (sqrt(A %*% A) * sqrt(B %*% B)) }

cos.sim.result <- cosine.sim(individual.1, individual.2)
angle.radians <- acos(cos.sim.result)
angle.degrees <- angle.radians * 180 / pi
print(angle.degrees)
#      [,1]
# [1,]   45

Now we can use the cosine.sim function I previously defined, in two nested loops, to explicitly calculate the pairwise similarities between each of the iris flowers. Remember, psm_scaled has already been defined as the scaled numerical values from the iris dataset.

cs.melt <- lapply(rownames(psm_scaled), function(name.A) {
  inner.loop.result <- lapply(rownames(psm_scaled), function(name.B) {
    individual.A <- psm_scaled[rownames(psm_scaled) == name.A, ]
    individual.B <- psm_scaled[rownames(psm_scaled) == name.B, ]
    similarity <- cosine.sim(individual.A, individual.B)
    return(list(name.A, name.B, similarity))
  })
  inner.loop.result <- do.call(rbind.data.frame, inner.loop.result)
  names(inner.loop.result) <- c('flower.A', 'flower.B', 'similarity')
  return(inner.loop.result)
})
cs.melt <- do.call(rbind.data.frame, cs.melt)

Now we repeat the calculation of cs.melt$class.A, cs.melt$class.B, and cs.melt$comparison as above, and calculate cs.agg.from.loops as the mean similarity between the various types of comparisons:

cs.agg.from.loops <- aggregate(cs.melt$similarity, by = list(cs.melt$comparison), mean)
print(cs.agg.from.loops[order(cs.agg.from.loops$x),])

#                    Group.1          x
# 3      setosa_vs_virginica -0.7945321
# 7      virginica_vs_setosa -0.7945321
# 2     setosa_vs_versicolor -0.4868352
# 4     versicolor_vs_setosa -0.4868352
# 6  versicolor_vs_virginica  0.3774612
# 8  virginica_vs_versicolor  0.3774612
# 5 versicolor_vs_versicolor  0.4134413
# 9   virginica_vs_virginica  0.7622797
# 1         setosa_vs_setosa  0.8698189

Which, I believe, is identical to the result we got with lsa::cosine.

So what I'm trying to say is... why wouldn't you use lsa::cosine? Maybe you should be more concerned with

selection of variables, including removal of highly correlated variables
scaling/normalizing/standardizing the data
performance with a large input data set
identifying known similars and dissimilars for quality control

as previously addressed.
3D with value interpolation in R (X, Y, Z, V)
Is there an R package that does X, Y, Z, V interpolation? I see that Akima does X, Y, V but I need one more dimension. Basically I have X,Y,Z coordinates plus the value (V) that I want to interpolate. This is all GIS data but my GIS does not do voxel interpolation So if I have a point cloud of XYZ coordinates with a value of V, how can I interpolate what V would be at XYZ coordinate (15,15,-12) ? Some test data would look like this: X <-rbind(10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,20,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,40,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50) Y <- rbind(10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50,10,10,10,10,10,20,20,20,20,20,30,30,30,30,30,40,40,40,40,40,50,50,50,50,50) Z <- 
rbind(-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-17,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29,-29) V <- rbind(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,25,35,75,25,50,0,0,0,0,0,10,12,17,22,27,32,37,25,13,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,50,125,130,105,110,115,165,180,120,100,80,60,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)
I had the same question and was hoping for an answer in R. My question was: How do I perform 3D (trilinear) interpolation using regular gridded coordinate/value data (x,y,z,v)? For example, CT images, where each image has pixel centers (x, y) and greyscale value (v) and there are multiple image "slices" (z) along the thing being imaged (e.g., head, torso, leg, ...). There is a slight problem with the given example data. # original example data (reformatted) X <- rep( rep( seq(10, 50, by=10), each=25), 3) Y <- rep( rep( seq(10, 50, by=10), each=5), 15) Z <- rep(c(-5, -17, -29), each=125) V <- rbind(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,25,35,75,25,50,0,0,0,0,0,10,12,17,22,27,32,37,25,13,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,50,125,130,105,110,115,165,180,120,100,80,60,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0) # the dimensions of the 3D grid described do not match the number of values (length(unique(X))*length(unique(Y))*length(unique(Z))) == length(V) ## [1] FALSE ## which makes sense since 75 != 375 # visualize this: library(rgl) plot3d(x=X, y=Y, z=Z, col=terrain.colors(181)[V]) # examine the example data real quick... df <- data.frame(x=X,y=Y,z=Z,v=V); head(df); table(df$x, df$y, df$z); # there are 5 V values at each X,Y,Z coordinate... duplicates! # redefine Z so there are 15 unique values # making 375 unique coordinate points # and matching the length of the given value vector, V df$z <- seq(-5, -29, length.out=15) head(df) table(df$x, df$y, df$z); # there is now 1 V value at each X,Y,Z coordinate # that was for testing, now actually redefine the Z vector. Z <- rep(seq(-5,-29, length.out = 15), 25) # plot it. library(rgl) plot3d(x=X, y=Y, z=Z, col=terrain.colors(181)[V]) I couldn't find any 4D interpolation functions in the usual R packages, so I wrote a quick and dirty one. The following implements (without ANY error checking... caveat emptor!) 
the technique described at: https://en.wikipedia.org/wiki/Trilinear_interpolation # convenience function #1: # define a function that takes a vector of lookup values and a value to lookup # and returns the two lookup values that the value falls between between = function(vec, value) { # extract list of unique lookup values u = unique(vec) # difference vector dvec = u - value vals = c(u[dvec==max(dvec[dvec<0])], u[dvec==min(dvec[dvec>0])]) return(vals) } # convenience function #2: # return the value (v) from a grid data.frame for given point (x, y, z) get_value = function(df, xi, yi, zi) { # assumes df is data.frame with column names: x, y, z, v subset(df, x==xi & y==yi & z==zi)$v } # inputs df (x,y,z,v), points to look up (x, y, z) interp3 = function(dfin, xin, yin, zin) { # TODO: check if all(xin, yin, zin) equals a grid point, if so just return the point value # TODO: check if any(xin, yin, zin) equals a grid point, if so then do bilinear or linear interp cube_x <- between(dfin$x, xin) cube_y <- between(dfin$y, yin) cube_z <- between(dfin$z, zin) # find the two values in each dimension that the lookup value falls within # and extract the cube of 8 points tmp <- subset(dfin, x %in% cube_x & y %in% cube_y & z %in% cube_z) stopifnot(nrow(tmp)==8) # define points in a periodic and cubic lattice x0 = min(cube_x); x1 = max(cube_x); y0 = min(cube_y); y1 = max(cube_y); z0 = min(cube_z); z1 = max(cube_z); # define differences in each dimension xd = (xin-x0)/(x1-x0); # 0.5 yd = (yin-y0)/(y1-y0); # 0.5 zd = (zin-z0)/(z1-z0); # 0.9166666 # interpolate along x: v00 = get_value(tmp, x0, y0, z0)*(1-xd) + get_value(tmp,x1,y0,z0)*xd # 2.5 v01 = get_value(tmp, x0, y0, z1)*(1-xd) + get_value(tmp,x1,y0,z1)*xd # 0 v10 = get_value(tmp, x0, y1, z0)*(1-xd) + get_value(tmp,x1,y1,z0)*xd # 0 v11 = get_value(tmp, x0, y1, z1)*(1-xd) + get_value(tmp,x1,y1,z1)*xd # 65 # interpolate along y: v0 = v00*(1-yd) + v10*yd # 1.25 v1 = v01*(1-yd) + v11*yd # 32.5 # interpolate along z: return(v0*(1-zd) + v1*zd) # 29.89583 (~91.7% between v0 and v1) } > interp3(df, 15, 15, -12) [1] 29.89583 Testing that same source's assertion that trilinear is simply linear(bilinear(), bilinear()), we can use the base R linear interpolation function, approx(), and the akima package's bilinear interpolation function, interp(), as follows: library(akima) approx(x=c(-11.857143,-13.571429), y=c(interp(x=df[round(df$z,1)==-11.9,"x"], y=df[round(df$z,1)==-11.9,"y"], z=df[round(df$z,1)==-11.9,"v"], xo=15, yo=15)$z, interp(x=df[round(df$z,1)==-13.6,"x"], y=df[round(df$z,1)==-13.6,"y"], z=df[round(df$z,1)==-13.6,"v"], xo=15, yo=15)$z), xout=-12)$y # [1] 0.2083331 Checked another package to triangulate: library(oce) Vmat <- array(data = V, dim = c(length(unique(X)), length(unique(Y)), length(unique(Z)))) approx3d(x=unique(X), y=unique(Y), z=unique(Z), f=Vmat, xout=15, yout=15, zout=-12) [1] 1.666667 So 'oce', 'akima' and my function all give pretty different answers. This is either a mistake in my code somewhere, or due to differences in the underlying Fortran code in the akima interp(), and whatever is in the oce 'approx3d' function that we'll leave for another day. Not sure what the correct answer is because the MWE is not exactly "minimum" or simple. But I tested the functions with some really simple grids and it seems to give 'correct' answers. 
Here's one simple 2x2x2 example: # really, really simple example: # answer is always the z-coordinate value sdf <- expand.grid(x=seq(0,1),y=seq(0,1),z=seq(0,1)) sdf$v <- rep(seq(0,1), each=4) > interp3(sdf,0.25,0.25,.99) [1] 0.99 > interp3(sdf,0.25,0.25,.4) [1] 0.4 Trying akima on the simple example, we get the same answer (phew!): library(akima) approx(x=unique(sdf$z), y=c(interp(x=sdf[sdf$z==0,"x"], y=sdf[sdf$z==0,"y"], z=sdf[sdf$z==0,"v"], xo=.25, yo=.25)$z, interp(x=sdf[sdf$z==1,"x"], y=sdf[sdf$z==1,"y"], z=sdf[sdf$z==1,"v"], xo=.25, yo=.25)$z), xout=.4)$y # [1] 0.4 The new example data in the OP's own, accepted answer was not possible to interpolate with my simple interp3() function above because: (a) the grid coordinates are not regularly spaced, and (b) the coordinates to lookup (x1, y1, z1) lie outside of the grid. # for completeness, here's the attempt: options(scipen = 999) XCoor=c(78121.6235,78121.6235,78121.6235,78121.6235,78136.723,78136.723,78136.723,78136.8969,78136.8969,78136.8969,78137.4595,78137.4595,78137.4595,78125.061,78125.061,78125.061,78092.4696,78092.4696,78092.4696,78092.7683,78092.7683,78092.7683,78092.7683,78075.1171,78075.1171,78064.7462,78064.7462,78064.7462,78052.771,78052.771,78052.771,78032.1179,78032.1179,78032.1179) YCoor=c(5213642.173,523642.173,523642.173,523642.173,523594.495,523594.495,523594.495,523547.475,523547.475,523547.475,523503.462,523503.462,523503.462,523426.33,523426.33,523426.33,523656.953,523656.953,523656.953,523607.157,523607.157,523607.157,523607.157,523514.671,523514.671,523656.81,523656.81,523656.81,523585.232,523585.232,523585.232,523657.091,523657.091,523657.091) ZCoor=c(-3.0,-5.0,-10.0,-13.0,-3.5,-6.5,-10.5,-3.5,-6.5,-9.5,-3.5,-5.5,-10.5,-3.5,-5.5,-7.5,-3.5,-6.5,-11.5,-3.0,-5.0,-9.0,-12.0,-6.5,-10.5,-2.5,-3.5,-8.0,-3.5,-6.5,-9.5,-2.5,-6.5,-8.5) V=c(2.4000,30.0,620.0,590.0,61.0,480.0,0.3700,0.0,0.3800,0.1600,0.1600,0.9000,0.4100,0.0,0.0,0.0061,6.0,52.0,0.3400,33.0,235.0,350.0,9300.0,31.0,2100.0,0.0,0.0,10.5000,3.8000,0.9000,310.0,0.2800,8.3000,18.0) adf = data.frame(x=XCoor, y=YCoor, z=ZCoor, v=V) # the first y value looks like a typo? > head(adf) x y z v 1 78121.62 5213642.2 -3.0 2.4 2 78121.62 523642.2 -5.0 30.0 3 78121.62 523642.2 -10.0 620.0 4 78121.62 523642.2 -13.0 590.0 5 78136.72 523594.5 -3.5 61.0 6 78136.72 523594.5 -6.5 480.0 x1=198130.000 y1=1913590.000 z1=-8 > interp3(adf, x1,y1,z1) numeric(0) Warning message: In min(dvec[dvec > 0]) : no non-missing arguments to min; returning Inf
Whether the test data did or not make sense, I still needed an algorithm. Test data is just that, something to fiddle with and as a test data it was fine. I wound up programming it in python and the following code takes XYZ V and does a 3D Inverse Distance Weighted (IDW) interpolation where you can set the number of points used in the interpolation. This python recipe only interpolates to one point (x1, y1, z1) but it is easy enough to extend. import numpy as np import math #34 points XCoor=np.array([78121.6235,78121.6235,78121.6235,78121.6235,78136.723,78136.723,78136.723,78136.8969,78136.8969,78136.8969,78137.4595,78137.4595,78137.4595,78125.061,78125.061,78125.061,78092.4696,78092.4696,78092.4696,78092.7683,78092.7683,78092.7683,78092.7683,78075.1171,78075.1171,78064.7462,78064.7462,78064.7462,78052.771,78052.771,78052.771,78032.1179,78032.1179,78032.1179]) YCoor=np.array([5213642.173,523642.173,523642.173,523642.173,523594.495,523594.495,523594.495,523547.475,523547.475,523547.475,523503.462,523503.462,523503.462,523426.33,523426.33,523426.33,523656.953,523656.953,523656.953,523607.157,523607.157,523607.157,523607.157,523514.671,523514.671,523656.81,523656.81,523656.81,523585.232,523585.232,523585.232,523657.091,523657.091,523657.091]) ZCoor=np.array([-3.0,-5.0,-10.0,-13.0,-3.5,-6.5,-10.5,-3.5,-6.5,-9.5,-3.5,-5.5,-10.5,-3.5,-5.5,-7.5,-3.5,-6.5,-11.5,-3.0,-5.0,-9.0,-12.0,-6.5,-10.5,-2.5,-3.5,-8.0,-3.5,-6.5,-9.5,-2.5,-6.5,-8.5]) V=np.array([2.4000,30.0,620.0,590.0,61.0,480.0,0.3700,0.0,0.3800,0.1600,0.1600,0.9000,0.4100,0.0,0.0,0.0061,6.0,52.0,0.3400,33.0,235.0,350.0,9300.0,31.0,2100.0,0.0,0.0,10.5000,3.8000,0.9000,310.0,0.2800,8.3000,18.0]) def Distance(x1,y1,z1, Npoints): i=0 d=[] while i < 33: d.append(math.sqrt((x1-XCoor[i])*(x1-XCoor[i]) + (y1-YCoor[i])*(y1-YCoor[i]) + (z1-ZCoor[i])*(z1-ZCoor[i]) )) i = i + 1 distance=np.array(d) myIndex=distance.argsort()[:Npoints] weightedNum=0 weightedDen=0 for i in myIndex: weightedNum=weightedNum + (V[i]/(distance[i]*distance[i])) weightedDen=weightedDen + (1/(distance[i]*distance[i])) InterpValue=weightedNum/weightedDen return InterpValue x1=198130.000 y1=1913590.000 z1=-8 print(Distance(x1,y1,z1, 12))
Running Mean/SD: How can I select within the averaging window based on criteria
I need to calculate a moving average and standard deviation for a moving window. This is simple enough with the caTools package! ... However, what I would like to do is this: having defined my moving window, I want to take an average from ONLY those values within the window whose corresponding values of other variables meet certain criteria. For example, I would like to calculate a moving Temperature average, using only the values within the window (e.g. +/- 2 days) where, say, Relative Humidity is above 80%.

Could anybody help point me in the right direction? Here is some example data:

da <- data.frame(matrix(c(12,15,12,13,8,20,18,19,20,80,79,91,92,70,94,80,80,90),
                        ncol = 2, byrow = TRUE))
names(da) = c("Temp", "RH")

Thanks, Brad
I haven't used caTools, but in the help text for the (presumably) most relevant function in that package, ?runmean, you see that x, the input data, can be either "a numeric vector [...] or matrix with n rows". In your case the matrix alternative is most relevant - you wish to calculate the mean of a focal variable, Temp, conditional on a second variable, RH, and the function needs access to both variables. However, "[i]f x is a matrix than each column will be processed separately". Thus, I don't think caTools can solve your problem.

Instead, I would suggest rollapply in the zoo package. In rollapply, you have the argument by.column. Default is TRUE: "If TRUE, FUN is applied to each column separately". However, as explained above we need access to both columns in the function, so we set by.column to FALSE.

library(zoo)

# First, specify a function to apply to each window: mean of Temp where RH > 80
meanfun <- function(x) mean(x[(x[ , "RH"] > 80), "Temp"])

# Apply the function to windows of size 3 in your data 'da'.
meanTemp <- rollapply(data = da, width = 3, FUN = meanfun, by.column = FALSE)
meanTemp

# If you want to add the means to 'da',
# you need to make it the same length as the number of rows in 'da'.
# This can be achieved by the `fill` argument,
# where we can pad the resulting vector of running means with NA
meanTemp <- rollapply(data = da, width = 3, FUN = meanfun, by.column = FALSE, fill = NA)

# Add the vector of means to the data frame
da2 <- cbind(da, meanTemp)
da2

# even smaller example to make it easier to see how the function works
da <- data.frame(Temp = 1:9, RH = rep(c(80, 81, 80), each = 3))
meanTemp <- rollapply(data = da, width = 3, FUN = meanfun, by.column = FALSE, fill = NA)
da2 <- cbind(da, meanTemp)
da2
#   Temp RH meanTemp
# 1    1 80       NA
# 2    2 80      NaN
# 3    3 80      4.0
# 4    4 81      4.5
# 5    5 81      5.0
# 6    6 81      5.5
# 7    7 80      6.0
# 8    8 80      NaN
# 9    9 80       NA