foreach doesn't change the values of raster cells in R

I'm trying to simulate herding behavior in R.
Here's the code
library(raster)
library(sp)
library(foreach)
K=100
sig=0.2
G=0.3
x <- raster(ncol=2000,nrow=2000)
values(x) <- sign(rnorm(4000000,mean=0,sd=0.3))
y <- raster(ncol=2000,nrow=2000)
values(y) <- sign(rnorm(4000000,mean=0,sd=0.3))
#plot(x)
ei <- rnorm(4000000)
j=0
while(j < 30) {
  for(i in 1:4000000){
    ad <- adjacent(x, cell=c(i))[,2]
    y[i] <- sign(K*sum(x[ad]) + sig*ei[i] + G)
  }
  x <- y
  plot(x)
  j = j+1
}
The classic loop approach is too slow.
If I use a foreach loop instead of a classic for loop, the values of y are not updated on each iteration.
I can't fix it at all.
Can someone please help with this?
Thank you.

You have a dynamic model in which the output of each (time) step is input for the next step. It is not possible to do that in parallel. But that does not mean you cannot make the model run faster.
Looping over raster cells in R is always going to be slow, so we need to avoid that. Normally a problem like this could be solved with focal (see the code at the bottom) --- but in this case it is difficult because you effectively use two rasters (x and ei) --- I will look at implementing multi-layer focal operations in the terra package.
Here is an approach with getValuesFocal. It is much faster (and I use Sys.sleep to slow it down a bit).
library(raster)
set.seed(0)
x <- raster(ncol=200, nrow=200)
values(x) <- sign(rnorm(ncell(x),mean=0,sd=0.3))
y <- raster(x)
values(y) <- sign(rnorm(ncell(x),mean=0,sd=0.3))
ei <- rnorm(ncell(x))
K=100
sig=0.2
G=0.3
for (j in 1:29) {
  # with large rasters, you may need to do the below in chunks
  v <- getValuesFocal(x, 1, nrow(x), c(3,3))
  # only keep the rook neighbors
  v <- v[, c(2,4,6,8)]
  v <- rowSums(v, na.rm=TRUE)
  values(x) <- sign(K*v + sig*ei + G)
  plot(x)
  Sys.sleep(0.1)
}
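The comment in the loop above mentions processing in chunks for large rasters. Here is a minimal sketch of what the body of that loop could look like in chunked form, using blockSize, writeStart and writeValues from raster; the output filename and the assumption that ei holds one value per cell are mine, not part of the original answer:
out <- raster(x)
bs <- blockSize(out)
out <- writeStart(out, filename="step.grd", overwrite=TRUE)
for (b in 1:bs$n) {
  v <- getValuesFocal(x, row=bs$row[b], nrows=bs$nrows[b], ngb=c(3,3))
  v <- rowSums(v[, c(2,4,6,8)], na.rm=TRUE)
  # cells covered by this block, used to pick the matching slice of ei
  first <- cellFromRowCol(out, bs$row[b], 1)
  last <- cellFromRowCol(out, bs$row[b] + bs$nrows[b] - 1, ncol(out))
  out <- writeValues(out, sign(K*v + sig*ei[first:last] + G), bs$row[b])
}
out <- writeStop(out)
x <- out
Each pass writes one time step to disk instead of holding the full focal-values matrix in memory; you would still wrap this in the outer j loop.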
This is how you could use focal in similar cases:
w <- matrix(c(0,1,0,1,0,1,0,1,0), 3, 3)
y <- focal(x, w, fun=function(i)sign(K*sum(i)+sig+G))
Also see the cellular automata examples in ?focal

Related

Is there any way I can optimize this R code?

This is code I'm trying to run in RStudio. I know the number of iterations is far too large. Is there a more optimal/faster way to do this? I've been stuck for 4+ hours and it doesn't look like it will finish any time soon.
I'm trying to build a distance matrix between 415 cities and 3,680,126 monuments. To optimize, I only compare each monument with the cities located in the same country.
for(x in 1:3680126){
  for(y in 1:415){
    if(list2_cities$Country[y] == list1_POI$Country[x]){
      distance_matrix[x,y] <- (list1_POI$Longitude[x] - list2_cities$Longitude[y])^2 + (list1_POI$Latitude[x] - list2_cities$Latitude[y])^2
    } else {
      distance_matrix[x,y] <- 0
    }
  }
}
Maybe you can try distm from package geosphere
library(geosphere)
d <- distm(list1_POI[c("Longitude","Latitude")],list2_cities[c("Longitude","Latitude")])
m <- +(outer(list1_POI$Country,list2_cities$Country,`==`))
res <- d*m
where
the distm part gives all pairwise distances between the two sets of coordinates
the outer part provides a mask that sets the values for non-matching countries to 0
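For illustration, here is a tiny made-up example (the coordinates and country codes are invented, not from the question) showing what the mask does:
library(geosphere)
list1_POI <- data.frame(Longitude=c(2.35, 13.40), Latitude=c(48.85, 52.52), Country=c("FR","DE"))
list2_cities <- data.frame(Longitude=c(2.29, 11.58), Latitude=c(48.86, 48.14), Country=c("FR","DE"))
d <- distm(list1_POI[c("Longitude","Latitude")], list2_cities[c("Longitude","Latitude")])
m <- +(outer(list1_POI$Country, list2_cities$Country, `==`))
d * m  # distances (in metres) where the countries match, 0 elsewhere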
If your desired matrix is sparse, here is another option
common <- intersect(list1_POI$Country,list2_cities$Country)
rl <- match(common,list1_POI$Country)
cl <- match(common,list2_cities$Country)
d <- diag(distm(list1_POI[rl,c("Longitude","Latitude")],list2_cities[cl,c("Longitude","Latitude")]))
res <- matrix(0,length(list1_POI$Country),length(list2_cities$Country))
res[cbind(rl,cl)] <- d
where you only need to locate the matched cities and calculate their distances. Note that match() returns only the first hit per country, so this fills at most one cell per common country.

Draw a heatmap with "super big" matrix

I want to draw a heatmap.
I have a 100k x 100k square matrix (a 50 GB csv; values in the upper triangle and zeros everywhere else).
How can I draw a heatmap in R with this huge dataset?
I'm trying this code on a machine with a lot of RAM.
d = read.table("data.csv", sep=",")
d = as.matrix(d + t(d))
heatmap(d)
I also tried functions like heatmap.2 (in gplots).
But they take too much time and memory.
What I suggest is to heavily down-sample your matrix before plotting the heatmap, e.g. by taking the mean of each submatrix (as suggested by @IaroslavDomin):
# example of a big mx 10k x 10k
bigMx <- matrix(rnorm(10000*10000, mean=0, sd=100), 10000, 10000)
# here we downsample the big matrix 10k x 10k to 100x100
# by averaging each submatrix
downSampledMx <- matrix(NA, 100, 100)
subMxSide <- nrow(bigMx)/nrow(downSampledMx)
for(i in 1:nrow(downSampledMx)){
  rowIdxs <- ((subMxSide*(i-1)):(subMxSide*i-1))+1
  for(j in 1:ncol(downSampledMx)){
    colIdxs <- ((subMxSide*(j-1)):(subMxSide*j-1))+1
    downSampledMx[i,j] <- mean(bigMx[rowIdxs,colIdxs])
  }
}
# NA to disable the dendrograms
heatmap(downSampledMx,Rowv=NA,Colv=NA)
For sure with your huge matrix it will take a while to compute the downSampledMx, but it should be feasible.
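Since reading the full 50 GB csv with read.table will probably exhaust memory anyway, one option is to down-sample while reading the file in row blocks, so the full matrix never sits in memory. This is only a sketch under assumptions of mine (the file is data.csv without a header, the matrix is exactly 100000 x 100000, the target is 100 x 100, and data.table::fread is available):
library(data.table)
n <- 100000        # full matrix side (assumed)
newSize <- 100     # target side
side <- n / newSize
downSampledMx <- matrix(NA_real_, newSize, newSize)
for (i in 1:newSize) {
  # read one block of rows, skipping the rows already processed
  block <- as.matrix(fread("data.csv", skip=(i-1)*side, nrows=side, header=FALSE))
  for (j in 1:newSize) {
    downSampledMx[i,j] <- mean(block[, ((j-1)*side + 1):(j*side)])
  }
}
heatmap(downSampledMx, Rowv=NA, Colv=NA)
This skips the d + t(d) symmetrization step; since block averaging commutes with it, you can apply downSampledMx + t(downSampledMx) to the small matrix afterwards.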
EDIT:
I think downsampling should preserve recognizable "macro-patterns", e.g. see the following example:
# create a matrix with some recognizable pattern
set.seed(123)
bigMx <- matrix(rnorm(50*50,mean=0,sd=100),50,50)
diag(bigMx) <- max(bigMx) # set maximum value on the diagonal
# set maximum value on a circle centered on the middle
for(i in 1:nrow(bigMx)){
  for(j in 1:ncol(bigMx)){
    if(abs((i - 25)^2 + (j - 25)^2 - 10^2) <= 16)
      bigMx[i,j] <- max(bigMx)
  }
}
# plot the original heatmap
heatmap(bigMx,Rowv=NA,Colv=NA, main="original")
# function used to down sample
downSample <- function(m, newSize){
  downSampledMx <- matrix(NA, newSize, newSize)
  subMxSide <- nrow(m)/nrow(downSampledMx)
  for(i in 1:nrow(downSampledMx)){
    rowIdxs <- ((subMxSide*(i-1)):(subMxSide*i-1))+1
    for(j in 1:ncol(downSampledMx)){
      colIdxs <- ((subMxSide*(j-1)):(subMxSide*j-1))+1
      downSampledMx[i,j] <- mean(m[rowIdxs,colIdxs])
    }
  }
  return(downSampledMx)
}
# downsample x 2 and plot heatmap
downSampledMx <- downSample(bigMx,25)
heatmap(downSampledMx,Rowv=NA,Colv=NA, main="downsample x 2")
# downsample x 5 and plot heatmap
downSampledMx <- downSample(bigMx,10)
heatmap(downSampledMx,Rowv=NA,Colv=NA, main="downsample x 5")
Here are the three heatmaps (original, downsample x 2, downsample x 5).

R - Function to combine index from vector and raster stack layers

I am trying to write an efficient script to calibrate hundreds of Landsat 8 images. At a certain point in the calibration, I need to apply a set of coefficients to each layer of a raster stack.
This is one sample stack:
fn <- system.file("external/test.grd", package="raster")
s <- stack(fn, fn)
And these are sample coefficients:
mult <- c(0.0003342, 0.0005534)
add <- c(0.1, 0.2)
What I need is to apply each coefficient to the stack layer with the corresponding index, as in this example:
s[[1]] <- (s[[1]] * mult[1]) + add[1]
s[[2]] <- (s[[2]] * mult[2]) + add[2]
This is my poor attempt, which obviously does not work:
cal.fun <- function(x) {
  x <- (x * mult) + add
}
s.cal <- calc(s, cal.fun, progress='text')
Any ideas on how to do that?
Many thanks.
raster is a phenomenally well-constructed package and you can simply do:
s2 <- s * mult + add
For quick visual confirmation that the vectorized call "just works", do something like this:
library(gridExtra)
library(rasterVis)
grid.arrange(levelplot(s), levelplot(s2), nrow=2)
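A quick numeric check of the same thing (my own addition, assuming s, s2, mult and add as defined above) compares the vectorized result against the per-layer version from the question:
manual1 <- (s[[1]] * mult[1]) + add[1]
manual2 <- (s[[2]] * mult[2]) + add[2]
all.equal(values(s2[[1]]), values(manual1))  # should be TRUE if the coefficients are recycled per layer
all.equal(values(s2[[2]]), values(manual2))  # should be TRUE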

Choose n most evenly spread points across point dataset in R

Given a set of points, I am trying to select a subset of n points that are most evenly distributed across this set of points. In other words, I am trying to thin out the dataset while still evenly sampling across space.
So far, I have the following, but this approach likely won't do well with larger datasets. Maybe there is a more intelligent way to choose the subset of points in the first place...
The following code randomly chooses a subset of the points, and seeks to minimize the distance between the points within this subset and the points outside of this subset.
Suggestions appreciated!
evenSubset <- function(xy, n) {
  bestdist <- NA
  bestSet <- NA
  alldist <- as.matrix(dist(xy))
  diag(alldist) <- NA
  alldist[upper.tri(alldist)] <- NA
  for (i in 1:1000){
    subset <- sample(1:nrow(xy), n)
    subdists <- alldist[subset, -subset]
    distsum <- sum(subdists, na.rm=TRUE)
    if (distsum < bestdist | is.na(bestdist)) {
      bestdist <- distsum
      bestSet <- subset
    }
  }
  return(xy[bestSet,])
}
xy <- cbind(rnorm(1000), rnorm(1000))
xy2 <- evenSubset(xy, n=20)
plot(xy)
points(xy2,col='blue',cex=1.5,pch=20)
Following @Spacedman's suggestion, I used Voronoi tessellation to identify and drop those points that were closest to other points.
Here, the percentage of points to drop is given to the function. This appears to work quite well, except for the fact that it is slow with large datasets.
library(tripack)
voronoiFilter <- function(occ, drop) {
  n <- round(x=(nrow(occ) * drop), digits=0)
  subset <- occ
  dropped <- vector()
  for (i in 1:n) {
    v <- voronoi.mosaic(x=subset[,'Longitude'], y=subset[,'Latitude'], duplicate='error')
    info <- cells(v)
    areas <- unlist(lapply(info, function(x) x$area))
    smallest <- which(areas == min(areas, na.rm=TRUE))
    dropped <- c(dropped, which(paste(occ[,'Longitude'], occ[,'Latitude'], sep='_') == paste(subset[smallest,'Longitude'], subset[smallest,'Latitude'], sep='_')))
    subset <- subset[-smallest,]
  }
  return(occ[-dropped,])
}
xy <- cbind(rnorm(500),rnorm(500))
colnames(xy) <- c('Longitude','Latitude')
xy2 <- voronoiFilter(xy, drop=0.7)
plot(xy)
points(xy2,col='blue',cex=1.5,pch=20)

R: pairwise Euclidean distance between columns of two matrices

The following loop takes too long to run (about 2 minutes per iteration).
tumor_signals is 950000 x 422.
normal_signals is 950000 x 772.
Any ideas for how to speed it up?
for(i in 1:ncol(tumor_signals)){
  x <- as.vector(tumor_signals[,i])
  print("Assigned x")
  y <- t((t(normal_signals) - x)^2)
  print("assigned y")
  y <- t(sqrt(colSums(y)))
  print("done")
  #all_distance <- cbind(all_distance, matrix(distance))
  print(i)
}
There's a bug in your code -- you don't need to take the transpose of normal_signals. As I understand it, you are trying to compute, for all i = 1,2,...422, and j=1,2,...,772, the Euclidean distance between tumor_signals[,i] and normal_signals[,j]. You would probably want the results in a 422 x 772 matrix. There's a function rdist() in the package fields that will do this for you:
require(fields)
result <- rdist(t(tumor_signals), t(normal_signals))
Incidentally, a Google search for [R Euclidean distance] would have easily found this package.
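As a sanity check (a toy example of my own, not from the answer), you can confirm on small made-up matrices that rdist on the transposed inputs reproduces the column-wise Euclidean distances the loop was after:
require(fields)
set.seed(1)
tumor_small <- matrix(rnorm(5*3), 5, 3)    # 5 probes x 3 tumor samples
normal_small <- matrix(rnorm(5*4), 5, 4)   # 5 probes x 4 normal samples
res <- rdist(t(tumor_small), t(normal_small))   # 3 x 4 distance matrix
# distance between tumor column 1 and normal column 2, computed directly
sqrt(sum((tumor_small[,1] - normal_small[,2])^2))  # should match res[1,2]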

Resources