I have a table of available sampling stations with unique identifier GRID_ID as well as the latitude and longitude (and UTM) of each station. I would like to randomly select a subset of these stations, which I can easily do using sample().
However, to maximize efficiency, I would also like to employ some clustering by specifying that any randomly selected sampling station must be within x distance of at least y other stations.
The rationale is that it makes sense to travel a long distance to sample a group of y stations, but it does not make sense to travel a long distance to sample < y stations.
Does this make sense? Is there a straightforward way to handle this in R?
How about this:
# some random data
set.seed(1)
df <- data.frame(x=runif(10), y=runif(10))
# e.g. select obs that have >= 1 neighbour closer than .3 (euclidean)
mat <- as.matrix(dist(df))
sel <- rowSums(mat < .3) >= 2 # the point itself is counted (distance 0), hence >= 2 for 1 neighbour
plot(y~x, df, col = sel + 1L) # viz
# e.g. select obs that have >= 2 neighbours closer than 40000 m (great circle, lon/lat)
library(geosphere)
mat <- distm(as.matrix(df))
sel <- rowSums(mat < 40000) >= 3 # again the point itself is counted, hence >= 3 for 2 neighbours
plot(y~x, df, col = sel + 1L) # viz
# Take 2 random obs from those who meet the criteria
df[sample(which(sel), size = 2), ]
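To make the question's x / y wording explicit, the selection rule can be wrapped in a small helper; this is just a sketch of mine, with the +1 accounting for the fact that the distance matrix counts each station's zero distance to itself:
# keep stations that have at least y other stations within distance x
select_clustered <- function(dist_mat, x, y) {
  rowSums(dist_mat < x) >= y + 1
}
sel <- select_clustered(mat, x = 40000, y = 2) # same as rowSums(mat < 40000) >= 3 above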
OK, calculating a distance matrix between ~31,000 data points would probably choke an ordinary computer. Another approach could be density-based clustering, e.g. DBSCAN. It could look like this:
# load your data
set.seed(1)
download.file("https://dl.dropboxusercontent.com/u/17339799/MHI_BF_Survey_Domain_PSU.txt", tf <- tempfile(fileext = ".csv"))
fullds <- read.csv(tf)
df <- fullds[, c("lon_deg", "lat_deg")]
library(dbscan)
kNNdistplot(as.matrix(df), k=4) # determine eps value...
res <- dbscan(as.matrix(df), eps = .005, minPts = 4, borderPoints=F)
# DBSCAN clustering for 31083 objects.
# Parameters: eps = 0.005, minPts = 4
# The clustering contains 134 cluster(s).
# Available fields: cluster, eps, minPts
noise <- res$cluster == 0
sum(noise)
# [1] 2499
# interactive plot with zoom
# (draw rectangle with right mouse,
# CTRL to reset)
library(iplots)
iplot(df$lon_deg, df$lat_deg, col=noise + 1L)
You probably have to tune it to fit your needs. But
idx <- sample(which(!noise), 250)
fullds[idx, ]
would then give you the sample.
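For a rough sense of why the full distance-matrix route above chokes at this size, a back-of-envelope sketch (assuming double precision):
# memory needed for the dense matrix returned by as.matrix(dist(df))
n <- 31083
n^2 * 8 / 1024^3 # roughly 7.2 GB, before any copies R makes along the way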
I have a dataset (this is just a dummy; my real datasets are much larger) in which there are five variables: two spatial variables X and Y (basically pairs of coordinates) and three attributes A, B and C associated with each X,Y point:
X Y A B C
1 1 34 11 26
1 2 47 16 31
1 3 60 21 36
1 4 73 26 41
1 5 86 31 46
2 1 99 36 51
... with 15 more rows
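If you want to run the snippets below without test_dataset.csv, a dummy with the same shape can be built directly; this is just a sketch of mine that follows the pattern of the rows shown above (the exact values do not matter for the clustering):
library(tidyverse)
#Build a dummy dataset: X,Y grid with A, B, C increasing linearly as above
My_ds <- expand_grid(X = 1:5, Y = 1:5) %>%
  mutate(A = 13 * row_number() + 21,
         B = 5 * row_number() + 6,
         C = 5 * row_number() + 21)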
If I run a k-Means Clustering model on the dataset, I can easily produce a plot in which each X,Y point is coloured according to the related cluster:
library(tidyverse)
#Read the dataset
My_ds <- read_delim("test_dataset.csv",delim = ",", escape_double = FALSE, trim_ws = TRUE)
#Set the number of clusters
kClusters <- 3
#Create the model
kMeans <- kmeans(My_ds[ , c("A", "B", "C")], centers = kClusters)
#Plot the result
ggplot(My_ds, aes(X, Y)) +
geom_point(col = kMeans$cluster,
size = 15) +
theme_minimal()
k-Means scatter plot
With the kohonen package I can also use a different clustering approach based on self-organising maps (SOM):
library(kohonen)
#Prepare the dataset
My_ds_SOM <- as.matrix(scale(My_ds[ , c("A", "B", "C")]))
#Set the grid
My_Grid <- somgrid(xdim = 3, ydim = 3, topo = "hexagonal")
#Create the model
My_Model <- som(X = My_ds_SOM,
grid = My_Grid)
However, I cannot find a way to produce a scatter plot similar to the one above based on the SOM clusters. With k-Means I used kMeans$cluster to control the colour of the X,Y points; what should I use with SOM?
Update 1
OK, I made some progress thanks to this blog post. The key is to perform clustering on the SOM nodes, to isolate groups of samples with similar metrics.
First, an estimate of a suitable number of clusters can be obtained by running K-means for a range of k and looking for an elbow point in the plot of the within-cluster sum of squares (WCSS):
#View WCSS for K-means
mydata <- getCodes(My_Model)
wcss <- (nrow(mydata)-1)*sum(apply(mydata,2,var))
for (i in 2:8) { #Second number is of one's choosing (I used number_of_nodes-1)
wcss[i] <- sum(kmeans(mydata, centers=i)$withinss)
}
plot(wcss)
WCSS plot
Then I use hierarchical clustering and the SOM plot function to visualise the clusters on the node map:
#Define colour palette
pretty_palette <- c("#1f77b4", '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2')
#Use hierarchical clustering to cluster the codebook vectors
som_cluster <- cutree(hclust(dist(getCodes(My_Model))), 3)
#Plot these results
plot(My_Model, type="mapping", bgcol = pretty_palette[som_cluster], main = "Clusters")
add.cluster.boundaries(My_Model, som_cluster)
Clusters on node map
Finally, I assign labels to the original data by combining the som_cluster vector, which maps nodes to clusters, with My_Model$unit.classif, which maps data samples to nodes:
#Get vector with cluster value for each original data sample
cluster_assignment <- som_cluster[My_Model$unit.classif]
#Add the assignment as a column in the original data
My_ds$cluster <- cluster_assignment
#Plot the result
ggplot(My_ds, aes(X, Y)) +
geom_point(col = My_ds$cluster,
size = 15) +
theme_minimal()
SOM+hierarchical scatter plot
Applying hierarchical clustering on top of the SOM nodes makes the process a bit convoluted, as SOM already helps reduce the dimensions and cluster neighbouring nodes together. But this was the only way I could get what I wanted.
Update 2
Some more progress. This time I'm focusing on making the whole process fully automatic. Specifically, I want to avoid choosing 1) the SOM grid size and 2) the number of clusters during the clustering of the node map.
Regarding point 1, I used a rule of thumb suggested by Vesanto J, Alhoniemi E. Clustering of the self-organizing map. IEEE Transactions on neural networks. 2000 May;11(3):586-600, which is #nodes = 5*sqrt(#observations). Therefore, setting the grid for the SOM model works like this:
My_dim <- as.integer(sqrt(5*sqrt(nrow(My_ds_SOM))))
My_Grid <- somgrid(xdim = My_dim, ydim = My_dim, topo = "hexagonal")
Of course, this works best with large datasets. In any case, this approach should only be a starting point; the grid size can (and should) then be adjusted by looking at the resulting node count plot, weight vector plot and heatmap.
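Those diagnostic plots can be produced directly with the standard kohonen plot types; here is a minimal sketch (my own addition, using the model objects defined above):
#Node counts: many empty nodes suggest the grid is too large
plot(My_Model, type = "counts", main = "Node counts")
#Codebook (weight) vectors of each node
plot(My_Model, type = "codes", main = "Weight vectors")
#Heatmap of a single variable, e.g. A
plot(My_Model, type = "property",
     property = getCodes(My_Model)[, "A"],
     main = "Heatmap of A")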
About point 2, when using hierarchical clustering to cluster the codebook vectors, the kgs function of the maptree package allows the optimal number of clusters to be calculated automatically:
library(maptree)
distance <- dist(getCodes(My_Model))
clustering <- hclust(distance)
optimal_k <- kgs(clustering, distance, maxclus = 20)
clusters <- as.integer(names(optimal_k[which(optimal_k == min(optimal_k))]))
som_cluster <- cutree(clustering, clusters)
Also in this case, the number of clusters determined by the code can be compared to the one suggested by the WCSS plot, to check if there is a significant discrepancy.
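As a quick cross-check (a sketch of my own), the kgs penalty curve can be plotted next to the WCSS curve from Update 1 to see whether both point to a similar number of clusters:
#Compare the KGS penalty with the WCSS elbow plot
op <- par(mfrow = c(1, 2))
plot(as.integer(names(optimal_k)), optimal_k,
     xlab = "Number of clusters", ylab = "KGS penalty", main = "kgs")
plot(wcss, xlab = "Number of clusters", ylab = "WCSS", main = "K-means elbow")
par(op)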
I'm trying to find sites to collect snails by using a semi-random selection method. I have set a 10 km² grid around the region I want to collect snails from, which is broken into 10,000 10 m² cells. I want to randomly sample this grid in R to select 200 field sites.
Randomly sampling a matrix in R is easy enough;
dat <- matrix(1:10000, nrow = 100)
sample(dat, size = 200)
However, I want to bias the sampling to pick cells closer to a single position (representing sites closer to the research station). It's easier to explain this with an image;
The yellow cell with a cross represents the position I want to sample around. The grey shading is the probability of picking a cell in the sample function, with darker cells being more likely to be sampled.
I know I can specify sampling probabilities using the prob argument in sample, but I don't know how to create a 2D probability matrix. Any help would be appreciated, I don't want to do this by hand.
I'm going to do this for a 9 x 6 grid (54 cells), just so it's easier to see what's going on, and sample only 5 of these 54 cells. You can modify this to a 100 x 100 grid where you sample 200 from 10,000 cells.
# Number of rows and columns of the grid (modify these as required)
nx <- 9 # rows
ny <- 6 # columns
# Create coordinate matrix
x <- rep(1:nx, each=ny);x
y <- rep(1:ny, nx);y
xy <- cbind(x, y); xy
# Where is the station? (edit: not snails nest)
Station <- rbind(c(x=3, y=2)) # Change as required
# Determine distance from each grid location to the station
library(SpatialTools)
D <- dist2(xy, Station)
From the help page of dist2
dist2 takes the matrices of coordinates coords1 and coords2 and
returns the inter-Euclidean distances between coordinates.
We can visualize this using the image function.
XY <- matrix(D, nrow=nx, byrow=TRUE)
image(XY) # axes are scaled to 0-1
# Create a scaling function - scales x to lie in [0-1)
scale_prop <- function(x, m=0)
(x - min(x)) / (m + max(x) - min(x))
# Add the coordinates to the grid
text(x=scale_prop(xy[,1]), y=scale_prop(xy[,2]), labels=paste(xy[,1],xy[,2],sep=","))
Lighter tones indicate grids closer to the station at (3,2).
# Sampling probabilities decrease with distance from the station: 1 minus the
# scaled distance. m=1 keeps the scaled distance below 1, so even the farthest
# cell retains a small non-zero probability.
prob <- 1 - scale_prop(D, m=1); range(prob)
# Sample from the grid using given probabilities
sam <- sample(1:nrow(xy), size = 5, prob=prob) # Change size as required.
xy[sam,] # These are the 5 sampled cells (no seed was set, so your results will differ)
x y
[1,] 4 4
[2,] 7 1
[3,] 3 2
[4,] 5 1
[5,] 5 3
To confirm the sample probabilities are correct, you can simulate many samples and see which coordinates were sampled the most.
snail.sam <- function(nsamples) {
sam <- sample(1:nrow(xy), size = nsamples, prob=prob)
apply(xy[sam,], 1, function(x) paste(x[1], x[2], sep=","))
}
SAMPLES <- replicate(10000, snail.sam(5))
tab <- table(SAMPLES)
cols <- colorRampPalette(c("lightblue", "darkblue"))(max(tab))
barplot(table(SAMPLES), horiz=TRUE, las=1, cex.names=0.5,
col=cols[tab])
If using a 100 x 100 grid and the station is located at coordinates (60,70), then the image would look like this, with the sampled grids shown as black dots:
There is a tendency for the points to be located close to the station, although sampling variability may make this difficult to see. If you want to give even more weight to grids near the station, you can rescale the probabilities; this seems reasonable to do to save travel costs, but the weights need to be incorporated into the analysis when estimating the number of snails in the whole region. Here I've cubed the probabilities so you can see what happens.
sam <- sample(1:nrow(xy), size = 200, prob=prob^3)
The tendency for the points to be located near the station is now more obvious.
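For reference, the 100 x 100 case with the cubed probabilities can be put together along the same lines; this is only a sketch of mine, with the station assumed at (60,70) as stated above:
library(SpatialTools)
nx <- 100; ny <- 100
xy <- cbind(x = rep(1:nx, each = ny), y = rep(1:ny, nx))
D <- dist2(xy, rbind(c(x = 60, y = 70)))
scale_prop <- function(x, m = 0) (x - min(x)) / (m + max(x) - min(x))
prob <- 1 - scale_prop(D, m = 1)
sam <- sample(1:nrow(xy), size = 200, prob = prob^3)
image(matrix(D, nrow = nx, byrow = TRUE)) # distance surface, axes scaled to 0-1
sx <- scale_prop(xy[, 1]); sy <- scale_prop(xy[, 2])
points(sx[sam], sy[sam], pch = 19, cex = 0.5) # sampled cells as black dots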
There may be a better way than this, but a quick approach is to randomly sample on both the x and y axes using a distribution (I used the normal, i.e. bell-shaped, distribution, but you can use almost any). The trick is to make the mean of the distribution the position of the research station; you can change the bias towards the research station by changing the standard deviation of the distribution.
Then use the randomly drawn values as the x and y coordinates of the selected cells.
dat <- matrix(1:10000, nrow = 100)
#randomly selected a position for the research station
rs <- c(80,30)
# you can change the sd to change the bias
x <- round(rnorm(400,mean = rs[1], sd = 10))
y <- round(rnorm(400, mean = rs[2], sd = 10))
position <- rep(NA, 200)
j = 1
i = 1
# As some of the sampled numbers can fall outside the area of interest,
# I oversampled and then kept only the first 200 that fell inside it.
while (j <= 200) {
  if (x[i] >= 1 & x[i] <= 100 & y[i] >= 1 & y[i] <= 100) {
    position[j] <- dat[x[i], y[i]]
    j <- j + 1
  }
  i <- i + 1
}
plot the results:
plot(x,y, pch = 19)
points(x = 80, y = 30, col = "red", pch = 19) # position of the station
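To see how the standard deviation controls the bias, a small side-by-side sketch (my addition) compares a tight and a loose spread around the station:
# compare a small and a large sd around the station rs
op <- par(mfrow = c(1, 2))
for (s in c(5, 25)) {
  xs <- round(rnorm(400, mean = rs[1], sd = s))
  ys <- round(rnorm(400, mean = rs[2], sd = s))
  plot(xs, ys, pch = 19, cex = 0.5, xlim = c(1, 100), ylim = c(1, 100),
       main = paste("sd =", s))
  points(rs[1], rs[2], col = "red", pch = 19) # station
}
par(op)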
I want to draw a heatmap.
I have a 100k*100k square matrix (50 GB as CSV; only the upper-right triangle contains numbers, the rest is filled with 0).
I want to ask: how can I draw a heatmap in R with this huge dataset?
I'm trying this code on a machine with a lot of RAM.
d = read.table("data.csv", sep=",")
d = as.matrix(d + t(d))
heatmap(d)
I also tried some libraries like heatmap.2 (in gplots), but they take too much time and memory.
What I suggest is to heavily down-sample your matrix before plotting the heatmap, e.g. by taking the mean of each submatrix (as suggested by @IaroslavDomin):
# example of big mx 10k x 10 k
bigMx <- matrix(rnorm(10000*10000,mean=0,sd=100),10000,10000)
# here we downsample the big matrix 10k x 10k to 100x100
# by averaging each submatrix
downSampledMx <- matrix(NA,100,100)
subMxSide <- nrow(bigMx)/nrow(downSampledMx)
for(i in 1:nrow(downSampledMx)){
rowIdxs <- ((subMxSide*(i-1)):(subMxSide*i-1))+1
for(j in 1:ncol(downSampledMx)){
colIdxs <- ((subMxSide*(j-1)):(subMxSide*j-1))+1
downSampledMx[i,j] <- mean(bigMx[rowIdxs,colIdxs])
}
}
# NA to disable the dendrograms
heatmap(downSampledMx,Rowv=NA,Colv=NA)
With your huge matrix it will certainly take a while to compute downSampledMx, but it should be feasible.
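If the double loop turns out to be too slow, the same block means can be computed with a couple of calls to base R's rowsum; this is a sketch of my own, assuming the matrix side is an exact multiple of the target size:
# average non-overlapping blocks: aggregate rows first, then columns
downSampleFast <- function(m, newSize) {
  blockSide <- nrow(m) / newSize # must be a whole number
  groups <- rep(seq_len(newSize), each = blockSide)
  rowBlockMeans <- rowsum(m, groups) / blockSide # newSize x ncol(m)
  t(rowsum(t(rowBlockMeans), groups) / blockSide) # newSize x newSize
}
downSampledMx <- downSampleFast(bigMx, 100)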
EDIT:
I think downsampling should preserve recognizable "macro-patterns"; e.g. see the following example:
# create a matrix with some recognizable pattern
set.seed(123)
bigMx <- matrix(rnorm(50*50,mean=0,sd=100),50,50)
diag(bigMx) <- max(bigMx) # set maximum value on the diagonal
# set maximum value on a circle centered on the middle
for(i in 1:nrow(bigMx)){
for(j in 1:ncol(bigMx)){
if(abs((i - 25)^2 + (j - 25)^2 - 10^2) <= 16)
bigMx[i,j] <- max(bigMx)
}
}
# plot the original heatmap
heatmap(bigMx,Rowv=NA,Colv=NA, main="original")
# function used to down sample
downSample <- function(m,newSize){
downSampledMx <- matrix(NA,newSize,newSize)
subMxSide <- nrow(m)/nrow(downSampledMx)
for(i in 1:nrow(downSampledMx)){
rowIdxs <- ((subMxSide*(i-1)):(subMxSide*i-1))+1
for(j in 1:ncol(downSampledMx)){
colIdxs <- ((subMxSide*(j-1)):(subMxSide*j-1))+1
downSampledMx[i,j] <- mean(m[rowIdxs,colIdxs])
}
}
return(downSampledMx)
}
# downsample x 2 and plot heatmap
downSampledMx <- downSample(bigMx,25)
heatmap(downSampledMx,Rowv=NA,Colv=NA, main="downsample x 2")
# downsample x 5 and plot heatmap
downSampledMx <- downSample(bigMx,10)
heatmap(downSampledMx,Rowv=NA,Colv=NA, main="downsample x 5")
Here are the 3 heatmaps:
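One more practical note of mine (not part of the original answer): a 50 GB CSV will not go through read.table in one piece either, so the down-sampling can be done while streaming the file line by line. A sketch, assuming a comma-separated file with no header and 100k rows:
# accumulate block sums row by row, so the full matrix is never held in memory
downSampleCsv <- function(path, n = 100000, newSize = 100) {
  blockSide <- n / newSize
  colGroups <- rep(seq_len(newSize), each = blockSide)
  sums <- matrix(0, newSize, newSize)
  con <- file(path, open = "r")
  on.exit(close(con))
  for (i in seq_len(n)) {
    row <- scan(con, what = numeric(), sep = ",", nlines = 1, quiet = TRUE)
    iBlock <- ceiling(i / blockSide)
    sums[iBlock, ] <- sums[iBlock, ] + rowsum(row, colGroups)[, 1]
  }
  sums / blockSide^2 # block means
}
small <- downSampleCsv("data.csv")
heatmap(small + t(small), Rowv = NA, Colv = NA) # same d + t(d) symmetrisation as above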
I have 1 million records of lat/long [5 decimal digits of precision] and Route. I want to cluster those data points.
I don't want to use standard k-means clustering as I am not sure how many clusters to choose [I tried the elbow method but I'm not convinced].
Here is my logic:
1) I want to reduce the precision of the lat/long values from 5 decimal digits to 3.
2) Lat/longs that are within +/- 0.001 of each other should then be merged into one cluster, and the centroid of each cluster calculated.
However, I am unable to find a good algorithm and R script to implement this idea.
Can anyone please help me with the above problem?
Thanks,
Clustering can be done based on connected components.
All points that are within +/- 0.001 distance of each other can be connected, so we get a graph containing subgraphs, each of which may be a single point or a set of connected points (connected components).
The connected components can then be found and their centroids calculated.
Two packages are required for this task:
1. deldir to form a triangulation of the points, determine which points are adjacent to each other, and calculate the distances between them.
2. igraph to find the connected components.
library(deldir)
library(igraph)
coords <- data.frame(lat = runif(1000000),long=runif(1000000))
#round to 3 digits
coords.r <- round(coords,3)
#remove duplicates
coords.u <- unique(coords.r)
# create a triangulation of the points; depending on the data this may take a while and consume a lot of memory
triangulation <- deldir(coords.u$long,coords.u$lat)
#compute distance between adjacent points
distances <- abs(triangulation$delsgs$x1 - triangulation$delsgs$x2) +
abs(triangulation$delsgs$y1 - triangulation$delsgs$y2)
#remove edges that are greater than .001
edge.list <- as.matrix(triangulation$delsgs[distances < .0011,5:6])
if (length(edge.list) == 0) { # there is no edge shorter than .0011
coords.clustered <- coords.u
} else { # find connected components
#reformat the list of edges so that vertex ids become consecutive integers,
#e.g. the edge list
# 9 5
# 5 7
#becomes
# 3 1
# 1 2
sorted <- sort(c(edge.list), index.return = TRUE)
run.length <- rle(sorted$x)
indices <- rep(1:length(run.length$lengths),times=run.length$lengths)
edge.list.reformatted <- edge.list
edge.list.reformatted[sorted$ix] <- indices
#create graph from list of edges
graph.struct <- graph_from_edgelist(edge.list.reformatted, directed = FALSE)
# cluster based on connected components
clust <- components(graph.struct)
#computation of centroids
coords.connected <- coords.u[run.length$values, ]
centroids <- data.frame(lat = tapply(coords.connected$lat,factor(clust$membership),mean) ,
long = tapply(coords.connected$long,factor(clust$membership),mean))
#combine clustered points with unclustered points
coords.clustered <- rbind(coords.u[-run.length$values,], centroids)
# round the data and remove possible duplicates
coords.clustered <- round(coords.clustered, 3)
coords.clustered <- unique(coords.clustered)
}
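As a quick sanity check after running this (my addition), it is worth looking at how much the merging reduced the data and how many connected components were found:
nrow(coords.u) # unique rounded points before merging
nrow(coords.clustered) # points after connected-component merging
if (exists("clust")) print(clust$no) # number of connected components found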
Dear Crowd
Problem
I tried to calculate a spatial correlogram with the packages ncf, pgirmess, SpatialPack and spdep. However, I had trouble defining the start and end points of the distance bins. I'm only interested in the spatial autocorrelation at smaller distances, but there I need smaller bins. Additionally, as the raster is quite large (1.8 megapixels), I ran into memory trouble with all of these packages except SpatialPack.
So I tried to write my own code, using the function Moran from the raster package. But I must have made some error, as the result for the complete dataset is somewhat different from the one produced by the other packages. If there is no error in my code, it might at least help others with similar problems.
Question
I'm not sure whether my focal matrix is erroneous. Could you please tell me whether the central pixel needs to be included? Using the test data I can't show the differences between the methods, but on my complete dataset the differences are visible, as shown in the image below. The bins are not exactly the same (50 m vs. 69 m), which might explain part of the difference, but at the first bin this explanation does not seem plausible to me. Or might the irregular shape of my raster and different ways of handling NAs cause the difference?
Comparison of Own method with the one from SpatialPack
Runnable Example
Testdata
The code for calculating the testdata is taken from http://www.petrkeil.com/?p=1050#comment-416317
# packages used for the data generation
library(raster)
library(vegan) # will be used for PCNM
# empty matrix and spatial coordinates of its cells
side=30
my.mat <- matrix(NA, nrow=side, ncol=side)
x.coord <- rep(1:side, each=side)*5
y.coord <- rep(1:side, times=side)*5
xy <- data.frame(x.coord, y.coord)
# all pairwise Euclidean distances between the cells
xy.dist <- dist(xy)
# PCNM axes of the dist. matrix (from 'vegan' package)
pcnm.axes <- pcnm(xy.dist)$vectors
# using 8th PCNM axis as my artificial z variable
z.value <- pcnm.axes[,8]*200 + rnorm(side*side, 0, 1)
# plotting the artificial spatial data
r <- rasterFromXYZ(xyz = cbind(xy,z.value))
plot(r, axes=F)
Own Code
library(raster)
sp.Corr <- matrix(nrow = 0,ncol = 2)
formerBreak <- 0 # important for the first run
for (i in seq(10, 200, 10)) # calculate Moran's I for these bins
{
  cat(paste0("..", i)) # print the bin which is currently calculated
  w <- focalWeight(r, d = i, type = 'circle')
  wTemp <- w # temporarily save the weight matrix
  if (formerBreak > 0) # skipped on the first run
  {
    midpoint <- ceiling(ncol(w)/2) # get the midpoint
    idx <- (midpoint - formerBreak):(midpoint + formerBreak)
    w[idx, idx] <- w[idx, idx] * (wOld == 0) # set the previous focal weights to 0
    w <- w * (1/sum(w)) # normalise so that the weights sum to 1
  }
  wOld <- wTemp # save this weight matrix for the next run
  mor <- Moran(r, w = w)
  sp.Corr <- rbind(sp.Corr, c(Moran = mor, Distance = i))
  formerBreak <- i/res(r)[1] # divide the break by the raster resolution to translate it to the focal window
}
plot(x=sp.Corr[,2],y = sp.Corr[,1],type = "l",ylab = "Moran's I",xlab="Upper bound of distance")
Other methods to calculate the Spatial Correlogram
library(SpatialPack)
sp.Corr <- summary(modified.ttest(z.value,z.value,coords = xy,nclass = 21))
plot(x=sp.Corr$coef[,1],y = sp.Corr$coef[,4],type = "l",ylab = "Moran's I",xlab="Upper bound of distance")
library(ncf)
ncf.cor <- correlog(x.coord, y.coord, z.value,increment=10, resamp=1)
plot(ncf.cor)
In order to compare the results of the correlograms in your case, two things should be considered. (i) Your code only works for bins that are proportional to the resolution of your raster; because of that, a small difference in the bin boundaries can include or exclude a substantial number of pairs. (ii) The irregular shape of the raster has a strong impact on which pairs are considered when computing the correlation for a certain distance interval. So your code should deal with both: allow any value for the bin length and account for the irregular shape of the raster. A small modification of your code to tackle these problems is below.
# SpatialPack correlation
library(SpatialPack)
test <- modified.ttest(z.value,z.value,coords = xy,nclass = 21)
# Own correlation
bins <- test$upper.bounds
library(raster)
sp.Corr <- matrix(nrow = 0,ncol = 2)
for (i in bins) {
cat(paste0("..",i)) #print the bin, which is currently calculated
w = focalWeight(r,d = i,type = 'circle')
wTemp <- w #temporarily saves the weight matrix
if (i > bins[1]) {
midpoint <- ceiling(dim(w)/2) # get the midpoint
half_range <- floor(dim(wOld)/2)
w[(midpoint[1] - half_range[1]):(midpoint[1] + half_range[1]),
(midpoint[2] - half_range[2]):(midpoint[2] + half_range[2])] <-
w[(midpoint[1] - half_range[1]):(midpoint[1] + half_range[1]),
(midpoint[2] - half_range[2]):(midpoint[2] + half_range[2])]*(wOld==0)
w <- w * (1/sum(w)) #normalise so that the weights sum to 1
}
wOld <- wTemp #save this weight matrix for the next run
mor <- Moran(r,w=w)
sp.Corr <- rbind(sp.Corr,c(Moran =mor,Distance = i))
}
# Comparing
plot(x=test$upper.bounds, test$imoran[,1], col = 2,type = "b",ylab = "Moran's I",xlab="Upper bound of distance", lwd = 2)
lines(x=sp.Corr[,2],y = sp.Corr[,1], col = 3)
points(x=sp.Corr[,2],y = sp.Corr[,1], col = 3)
legend('topright', legend = c('SpatialPack', 'Own code'), col = 2:3, lty = 1, lwd = 2:1)
The image shows that the results of using the SpatialPack package and the own code are the same.