I have created a 3D map using rgl.surface(), mainly following Shane's answer in this post. Using my own data, I get this map.
On top of this surface map, I would like to add a map of vegetation density such that I obtain something like this (obtained with the software Surfer):
Is it possible to do this with rgl or, for that matter, any other package in R, or is the only solution to have two separate maps as in Shane's answer?
Thank you.
Edit:
Following @gsk3's request, here is the code for this map:
library(rgl)
# Read the z (i.e. elevation) dimension from file
z = matrix(scan("myfile.txt"),nrow=256, ncol=256, byrow=TRUE)
# Create the x and y (i.e. easting and northing coordinates) dimensions
y=8*(1:ncol(z)) # Grid spacing is 8 m
x=8*(1:nrow(z))
# See https://stackoverflow.com/questions/1896419/plotting-a-3d-surface-plot-with-contour-map-overlay-using-r for details of code below
zlim <- range(z)
zlen <- zlim[2] - zlim[1] + 1
colorlut <- terrain.colors(zlen,alpha=0) # height color lookup table
col <- colorlut[ z-zlim[1]+1 ] # assign colors to heights for each point
open3d()
rgl.surface(x,y,z)
I can't post the elevation data because there are 65,536 (i.e. x*y = 256*256) points, but it is a matrix which looks like this
[,1] [,2] [,3] [,4] [,5]
[1,] 1513.708 1513.971 1514.067 1513.971 1513.875
[2,] 1513.622 1513.524 1513.578 1513.577 1513.481
and so on.
The vegetation density map has exactly the same format, with a single value for each of the x*y points. I hope this makes things a bit clearer.
Edit 2, final version
This is the map I have produced with R. I haven't got the legend on it yet but this is something I'll do at a later stage.
The final code for this is
library(rgl)
z1 = matrix(scan("myfile.txt"),nrow=256, ncol=256, byrow=TRUE)
# Multiply z by 2 to accentuate the relief otherwise it looks a little bit flat.
z= z1*2
#create / open x y dimensions
y=8*(1:ncol(z))
x=8*(1:nrow(z))
# Read the vegetation density from file (same 256 x 256 format as the elevation matrix)
trn = matrix(scan("myfile.txt"),nrow=256, ncol=256, byrow=TRUE)
fv = trn*100
trnlim = range(fv)
fv.colors = colorRampPalette(c("white","tan4","darkseagreen1","chartreuse4")) ## define the color ramp
colorlut = fv.colors(100)[c(1, seq(35,35,length.out=9), seq(35,75,length.out=30), seq(75,100,length.out=61))] # pick 101 colors from the ramp
# Assign colors to fv for each point
col = colorlut[fv-trnlim[1]+1 ]
open3d()
rgl.surface(x,y,z,color=col)
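For the legend mentioned above, a minimal sketch of one possible approach, assuming rgl's legend3d() is available in your version; the category labels below are illustrative assumptions, not part of the original data:

# Hedged sketch: draw a 2D legend on the rgl background with legend3d().
# The labels are placeholders; the colours are taken from the ramp defined above.
legend3d("topright", legend = c("bare", "sparse", "medium", "dense"),
         fill = fv.colors(100)[c(1, 35, 75, 100)], cex = 1)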
Thank you very much to @gsk3 and @nullglob in this post for their help. I hope this post will help many others!
I have modified the code above to give an answer. Note that the terrain should be a matrix in the same format as the elevation matrix. I also added a color argument to your rgl.surface() call so that it actually uses the color matrix you created.
library(rgl)
# Read the z (i.e. elevation) dimension from file
z = matrix(scan("myfile.txt"),nrow=256, ncol=256, byrow=TRUE)
#create / open x y (i.e. easting and northing coordinates) dimensions
y=8*(1:ncol(z)) # Grid spacing is 8 m
x=8*(1:nrow(z))
# Read the terrain types from a file
trn = matrix(scan("terrain.txt"),nrow=256, ncol=256, byrow=TRUE)
# See http://stackoverflow.com/questions/1896419/plotting-a-3d-surface-plot-with-contour-map-overlay-using-r for details of code below
trnlim <- range(trn)
trnlen <- trnlim[2] - trnlim[1] + 1
colorlut <- terrain.colors(trnlen,alpha=0) # height color lookup table
col <- colorlut[ trn-trnlim[1]+1 ] # assign colors to heights for each point
open3d()
rgl.surface(x,y,z,color=col)
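One small caveat, not in the original answer: the lookup colorlut[trn - trnlim[1] + 1] assumes the terrain values are whole numbers, because non-integer indices are silently truncated in R. If the density values are continuous, rounding first keeps the colour index well defined:

# Hedged tweak (only needed if trn holds continuous values): round before indexing
col <- colorlut[ round(trn - trnlim[1]) + 1 ]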
Suppose I have two datasets: (1) a data frame: coordinates of localities, each with ID; and (2) a linguistic distance matrix which reflects the linguistic distance between these localities.
# My data are similar to this structure
# dataframe
id <- c("A","B","C","D","E")
x_coor <- c(0.5,1,1,1.5,2)
y_coor <- c(5.5,3,7,6.5,5)
my.data <- data.frame(id = id, x_coor = x_coor, y_coor = y_coor)
# linguistic distance matrix
A B C D
B 308.298557
C 592.555483 284.256926
D 141.421356 449.719913 733.976839
E 591.141269 282.842712 1.414214 732.562625
Now, I want to visualize the linguistic distance between every two sites on a map, by the thickness or color of the lines connecting adjacent localities in R (as in the example image in the original post).
My idea is to generate the delaunay triangulation by deldir or tripack package in R.
# generate delaunay triangulation
library(deldir)
de=deldir(my.data$x_coor,my.data$y_coor)
plot.deldir(de,wlines="triang",col='blue',wpoints = "real",cex = 0.1)
text(my.data$x_coor,my.data$y_coor,my.data$id)
This produces a plot of the triangulation with the locality IDs labelled (shown in the original post).
My question is: how can I represent the linguistic distance by the thickness or color of the triangle edges? Is there a better method?
Thank you very much!
What you want to do in respect of the line widths can be done "fairly
easily" by the deldir package. You simply call plot.deldir() with the
appropriate value of "lw" (line width).
At the bottom of this answer is a demonstration script "demo.txt" which shows how to do this in the case of your example. In particular this script shows
how to obtain the appropriate value of lw from the "linguistic distance
matrix". I had to make some adjustments in the way this matrix was
presented. I.e. I had to convert it into a proper matrix.
I have rescaled the distances to lie between 0 and 10 to obtain the
corresponding values of the line widths. You might wish to rescale in a different manner.
In respect of colours, there are two issues:
(1) It is not at all clear how you would like to map the "linguistic
distances" to colours.
(2) Unfortunately the code for plot.deldir() is written in a very
kludgy way, whence the "col" argument to segments() cannot be
appropriately passed on in the same manner that the "lw" argument can.
(I wrote the plot.deldir() code a long while ago, when I knew far less about
R programming than I know now! :-))
I will adjust this code and submit a new version of deldir to CRAN
fairly soon.
#
# Demo script
#
# Present the linguistic distances in a useable way.
vldm <- c(308.298557,592.555483,284.256926,141.421356,449.719913,
733.976839,591.141269,282.842712,1.414214,732.562625)
ldm <- matrix(nrow=5,ncol=5)
ldm[row(ldm) > col(ldm)] <- vldm
ldm[row(ldm) <= col(ldm)] <- 0
ldm <- (ldm + t(ldm))/2
rownames(ldm) <- LETTERS[1:5]
colnames(ldm) <- LETTERS[1:5]
# Set up the example data. It makes life much simpler if
# you denote the "x" and "y" coordinates by "x" and "y"!!!
id <- c("A","B","C","D","E")
x_coor <- c(0.5,1,1,1.5,2)
y_coor <- c(5.5,3,7,6.5,5)
# Eschew nomenclature like "my.data". Such nomenclature
# is Micro$oft-ese and is an abomination!!!
demoDat <- data.frame(id = id, x = x_coor, y = y_coor)
# Form the triangulation/tessellation.
library(deldir)
dxy <- deldir(demoDat)
# Plot the triangulation with line widths proportional
# to "linguistic distances". Note that plot.deldir() is
# a *method* for plot, so you do not have to (and shouldn't)
# type the ".deldir" in the plotting command.
plot(dxy,col=0) # This, and plotting with "add=TRUE" below, is
# a kludge to dodge around spurious warnings.
ind <- as.matrix(dxy$delsgs[,c("ind1","ind2")])
lwv <- ldm[ind]
lwv <- 10*lwv/max(lwv)
plot(dxy,wlines="triang",col='grey',wpoints="none",
lw=10*lwv/max(lwv),add=TRUE)
with(demoDat,text(x,y,id,col="red",cex=1.5))
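A hedged workaround for colours in the meantime (not part of the script above): draw the Delaunay edges yourself with segments(), mapping the rescaled distances onto a colour ramp. This reuses the dxy, lwv and demoDat objects created above.

pal  <- colorRampPalette(c("lightblue","darkblue"))(100)
colv <- pal[cut(lwv,breaks=100,labels=FALSE)]  # bin each distance into one of 100 colours
plot(dxy,col=0,wpoints="none")                 # empty canvas (same kludge as above)
with(dxy$delsgs,segments(x1,y1,x2,y2,col=colv,lwd=lwv))
with(demoDat,text(x,y,id,col="red",cex=1.5))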
I'm trying to find sites to collect snails by using a semi-random selection method. I have set a 10 km2 grid around the region I want to collect snails from, which is broken into 10,000 10 m2 cells. I want to randomly sample this grid in R to select 200 field sites.
Randomly sampling a matrix in R is easy enough:
dat <- matrix(1:10000, nrow = 100)
sample(dat, size = 200)
However, I want to bias the sampling to pick cells closer to a single position (representing sites closer to the research station). It's easier to explain this with an image:
The yellow cell with a cross represents the position I want to sample around. The grey shading is the probability of picking a cell in the sample function, with darker cells being more likely to be sampled.
I know I can specify sampling probabilities using the prob argument in sample, but I don't know how to create a 2D probability matrix. Any help would be appreciated; I don't want to do this by hand.
I'm going to do this for a 9 x 6 grid (54 cells), just so it's easier to see what's going on, and sample only 5 of these 54 cells. You can modify this to a 100 x 100 grid where you sample 200 from 10,000 cells.
# Number of rows and columns of the grid (modify these as required)
nx <- 9 # rows
ny <- 6 # columns
# Create coordinate matrix
x <- rep(1:nx, each=ny);x
y <- rep(1:ny, nx);y
xy <- cbind(x, y); xy
# Where is the station? (edit: not snails nest)
Station <- rbind(c(x=3, y=2)) # Change as required
# Determine distance from each grid location to the station
library(SpatialTools)
D <- dist2(xy, Station)
From the help page of dist2
dist2 takes the matrices of coordinates coords1 and coords2 and
returns the inter-Euclidean distances between coordinates.
We can visualize this using the image function.
XY <- (matrix(D, nr=nx, byrow=TRUE))
image(XY) # axes are scaled to 0-1
# Create a scaling function - scales x to lie in [0-1)
scale_prop <- function(x, m=0)
(x - min(x)) / (m + max(x) - min(x))
# Add the coordinates to the grid
text(x=scale_prop(xy[,1]), y=scale_prop(xy[,2]), labels=paste(xy[,1],xy[,2],sep=","))
Lighter tones indicate grids closer to the station at (3,2).
# Sampling probabilities decrease with distance from the station: the distances are
# scaled to lie in [0, 1) and subtracted from 1. Using m=1 ensures the farthest
# cell still gets a probability greater than 0.
prob <- 1 - scale_prop(D, m=1); range (prob)
# Sample from the grid using given probabilities
sam <- sample(1:nrow(xy), size = 5, prob=prob) # Change size as required.
xy[sam,] # These are your (**MY!**) 5 samples
x y
[1,] 4 4
[2,] 7 1
[3,] 3 2
[4,] 5 1
[5,] 5 3
To confirm the sample probabilities are correct, you can simulate many samples and see which coordinates were sampled the most.
snail.sam <- function(nsamples) {
sam <- sample(1:nrow(xy), size = nsamples, prob=prob)
apply(xy[sam,], 1, function(x) paste(x[1], x[2], sep=","))
}
SAMPLES <- replicate(10000, snail.sam(5))
tab <- table(SAMPLES)
cols <- colorRampPalette(c("lightblue", "darkblue"))(max(tab))
barplot(table(SAMPLES), horiz=TRUE, las=1, cex.names=0.5,
col=cols[tab])
If using a 100 x 100 grid and the station is located at coordinates (60,70), then the image would look like this, with the sampled grids shown as black dots:
There is a tendency for the points to be located close to the station, although sampling variability may make this difficult to see. If you want to give even more weight to grids near the station, you can rescale the probabilities. I think this is fine to do to save travel costs, but the weights need to be incorporated into the analysis when estimating the number of snails in the whole region. Here I've cubed the probabilities just so you can see what happens.
sam <- sample(1:nrow(xy), size = 200, prob=prob^3)
The tendency for the points to be located near the station is now more obvious.
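For completeness, a compact sketch of the 100 x 100 case described above; the station location (60, 70) and the cubed probabilities come from the text, and scale_prop is the function defined earlier.

library(SpatialTools)
nx <- 100; ny <- 100
xy <- cbind(x=rep(1:nx, each=ny), y=rep(1:ny, nx))
D <- dist2(xy, rbind(c(x=60, y=70)))
prob <- 1 - scale_prop(D, m=1)
sam <- sample(1:nrow(xy), size=200, prob=prob^3)
image(matrix(D, nrow=nx, byrow=TRUE))                                  # distance surface
points(scale_prop(xy[sam,1]), scale_prop(xy[sam,2]), pch=19, cex=0.5)  # sampled cells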
There may be a better way than this, but a quick way to do it is to randomly sample on both the x and y axes using a distribution (I used the normal, i.e. bell-shaped, distribution, but you can really use any). The trick is to make the mean of the distribution the position of the research station. You can change the bias towards the research station by changing the standard deviation of the distribution.
Then use the randomly drawn values as the x and y indices to select cells from the grid.
dat <- matrix(1:10000, nrow = 100)
#randomly selected a position for the research station
rs <- c(80,30)
# you can change the sd to change the bias
x <- round(rnorm(400,mean = rs[1], sd = 10))
y <- round(rnorm(400, mean = rs[2], sd = 10))
position <- rep(NA, 200)
j = 1
i = 1
# As some of the numbers sampled can be outside of the area you want, I oversampled
# and then only selected the first 200 that were in the area of interest.
while (j <= 200) {
if(x[i] > 0 & x[i] < 100 & y[i] > 0 & y[i] < 100){
position[j] <- dat[x[i],y[i]]
j = j +1
}
i = i +1
}
plot the results:
plot(x,y, pch = 19)
points(x =80,y = 30, col = "red", pch = 19) # position of the station
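A hedged, vectorised alternative to the while-loop above, reusing the same x, y and dat (it assumes enough of the 400 draws fall inside the grid to yield 200 sites):

keep <- x >= 1 & x <= 100 & y >= 1 & y <= 100    # draws that fall inside the 100 x 100 grid
position <- dat[cbind(x[keep], y[keep])][1:200]  # first 200 in-bounds cells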
Regarding two raster layers which do not match exactly because of defective data, I would like to know how to find out the x/y shift between these two layers, so that I can align them properly using raster::shift().
I have already tried to investigate the x/y shift using QGIS, but I only found the georeferencing tool, which allows relocating raster layers but is not interactive in the way I need. I am looking for a way to move my defective raster over a basemap and get information about the x/y shift.
I am NOT looking for a solution where I have to set specific georeferencing points to align the two raster layers, since I am working in a highly dynamic landscape where it is difficult to find matching points, but where it is possible to align the raster layers by the textural information in the datasets.
A code example should look like the solution provided by user dTanMan (https://gis.stackexchange.com/users/77712/dtanman) in this post: https://gis.stackexchange.com/a/201750
raster <- raster()
raster <- shift(raster, x=5, y=-15)
Thanks a lot in advance, cheers, ExploreR
Perhaps you can use something like this
Example data
library(raster)
a <- raster(ncol=20, nrow=20, xmn=0,xmx=20,ymn=0,ymx=20)
values(a) <- 1:400
set.seed(3)
b <- a + runif(400)
Function to compare similarity of cell values
rmse <- function(obs, prd) {
sqrt(mean((obs-prd)^2, na.rm=TRUE))
}
Values from reference raster. May need to take a sample if raster is very large
nsamples <- 10000
s <- sampleRegular(a, nsamples, cells=TRUE)
sample_a <- s[,2]
Locations to be compared
xy <- xyFromCell(a, s[,1])
Test range for cell shifts
xrange <- -5:5 * xres(a)
yrange <- -5:5 * yres(a)
Matrix to store the results in
result <- cbind(rep(xrange, each=length(yrange)), rep(yrange, length(xrange)), NA)
colnames(result) <- c("dx", "dy", "rmse")
Loop over cellshift combinations
i <- 1
for (dx in xrange) {
for (dy in yrange) {
x <- shift(b, dx, dy)
sample_b <- extract(x, xy)
result[i,3] <- rmse(sample_a, sample_b)
i <- i + 1
}
}
Results suggest that dx=0 and dy=0 is the best in this case.
r <- result[order(result[,3]), ]
head(r)
# dx dy rmse
#[1,] 0 0 0.5734866
#[2,] 1 0 0.5800670
#[3,] -1 0 1.5252878
#[4,] 2 0 1.5302921
#[5,] -2 0 2.5153573
#[6,] 3 0 2.5157728
Test
bb <- shift(b, dx=r[1,1], dy=r[1,2])
rmse(values(a), values(bb))
#[1] 0.5734866
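If a finer alignment is needed, the same search can be repeated over a sub-cell grid of offsets around the best coarse shift; a rough sketch, reusing rmse, sample_a, xy, a and b from above:

best <- r[1, ]
fine <- expand.grid(dx = best["dx"] + seq(-0.5, 0.5, 0.1) * xres(a),
                    dy = best["dy"] + seq(-0.5, 0.5, 0.1) * yres(a))
fine$rmse <- apply(fine, 1, function(p)
    rmse(sample_a, extract(shift(b, p["dx"], p["dy"]), xy)))
fine[which.min(fine$rmse), ]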
> d
[,1] [,2]
1 -0.5561835 1.49947588
2 -2.3985544 3.07130217
3 -3.8833659 -4.29331711
4 3.1025836 5.45359160
5 0.7438354 -2.80116065
6 7.0787294 -2.78121213
7 -1.6633598 -1.17898157
8 -0.6751930 0.03466162
9 1.4633841 0.50173157
10 -3.2118758 0.49390863
The above table gives the x (1st column) and y (2nd column) coordinates of the points I want to plot.
require(MASS) # for sammon(), which I used to generate the above coordinates
require(deldir) # for voronoi tessellations
dd <- deldir(d[,1], d[,2]) # voronoi tessellations
plot(dd,wlines="tess") # This will give me tessellations
I want my next tessellation to be plotted within one region of the above tessellation. I can get the lines that form the tessellation using dd$dirsgs; each line is given by its end points, with the first four columns giving the x1, y1 and x2, y2 coordinates respectively. Using this data, can I plot the next sub-tessellation within one region of the above tessellation?
For the next sub-tessellation, you can generate the coordinates of your choice. But I just want them to be in one region of the above plotted tessellation.
ind1 and ind2 in dd$dirsgs give the points in d which are separated by the line represented by the first 4 columns of dd$dirsgs.
For example, if we want to plot the sub-tessellation in the region containing the first point in d, then rows 1, 2, 9, 12 and 17 form the boundary for that point. Using this information, can we plot the sub-tessellation within this region?
I think I have covered everything needed to understand my problem. If any more information is needed, please let me know and I will provide it.
The way I understand it (assuming I have understood your question correctly), this can be done directly, because plot.deldir() accepts an add=TRUE argument.
d<-structure(list(V1 = c(-0.5561835, -2.3985544, -3.8833659, 3.1025836, 0.7438354,
7.0787294, -1.6633598, -0.675193, 1.4633841, -3.2118758), V2 =
c(1.49947588, 3.07130217, -4.29331711, 5.4535916, -2.80116065,
-2.78121213, -1.17898157, 0.03466162, 0.50173157, 0.49390863)), .Names =
c("V1","V2"), class = "data.frame", row.names = c(NA, -10L))
library(MASS)
library(deldir)
dd <- deldir(d[,1], d[,2])
plot(dd, wlines="tess")
First, let's extract the data for the polygon. As you noticed in the comments, it needs more processing than I previously thought, since the polygons in plot.deldir are drawn line by line rather than polygon by polygon, so the order of the lines is scrambled in dd$dirsgs.
ddd <- as.matrix(dd$dirsgs[dd$dirsgs$ind2==1,1:4])
d1poly <- rbind(ddd[1,1:2],ddd[1,3:4])
for( i in 2:nrow(ddd)){
x <- ddd[ddd[,1]==d1poly[i,1], 3:4]
d1poly <- rbind(d1poly, x)
}
d1poly
x2 y2
-2.096990 1.559118
0.303986 4.373353
x 1.550185 3.220238
x 0.301414 0.692558
x -1.834581 0.866098
x -2.096990 1.559118
Let's create some random data in the polygon of interest using package splancs:
library(splancs)
rd <- csr(as.matrix(d1poly),10) # For 10 random points in the polygon containing point 1
rd
xc yc
[1,] -1.6904093 1.9281052
[2,] -1.1321334 1.7363064
[3,] 0.2264649 1.3986126
[4,] -1.1883844 2.5996515
[5,] -0.6929208 0.8745020
[6,] -0.8348241 2.3318222
[7,] 0.9101748 1.9439797
[8,] 0.1665160 1.8754703
[9,] -1.1100710 1.3517257
[10,] -1.5691826 0.8782223
rdd <- deldir(c(rd[,1],d[1,1]),c(rd[,2],d[1,2]))
# don't forget to add the coordinates of your point 1 so it s part of the sub-tessellation
plot(dd, wlines="tess")
plot(rdd, add=TRUE, wlines="tess")
Edit
Concerning restricting the lines to within the boundary, the only solution I can think of is a rather ugly workaround: first draw the sub-tessellation, then hide everything outside the polygon of interest, and then plot the global tessellation on top.
plot(dd, wlines="tess", col="white", wpoints="none")
plot(rdd, wlines="tess", add=TRUE)
plotlim <- cbind(par()$usr[c(1,2,2,1)],par()$usr[c(3,3,4,4)])
extpoly <- rbind(plotlim, d1poly)
#Here the first point of d1poly is oriented toward the upper left corner: if it is oriented otherwise the order of plotlim has to be changed accordingly
polygon(extpoly, border=NA, col="white")
plot(dd, wlines="tess", add=TRUE)
You may instead want to consider using the spatstat package for this, as it can greatly simplify constraining the new tessellation to a tile of the existing tessellation. Your setup will then look like this:
library(spatstat)
# Plot the main tessellation and points
d<-structure(list(V1 = c(-0.5561835, -2.3985544, -3.8833659, 3.1025836, 0.7438354,
7.0787294, -1.6633598, -0.675193, 1.4633841, -3.2118758), V2 =
c(1.49947588, 3.07130217, -4.29331711, 5.4535916, -2.80116065,
-2.78121213, -1.17898157, 0.03466162, 0.50173157, 0.49390863)), .Names =
c("V1","V2"), class = "data.frame", row.names = c(NA, -10L))
d_points <- ppp(d$V1, d$V2, window=owin(c(-5, 8), c(-6, 6)))
main_tessellation <- dirichlet(d_points)
plot(main_tessellation, lty=3) # plot the tessellation
plot(d_points, add=TRUE) # add the points
# Plot the interior tessellation and points (color=red so the difference is clear)
# Arbitrarily choosing the 9th tile from the above tessellation:
target_poly <- owin(poly=main_tessellation$tiles[[9]]$bdry[[1]])
# Generate random set of points within the boundaries of the polygon chosen above
new_points <- runifpoint(6, win=target_poly)
# Generate and plot the new tessellation and points
new_tessellation <- dirichlet(new_points)
plot(new_tessellation, add=TRUE, col='red')
plot(new_points, add=TRUE, col='red')
This produces the main tessellation with the red sub-tessellation and its points nested inside the chosen tile.
See this closely related question: Voronoi diagram polygons enclosed in geographic borders
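As a hedged extra, not part of the original answer: if you build a sub-tessellation over a larger window and need to clip it to the chosen tile afterwards, spatstat's intersect.tess() can do that directly.

# Hedged sketch: points spanning the full window, tessellated and then clipped to the tile
other_points <- runifpoint(6, win=owin(c(-5, 8), c(-6, 6)))
clipped <- intersect.tess(dirichlet(other_points), target_poly)
plot(clipped, add=TRUE, col='blue')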
I have two matrices (each approximately 300 x 100) and I would like to plot a graph showing the parts of the first one that are higher than those of the second.
I can do, for instance:
# Calculate the matrices and put them into m1 and m2
# Note that the values are between -1 and 1
par(mfrow=c(1,3))
image(m1, zlim=c(-1,1))
image(m2, zlim=c(-1,1))
image(m1-m2, zlim=c(0,1))
This plots only the desired regions in the 3rd plot, but I would like to do something a bit different: put a line around those areas on the first plot, in order to highlight them directly there.
Any idea how I can do that?
Thank you
nico
How about:
par(mfrow = c(1, 3))
image(m1, zlim = c(-1, 1))
contour(m1 - m2, add = TRUE)
image(m2, zlim = c(-1, 1))
contour(m1 - m2, add = TRUE)
image(m1 - m2, zlim = c(0, 1))
contour(m1 - m2, add = TRUE)
This adds a contour map around the regions. Sort of puts rings around the areas of the 3rd plot (might want to fiddle with the (n)levels of the contours to get fewer 'circles').
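For instance, to get a single outline around the areas where m1 exceeds m2, rather than many rings, one option (a sketch, assuming m1 and m2 as in the question) is to draw only the zero contour of the difference:

image(m1, zlim = c(-1, 1))
contour(m1 - m2, levels = 0, drawlabels = FALSE, add = TRUE)  # the boundary where m1 == m2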
Another way of doing your third image might be:
image(m1>m2)
This produces a matrix of TRUE/FALSE values which gets imaged as 0/1, so you have a two-colour image. Still not sure about your 'putting a line around' thing though...
Here's some code I wrote to do something similar. I wanted to highlight contiguous regions above a 0.95 threshold by drawing a box round them, so I got all the grid squares above 0.95 and did a clustering on them. Then I do a bit of fiddling with the clustering output to get the rectangle coordinates of the regions:
computeHotspots = function(xyz, thresh, minsize=1, margin=1){
  ### given a list(x,y,z), return a matrix where each row is the
  ### (xmin,xmax,ymin,ymax) bounding box of a contiguous area over
  ### the given threshold -- or approximately so; let's use the
  ### clustering tools in R...
  overs <- which(xyz$z>thresh,arr.ind=T)
  if(length(overs)==0){
    ## found no hotspots
    return(NULL)
  }
  if(length(overs)==2){
    ## found one hotspot
    xRange <- cbind(xyz$x[overs[,1]],xyz$x[overs[,1]])
    yRange <- cbind(xyz$y[overs[,2]],xyz$y[overs[,2]])
  }else{
    ## cluster the over-threshold cells into contiguous groups
    oTree <- hclust(dist(overs),method="single")
    oCut <- cutree(oTree,h=10)
    oXYc <- data.frame(x=xyz$x[overs[,1]],y=xyz$y[overs[,2]],oCut)
    xRange <- do.call("rbind",tapply(oXYc[,1],oCut,range))
    yRange <- do.call("rbind",tapply(oXYc[,2],oCut,range))
  }
  ### add user-margins
  xRange[,1] <- xRange[,1]-margin
  xRange[,2] <- xRange[,2]+margin
  yRange[,1] <- yRange[,1]-margin
  yRange[,2] <- yRange[,2]+margin
  ## enforce a minimum box size
  xr <- apply(xRange,1,diff)
  xm <- apply(xRange,1,mean)
  xRange[xr<minsize,1] <- xm[xr<minsize]-(minsize/2)
  xRange[xr<minsize,2] <- xm[xr<minsize]+(minsize/2)
  yr <- apply(yRange,1,diff)
  ym <- apply(yRange,1,mean)
  yRange[yr<minsize,1] <- ym[yr<minsize]-(minsize/2)
  yRange[yr<minsize,2] <- ym[yr<minsize]+(minsize/2)
  ## put it all together
  cbind(xRange,yRange)
}
Test code:
x=1:23
y=7:34
m1=list(x=x,y=y,z=outer(x,y,function(x,y){sin(x/3)*cos(y/3)}))
image(m1)
hs = computeHotspots(m1,0.95)
That should give you a matrix of rectangle coordinates:
> hs
[,1] [,2] [,3] [,4]
1 13 15 8 11
2 3 6 17 20
3 22 24 18 20
4 13 16 27 30
Now you can draw them over the image with rect:
image(m1)
rect(hs[,1],hs[,3],hs[,2],hs[,4])
and to show they are where they should be:
image(list(x=m1$x,y=m1$y,z=m1$z>0.95))
rect(hs[,1],hs[,3],hs[,2],hs[,4])
You could of course adapt this to draw circles, but more complex shapes would be tricky. It works best when the regions of interest are fairly compact.
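For the circle variant mentioned above, a quick hedged sketch using base R's symbols(), with one circle per hotspot box:

cx <- (hs[,1] + hs[,2]) / 2                       # box centres
cy <- (hs[,3] + hs[,4]) / 2
r  <- pmax(hs[,2] - hs[,1], hs[,4] - hs[,3]) / 2  # radius = half the larger side
image(m1)
symbols(cx, cy, circles = r, inches = FALSE, add = TRUE)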
Barry