convert loess to spatial data - r

I am completely at a loss on how to convert from atomic vectors, lists, etc. to spatial data.
I want to work with data in polygons on a map (an n-by-m matrix), specifically the output from functions such as loess and akima. For example, from loess I get:
List of 3
$ x: num [1:112] 656977 657024 657071 657118 657165 ...
$ y: num [1:82] 661500 661544 661587 661631 661675 ...
$ z: num [1:112, 1:82] -725 -724 -720 -715 -707 ...
where x and y are State Plane Coordinates, and z is a combination of land-surface and bathymetry elevations. The bathymetry is inside a polygon, and some of the loess results spill outside of the polygon onto the matrix. I want to zero out everything outside of the polygon. I believe I can do that with over, but the data needs to be a spatial object first.
How do I do that? I have been searching and trying things out for weeks.
Thanks in advance...

It's a pretty broad question, so here are hints and pointers rather than specifics.
R has many types of spatial classes, but over the past few years things have converged on the Spatial___DataFrame classes of the sp package.
Bivand's Applied Spatial Analysis with R book has a detailed set of examples and an examination of the internals, including how to create these objects and how to convert between the various spatial classes.
There's also a vignette for sp to help get you started.
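As a concrete starting point, here is a minimal sketch of the clipping step with over. Everything here is a stand-in: the synthetic fit list mimics the x/y/z structure loess returned in the question, and poly is a toy polygon in place of the real bathymetry boundary, so treat this as a pattern rather than a recipe.

```r
library(sp)

# Synthetic stand-ins: 'fit' mimics the $x/$y/$z list from loess/akima,
# 'poly' is a toy bathymetry polygon.
fit  <- list(x = 1:4, y = 1:3, z = matrix(rnorm(12), nrow = 4, ncol = 3))
poly <- SpatialPolygons(list(Polygons(list(
          Polygon(cbind(x = c(0.5, 2.5, 2.5, 0.5),
                        y = c(0.5, 0.5, 3.5, 3.5)))), ID = "bathy")))

grd   <- expand.grid(x = fit$x, y = fit$y)  # one row per grid-cell centre
grd$z <- as.vector(fit$z)                   # column-major z matches expand.grid order
coordinates(grd) <- ~ x + y                 # promote to SpatialPointsDataFrame
inside <- !is.na(over(grd, poly))           # NA from over() = point outside polygon
grd$z[!inside] <- 0                         # zero out everything outside
gridded(grd) <- TRUE                        # now a SpatialPixelsDataFrame
```

With real data you would also set matching CRSs on both objects (proj4string) before calling over.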

Assuming you could do a traditional loess with a single predictor and plot it: a two-way interaction can be thought of as occurring on a two-dimensional plane. Therefore, by simply providing x and y as interaction terms, you should be able to plot your loess surface with some three-dimensional method, even if that is just showing the predicted z-values as levels in a heat map.
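To make that concrete, here is a small sketch on synthetic data (base R only): fit loess with x and y as interacting predictors, then render the predicted z-values as a heat map.

```r
# Synthetic example: fit loess(z ~ x * y) and view predictions as a heat map.
set.seed(1)
df <- data.frame(x = runif(200), y = runif(200))
df$z <- sin(3 * df$x) + cos(3 * df$y) + rnorm(200, sd = 0.1)

fit <- loess(z ~ x * y, data = df, degree = 2, span = 0.3)

# Predict over a regular grid; predict.loess returns a matrix when
# newdata comes from expand.grid.
xg <- seq(min(df$x), max(df$x), length.out = 60)
yg <- seq(min(df$y), max(df$y), length.out = 60)
zg <- predict(fit, expand.grid(x = xg, y = yg))
image(xg, yg, zg, xlab = "x", ylab = "y", main = "loess surface")
```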

Related

Point patterns analysis : clarkevans.test and edge correction

First, I'm really sorry because I'm new to R, so I apologise if I missed a previous answer to this question; I also apologise because I cannot attach pictures to my text (and for my English mistakes, as I'm not bilingual).
I'm working with QGIS and R. I have a forest (layer = parcels) with georeferenced specific trees (layer = coordinates of specific trees). I want to know whether those specific trees are aggregated, so I import my QGIS layer of specific trees into R (package rgdal, function readOGR).
Then I calculate the Clark-Evans index (package spatstat, function clarkevans.test) using the following line. The Clark-Evans index is the ratio of the observed mean nearest-neighbour distance to the adjusted theoretical mean (R = 0: complete aggregation; R = 1: complete randomness; R = 2.14914: uniform pattern).
clarkevans.test(PPP, correction="Donnelly", alternative="two.sided")
Where PPP is my specific-trees layer in ppp format:
PPP <- ppp(x = coordinates(spec_trees)[, 1],
           y = coordinates(spec_trees)[, 2],
           xrange = range(coordinates(spec_trees)[, 1]),
           yrange = range(coordinates(spec_trees)[, 2]))
And where the Donnelly correction is a correction for a rectangular window. I'm not sure whether I should use it or not; without a correction I get almost the same result.
To the Clark-Evans test, R responds:
R = 0.48929, p-value = 0.002
alternative hypothesis: two-sided
Which means my points are significantly different from complete spatial randomness (p-value < 0.05) AND my points are aggregated (R < 1).
BUT I think R might overestimate the aggregation, and I need the true value. My forest (layer = parcels) is not a square or an oval. The parcels are discontinuous (there are lakes, roads, houses)! Trees cannot be everywhere in the square: where there are no parcels, there can be no trees, so of course there are no specific trees there. But R does not know that, so it just searches for aggregation of specific trees in an empty square.
So my question is the following: can I analyse the spatial pattern of points IN a limited area which has a complex shape?
I hope I am clear, but do not hesitate to ask me questions.
Since this question has been viewed many times, here is a detailed answer.
The analysis of a point pattern should always take account of the spatial region where points could have occurred. The spatstat package is designed to support this.
Point patterns are represented by objects of class ppp. The spatial region ("window") can be any complicated shape, or shapes, represented by an object of class owin.
You can create a point pattern in an irregularly-shaped window by
A <- ppp(x, y, window=W)
where x and y are vectors of coordinates, and W is a spatial region object of class owin created by the function owin. The object W can be a polygon, or several disconnected polygons, or a binary pixel mask, etc.
By the way, we strongly discourage the use of syntax like
PPP <- ppp(x, y, xrange = range(x), yrange = range(y))
because even when the spatial region is known to be a rectangle, it is unwise to estimate the limits of the rectangle by using the ranges of the coordinates.
For more information, see the spatstat book.
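Here is a minimal sketch of that workflow. The L-shaped polygon is made up purely for illustration (real parcel boundaries would come from, e.g., as.owin applied to imported shapefile data), and the "cdf" correction is used here rather than "Donnelly", since the latter assumes a rectangular window.

```r
library(spatstat)

# A made-up L-shaped study region standing in for real parcel polygons.
# The outer boundary must be traversed anticlockwise.
W <- owin(poly = list(x = c(0, 4, 4, 2, 2, 0),
                      y = c(0, 0, 2, 2, 4, 4)))

set.seed(42)
X <- runifpoint(60, win = W)   # 60 uniform random points inside the window

# Clark-Evans test that respects the irregular window
clarkevans.test(X, correction = "cdf", alternative = "two.sided")
```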

R: spatial interpolation with akima package on irregular grid with void data

I have an irregular grid of points of the form (X, Y, Z), where (X, Y) are coordinates on a plane (they can be geographical longitude/latitude) and Z is some property to interpolate between the points. I use the akima interpolation package in R. The data set can contain missing values, and the akima package does not like them; this can be remedied with complete.cases() to reorganise the data set. But there is the following issue. Certain points contain no data, in the sense that the interpolated quantity is absent there (NA in R). As a concrete example, Z is the depth of a stratigraphic interval, for example Quaternary deposits. In those places I need a "hole" in the interpolated grid, showing that the layer is absent; meanwhile, the algorithm simply interpolates between the available points with data.
# small extract from data; note the header must have no spaces,
# or read.csv will mangle the column names
mydf <- read.csv(text = "lon,lat,Q
411,362,1300
377,395.5,1425
427,370,1800
435.5,352,
428,357,
390,423,1700")
library("akima")
bbb <- data.frame(lon = mydf$lon, lat = mydf$lat, H = mydf$Q)
ccc <- bbb[complete.cases(bbb), ]
Q.int <- interp(ccc$lon, ccc$lat, ccc$H, linear = TRUE,
                extrap = FALSE, nx = 240, ny = 240, duplicate = "mean")
Then it can be visualized, for example, with image.plot() from the fields package:
library("fields")
image.plot(Q.int)
In this data set, points 4 and 5 are lacking data. This can be either (1) a lack of data at these points, or (2) an indication that the deposits are absent there. I can note this in the data set explicitly, for example with NA. But I need an interpolation that distinguishes these two cases.
My solution was to interpolate "as is" and then use a trick: declare that all interpolated values of the property Z below 30 metres are effectively NAs, and then plot it:
Q.int$z.cut <- ifelse(Q.int$z < 30, NA, Q.int$z)
This could reflect the geological situation, since layers of decreasing thickness indeed "fade out" and can be stripped from a map, but would it be possible to handle this problem in a more elegant way?
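One possibly more elegant variant (a sketch, not tested against real stratigraphic data) is to interpolate a presence/absence indicator alongside Q on the same output grid, and blank the grid wherever absence dominates, instead of thresholding Z itself:

```r
library(akima)

# Same toy data as above, with explicit NAs marking absent deposits.
mydf <- read.csv(text = "lon,lat,Q
411,362,1300
377,395.5,1425
427,370,1800
435.5,352,NA
428,357,NA
390,423,1700")
mydf$present <- as.numeric(!is.na(mydf$Q))   # 1 = deposits present, 0 = absent

# A shared output grid so both surfaces line up cell-for-cell.
xo <- seq(min(mydf$lon), max(mydf$lon), length.out = 240)
yo <- seq(min(mydf$lat), max(mydf$lat), length.out = 240)

ccc   <- mydf[complete.cases(mydf), ]
Q.int <- interp(ccc$lon, ccc$lat, ccc$Q, xo = xo, yo = yo,
                linear = TRUE, duplicate = "mean")
P.int <- interp(mydf$lon, mydf$lat, mydf$present, xo = xo, yo = yo,
                linear = TRUE, duplicate = "mean")

# Punch the "hole": NA wherever interpolated presence drops below 0.5.
hole <- !is.na(P.int$z) & P.int$z < 0.5
Q.int$z[hole] <- NA
```

The 0.5 cutoff is arbitrary; moving it trades off how far the "hole" extends from the no-deposit points.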

R - DBSCAN fviz_cluster - get coordinates of elements with dim1 and dim2

I'm a noob with R, and I'm trying to do clustering on some data samples.
I've tried a PCA:
res.pca <- PCA(df,
               ncp = 5,  # number of principal components
               graph = TRUE)
and I can get the full list of elements with their new coordinates using
res.pca$ind
This is great and works perfectly.
For info: using the first two axes of the PCA, I get 80% of the variability on one axis and a bit more than 10% on the second axis. I was quite proud of that result, considering that I have 30 variables; in the end, the PCA implicitly says that two dimensions will be enough.
Still working on those data, I tried the DBSCAN clustering method fpc::dbscan:
library(factoextra)
db <- fpc::dbscan(df, eps = 22, MinPts = 3)
and after running dbscan and graphing the clusters using fviz_cluster, the two-dimensional display says 92.8% on axis 1 and 6.7% on axis 2: more than 99% of the total variance explained with two axes!
In short, DBSCAN appears to have transformed my 30-variable data in a way that looks even better than the PCA. The overall clustering from DBSCAN is rubbish for my data, but the transformation that has been used is excellent.
My issue is that I would like to get access to those new coordinates, but I have found no way so far.
The only accessible variables I can see are
db$cluster, db$eps, db$MinPts, db$isseed
but I suspect that more data must be accessible; otherwise, how could fviz_cluster present it?
Any idea?
The projection is not done by dbscan. fviz_cluster uses the first two components obtained via stats::prcomp on the data.
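Since the projection is just a standard PCA, the coordinates can be recomputed directly. A sketch (df here is stood in for by the iris measurements; scale. = TRUE is meant to mirror fviz_cluster's default stand = TRUE, which I believe standardises the data first):

```r
# Recompute the Dim1/Dim2 coordinates that fviz_cluster displays.
# 'df' is assumed to be the same data passed to fpc::dbscan; the iris
# measurements stand in for it here.
df <- iris[, 1:4]

pc <- prcomp(df, scale. = TRUE)   # scale. = TRUE to mirror stand = TRUE
coords <- pc$x[, 1:2]             # one (Dim1, Dim2) pair per observation
colnames(coords) <- c("Dim1", "Dim2")
head(coords)
```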

R: Is it possible to plot a grid from x, y spatial coordinates?

I've been working with a spatial model which contains 21,000 grid cells of unequal size (i by j, where i is [1:175] and j is [1:120]). I have the latitude and longitude values in two separate arrays (lat_array, lon_array) of i and j dimensions.
Plotting the coordinates:
> plot(lon_array, lat_array, main='Grid Coordinates')
Result: (scatter plot image omitted)
My question: is it possible to plot these spatial coordinates as a grid rather than as points? Does anyone know of a package or function that might be able to do this? I haven't been able to find anything of this nature online.
Thanks.
First of all, it is always a bit risky to plot inherently spherical coordinates (lat, long) directly in the plane. Usually you should project them in some way; I will leave it to you to explore the sp package and the function spTransform.
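A sketch of that projection step (the coordinates and target CRS are placeholders; pick a projection appropriate to your region, and note that spTransform needs a PROJ-aware backend, historically rgdal and nowadays sf):

```r
library(sp)

# Toy lon/lat points; replace with values from your lon_array/lat_array.
lonlat <- cbind(lon = c(14.5, 15.0, 15.5), lat = c(48.0, 48.2, 48.4))
pts <- SpatialPoints(lonlat, proj4string = CRS("+proj=longlat +datum=WGS84"))

# Project to a planar CRS before tessellating; UTM zone 33N is just an
# example, choose a zone/CRS that fits your study area.
pts_utm <- spTransform(pts, CRS("+proj=utm +zone=33 +datum=WGS84"))
coordinates(pts_utm)   # now in metres
```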
I guess in principle you could simply use the deldir package to calculate the Dirichlet tessellation of your points, which would give you a nice grid. However, you need a bounding region for this, to avoid large cells radiating out from the border of your region. I personally use spatstat to call deldir, so I can't give you the direct commands in deldir, but in spatstat I would do something like:
library(spatstat)
plot(lon_array, lat_array, main='Grid Coordinates')
W <- clickpoly(add = TRUE) # Now click the region that contains your grid
i_na <- is.na(lon_array) | is.na(lat_array) # Index of NAs
X <- ppp(lon_array[!i_na], lat_array[!i_na], window = W)
grid <- dirichlet(X)
plot(grid)
I have not tested this yet, and I will update this answer once I get a chance to try it with some artificial data. A major concern is the size of your dataset, which may make the Dirichlet tessellation slow to compute; I have only tried calling dirichlet on datasets of up to 3,000 points.

options to allow heavily-weighted points on a map to overwhelm other points with low weights

What are some good kriging/interpolation ideas/options that will allow heavily-weighted points to bleed over lightly-weighted points on a plotted R map?
The state of Connecticut has eight counties. I found the centroids and want to plot the poverty rates of each of these eight counties. Three of the counties are very populated (about 1 million people each) and the other five are sparsely populated (about 100,000 people each). Since the three densely-populated counties hold more than 90% of the total state population, I would like them to completely "overwhelm" the map and influence points across the county borders.
The Krig function in the R fields package has a lot of parameters, and covariance functions that can be supplied, but I'm not sure where to start.
Here is reproducible code to quickly produce a hard-bordered map and then three differently-weighted maps. Hopefully I can just make changes to this code, but perhaps it requires something more complex, like the geoRglm package? Two of the three weighted maps look almost identical, despite one being 10x as heavily weighted as the other.
https://raw.githubusercontent.com/davidbrae/swmap/master/20141001%20how%20to%20modify%20the%20Krig%20function%20so%20a%20huge%20weight%20overwhelms%20nearby%20points.R
Thanks!!
Edit: here's a picture example of the behavior I want.
Disclaimer: I am not an expert on kriging. Kriging is complex and takes a good understanding of the underlying data, the method, and the purpose to achieve a correct result. You may wish to get input from #whuber [on the GIS Stack Exchange, or contact him through his website (http://www.quantdec.com/quals/quals.htm)] or another expert you know.
That said, if you just want to achieve the visual effect you requested and are not using this for some sort of statistical analysis, I think there are some relatively simple solutions.
EDIT:
As you commented, though the suggestions below to use the theta and smoothness arguments do even out the prediction surface, they apply equally to all measurements and thus do not extend the "sphere of influence" of the more densely populated counties relative to the less densely populated ones. After further consideration, I think there are two ways to achieve this: by altering the covariance function to depend on population density, or by using weights, as you have. Your weighting approach, as I wrote below, alters the error term of the kriging function; that is, it inversely scales the nugget variance.
As you can see in the semivariogram image, the nugget is essentially the y-intercept, or the error between measurements at the same location. Weights affect the nugget variance (sigma2) as sigma2/weight. Thus, greater weights mean less error at small-scale distances. This does not, however, change the shape of the semivariance function or have much effect on the range or sill.
I think the best solution would be to make your covariance function depend on population; however, I'm not sure how to accomplish that, and I don't see any arguments to Krig for doing so. I tried defining my own covariance function as in the Krig example, but only got errors.
Sorry I couldn't help more!
Another great resource for understanding kriging is: http://www.epa.gov/airtrends/specialstudies/dsisurfaces.pdf
As I said in my comment, the sill and nugget values, as well as the range of the semivariogram, are things you can alter to affect the smoothing. By specifying weights in the call to Krig, you are altering the variance of the measurement errors. That is, in normal use, weights are expected to be proportional to the accuracy of the measurements, so that higher weights represent more accurate measurements. This isn't actually true of your data, but it may be giving you the effect you desire.
To alter the way your data are interpolated, you can adjust two (of many) parameters in the simple Krig call you are using: theta and smoothness. theta adjusts the semivariance range, meaning that measured points farther away contribute more to the estimates as you increase theta. Your data range is
range <- data.frame(lon = range(ct.data$lon), lat = range(ct.data$lat))
range[2, ] - range[1, ]
       lon       lat
2 1.383717 0.6300484
so your measurement points span roughly 1.4 degrees of longitude and 0.6 degrees of latitude. You can therefore try theta values in that range to see how they affect your result. In general, a larger theta leads to more smoothing, since you are drawing on more values for each prediction.
Krig.output.wt <- Krig(cbind(ct.data$lon, ct.data$lat), ct.data$county.poverty.rate,
                       weights = c(size, 1, 1, 1, 1, size, size, 1),
                       Covariance = "Matern", theta = 0.8)
r <- interpolate(ras, Krig.output.wt)
r <- mask(r, ct.map)
plot(r, col = colRamp(100), axes = FALSE, legend = FALSE)
title(main = "Theta = 0.8", outer = FALSE)
points(cbind(ct.data$lon, ct.data$lat))
text(ct.data$lon, ct.data$lat - 0.05, ct.data$NAME, cex = 0.5)
Gives: (resulting map image omitted)
Krig.output.wt <- Krig(cbind(ct.data$lon, ct.data$lat), ct.data$county.poverty.rate,
                       weights = c(size, 1, 1, 1, 1, size, size, 1),
                       Covariance = "Matern", theta = 1.6)
r <- interpolate(ras, Krig.output.wt)
r <- mask(r, ct.map)
plot(r, col = colRamp(100), axes = FALSE, legend = FALSE)
title(main = "Theta = 1.6", outer = FALSE)
points(cbind(ct.data$lon, ct.data$lat))
text(ct.data$lon, ct.data$lat - 0.05, ct.data$NAME, cex = 0.5)
Gives: (resulting map image omitted)
Adding the smoothness argument will change the order of the function used to smooth your predictions. The default is 0.5, leading to a second-order polynomial.
Krig.output.wt <- Krig(cbind(ct.data$lon, ct.data$lat), ct.data$county.poverty.rate,
                       weights = c(size, 1, 1, 1, 1, size, size, 1),
                       Covariance = "Matern", smoothness = 0.6)
r <- interpolate(ras, Krig.output.wt)
r <- mask(r, ct.map)
plot(r, col = colRamp(100), axes = FALSE, legend = FALSE)
title(main = "Theta unspecified; Smoothness = 0.6", outer = FALSE)
points(cbind(ct.data$lon, ct.data$lat))
text(ct.data$lon, ct.data$lat - 0.05, ct.data$NAME, cex = 0.5)
Gives: (resulting map image omitted)
This should give you a start and some options, but you should look at the manual for fields. It is well written and explains the arguments thoroughly.
Also, if this is in any way quantitative, I would highly recommend talking to someone with significant spatial-statistics know-how!
Kriging is not what you want. (It is a statistical method for accurate--not distorted!--interpolation of data. It requires preliminary analysis of the data--of which you do not have anywhere near enough for this purpose--and cannot accomplish the desired map distortion.)
The example and the references to "bleed over" suggest considering an anamorph or area cartogram. This is a map which will expand and shrink the areas of the county polygons so that they reflect their relative population while retaining their shapes. The link (to the SE GIS site) explains and illustrates this idea. Although its answers are less than satisfying, a search of that site will reveal some effective solutions.
Lots of interesting comments and leads above.
I took a look at the Harvard dialect survey first, to get a sense of what you are trying to do. I must say, really cool maps. And before I start in on what I came up with: I've looked at your work on survey analysis before and have learned quite a few tricks. Thanks.
My first thought was that if you want to do spatial smoothing by way of kernel density estimation, then you need to be thinking in terms of point-process models. I'm sure there are other ways, but that's where I went.
So what I do below is grab a very generic US map and convert it into something I can use as a sampling window. Then I create random samples of points within that region; just pretend those are your centroids. After that I attach random values to those points and plot it up.
I just wanted to test this conceptually, which is why I didn't go through the extra steps to grab CBSAs; also, sorry for not projecting, but I think these are the fundamentals. Oh, and the smoothing in the dialect study is being done over the whole country, I think: the author is not stratifying his smoothing procedure within polygons, so I just added the state boundaries at the end.
code:
library(sp)
library(spatstat)
library(RColorBrewer)
library(maps)
library(maptools)
# grab the US map from the R maps package
usMap <- map("usa")
usIds <- usMap$names
# convert to spatial polygons so this can be used as a window below
usMapPoly <- map2SpatialPolygons(usMap, IDs = usIds)
# just select the mainland US, with no islands
usMapPoly <- usMapPoly[names(usMapPoly) == "main", ]
# create a random sample of points to smooth over within the map
pts <- spsample(usMapPoly, n = 250, type = 'random')
# just for a quick check of the map and sampling locations
plot(usMapPoly)
points(pts)
# create values associated with the points; be sure to play around with
# these after you get the map, it's fun
vals <- rnorm(250, 100, 25)
valWeights <- vals / sum(vals)
ptsCords <- data.frame(pts@coords)
# create the window for the point pattern object (ppp) created below
usWindow <- as.owin(usMapPoly)
# create the spatial point pattern object
usPPP <- ppp(ptsCords$x, ptsCords$y, marks = vals, window = usWindow)
# create a colour ramp
col <- colorRampPalette(brewer.pal(9, "Reds"))(20)
# the plots; here is where the Gaussian kernel density estimation magic happens
# if you want a continuous legend on one of the sides, get rid of ribbon=FALSE
# and be sure to play around with sigma
plot(Smooth(usPPP, sigma = 3, weights = valWeights), col = col, main = NA, ribbon = FALSE)
map("state", add = TRUE, fill = FALSE)
Example with no weights:
Example with my trivial weights:
There is obviously a lot of work between this and your goal of making this type of map reproducible at various levels of spatial aggregation and sample data, but good luck; it seems like a cool project.
P.S. Initially I did not use any weighting, but I suppose you could provide the weights directly to the Smooth function. The two example maps are above.
