raster::extract produces an empty list - r

I am trying to extract the values of pixels in a DSM (CHM) within digitized tree crowns.
First I set my working directory and read in the shapefile and raster:
library(raster)
TreeCrowns <- shapefile("plot1sag_shape/plot1sag.shp")
CHM <- raster('272280split4.tif')
Then I try to extract the pixel values:
pixel <- raster::extract(CHM, TreeCrowns, method = 'simple', weights = FALSE, fun = NULL)
But I get an empty list, with NULL values for every polygon. I have confirmed that the CHM and the polygons are in the same location. What can I do to fix this?

Since your shapefile consists of polygons, the extract() function needs to know how to summarise the pixel values across each polygon, via the fun= argument. Because you passed fun=NULL, the function interprets that as returning NULL instead of a summary of the pixel values.
Try fun=mean or fun=sum (they mean different things, so check which one suits your purpose).
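For example, a minimal call along these lines (using the object names from the question) should return one summary value per polygon:
pixel_means <- raster::extract(CHM, TreeCrowns, fun = mean, na.rm = TRUE)  # mean pixel value per crown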

That probably happens because the polygons and the raster do not overlap. Can you show the output of show(CHM) and show(TreeCrowns)? Have you looked at
plot(CHM)
lines(TreeCrowns)
Or are your polygons very small relative to the raster cells? In that case, try the argument small=TRUE.
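A quick sanity check along these lines (a sketch, assuming both objects are loaded as in the question) can confirm whether the coordinate reference systems and extents actually match:
library(raster)
crs(CHM)                                     # compare the two CRS descriptions
crs(TreeCrowns)
intersect(extent(CHM), extent(TreeCrowns))   # NULL (with a warning) if the extents do not overlap
pixel <- raster::extract(CHM, TreeCrowns, small = TRUE)  # for polygons smaller than a cell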

How to get covariate data from a geographic raster for `ppm`?

I want to fit a Poisson point-process model with spatstat::ppm and I'm unsure what the best way is to feed covariate data to the function. I understand that spatstat expects planar coordinates, so I have transformed my point location data to a planar CRS before creating a ppp point pattern object. The covariate data are in a raster stack with unprojected geographic coordinates, and I understand that projecting rasters is generally ill-advised. I extracted covariate values for the point locations from the raster using the points' original geographic coordinates and raster::extract. So far so good. The issue is ...
"it is not sufficient to have observed the covariate only at the points of the data point pattern; the covariate must also have been observed at other locations in the window." (ppm help file)
I appear to have two options for providing the covariate data to the data argument.
A pixel image; seems ill-advised because of raster projection issues.
A list of functions (one per covariate) that can be evaluated at any location (x,y) to obtain the corresponding covariate values. This seems like the way to go, but my attempt at writing such a function turns out to be ridiculously slow: it calls raster::extract for each coordinate pair after transforming the coordinates to the raster's CRS. While raster::extract is reasonably fast when given a large number of points, there appears to be substantial overhead for each call. According to microbenchmark, the coordinate transformation takes about 4 ms and the extraction takes about 582 ms for a single covariate, or about 4 seconds per point to get all 7 covariates (a simplified sketch of the function is below). I don't know how many times ppm will want to call this, but if it's even once per point in the pattern, it'll take too long.
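A simplified sketch of that per-point lookup (planar_crs and cov_stack are illustrative names for my projected CRS object and the covariate raster stack): every call pays the fixed overhead of the transform and the extraction.
covfun <- function(x, y) {
  pt <- sp::SpatialPoints(cbind(x, y), proj4string = planar_crs)  # points in planar coordinates
  pt_geo <- sp::spTransform(pt, raster::crs(cov_stack))           # ~4 ms per call
  raster::extract(cov_stack, pt_geo)                              # ~582 ms per covariate
}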
Is there some way I can find out the complete set of points that ppm will query for covariate data, so that I can extract those beforehand with a single call?
It seems like my use case (covariates in a geographic raster) should be pretty common, so I'm guessing there's an established way to do this right. What is it?
Thanks for a well written question clearly identifying your need. It would have been even better with a simple reproducible example using e.g. built-in data from raster and spatstat, or artificially generated data. In the absence of a reproducible example, my answer is only an outline, with rough untested sketches, of what you could do.
The first step in ppm is to make a quadrature scheme, of class quad or logiquad depending on which maximum-likelihood approximation ppm uses. These can be generated directly by the user via quadscheme or quadscheme.logi. The quadrature scheme contains all the points where ppm will evaluate the covariates. You can extract the coordinates of the quadrature scheme with the function coords. If you construct a data.frame with all covariates evaluated at these points, you can supply that as the data argument to ppm, with the quadrature scheme as the first argument. To understand things better, try reading the Details section of help(ppm.quad).
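A rough, untested sketch of that approach (ppp_obj, planar_crs and rast_stack are placeholders for your point pattern, projected CRS object and raster stack; covar1, covar2 stand in for your layer names):
library(spatstat)
Q <- quadscheme(ppp_obj)                                       # data + dummy quadrature points
xy <- coords(Q)                                                # every location ppm will evaluate
pts <- sp::SpatialPoints(as.matrix(xy), proj4string = planar_crs)
pts_geo <- sp::spTransform(pts, raster::crs(rast_stack))       # back to the raster's geographic CRS
covars <- as.data.frame(raster::extract(rast_stack, pts_geo))  # one vectorised extraction
fit <- ppm(Q, ~ covar1 + covar2, data = covars)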
Another approach, which may make optimal use of your data, is to extract the grid points of your current raster stack together with all the covariate values, and project this point data. Then convert it to a simple data.frame with columns x, y, covar1, covar2, etc. You can then use x and y together with your point observations of interest to create a quadrature scheme manually, and the remaining columns can be supplied as data to ppm.
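Again only as an untested sketch, with the same placeholder names, and with covars_at_data standing for the covariate values you already extracted at the observation points (the data argument must cover data points first, then dummy points, with matching column names):
grid <- as.data.frame(raster::rasterToPoints(rast_stack))   # columns x, y, covar1, covar2, ...
pts <- sp::SpatialPoints(grid[, 1:2], proj4string = raster::crs(rast_stack))
grid[, 1:2] <- sp::coordinates(sp::spTransform(pts, planar_crs))  # project the grid, not the raster
dummy <- ppp(grid$x, grid$y, window = Window(ppp_obj))      # grid points as dummy points
Q <- quadscheme(ppp_obj, dummy)                             # manual quadrature scheme
covars <- rbind(covars_at_data, grid[, -(1:2)])             # covariates at data, then dummy points
fit <- ppm(Q, ~ covar1 + covar2, data = covars)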
It would be interesting to compare the results from both these approaches as well as the results from just projecting the raster stack and converting it to a list of im objects.

Road Length within Polygons in R

I have a shapefile of a road network and another shapefile containing area boundaries. Is there better code that I can use to get the length of the roads that lie inside each polygon?
This question was asked earlier, with the difference that I want to use R instead of QGIS.
I tried:
intersec <- intersect(roads, Polygon)
road_length <- tapply(intersec$length, intersec$polygon, sum)
This works, but the problem is that the intersection does not split the length of roads that cross two polygons; it duplicates those roads in the intersec file and assigns the full length of each crossing road to both polygons.
How I found out about the problem: there is no error message, but the following check tells me that something is wrong:
a <- sum(roads$length)
b <- sum(intersec$length)
a and b are not equal; a is smaller than b.
I actually did this for a project about 8 months ago.
I had been getting into the sf way of dealing with spatial data, so my solution uses classes, methods, and functions from that package.
First, I made sure both my roads and shapes had the same coordinate reference system (CRS) by using sf::st_transform on one of them. Then I used sf::st_intersection() to find the intersections, and sf::st_length() on the result to get the lengths. You may need to aggregate the lengths at this point, depending on whether your roads are combined into one super-multi-line or each road is its own object. The following gives the gist of what I think ought to work:
road <- sf::st_transform(road, sf::st_crs(shape))  # match the CRS first
sf::st_intersection(road, shape) %>%               # clip roads to the polygons (lines or multilines)
  dplyr::mutate(len_m = sf::st_length(geom)) %>%   # find the length of each clipped piece
  dplyr::group_by(SHAPE_COLUMNS) %>%               # insert all the columns from your shapes here
  dplyr::summarize(len_m = sum(len_m))
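One follow-up note: sf::st_length() returns values with units attached (metres for a metric projected CRS). If plain numbers or kilometres are wanted, and assuming the result of the pipeline above was assigned to road_lengths, something like this should work:
road_lengths$len_km <- units::set_units(road_lengths$len_m, km)  # or as.numeric(...) to drop units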

r alphahull post-processing / avoid two hulls

I have a table with coordinates of points and want to get the smallest polygon around them. I tried different functions, and so far alphahull works best for my purposes. My main interest is the area of the hull. I have approximately 3500 datasets, so I have to find a reliable method for my analysis.
I analysed some datasets and realised that in some cases I get a hull within a hull, and areaahull() is not able to return an area. A higher alpha value would avoid this but would overestimate my area by far.
Is there a way to post-process my alpha hull to remove the second hull? Or a better method to get the size of the area?
library(alphahull)
tmp <- ahull(path.points.1$x, path.points.1$y, alpha = 50)
plot(tmp, wpoints = FALSE)
Link to example dataset
I found a solution which seems to work for my purposes: the function ahull_track() returns only the boundary, as a list of geom_path() objects, so the coordinates of the individual boundary segments are stored in a list. Unfortunately they are not in the correct order, so it is not a straightforward solution: I had to write a function which rearranges the segments into the correct order and generates a polygon.
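A sketch of that rearranging step might look like the following (untested against the original data; it assumes the segment coordinates have already been pulled out of the ahull_track() result into segs, a list of 2x2 matrices with one (x, y) endpoint per row). It chains segments by matching endpoints and stops when no segment connects, which is also where a disconnected inner hull would be dropped:
order_segments <- function(segs, tol = 1e-8) {
  path <- segs[[1]]                      # start the boundary with the first segment
  segs <- segs[-1]
  while (length(segs) > 0) {
    last <- path[nrow(path), ]           # current end point of the path
    # find a remaining segment with an endpoint at the current end point
    hit <- which(vapply(segs, function(s)
      any(sqrt((s[, 1] - last[1])^2 + (s[, 2] - last[2])^2) < tol), logical(1)))[1]
    if (is.na(hit)) break                # nothing connects: e.g. the inner hull starts here
    s <- segs[[hit]]
    segs <- segs[-hit]
    d1 <- sqrt(sum((s[1, ] - last)^2))
    path <- rbind(path, if (d1 < tol) s[2, ] else s[1, ])  # append the far endpoint
  }
  path
}
With the ordered coordinates, the area of the ring follows from the shoelace formula:
ring_area <- function(p) abs(sum(p[, 1] * c(p[-1, 2], p[1, 2]) - p[, 2] * c(p[-1, 1], p[1, 1]))) / 2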

Randomly sampling an irregular raster extent in R

Is there a function in the R raster package that is analogous to sampleRandom but which extracts n random pixel values from within an irregularly shaped polygon feature rather than a rectangular extent object?
I know there are alternative approaches, such as generating random points within a polygon and then using the extract() function to get pixel values, but I am wondering if there is a more direct path that I have missed.
Thanks
No, there is not a single function for this.
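The two-step approach mentioned in the question is the usual route. A sketch, assuming a RasterLayer r and a SpatialPolygons object poly in the same CRS:
library(raster)
library(sp)
pts <- spsample(poly, n = 100, type = "random")  # random points inside the polygon
vals <- extract(r, pts)                          # pixel values at those points
Alternatively, masking the raster to the polygon first lets sampleRandom do the work in one call:
vals <- sampleRandom(mask(r, poly), 100)         # samples only non-NA (inside) cells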

Matlab contourf() to plot data on a global map

I have been using Matlab 2011b and contourf/contourfm to plot 2D data on a map of North America. I started from the help page for contourfm on the MathWorks website, and it works great if you use their default data called "geoid" and reference vector "geoidrefvec".
Here is some simple code that works with the preset data:
figure
axesm('MapProjection','lambert','maplo',[-175 -45],'mapla',[10 75]);
framem; gridm; axis off; tightmap
load geoid
%geoidrefvec=[1 90 0];
load 'TECvars.mat'
%contourfm(ITEC, geoidrefvec, -120:20:100, 'LineStyle', 'none');
contourfm(geoid, geoidrefvec, -120:20:100, 'LineStyle', 'none');
coast = load('coast');
geoshow(coast.lat, coast.long, 'Color', 'black')
whitebg('w')
title(sprintf('Total Electron Content Units x 10^1^6 m^-^2'),'Fontsize',14,'Color','black')
%axis([-3 -1 0 1.0]);
contourcbar
The problem arises when I try to use my data. I am quite sure the reference vector determines where the data should be plotted on the globe, but I was not able to find any documentation about how this vector works or how to create one that works with different data.
Here is a .mat file with my data. ITEC is the matrix of values to be plotted. Information about the position of the grid relative to the earth can be found in the cell array called RT, but the basic idea is: ITEC(1,1) refers to Lat = 11, Long = -180, and ITEC(58,39) refers to Lat = 72.5, Long = -53, with evenly spaced data.
Does anyone know how the reference vector defines where the data is placed on the map? Or perhaps there is another way to accomplish this? Thanks in advance!
OK, so I figured it out. I realised that, given that there are only three elements in the vector, the spacing between latitude points must be the same as the spacing between longitude points. That is, the spacing between each horizontal data point must be the same as the spacing between each vertical point, for instance 1 degree.
The first value in the reference vector is the spacing (in degrees) between data points (I think... this works in my case), and the second and third values are the minimum latitude and minimum longitude respectively.
In my case the data was evenly spaced in each direction, but the vertical and horizontal spacings were not the same. I simply interpolated the data to a 1x1 degree grid density and set the first value in the vector to 1.
Hopefully this will help someone with the same problem.
Quick question though: since I answered my own question, do I get the bounty? I'd hate to lose 50 'valuable' reputation points haha
