I have a data frame with coordinates from which I want to create polygons. The most common result is a single polygon like the one in the first image:
But I'm looking for something different, more like this:
As you can see, when points are far enough apart, a separate polygon is created. Is this possible in R? Thanks!
Here are the data (csv).
I think you would find the concept of concave/alpha hulls relevant. There's an R package, alphahull, that may cover your needs.
install.packages("alphahull")
library(alphahull)
fff <- readr::read_csv("data.csv")
dddd <- ahull(as.matrix(fff[, 2:3]), alpha = 0.01) # as.matrix(): ahull() wants plain numeric x/y, not a tibble
plot(dddd)
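The result is quite sensitive to alpha: small values split the points into more, tighter polygons, while large values approach the convex hull. A minimal sketch for comparing a few values, reusing fff from above (the alpha values are illustrative and depend on the scale of your coordinates):
par(mfrow = c(1, 3))
for (a in c(0.01, 0.05, 0.2)) {
  plot(ahull(as.matrix(fff[, 2:3]), alpha = a)) # smaller alpha -> tighter, more fragmented hulls
  title(main = paste("alpha =", a))
}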
And in case you need to convert this output into a spatial data format, please see the following:
https://babichmorrowc.github.io/post/2019-03-18-alpha-hull/
I have a problem getting raster data into the right orientation. The original raster data, when imported into R, looks like this [image].
I tried using the transpose function in raster, but it didn't work. The transposed data looks like this [image].
I used the code below. Any help or advice would be greatly appreciated. Also, is there a way to apply the solution to the entire stack (all rasters have the same extent)? Thank you.
f_PM <- list.files(path=".",
pattern='tif$', full.names=TRUE)
s_PM <- stack(f_PM) ## create a stack of the rasters
plot(s_PM[[1]]) ##check the orientation
PM <- s_PM[[1]] ## pick one raster and try to change the orientation
PM2 <- t(PM)
plot(PM2)
You need to flip, then transpose:
plot(m)
r <- t(flip(m, direction = "y")) # flip vertically first; if the orientation is still wrong, try direction = "x"
plot(r)
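To apply the same fix to the entire stack (as asked), one option is to loop over the layers; a sketch, assuming s_PM is the stack from the question:
# flip/transpose every layer, then rebuild the stack
s_fixed <- stack(lapply(1:nlayers(s_PM), function(i) t(flip(s_PM[[i]], direction = "y"))))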
Note there's https://gis.stackexchange.com where spatial questions like this get asked. (Too much noise here to help with most spatial stuff).
I was assigned the task of clipping a raster from a .nc file to the extent of a .tif file.
edit (from comment):
I want to extract temperature information from the .nc because I need to check the yearly mean temperature of a specific region. To be comparable, the comparison has to occur over exactly the same area. The .nc file is larger than the previously checked area, so I need to "clip" it to the extent of a .tif I have. The .tif data is in 0|1 form; where it is 0 (or where the .tif is smaller than the .nc), the .nc data should be "clipped". In the end I want to keep the .nc data, but at the extent of the .tif, while still retaining its resolution and projection. (The .tif and .nc have different projections and pixel sizes.)
Now ordinarily that wouldn't be a problem, as I could use raster::crop. That doesn't deal with different projections and different pixel sizes/resolutions, though. (I still used it to generate an approximation, but it is not precise enough for the final information, as can be seen in the code snippet below.) The obvious method to generate a more reliable dataset would be to first homogenize the datasets with something like raster::projectRaster or sp::spTransform, but this approach takes too much time, as I have to do it for quite a few .nc files.
I was told the best method would be to generate a normalized matrix from the smaller raster, "clip_frame", and then just multiply it with the "nc_to_clip" raster. Doing so should prevent any errors through map projections or other factors. This makes a lot of sense to me in theory, but I have no idea how to do it in practice. I would be very grateful for any hint, code snippet, or other help.
I have looked at similar problems on StackOverflow (and other sites) like:
convert matrix to raster in R
Convert raster into matrix with R
https://www.researchgate.net/post/Hi_Is_there_a_way_to_multiply_Raster_value_by_Raster_Latitude
As I am not even sure how to frame the question correctly, I might have overlooked an answer to this problem; if so, please point me there!
My (working) code so far, just to give you an idea of how I want to approach the topic (here using the crop function):
#library(ncdf4)
library(raster)
library(rgdal)
library(tidyverse)
nc_list <- list.files(pattern = ".*0\\.nc$") # list of .nc files containing raster and temperature information
#nc_to_clip <- lapply(nc_list, raster, varname="GST") # read in as raster
nc_to_clip <- raster("ABC.nc", varname = "GST") # read a single .nc in as a raster
clip_frame <- raster("XYZ.tif") # read in .tif for further use as frame
mean_temp_from_raster <- function(input_clip_raster, input_clip_frame){ # input_clip_raster = raster to clip, input_clip_frame = raster giving the target extent
r2_coord<-rasterToPoints(input_clip_raster, spatial = TRUE) # step 1 to extract coordinates
map_clip <- crop(input_clip_raster, extent(input_clip_frame)) # use crop to cut the input_clip_raster (this being the function I have to extend on)
temp <- raster::extract(map_clip, r2_coord@coords) # step 2: extract values at the coordinates
temp_C<-temp*0.01-273.15 # convert kelvin*100 to celsius
temp_C<-na.omit(temp_C)
mean(temp_C)
return_list<-list(map_clip, mean(temp_C))
return(return_list)
}
mean_tempC <- lapply(nc_to_clip, mean_temp_from_raster, clip_frame) # note: lapply() needs the list version from the commented line above
Thanks!
PS:
I don't have much experience working with .nc files and/or RasterLayers in R as I used to work with ArcGIS/Python (arcpy) for problems like this, which is not an option right now.
Perhaps something like this?
library(raster)
nc <- raster("ABC.nc", varname = "GST")
clip <- raster("XYZ.tif")
x <- as(extent(clip), "SpatialPolygons")
crs(x) <- crs(clip)
y <- sp::spTransform(x, crs(nc))
clipped <- crop(nc, y)
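If you also need the 0|1 cells of the .tif to act as a mask (the "multiply" idea from the question) rather than only using its extent, one option is to warp the mask onto the grid of the cropped .nc and multiply. A sketch, assuming the .tif really contains only 0/1 values:
mask_on_nc <- projectRaster(clip, clipped, method = "ngb") # nearest neighbour keeps the values at exactly 0/1
clipped_masked <- clipped * mask_on_nc # cells where the mask is 0 become 0
If you would rather have NA than 0 outside the mask, set the zeros to NA first (mask_on_nc[mask_on_nc == 0] <- NA) before multiplying, or use raster::mask().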
I want to calculate the area of a wildfire. I tried this by subtracting the NDVI calculated on a Landsat image before the fire from the NDVI of an image after the fire, and seeing where the NDVI was reduced. However, the NDVI changed not only in the burned areas; there are also many random differences. I used rasterToPolygons to create a large SpatialPolygonsDataFrame containing all areas where NDVI after - NDVI before < 0.
Now I want to remove all the polygons with an area below a certain threshold value. However, I cannot find a way to subset the large SpatialPolygonsDataFrame.
I found an example on how to get a list of the polygons with an area above the threshold (where burned_poly is the large SpatialPolygonsDataFrame):
pols <- lapply(burned_poly@polygons, slot, "Polygons")
pols_areas <- lapply(pols[[2]], function(x) slot(x, "area"))
However, accessing the large SpatialPolygonsDataFrame like this
bp <- burned_poly@polygons[[1]]@Polygons[pols_areas >= 9000]
gives me a list which I am currently unable to coerce into a SpatialPolygonsDataFrame.
Can someone tell me how to do this last step (I have trouble with the Sr argument of the SpatialPolygonsDataFrame function, which I don't understand)? Or maybe there is a different and better approach to extract the fire extent as a polygon?
Alright, I think I have found a way thanks to Orlando's suggestion to use sf.
I transformed my large SpatialPolygonsDataFrame object to an sf object via st_as_sf(), which gave me a multipolygon. This sf MULTIPOLYGON object can be subdivided into single polygons using st_cast(), and the resulting object can be subset like a data.frame.
bp_sf <- st_as_sf(burned_poly) # sp -> sf (one MULTIPOLYGON)
bps_sf <- st_cast(bp_sf, "POLYGON") # split into single polygons
BpSf <- bps_sf[as.numeric(st_area(bps_sf)) >= 10000, ] # keep polygons with area >= 10000 (squared CRS units)
If you are using the simple features (sf) library, you can use functions from the tidyverse. Filtering data is a matter of using the filter() function. Notice that you can convert your objects to sf using st_as_sf(). See https://r-spatial.github.io/sf/reference/st_as_sf.html and How to filter an R simple features collection using sf methods like st_intersects()?
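For example, the area-based subset above could be written with dplyr verbs; a sketch, assuming burned_poly as in the question and an area threshold in squared CRS units:
library(sf)
library(dplyr)
bps_sf <- st_as_sf(burned_poly) %>%
  st_cast("POLYGON") %>% # split the multipolygon into single polygons
  mutate(area = as.numeric(st_area(geometry))) %>% # area in squared CRS units (m^2 for UTM)
  filter(area >= 10000) # drop the small speckle polygons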
I'm trying to find the mean daily temperature for counties in South Dakota from raster grids ('bil' files) found at http://prism.oregonstate.edu/. I am getting county boundaries from the 'maps' package.
library(maps)
library(raster)
sd_counties <- map('county','south dakota')
sd_raster <- raster('file_path')
How do I extract the grid cells within each county? I think I need to turn each county into its own polygon to do this, but how? Then, I should be able to do something like the following. Any help would be greatly appreciated.
values <- extract(raster, list of polygons)
polygon_means <- unlist(lapply(values, FUN=mean))
I'm not familiar with the maps package or the map function, but it looks like it's solely for visualization, rather than geospatial operations.
While there might be a way to convert the map object to actual polygons, here's an easy way using raster's getData function that works:
library(raster)
usa_adm2 <- getData('GADM', country = 'USA', level = 2) # GADM administrative level 2 = US counties
sd_counties <- usa_adm2[grepl('South Dakota',usa_adm2$NAME_1),]
plot(sd_counties)
Now you can extract pixels for each county using extract(r,sd_counties), where r is your desired raster.
Note that, depending on the number of pixels (and layers) you need to extract, this can take some time.
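Putting it together with the extract/mean idea from the question; a sketch, assuming r is the PRISM raster (sd_raster above) and that it shares a CRS with sd_counties (reproject with spTransform otherwise):
vals <- extract(r, sd_counties) # list: one vector of pixel values per county
county_means <- sapply(vals, mean, na.rm = TRUE) # mean temperature per county
names(county_means) <- sd_counties$NAME_2 # GADM stores county names in NAME_2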
I made a choropleth map with a UTM32 shapefile, and now I want to add a second layer with data from another data frame. This second data frame only has lon/lat information.
I want to convert the lon/lat to UTM32 coordinates so I can use them with ggplot2 as another, properly aligned, layer.
The data frame looks like this:
GPS_Lat GPS_Lon Index
51,133 14,683 12.75
First, I use gsub to recode the "," to ".". Then I convert the variables to numeric.
# Then I tried to define the variables as lon/lat coordinates
cord.dec <- SpatialPointsDataFrame(cbind(plot.eeg$GPS_Lon, plot.eeg$GPS_Lat),
                                   data = plot.eeg,
                                   proj4string = CRS("+proj=longlat"))
# Then I tried to convert them to UTM32
cord.UTM <- spTransform(cord.dec, CRS("+init=epsg:4326"))
It didn't work; I just get a copy of my lat/lon variables.
I didn't find a good documentation on how to do this. Perhaps somebody else can help me?
Your CRS string is incorrect: EPSG:4326 is WGS84 longlat (see here); try EPSG:32632 instead.
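A minimal sketch of the corrected call, keeping your longlat source definition and changing only the target CRS:
cord.dec <- SpatialPointsDataFrame(cbind(plot.eeg$GPS_Lon, plot.eeg$GPS_Lat),
                                   data = plot.eeg,
                                   proj4string = CRS("+proj=longlat"))
cord.UTM <- spTransform(cord.dec, CRS("+init=epsg:32632")) # WGS84 / UTM zone 32N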