Extracting point data from a large shapefile in R

I'm having trouble extracting point data from a large shapefile (916.2 MB, 4,618,197 elements - from here: https://earthdata.nasa.gov/data/near-real-time-data/firms/active-fire-data) in R. I'm using readShapeSpatial from maptools to read in the shapefile, which takes a while but eventually works:
worldmap <- readShapeSpatial("shp_file_name")
I then have a data.frame of coordinates that I want to extract data for. However, R is really struggling with this and either loses the connection or freezes, even with just one set of coordinates!
pt <- data.frame(lat = -64, long = -13.5)
pt <- SpatialPoints(pt)
e <- over(pt, worldmap)
Could anyone advise me on a more efficient way of doing this?
Or is it the case that I need to run this script on something more powerful (currently using a Mac mini with a 2.3 GHz processor)?
Many thanks!

By 'point data' do you mean the longitude and latitude coordinates? If that's the case, you can obtain the data underlying the shapefile with:
worldmap@data
You can view this in the same way you would any other data frame, for example:
View(worldmap@data)
You can also access columns in this data frame in the same way you normally would, except you don't need the @data, e.g.:
worldmap$LATITUDE
Finally, it is recommended to use readOGR from the rgdal package rather than maptools::readShapeSpatial as the former reads in the CRS/projection information.
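A minimal sketch of that approach, assuming the shapefile sits in the working directory and reusing the placeholder name from the question:
library(rgdal)

# readOGR reads the CRS/projection stored alongside the shapefile,
# unlike maptools::readShapeSpatial
worldmap <- readOGR(dsn = ".", layer = "shp_file_name")

# The attribute table behaves like an ordinary data frame
head(worldmap@data)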

Related

Intersection and difference of PostGIS data using R

I am an absolute beginner in PostgreSQL and PostGIS (databases in general) but have fairly good working experience in R. I have two multi-polygon data sets of vulnerable areas of India from two different sources - one is around 12 GB in .gdb format (let's call it mygdb) and the other is a shapefile of around 2 GB (let's call it myshp). I want to compare the two sets of vulnerability maps and generate some state-wise measures of fit using the intersection (I), difference (D), and union (U) of the maps.
I would like to make use of PostGIS functionality (via R), as neither R (crashes!) nor QGIS (too slow) is efficient for this. To start with, I have uploaded both data sets into my PostGIS database. I used ogr2ogr in R to upload mygdb, but I am kind of stuck at this point. My idea is to split both polygon files by state and then apply other functions to get I, U and D. From my search, I think I can use sf functions like st_split, st_intersection, st_difference, and st_union. However, even after splitting, I imagine the file sizes will still be too large for R to process, so my questions are:
Is my approach the best way forward?
How can I use sf::st_ functions (e.g. st_split, st_intersection) without importing the data from the database into R?
There are some useful answers to previous relevant questions, like this one for example. But I find it hard to put the steps together from different links and any help with a dummy example would be great. Many thanks in advance.
Maybe you could try loading it as a stars proxy. A proxy object doesn't load the file into memory; the data stay on disk and are only read when actually needed.
https://r-spatial.github.io/stars/articles/stars2.html
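A minimal sketch of that idea (note that proxy objects are mainly aimed at raster data, and the file name here is hypothetical):
library(stars)

# proxy = TRUE records only the metadata; pixel values stay on disk and
# are read lazily, e.g. downsampled when the object is plotted
r <- read_stars("vulnerability_map.tif", proxy = TRUE)
r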
Not an answer to the question sensu stricto, but in response to a request in the comments, here is an example of a PostgreSQL/PostGIS query for ST_Intersection, based on OSM data imported into a PostgreSQL database with osm2pgsql:
WITH
  highway AS (
    SELECT osm_id, way FROM planet_osm_line WHERE osm_id = 332054927),
  dln AS (
    SELECT osm_id, way FROM planet_osm_polygon
    WHERE "boundary" = 'administrative' AND "admin_level" = '4' AND "ref" = 'DS')
SELECT ST_Intersection(dln.way, highway.way) FROM highway, dln;
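Since the question asks about driving PostGIS from R, one possible sketch is to send a query like the one above through a DBI connection and read the geometries back with sf, so the heavy work stays inside PostGIS. The connection details and column quoting are assumptions; adjust them to your database:
library(DBI)
library(RPostgres)
library(sf)

# Connection details are placeholders
con <- dbConnect(RPostgres::Postgres(), dbname = "osm", user = "me")

query <- "
WITH
  highway AS (
    SELECT osm_id, way FROM planet_osm_line WHERE osm_id = 332054927),
  dln AS (
    SELECT osm_id, way FROM planet_osm_polygon
    WHERE boundary = 'administrative' AND admin_level = '4' AND ref = 'DS')
SELECT ST_Intersection(dln.way, highway.way) AS geom FROM highway, dln
"

# st_read() accepts a DBI connection plus a query; only the result
# of the intersection comes back into R
result <- st_read(con, query = query)
dbDisconnect(con)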

Read specific raster files and create a mean raster in R

I am desperate, because my problem seems very simple, but I cannot work out how to manage it.
Aim:
I would like to read 1 to 4 raster files from a folder. The names of the ones that I need are stored in a list as character strings.
After having opened the files, I would like to create a new raster corresponding to the mean of the files.
I can manage it in QGIS, but I need to automate the process, as I have a lot of individuals!
1) It should work with list.files(pattern = ), but as the names are in a list, I do not know how to do it.
Ex: for the first individual, I have to read 2 files named 2018-12-27_sic.tif and 2018-12-27_sic_con.tif
I tried to read them with readGDAL and open.GDAL, but it didn't work.
Thanks a lot for your valuable help.
I would use the stack and calc functions from the raster package. The function stack creates a stack of rasters, all with the same resolution and extent, and makes it easy to do operations like take the mean of every cell. So:
library(raster)
fs <- list.files(pattern='tif$')
rasterstack <- stack(fs)
rastermean <- calc(rasterstack, fun=mean)
Note, if your rasters are not the same resolution, you will have to use the resample function, and if they are not the same extent, you will have to use crop. Typing in ?resample and ?crop in RStudio will show you instructions for using those functions.
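Since the file names in the question are already known (rather than discovered with list.files()), a hedged variant that stacks just those files might look like this; the file names are taken from the question, and the resample step is only needed if the resolutions differ:
library(raster)

# The two files named in the question for the first individual
fs <- c("2018-12-27_sic.tif", "2018-12-27_sic_con.tif")

r1 <- raster(fs[1])
r2 <- raster(fs[2])

# Only needed if the rasters differ in resolution or extent
# r2 <- resample(r2, r1)

rastermean <- calc(stack(r1, r2), fun = mean)
writeRaster(rastermean, "2018-12-27_mean.tif", overwrite = TRUE)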

Handling multiple raster files and executing unit conversions on them: R

I've dug around a lot for an answer to this and wasn't able to find anything, so here I am.
I have a whole bunch of ascii raster files corresponding to air temperature and dew point temperature of a certain area over 744 hourly time steps. (So I have 744 air temp and 744 dew point files corresponding to a 31-day month). The files are only about 45 kB each.
I want to stack them together so I can perform some analyses on them, and I also want to convert their units from K to deg F.
The file names are Tair1.txt, Tair2.txt, ... Tair744.txt and Eair1.txt, Eair2.txt, ... Eair744.txt.
Using the raster package, I can easily load all the files as rasters:
for (i in 1:744) {
  assign(paste0("Tair", i), raster(paste0("Tair", i, ".txt")))
  assign(paste0("Eair", i), raster(paste0("Eair", i, ".txt")))
}
I've tried to use ls() with pattern or glob2rx to define just the raster file names and
then do conversions on them, or to do something similar to join them in a stack, but to no avail. I also tried mget, values(mget(filename)) and things like that to get at the values in a loop.
I know R doesn't handle large datasets very well, but I'm thinking these aren't really that large so there should be something pretty simple?
I would appreciate any help and advice! Thank you.
The raster package's RasterStack is for this:
library(raster)
files <- paste0("Tair",1:744,".txt")
rs <- stack(files)
Why do you have these files in text format though? Who imposed this disaster on you? I suspect your individual layers have insufficient metadata, so try one and see if it's sensible. You can use extent(rs) <- and projection(rs) <- to fix:
r <- raster(files[1])
print(r)
Don't use assign(); that's just creating a mess.
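A hedged sketch of fixing the metadata and then doing the Kelvin-to-Fahrenheit conversion asked about in the question; the extent and CRS values here are placeholders you would replace with the real ones for your grids:
library(raster)

files <- paste0("Tair", 1:744, ".txt")
rs <- stack(files)

# Placeholders: substitute the true extent and CRS of your grids
extent(rs) <- extent(-100, -90, 40, 45)
projection(rs) <- "+proj=longlat +datum=WGS84"

# Kelvin to degrees Fahrenheit, applied to every layer at once
rs_F <- (rs - 273.15) * 9/5 + 32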

Adding extra data column to shapefile using convert.to.shapefile in R's shapefiles package

My goal is very simple, namely to add 1 column of statistical data to a shapefile so that I can use it, for example, to colour a geographical area. The data are a country file from GADM. To this end I usually use the foreign package in R thus:
library(foreign)
newdbf <- read.dbf("CHN_adm1.dbf") #original shape file
incrdata <- read.csv("CHN_test.csv") #.csv file with same region names column + new data column
mergedbf <- merge(newdbf,incrdata)
write.dbf(mergedbf,"CHN_New")
This achieves what I want in almost all circumstances, but one of the pieces of software I am dealing with, external to R, will only recognize .shp files and will not read .dbf (although clearly in a sense that statement is a slight contradiction). Not sure why it won't. Anyhow, essentially it leaves me needing to do the same thing as above, but with a shapefile. I think that, according to the notes on the shapefiles package, the process should run something like this:
library(shapefiles)
shaper <- read.shp("CHN_adm1.shp")
simplified <- convert.to.simple(shaper)
simplified <- change.id(simplified,incrdata$DataNew) #DataNew being new column of data from the .csv
simpleAsList <- by(simplified,simplified[,1],function(x)x)
####This is where I hit problems####
backToShape <- convert.to.shapefile(simplified,
data.frame(index=c("20","30","40","50","60","70","80")),"index",5)
write.shapefile(backToShape,"CHN_TestShape")
I'm afraid that I can't get my head around shapefiles, since I can't unpick them or visualize them in a way I can with dataframes, and so the resultant shape has been screwed up when it goes back to the external charting package.
To be clear: in 'backToShape' I just want to add the column of data and reconstruct the shapefile. It so happens that the data I have appear as a factor, i.e. 20, 30, 40 etc., but the data could just as easily be continuous, and I'm sure I don't need to type in all the possibilities, but it was the only way I could seem to get it to be accepted. Can somebody please put me on the right track, and if I'm missing a simpler way, I'd also be extremely grateful to hear a suggestion. Many thanks in advance.
Stop using the shapefiles package.
Install the sp and rgdal packages.
Read shapefile with:
chn = readOGR(".","CHN_adm1") # first arg is path, second is shapefile name w/o .shp
Now chn is like a data frame. In fact chn@data is a data frame. Do what you like to that data frame but keep it in the same order, and then you can save the updated shapefile with the new data by:
writeOGR(chn, ".", "CHN_new", driver="ESRI Shapefile")
Note you shouldn't really manipulate the chn@data data frame directly; you can work with chn as if it were a data frame in many respects, for example chn$foo gets the column named foo, or chn$popden = chn$pop/chn$area would create a new column of population density if you have population and area columns.
spplot(chn, "popden")
will map by the popden column you just created, and:
head(as.data.frame(chn))
should show you the first few lines of the shapefile data.
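For the original goal of attaching one statistical column from the CSV, a hedged sketch that preserves the polygon order might look like this; the region-name column NAME_1 and the data column DataNew are assumptions based on typical GADM files and the question:
library(rgdal)

chn <- readOGR(".", "CHN_adm1")
incrdata <- read.csv("CHN_test.csv")

# Match on the shared region-name column instead of merge(), so the
# polygon order in chn is never reshuffled (column names are assumed)
chn$DataNew <- incrdata$DataNew[match(chn$NAME_1, incrdata$NAME_1)]

writeOGR(chn, ".", "CHN_new", driver = "ESRI Shapefile")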

Extract certain values out of netCDF

I've a netCDF file with 3 dimensions. The first dimension is longitude and runs from 1-464. The second dimension is latitude and runs from 1-201. The third dimension is time and runs from 1-5479.
Now I want to extract certain values out of the file. I think one can handle it with the start argument. I tried this command:
test = open.ncdf("rr_0.25deg_reg_1980-1994_v8.0.nc")
data = get.var.ncdf(test,start=c(1:464,1:201,1:365))
But somehow it doesn't work. Does anybody have a solution?
Thanks in advance...
It looks like you are using the ncdf package in R. If you can, I recommend using the updated ncdf4 package, which is based on Unidata's netcdf version 4 library (link).
Back to your problem. I use the ncdf4 package, but I think the ncdf package works the same way. When you call the function get.var.ncdf, you also need to explicitly supply the name of the variable that you want to extract. I think you can get the names of the variables using names(test$var).
So you need to do something like this:
# Open the nc file
test = open.ncdf("rr_0.25deg_reg_1980-1994_v8.0.nc")
# Now get the names of the variables in the nc file
names(test$var)
# Get the data from the first variable listed above
# (May not fit in memory)
data = get.var.ncdf(test,varid=names(test$var)[1])
# If you only want a certain range of data.
# The following will probably not fit in memory either
# data = get.var.ncdf(test,varid=names(test$var)[1])[1:464,1:201,1:365]
For your problem, you would need to replace varid=names(test$var)[1] above with varid='VARIABLE_NAME', where VARIABLE_NAME is the variable you want to extract.
Hope that helps.
EDIT:
I installed the ncdf package on my system, and the above code works for me!
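For completeness, a hedged ncdf4 equivalent that also shows the start/count arguments the original question was reaching for; the variable name is simply whatever names(nc$var) reports for your file:
library(ncdf4)

nc <- nc_open("rr_0.25deg_reg_1980-1994_v8.0.nc")

# List the available variables and pick one
names(nc$var)
varname <- names(nc$var)[1]

# Read the full lon/lat grid but only the first 365 time steps
data <- ncvar_get(nc, varid = varname,
                  start = c(1, 1, 1),
                  count = c(464, 201, 365))

nc_close(nc)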
You could also do the extraction of time steps/dates and locations outside of R, before reading the file into R for plotting etc., by using CDO. This has the advantage that you can work directly in coordinate space and specify time steps or dates directly:
e.g.
cdo seldate,20100101,20121031 in.nc out.nc
cdo sellonlatbox,lon1,lon2,lat1,lat2 in.nc out.nc
