Intersection and difference of PostGIS data using R

I am an absolute beginner in PostgreSQL and PostGIS (databases in general) but have fairly good working experience in R. I have two multi-polygon data sets of vulnerable areas of India from two different sources - one is around 12 GB in .gdb format (let's call it mygdb) and the other is a shapefile of around 2 GB (let's call it myshp). I want to compare the two sets of vulnerability maps and generate some state-wise measures of fit using intersection (I), difference (D), and union (U) between the maps.
I would like to make use of PostGIS functionality (via R), as neither R (crashes!) nor QGIS (too slow) is efficient for this. To start with, I have uploaded both data sets into my PostGIS database; I used ogr2ogr in R to upload mygdb. But I am kind of stuck at this point. My idea is to split both polygon files by state and then apply other functions to get I, U and D. From my search, I think I can use sf functions like st_split, st_intersection, st_difference, and st_union. However, even after splitting, I would imagine that the file sizes will still be too large for R to process, so my questions are:
Is my approach the best way forward?
How can I use sf::st_ functions (e.g. st_split, st_intersection) without importing the data from the database into R?
There are some useful answers to previous relevant questions, like this one for example. But I find it hard to put the steps together from different links and any help with a dummy example would be great. Many thanks in advance.

Maybe you could try loading it as a stars proxy. A proxy doesn't load the file into memory; operations are recorded and evaluated directly against the file on disk only when the result is needed.
https://r-spatial.github.io/stars/articles/stars2.html
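For what it's worth, a proxy is requested by passing proxy = TRUE to read_stars(). Note that stars proxies target raster/datacube sources, so this may not carry over directly to the multipolygon layers in the question; the file name below is just a placeholder:
library(stars)
# a proxy only records where the data live; values are read lazily,
# and only for the extent that is actually needed
r <- read_stars("some_large_raster.tif", proxy = TRUE)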

Not an answer to the question sensu stricto; however, in response to a request in the comments, here is an example of a PostgreSQL/PostGIS query using ST_Intersection, based on OSM data imported into a PostgreSQL database with osm2pgsql:
WITH
highway AS (
  SELECT osm_id, way FROM planet_osm_line WHERE osm_id = 332054927),
dln AS (
  SELECT osm_id, way FROM planet_osm_polygon WHERE "boundary" = 'administrative'
  AND "admin_level" = '4' AND "ref" = 'DS')
SELECT ST_Intersection(dln.way, highway.way) FROM highway, dln
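Building on this, the same kind of query can be sent from R without pulling the full tables into memory, which addresses the "via R" part of the question. A minimal sketch, assuming the two layers were loaded into PostGIS as tables named mygdb and myshp, each with a geometry column geom and a state column (adjust these names to whatever ogr2ogr actually created):
library(DBI)
library(RPostgres)
library(sf)
con <- dbConnect(Postgres(), dbname = "mydb")
# intersection for a single state, computed entirely inside PostGIS;
# only the (much smaller) result comes back to R as an sf object
one_state <- st_read(con, query = "
  SELECT ST_Intersection(a.geom, b.geom) AS geom
  FROM   mygdb a, myshp b
  WHERE  a.state = 'Kerala'
    AND  b.state = 'Kerala'
    AND  ST_Intersects(a.geom, b.geom)")
dbDisconnect(con)
The same pattern works with ST_Difference and ST_Union for the D and U measures; looping over state names in R keeps each returned result small enough to handle.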

Related

Is there a method in R of extracting nested relational data tables from a JSON file?

I am currently researching publicly available payer transparency files across multiple insurers, and I am trying to parse and extract JSON files using R and output them into .CSV files for later use with SQL. The file I am currently working with contains nested tables within the highest-level table.
I have attached a link to the specific file I am working with below, along with the code to load it into R's data viewer. I have used R extensively in healthcare analytics classes for statistical analysis and machine learning, though I have never used R for building out data tables.
My goal is to assign a primary key to the highest level of the table, apply primary and foreign keys to the lower tables, extract the lower tables, and join them onto each other later to build out a large CSV or TXT file to load into SQL.
So far, I have used the jsonlite and rjson packages to extract the JSON itself into R, but trying to delist and unnest the tables within the tables remains an enigma to me even after extensive research. I also find myself running into problems with "subscript out of bounds", "unimplemented list" errors, and other issues.
It could also very well be that the JSON is too large for R's packages, or that the JSON is structurally flawed (I wouldn't know if it is; I am not accustomed to JSON). It seems this could be a problem better solved with Python, though I don't know Python very well, and I am optimistic about R given how powerful it is.
Any feedback or answers would be greatly appreciated.
JSON file link: https://individual.carefirst.com/carefirst-resources/machine-readable/Provider_Med_5.json
Code to load JSON:
library(jsonlite)
json2 <- fromJSON('https://individual.carefirst.com/carefirst-resources/machine-readable/Provider_Med_5.json')
The JSON loads correctly, but there are tables embedded within tables. I would hope that these tables could be easily exported and given keys for joining, but I cannot figure out how to denest these tables from within the data.
Some nested tables are out of subscript bounds for the data array. I have never encountered this problem before and am bewildered as to how to go about resolving it.
I cannot figure out how to 'extract' the lower-level tables, let alone open them, due to the subscript boundary error.
I can assign a row ID to the main/highest table in the file, but I cannot figure out how to add sub-row IDs to the lower tables for future joins.
Maybe the jsonStrings package can help. It allows you to manipulate JSON without converting it to an R object. This is the first time I have tried it on such a big JSON string, and it works fine.
Here is how to get the table in the first element of the JSON array:
options(timeout = 300)
download.file(
"https://individual.carefirst.com/carefirst-resources/machine-readable/Provider_Med_5.json",
"jsonFile.json"
)
library(jsonStrings)
# load the JSON file
jstring <- jsonString$new("jsonFile.json")
# extract table "plans" of first element (indexed by 0)
jsonTable <- jstring$at(0, "plans")
# get a dataframe
library(jsonlite)
dat <- fromJSON(jsonTable$asString())
But the dataframe dat has a list column. I don't know how you want to make a CSV with this dataframe.
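If the goal is a set of relational CSVs, one option is to give the parent table a key and unnest the list column into a child table that carries that key as a foreign key. A rough sketch, assuming the list column is called files (the actual column name in dat may differ, and if its elements are already data frames, plain tidyr::unnest() may be enough):
library(dplyr)
library(tidyr)
dat$plan_id <- seq_len(nrow(dat))   # primary key for the parent table
# child table: one row per nested element, keyed back to the parent
child <- dat |>
  select(plan_id, files) |>
  unnest_longer(files) |>
  unnest_wider(files)
write.csv(select(dat, -files), "plans.csv", row.names = FALSE)
write.csv(child, "plan_files.csv", row.names = FALSE)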

Using OpenStreetMapX to create a powergrid graph network

I want to find out a few things about OpenStreetMapX, which from what I understand works well with transportation networks. I am wondering whether it is also possible to use this package along with LightGraphs.jl to create a power grid network. In my case, I have filtered some power grid data using Osmosis (a tool that filters OpenStreetMap data based on tags).
I want to know whether it makes sense to use OpenStreetMapX for this kind of data (a power grid).
using OpenStreetMapX
# Load power data for Germany
deData = get_map_data("D:/PowerGridNetwork/data/germany/de_power_160718.osm")
# Get roadways (which I believe has the meta data for edges)
deData.roadways
I ended up with metadata for power lines as well as roads, and I am wondering how the road data got in there in the first place, since I filtered only the power data.
The next question I have is: does deData.e return an adjacency list? What I am really after is creating a MetaGraph with nodes and edges and their respective properties.
Any ideas?
Thanks in advance

Extracting point data from a large shape file in R

I'm having trouble extracting point data from a large shapefile (916.2 MB, 4,618,197 elements - from here: https://earthdata.nasa.gov/data/near-real-time-data/firms/active-fire-data) in R. I'm using readShapeSpatial from maptools to read in the shapefile, which takes a while but eventually works:
worldmap <- readShapeSpatial("shp_file_name")
I then have a data.frame of coordinates that I want to extract data for. However, R is really struggling with this and either loses connection or freezes, even with just one set of coordinates!
pt <-data.frame(lat=-64,long=-13.5)
pt<-SpatialPoints(pt)
e<-over(pt,worldmap)
Could anyone advise me on a more efficient way of doing this?
Or is it the case that I need to run this script on something more powerful (currently using a mac mini with 2.3 GHz processor)?
Many thanks!
By 'point data' do you mean the longitude and latitude coordinates? If that's the case, you can obtain the data underlying the shapefile with:
worldmap@data
You can view this in the same way you would any other data frame, for example:
View(worldmap@data)
You can also access columns in this data frame in the same way you normally would, except you don't need the @data, e.g.:
worldmap$LATITUDE
Finally, it is recommended to use readOGR from the rgdal package rather than maptools::readShapeSpatial as the former reads in the CRS/projection information.
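A rough sketch of that suggestion (the folder and layer names are placeholders; adjust them to the actual shapefile):
library(rgdal)
library(sp)
# readOGR keeps the CRS/projection information, unlike readShapeSpatial
worldmap <- readOGR(dsn = "path/to/shapefile_folder", layer = "shp_file_name")
# SpatialPoints expects x (longitude) first, then y (latitude),
# and the points must share the polygons' CRS for over() to work
pt <- SpatialPoints(data.frame(long = -13.5, lat = -64),
                    proj4string = CRS(proj4string(worldmap)))
e <- over(pt, worldmap)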

How do I divide a very large OpenStreetMap file into smaller files in R without running out of memory?

I am currently looking to have map files that are no larger than the sizes of municipalities in Mexico (at largest, about 3 degrees longitude/latitude across). However, I have been running into memory issues (at the very least) when trying to do so. The file size of the OSM XML object is 1.9 GB, for reference.
library(osmar)
get.map.for.municipality <- function(province, municipality) {
  base.map.filename = 'OpenStreetMap/mexico-latest.osm'
  # bounds.list is a list that contains the boundaries
  bounds = bounds.list[[paste0(province, '*', municipality)]]
  my.bbox = corner_bbox(bounds[1], bounds[2], bounds[3], bounds[4])
  my.map.source = osmsource_file(base.map.filename)
  my.map = get_osm(my.bbox, my.map.source)
  return(my.map)
}
I am running this inside a loop, but it can't even get past the first iteration. When I tried running it, my computer froze and I was only able to take a screenshot with my phone. Memory usage climbed steadily over a few minutes, then shot up very quickly, and I was unable to react before the machine froze.
What is a better way of doing this? I expect to have to run this loop about 100-150 times, so any way that is more efficient in terms of memory would help. I would prefer not to download smaller files from an API service.
If necessary, I would be willing to use another programming language (preferably Python or C++), but I prefer to keep this in R.
I'd suggest not using R for that.
There are better tools for this job: there are many ways to split and filter OSM data from the command line or using a DBMS.
Here are some alternatives, taken from the OSM Wiki (http://wiki.openstreetmap.org):
Filter your osm files using osmfilter: "osmfilter is used to filter OpenStreetMap data files for specific tags. You can define different kinds of filters to get OSM objects (i.e. nodes, ways, relations), including their dependent objects, e.g. nodes of ways, ways of relations, relations of other relations."
Clipping based on Polygons or borders using osmconvert: http://wiki.openstreetmap.org/wiki/Osmconvert#Applying_Geographical_Borders
You can write bash scripts for both osmfilter and osmconvert, but I'd recommend using a DBMS: import the data into PostGIS using osm2pgsql and connect your R code to it with any PostgreSQL driver. This will optimize your read/write operations.
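A minimal sketch of that last step (the database name and bounding box are placeholders; osm2pgsql loads data in EPSG:3857 by default, hence the transform):
library(RPostgres)
library(sf)
con <- dbConnect(Postgres(), dbname = "osm_mexico")
# fetch only the ways inside one municipality's bounding box;
# the filtering happens in PostGIS, so R never reads the full 1.9 GB file
muni <- st_read(con, query = "
  SELECT osm_id, highway, way
  FROM planet_osm_line
  WHERE way && ST_Transform(ST_MakeEnvelope(-99.4, 19.0, -98.9, 19.6, 4326), 3857)")
dbDisconnect(con)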

ArcMap Network Analyst iteration over multiple files using ModelBuilder

I have 10+ files that I want to add to ArcMap and then run some spatial analysis on in an automated fashion. The files are in CSV format, located in one folder, and named in order from "TTS11_path_points_1" to "TTS11_path_points_13". The steps are as follows:
Make XY event layer
Export the XY table to a point shapefile using the feature class to feature class tool
Project the shapefiles
Snap the points to another line shapefile
Make a Route layer - network analyst
Add locations to stops using the output of step 4
Solve to get routes between points based on a RouteName field
I tried to attach a snapshot of the model builder to show the steps visually but I don't have enough points to do so.
I have two problems:
How do I iterate this procedure over the number of files that I have?
How do I make sure that the output has a different name each time, so it doesn't overwrite the one from the previous iteration?
Your help is much appreciated.
Once you're satisfied with the way the model works on a single input CSV, you can batch the operation 10+ times, manually adjusting the input/output files. This easily addresses your second problem, since you're controlling the output name.
You can use an iterator in your ModelBuilder model -- specifically, Iterate Files. The iterator would be the first input to the model, and has two outputs: File (which you link to other tools), and Name. The latter is a variable which you can use in other tools to control their output -- for example, you can set the final output to C:\temp\out%Name% instead of just C:\temp\output. This can be a little trickier, but once it's in place it tends to work well.
For future reference, gis.stackexchange.com is likely to get you a faster response.
