I want to find out a few things about OpenStreetMapX, which, from what I understand, works well with transportation networks. I am wondering whether it is also possible to use this package together with LightGraphs.jl to create a power grid network. In my case, I have filtered some power grid data using Osmosis (a tool that filters OpenStreetMap data based on tags).
I want to know whether it makes sense to use OpenStreetMapX for this kind of data (a power grid).
using OpenStreetMapX
# Load power data for Germany
deData = get_map_data("D:/PowerGridNetwork/data/germany/de_power_160718.osm")
# Get roadways (which I believe hold the metadata for edges)
deData.roadways
I ended up with metadata for power infrastructure as well as roads, and I am wondering how the roads got there in the first place, since I filtered for power data only.
The next question I have is: does deData.e return an adjacency list? What I am really after is creating a MetaGraph whose nodes and edges carry their respective properties.
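For illustration, this is roughly what I am hoping to end up with (a sketch only, untested; I am assuming deData.g is a LightGraphs graph, deData.v maps OSM node ids to vertex indices, deData.e is an edge list of node-id pairs, and deData.w is a sparse weight matrix indexed by vertex):
using OpenStreetMapX, LightGraphs, MetaGraphs
deData = get_map_data("D:/PowerGridNetwork/data/germany/de_power_160718.osm")
# Wrap the underlying directed graph so properties can be attached
mg = MetaDiGraph(deData.g)
# Attach the OSM node id to every vertex (deData.v: OSM id => vertex index)
for (osm_id, v) in deData.v
    set_prop!(mg, v, :osm_id, osm_id)
end
# Copy edge weights from the sparse matrix deData.w onto the edges
for (s, d) in deData.e
    set_prop!(mg, deData.v[s], deData.v[d], :weight, deData.w[deData.v[s], deData.v[d]])
end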
Any ideas?
Thanks in advance
I am an absolute beginner in PostgreSQL and PostGIS (databases in general) but have a fairly good working experience in R. I have two multi-polygon data sets of vulnerable areas of India from two different sources - one is around 12gb and it's in .gdb format (let's call it mygdb) and the other is a shapefile around 2gb (let's call it myshp). I want to compare the two sets of vulnerability maps and generate some state-wise measures of fit using intersection (I), difference (D), and union (U) between the maps.
I would like to make use of PostGIS functionality (via R), as neither R (crashes!) nor QGIS (too slow) handles this efficiently. To start with, I have uploaded both data sets into my PostGIS database; I used ogr2ogr in R to upload mygdb. But I am stuck at this point. My idea is to split both polygon files by state and then apply other functions to get I, U, and D. From my search, I think I can use sf functions like st_split, st_intersection, st_difference, and st_union. However, even after splitting, I imagine the file sizes will still be too large for R to process, so my questions are:
Is my approach the best way forward?
How can I use sf::st_ functions (e.g. st_split, st_intersection) without importing the data from the database into R?
There are some useful answers to previous relevant questions, like this one for example. But I find it hard to put the steps together from different links and any help with a dummy example would be great. Many thanks in advance.
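To make the question concrete, this is the kind of thing I imagine, with the heavy lifting done inside PostGIS and only the result pulled into R (a sketch only; the connection details, table names, column names, and the state value are placeholders):
library(DBI)
library(RPostgres)
library(sf)
con <- dbConnect(RPostgres::Postgres(), dbname = "gisdb")  # placeholder connection
# Run the intersection inside PostGIS and fetch only the result
qry <- "SELECT a.state, ST_Intersection(a.geom, b.geom) AS geom
        FROM mygdb a
        JOIN myshp b ON ST_Intersects(a.geom, b.geom)
        WHERE a.state = 'Kerala'"   # 'Kerala' is a placeholder
res <- st_read(con, query = qry)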
Maybe you could try loading it as a stars proxy. A proxy doesn't load the file into memory; operations are recorded lazily and evaluated against the data on disk.
https://r-spatial.github.io/stars/articles/stars2.html
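A minimal sketch of the proxy pattern (note that stars proxies are mainly aimed at raster data; the file name is a placeholder):
library(stars)
r <- read_stars("big_file.tif", proxy = TRUE)  # nothing is read into memory yet
plot(r)  # fetches only what is needed at screen resolution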
Not an answer to the question sensu stricto, but in response to a request in the comments, here is an example PostgreSQL/PostGIS query using ST_Intersection, based on OSM data imported into a PostgreSQL database with osm2pgsql:
WITH
  highway AS (
    SELECT osm_id, way FROM planet_osm_line WHERE osm_id = 332054927),
  dln AS (
    SELECT osm_id, way FROM planet_osm_polygon
    WHERE "boundary" = 'administrative' AND "admin_level" = '4' AND "ref" = 'DS')
SELECT ST_Intersection(dln.way, highway.way) FROM highway, dln;
I’m implementing a chatbot using TensorFlow’s seq2seq model [1], feeding it data from the Ubuntu Dialogue Corpus. I want to compare an RNN using standard LSTM cells with one using the Grid LSTM cells described in Kalchbrenner et al. [2].
I’m trying to implement the Grid LSTM cell in the translation model described in section 4.4 [2], but I’m struggling with the bidirectional part.
I have tried using BidirectionalGridLSTMCell, but I’m not sure what is meant by num_frequency_blocks; the paper does not mention it. Does anyone know? The API docs say:
num_frequecy_blocks: [required] A list of frequency blocks needed to cover the whole input feature splitting defined by start_freqindex_list and end_freqindex_list.
Further, I have tried to create my own cell. First I do the forward pass over the inputs, then I reverse the inputs and do the backward pass. But when I concatenate the results, the shape changes. E.g. when I run the network with a batch size of 32, I get this error:
ValueError: Dimensions must be equal, but are 64 and 32
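In outline, the pattern is something like this (a simplified sketch with placeholder shapes, not the actual model code; I suspect the axis argument of the concatenation is where things go wrong):
import tensorflow as tf
# Placeholder shapes: [batch, time, features]
fw_out = tf.placeholder(tf.float32, [32, 10, 128])  # forward pass output
bw_out = tf.placeholder(tf.float32, [32, 10, 128])  # backward pass output, re-reversed
# Concatenating on axis 0 doubles the batch dimension: [64, 10, 128]
stacked = tf.concat([fw_out, bw_out], axis=0)
# Concatenating on the feature axis keeps batch and time intact: [32, 10, 256]
merged = tf.concat([fw_out, bw_out], axis=-1)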
How can I concatenate the results without changing the shape? Is that even possible?
Does anyone have any other tips, on how I can implement Bidirectional Grid LSTM?
[1] https://www.tensorflow.org/tutorials/seq2seq/
[2] https://arxiv.org/abs/1507.01526
TensorFlow has bidirectional LSTMs built in: https://github.com/aymericdamien/TensorFlow-Examples/blob/master/notebooks/3_NeuralNetworks/bidirectional_rnn.ipynb
Here's a tutorial on using bidirectional LSTMs for intent matching: https://blog.themusio.com/2016/07/18/musios-intent-classifier-2/
You're missing your second [2] reference link.
Is this a helpful baseline, even if they don't provide grids?
May I ask what you are using it for?
I would like to provide more than one graph as input to the equiv.clust function in the sna package. For example:
library(ergm)
library(sna)
data(florentine)
flobusiness    # first relation
flomarriage    # second relation
eq <- equiv.clust(flobusiness)
b <- blockmodel(flobusiness, eq, h = 10)
plot(b)
So far so good. I get the output I expect. However, how do I include both relations in the equiv.clust and blockmodel commands?
According to the Usage section of the documentation:
equiv.clust(dat, g=NULL, equiv.dist=NULL, equiv.fun="sedist",
method="hamming", mode="digraph", diag=FALSE,
cluster.method="complete", glabels=NULL, plabels=NULL, ...)
where dat is described as "one or more graphs."
Specifically, I would like to know how to provide two or more graphs as the dat argument. Thanks a ton!
Try entering the graphs as a list, as in:
equiv.clust(list(flobusiness,flomarriage))
I'm not sure whether that will work, but in general I think you need to use lists to analyze multiple graphs. In this case, it depends on what you want. If you want two separate blockmodels, you could just loop, or use
lapply(list(flobusiness, flomarriage), equiv.clust)
and then a slightly more complicated statement for the blockmodel. If you want a blockmodel of the combined network, you could just add the two networks together.
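Alternatively, equiv.clust and blockmodel can take both relations at once. Something like this might work (an untested sketch; sna generally accepts a list of graphs or an adjacency-matrix stack as dat):
# Stack both relations; equiv.clust and blockmodel accept a list of graphs
gstack <- list(flobusiness, flomarriage)
# Equivalence clustering across both relations at once
eq2 <- equiv.clust(gstack)
# Blockmodel both relations with the same positional assignment
b2 <- blockmodel(gstack, eq2, h = 10)
plot(b2)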
I am currently looking to have map files that are no larger than the sizes of municipalities in Mexico (at largest, about 3 degrees longitude/latitude across). However, I have been running into memory issues (at the very least) when trying to do so. The file size of the OSM XML object is 1.9 GB, for reference.
library(osmar)
get.map.for.municipality <- function(province, municipality) {
  base.map.filename <- 'OpenStreetMap/mexico-latest.osm'
  # bounds.list is a list that contains the boundaries
  bounds <- bounds.list[[paste0(province, '*', municipality)]]
  my.bbox <- corner_bbox(bounds[1], bounds[2], bounds[3], bounds[4])
  my.map.source <- osmsource_file(base.map.filename)
  my.map <- get_osm(my.bbox, my.map.source)
  return(my.map)
}
I am running this inside a loop, but it can't even get past the first iteration. When I tried running it, my computer froze, and I was only able to take a screenshot with my phone. Memory usage climbed steadily over a few minutes, then shot up very quickly, and I was unable to react before the machine froze.
What is a better way of doing this? I expect to have to run this loop about 100-150 times, so any way that is more efficient in terms of memory would help. I would prefer not to download smaller files from an API service.
If necessary, I would be willing to use another programming language (preferably Python or C++), but I prefer to keep this in R.
I'd suggest not using R for that.
There are better tools for the job: there are many ways to split and filter OSM data from the command line, or by using a DBMS.
Here are some alternatives extracted from the OSM Wiki http://wiki.openstreetmap.org:
Filter your osm files using osmfilter: "osmfilter is used to filter OpenStreetMap data files for specific tags. You can define different kinds of filters to get OSM objects (i.e. nodes, ways, relations), including their dependent objects, e.g. nodes of ways, ways of relations, relations of other relations."
Clipping based on Polygons or borders using osmconvert: http://wiki.openstreetmap.org/wiki/Osmconvert#Applying_Geographical_Borders
You can write bash scripts for both osmfilter and osmconvert, but I'd recommend using a DBMS: just import the data into PostGIS using osm2pgsql and connect your R code with any PostgreSQL driver. This will optimize your read/write operations.
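For instance, something along these lines (a sketch only; the bounding box, database name, and query are placeholders):
# Clip the country extract to one municipality's bounding box (minlon,minlat,maxlon,maxlat)
osmconvert mexico-latest.osm -b=-99.4,19.0,-98.9,19.6 -o=municipio.osm
# Import the clipped extract into a PostGIS-enabled database
osm2pgsql -d osm_mx --hstore municipio.osm
Then query it from R:
library(RPostgreSQL)
con <- dbConnect(PostgreSQL(), dbname = "osm_mx")
roads <- dbGetQuery(con, "SELECT osm_id, highway FROM planet_osm_line WHERE highway IS NOT NULL")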
I have 10+ files that I want to add to ArcMap and then run some spatial analysis on in an automated fashion. The files are in CSV format, located in one folder, and named in order from "TTS11_path_points_1" to "TTS11_path_points_13". The steps are as follows:
Make XY event layer
Export the XY table to a point shapefile using the Feature Class to Feature Class tool
Project the shapefiles
Snap the points to another line shapefile
Make a Route layer - network analyst
Add locations to stops using the output of step 4
Solve to get routes between points based on a RouteName field
I tried to attach a snapshot of the model builder to show the steps visually but I don't have enough points to do so.
I have two problems:
How do I iterate this procedure over the number of files that I have?
How do I make sure the output gets a different name on each iteration, so it doesn't overwrite the one from the previous iteration?
Your help is much appreciated.
Once you're satisfied with the way the model works on a single input CSV, you can batch the operation 10+ times, manually adjusting the input/output files. This easily addresses your second problem, since you're controlling the output name.
You can use an iterator in your ModelBuilder model -- specifically, Iterate Files. The iterator would be the first input to the model, and has two outputs: File (which you link to other tools), and Name. The latter is a variable which you can use in other tools to control their output -- for example, you can set the final output to C:\temp\out%Name% instead of just C:\temp\output. This can be a little trickier, but once it's in place it tends to work well.
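As a rough illustration of the scripted equivalent (a sketch only; the folder, workspace, field names, and spatial reference are assumptions):
import arcpy
import os

in_folder = r"C:\data\tts"          # assumed folder holding the CSVs
out_gdb = r"C:\data\outputs.gdb"    # assumed output workspace
sr = arcpy.SpatialReference(4326)   # assumed input coordinate system

for i in range(1, 14):              # TTS11_path_points_1 .. TTS11_path_points_13
    csv_path = os.path.join(in_folder, "TTS11_path_points_{}.csv".format(i))
    layer = "points_{}".format(i)
    # Step 1: make an XY event layer ("X"/"Y" field names are assumptions)
    arcpy.MakeXYEventLayer_management(csv_path, "X", "Y", layer, sr)
    # Step 2: export with a per-iteration name so nothing gets overwritten
    arcpy.FeatureClassToFeatureClass_conversion(layer, out_gdb, "TTS11_points_{}".format(i))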
For future reference, gis.stackexchange.com is likely to get you a faster response.