I have a shapefile of India with states (probably as polygons). I want to divide each polygon into equally sized cells (a "raster" representation) and populate (actually color) each cell with a value calculated by an algorithm specific to that cell's location. This should be done programmatically for every cell in the polygon, so that in the end the shapefile looks like a thematic raster image of whatever my algorithm calculates. I am not starting from any image, because the information is a calculated value from an algorithm, not something derived from satellite imagery.
In other words, it is not a vegetation or elevation thematic but something like a population distribution, where each cell's value (color) represents the mean population there, showing the overall distribution at a large scale.
Can anyone help me do this with an open-source tool, either as an application or programmatically through an API like SharpMap?
The GDAL utilities and scripting would be my choice.
http://www.gdal.org/index.html
I don't quite understand how you are going to decide cell values based on position, but have a look at the following utilities:
http://www.gdal.org/gdal_grid.html
http://www.gdal.org/gdal_rasterize.html
If you can't get the required output from the command line, then the GDAL functions can be scripted (C++ or Python have the most examples).
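If you go the scripting route, the workflow might look roughly like the following in Python with the GDAL/OGR bindings: rasterize the polygons into a grid, then fill each inside-cell with your algorithm's value for that cell's centre. This is only a sketch; "states.shp", the cell size, and cell_value() are my assumptions, with cell_value() standing in for your location-specific algorithm.

from osgeo import gdal, ogr

def cell_value(lon, lat):
    # Hypothetical placeholder for your location-specific algorithm.
    return (lon + lat) % 10

shp = ogr.Open("states.shp")
layer = shp.GetLayer()
xmin, xmax, ymin, ymax = layer.GetExtent()

cell = 0.01  # cell size in layer units (assumed degrees)
cols = int((xmax - xmin) / cell)
rows = int((ymax - ymin) / cell)

ds = gdal.GetDriverByName("GTiff").Create("out.tif", cols, rows, 1, gdal.GDT_Float32)
ds.SetGeoTransform((xmin, cell, 0, ymax, 0, -cell))

# Burn 1 into every cell covered by a polygon, so we know which cells to fill.
gdal.RasterizeLayer(ds, [1], layer, burn_values=[1])

band = ds.GetRasterBand(1)
data = band.ReadAsArray()
for r in range(rows):
    for c in range(cols):
        if data[r, c] == 1:  # cell lies inside a state polygon
            lon = xmin + (c + 0.5) * cell
            lat = ymax - (r + 0.5) * cell
            data[r, c] = cell_value(lon, lat)
band.WriteArray(data)
ds.FlushCache()

The result is a GeoTIFF you can style as a thematic raster in QGIS or similar.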
One easy way to do this would be to use Mapnik and its Python binding. See their site for a tutorial on the basic usage and their XML configuration schema.
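Rendering with the Python binding is only a few lines once you have a style. A minimal sketch, assuming you have written a style.xml describing your layers:

import mapnik

m = mapnik.Map(800, 600)
mapnik.load_map(m, "style.xml")  # the XML defines data sources and symbolizers
m.zoom_all()                     # zoom to the combined extent of all layers
mapnik.render_to_file(m, "output.png", "png")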
I've done this using MapScript (from UMN's Mapserver).
http://mapserver.org/mapscript/index.html#mapscript
It's fairly straightforward and has many bindings (PHP, Ruby, Python, .NET, and so forth), and the API is the same across all of them. When I last used it the bindings were of varying quality, and I'm not up to date on their current state.
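For reference, the basic render loop in the Python binding looks like this; a minimal sketch, assuming a mapfile named my.map:

import mapscript

map_obj = mapscript.mapObj("my.map")  # the mapfile describes layers, classes and styles
img = map_obj.draw()                  # render the whole map to an imageObj
img.save("output.png")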
I've done this using python and GDAL by following the instructions outlined here:
http://proj.lis.ic.unicamp.br/webmaps/docs/calc_ndvi/
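The gist of that tutorial is straightforward band math with GDAL in Python. A minimal sketch, assuming a two-band input where band 1 is red and band 2 is near-infrared:

import numpy as np
from osgeo import gdal

src = gdal.Open("input.tif")
red = src.GetRasterBand(1).ReadAsArray().astype(np.float32)
nir = src.GetRasterBand(2).ReadAsArray().astype(np.float32)

ndvi = (nir - red) / np.maximum(nir + red, 1e-6)  # guard against division by zero

out = gdal.GetDriverByName("GTiff").Create("ndvi.tif", src.RasterXSize, src.RasterYSize, 1, gdal.GDT_Float32)
out.SetGeoTransform(src.GetGeoTransform())
out.SetProjection(src.GetProjection())
out.GetRasterBand(1).WriteArray(ndvi)
out.FlushCache()

For the asker's use case, the NDVI formula would simply be replaced by the location-specific calculation.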
I hope this helps.
P.S. The site is in Portuguese, so if you don't speak the language you may find Google Translate very useful. Good luck.
I am trying to render an MVT (Mapbox Vector Tile) containing OSM data using Mapbox GL JS, but I keep getting ugly polygons that look as if they were simplified (like in the Simplification section of this documentation!). I don't want those polygons to be simplified; at the least, I would like the resolution to be as close to reality as possible.
First, I checked whether it could come from the OSM data, but the OSM data is good.
So I looked into the tile server, and more precisely into the MVT encoder (code). The extent value, which controls how precisely coordinates are encoded in the vector tile, is 4096. That is a very good value, so I don't understand why I don't get proper polygons.
I suppose this issue comes from Mapbox GL JS, which might perform an additional simplification.
What extent value could I use in the encoder?
Is there a way to configure the resolution with Mapbox GL JS?
I would appreciate some help! Thanks!
Mapbox GL JS does not do any additional simplification on vector tile sources. If you are seeing simplified geometries, this is most likely done during vector tile generation.
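One way to confirm where the simplification happens is to decode a raw tile and count vertices before the data ever reaches the browser. A sketch using the Python mapbox-vector-tile package (it assumes you can fetch a tile directly; the URL is a placeholder):

import requests
import mapbox_vector_tile

resp = requests.get("https://example.com/tiles/14/8185/5449.mvt")
tile = mapbox_vector_tile.decode(resp.content)

for layer_name, layer in tile.items():
    print(layer_name, "extent:", layer["extent"])
    for feature in layer["features"]:
        geom = feature["geometry"]
        if geom["type"] == "Polygon":
            # Few vertices here means the simplification happened server-side.
            print("  outer ring vertices:", len(geom["coordinates"][0]))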
I was seeing the same thing. I got better results when, rather than importing the polygons as GeoJSON as I had been doing, I converted the file to a shapefile, zipped it, and imported that into Mapbox. There was then no simplification of the shape.
Okay, so I have an OBJ file which I read into a PCLPointCloud2. Now I want to feed it into a k-d tree, which does not take PCLPointCloud2 as input. I want to query whether any given point lies on the surface of my OBJ file.
I am finding the documentation hard to understand, so how can this be done?
Also, kindly point me to a good, easily interpretable reference. And what is "PointT", by the way? Is it a custom type that we define ourselves? Please elaborate.
Look at the code in the provided tool pcl_mesh_sampling (in the PCL code directory under tools/mesh_sampling.cpp). It is relatively simple: it loads a model from PLY or OBJ, then samples random points from each triangle. The final point cloud then goes through a voxel-grid filter to make the points relatively uniform. Alternatively, you can just run the pcl_mesh_sampling program on your OBJ file to get an output PCD, which you can visualise with pcl_viewer before loading the PCD file into your own code.
Once you have the final point cloud, you can build and use a KD-Tree as per http://pointclouds.org/documentation/tutorials/kdtree_search.php
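To make the KD-tree step concrete without the C++ template machinery, here is the same sample-then-query idea sketched in Python with scipy (not PCL; the point array and the distance tolerance are made up):

import numpy as np
from scipy.spatial import cKDTree

# Pretend these were sampled from the mesh surface, as pcl_mesh_sampling does.
surface_points = np.random.rand(10000, 3)

tree = cKDTree(surface_points)

query = np.array([0.5, 0.5, 0.5])
dist, idx = tree.query(query)  # nearest sampled surface point and its distance
print("nearest point:", surface_points[idx], "distance:", dist)

# Crude "is the query on the surface?" test: threshold the distance.
on_surface = dist < 0.01  # tolerance is an assumption

In the PCL version, pcl::KdTreeFLANN<pcl::PointXYZ> plays the role of cKDTree.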
PointT is the template argument. The point cloud library can handle a variety of point types, from simple PointXYZ (having just x,y,z) to more complicated points like PointXYZRGBNormal (having x,y,z,normal_x,normal_y,normal_z, curvature, r, g, and b channels). Each algorithm is templated on the point type that you want to use. It would probably be easier if you used PointXYZ with your OBJ file, so use pcl::PointXYZ for all your template arguments. For more on templates see http://www.tutorialspoint.com/cplusplus/cpp_templates.htm and http://pointclouds.org/documentation/tutorials/adding_custom_ptype.php.
Update (reply to latest comment)
Added here because this reply is too long for a comment.
I think I see what you are getting at. You sample points from the mesh and build a KD-tree of the object surface, and for each sampled point you keep track of which faces are near it (probably all faces adjacent to the face the point was sampled from; just one face is definitely insufficient). Then, given a query point, you find its nearest neighbour in the KD-tree and check whether the query point is on the "outside" or the inside of the full list of nearby faces associated with that neighbour. If it is on the "inside" of all of them, perhaps it is an interior point, but I cannot guarantee that this is true. That is my thinking on the question at the moment, though I do wonder if you really want a mesh-based approach. By the way, if you break your mesh up into convex parts, you get nice guarantees when processing each convex part.
Problem: I have a binary image with traversable and blocked cells. Points of interest (POIs) are set on this map. My goal is to create a graph from these POIs that respects obstacles (see images) and represents all possible, truly distinct paths. Two paths are truly distinct if they cannot be joined into one path; e.g., if the outside of the building in picture 1 were accessible, a path around the building could not be merged with one through the building.
Researched: I have looked at maze solvers and various shortest-path algorithms (e.g. A*, Theta*, Phi*), and while they would be useful for this problem, they only search for a path between two points and do not consider already established routes.
Best guess: I am considering using Phi* to search for all possible routes and merging them afterwards using magic (ideas?), but this will not give me truly distinct alternatives.
Can someone help?
P.S.: I'm using C++ and am not really eager to do this by myself, so if there is a library which already does this... :)
I found (and decided to use) a parallel thinning algorithm (Zhang-Suen for now) to create an image skeleton. This effectively assumes that the geometry shapes the common routes, which I think is fine.
Using the Rutovitz crossing number I can extract bifurcations and crossings from the resulting skeleton (a sketch of that check is below). Then I'll determine the shortest line of sight from my points of interest to the extracted crossings (using Bresenham's algorithm) to connect them to the graph.
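The crossing number is just the number of 0/1 transitions around a pixel's 8-neighbourhood. A minimal sketch in Python/numpy; the >= 6 threshold for branch points follows the usual convention (2 = endpoint, 4 = ordinary ridge pixel, 6 or more = bifurcation/crossing):

import numpy as np

def crossing_number(skel, r, c):
    # The 8 neighbours in circular order: N, NE, E, SE, S, SW, W, NW.
    offs = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
    ring = [int(skel[r + dr, c + dc]) for dr, dc in offs]
    ring.append(ring[0])  # close the cycle
    return sum(abs(ring[i + 1] - ring[i]) for i in range(8))

def is_branch_point(skel, r, c):
    return skel[r, c] == 1 and crossing_number(skel, r, c) >= 6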
I hope this will help someone along the road :)
Basically, I'm looking for something like this awesome research project: Gmap, which was referenced in this related SO question.
It's a rather novel data visualization that combines a network graph with an imaginary set of regions that looks like a map. The map-ification helps humans comprehend the enormous data set better.
Cool, huh? GMap doesn't appear to be open source, though I plan to contact the authors.
I already know how to create a network graph with a force-directed layout (currently using Prefuse/Flare), so an answer could be a way to layer a mapping algorithm on top of an existing graph. I'm also not concerned about the client-side at all right now - this would be a backend process, and I am flexible about technology stack and data output at this stage.
There's also this paper that describes the algorithm backing GMap. If you have heard of Voronoi diagrams (which rock, but make my head hurt), this paper is for you. I quit after Calc 1, though, so I'm hoping to avoid remembering what sigmas and epsilons are.
As a start, could you do a simple closest-point sort of algorithm? It would look something like this: you have your force-directed layout and have computed some sort of bounding box. Now you want to render it. Align your bounding box to the origin, and then, as you calculate the color of each pixel, find its closest point. This should generate some semblance of regions and should be quite simple to try out. Of course, it isn't going to be as pretty as GMap, but maybe it's a start? The runtime would be awful, but... I don't know about you, but computing boundary lines directly sounds a lot harder to me.
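A sketch of that idea in Python/numpy, with a KD-tree to keep the per-pixel nearest-point lookup from being hopeless (the node positions, colours, and 640x480 canvas are all made up):

import numpy as np
from scipy.spatial import cKDTree

nodes = np.random.rand(50, 2) * [640, 480]        # node positions from the force-directed layout
colors = np.random.randint(0, 256, size=(50, 3))  # one colour per node (or per cluster)

tree = cKDTree(nodes)
ys, xs = np.mgrid[0:480, 0:640]
pixels = np.column_stack([xs.ravel(), ys.ravel()])

_, nearest = tree.query(pixels)  # index of the closest node for every pixel
image = colors[nearest].reshape(480, 640, 3).astype(np.uint8)

Note that colouring every pixel by its nearest node is exactly a discrete Voronoi diagram, which is also the starting point of the GMap paper mentioned above.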
I am using PostgreSQL with the GIS extension to store map data, together with OpenLayers, GeoServer, etc. Given a polygon, e.g. of a neighborhood, I need to find all lat/long points stored in some table (e.g. traffic lights, restaurants) that fall within the polygon. Alternatively, given a set of polygons, I would like to find the set of points within each polygon (like a GROUP BY query, rather than iterating over each polygon).
Are these functions something I need to program, or is the functionality available (as extended SQL)? Please elaborate.
Also, for the simple 2D data I have, do I actually need the GIS extension (the GPL license is a limitation), or will plain PostgreSQL suffice?
Thanks!
In PostGIS you can use the bounding-box operator (&&) to find candidates, which is very efficient as it uses GiST indexes. Then, if exact matches are required, use ST_Contains.
Something like
SELECT points, neighborhood_name
FROM points_table, neighborhood
WHERE
  neighborhood_poly && points                  /* uses the GiST index via the polygon's bounding box */
  AND ST_Contains(neighborhood_poly, points);  /* exact test */
Whether you need this depends on your requirements. For the above to work you certainly need PostGIS and GEOS installed. But if a bounding-box match is enough, you can code it in plain SQL without PostGIS.
If exact matches are required, containment algorithms are publicly available, but implementing them efficiently takes some effort: you would put them in a library which is then called from SQL (just like GEOS).
I believe ST_Contains automatically rewrites the query to use the GiST-indexed bounding box, as its documentation notes:
"This function call will automatically include a bounding box comparison that will make use of any indexes that are available on the geometries. To avoid index use, use the function _ST_Contains."
http://postgis.refractions.net/docs/ST_Contains.html
ST_Contains would solve your problem.
SELECT polygons.gid, points.x, points.y
FROM polygons
LEFT JOIN points ON ST_Contains(polygons.geom, points.geom)
WHERE polygons.gid = your_id;