I am working on a building recognition project, and I want to ask people experienced in collecting GPS data from a map.
I want to partition the map into grid cells of 30 m × 30 m each, and in each cell I want to store the GPS coordinate of its center (i.e. the point (15, 15)).
What is the best way to do this?
Here's an image that demonstrates what I need.
This is not so easy:
There are two ways:
The professional solution:
Draw the 30 m × 30 m grid using the UTM coordinate system of that country/city.
UTM is measured in meters and is a flat Cartesian coordinate system, while latitude/longitude are spherical and not linear in x, y.
Align your grid such that it corresponds to integral UTM coordinates.
Then you need a method to transform from WGS84 latitude/longitude to UTM (hopefully WGS84, but in some countries other ellipsoids are used).
The map display software should be able to use the UTM coordinate system.
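As a rough sketch of the grid-alignment step only (it assumes you already have the point's UTM easting/northing in meters from a transformation library; the class and method names are just placeholders), snapping a coordinate to its 30 m cell and taking the cell center looks like this:

final class UtmGrid {
    static final double CELL_SIZE_M = 30.0;

    // snap a UTM coordinate (meters) to its 30 m x 30 m cell and return the cell center
    static double[] cellCenter(double easting, double northing) {
        double centerE = Math.floor(easting / CELL_SIZE_M) * CELL_SIZE_M + CELL_SIZE_M / 2.0;
        double centerN = Math.floor(northing / CELL_SIZE_M) * CELL_SIZE_M + CELL_SIZE_M / 2.0;
        return new double[] { centerE, centerN }; // transform back to lat/long for storage if needed
    }
}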
And the simpler one:
Stay with the lat/long coordinates and calculate the latitudinalGridWidthDegrees, the measure in degrees which corresponds to 30 m (at the middle of the map / city / center of the city).
Since latitude and longitude do not use the same scale (1 degree of latitude difference is not the same number of meters as 1 degree of longitude, except at the equator), additionally calculate the longitudinalGridWidthDegrees.
You will get two different values (they differ by a factor of cos(mapCenterLatitudeRadians)).
To calculate these values, either dig into geo calculations or simply use a function which creates an offset point at a given distance and heading from a start point. Use the center of the map as the point to be offset (the start point).
Create one point with offset 30 m and heading = 0°, then measure the latitude difference by subtracting, and store it in latitudinalGridWidthDegrees.
Do the same using heading = 90°, measure the longitude difference, and store it in longitudinalGridWidthDegrees.
Now you are able to draw the grid, using the values of those two variables as the latitude and longitude steps.
These then give the corner points of each square, and you always use latitude and longitude as the interface to the mapping software.
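If it helps, here is a minimal sketch of that calculation in Java, using a spherical-earth approximation instead of calling an offset-point function (all names and the example latitude are placeholders). On a sphere the latitude step is the same everywhere; only the longitude step depends on the map-center latitude, which is exactly the cos() factor mentioned above.

final class GridSteps {
    private static final double EARTH_RADIUS_M = 6371000.0; // mean earth radius

    // degrees of latitude corresponding to the cell size (same everywhere on a sphere)
    static double latitudinalGridWidthDegrees(double cellSizeMeters) {
        return Math.toDegrees(cellSizeMeters / EARTH_RADIUS_M);
    }

    // degrees of longitude corresponding to the cell size at the map-center latitude
    static double longitudinalGridWidthDegrees(double cellSizeMeters, double mapCenterLatitudeDegrees) {
        double latRad = Math.toRadians(mapCenterLatitudeDegrees);
        return Math.toDegrees(cellSizeMeters / (EARTH_RADIUS_M * Math.cos(latRad)));
    }

    public static void main(String[] args) {
        double dLat = latitudinalGridWidthDegrees(30.0);
        double dLon = longitudinalGridWidthDegrees(30.0, 47.0); // e.g. a map center at 47°N
        System.out.println("lat step: " + dLat + " deg, lon step: " + dLon + " deg");
    }
}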
If you live in certain countries/continents such as Australia or Norway, where the ground moves measurably every year (continental drift or post-glacial uplift), it is more difficult.
Elsewhere the drift is so small that you, and all other apps, simply ignore it.
Advantages / Disadvantages of each solution
Simple solution:
- grid is not exactly square at the corners of the map, but probably not visible for a city
- grid cell is exactly 30 m only at the center of the map (the reference point chosen for the offset calculation)
+ easier implementation
+ mapping software interface is simpler and always supported
Professional solution:
+ grid will match professional paper maps
+ grid cell is always exactly 30m
- needs geo transformation software or method
- unclear whether the map display software provides UTM (in most cases it does not)
However, maybe it is easier to assign the roads and house numbers to be measured such that they more or less match the grid.
Related
I am running a taxicab distance function on a list of coordinates and I would like to convert the resulting value to a mile or km quantity. For example:
0.0117420 = |40.721319 - 40.712278| + |-73.844311 - -73.841610|
Where 0.0117420 is the output I would like to convert to mi/km. How could I go about this?
This appears to be a situation where you are trying to navigate from (40.721319, -73.844311) to (40.712278, -73.841610) where these are lat / lon pairs, and you want to navigate using a "Manhattan" routing rather than a direct great circle route.
It looks like you are considering these points as opposite corners of a "rectangle" where travel is only allowed along north, south, east and west headings to move from one point to another and where travel along the path always brings the traveler closer to the destination point.
An approximation of this is to find one of the corners of the bounding rectangle for all such paths. There are two of them, one at (40.721319, -73.841610) and the other at (40.712278, -73.844311). So, you can pick one of these and choose it as a waypoint for approximating the length of each possible "Manhattan route" between the two points. If we choose the first, you need to calculate the distance from the starting point to the waypoint, and then to the destination point. Such as:
l(0) = (40.721319, -73.844311)
l(1) = (40.721319, -73.841610)
l(2) = (40.712278, -73.841610)
Using the Haversine equations we see the distance from l(0) to l(1) is approximately 0.2276km and the distance from l(1) to l(2) is approximately 1.005km making the entire route about 1.2326km.
This is approximately the length of any "Manhattan route" you pick where the distance is strictly decreasing along the path taken between the two points. There are also some errors due to the curvature of the Earth, but for points this close to each other and so distant from either of the poles, this should be good enough for most applications.
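For reference, a minimal Java sketch of that calculation (haversine on a spherical earth; the class and variable names are placeholders):

final class ManhattanDistance {
    private static final double EARTH_RADIUS_KM = 6371.0;

    // great-circle distance between two lat/lon points (haversine formula)
    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        double lat0 = 40.721319, lon0 = -73.844311; // start
        double lat2 = 40.712278, lon2 = -73.841610; // destination
        // waypoint at one corner of the bounding rectangle: (lat0, lon2)
        double leg1 = haversineKm(lat0, lon0, lat0, lon2); // east-west leg
        double leg2 = haversineKm(lat0, lon2, lat2, lon2); // north-south leg
        System.out.println("Manhattan route length: " + (leg1 + leg2) + " km");
    }
}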
I have a point shapefile of station IDs and stage heights. I would like to create a raster where each cell has the stage height value (in meters) of the closest in situ station to that cell.
I want this raster to match up with another raster. So I would like it if I could input both a raster I have created (dataset 3 described below) and my point shapefile (1).
Datasets:
1) Point Shapefile with stage heights of a river delta
2) Shapefile of the river delta extent
3) Raster of the delta where NA's represent land (could also have them be zeros if need be) and 1's are water. Two datasets: 10 meter resolution and 30 meter resolution.
One conceptual issue I am having is with the amount of small streams I have.
For example (pictured in image below), station 1 (circled in blue) is technically closer to the black x region than station 2 (circled in red), but the stage height value in red is more representative of point x. There are NA's in between the two streams, does that mean that the value will not jump across streams?
How can I reassign the values in my Raster (all the 1's) to the stage height of the nearest station and make sure that these values are not jumping from stream to stream? Do I need to use least cost path? What is the best way to do this?
I would like to use R, but can use ArcMap if I must.
So I'm not sure what tools you have available to you, but I think this answer may be useful:
Calculating attribute for network distance between multiple points in ArcGIS Desktop?
Here the questioner was looking to calculate distances on roads to some points, but your problem seems similar. I think the main point I would make here is that you should do your network distance classification prior to worrying about the raster layer. You may have to convert from polygon to lines or some workaround to get your data into a format that works, but this is the kind of job the tool is designed to do.
After you have reclassified your river shapefile based on its network distance to a given point, convert the polygons to raster and use this to classify your original raster. You could do this in R or ArcMap. ArcMap will probably be faster.
I have a map of a mountainous landscape, http://skimap.org/data/989/60/1218033025.jpg. It contains a number of known points, the lat-longs of which can be easily found out using Google maps. I wish to be able to pin any latitude longitude coordinate on the map, of course within the bounds of the landscape.
For this, I tried an approach that seems to be largely failing. I assumed the map to be equivalent to an aerial photograph of the Swiss landscape, without any info about the altitude or other coordinates of the camera. So, I assumed the plane perpendicular to the camera lens normal to be Ax+By+Cz-d=0.
I attempt to find the plane constants, using the known points. I fix my origin at a point, with z=0 at the sea level. I take two known points in the landscape, and using the equation for a line in 3D, I find the length of the projection of this line segment joining the two known points, on the plane. I multiply it by another constant K to account for the resizing of this length on a static 2d representation of this 3D image. The length between the two points on a 2d static representation of this image on this screen can be easily found in pixels, and the actual length of the line joining the two points, can be easily found, since I can calculate the distance between the two points with their lat-longs, and their heights above sea level.
So, I end up with an equation directly relating the distance between the two points in the 2D screen representation, let's call it Ls, and the actual length in the landscape, L. I have many other known points, so plugging them into the equation should give me the values of the 4 constants. For this, I needed 8 known points (the known parameters being their name, lat-long, and height above sea level), one being my origin and the second being a fixed reference point. The remaining 6 points generate a system of 6 linear equations in A^2, B^2, C^2, AB, BC and CA. Solving the system using an online tool, I get the result that the system has a unique solution with all 6 constants being 0.
So, it seems that the assumption that the map is equivalent to an aerial photograph taken from an aircraft is faulty. Can someone please give me some pointers or any other ideas to get this to work? Does OpenStreetMap use a Mercator projection?
I would say that this is impossible to do in an automatic way. The ski map should be considered an image rather than a map; a map is a projection of the real world onto one plane, and since this doesn't fit ski maps very well, they are drawn instead.
The best way is probably to manually define a lot of points in the ski map with known or estimated coordinates and use them to interpolate the points in between. To get an acceptable result you probably have to assign coordinates to each pixel in the ski map.
You could do something like the following: http://magazin.unic.com/en/2012/02/16/making-of-interactive-mobile-piste-map-by-laax/
I am solving the exact same issue. It is pretty hard and involves lots of maths; it is taking me a few weeks to solve. Interpolation is the key, along with lots of manual mapping. I would say that for a ski mountain it will take at least 1000-1500 points to get even the very basics. So, not a trivial task, unless you can automate the collection of these points (which is what I am doing!) ;)
I am in charge of a program that is used to create a set of nodes and paths for consumption by an autonomous ground vehicle. The program keeps track of the locations of all items in its map by indicating each item's position as being x meters north and y meters east of an origin point of 0,0. In the real world, the vehicle knows the lat and long of the origin, as it is determined by a DGPS system and is accurate down to a couple of centimeters. My program is ignorant of any lat/long coordinates.
One of my goals is to modify the program to keep track of the lat/long coordinates of items, in addition to an origin point and the items' x,y positions relative to that origin. At first blush, it seems that I am going to modify the program to allow the lat/long coordinates of the origin to be passed in, and then have the program automatically calculate the lat/long of every item currently in a map. From what I've researched so far, I believe that I will need to figure out the math behind converting to lat/long coordinates from a UTM-like projection where I specify the origin points and meridians etc., as opposed to whatever is defined already for UTM.
I've come to ask you GIS programmers: am I on the right track? It seems to me like there is so much to wrap one's head around, and I'm not sure whether the answer isn't something as simple as, "oh yeah, there's a conversion from meters to lat/long, here".
Currently, due to the nature of DGPS, the system really doesn't need to care about locations more than, oh, what... a 40 km radius away from the origin. Given this, and the fact that I need to make sure that the error on my coordinates is not greater than 0.5 meters, do I need anything more complex than a simple lat/long-to-meters conversion constant?
I'm knee deep in materials here. I could use some pointers about what concepts to research.
Thanks much!
Given a start point in lat/long and a distance and bearing, finding the end point is a geodesic calculation. There's a great summary of geodesic calculations and errors on the proj.4 website. They come to the conclusion that using a spherical model can get results for distance between points with at most 0.51% error. That, combined with a formula to translate between WGS-84 and ECEF (see the "LLA to ECEF" and "ECEF to LLA" sections), seems like it gets you what you need.
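For illustration only, the geodetic-to-ECEF direction of that conversion is a short closed-form formula on the WGS-84 ellipsoid (the constants below are the standard WGS-84 parameters; the class and method names are placeholders):

final class LlaToEcef {
    static final double A = 6378137.0;            // WGS-84 semi-major axis, meters
    static final double F = 1.0 / 298.257223563;  // WGS-84 flattening
    static final double E2 = F * (2.0 - F);       // first eccentricity squared

    // latDeg/lonDeg in degrees, h in meters above the ellipsoid; returns {x, y, z} in meters
    static double[] toEcef(double latDeg, double lonDeg, double h) {
        double lat = Math.toRadians(latDeg);
        double lon = Math.toRadians(lonDeg);
        double n = A / Math.sqrt(1.0 - E2 * Math.sin(lat) * Math.sin(lat)); // prime vertical radius
        double x = (n + h) * Math.cos(lat) * Math.cos(lon);
        double y = (n + h) * Math.cos(lat) * Math.sin(lon);
        double z = (n * (1.0 - E2) + h) * Math.sin(lat);
        return new double[] { x, y, z };
    }
}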
If you want to really get the errors nailed down by inverse projecting your flat map to WGS-84, proj.4 is a projection software package. It has source code, and comes with three command line utilities - proj, which converts to/from cartographic projection and cartesian data; cs2cs, which converts between different cartographic projections; and geod, which calculates geodesic relationships.
The USGS publishes a very comprehensive treatment of map projections.
I'd do a full-up calculation if you can. That way you'll always be as accurate as you can be.
If you happen to be using C++, GDAL is a very good library.
For a range of 40 km, you may find that approximating the world as a 2D flat surface works, although a UTM transform would be the ideal way to go - in any case, I'd advocate using the actual WGS84 coordinates and ellipsoid for calculations such as great-circle distance or bearings.
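To make the flat-surface idea concrete, a minimal sketch of turning the program's "x meters north, y meters east of the origin" into lat/long might look like the following (spherical-earth approximation; all names are placeholders). For sub-meter accuracy toward the edge of a 40 km radius you would still want to verify it against a proper UTM or geodesic calculation.

final class LocalOffset {
    static final double EARTH_RADIUS_M = 6371000.0;

    // originLat/originLon in degrees; returns {lat, lon} in degrees
    static double[] offsetToLatLon(double originLat, double originLon,
                                   double metersNorth, double metersEast) {
        double dLat = Math.toDegrees(metersNorth / EARTH_RADIUS_M);
        double dLon = Math.toDegrees(metersEast /
                (EARTH_RADIUS_M * Math.cos(Math.toRadians(originLat))));
        return new double[] { originLat + dLat, originLon + dLon };
    }
}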
If you get bored, you could go down a similar line to something I've been working on, that can be used as a base class for differing datums such as OSGB36 or WGS84...
It's been a while since my math courses at university, and now I've come to need it like I never thought I would.
So, this is what I want to achieve:
Given a set of 3D points (geographical points, latitude and longitude; altitude doesn't matter), I want to display them on a screen, taking into account the direction I am looking in.
This is going to be used along with a camera and a compass, so when I point the camera to the north, I want to display on my computer the points that the camera should "see". It's a kind of augmented reality.
Basically, what (I think) I need is a way of transforming the 3D points viewed from above (like viewing the points on Google Maps) into a set of 3D points viewed from the side.
The conversion of Latitude and longitude to 3-D cartesian (x,y,z) coordinates can be accomplished with the following (Java) code snippet. Hopefully it's easily converted to your language of choice. lat and lng are initially the latitude and longitude in degrees:
// convert degrees to radians
lat *= Math.PI / 180.0;
lng *= Math.PI / 180.0;
// point on the unit sphere (the Earth's radius is scaled to 1)
z = Math.sin(-lat);
x = Math.cos(lat) * Math.sin(-lng);
y = Math.cos(lat) * Math.cos(-lng);
The vector (x,y,z) will always lie on a sphere of radius 1 (i.e. the Earth's radius has been scaled to 1).
From there, a 3D perspective projection is required to convert the (x,y,z) into (X,Y) screen coordinates, given a camera position and angle. See, for example, http://en.wikipedia.org/wiki/3D_projection
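As a hedged illustration of that last step, here is a simplified pinhole projection rather than the full parameterization in the Wikipedia article (camera position, yaw/pitch and focal length are placeholder inputs, and you have to match the axis conventions to the (x,y,z) produced above):

final class Projector {
    // returns {screenX, screenY}, or null if the point is behind the camera
    static double[] project(double px, double py, double pz,
                            double camX, double camY, double camZ,
                            double yaw, double pitch, double focal) {
        // translate so the camera sits at the origin
        double x = px - camX, y = py - camY, z = pz - camZ;

        // rotate by -yaw about the y (up) axis
        double cy = Math.cos(-yaw), sy = Math.sin(-yaw);
        double x1 = cy * x + sy * z;
        double z1 = -sy * x + cy * z;

        // rotate by -pitch about the x (right) axis
        double cp = Math.cos(-pitch), sp = Math.sin(-pitch);
        double y1 = cp * y - sp * z1;
        double z2 = sp * y + cp * z1;

        if (z2 <= 0) return null; // point is behind the camera

        // perspective divide: scale by focal length over depth
        return new double[] { focal * x1 / z2, focal * y1 / z2 };
    }
}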
It really depends on the degree of precision you require. If you're working on a high-precision, close-in view of points anywhere on the globe, you will need to take the ellipsoidal shape of the Earth into account. This is usually done using an algorithm similar to the one described here, on page 38 under 'Conversion between Geographical and Cartesian Coordinates':
http://www.icsm.gov.au/gda/gdatm/gdav2.3.pdf
If you don't need high precision the techniques mentioned above work just fine.
Could anyone explain to me exactly what these parameters mean?
I've tried, and the results were very weird, so I guess I am misunderstanding some of the parameters for the perspective projection:
* a_{x,y,z} - the point in 3D space that is to be projected.
* c_{x,y,z} - the location of the camera.
* θ_{x,y,z} - the rotation of the camera. When c_{x,y,z} = <0,0,0> and θ_{x,y,z} = <0,0,0>, the 3D vector <1,2,0> is projected to the 2D vector <1,2>.
* e_{x,y,z} - the viewer's position relative to the display surface. [1]
Well, you'll want some 3D vector arithmetic to move your origin, and probably some quaternion-based rotation functions to rotate the vectors to match your direction. There are any number of good tutorials on using quaternions to rotate 3D vectors (since they're used a lot for rendering and such), and the 3D vector stuff is pretty simple if you can remember how vectors are represented.
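If you want to avoid pulling in a full math library, a minimal quaternion-rotation sketch looks like this (all names are placeholders; the quaternion must be a unit quaternion):

final class QuatRotate {
    // quaternion {w, x, y, z} from a normalized rotation axis and an angle in radians
    static double[] fromAxisAngle(double ax, double ay, double az, double angle) {
        double s = Math.sin(angle / 2.0);
        return new double[] { Math.cos(angle / 2.0), ax * s, ay * s, az * s };
    }

    // v' = v + 2*r x (r x v + w*v), where r is the vector part of the unit quaternion q
    static double[] rotate(double[] q, double[] v) {
        double w = q[0], rx = q[1], ry = q[2], rz = q[3];
        // t = r x v + w*v
        double tx = ry * v[2] - rz * v[1] + w * v[0];
        double ty = rz * v[0] - rx * v[2] + w * v[1];
        double tz = rx * v[1] - ry * v[0] + w * v[2];
        // v' = v + 2 * (r x t)
        return new double[] {
            v[0] + 2.0 * (ry * tz - rz * ty),
            v[1] + 2.0 * (rz * tx - rx * tz),
            v[2] + 2.0 * (rx * ty - ry * tx)
        };
    }

    public static void main(String[] args) {
        // rotate the x axis by 90 degrees about the z axis -> should print (0, 1, 0)
        double[] q = fromAxisAngle(0, 0, 1, Math.PI / 2.0);
        double[] r = rotate(q, new double[] { 1, 0, 0 });
        System.out.println(r[0] + ", " + r[1] + ", " + r[2]);
    }
}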
Well, just a piece of advice: you can plot these points in a 3D space (you can do this easily using OpenGL).
You have to transform the lat/long into another system, for example polar or Cartesian.
So, starting from lat/long, you put the origin of your space at the center of the Earth, then you transform your data into Cartesian coordinates:
z = R * sin(lat)
x = R * cos(lat) * sin(long)
y = R * cos(lat) * cos(long)
R is the radius of the world; you can set it to 1 if you only need to catch the direction between your point of view and the points you need "to see".
Then put the virtual camera at a point in the space you've created, and link the data from your real camera (simply a vector) to the data of the virtual one.
The next step toward what you want is to try to plot images from your camera overlapped with your "virtual space"; in the end you should have a real camera acting as a control to move the virtual one in the virtual space.