SQLite has some interesting extensions, one of which is Geopoly. This library provides functions for working with polygons. One basic function:
/*
** Function: geopoly_regular(X,Y,R,N)
**
** Construct a simple, convex, regular polygon centered at X, Y
** with circumradius R and with N sides.
*/
In one of the examples they produce some basic shapes; however, when X/Y are latitude and longitude, the units for the radius are questionable. (Side note: N also gets fuzzy at large R.)
So given that R is a circumradius, its unit is presumably based on the lat/long units. How do I compute something meaningful?
The source for the extension is here
UPDATE
I think I have it... The lat/long coordinates are in decimal degrees and need to be converted to radians. However, even though the code seems to work, it still has some problems.
According to this website, the distance between these points is 16.44 miles, which means a radius of 17 should work, but it does not.
select geopoly_contains_point(
  geopoly_regular(
    26.122438 * 0.01745327    -- lat, converted DEG2RAD
    , -80.137314 * 0.01745327 -- long, converted DEG2RAD
    , 19.0 / 3959             -- radius, miles converted to radians
    , 200                     -- polygon sides
  )
  , 26.103039 * 0.01745327    -- target lat, converted DEG2RAD
  , -80.401382 * 0.01745327   -- target long, converted DEG2RAD
)
;
I think the question is still valid; however, the answer is negative. The geopoly functions are purely 2D, whereas distances between two points on Earth follow curves on a 3D surface.
One probably needs to use the "haversine" formula, and there are plenty of examples of that.
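For reference, here is a minimal Java sketch of the haversine distance, assuming a spherical Earth of radius 3959 miles; the method name and constant are my own:

static final double EARTH_RADIUS_MILES = 3959.0;

// Great-circle distance between two lat/long points given in decimal degrees.
static double haversineMiles(double lat1, double lon1, double lat2, double lon2) {
    double phi1 = Math.toRadians(lat1);
    double phi2 = Math.toRadians(lat2);
    double dPhi = Math.toRadians(lat2 - lat1);
    double dLambda = Math.toRadians(lon2 - lon1);
    double a = Math.sin(dPhi / 2) * Math.sin(dPhi / 2)
             + Math.cos(phi1) * Math.cos(phi2)
             * Math.sin(dLambda / 2) * Math.sin(dLambda / 2);
    return 2 * EARTH_RADIUS_MILES * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

For the two points in the query above, haversineMiles(26.122438, -80.137314, 26.103039, -80.401382) comes out at roughly 16.4 miles, matching the distance quoted earlier.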
How can the world-space coordinate of a point in a Signed Distance Field be obtained?
Example: Let there be a Signed Distance Field function of e.g. a cone sitting on top of a box. Assuming I perform transformations on the cone such as scaling, translation and rotation, how can I get the world-space coordinate of e.g. the tip of the cone?
When using polygonal geometry, I would query the world-space position of the vertex that represents the tip of the cone (after performing all transformations to the polygonal model). However, when using SDFs, I don't know how to proceed.
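To illustrate the idea (this is a sketch under an assumption, not a confirmed answer): SDFs are usually transformed by mapping each sample point through the inverse of the object's transform before evaluating the untransformed SDF. Under that convention, a feature point known in the cone's local space, such as the tip, lands in world space under the forward transform, exactly as a mesh vertex would. The matrix layout and names below are mine:

// Apply a 4x4 affine transform (row-major: scale/rotation in the upper-left
// 3x3, translation in the last column) to a local-space point.
static double[] transformPoint(double[][] m, double[] p) {
    double[] out = new double[3];
    for (int i = 0; i < 3; i++) {
        out[i] = m[i][0] * p[0] + m[i][1] * p[1] + m[i][2] * p[2] + m[i][3];
    }
    return out;
}

// Hypothetical usage: the tip sits at (0, coneHeight, 0) in the cone's own
// space; M is the same forward transform whose inverse is applied to sample
// points when evaluating the SDF.
//   double[] tipWorld = transformPoint(M, new double[] {0, coneHeight, 0});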
I need to return true when a particular point (lat/long) is inside my geofence circle. But the query is returning true even though the point is outside my geofence circle.
Please find the query below.
select ST_Intersects(ST_Buffer(geofence_polygon, 127.08), ST_POINT(18.595798 ,73.78833)) from masterdata.al_m_geofence
In this query, geofence_polygon is of type geography.
127.08 - radius of circle in meters
18.595798 - latitude
73.78833 - longitude
The query should return true only when the point is inside the circle.
Please let me know whether this query is correct or not.
Your ST_POINT arguments are backwards:
geometry ST_Point(float x_lon, float y_lat);
Should be:
ST_POINT(73.78833,18.595798)
full query:
select ST_Intersects(ST_Buffer(geofence_polygon, 127.08), ST_POINT(73.78833,18.595798)) from masterdata.al_m_geofence
I want to calculate the area in square kilometers of a polygon in Google Maps API V3.
With the google.maps.geometry.spherical.computeSignedArea() this should be possible and easy.
http://code.google.com/intl/en-US/apis/maps/documentation/javascript/reference.html#spherical
It works nicely and it returns a number as described in the documentation. But what unit of measurement is the number? I can't find it anywhere! Is it meters, centimeters, feet?
The function returns the area in m^2 (square meters).
As described in the documentation for computeArea:
Returns the area of a closed path. The computed area uses the same units as the radius. The radius defaults to the Earth's radius in meters, in which case the area is in square meters.
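Since the question asks for square kilometers, the returned value just needs dividing by one million; the variable names below are mine:

// computeSignedArea/computeArea return square meters by default.
double squareKm = squareMeters / 1_000_000.0;  // 1 km^2 = 1,000,000 m^2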
It's been a while since my math at university, and now I've come to need it like I never thought I would.
So, this is what I want to achieve:
Given a set of 3D points (geographical points, latitude and longitude; altitude doesn't matter), I want to display them on a screen, taking into account the direction I am facing.
This is going to be used along with a camera and a compass, so when I point the camera to the north, I want to display on my computer the points that the camera should "see". It's a kind of Augmented Reality.
Basically, what (I think) I need is a way of transforming the 3D points viewed from above (like viewing the points on Google Maps) into a set of 3D points viewed from the side.
The conversion of latitude and longitude to 3-D Cartesian (x,y,z) coordinates can be accomplished with the following (Java) code snippet. Hopefully it's easily converted to your language of choice. lat and lng are initially the latitude and longitude in degrees:
lat *= Math.PI / 180.0;  // degrees to radians
lng *= Math.PI / 180.0;
double z = Math.sin(-lat);
double x = Math.cos(lat) * Math.sin(-lng);
double y = Math.cos(lat) * Math.cos(-lng);
The vector (x,y,z) will always lie on a sphere of radius 1 (i.e. the Earth's radius has been scaled to 1).
From there, a 3D perspective projection is required to convert the (x,y,z) into (X,Y) screen coordinates, given a camera position and angle. See, for example, http://en.wikipedia.org/wiki/3D_projection
It really depends on the degree of precision you require. If you're working on a high-precision, close-in view of points anywhere on the globe, you will need to take the ellipsoidal shape of the Earth into account. This is usually done using an algorithm similar to the one described here, on page 38 under 'Conversion between Geographical and Cartesian Coordinates':
http://www.icsm.gov.au/gda/gdatm/gdav2.3.pdf
If you don't need high precision, the techniques mentioned above work just fine.
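For the ellipsoidal case, here is a minimal Java sketch of the standard geodetic-to-Cartesian (ECEF) conversion using WGS84 constants; the method and variable names are my own, and this is only an illustration of the kind of formula the PDF describes:

// Convert geodetic latitude/longitude (degrees) and ellipsoidal height
// (meters) to Earth-centered, Earth-fixed Cartesian coordinates (meters).
static double[] geodeticToEcef(double latDeg, double lngDeg, double hMeters) {
    final double a = 6378137.0;             // WGS84 semi-major axis (m)
    final double f = 1.0 / 298.257223563;   // WGS84 flattening
    final double e2 = f * (2.0 - f);        // eccentricity squared
    double lat = Math.toRadians(latDeg);
    double lng = Math.toRadians(lngDeg);
    double sinLat = Math.sin(lat);
    double nu = a / Math.sqrt(1.0 - e2 * sinLat * sinLat); // prime vertical radius
    double x = (nu + hMeters) * Math.cos(lat) * Math.cos(lng);
    double y = (nu + hMeters) * Math.cos(lat) * Math.sin(lng);
    double z = (nu * (1.0 - e2) + hMeters) * sinLat;
    return new double[] { x, y, z };
}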
Could anyone explain to me exactly what these params mean?
I've tried, and the results were very weird, so I guess I am misunderstanding some of the params for the perspective projection:
* a_{x,y,z} - the point in 3D space that is to be projected.
* c_{x,y,z} - the location of the camera.
* theta_{x,y,z} - the rotation of the camera. When c_{x,y,z} = <0,0,0> and theta_{x,y,z} = <0,0,0>, the 3D vector <1,2,0> is projected to the 2D vector <1,2>.
* e_{x,y,z} - the viewer's position relative to the display surface. [1]
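Putting those parameters together, here is a minimal Java sketch of the projection as the Wikipedia page defines it (each array holds the x, y, z components; the method name is my own):

// Project point a onto a 2D display, given camera position c, camera
// rotation theta (radians), and viewer position e relative to the display.
static double[] project(double[] a, double[] c, double[] theta, double[] e) {
    double x = a[0] - c[0], y = a[1] - c[1], z = a[2] - c[2];
    double cx = Math.cos(theta[0]), sx = Math.sin(theta[0]);
    double cy = Math.cos(theta[1]), sy = Math.sin(theta[1]);
    double cz = Math.cos(theta[2]), sz = Math.sin(theta[2]);
    // Rotate the camera-relative point into the camera's coordinate system.
    double dx = cy * (sz * y + cz * x) - sy * z;
    double dy = sx * (cy * z + sy * (sz * y + cz * x)) + cx * (cz * y - sz * x);
    double dz = cx * (cy * z + sy * (sz * y + cz * x)) - sx * (cz * y - sz * x);
    // Perspective divide onto the display surface at distance e_z.
    return new double[] { (e[2] / dz) * dx + e[0], (e[2] / dz) * dy + e[1] };
}

Note that dz must be positive (the point is in front of the camera) for the result to be meaningful.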
Well, you'll want some 3D vector arithmetic to move your origin, and probably some quaternion-based rotation functions to rotate the vectors to match your direction. There are any number of good tutorials on using quaternions to rotate 3D vectors (since they're used a lot for rendering and such), and the 3D vector stuff is pretty simple if you can remember how vectors are represented.
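As an example of the quaternion route, a minimal Java sketch of rotating a 3D vector by a unit quaternion q = (w, x, y, z), using the identity v' = v + 2w(q_v × v) + 2 q_v × (q_v × v); the names are my own:

// Rotate vector v by unit quaternion q = {w, x, y, z}.
static double[] rotate(double[] q, double[] v) {
    double w = q[0], x = q[1], y = q[2], z = q[3];
    // t = 2 * cross(q_vec, v)
    double tx = 2 * (y * v[2] - z * v[1]);
    double ty = 2 * (z * v[0] - x * v[2]);
    double tz = 2 * (x * v[1] - y * v[0]);
    // v' = v + w * t + cross(q_vec, t)
    return new double[] {
        v[0] + w * tx + (y * tz - z * ty),
        v[1] + w * ty + (z * tx - x * tz),
        v[2] + w * tz + (x * ty - y * tx),
    };
}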
Well, just a piece of advice: you can plot these points in a 3D space (you can do this easily using OpenGL).
You have to transform the lat/long into another system, for example polar or Cartesian.
So, starting from lat/long, you put the origin of your space at the center of the Earth, then you transform your data into Cartesian coordinates:
z = R * sin(lat)
x = R * cos(lat) * sin(long)
y = R * cos(lat) * cos(long)
R is the radius of the Earth; you can set it to 1 if you only need the direction between your point of view and the points you need "to see".
Then put the virtual camera at a point in the space you've created, and link the data from your real camera (simply a vector) to the data of the virtual one.
The next step toward what you want is to plot the images from your camera overlaid on your "virtual space"; essentially, the real camera becomes a control for moving the virtual one in the virtual space.