I would like to know the direction of travel from an origin to a destination.
I simply want that direction expressed as an angle value.
In the image above, the angle would probably be between 350 and 360 degrees.
(Image source :
https://maps.google.com/maps?saddr=Dr+NS+Hardikar+Rd&daddr=Rithala+Metro+Station,+Rithala+Rd,+Sector+12,+Rohini,+New+Delhi,+Delhi,+India&geocode=FYfstQEdJn-YBA%3BFY8-tgEd3Y-YBCnlvvwRTAENOTHixeyZbhPwSA&sll=28.697665,77.11956&sspn=0.062715,0.11055&hl=en)
I can't upload the image, sorry.
How do I calculate this angle?
Use google.maps.geometry.spherical.computeHeading() with the start_location of the first leg and the end_location of the last leg as arguments.
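If you want to see the math behind that call, the standard great-circle initial-bearing formula can be sketched in plain JavaScript (a minimal sketch, not the Maps API itself; the coordinates below are illustrative, roughly matching the map in the question):

```javascript
// Initial great-circle bearing from (lat1, lng1) to (lat2, lng2), in degrees,
// normalized here to [0, 360). computeHeading() implements the same formula
// but returns values in [-180, 180) and takes LatLng objects.
function initialHeading(lat1, lng1, lat2, lng2) {
  const toRad = d => d * Math.PI / 180;
  const toDeg = r => r * 180 / Math.PI;
  const phi1 = toRad(lat1), phi2 = toRad(lat2);
  const dLng = toRad(lng2 - lng1);
  const y = Math.sin(dLng) * Math.cos(phi2);
  const x = Math.cos(phi1) * Math.sin(phi2) -
            Math.sin(phi1) * Math.cos(phi2) * Math.cos(dLng);
  return (toDeg(Math.atan2(y, x)) + 360) % 360;
}

// Two assumed points near the route in the question: destination slightly
// west of due north, so the heading comes out a little below 360.
const h = initialHeading(28.697665, 77.11956, 28.7205, 77.1150);
```

A heading of 0 is due north, 90 due east, and so on, which matches the 350–360 range the question expects for a route heading just west of north.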
I'm not sure if something like this has been asked before, but I've spent days trying to figure this out to no avail.
I've been working on a project that has a straight tube and a sleeve placed some length down the tube. That part of the problem isn't causing any issues, but the orientation of the placed sleeve is. When the sleeve is placed, it is given a location that intersects another object, which gives it all the information it needs to be placed. But I need that sleeve to orient itself with the tube, mostly just along the roll axis, though I would also like to work out how yaw and pitch would be handled similarly.
The tube has transform data attached to it: an origin for the center point of the tube, and three xyz vectors standing for the basis axes. For example, for one of the tubes tested:
origin:{(119.814557964, -37.330669765, 8.400185257)},
BasisX: {(1.000000000, 0.000000000, 0.000000000)},
BasisY: {(0.000000000, 0.939692621, 0.342020143)},
BasisZ: {(0.000000000, -0.342020143, 0.939692621)}.
In some of the partial solutions I've come across, I found some ways this information is used, and I've had some success with this approach:
(Note: I realize this code has a lot of pointless variable use; I didn't want to adjust it and confuse myself further.)
upDownAxis = givenSleeveObject.passedOnTransform.BasisZ;
leftRightAxis = givenSleeveObject.passedOnTransform.BasisX;
tempOfVector = givenSleeveObject.passedOnTransform.OfVector(upDownAxis); // OfVector applies the transform to the vector
rotationAngle = upDownAxis.AngleOnPlaneTo(tempOfVector, leftRightAxis);
This gave me the rotation angle of this particular tube, which was 20 degrees.
The problem is that this doesn't work the same along the y axis, and is completely wrong along the z axis, likely because after rotating toward the z axis, each direction's axis swaps to one of the others at that angle. If it helps, the direction of the tube essentially follows BasisX; if z is the only component equal to 1, the tube is heading straight up.
So my issue is: how can I find the roll of this tube no matter its orientation? Rotation direction might also matter in the long run. Since this object's transforms are all connected to itself, there must be a way to know how much roll has been applied to it, even at an extreme of 45 degrees in every axis, right?
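One orientation-independent way to define roll (a sketch in plain JavaScript, not the CAD API the question uses; the basis values below are copied from the transform above) is to project the world up vector onto the plane perpendicular to the tube axis, then measure the signed angle from that reference to BasisZ around the axis:

```javascript
// Small vector helpers on [x, y, z] arrays.
const dot = (a, b) => a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
const cross = (a, b) => [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]];
const scale = (a, s) => a.map(v => v * s);
const sub = (a, b) => a.map((v, i) => v - b[i]);
const norm = a => { const l = Math.hypot(...a); return a.map(v => v / l); };

// Signed roll of a tube about its axis (BasisX), independent of yaw/pitch.
// "Zero roll" = world +Z projected into the plane perpendicular to the axis.
function rollDegrees(basisX, basisZ) {
  const axis = norm(basisX);
  let ref = sub([0, 0, 1], scale(axis, dot([0, 0, 1], axis)));
  if (Math.hypot(...ref) < 1e-9) ref = [1, 0, 0]; // vertical tube: pick any reference
  ref = norm(ref);
  // Signed angle from ref to basisZ, measured around the axis.
  const angle = Math.atan2(dot(cross(ref, basisZ), axis), dot(ref, basisZ));
  return angle * 180 / Math.PI;
}

// The tube from the question: BasisX = (1,0,0), BasisZ = (0, -0.342020143, 0.939692621)
const roll = rollDegrees([1, 0, 0], [0, -0.342020143, 0.939692621]);
// roll ≈ 20
```

Because the reference is rebuilt from the axis every time, the same computation works whatever the tube's yaw and pitch are; the sign convention (which way counts as positive roll) depends on the handedness you choose.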
I am currently working on some radiotherapy plan generation and I am trying to retrieve the beam source position from a DICOM RTPLAN file and point it on a related CT-Scan 3D image.
With the RTPLAN, I am able to access the isocenter position of each beam, but this is in patient coordinates, and I am not quite sure how to find the coordinates in the basis used by the 3D CT-scan image.
I have access to the ImagePosition and ImageOrientation attributes of the CT's DICOM. Moreover, the CT DICOM-like file (in practice a JSON regrouping some DICOM information) and the RTPLAN share the same FrameOfReference (does that mean they share the patient coordinate system?).
What does ImagePosition truly indicate? As far as I can understand, it is the position of the point (0, 0, 0) of the CT 3D image in patient coordinates. I am also a bit confused about the ImageOrientation attribute.
As you can read in this answer here, the ImagePosition attribute gives you the x, y, and z coordinates of the upper left hand corner of the image, in mm, i.e. the coordinates of the center of the upper left pixel of the image.
For your convenience I copy-paste below a table from the DICOM Documentation Part 3 (page 561).
The ImageOrientation attribute, as described in the documentation, gives you the direction cosines of the first row and the first column with respect to the patient. To understand this attribute better, take a look at the very useful website DICOM is Easy by Roni Zaharia. In one of his images (below), you can clearly see that when the attribute is not equal to 1\0\0\0\1\0, the coordinate system of the image is not aligned with the coordinate system of the patient. To align them, you have to use the direction cosines provided by the attribute and apply a rotation (take a look at the transformation matrix at page 562 of the aforementioned DICOM documentation).
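The inverse mapping (patient coordinates back to pixel indices, which is what the question needs) can be sketched like this. All attribute values below are assumed for illustration; rowDir/colDir are the two triplets from ImageOrientation, origin is ImagePosition, and spacing is PixelSpacing:

```javascript
// Convert a point in patient coordinates (mm) to fractional (column, row)
// pixel indices for one CT slice.
// rowDir = direction of the first ROW (i.e. along increasing column index),
// colDir = direction of the first COLUMN (along increasing row index),
// spacing = PixelSpacing as [rowSpacing, columnSpacing] in mm.
function patientToPixel(point, origin, rowDir, colDir, spacing) {
  const d = point.map((v, i) => v - origin[i]); // offset from upper-left pixel center
  const dot = (a, b) => a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
  const col = dot(d, rowDir) / spacing[1]; // distance along a row uses column spacing
  const row = dot(d, colDir) / spacing[0]; // distance down a column uses row spacing
  return [col, row];
}

// Axial slice with identity orientation (1\0\0\0\1\0), 0.5 mm pixels,
// upper-left pixel center at (-250, -250, 40) mm — all assumed values.
const [c, r] = patientToPixel([-200, -150, 40], [-250, -250, 40],
                              [1, 0, 0], [0, 1, 0], [0.5, 0.5]);
// c = 100, r = 200
```

With a non-identity ImageOrientation the same dot products still work, because they simply express the patient-space offset in the slice's own row/column basis. For the beam source you would also need to pick the slice whose position along the slice normal is closest to the point.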
Let me first define my problem,
I am working on an indoor navigation problem. So I constructed a graph to simulate possible paths. I can easily calculate the shortest path with Dijkstra and draw it on a map. So far, so good.
But this is not enough,
I need to give the user instructions to navigate.
For example:
"Turn Right"
"Turn Left"
"Go on from the left"
To give these kind of instructions I need to know which path is on the left and which path is on the right.
And here is what I have to work with:
1. An undirected weighted graph
2. The shortest path, which contains vertices and edges
3. The X and Y coordinates of each vertex
By the way, I will do this in .NET using beacon technology.
Do you know how to distinguish left and right edges so I can give direction messages to the user?
Thanks.
The easiest way I can think of is to take the cross product of the vector representing the direction the player is facing/traveling and the vector representing the direction you want the player to go in. Whether the player must turn left or right depends on whether the result's Y-coordinate is positive or negative, but which is which depends on the handedness of the coordinate system. I would just pick one and try it. You have a 50% chance of being right, and it's easy to reverse if you're wrong.
Edit:
Here we see that a×b points up when a is to the right of b. However, we also see that -a×b points down. So, if a were pointing in the opposite direction—to the left—then the cross product would point down.
The dot product approach does not work in two dimensions. For this case you want to use the sign of the determinant of the matrix [A B], where A and B are your column vectors. A pseudo-code would be
c=sign(det([A B]))
Here, c>0 means that B is to the left of A. This will switch depending on the order of A and B in your matrix.
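The determinant test above can be sketched as follows, where A is the current travel direction and B the direction toward the candidate edge (names are mine; the convention shown assumes a standard x-right / y-up plane):

```javascript
// Sign of det([A B]) for 2-D column vectors A and B.
// Positive means B is counter-clockwise from A, i.e. to the left
// in an x-right / y-up coordinate system.
function side(A, B) {
  const det = A[0] * B[1] - A[1] * B[0];
  return Math.sign(det); // 1 = left, -1 = right, 0 = collinear
}

// Heading east (+x): a branch heading north (+y) is to the left.
const s = side([1, 0], [0, 1]); // 1
```

Note that this 2-D determinant is the same number as the Y-component cross-product test in the other answer; it is just written for column vectors in the plane.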
I use the Google Maps Javascript API v3 for calculating the directions from my current position to my end destination in an iPad PhoneGap Application.
Now I want to make a function that automatically recalculates the directions if you take a wrong turn. That means I will place a marker at the current position on the map and then check whether it's near the directions polyline; if not, recalculate the route.
The directions are rendered in a canvas element, and I couldn't find anything about how to compare them with my markers…
Any idea?
The following line works. Just set the tolerance as desired.
google.maps.geometry.poly.isLocationOnEdge (point, polyline, tolerance)
It will return true if the point lies on or within the given tolerance of the polyline/polygon. Let me know if this doesn't work.
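To see the idea behind that call without loading the geometry library, the check is essentially a point-to-segment distance test against each leg of the path. A planar sketch (the real isLocationOnEdge works on LatLng objects with a tolerance in degrees; here everything is plain [x, y] pairs):

```javascript
// Distance from point p to segment ab, all as [x, y] pairs.
function distToSegment(p, a, b) {
  const dx = b[0] - a[0], dy = b[1] - a[1];
  const len2 = dx * dx + dy * dy;
  // Parameter of p's projection onto the line, clamped to the segment.
  const t = len2 === 0 ? 0 :
    Math.max(0, Math.min(1, ((p[0]-a[0])*dx + (p[1]-a[1])*dy) / len2));
  return Math.hypot(p[0] - (a[0] + t*dx), p[1] - (a[1] + t*dy));
}

// True if p is within tolerance of any segment of the path.
function isOnPath(p, path, tolerance) {
  for (let i = 0; i + 1 < path.length; i++) {
    if (distToSegment(p, path[i], path[i+1]) <= tolerance) return true;
  }
  return false;
}

const path = [[0, 0], [10, 0], [10, 10]];
const onRoute = isOnPath([5, 0.5], path, 1);  // true
const offRoute = isOnPath([5, 3], path, 1);   // false
```

In the app, you would run this check (or isLocationOnEdge itself) against the route polyline whenever the position marker moves, and trigger a re-route when it returns false.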
App works this way:
The user enters a starting location and a distance, then chooses to draw circles or lines over the roads. Circles work fine. Lines used to work.
With Lines, the code finds the lat/long of the starting location, then the points N, S, E, and W of the origin at the distance set by the user (say, 100 km). Starting with the N destination, the code calls google.maps.DirectionsService() to get directions from the origin to N. This returns an array of lat/longs in route.overview_path.
NOTE: I COULD use directionsRenderer() to draw the route, BUT the distance drawn would be greater than the distance set by the user. Drawing the entire route from the origin to the point N might be 124 km over the roads, and I just want to draw 100 km.
Instead, I step through the route.overview_path[] array, checking the distance between each point and the point of origin and adding each point to a new array. When the distance exceeds the distance set by the user, I stop, pop off the last element, then create a new Polyline based on this second, smaller array.
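That truncation step can be sketched like this (plain JavaScript with a haversine helper; in the real app you would use google.maps.geometry.spherical.computeDistanceBetween on the overview_path LatLngs instead):

```javascript
// Great-circle distance in km between two [lat, lng] points (haversine).
function haversineKm([lat1, lng1], [lat2, lng2]) {
  const R = 6371, toRad = d => d * Math.PI / 180;
  const dLat = toRad(lat2 - lat1), dLng = toRad(lng2 - lng1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Keep only the leading points of path that lie within maxKm of the origin,
// mirroring the walk through route.overview_path described above.
function truncatePath(path, maxKm) {
  const origin = path[0];
  const kept = [];
  for (const pt of path) {
    if (haversineKm(origin, pt) > maxKm) break;
    kept.push(pt);
  }
  return kept;
}

// Points at roughly 0, 55, 111, and 222 km north of the origin; a 100 km
// limit keeps only the first two.
const kept = truncatePath([[0, 0], [0.5, 0], [1, 0], [2, 0]], 100);
```

The kept array is then what gets handed to new google.maps.Polyline({ path: ... }).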
I've spent the entire day in Chrome's developer tools walking through the JavaScript, setting breakpoints, watching locals, etc. The array of points passed to google.maps.Polyline({}) is a good array of unique points. I just cannot figure out why they aren't rendering.
Ultimately, the code used to draw four lines starting at the point of origin: one heading north, one east, one south, and one west.
The code is here: http://whosquick.com/RunViz.html
Thank you for your attention.
Nevermind. Solved it.
var objGeo = new LatLon(Geo.parseDMS(myroute.overview_path[0].Pa), Geo.parseDMS(myroute.overview_path[0].Qa));
I had inadvertently switched Pa with Qa.