IDL: Can I get the coordinates of a point on my plot's cartesian plane?

I have a plot like this:
http://i.imgur.com/i9xp5.png
I need the data coordinates of points in order to plot wind barbs.
Now, if I wanted a wind barb to be drawn at x=100, y=20, is there a way I can obtain the data coordinates of that point (or of other points) on my plot?

Would the ARROW procedure be of any use to you? It looks like you could
just pass it your data coordinates (x0=100, y0=20) for the base of the arrow,
and another set of coordinates (x1, y1) representing the length and direction at the arrowhead end. It should take care of placing and scaling them properly on your plot.
But I don't think ARROW gives you any control over the arrow style beyond color,
line thickness, and filled vs. unfilled. If you need a different shape, I think
you might have to express it as an array of XY points defining the vertices of
your custom arrow symbol, then rotate, scale, and translate them, and draw line
segments between the symbol vertices with PLOTS.
The DATA and DEVICE graphics keywords tell the various plotting routines whether
the coordinates are in data coordinates or device coordinates. I'm not sure from
your description which is the appropriate setting, but one of them should do what you want.
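If ARROW does fit, a minimal sketch of the call might look like this (assuming IDL's standard ARROW procedure and its /DATA keyword; the wind offsets here are made up for illustration):
; minimal sketch, assuming IDL's built-in ARROW procedure
x0 = 100 & y0 = 20          ; base of the barb, in data coordinates
x1 = x0 + 5 & y1 = y0 + 2   ; tip, offset by made-up (u, v) wind components
ARROW, x0, y0, x1, y1, /DATA, /SOLID, THICK=2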

Related

Rendering combined 2D and 3D maps in R

I saw this figure in Leland Wilkinson's book The Grammar of Graphics and was wondering how I could go about creating something similar in R.
I suspect this could be done using rgl or persp3d, but a couple of aspects are unclear to me, such as how to create the conformal mapping shown in the coordinates of the XY plane, and how to create the 2D color map in a 3D context.
Any advice would be much appreciated. Thanks!
That should be possible with rgl, but there might be some snags in the details. Here's the outline:
The green surface does not appear to have a rectangular base,
so you'll pass matrices for all of x, y and z coordinates to surface3d() to draw it.
I can't tell if the map is on a flat surface with curved edges, or if it's a curved surface. In either case, you plot the surface with a 2D texture showing the map and the contours.
a. To produce that 2D texture, use whatever mapping software you've got, and output the image to a PNG file.
b. To put it on the surface, use surface3d() with arguments texture = <filename>, texture_s = ..., texture_t = ..., where texture_s and texture_t are set to coordinates in the image (bottom left = (0,0), top right = (1,1)) corresponding to each x and y location. The z value is either constant or varying, depending on whether you want it flat or curved.
The axes will be drawn with axis3d.
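As a rough illustration of step 2, here is a minimal sketch; the surface shape, the grid size, and the file name map.png are all made-up placeholders:
library(rgl)
n <- 50
x <- matrix(seq(0, 1, length.out = n), n, n)               # x varies down the rows
y <- matrix(seq(0, 1, length.out = n), n, n, byrow = TRUE) # y varies across the columns
z <- 0.2 * sin(pi * x) * sin(pi * y)                       # curved surface; use 0 * x for flat
open3d()
surface3d(x, y, z,
          texture = "map.png",  # the 2D map with contours, rendered to PNG beforehand
          texture_s = x,        # image s coordinate (0..1) at each vertex
          texture_t = y,        # image t coordinate (0..1) at each vertex
          col = "white")        # white so the texture colours show unmodified
axes3d()                        # or axis3d() per axis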

Building a 3d plot function for Antenna Emission Measurements

I have as input three [1:360, 2] tables that contain measurements of the three planes X, Y, Z, with each plane having 1:360-degree polar coordinates. A good example of one of those tables is shown in the image here.
image example
I would like to plot those three "circles" as circles on different planes (x, y, z), as shown in the link below.
click here
Last night I wrote a small example using the rgl library (though perhaps I should move this code to ggplot2) that plots those three circles by converting the polar coordinates to cartesian ones; for simplicity, assume that all three circles have a radius of 1. You can copy-paste the code below to see what I mean.
require("rgl")
degreeToRadian <- function(degree){
  return (0.01745329252 * degree)  # 0.01745329252 = pi/180
}
turnPolarToX<-function(Amplitude,Coordinate){
return (Amplitude*cos(degreeToRadian(Coordinate)))
}
turnPolarToY<-function(Amplitude,Coordinate){
return (Amplitude*sin(degreeToRadian(Coordinate)))
}
# first circle
X1<-turnPolarToX(1,1:360)
Y1<-turnPolarToY(1,1:360)
Z1<-rep(0,360)
# second circle
X2<-turnPolarToX(1,1:360)
Y2<-rep(0,360)
Z2<-turnPolarToY(1,1:360)
# third circle
X3<-rep(0,360)
Y3<-turnPolarToX(1,1:360)
Z3<-turnPolarToY(1,1:360)
Min<-min(X1,Y1,Z1,X2,Y2,Z2,X3,Y3,Z3)
Max<-max(X1,Y1,Z1,X2,Y2,Z2,X3,Y3,Z3)
# set up the scene with the first circle, then add the other two to it
plot3d(X1, Y1, Z1, xlim = c(Min, Max), ylim = c(Min, Max), zlim = c(Min, Max),
       box = TRUE, axes = TRUE, col = "red", type = "l")
plot3d(X2, Y2, Z2, add = TRUE, col = "green", type = "l")
plot3d(X3, Y3, Z3, add = TRUE, col = "blue", type = "l")
The problem I have now is that the axes contain cartesian coordinate values that are not meaningful to the user. I am thinking about how I can remove the cartesian coordinates and instead use colors that reflect the amplitude value (from the initial information in polar coordinates) of each vector, rather than the x, y values.
I would like to thank you in advance for your reply.
Regards
Alex
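One possible direction, as a rough sketch building on the code above: turn off the axis decorations and colour the curve by amplitude. The varying amplitude amp is made up here, standing in for the real measurements:
amp <- 0.5 + 0.5 * abs(sin(degreeToRadian(1:360)))  # made-up amplitudes
X1 <- turnPolarToX(amp, 1:360)
Y1 <- turnPolarToY(amp, 1:360)
Z1 <- rep(0, 360)
pal <- colorRampPalette(c("blue", "red"))(100)      # low -> high amplitude
idx <- 1 + floor(99 * (amp - min(amp)) / diff(range(amp)))  # map amplitude to 1..100
plot3d(X1, Y1, Z1, box = TRUE, axes = FALSE,        # no cartesian axis values
       xlab = "", ylab = "", zlab = "",
       col = pal[idx], type = "l")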

spatstat with R: Error with defining the window of a spatial point pattern

Please see the image below. It was created by first converting a two-column data frame into a study window (call it study_window) using as.owin, and then plotting another two-column data frame (call it study_points) on top of the window.
It is clear that the points are lying inside the window! However, when I call
ppp(study_points[,1],study_points[,2],win = study_window)
it says that most of my points are rejected as lying outside the window. Could someone tell me what is going on?
Thanks!
First you could have taken a step back to check that the window object study_window was what you intended. You could have plotted or printed this object in its own right. A plot of study_window would show (and you can also see this in the plot that you supplied in the question) that the boundary of the window is a disconnected scatter of points, not a joined-up polygon. A printout of study_window would have revealed that it is a binary pixel mask, with a very small area, rather than a polygonal region. The help for as.owin explains that, when as.owin is applied to a dataframe containing columns of x,y coordinates, it interprets them as pixel coordinates of the pixels that lie inside the window.
So, what has happened is that as.owin has created a window consisting of one pixel at each of the (x,y) locations in the data frame. That's not what you wanted; the (x,y) coordinates were meant to be the vertices of a polygonal boundary.
To get the desired window, do something like study_window <- owin(poly=df) where df is the data frame of (x,y) coordinates of vertices.
To do it all in one step, type something like mypattern <- ppp(x, y, poly=df) where x and y are the vectors of coordinates of the points in the window.
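A minimal, self-contained sketch of that fix; the square boundary and the random points are made up, and note that spatstat expects the outer boundary vertices in anticlockwise order:
library(spatstat)
df <- data.frame(x = c(0, 10, 10, 0),   # made-up boundary vertices,
                 y = c(0, 0, 10, 10))   # listed anticlockwise
study_window <- owin(poly = df)
x <- runif(20, 0, 10)                   # made-up points standing in for study_points
y <- runif(20, 0, 10)
mypattern <- ppp(x, y, window = study_window)  # no points rejected now
plot(mypattern)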
So I solved the problem by using owin and specifying the region to be a polygon, instead of using as.owin. I have no idea what the difference between owin and as.owin is, but I am just glad it worked...

Disperse points in a 2D visualisation

I have a set of points like this (that I have clustered using R):
180.06576696, 192.64378568
180.11529253999998, 192.62311824
180.12106092, 191.78020965999997
180.15299478, 192.56909828000002
180.2260287, 192.55455869999997
These points are dispersed around a center point or centroid.
The problem is that the points are very close together and are, thus, difficult to see.
So, how do I move the points apart so that I can distinguish each point more clearly?
Thanks,
s
Maybe I'm overlooking some intricacy here, but...multiply by 10?
EDIT
Assuming the data you listed above are Cartesian (x,y) coordinate pairs, you can visualize them as a scatter plot using Google Charts. I've rounded your data to 3 decimal places, because Google Charts doesn't appear to handle higher precision than that.
I don't know the coordinates for your central point. In the above chart, I'm assuming it is somewhere nearby and not at (0,0). If it is at (0,0), then I imagine it will be difficult to visualize all of the data at once without some kind of "zoom-in" feature, scaling the data, or a very large screen.
slotishtype, without going into code, I think you first need to add the following tweaking parameters to be used by the visualization code.
Given an x by y display box, fill the entire box, with input parameters in [0.0, 1.0]:
overlap: the allowance for points to be placed on top of each other
completeness: how important it is to display all of your data points
centroid_display: how important it is to see the centroid in the same output
These produce the dependent parameter
scale: the ratio between display distances and numerical distances
You will need code to calculate the distance(s) to the centroid, as you said, and also the distances between data points, adjusting the output based on the chosen input parameters.
I take inspiration from the fundamentals in the GraphViz dot manual. Look at the "Drawing Orientation, Size and Spacing" on p12.
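As a concrete starting point, here is a rough R sketch of the simplest version (the "multiply by 10" idea): exaggerate each point's offset from the centroid by a scale factor, using the five points from the question:
pts <- cbind(
  x = c(180.06576696, 180.11529254, 180.12106092, 180.15299478, 180.22602870),
  y = c(192.64378568, 192.62311824, 191.78020966, 192.56909828, 192.55455870))
centroid <- colMeans(pts)
scale <- 10                                   # tune by eye, or derive from the parameters above
offsets <- sweep(pts, 2, centroid)            # each point's offset from the centroid
spread <- sweep(scale * offsets, 2, centroid, "+")  # push points apart, keep the centre fixed
plot(spread, pch = 19, col = "blue")
points(centroid[1], centroid[2], pch = 3)     # mark the centroid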

Polygon math

Given a list of points that form a simple 2d polygon oriented in 3d space and a normal for that polygon, what is a good way to determine which points are specific 'corner' points?
For example, which point is at the lower left, or the lower right, or the top most point? The polygon may be oriented in any 3d orientation, so I'm pretty sure I need to do something with the normal, but I'm having trouble getting the math right.
Thanks!
You would need more information in order to make that decision. A set of (co-planar) points and a normal is not enough to give you a concept of "lower left" or "top right" or any such relative identification.
Viewing the polygon from the direction of the normal (so that it appears as a simple 2D shape) is a good start, but that shape could be rotated to any arbitrary angle.
Is there some other information in the 3D world that you can use to obtain a coordinate-system reference?
What are you trying to accomplish by knowing the extreme corners of the shape?
Are you looking for a bounding box?
I'm not sure the normal has anything to do with what you are asking.
To get a bounding box, keep 4 variables: MinX, MaxX, MinY, MaxY.
Then loop through all of your points, checking the X values against MaxX and MinX, and the Y values against MaxY and MinY, updating them as needed.
When the loop is complete, your box is defined by (MinX, MinY) as one corner and (MaxX, MaxY) as the diagonally opposite corner; a sketch of the loop follows.
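A quick sketch of that loop, in R to match the other examples on this page (the question itself is language-agnostic):
bounding_box <- function(pts) {        # pts: an n x 2 matrix of (x, y) points
  minx <- maxx <- pts[1, 1]
  miny <- maxy <- pts[1, 2]
  for (i in seq_len(nrow(pts))) {      # one pass, updating the extremes as we go
    minx <- min(minx, pts[i, 1]); maxx <- max(maxx, pts[i, 1])
    miny <- min(miny, pts[i, 2]); maxy <- max(maxy, pts[i, 2])
  }
  c(MinX = minx, MinY = miny, MaxX = maxx, MaxY = maxy)
}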
Response to your comment:
If you want your box after a projection, you need the "transformed" points; then apply the bounding box loop as stated above.
"Transformed" usually implies 2D screen coordinates after a projection (scene render), but it could also mean the 2D points on any plane that you projected onto.
A possible algorithm (sketched in code after this list) would be
Find the normal, which you can do by taking the cross product of vectors connecting two pairs of different corners
Create a transformation matrix to rotate the polygon so that it is planar in XY space (i.e. normal aligned along the Z axis)
Calculate the coordinates of the bounding box, or whatever other definition of corners you are using (as the polygon is now aligned in 2D space this is a considerably simpler problem)
Apply the inverse of the transformation matrix used in step 2 to transform these coordinates back to 3D space.
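A rough sketch of steps 1 through 4 for a planar polygon; the example quadrilateral is made up, and the degenerate case of a normal already along the Z axis is only handled crudely:
rotate_to_z <- function(n) {            # rotation matrix taking unit vector n to (0, 0, 1)
  n <- n / sqrt(sum(n^2))
  v <- c(n[2], -n[1], 0)                # v = n x z
  s2 <- sum(v^2)                        # sin^2 of the rotation angle
  cz <- n[3]                            # cos of the rotation angle (n . z)
  if (s2 < 1e-12) return(diag(3))       # n already along +/-Z; flip the polygon first if -Z
  V <- matrix(c(0, -v[3], v[2],
                v[3], 0, -v[1],
                -v[2], v[1], 0), 3, 3, byrow = TRUE)
  diag(3) + V + V %*% V * ((1 - cz) / s2)   # Rodrigues' rotation formula
}
# step 1: normal from the cross product of two edge vectors
P <- rbind(c(0, 0, 0), c(1, 0, 1), c(1, 1, 1), c(0, 1, 0))  # made-up quad on the plane z = x
e1 <- P[2, ] - P[1, ]
e2 <- P[4, ] - P[1, ]
n <- c(e1[2]*e2[3] - e1[3]*e2[2],
       e1[3]*e2[1] - e1[1]*e2[3],
       e1[1]*e2[2] - e1[2]*e2[1])
# step 2: rotate so the polygon lies in a constant-z plane
R <- rotate_to_z(n)
P2 <- P %*% t(R)
# step 3: 2D bounding box corners in the rotated frame
corners <- cbind(expand.grid(x = range(P2[, 1]), y = range(P2[, 2])), z = P2[1, 3])
# step 4: the inverse rotation (the transpose) takes the corners back to 3D
corners3d <- as.matrix(corners) %*% R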
I believe that your question requires some additional information - namely the coordinate system with respect to which any point could be considered "topmost", or "leftmost".
Don't forget that whilst the normal tells you which way the polygon is facing, it doesn't on its own tell you which way is "up". It's possible to rotate (or "roll") around the normal vector and still be facing in the same direction.
This is why most 3D rendering systems have a camera which contains not only a "view" vector, but also "up" and "right" vectors. Changes to the latter two achieve the effect of the camera "rolling" around the view vector.
Project it onto a plane and get the bounding box.
I have a silly idea, but at the risk of gaining a negative point, I'll give it a try:
Get the minimum/maximum value from each three-dimensional axis of each point on your 2D polygon. A single pass with a loop/iterator over the list of values for every point will suffice, simply replacing the minimum and maximum values as you go. The end result is a list that has the "lowest" X, Y, Z coordinates and the "highest" X, Y, Z coordinates.
Iterate through this list of min/max values to create each point ("corner") of a "bounding box" around the object. The result should be a box that always contains the object regardless of the axis examined or the orientation (no point on the polygon will ever exceed the maximums or minimums you collect).
Then get the distance of each "2D polygon" point to each corner location on the "bounding box"; the shorter the distance between points, the "closer" it is to that "corner".
Far from optimal, certainly crummy, but certainly quick. You could probably capture this during the object's rotation, by simply looking for the min/max of each rotated x/y/z value, and retaining a list of those values ahead of time.
If you can assume that there are some constraints regarding the shapes, then you might be able to get away with knowing less information. For example, if your shape is the composition of a small square with a long thin triangle on one side (i.e. a simple symmetrical geometry), then you could compare the distance from each list point to the "center of mass". The largest distance would identify the tip of the cone, the second largest would be the two points farthest from the tip of the cone, and so on. If there is some order to the list, like points being entered in counterclockwise order (about the normal), you could identify all the points.
This sounds like a bit of computation, so it might be reasonable to include some extra info with your shapes, like the "center of mass" and a reference point that is located "up" above the COM (but not along the normal). This will give you an "up" vector that you can cross with the normal to define some body coordinates, for example. Also, the normal can be defined by an ordering of the point list. If you can't assume anything about the shapes (or even if the shapes were symmetrical, for example), then you will need more data. It depends on your constraints.
If you know that the polygon in 3D is "flat", you can use the normal to transform all 3D points of the vertices to a 2D representation (of the points with respect to the plane in which the polygon is located). But this still leaves you with defining the origin of this coordinate system (which doesn't really matter for your problem) and the orientation of at least one of the axes (if you want orthogonal axes you can still rotate them around your chosen origin), and this is where the trouble starts.
I would recommend using the Y axis of your 3D coordinate system: project it onto your plane and use the resulting direction as "up". But then you are in trouble in case your plane is orthogonal to the Y axis (in which case you might want to use the projected Z axis as "up").
The math is rather simple: you can use the inner product (a.k.a. scalar product) for the projection onto your plane, and some matrix work to convert to the 2D coordinate system. You can find all of it by googling for raytracer algorithms for polygons.
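A rough sketch of that recipe; the function name plane_coords is made up, and it uses the projected-Y-axis "up" with the Z-axis fallback described above:
plane_coords <- function(P, n, origin = P[1, ]) {  # P: n x 3 matrix of vertices, n: plane normal
  n <- n / sqrt(sum(n^2))
  up <- c(0, 1, 0)
  up <- up - sum(up * n) * n           # inner product projects the Y axis onto the plane
  if (sum(up^2) < 1e-12) {             # plane orthogonal to Y: use the Z axis instead
    up <- c(0, 0, 1)
    up <- up - sum(up * n) * n
  }
  up <- up / sqrt(sum(up^2))
  right <- c(up[2]*n[3] - up[3]*n[2],  # right = up x n completes an orthogonal 2D frame
             up[3]*n[1] - up[1]*n[3],
             up[1]*n[2] - up[2]*n[1])
  off <- sweep(P, 2, origin)           # vertex positions relative to the chosen origin
  cbind(x = off %*% right, y = off %*% up)   # 2D coordinates via inner products
}
P <- rbind(c(0, 0, 0), c(1, 0, 1), c(1, 1, 1), c(0, 1, 0))  # made-up quad on the plane z = x
plane_coords(P, c(-1, 0, 1))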
