I need to produce an animated GIF from the OpenGL visualization package rgl in R. The recipe is described in this link at genomearchitecture.com. First we create a 3D cube with our image; it was produced with the plot3D package via scatter3D() calls. Next we open an rgl window with it by
require(plot3Drgl)
plotrgl()
The essential part of the recipe from the link above is:
for (i in 1:90) {
  view3d(userMatrix = rotationMatrix(2*pi * i/90, 1, -1, -1))
  rgl.snapshot(filename = paste("animation/frame-",
                                sprintf("%03d", i), ".png", sep = ""))
}
That is, we create 90 rgl snapshots and rotate the object using the parameters of the userMatrix=rotationMatrix() directive. However, this directive performs the rotation around one specific fixed axis, so the whole rotating animation looks awkward, especially when the object is just a 3D geographical map, like on this image.
We do not want the image to tilt and go upside down; we just need to rotate it around the vertical Z axis. However, every attempt to modify the directive view3d(userMatrix=rotationMatrix(2*pi * i/90, 1, -1, -1)) by changing the vector (1,-1,-1) produces the cube with its Z axis facing us in the initial position.
In the static framework the position of the cube can be controlled intuitively by two polar angles via the directive
plotdev(theta=10,phi=15)
It would be good to program the snapshots by running theta from 0 to 2*pi with some reasonable phi, but setting theta and phi instead of userMatrix works well only for the static image. In the rgl window it again produces something unexpected; for example, theta=0, phi=0 again aims the Z axis towards us. How can the view-angle directive be modified to make the cube rotate around its Z axis?
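For reference, the theta sweep described above would look something like this on the static device (a sketch: plot3D's plotdev() takes angles in degrees, and the frame naming mirrors the rgl loop above):

```r
# sketch: sweep theta on the static plot3D device (angles in degrees)
for (i in 1:90) {
  png(sprintf("animation/frame-%03d.png", i))
  plotdev(theta = 360 * i/90, phi = 15)  # redraws the last plot3D plot
  dev.off()
}
```

As noted, though, the same theta/phi settings do not carry over once plotrgl() has taken over the view.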
I saw this figure in Leland Wilkinson's book The Grammar of Graphics and was wondering how I could go about creating something similar in R.
I suspect this could be done using rgl or persp3d, but a couple of aspects are unclear to me, such as how to create the conformal mapping shown in the coordinates of the XY plane, and how to create the 2D color map in a 3D context.
Any advice would be much appreciated. Thanks!
That should be possible with rgl, but there might be some snags in the details. Here's the outline:
1. The green surface does not appear to have a rectangular base, so you'll pass matrices for all of the x, y and z coordinates to surface3d() to draw it.
2. I can't tell whether the map is on a flat surface with curved edges or on a curved surface. In either case, you plot the surface with a 2D texture showing the map and the contours.
a. To produce that 2D texture, use whatever mapping software you've got and output the image to a PNG file.
b. To put it on the surface, use surface3d() with arguments texture = <filename>, texture_s = ..., texture_t = ..., where texture_s and texture_t are set to coordinates in the image (bottom left = (0,0), top right = (1,1)) corresponding to each x and y location. The z value is either constant or varying, depending on whether you want the surface flat or curved.
3. The axes can be drawn with axis3d().
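A minimal sketch of steps 2a-2b, assuming a pre-made map.png texture; the grid values and the file name are placeholders, not part of the original question:

```r
library(rgl)
# placeholder 5 x 5 grid; replace with your real surface coordinates
x <- matrix(rep(seq(0, 1, length.out = 5), 5), 5, 5)
y <- t(x)
z <- 0.2 * x * y  # constant for a flat surface, varying for a curved one
# texture coordinates: bottom left = (0,0), top right = (1,1)
surface3d(x, y, z,
          texture = "map.png",  # the PNG produced by your mapping software
          texture_s = x, texture_t = y,
          col = "white")
axis3d("x"); axis3d("y"); axis3d("z")
```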
I have a mesh whose origin point is at the bottom. I want to move it by -132 on the Z axis. If I change the position of the mesh, it ends up in the correct position, but if I translate it on the Z axis by -132, the mesh is off by 20. Why am I not getting the same result?
The way I am translating the matrix:
var matrix = new THREE.Matrix4().makeTranslation( 0, 0, -132 );
mesh.geometry.applyMatrix( matrix );
Here is the image of the mesh:
And here is the image after the translation by -132; it's off by 20.
Some more info:
Position of the mesh is at:
419, -830, 500
and Rotation is:
0, -0.52, 0
So the Z coordinate is at 500, but I have to move it down by 132. If I move it by changing the position down by 132, it ends up in the correct position. But I want to translate the matrix so that the origin point moves down by 132.
Here is also the matrix:
"matrix": [0.8660253882408142,0,0.5,0,0,1,0,0,-0.5,0,0.8660253882408142,0,419,-830,500,1]
Update after further clarifications and chat
The whole point is that 3D transformations are not commutative: translating and then rotating is different from rotating and then translating (they produce different results). In some special cases the two can coincide (e.g. when origins are at (0,0,0), and so on), but in general the results differ.
Furthermore, there is the issue of relative coordinates when nesting 3D objects inside other 3D objects: the final (world) transform of an object depends on whether it is nested inside another object or not.
Finally, the actual mesh position (local transform) versus the positions of the vertices plays a role in the way the mesh (and the geometry) will eventually be projected onto 2D, so the projection angle changes (see above).
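The non-commutativity point can be checked with a few lines of plain arrays (no THREE needed; a 90-degree rotation about Z and a translation along X are enough to show it):

```javascript
// rotate 90 degrees about Z: (x, y, z) -> (-y, x, z)
function rotZ90(p) {
  return [-p[1], p[0], p[2]];
}
// translate by t
function translate(p, t) {
  return [p[0] + t[0], p[1] + t[1], p[2] + t[2]];
}

const p = [1, 2, 0];
const t = [10, 0, 0];
const rotThenTrans = translate(rotZ90(p), t);  // [8, 1, 0]
const transThenRot = rotZ90(translate(p, t));  // [-2, 11, 0]
// the two orders give different points, as described above
```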
As clarified, the question is about shifting a mesh in such a way that the origin shifts, so further transformations (e.g. rotations) can be done with respect to this shifted origin. A standard way to achieve this behaviour in 3D programming is to add a pivot or wrapper object around the mesh, position the mesh inside relative to the wrapper, and then apply any further transformations (e.g. rotations) to the wrapper itself. This gives the effect that the mesh rotates with respect to another axis (instead of its center).
The way this works is that the wrapper indeed rotates around its own origin (i.e. at (0,0,0)), but the mesh inside is shifted, so it appears to rotate with respect to another axis. A common example is modelling a 3D car wheel, which can rotate around its own axis (i.e. spinning) but also translates along with the rest of the car. So one adds a wrapper around the wheel; the wrapper is translated with the rest of the car, while the wheel is rotated inside the wrapper as if no translation were present (a kind of reverse of the situation you have here, but the same idea).
You may optionally want to check the MOD3 pivot modifier, which creates custom pivots as origin points/axes (PS: I'm the author of the port). A wheel modifier is also included in MOD3, which solves what is described above as the wheel problem in 3D.
To use a wrapper 3D Object in your code do something like this:
// create the pivot wrapper
var pivot = new THREE.Object3D();
pivot.name = "PIVOT";
pivot.add( mesh );
scene.add( pivot );
// shift the mesh inside pivot
mesh.position.set(0,0,-132);
// position wrapper in the scene,
// position in the place where mesh would be
pivot.position.set(419, -830, 500);
pivot.rotation.set(0, -0.52, 0);
// now mesh appears rotated around the z = -132 axis instead of the center
// because the wrapper is actually rotated around its center,
// but its center coincides with the shifted mesh by z = -132
A related answer is here.
I am interested in generating a series of simple 3D scatterplots which include regression planes without interactions, using the scatterplot3d function in R. The following code generates almost what I am after, with one problem: in many cases the regression plane extends outside of the bounding box (e.g. in this case, the corner nearest x = y = z = 0). I tried changing the axis limits to increase the box size, but this does not alter the axis ranges as specified (which, according to the package documentation, is an unfixed bug). Is there a way to either 1) re-draw the box to include the entire plane, or 2) shrink the plane to include only the portion within the box?
# example data
bugs<-c(335.20,8.68,1.94,3.22,21.79,11.16,1618.00,108.76,250.59,400.81,233.86,15.05,274.62,419.21)
max_dq<-c(0.015,0.001,0.001,0.001,0.002,0.007,0.04,0.001,0.014,0.003,0.002,0.006,0.004,0.013)
since_dist<-c(21,58,5,1,1,19,42,33,22,300,240,79,327,42)
library(scatterplot3d)
# 3D plot
reg_plt<-scatterplot3d(max_dq,since_dist,bugs,angle=50)
# regression plane
reg_plt$plane3d(lm(bugs~max_dq+since_dist))
To view my 3D environment, I use the "true" 3D isometric projection (a flat square on the XZ plane, Y is "always" 0). I used the explanation on Wikipedia (http://en.wikipedia.org/wiki/Isometric_projection) to work out how to do this transformation:
The projection matrix is an orthographic projection matrix between some minimum and maximum coordinate.
The view matrix is two rotations: one around the Y-axis (n * 45 degrees) and one around the X-axis (arctan(sin(45 degrees))).
The result looks ok, so I think I have done it correctly.
But now I want to be able to pick a coordinate with the mouse. I have successfully implemented this by rendering coordinates to an invisible framebuffer and then reading the pixel under the mouse cursor to get the coordinate. Although this works fine, I would really like to see a mathematical solution, because I will need it to calculate bounding boxes, frustums of areas on the screen, and things like that.
My instincts tell me to:
- go from screen coordinates to 2D projection coordinates (that is, transform the screen coordinates to coordinates between -1 and +1 on both axes, with y inverted);
- untransform the coordinate with the inverse of the view matrix;
- untransform this coordinate with the inverse of the projection matrix, but, as my instincts tell me, this won't work, as everything will end up with the same Z coordinate.
And yet all the information is available in the isometric view (I know that the Y value is always 0), so I should be able to convert the isometric 2D (x, y) coordinate to a calculated 3D (x, 0, z) coordinate without using scans or anything like that.
My math isn't bad, but this is something I can't seem to grasp.
Edit: in my view, every distinct (x, 0, z) coordinate corresponds to a different (x2, y2) coordinate in the isometric view, so I should be able to simply calculate a way from (x2, y2) back to (x, 0, z). But how?
Anyone?
There is something called project and unproject to transform screen coordinates to world coordinates and vice versa.
You seem to be missing some core concepts here (it's been a while since I did this stuff, so there may be minor errors):
There are 3 kinds of coordinates involved here (there are more; these are the relevant ones): scene, projection and window.
Scene (3D) are the coordinates in your world
Projection (3D) are those coordinates after being transformed by camera position and projection
Window (2D) are the coordinates in your window. They are generated from projection by scaling x and y appropriately and discarding z (z is still used for “who’s in front?” calculations)
You cannot transform from window to scene with a matrix, as every point in window corresponds to a whole line in scene. If you want (x, 0, z) coordinates, you can generate this line and intersect it with the y = 0 plane.
If you want to do this by hand, generate two points in projection with the same (x, y) and different (arbitrary) z coordinates, and transform them to scene by multiplying by the inverse of your projection transformation. Now intersect the line through those two points with your y-plane and you're done.
Note that there should be a “static” solution (a single formula) to this problem – if you solve this all on paper, you should get to it.
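A minimal sketch of that recipe, using the isometric view described in the question (plain arrays instead of a graphics library; the orthographic projection is taken as the identity to keep it short):

```javascript
// 3x3 row-major matrix helpers
function matMul(a, b) {
  const r = new Array(9);
  for (let i = 0; i < 3; i++)
    for (let j = 0; j < 3; j++)
      r[3 * i + j] = a[3 * i] * b[j] + a[3 * i + 1] * b[3 + j] + a[3 * i + 2] * b[6 + j];
  return r;
}
function matVec(m, v) {
  return [m[0] * v[0] + m[1] * v[1] + m[2] * v[2],
          m[3] * v[0] + m[4] * v[1] + m[5] * v[2],
          m[6] * v[0] + m[7] * v[1] + m[8] * v[2]];
}

// isometric view: rotate 45 degrees about Y, then arctan(sin(45 deg)) about X
const aY = Math.PI / 4;
const aX = Math.atan(Math.sin(aY));
const Ry = [Math.cos(aY), 0, Math.sin(aY),
            0, 1, 0,
            -Math.sin(aY), 0, Math.cos(aY)];
const Rx = [1, 0, 0,
            0, Math.cos(aX), -Math.sin(aX),
            0, Math.sin(aX), Math.cos(aX)];
const view = matMul(Rx, Ry);
// the view is a pure rotation, so its inverse is its transpose
const inv = [view[0], view[3], view[6],
             view[1], view[4], view[7],
             view[2], view[5], view[8]];

// unproject a screen point (sx, sy): take two points on the picking line
// (same x, y; arbitrary z), map them back to scene, intersect with y = 0
function unproject(sx, sy) {
  const w1 = matVec(inv, [sx, sy, 0]);
  const w2 = matVec(inv, [sx, sy, 1]);
  const t = -w1[1] / (w2[1] - w1[1]);
  return [w1[0] + t * (w2[0] - w1[0]), 0, w1[2] + t * (w2[2] - w1[2])];
}

// round trip: project a known (x, 0, z) point, then recover it
const s = matVec(view, [3, 0, 5]);
const back = unproject(s[0], s[1]);  // approximately [3, 0, 5]
```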
I am a graphics programmer from the GKS days trying to use R graphics. I have two questions that relate to transformations in R:
I was wondering if there is an equivalent for building a viewing pipeline in R where one could map a window in world coordinates [wc] to a viewport in device coordinates [dc]. For example, I could specify a transformation t which maps a window of (wcxmin, wcxmax, wcymin, wcymax) to (vpxmin, vpxmax, vpymin, vpymax), where wc is (1000, -50, 40, 90) and vp is (0, 800, 0, 600). The objective is that all graphics calculations are done in wc but the graphics engine renders in dc. In this case it would scale the coordinates appropriately and also flip the x-axis, since wcxmin > wcxmax.
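For concreteness, the mapping described above is just an affine map per axis; a sketch in R (wc2dc is a made-up name, not an existing R facility):

```r
# sketch of the wc -> dc mapping described above
wc <- list(xmin = 1000, xmax = -50, ymin = 40, ymax = 90)
vp <- list(xmin = 0, xmax = 800, ymin = 0, ymax = 600)
wc2dc <- function(x, y) {
  c(x = vp$xmin + (x - wc$xmin) * (vp$xmax - vp$xmin) / (wc$xmax - wc$xmin),
    y = vp$ymin + (y - wc$ymin) * (vp$ymax - vp$ymin) / (wc$ymax - wc$ymin))
}
wc2dc(1000, 40)  # (0, 0): the x axis flips because wcxmin > wcxmax
wc2dc(-50, 90)   # (800, 600)
```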
Is there an equivalent of graphics segments which could then be transformed [scale, shift, rotate, and possibly shear] via a transformation matrix?
I am sure I am missing something very basic in R graphics. I could successfully build such transforms in SVG without any issues. I have been looking at packages like grid, lattice and ggplot2, but have not been able to make much progress.
Thanks.
Here's some sample code for something I am trying to do:
distn<-rnorm(100)
distw<-rweibull(100, shape=2)
dret<-stack(list(norm=distn, weib=distw))
n <- 0
for (idx in levels(dret$ind)) {
  pct <- dret[dret$ind == idx, c('values')]
  # scale and shift the data
  pct <- (pct - min(pct)) / (max(pct) - min(pct))
  if (n == 0) {
    # top left
    par(fig = c(0, 0.5, 0.5, 1))
    limx <- c(0, 1)
  } else {
    # bottom right
    par(fig = c(0.5, 1, 0, 0.5), new = TRUE)
    limx <- c(1, 0)
  }
  fp <- density(pct)
  sfx <- fp$x
  sfy <- (fp$y - min(fp$y)) / (max(fp$y) - min(fp$y))
  sortpct <- sort(pct)
  ecdfpct <- (1:length(sortpct)) / length(sortpct)
  plot(sortpct, ecdfpct, xlim = limx, type = "l", col = "green")
  lines(sfx, sfy, xlim = limx, type = "l", col = "red")
  n <- n + 1
}
I would like to rotate the figure in the bottom right quadrant by -90 degrees.
The 'grid' package does that all the time. Viewports are represented as [0, 1] in both the X and Y directions (and sometimes Z), and the functions convertX and convertY are called to move from user coordinates to grid coordinates. Type help(grid) for a full list of facilities. A third dimension is also represented when using wireframe or levelplot. Transformations via homogeneous coordinates are accomplished via 4 x 4 matrices stored as an item accessed as current.transform(current.viewport()). You can get more detail regarding how those transformation matrices are handled in R by looking at the code in trans3d. I see that @nograpes has already pointed you to the high-level rotation facility in the grid::pushViewport function.
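A minimal sketch of that rotation facility (viewport() has an angle argument; the placement numbers here are illustrative, chosen to match the bottom-right quadrant in your code):

```r
library(grid)
grid.newpage()
# a viewport occupying the bottom-right quadrant, rotated by -90 degrees
pushViewport(viewport(x = 0.75, y = 0.25, width = 0.5, height = 0.5,
                      angle = -90))
grid.rect(gp = gpar(col = "grey"))
# anything drawn now is rotated along with the viewport
grid.lines(x = c(0.1, 0.9), y = c(0.1, 0.9), gp = gpar(col = "red"))
popViewport()
```

For base-graphics plots like yours, the drawing calls would need to be redone with grid equivalents (grid.lines etc.), since par(fig=...) plots cannot be rotated after the fact.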