Adjust the orthographic camera to fit a 3D object (Three.js) - math

I'm building a scene which I want to view through an orthographic camera, from an angle. I do the following:
1. Build the scene.
2. Move the OrbitControls' (camera's) target to the center of the scene.
3. Move the camera by a certain (unit) vector using spherical coordinates.
4. Try to adjust the camera's left/right/top/bottom parameters to keep the object in the view, centered. I have also considered adjusting the zoom.
My simplified, ideally positioned scene looks like this:
So I guess it is a problem of calculating the positions of the object's extremities after the (spherical) transformation and projecting them back into Cartesian coordinates. I tried to use the Euler transform helper, but it depends on the order of transformation for each of the axes. Quaternions are also non-commutative, and I'm lost. Perhaps I need to calculate how the widths/heights of the diagonals would change after the transformation and use those?
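A minimal sketch of one approach to step 4 (not the asker's code; `camera` and `object` are assumed names, and a recent Three.js where Matrix4 has .invert() is assumed): transform all eight corners of the object's world-space bounding box into camera space and use the x/y extremes as the frustum bounds.
// fit an OrthographicCamera to an object's bounding box
camera.updateMatrixWorld();
const toCameraSpace = new THREE.Matrix4().copy(camera.matrixWorld).invert();
const box = new THREE.Box3().setFromObject(object);
let minX = Infinity, maxX = -Infinity, minY = Infinity, maxY = -Infinity;
// check all eight corners: under rotation, the box min/max alone are not enough
for (const x of [box.min.x, box.max.x])
  for (const y of [box.min.y, box.max.y])
    for (const z of [box.min.z, box.max.z]) {
      const corner = new THREE.Vector3(x, y, z).applyMatrix4(toCameraSpace);
      minX = Math.min(minX, corner.x); maxX = Math.max(maxX, corner.x);
      minY = Math.min(minY, corner.y); maxY = Math.max(maxY, corner.y);
    }
camera.left = minX; camera.right = maxX;
camera.top = maxY; camera.bottom = minY;
camera.updateProjectionMatrix();
This sidesteps the Euler/quaternion ordering problem entirely: the camera's world matrix already encodes the spherical placement, so its inverse maps world points straight into camera space.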

Related

Why is translating a matrix on the Z axis not the same as changing the position on the Z axis by the same number?

I have a mesh whose origin point is at the bottom. I want to move it by -132 on the Z axis. If I change the position of the mesh, it ends up in the correct position. But if I translate it on the Z axis by -132, the mesh is off by 20. Why am I not getting the same result?
This is how I am translating the matrix:
matrix = new THREE.Matrix4().makeTranslation( 0, 0, -132 );
mesh.geometry.applyMatrix( matrix );
Here is the image of the mesh:
And here is the image after the translation by 132. It's off by 20.
Some more info:
Position of the mesh is at:
419, -830, 500
and Rotation is:
0, -0.52, 0
So the Z coordinate is at 500, but I have to move it down by -132. If I move it by changing the position down by 132, it ends up in the correct position. But I want to translate the matrix so that the origin point moves down by 132.
Here is also the matrix:
"matrix": [0.8660253882408142,0,0.5,0,0,1,0,0,-0.5,0,0.8660253882408142,0,419,-830,500,1]
Update after further clarifications and chat
The whole point is that 3D transformations are not commutative. This means that translating and then rotating is different from rotating and then translating (they produce different results). In some special cases the two can coincide (e.g. when origins are at 0,0,0, and so on), but in general they give different results.
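To see this concretely, here is a small hedged sketch (plain Three.js, reusing the -132 translation and -0.52 Y rotation from the question) showing that the two orders give different results for the same point:
// build the same translation and rotation as separate matrices
const t = new THREE.Matrix4().makeTranslation(0, 0, -132);
const r = new THREE.Matrix4().makeRotationY(-0.52);
// multiplyMatrices(a, b) produces a * b, which applies b first, then a
const rotateThenTranslate = new THREE.Matrix4().multiplyMatrices(t, r);
const translateThenRotate = new THREE.Matrix4().multiplyMatrices(r, t);
const p = new THREE.Vector3(100, 0, 0);
console.log(p.clone().applyMatrix4(rotateThenTranslate)); // ≈ (86.8, 0, -82.3)
console.log(p.clone().applyMatrix4(translateThenRotate)); // ≈ (152.4, 0, -64.9)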
Furthermore, there is the issue of relative coordinates and of nesting 3D objects inside other 3D objects. The final (world) transform of an object depends on whether or not it is nested inside another object.
Finally, the actual mesh position (the local transform) versus the positions of the vertices plays a role in how the mesh (and the geometry) will eventually be projected onto 2D, so the projection angle changes (see above).
As clarified, the question is about shifting a mesh in such a way as to shift its origin, so that further transformations (e.g. rotations) can be done with respect to this shifted origin. A standard way to achieve this in 3D programming is to add a pivot or wrapper object around the mesh and position the mesh inside it relative to the wrapper. Any further transformations (e.g. rotations) are then applied to the wrapper itself. This gives the effect that the mesh rotates with respect to another axis (instead of its center).
The way this works is that the wrapper indeed rotates around its own origin (i.e. at 0,0,0), but the mesh inside is shifted, so it appears to rotate around another axis. A common example is modelling a 3D car wheel, which can rotate around its own axis (i.e. spin) but also translates with the rest of the car. So one adds a wrapper around the wheel; the wrapper is translated with the rest of the car, while the wheel spins inside the wrapper as if no translation were present (kind of the reverse of what you need here, but it is the same idea).
You may optionally want to check the MOD3 pivot modifier, which creates custom pivots as origin points/axes (PS: I'm the author of the port). A wheel modifier is also included in MOD3, which solves what is described above as the wheel problem in 3D.
To use a wrapper Object3D in your code, do something like this:
// create the pivot wrapper
var pivot = new THREE.Object3D();
pivot.name = "PIVOT";
pivot.add( mesh );
scene.add( pivot );
// shift the mesh inside the pivot
mesh.position.set(0, 0, -132);
// position the wrapper in the scene,
// in the place where the mesh would be
pivot.position.set(419, -830, 500);
pivot.rotation.set(0, -0.52, 0);
// now the mesh appears to rotate around an axis shifted by z = -132,
// instead of around its center, because the wrapper actually rotates
// around its own center, which coincides with the mesh shifted by z = -132

How to determine the position of a MeshView?

I am using JavaFX. I have a MeshView which is a wall of a cube. I am trying to find a way to get its coordinates (x, y, z). I need this to determine whether the wall is visible on the screen or not, and if not, how to rotate it to make it visible.
These methods:
myMeshViewWall.getLocalBounds()
myMeshViewWall.getBoundsInLocal()
myMeshViewWall.getBoundsInParent()
always give me the same result when I rotate my cube.
Wherever my wall is, the result does not change.
What should I do to achieve my goal?
In order to get the coordinates of the object in the scene, you can try:
myMeshViewWall.localToScene(myMeshViewWall.getBoundsInLocal());
This will transform the bounds from the local coordinate space of this node into the coordinate space of its scene.

How to translate a 3D mesh, given a view direction and a change in cursor position

My question is similar to 3D Scene Panning in perspective projection (OpenGL), except that I don't know how to compute the direction in which to move the mesh.
I have a program in which various meshes can be selected. Once a mesh is selected, I want it to translate when click-dragging the cursor. When the cursor moves up, I want the mesh to move up, and so on for each direction. In other words, I want the mesh to translate along the plane that is perpendicular to the viewing direction.
I have the Vector2 for the delta (x, y) in cursor position, and I have the Vector3 viewDirection of the camera and the center of the mesh. How can I figure out which way to translate the mesh in 3D space from the delta and viewDirection? Will I need other information to do this calculation (such as the up vector, or the eye position)?
It doesn't matter if the scale of the translation is off; I'm just trying to figure out the direction right now.
EDIT: for some reason I was confused about getting the up direction. Clearly it can be calculated by applying the camera rotation to the specified perspective up vector.
You'll need an additional vector, upDirection, which is the unit vector pointing "up" from your camera. You can then take the cross product of viewDirection and upDirection to get rightDirection, the vector pointing "right" from your camera.
You want to map y deltas to motion along upDirection (or -upDirection) and x deltas to motion in rightDirection. These vectors are in world-space.
You may want to scale the translation speed to match the mouse speed. If you are using perspective projection you'll want to scale the translation speed with your model's depth with respect to your camera (The further the object is from your camera, the faster you will need to move it if you want it to match the mouse.)
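As a hedged sketch (Three.js-style names; viewDirection, upDirection, delta, speed, and mesh are illustrative assumptions, not the asker's actual variables):
// right = view x up, the world-space axis for horizontal cursor motion
const rightDirection = new THREE.Vector3()
  .crossVectors(viewDirection, upDirection)
  .normalize();
// re-derive "up" so the basis stays orthogonal even if upDirection is approximate
const trueUp = new THREE.Vector3()
  .crossVectors(rightDirection, viewDirection)
  .normalize();
// map the cursor deltas onto the two world-space axes
mesh.position.addScaledVector(rightDirection, delta.x * speed);
mesh.position.addScaledVector(trueUp, -delta.y * speed); // screen y usually grows downward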

Mapping points in 2d space to a sphere

I have a bunch of points in a rectangular x/y space which I would like to project onto a sphere. That is, I am trying to write this function:
function pointOnSphere(x2d:Number, y2d:Number):Vector3D
{
    //magic
    return new Vector3D(x3d, y3d, z3d);
}
I have been trying to first plot the points onto a cylinder and then map those points to a sphere, as directed by this Wikipedia page. However, those formulas assume a constant z = 0, which doesn't really do what I want.
I'm using ActionScript 3 / Flex, but any pseudocode or pushes in the right direction would be greatly appreciated.
Just to clarify: I'm not trying to apply a texture to a sphere object, but rather to place objects along an imaginary sphere.
There is no one right answer. You can choose different approaches based on how you want to place the objects along the sphere.
Is it OK for the objects to get nearer and nearer to each other as you get closer to the sphere's "poles"? Why wouldn't the normal texture-mapping projection actually work for you?
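For what it's worth, the "normal texture-mapping projection" mentioned above would look roughly like the sketch below (plain JavaScript for illustration; ActionScript 3 syntax is nearly identical, and all names are assumed):
// map a point in the rectangle [0, width] x [0, height] onto a unit sphere
// via longitude/latitude; points bunch up toward the poles
function pointOnSphere(x2d, y2d, width, height) {
  const lon = (x2d / width) * 2 * Math.PI;            // 0 .. 2*PI around the equator
  const lat = (y2d / height) * Math.PI - Math.PI / 2; // -PI/2 .. PI/2, pole to pole
  return {
    x: Math.cos(lat) * Math.cos(lon),
    y: Math.sin(lat),
    z: Math.cos(lat) * Math.sin(lon),
  };
}
Scale the returned unit vector by the sphere's radius to place objects on a sphere of any size.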

Project a grid in screenspace on the world xz plane

I want to project a grid onto the world xz-plane, as shown here:
To do that, I created a vertex grid with x and z in the range [-1, 1]. In the shader, I multiply the xz screen coordinate of a vertex by the inverse of the view-projection matrix. Then I adjust the height depending on the new world xz coordinates, and finally I transform these coordinates back to screen space by multiplying them by the view-projection matrix.
I don't know why, but I get a very strange plane on the screen. Are the mathematical operations I use correct?
The grid that you initially create, is that in projection space or actual screen co-ords? It sounds like it is in projection space since you only transform it with the inverse of the view-projection matrix to get into world co-ords. I think you need to include the "Window" matrix too i.e. transform them by the inverse of the View-Projection-Window matrix (and similarly on the way back to screen co-ords).
Edit:
I'm probably not understanding exactly what it is you're trying to do so here's some questions back. :)
Are you trying to take the grid that's shown in the screenshot in your question and project it onto world z-x co-ordinates? If so, then why do you start with a grid of z-x values? Also, if you apply an inverse view matrix to those, then surely you would end up with a line, since the camera looks along z, although your second screenshot shows that you are getting a plane. I'm a bit confused.
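For reference, the unproject step the question describes would look roughly like the following on the CPU side (a hedged sketch with assumed Three.js names; the asker's actual code lives in a shader, so this is only illustrative):
// a grid vertex's x/z in [-1, 1], used as NDC x/y (depth 0, w = 1)
const ndc = new THREE.Vector4(gridX, gridZ, 0, 1);
camera.updateMatrixWorld();
const invViewProj = new THREE.Matrix4()
  .multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse)
  .invert();
// unproject into world space; the divide by w is the step most often missed
const world = ndc.applyMatrix4(invViewProj);
world.divideScalar(world.w);
// world.x / world.z now give the world-space xz position; adjust the height here,
// then transform back with the view-projection matrix (and divide by w again)
Forgetting the perspective divide after applying the inverse matrix is a common cause of exactly this kind of "very strange plane".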
