I am using JavaFX.
I have a MeshView which is one wall of a cube.
I am trying to find a way to get its coordinates (x, y, z).
I need this to determine whether the wall is visible on the screen or not,
and if not, how to rotate it to make it visible.
These methods:
myMeshViewWall.getLocalBounds()
myMeshViewWall.getBoundsInLocal()
myMeshViewWall.getBoundsInParent()
always give me the same result when I rotate my cube.
Wherever my wall is, the result does not change.
What should I do to achieve my goal?
To get the coordinates of the object in scene space, you can try:
myMeshViewWall.localToScene(myMeshViewWall.getBoundsInLocal());
This will transform the bounds from the local coordinate space of this node into the coordinate space of its scene.
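For example, here is a minimal sketch (the cubeGroup and myMeshViewWall names are assumptions about your scene graph, not taken from your code) showing that the scene-space bounds do change once the cube is rotated:

import javafx.geometry.Bounds;
import javafx.scene.shape.MeshView;
import javafx.scene.transform.Rotate;

// Hypothetical helper: prints the wall's bounds in scene coordinates.
// Unlike getBoundsInLocal(), this value changes when the cube is rotated.
static void printSceneBounds(MeshView wall) {
    Bounds sceneBounds = wall.localToScene(wall.getBoundsInLocal());
    System.out.println(sceneBounds);
}

// Usage sketch:
// printSceneBounds(myMeshViewWall);
// cubeGroup.getTransforms().add(new Rotate(45, Rotate.Y_AXIS));
// printSceneBounds(myMeshViewWall);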
I'm building a scene, which I want to view through the orthographic camera, from an angle. I do the following:
Build the scene.
Move the OrbitControls' (camera's) target to the center of the scene.
Move the camera by a certain (unit) vector using spherical coordinates.
Try to adjust the camera's left/right/top/bottom params to keep the object in the view, centered. Also considered adjusting the zoom.
My simplified, ideally positioned scene looks like this:
So I guess the problem is calculating the positions of the object's extremities after the (spherical) transformation and projecting them back into Cartesian coordinates. I tried to use the Euler transform helper, but it depends on the order of transformation for each axis. Quaternions are also non-commutative, and I'm lost. Perhaps I need to calculate how the widths/heights of the diagonals would change after the transformation and use those?
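One way to make that concrete is to skip the angle bookkeeping entirely: transform every corner of the object's bounding box into camera space and take the min/max of the resulting x and y as the orthographic left/right/bottom/top. A library-agnostic sketch in Java (the matrix layout and helper names here are assumptions, not any particular API):

// Sketch: fit an orthographic frustum around a set of world-space corner points.
// "viewMatrix" is assumed to be the camera's 4x4 world-to-camera (view) transform,
// stored row-major.
static double[] fitOrthoFrustum(double[][] worldCorners, double[][] viewMatrix) {
    double minX = Double.POSITIVE_INFINITY, maxX = Double.NEGATIVE_INFINITY;
    double minY = Double.POSITIVE_INFINITY, maxY = Double.NEGATIVE_INFINITY;
    for (double[] corner : worldCorners) {
        double[] c = transformPoint(viewMatrix, corner); // world -> camera space
        minX = Math.min(minX, c[0]); maxX = Math.max(maxX, c[0]);
        minY = Math.min(minY, c[1]); maxY = Math.max(maxY, c[1]);
    }
    // left, right, bottom, top for the orthographic camera
    return new double[] { minX, maxX, minY, maxY };
}

static double[] transformPoint(double[][] m, double[] p) {
    // 4x4 row-major matrix times a point (w assumed to be 1)
    double[] r = new double[3];
    for (int i = 0; i < 3; i++)
        r[i] = m[i][0] * p[0] + m[i][1] * p[1] + m[i][2] * p[2] + m[i][3];
    return r;
}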
I would like to know how to compute the rotation components of a rectangle in space from four given points in a projection plane.
This is hard to describe in a single sentence, so let me explain my needs.
I have a 3D world viewed from a static camera (located at <0,0,0>).
I have a known rectangular shape (a picture, actually) that I want to place in that space.
I can only define points (up to four) in a spherical/rectangular frame of reference (the camera looking at <0°,0°> (spherical) or <0,0,1000> (rectangular)).
I consider the given polygon to be my rectangular shape rotated by (rX, rY, rZ). Three points should be enough; four points would probably over-constrain the problem. I'm not sure yet.
I want to determine rX, rY and rZ, the rectangle rotation about its center.
--- My first attempt at solving this constraint problem was to fix the first point: given its spherical coordinates, I "project" it onto a camera-facing plane at z=1000. Quite easy; this gives me a point.
Then, the second point is considered to lie on the segment from <0,0,0>, which leaves an infinity of solutions; but I fix this by knowing the width (w) and height (h) of my rectangle: I then get two solutions for my second point; one is "in front of" the first point, and the other is "far away"... I now have an edge of my rectangle. Two, in fact.
And from there, I don't know what to do. Even if in the end I have my four points, I have no clue how to calculate the equivalent rotation...
It's hard to be lost in Mathematics...
To give an idea of the goal of all this: I make photospheres and I want to "insert" images into them. For instance, my photo shows a TV screen, and I want to place a picture on that screen. I know my screen size (or I can guess it), I know the size of the image I want to place (actually, it has the same aspect ratio), and I know the four screen corner positions in my space (spherical or Euclidean). My software allows me to place an image in the scene and rotate it as I want. I can zoom it (to give a feeling of depth)... I can do all this manually, but it is a long trial-and-error process and never exact. I would therefore like to be able to type in the screen corner positions and get the final image placement and rotation attributes in one click...
The question in pictures: [images presenting the steps of the problem]
Note that on that page I show actual images from my app. I had to manually rotate and scale the picture to make it fit the screen, but it is not photoshopped. The parameters found are:
Scale: 0.86362
rX = 18.9375
rY = -12.5875
rZ = -0.105881
center position: <-9.55, 18.76, 1000>
Note: rotation is not enough to set the picture up: we also need scale and translation. I assume the scale can be found once a first edge is fixed (the first two points help determine two solutions as initial constraints, and because I then know the edge length and the picture's width and height, I can deduce the scale). But the software is kind and allows me to modify the picture's width and height, so the constraint is just to make sure the four points describe a rectangle in space, which is simple to check with vectors. Here, the problem seems to be placing the fourth point as a valid rectangle corner, and then deducing the rotation from that rectangle. As for the translation, it is the center (the crossing of the diagonals) of the points once they are fixed.
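For the last step (deducing the rotation once the four 3D corners are known), one possible approach is to build an orthonormal basis from two adjacent edges and read Euler angles off the resulting rotation matrix. A rough sketch in Java, with some loud assumptions: the corners p0, p1, p3 are assumed to be ordered so that p1 and p3 are adjacent to p0, the points are assumed to really form a rectangle, and the rX/rY/rZ extraction assumes an X-then-Y-then-Z rotation order, which may not match your software's convention:

// Sketch: rotation of a rectangle from three of its 3D corners.
static double[] rotationFromCorners(double[] p0, double[] p1, double[] p3) {
    double[] xAxis = normalize(sub(p1, p0));   // along one edge
    double[] yAxis = normalize(sub(p3, p0));   // along the adjacent edge
    double[] zAxis = cross(xAxis, yAxis);      // rectangle normal
    // Columns of the rotation matrix are the rotated basis vectors.
    double r00 = xAxis[0], r01 = yAxis[0], r02 = zAxis[0];
    double r10 = xAxis[1], r11 = yAxis[1], r12 = zAxis[1];
    double r20 = xAxis[2], r21 = yAxis[2], r22 = zAxis[2];
    // Euler angles (degrees) for R = Rz * Ry * Rx, ignoring gimbal-lock cases.
    double rX = Math.toDegrees(Math.atan2(r21, r22));
    double rY = Math.toDegrees(Math.atan2(-r20, Math.hypot(r21, r22)));
    double rZ = Math.toDegrees(Math.atan2(r10, r00));
    return new double[] { rX, rY, rZ };
}

static double[] sub(double[] a, double[] b) {
    return new double[] { a[0]-b[0], a[1]-b[1], a[2]-b[2] };
}
static double[] cross(double[] a, double[] b) {
    return new double[] { a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0] };
}
static double[] normalize(double[] v) {
    double len = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    return new double[] { v[0]/len, v[1]/len, v[2]/len };
}

The center (translation) is then just the average of the four corners, as described above.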
My question is similar to "3D Scene Panning in perspective projection (OpenGL)", except I don't know how to compute the direction in which to move the mesh.
I have a program in which various meshes can be selected. Once a mesh is selected I want it to translate when click-dragging the cursor. When the cursor moves up, I want the mesh to move up, and so on for the appropriate direction. In other words, I want the mesh to translate in directions along the plane that is perpendicular to the viewing direction.
I have the Vector2 for the Delta (x, y) in cursor position, and I have the Vector3 viewDirection of the camera and the center of the mesh. How can I figure out which way to translate the mesh in 3D space with the Delta and viewDirection? Will I need other information in order to do this calculation (such as the up, or eye)?
It doesn't matter if the scale of the translation is off; I'm just trying to figure out the direction right now.
EDIT: for some reason I was confused about getting the up direction. Clearly it can be calculated by applying the camera rotation to the specified perspective up vector.
You'll need an additional vector, upDirection, which is the unit vector pointing "up" from your camera. You can now cross-product viewDirection and upDirection to get rightDirection, the vector pointing "right" from your camera.
You want to map y deltas to motion along upDirection (or -upDirection) and x deltas to motion in rightDirection. These vectors are in world-space.
You may want to scale the translation speed to match the mouse speed. If you are using perspective projection, you'll want to scale the translation speed by your model's depth with respect to your camera (the further the object is from your camera, the faster you will need to move it if you want it to keep up with the mouse).
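A minimal sketch of that mapping in plain Java (arrays stand in for your vector type; viewDirection and upDirection are assumed to be unit-length world-space vectors, and the sign conventions are assumptions you may need to flip):

// Sketch: translate a mesh in the plane perpendicular to the view direction.
// deltaX/deltaY are the mouse deltas in pixels.
static double[] dragTranslation(double[] viewDirection, double[] upDirection,
                                double deltaX, double deltaY, double speed) {
    // rightDirection = viewDirection x upDirection (camera's "right" in world space)
    double[] right = cross(viewDirection, upDirection);
    // Screen Y usually grows downward, so negate deltaY to map "mouse up" to "world up".
    double[] t = new double[3];
    for (int i = 0; i < 3; i++)
        t[i] = (right[i] * deltaX + upDirection[i] * -deltaY) * speed;
    return t; // add this to the mesh's position
}

static double[] cross(double[] a, double[] b) {
    return new double[] { a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0] };
}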
I haven't been entirely sure what to google or search for to help solve my problem; really hoping someone here can help a little…
Currently I have a 3D scene: it has a massive sphere with a texture mapped to it and the camera at the center of the sphere, so it's much like a QTVR viewer.
I'd like a way to click on the polygons within the sphere and update the texture at that position with something, a dot, etc.
The only part of the process where I need help is converting the 2D mouse position into a point on the inside of the sphere.
Hope this makes sense…
FYI, I'm only looking for a pure math solution.
The first thing you need to do is convert the screen coordinate into a line in 3d space. This will pass through the point you click and your eyepoint.
Once you have this line you can then intersect this line with your sphere to find the intersection point on the sphere.
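A sketch of that intersection in plain Java (the ray origin is your eye point and the direction comes from unprojecting the 2D mouse position; both are assumptions about your setup):

// Sketch: intersect a ray (origin + t * direction) with a sphere.
static double[] raySphereHit(double[] origin, double[] dir,
                             double[] center, double radius) {
    // Solve |origin + t*dir - center|^2 = radius^2 for t (a quadratic in t).
    double[] oc = { origin[0]-center[0], origin[1]-center[1], origin[2]-center[2] };
    double a = dot(dir, dir);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b*b - 4*a*c;
    if (disc < 0) return null;                 // ray misses the sphere
    double t = (-b + Math.sqrt(disc)) / (2*a); // far root: the inside surface ahead of the ray
    return new double[] { origin[0] + t*dir[0], origin[1] + t*dir[1], origin[2] + t*dir[2] };
}

static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

If the camera sits exactly at the sphere's center, this collapses to hitPoint = center + radius * normalize(direction).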
Alternatively, you could get the 2D screen coordinates of the polygons (triangles?) that make up the sphere and then find the one that contains the mouse pointer.
I am trying to learn XNA by writing a small 2D game. It's a top-down perspective and I'm trying to have dual movement: moving along the axes using the Left/Right and Up/Down keys, as well as looking right at the mouse cursor, so that the player can run and aim at the same time.
I have one vector for the player position (m_PlayerPos) and one vector for the mouse position (m_MousePos), and I'm trying to get the correct angle towards the mouse position.
I'm using this method:
public static float Angle(Vector2 from, Vector2 to)
{
return (float)Math.Atan2(from.X - to.X, from.Y - to.Y);
}
This works, but for some reason the method only works half-way, along the x-axis. When the mouse is to the exact left or right of the player, the player looks right at the mouse.
But if I move the mouse above the player, it looks down, and if the mouse is below the player, the player looks up. So I need to invert the Y axis, but I'm not sure how.
Thanks in advance for any feedback.
Use to.Y - from.Y.
Multiply it by (1.0, -1.0) (or just multiply the Y component by -1.0). This will mirror the vector along the horizontal axis and should achieve the result you want.
In screen space the origin is in the top-left corner with the Y axis pointing downward, whereas in the usual Euclidean convention the Y axis points upward. That's why you observe the Y axis being "flipped".
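For completeness, a sketch of the corrected method described above, written in plain Java rather than XNA/C# (same argument order as the original, with only the Y delta flipped):

public static float angle(float fromX, float fromY, float toX, float toY) {
    // Same as the original Atan2(from.X - to.X, from.Y - to.Y), except the
    // Y delta is negated (to.Y - from.Y) to undo the screen-space Y flip.
    return (float) Math.atan2(fromX - toX, toY - fromY);
}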