I am trying to implement a 2D selection window that can select the 3D vertices inside the 2D window (indicated by the dashed cyan rectangle). Each of my 3D models is currently a composite group of MeshViews, one MeshView per face. My plan is to iterate over each face (MeshView) and check whether its 2D bounds intersect the selection box bounds. (I plan to switch to an atlas texture later to reduce the number of meshes, but for now I just want the selection mechanism working.)
Currently I have the following, but this isn't correct.
val selectionBounds = selectionRectangle.boundsInParent
val localBounds = meshView.localToScene(meshView.boundsInLocal, true)
if (selectionBounds.intersects(localBounds)) {
    // do something with the mesh in meshView
}
My SubScene contains a PerspectiveCamera. I came across two useful posts:
Convert coordinates from 3D scene to 2D overlay
How to get 2D coordinates on window for 3D object in javafx
I think I first have to project the MeshView's bounds properly using my perspective camera, but I am unsure how to proceed. Do I have to project every 3D corner of the local bounds to 2D, as is done in the second referenced question above? I am not very familiar with the math or the related concepts, so any help would be appreciated. A sketch of what I have in mind is below.
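For reference, this is the kind of corner-projection loop I am imagining (a minimal sketch, assuming the JavaFX 9+ Node.localToScene(x, y, z, rootScene) overload; untested):

import javafx.geometry.BoundingBox
import javafx.scene.shape.MeshView

// Project all 8 corners of the local bounds into (root) scene coordinates
// and take the 2D bounding box of the projections.
fun projectedBounds2D(meshView: MeshView): BoundingBox {
    val b = meshView.boundsInLocal
    var minX = Double.MAX_VALUE; var minY = Double.MAX_VALUE
    var maxX = -Double.MAX_VALUE; var maxY = -Double.MAX_VALUE
    for (x in doubleArrayOf(b.minX, b.maxX))
        for (y in doubleArrayOf(b.minY, b.maxY))
            for (z in doubleArrayOf(b.minZ, b.maxZ)) {
                // rootScene = true maps through the SubScene's camera
                val p = meshView.localToScene(x, y, z, true)
                minX = minOf(minX, p.x); minY = minOf(minY, p.y)
                maxX = maxOf(maxX, p.x); maxY = maxOf(maxY, p.y)
            }
    return BoundingBox(minX, minY, maxX - minX, maxY - minY)
}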
(wink, wink MVP José)
EDIT 1:
After José's suggestion I added red bounding boxes for each MeshView, which gives the following result:
Apparently it adds some offset, which appears to be the same regardless of the camera rotation. Here the red boxes are drawn around each MeshView. I will investigate further.
EDIT 2:
I use a Pane which contains the SubScene and another Node. This is done to control the sizing of the SubScene and to reposition/resize the other Node accordingly by overriding the layoutChildren method, like so (this is Kotlin):
override fun layoutChildren() {
    val subScene = subSceneProperty.get()
    if (subScene != null) {
        subScene.width = width
        subScene.height = height
    }
    overlayRectangleGroup.resize(width, height)
    val nodeWidth = snapSize(overlayMiscGroup.prefWidth(-1.0))
    val nodeHeight = snapSize(overlayMiscGroup.prefHeight(-1.0))
    overlayMiscGroup.resizeRelocate(width - nodeWidth, 0.0, nodeWidth, nodeHeight)
}
I have a custom mesh (created in Blender) that I insert into Qt3D using the following code:
QMesh *mesh = new QMesh(rootEntity);
mesh->setSource(QUrl::fromLocalFile(baseUrl+"mesh.obj"));
This works fine; I can add it to an entity with a material and everything.
Then I create a custom material using a texture loaded from an image file. I do this using the following code:
Qt3DRender::QTextureLoader *loader = new Qt3DRender::QTextureLoader(rootEntity);
Qt3DExtras::QTextureMaterial *material = new Qt3DExtras::QTextureMaterial(rootEntity);
loader->setSource(QUrl::fromLocalFile(baseUrl+"pattern.jpg"));
material->setTexture(loader);
This also works fine. When I add this material to a built-in Qt mesh (e.g. QPlaneMesh or QSphereMesh) it shows perfectly on the surface as one would expect.
However - now comes the problem - if I add it to the QMesh specified above, the mesh just gets one homogeneous color, which seems to be the average over the colors in the pattern. Here you can see what I mean: both objects have the same material. The top one is inserted externally while the bottom one is a QPlaneMesh.
Can someone explain to me why that is the case? And is there a way to successfully add textures to custom meshes?
Note: I have tried this with 2D and 3D meshes and it is the same outcome.
Note 2: I have also tried it with different images and it still just gets one homogeneous average color.
UPDATE: I tried (following the suggestion in the answer) to add a texture attribute to the geometry of my imported mesh, as follows:
Qt3DCore::QEntity *entity = new Qt3DCore::QEntity(rootEntity);
QMesh *mesh = new QMesh(entity);
mesh->setSource(QUrl::fromLocalFile(baseUrl+"mesh.obj"));
const int stride = (3 + 2 + 3 + 4) * sizeof(float);
QSize resolution = QSize(2,2);
const int nVerts = resolution.width() * resolution.height();
QAttribute *texCoordAttr = new QAttribute(mesh->geometry());
Qt3DRender::QBuffer *vertexBuffer = new Qt3DRender::QBuffer(mesh->geometry());
texCoordAttr->setName(QAttribute::defaultTextureCoordinate1AttributeName());
texCoordAttr->setVertexBaseType(QAttribute::Float);
texCoordAttr->setVertexSize(2);
texCoordAttr->setAttributeType(QAttribute::VertexAttribute);
texCoordAttr->setBuffer(vertexBuffer);
texCoordAttr->setByteStride(stride);
texCoordAttr->setByteOffset(3*sizeof(float));
texCoordAttr->setCount(nVerts);
vertexBuffer->setDataGenerator(QSharedPointer<PlaneVertexBufferFunctor>::create(1.0f,1.0f,resolution, false)); //these input values (width, height, resolution, mirrored) are probably the cause of the problem
mesh->geometry()->addAttribute(texCoordAttr); //it crashes here
entity->addComponent(mesh);
entity->addComponent(transform);
entity->addComponent(material);
I created the functor for setDataGenerator like in the QPlaneMesh code. I now suspect the segmentation fault is caused by a size mismatch. So how can I get the correct width and height of an external mesh from its QGeometry? And what else might be wrong here?
It looks like the mesh is missing the texture coordinates. When you open the file with a text editor, do you see the key vt somewhere? Those are the texture coordinates. You can read about the format here.
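For illustration, a minimal obj fragment with texture coordinates might look like this (a made-up example, not taken from your file):

# vertex positions
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
# texture coordinates (u, v in [0, 1])
vt 0.0 0.0
vt 1.0 0.0
vt 0.0 1.0
# face: position_index/texcoord_index per corner
f 1/1 2/2 3/3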
If you still want to use the obj file that you have, you will have to add texture coordinates if it doesn't have any. It's probably best to open the file in Blender and use its texture mapper - at least for more complex meshes. Guessing which vertex needs which texture coordinate is not really feasible.
The texture coordinates work as follows:
If you have an image of, say 500 by 400 pixels, the texture coordinate (0.7, 0.3) is (500 * 0.7, 400 * 0.3) = (350, 120), meaning that the vertex which has that texture coordinate will receive the color value of the pixel at (350, 120). Values inside a triangle will get interpolated.
If your obj file comes along with an mtl file, then it probably already has texture coordinates. If you want to load this mtl file, use the QSceneLoader and add it to its parent QEntity to display everything.
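A minimal sketch of that route (assuming the mtl file sits next to the obj file and is referenced by it; sceneEntity is an illustrative name):

Qt3DCore::QEntity *sceneEntity = new Qt3DCore::QEntity(rootEntity);
Qt3DRender::QSceneLoader *sceneLoader = new Qt3DRender::QSceneLoader(sceneEntity);
// QSceneLoader parses the obj together with its mtl and builds the
// child entities (geometry plus materials) under sceneEntity.
sceneLoader->setSource(QUrl::fromLocalFile(baseUrl + "mesh.obj"));
sceneEntity->addComponent(sceneLoader);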
I want to have a right-handed Cartesian coordinate system in JavaFX, so (0,0) at lower left corner of window, x increasing to the right and y increasing upwards. I can't figure out how to do that with transforms. If I apply a rotation transform, the buttons will be upside down. All I want is to be able to use this coordinate system instead of the default one.
As mentioned in the JavaFX documentation (see the chapter Y-down versus Y-up), Y down is used by many 2D graphics libraries, which is where JavaFX started.
To force Y up and correct drawing, you could put all your content in a rotated parent node:
// Rotate camera to use Y up.
camera.setRotationAxis(Rotate.Z_AXIS);
camera.setRotate(180.0);
// Rotate scene content for correct drawing.
Group yUp = new Group();
yUp.setRotationAxis(Rotate.Z_AXIS);
yUp.setRotate(180.0);
Scene scene = new Scene(yUp);
scene.setCamera(camera);
Now add everything to yUp to use those nodes as in a Y-up environment.
Bear in mind that this is fine in 2D space. If you come up with additional 3D features, make sure your models grow in the negative Y direction. Otherwise you would have to use another container.
JavaFX's Prism renderer eventually uses a 3D camera transform to render its shapes.
There are two cameras that can be set on the scene: ParallelCamera and PerspectiveCamera.
If you look in the JavaFX source for ParallelCamera here, you will find some maths to compute the transform.
If you override that method and implement the proper maths, you should be able to invert the coordinate system.
The kind of math you would use is something like this.
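For reference, a glOrtho-style matrix with bottom = 0 and top = height flips Y so it increases upwards - a sketch under those assumptions, not JavaFX's actual code:

// A sketch: row-major glOrtho-style matrix mapping x in [0, width] and
// y in [0, height], with y = 0 at the BOTTOM (bottom/top are the reverse
// of the default Y-down setup).
static double[] yUpOrtho(double width, double height) {
    double left = 0, right = width, bottom = 0, top = height;
    double near = -1, far = 1;
    return new double[] {
        2 / (right - left), 0, 0, -(right + left) / (right - left),
        0, 2 / (top - bottom), 0, -(top + bottom) / (top - bottom),
        0, 0, -2 / (far - near), -(far + near) / (far - near),
        0, 0, 0, 1
    };
}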
You would have to look in the source to see what ortho does exactly. But this should get you on the right track.
I am implementing a pan tool in our software's 3D view which is supposed to work much like the grab tool of, say, Photoshop or Acrobat Reader. That is, the point the user grabs onto with the mouse (clicks and holds, then moves the mouse) stays under the mouse cursor as the mouse moves.
This is a common paradigm and one that's been asked about on SO before, the best answer being to this question about the technique in OpenGL. There is another that also has some hints, and I have been reading this very informative CodeProject article. (It doesn't explain many of its code examples' variables, etc., but from reading the text I think I understand the technique.) But I have some implementation issues because my 3D environment's navigation is set up quite differently from those articles, and I am seeking some guidance.
My technique - and this might be fundamentally flawed, so please say so - is:
The scene 'camera' is stored as two D3DXVECTOR3 points: the eye position and a look point. The view matrix is constructed using D3DXMatrixLookAtLH like so:
const D3DXVECTOR3 oUpVector(0.0f, 1.0f, 0.0f); // Keep up "up", always.
D3DXMatrixLookAtLH(&m_oViewMatrix, &m_oEyePos, &m_oLook, &oUpVector);
When the mouse button is pressed, shoot a ray through that pixel and find: the coordinate (in unprojected scene/world space) of the point that was clicked on; the intersection of that ray with the near plane; and the distance between the near-plane point and the object, which is the length between those two points. Store these, the mouse position, and the original navigation (eye and look).
// Get the clicked-on point in unprojected (normal) world space
D3DXVECTOR3 o3DPos;
if (Get3DPositionAtMouse(roMousePos, o3DPos)) { // fails if nothing under the mouse
    // Mouse location when panning started
    m_oPanMouseStartPos = roMousePos;
    // Intersection at near plane (z = 0) of the ray from camera to clicked spot
    D3DXVECTOR3 oRayVector;
    CalculateRayFromPixel(m_oPanMouseStartPos, m_oPanPlaneZ0StartPos, oRayVector);
    // Store original eye and look points
    m_oPanOriginalEyePos = m_oEyePos;
    m_oPanOriginalLook = m_oLook;
    // Store the distance between near plane and the object, and the object position
    m_dPanPlaneZ0ObjectDist = fabs(D3DXVec3Length(&(o3DPos - m_oPanPlaneZ0StartPos)));
    m_oPanOriginalObjectPos = o3DPos;
}
Get3DPositionAtMouse is a known-ok method which picks a 3D coordinate under the mouse. CalculateRayFromPixel is a known-ok method which takes in a screen-space mouse coordinate and casts a ray, and fills the other two parameters with the ray intersection at the near plane (Z = 0) and the normalised ray vector.
When the mouse moves, cast another ray at the new position, but using the old (original) view matrix. (Thanks to Nico below for pointing this out.) Calculate where the object should be by extending the ray from the near plane by the stored near-plane-to-object distance (this way, the original and new object points lie in a plane parallel to the near plane). Move the eye and look coordinates by this much. Eye and Look are set from their original values (from when panning started), with the difference computed from the original and new mouse positions. This reduces precision loss from incrementing or decrementing by granular (integer) pixel movements as the mouse moves, i.e. it recalculates the whole navigation difference every time.
// Set navigation back to original (as it was when started panning) and cast a ray for the mouse
m_oEyePos = m_oPanOriginalEyePos;
m_oLook = m_oPanOriginalLook;
UpdateView();
D3DXVECTOR3 oRayVector;
D3DXVECTOR3 oNewPlaneZPos;
CalculateRayFromPixel(roMousePos, oNewPlaneZPos, oRayVector);
// Now intersect that ray (ray through the mouse pixel, using the original navigation)
// to hit the plane the object is in. Function uses a "line", so start at near plane
// and the line is of the length of the far plane away
D3DXVECTOR3 oNew3DPos;
D3DXPlaneIntersectLine(&oNew3DPos, &m_oPanObjectPlane, &oNewPlaneZPos, &(oRayVector * GetScene().GetFarPlane()));
// The eye/look difference /should/ be as simple as:
// const D3DXVECTOR3 oDiff = (m_oPanOriginalObjectPos - oNew3DPos);
// But that lags and is slow, ie the objects trail behind. I don't know why. What does
// work is to scale the from-to difference by the distance from the camera relative to
// the whole scene distance
const double dDist = D3DXVec3Length(&(oNew3DPos - m_oPanOriginalEyePos));
const double dTotalDist = GetScene().GetFarPlane() - GetScene().GetNearPlane();
const D3DXVECTOR3 oDiff = (m_oPanOriginalObjectPos - oNew3DPos) * (1.0 + (dDist / dTotalDist));
// Adjust the eye and look points by the same amount, so orthogonally changed
m_oEyePos = m_oPanOriginalEyePos + oDiff;
m_oLook = m_oPanOriginalLook + oDiff;
Diagram
This diagram is my working sketch for implementing this, and hopefully explains the above much more simply than the text. You can see a moving point, and where the camera has to move to keep that point at the same relative position. The clicked-on point (the ray from the camera to the object) is just to the right of the straight-ahead ray representing the center pixel.
The problem
But, as you've probably guessed, this doesn't work as I hoped. What I wanted to see was the clicked-on object moving with the mouse cursor. What I actually see is that the object moves in the direction of the mouse, but not enough, i.e. it does not keep the clicked-on point under the cursor. Secondly, the movement flickers and jumps around, jittering by up to twenty or thirty pixels sometimes, then flickering back. If I replace oDiff with something constant, this doesn't occur.
Any ideas, or code samples showing how to implement this with DirectX (D3DX, DX matrix order, etc) will be gratefully read.
Edit
Commenter Nico below pointed out that when calculating the new position using the mouse cursor's moved position, I needed to use the original view matrix. Doing so helps a lot, and the objects stay near the mouse position. However, it's still not exact. What I've noticed is that at the center of the screen, it is exact; as the mouse moves further from the center, it gets out by more and more. This seemed to change based on how far away the object was, too. By pure 'I have no idea what I'm doing' guesswork, I scaled this by a factor of the near/far plane and how far away the object was, and this brings it very close to the mouse cursor, but still a few pixels away (1 to, say, 30 at the extreme edge of the screen, which is enough to make it feel wrong.)
Here's how I solve this problem.
float fieldOfView = 45.0f;
float halfFOV = (fieldOfView / 2.0f) * (DEGREES_TO_RADIANS);
float distanceToObject = // compute the world space distance from the camera to the object you want to pan
float projectionToWorldScale = distanceToObject * tan( halfFOV );
Vector mouseDeltaInScreenSpace = // the delta mouse in pixels that we want to pan
Vector mouseDeltaInProjectionSpace = Vector( mouseDeltaInScreenSpace.x * 2 / windowPixelSizeX, mouseDeltaInScreenSpace.y * 2 / windowPixelSizeY ); // ( the "*2" is because the projection space is from -1 to 1)
// go from normalized device coordinate space to world space (at origin)
Vector cameraDelta = -mouseDeltaInProjectionSpace * projectionToWorldScale;
// now translate your camera by "cameraDelta".
Note this works for a field-of-view aspect ratio of 1; I think you would have to break up the scale into separate x and y components if the vertical field of view was different from the horizontal field of view, as sketched below.
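A sketch of that per-axis variant (assuming halfVerticalFOV is the vertical half-angle in radians and aspectRatio = windowPixelSizeX / windowPixelSizeY; these names are illustrative):

float projectionToWorldScaleY = distanceToObject * tan( halfVerticalFOV );
float projectionToWorldScaleX = projectionToWorldScaleY * aspectRatio;
// Scale each mouse axis by its own frustum extent at the object's depth.
Vector cameraDelta = Vector( -mouseDeltaInProjectionSpace.x * projectionToWorldScaleX,
                             -mouseDeltaInProjectionSpace.y * projectionToWorldScaleY );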
Also, you mentioned a "look at" vector. I'm not sure how my math would need to change for that, since my camera is always looking straight down the z-axis.
One problem is your calculation of the new 3d position. I am not sure if this is the root cause, but you might try it. If it doesn't help, just post a comment.
The problem is that your offset vector is not parallel to the znear plane. This is because the two rays are not parallel; therefore, if they have the same length beyond znear, the distances of their end points to the znear plane cannot be equal.
You can calculate the offset vector with the theorem of intersecting lines. If zNearA and zNearB are the intersection points of the znear plane with ray A and ray B respectively, then the theorem states:
Length(original_position - cam_position) / Length(offset_vector) = Length(zNearA - cam_position) / Length(zNearB - zNearA)
And therefore
offset_vector = Length(original_position - cam_position) / Length(zNearA - cam_position) * (zNearB - zNearA)
Then you can be sure to move on a line that is parallel to the znear plane.
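In D3DX terms, that might look like the following sketch (zNearA and zNearB stand for the znear intersections of the original and current mouse rays; those names are illustrative, not from the question's code):

const float fObjDist  = D3DXVec3Length(&(m_oPanOriginalObjectPos - m_oPanOriginalEyePos));
const float fNearDist = D3DXVec3Length(&(zNearA - m_oPanOriginalEyePos));
// Intercept theorem: scale the znear-plane delta up to the object's plane.
const D3DXVECTOR3 oOffset = (zNearB - zNearA) * (fObjDist / fNearDist);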
Just try it out and see if it helps.
I am working on a 3D mesh manipulator using this: http://leapmotion.com. So far, I have been able to manipulate the points just fine, by 'grabbing' and moving them; however, I now want to be able to rotate the mesh and work on the opposite face. What I have done is add an extra object called 'rotatable', as shown below:
scene = new THREE.Scene();
camera = new THREE.PerspectiveCamera( 70, window.innerWidth / window.innerHeight, 1, 8000 );
renderer = new THREE.WebGLRenderer( { clearColor: 0x000000, clearAlpha: 1, maxLights: 5 } );
// This is the 'Mesh Scene'
rotatable = new THREE.Object3D();
scene.add( rotatable );
// Mesh we are altering
var material = new THREE.MeshNormalMaterial();
material.side = 2;
var geom = new THREE.SphereGeometry( 200, 10, 10 );
var sphere = new THREE.Mesh( geom, material );
rotatable.add( sphere );
I am then trying to change the vertices of this sphere, but to do so I need to do a 'collision test' to see if the vertex is being 'grabbed'. This involves checking the vertex position to see if it coincides with one of the finger positions (pseudocode below):
if(finger.x == vertex.x && finger.y == vertex.y && finger.z == vertex.z){
vertex.grabbed = true
}
This works fine when the rotatable's rotation is zero; however, when it starts to rotate, the collision test will still be testing against the unrotated vertex position (which makes sense). My question is how to find the vertex's position in 'scene/global' coordinates. The only way I can think of doing this so far is to take the rotation of the 'rotatable' and use that vector to calculate the new vertex position.
I know nothing about math, so this may not be the way to go; and even if it is, I will struggle through it so hard that I won't ever know whether I'm just doing the math incorrectly or whether this isn't the way I should go about calculating it. Obviously I'm willing to go through this work, but I just want to make sure this is the way to do it, rather than there being some other, simpler method.
If there are any other questions about the code, please let me know. Thanks in advance for your time!
Isaac
To get the world position of a vertex specified in local coordinates, apply the object's world transform to the vertex like so:
vertex.applyMatrix4( object.matrixWorld );
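In context, that could look like the following sketch (assuming the r55-era geometry.vertices array; the index i is illustrative):

// Make sure the world matrix is current after rotating 'rotatable'.
rotatable.updateMatrixWorld( true );
// applyMatrix4 mutates the vector, so work on a clone to keep the
// geometry's local data untouched.
var worldVertex = sphere.geometry.vertices[ i ].clone();
worldVertex.applyMatrix4( sphere.matrixWorld );
// Now compare worldVertex against the finger position.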
(I am not familiar with leapmotion, so hopefully it does not impact this answer.)
Tip: maxLights is no longer required. And it is best to avoid material.side = 2. Use material.side = THREE.DoubleSide instead.
You can find the constants here: https://github.com/mrdoob/three.js/blob/master/src/Three.js
three.js r.55
Flex 3, ActionScript 3, Flash player 9.
I have a picture in a BitmapData object, and an array of points. I need to erase the part of the picture inside the polygon specified by the points. In other words, draw a polygon specified by the points and fill it with transparency.
Any ideas on how it can be done?
Got it working with the following code:
var shape:Shape = new Shape();
shape.graphics.beginFill(0x000000, 1); // solid black
shape.graphics.moveTo(points[0].x, points[0].y);
points.forEach(function (p:Point, i:int, a:Array):void {
shape.graphics.lineTo(p.x, p.y);
});
shape.graphics.endFill();
data.draw(shape, null, null, "erase");
For a rectangle, you can use fillRect. For a polygon, you are going to have to draw the polygon in a totally different color (than the other colors in the bitmap) and use floodFill - but I don't know how to draw a polygon; there is no method in the BitmapData class to draw lines. Another option would be to write your own logic to find the pixels inside the polygon and use the setPixel32 method to set their alphas to zero.
This Wikipedia page describes algorithms for determining whether a point is inside a given polygon. You might find it useful. A sketch of one such test is below.
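For completeness, here is a sketch of the even-odd (ray-casting) test in ActionScript 3 (isInsidePolygon is a made-up name, and points is assumed to be an Array of flash.geom.Point):

// Count crossings of a horizontal ray from (x, y) with the polygon edges;
// an odd count means the point is inside.
function isInsidePolygon(x:Number, y:Number, points:Array):Boolean {
    var inside:Boolean = false;
    var j:int = points.length - 1;
    for (var i:int = 0; i < points.length; i++) {
        var pi:Point = points[i];
        var pj:Point = points[j];
        if (((pi.y > y) != (pj.y > y)) &&
            (x < (pj.x - pi.x) * (y - pi.y) / (pj.y - pi.y) + pi.x)) {
            inside = !inside;
        }
        j = i;
    }
    return inside;
}
// Pixels for which this returns true could then be cleared with setPixel32.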