I have a custom mesh (created in Blender) that I insert into Qt3D using the following code:
QMesh *mesh = new QMesh(rootEntity);
mesh->setSource(QUrl::fromLocalFile(baseUrl+"mesh.obj"));
This works fine; I can add it to an entity with a material and everything.
Then I create a custom material using a texture loaded from a .png. I do this using the following code:
Qt3DRender::QTextureLoader *loader = new Qt3DRender::QTextureLoader(rootEntity);
Qt3DExtras::QTextureMaterial *material = new Qt3DExtras::QTextureMaterial(rootEntity);
loader->setSource(QUrl::fromLocalFile(baseUrl+"pattern.jpg"));
material->setTexture(loader);
This also works fine. When I add this material to a built-in Qt mesh (e.g. QPlaneMesh or QSphereMesh) it shows perfectly on the surface as one would expect.
However - now comes the problem - if I add it to the QMesh specified above, the mesh just gets one homogeneous color, which seems to be the average of the colors in the pattern. Here you can see what I mean: both objects have the same material. The top one is the externally imported mesh while the bottom one is a QPlaneMesh.
Can someone explain to me why that is the case? And is there a way to successfully add textures to custom meshes?
Note: I have tried this with 2D and 3D meshes and the outcome is the same.
Note 2: I have also tried it with different images and it still just gets one homogeneous average color.
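For completeness, this is roughly how the pieces above are put together in the failing case (just a sketch; rootEntity and baseUrl are as in the snippets above, camera and light setup omitted):
// imported mesh, as above
Qt3DCore::QEntity *entity = new Qt3DCore::QEntity(rootEntity);
Qt3DRender::QMesh *mesh = new Qt3DRender::QMesh(entity);
mesh->setSource(QUrl::fromLocalFile(baseUrl + "mesh.obj"));
// textured material, as above
Qt3DRender::QTextureLoader *loader = new Qt3DRender::QTextureLoader(entity);
loader->setSource(QUrl::fromLocalFile(baseUrl + "pattern.jpg"));
Qt3DExtras::QTextureMaterial *material = new Qt3DExtras::QTextureMaterial(entity);
material->setTexture(loader);
// the entity shows up, but with one homogeneous color instead of the texture
entity->addComponent(mesh);
entity->addComponent(material);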
UPDATE: I tried (following the suggestion in the answer) to add a texture coordinate attribute to the geometry of my imported mesh, like the following:
Qt3DCore::QEntity *entity = new Qt3DCore::QEntity(rootEntity);
QMesh *mesh = new QMesh(entity);
mesh->setSource(QUrl::fromLocalFile(baseUrl+"mesh.obj"));
const int stride = (3 + 2 + 3 + 4) * sizeof(float);
QSize resolution = QSize(2,2);
const int nVerts = resolution.width() * resolution.height();
QAttribute *texCoordAttr = new QAttribute(mesh->geometry());
Qt3DRender::QBuffer *vertexBuffer = new Qt3DRender::QBuffer(mesh->geometry());
texCoordAttr->setName(QAttribute::defaultTextureCoordinate1AttributeName());
texCoordAttr->setVertexBaseType(QAttribute::Float);
texCoordAttr->setVertexSize(2);
texCoordAttr->setAttributeType(QAttribute::VertexAttribute);
texCoordAttr->setBuffer(vertexBuffer);
texCoordAttr->setByteStride(stride);
texCoordAttr->setByteOffset(3*sizeof(float));
texCoordAttr->setCount(nVerts);
vertexBuffer->setDataGenerator(QSharedPointer<PlaneVertexBufferFunctor>::create(1.0f,1.0f,resolution, false)); //these input values (width, height, resolution, mirrored) are probably the cause of the problem
mesh->geometry()->addAttribute(texCoordAttr); //it crashes here
entity->addComponent(mesh);
entity->addComponent(transform);
entity->addComponent(material);
I created the functor for setDataGenerator in the same way as in the QPlaneMesh code. I now suspect the segmentation fault is caused by a size mismatch. So how can I get the correct width and height of an external mesh from its QGeometry? And what else might be wrong here?
It looks like the mesh is missing the texture coordinates. When you open the file with a text editor, do you see the keyword vt somewhere? Those are the texture coordinates. You can read about the format here.
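For illustration, texture coordinates in an obj file look roughly like this (made-up numbers; a triangle whose corners map to three corners of the texture):
# positions
v  0.0 0.0 0.0
v  1.0 0.0 0.0
v  0.0 1.0 0.0
# texture coordinates (u, v)
vt 0.0 0.0
vt 1.0 0.0
vt 0.0 1.0
# face: position index / texture coordinate index
f 1/1 2/2 3/3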
If you want to keep using the obj file that you have, you have to add texture coordinates to it if it doesn't contain any. It's probably best to open the file in Blender and use its texture mapper - at least for more complex meshes. Guessing which vertex needs which texture coordinate is not really feasible.
The texture coordinates work as follows:
If you have an image of, say, 500 by 400 pixels, the texture coordinate (0.7, 0.3) maps to (500 * 0.7, 400 * 0.3) = (350, 120), meaning that the vertex which has that texture coordinate will receive the color value of the pixel at (350, 120). Values inside a triangle get interpolated.
If your obj file comes with an mtl file, then it probably already has texture coordinates. If you want to load this mtl file, use a QSceneLoader and add it to its parent QEntity to display everything.
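A minimal sketch of the QSceneLoader route (the file name is the same placeholder as above; the loader pulls in the obj together with the materials from its mtl and creates the child entities for you):
Qt3DCore::QEntity *sceneEntity = new Qt3DCore::QEntity(rootEntity);
Qt3DRender::QSceneLoader *sceneLoader = new Qt3DRender::QSceneLoader(sceneEntity);
sceneLoader->setSource(QUrl::fromLocalFile(baseUrl + "mesh.obj"));
sceneEntity->addComponent(sceneLoader);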
Related
I'm trying to make my own 3D engine using LWJGL.
This is the line of code that creates the projection matrix which is passed to the uniform.
pMatrix = new Matrix4f().ortho(-2f, 2f, -1.125f, 1.125f, 0.1f, 1000f);
This gives a perfectly working orthographic projection (first image): notice how the rendered mesh is square, correct according to the vertices.
Only changing this one line to
pMatrix = new Matrix4f().perspective((float) Math.toRadians(60.0f), 640/360,0.1f, 1000f);
breaks it, and the perspective projection no longer works properly. Notice how in the second image the mesh is not square.
Is there something that I'm doing wrong? If yes, please help me fix it. If not, then why is this happening?
I'm trying to apply a normal map to QDiffuseSpecularMaterial or QMetalRoughMaterial. I use QTextureImage to load the textures. When I try to apply the normal map, the material just becomes black; however, I don't have any issues with the other maps (baseColor, ambientOcclusion, metalness and roughness).
What I've tried without success:
Changed direction of vertex normals, but vertex normals are correct
Swapped RGB channels in normal map texture - all the combinations. Also tried with grayscale texture
Mapped values in texture loaded in QPaintedTextureImage from (0, 255) to (0, 1) range
Thought that normal maps maybe don't work with QPointLight, so I've also added QDirectionalLight to the scene
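The normal map itself is wired up roughly like this (a sketch of the setup; the file name is a placeholder):
Qt3DExtras::QDiffuseSpecularMaterial *material = new Qt3DExtras::QDiffuseSpecularMaterial(rootEntity);
// load the normal map image into a 2D texture
Qt3DRender::QAbstractTexture *normalTexture = new Qt3DRender::QTexture2D(material);
Qt3DRender::QTextureImage *normalImage = new Qt3DRender::QTextureImage(normalTexture);
normalImage->setSource(QUrl::fromLocalFile("normal.png"));
normalTexture->addTextureImage(normalImage);
// the normal property takes the texture wrapped in a QVariant;
// this is the step that turns the whole material black
material->setNormal(QVariant::fromValue(normalTexture));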
I am trying to implement a 2D selection window that can select the 3D vertices inside the 2D window (indicated by the dashed cyan rectangle). Each of my 3D models is currently a composite group of MeshViews, one MeshView per face. My plan was to iterate over each face (MeshView) and check if its 2D bounds intersect with the selection box bounds (I am planning to change this later, using an atlas texture to reduce the number of meshes, but for now I just want the selection mechanism working).
Currently I have the following, but this isn't correct.
val selectionBounds = selectionRectangle.boundsInParent
val localBounds = meshView.localToScene(meshView.boundsInLocal, true)
if (selectionBounds.intersects(localBounds))
// do something with the mesh in meshView
My subscene contains a perspective camera. I found two useful posts:
Convert coordinates from 3D scene to 2D overlay
How to get 2D coordinates on window for 3D object in javafx
I think I first have to project the meshView's bounds properly using my perspective camera, but I am unsure how to proceed. Do I have to project every 3D point of the local bounds to 2D, as is done in the second referenced question above? I'm not very familiar with the math or the related concepts, so any help would be appreciated.
(wink, wink MVP José)
EDIT 1:
After José's suggestion, I added red bounding boxes for each meshview, which gives the following result:
Apparently it adds some offset which appears to be the same regardless of the camera rotation. Here the red boxes are drawn around each meshview. I will investigate further.
EDIT 2:
I use a Pane which contains the SubScene and another Node. This is done to control the sizing of the SubScene and to reposition/resize the other Node accordingly by overriding the layoutChildren method, like this (this is in Kotlin):
override fun layoutChildren() {
val subScene = subSceneProperty.get()
if (subScene != null){
subScene.width = width
subScene.height = height
}
overlayRectangleGroup.resize(width, height)
val nodeWidth = snapSize(overlayMiscGroup.prefWidth(-1.0))
val nodeHeight = snapSize(overlayMiscGroup.prefHeight(-1.0))
overlayMiscGroup.resizeRelocate(width - nodeWidth, 0.0, nodeWidth, nodeHeight)
}
So basically, when I try to draw a mesh from an FBX file, its orientation is always removed and it's scaled down. I'm not sure if the issue is caused by my code or by the way I'm exporting the FBX files. I have been trying to narrow down the cause and I am fairly sure it's not the way I export the FBX (but I could be wrong), so it's either the XNA content pipeline or my drawing code.
Here are some pics I took to show my problem, where the gray background is 3ds Max as I see it and the red background is XNA:
This is how it appears in 3D Studio Max: http://i.stack.imgur.com/e0oW4.png
This is how it appears in XNA: http://i.stack.imgur.com/1vOcx.png
Both are being viewed from the same angle and direction but varying distances.
Now what is really odd is that if I create another mesh in Max, say a box, and export that (along with the original model), it works fine: http://i.stack.imgur.com/SIDg9.png
As long as there is more than one mesh in the FBX model, it draws properly (though I'm still suspicious about whether it's drawing with the proper scaling applied, i.e. if it is 1 unit long in Max it becomes something like 1.27 units long in XNA). If there is only one mesh, the orientation which I applied to it in 3D Studio Max is removed when I draw it.
This is how I draw the model:
model.CopyAbsoluteBoneTransformsTo(boneTransforms);
foreach (ModelMesh mesh in model.Meshes)
{
foreach (BasicEffect effect in mesh.Effects)
{
effect.World = boneTransforms[mesh.ParentBone.Index];
Vector3 cameraPosition = Camera.Get.Position;// new Vector3(0, 0, 0);
//cameraPosition.X = -Camera.Get.PosX;
//cameraPosition.Y = Camera.Get.PosY;
effect.View = Camera.Get.View;// Matrix.CreateLookAt(cameraPosition, cameraPosition + Camera.Get.LookDir, Camera.Get.Up);
effect.Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4,
BaseGame.Get.GraphicsDevice.Viewport.AspectRatio,
0.01f, 1000000); //Matrix.CreateOrthographic(800 / 1, 480 / 1, 0, 1000000);
//effect.TextureEnabled = true;
effect.LightingEnabled = true;
effect.PreferPerPixelLighting = true;
//effect.SpecularColor = new Vector3(1, 0, 0);
}
mesh.Draw();
}
Obviously mesh.Draw() is called twice when there is more than one mesh in the FBX file.
Generally, if you are having a problem with the position or scale of the mesh while rendering, it's likely to be related to the matrices: not necessarily the exporting, but rather how you use them in the code.
I use Blender for modelling, but I know that Blender actually defines different spaces when you are creating the meshes within the editor. For example, if you create a mesh while in 'object' mode, the position/rotation/scale of the object in the scene will not be exported (because that object will be the root of a new tree, centered around 0,0,0). So I would check for a similar situation in 3ds Max: make sure you are transforming the vertices in Max relative to 0,0,0, or else you may lose the 'initial' translation, and when you render in XNA all the objects will be rendered around your 0,0,0 (i.e. appear mixed together).
Failing that (I can't remember exactly off the top of my head), I think you may need to multiply the current mesh's absolute transform by the parent's world matrix transform. It's been a while, though, so I'm not too sure.
EDIT (for clarification):
I have a vector image with a simple contour, an irregular closed polygon.
I need to import it into Flash in a way that I can then programmatically access each of the segments that form the polygon.
Importing the vector image into the library as a MovieClip wasn't any good, because all I get is a Shape from which I can extract no geometry information at all.
My goal is being able to calculate the polygon's area and also calculating the intersection between the polygon and another polygon.
I guess I could write an Illustrator script that reads all the segments and writes a CSV file with their coordinates, but there has to be a simpler way; I mean, they're both vector formats, they should understand each other.
Thanks!
-- Old Post: --
I have a contour in vector graphics that I imported to the Flash library as a movieclip.
I instantiate the movieclip, and it has a Shape child which is the actual contour.
I need to be able to access the contour segments, i.e. the polygon's sides, to get their starting and ending points. Is there a way?
The Graphics class only allows you to draw, but what you draw, as with the Shape class, isn't made of objects; it's not a polygon with sides or anything like that.
Am I being clear?
Thanks
There is no way to read the data of a Graphics object (which is essentially what contains the information that you are after). This applies to any vector graphics object that has already been drawn, either by the Graphics/drawing API itself or in Flash CS3/CS4, or that was embedded using the [Embed] meta-tag.
Your best bet if you need to calculate the algebraic area, or for some other reason retain the vectors in your algorithms, is definitely exporting an SVG or some single-purpose format (like a CSV of the points) from Illustrator, and parsing that in ActionScript.
Another option is to use a BitmapData, draw the Shape object onto that, and then count the colored (opaque) pixels to numerically calculate its area.
var bmp : BitmapData = new BitmapData(myShape.width, myShape.height, true, 0);
bmp.draw(myShape);
var i : uint;
var area : uint = 0;
var num_pixels : uint = bmp.width*bmp.height;
for (i=0; i<num_pixels; i++) {
var px : uint = bmp.getPixel32(i%bmp.width, Math.floor(i/bmp.width));
// Determine from px color/alpha whether it's part of the shape or not.
// This particular if statement checks whether the alpha
// component (the top 8 bits of the px integer) is greater than zero, i.e.
// not transparent.
if ((px >> 24) > 0)
area++;
}
trace('number of opaque pixels (area): '+area);
Depending on your application, you might also be able to use the BitmapData.hitTest() method for your collision detection.
I believe the best you can do is to retrieve a rectangular bounding box of the Shape object. Depending on how you imported it, you may or may not have direct access to the Shape object as an instance variable; however, if you do, you can call shapeVar.getBounds() or shapeVar.getRect() (getBounds returns a rectangle inclusive of strokes on the shape, getRect does not).
I'm curious, so I'm doing a bit of research on alternate means of getting some pixel bounds. I'll edit this further if I find something useful.