This question already has an answer here:
Can I get vector data back out of a Graphics object?
(1 answer)
Closed 9 years ago.
EDIT (for clarification):
I have a vector image with a simple contour, an irregular closed polygon.
I need to import it into Flash in a way that I can then programmatically access each of the segments that form the polygon.
Importing the vector image into the library as a MovieClip wasn't good, because all I get is a Shape from which I can extract no geometry information at all.
My goal is to be able to calculate the polygon's area and also to calculate the intersection between the polygon and another polygon.
I guess I could write an Illustrator script that reads all the segments and writes a CSV file with their coordinates, but there has to be a simpler way; I mean, they're both vector formats, they should understand each other.
Thanks!
-- Old Post: --
I have a contour in vector graphics that I imported to the Flash library as a movieclip.
I instantiate the movieclip and it has a Shape child which is the actual contour.
I need to be able to access the contour segments, i.e. the polygon's sides, to be able to get their starting and ending points, is there a way?
The Graphics class only lets you draw, but what you draw, as with the Shape class, isn't a set of objects; it's not a polygon with sides or anything like that.
Am I being clear?
Thanks
There is no way to read the data of a Graphics object (which is essentially what contains the information you are after). This applies to any vector graphics object that has already been drawn, whether by the Graphics/drawing API itself, in Flash CS3/CS4, or embedded using the [Embed] meta-tag.
Your best bet if you need to calculate the algebraic area, or for some other reason retain the vectors in your algorithms, is definitely exporting an SVG or some single-purpose format (like a CSV of the points) from Illustrator, and parsing that in ActionScript.
Another option is to use a BitmapData, draw the Shape object onto it, and then count the colored (opaque) pixels to numerically approximate its area.
var bmp : BitmapData = new BitmapData(myShape.width, myShape.height, true, 0);
bmp.draw(myShape);
var i : uint;
var area : uint = 0;
var num_pixels : uint = bmp.width * bmp.height;
for (i = 0; i < num_pixels; i++) {
    var px : uint = bmp.getPixel32(i % bmp.width, Math.floor(i / bmp.width));
    // Determine from the px color/alpha whether it's part of the shape or not.
    // This if statement checks whether the alpha component (the top 8 bits of
    // the px integer) is greater than zero, i.e. not fully transparent. Note
    // the unsigned shift (>>>): a signed shift would yield negative values
    // for opaque pixels and the test would always fail.
    if ((px >>> 24) > 0)
        area++;
}
trace('number of opaque pixels (area): ' + area);
Depending on your application, you might also be able to use the BitmapData.hitTest() method for your collision detection.
I believe the best you can do is to retrieve a rectangular bounding box from the Shape object. Depending on how you imported it, you may or may not have direct access to the Shape object as an instance variable; however, if you do, you can call shapeVar.getBounds(targetCoordinateSpace) or shapeVar.getRect(targetCoordinateSpace) (getBounds returns a rectangle inclusive of strokes on the shape, getRect does not).
I'm curious, so I'm doing a bit of research on alternate means of getting some pixel bounds. I'll edit this further if I find something useful.
Related
I have a custom mesh (created in Blender) that I insert into Qt3D using the following code:
QMesh *mesh = new QMesh(rootEntity);
mesh->setSource(QUrl::fromLocalFile(baseUrl+"mesh.obj"));
This works fine; I can add it to an entity with a material and everything.
Then I create a custom material using a texture loaded from a .png. I do this using the following code:
Qt3DRender::QTextureLoader *loader = new Qt3DRender::QTextureLoader(rootEntity);
Qt3DExtras::QTextureMaterial *material = new Qt3DExtras::QTextureMaterial(rootEntity);
loader->setSource(QUrl::fromLocalFile(baseUrl+"pattern.jpg"));
material->setTexture(loader);
This also works fine. When I add this material to a built-in Qt mesh (e.g. QPlaneMesh or QSphereMesh) it shows perfectly on the surface as one would expect.
However - now comes the problem - if I add it to the QMesh specified above, the mesh just gets one homogeneous color, which seems to be the average over the colors in the pattern. Here you can see what I mean: both objects have the same material. The top one is inserted externally while the bottom one is a QPlaneMesh.
Can someone explain to me why that is the case? And is there a way to successfully add textures to custom meshes?
Note: I have tried this with 2D and 3D meshes and it is the same outcome.
Note 2: I have also tried it with different images and it still just gets one homogeneous average color.
UPDATE: I tried (following the suggestion in the answer) to add a texture attribute to the geometry of my imported mesh like the following:
Qt3DCore::QEntity *entity = new Qt3DCore::QEntity(rootEntity);
QMesh *mesh = new QMesh(entity);
mesh->setSource(QUrl::fromLocalFile(baseUrl+"mesh.obj"));
const int stride = (3 + 2 + 3 + 4) * sizeof(float);
QSize resolution = QSize(2,2);
const int nVerts = resolution.width() * resolution.height();
QAttribute *texCoordAttr = new QAttribute(mesh->geometry());
Qt3DRender::QBuffer *vertexBuffer = new Qt3DRender::QBuffer(mesh->geometry());
texCoordAttr->setName(QAttribute::defaultTextureCoordinate1AttributeName());
texCoordAttr->setVertexBaseType(QAttribute::Float);
texCoordAttr->setVertexSize(2);
texCoordAttr->setAttributeType(QAttribute::VertexAttribute);
texCoordAttr->setBuffer(vertexBuffer);
texCoordAttr->setByteStride(stride);
texCoordAttr->setByteOffset(3*sizeof(float));
texCoordAttr->setCount(nVerts);
vertexBuffer->setDataGenerator(QSharedPointer<PlaneVertexBufferFunctor>::create(1.0f,1.0f,resolution, false)); //these input values (width, height, resolution, mirrored) are probably the cause of the problem
mesh->geometry()->addAttribute(texCoordAttr); //it crashes here
entity->addComponent(mesh);
entity->addComponent(transform);
entity->addComponent(material);
I created the functor for setDataGenerator like in the QPlaneMesh code. I now suspect the segmentation fault is caused by a size mismatch. So how can I get the correct width and height of an external mesh from its QGeometry? And what else might be wrong here?
It looks like the mesh is missing the texture coordinates. When you open the file with a text editor, do you see the key vt somewhere? Those are the texture coordinates. You can read about the format here.
If you still want to use the obj file you have, you will have to add texture coordinates if it doesn't have any. It's probably best to open the file in Blender and use its texture mapper, at least for more complex meshes. Guessing which vertex needs which texture coordinate is not really feasible.
The texture coordinates work as follows:
If you have an image of, say, 500 by 400 pixels, the texture coordinate (0.7, 0.3) maps to (500 * 0.7, 400 * 0.3) = (350, 120), meaning that the vertex which has that texture coordinate will receive the color value of the pixel at (350, 120). Values inside a triangle get interpolated.
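For reference, this is what texture coordinates look like in an obj file. A minimal hand-written example (not the asker's mesh) describing a single textured triangle:
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vt 0.0 0.0
vt 1.0 0.0
vt 0.0 1.0
f 1/1 2/2 3/3
Each f entry references a position/texture-coordinate index pair, so vertex 1 gets the texture coordinate (0.0, 0.0), and so on.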
If your obj file comes along with an .mtl file, then it probably already has texture coordinates. If you want to load this .mtl file, use the QSceneLoader and add it to its parent QEntity to display everything.
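A minimal sketch of the QSceneLoader route, reusing the rootEntity and baseUrl variables from the question (adapt the path to your project):
// QSceneLoader parses the obj together with its mtl and builds the
// entity/material subtree for you; just attach it as a component.
Qt3DRender::QSceneLoader *sceneLoader = new Qt3DRender::QSceneLoader(rootEntity);
sceneLoader->setSource(QUrl::fromLocalFile(baseUrl + "mesh.obj"));
rootEntity->addComponent(sceneLoader);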
Has anyone already used the IFC (Industry Foundation Classes) from BuildingSmart, typically adopted for BIM projects and the building domain?
I would like to know how to navigate the IFC objects to get the coordinates of an IfcWallStandardCase or of a similar object (i.e., still a wall).
I am interested in getting the coordinates of all, or at least one, of the vertices delimiting the wall.
Please indicate how to navigate the Ifc objects of an Ifc file, i.e. where to locate the coordinate information starting from an IfcWallStandardCase or similar object.
First go for the Representation attribute, which is optional for IfcProduct. You want shape representations (IfcProductDefinitionShape), not material representations. If there are representations at all, you may get multiple of them, each with a context specifying dimensionality, precision, and coordinate system. Since you are hunting for coordinates, you probably want a representation of type IfcShapeRepresentation, not IfcTopologyRepresentation. Each representation then consists of multiple representation items.
There are several types of geometry representations - check the inheritance tree of IfcGeometricRepresentationItem. Here is an example for a faceted BREP: each representation item is then of type IfcFacetedBrep, which is explained nicely in the IFC2x4 specs. With the attribute Outer you get a closed shell, which consists of a set of faces (IfcFace) reachable through the attribute CfsFaces. Each face has a set of bounds (IfcFaceBound, attribute Bounds), each of which is defined by a loop (IfcLoop, attribute Bound) and an orientation. The loops in turn may be of different types; let's assume IfcPolyLoop. Those have a list of points (IfcCartesianPoint) under the attribute Polygon, which finally gives you the coordinates (of type IfcLengthMeasure, which is a REAL) through the attribute Coordinates.
Be aware that those coordinates are relative to the coordinate system of the geometric context mentioned in the beginning. Contexts may be nested with multiple coordinate transformations to be resolved in order to get absolute world coordinates.
The path of attribute names is: Representation, Items, CfsFaces, Bounds, Bound, Polygon, Coordinates.
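To make that chain concrete, the entities in an IFC (STEP) file might look like the following hand-written fragment (the entity numbers, the GUID, and the attributes left as $ are made up; the comments mark the attribute being followed):
#100=IFCWALLSTANDARDCASE('2O2Fr$t4X7Zf8NOew3FLOH',#2,'Wall',$,$,#110,#120,$); /* Representation = #120 */
#120=IFCPRODUCTDEFINITIONSHAPE($,$,(#130)); /* Items */
#130=IFCSHAPEREPRESENTATION(#10,'Body','Brep',(#140));
#140=IFCFACETEDBREP(#150); /* Outer */
#150=IFCCLOSEDSHELL((#160)); /* CfsFaces */
#160=IFCFACE((#170)); /* Bounds */
#170=IFCFACEOUTERBOUND(#180,.T.); /* Bound */
#180=IFCPOLYLOOP((#190,#191,#192,#193)); /* Polygon */
#190=IFCCARTESIANPOINT((0.,0.,0.)); /* Coordinates */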
For a university project I need to implement a computer graphics paper that was released a couple of years ago. At one point, I need to triangulate the results I get from my simulation. I guess it's easier to explain what I need by looking at a picture contained within the paper:
Let's say I already have all the information it takes to reconstruct the contour lines that you can see in the second thumbnail. Using those, I need to do some triangulation with those silhouettes as constraints. I have searched the internet for triangulation libraries like CGAL, VTK, Triangle, Triangle++, ... but I always ended up throwing my hands up in horror. I am not a good programmer and it seems impossible to me to get into one of those APIs before the deadline of this project passes.
I would appreciate any kind of help, like code snippets, tips, etc...
I know that the algorithms need segments (pairs of points) as input, so let's say I have got one std::vector containing all pairs of points defining the silhouette as well as the left and right side of the rectangle.
Can you give me a code snippet for, e.g., CGAL that I could use for my purpose? First of all I just want to achieve the state of the third thumbnail. Later on I will have to do some displacement within the "cracks" and finally write the information into a VBO for OpenGL rendering.
I have started working it out with CGAL. One simple problem still drives me crazy:
It is possible to attach information (like ints) to points before adding them to the triangulation object. I do this because I need, on the one hand, an int flag that I later use to define my texture coordinates, and on the other hand an index that I use to create an indexed VBO.
http://doc.cgal.org/latest/Triangulation_2/Triangulation_2_2info_insert_with_pair_iterator_2_8cpp-example.html
But instead of points, I only want to insert constraint edges. If I insert both, CGAL returns strange results, since the points have been fed in twice (once as a point and once as an endpoint of a constrained edge).
http://doc.cgal.org/latest/Triangulation_2/Triangulation_2_2constrained_8cpp-example.html
Is it possible to attach information to constraints in the same way as with points, so that I can just call cdt.insert_constraint(Point(j,0), Point(j,6)); before I iterate over the resulting faces?
Later on, when I loop over the triangles, I need some way to access the int flags that I defined before. Like this, but not on actual points, rather on the "ends" defined by the constraint edges:
for (CDT::Finite_faces_iterator fit = m_cdt.finite_faces_begin(); fit != m_cdt.finite_faces_end(); ++fit, ++k) {
    int j = k * 3;
    for (int i = 0; i < 3; i++) {
        indices[j + i] = fit->vertex(i)->info().first;
    }
}
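A pattern that should cover this (a minimal sketch, not from the original thread: insert the endpoints first so each one carries its info(), keep the returned vertex handles, then constrain the edge between the handles, which also avoids feeding each point in twice; the Point(0,0)/Point(0,6) values echo the constraint from the question):
#include <utility>
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Triangulation_vertex_base_with_info_2.h>
#include <CGAL/Constrained_triangulation_face_base_2.h>
#include <CGAL/Triangulation_data_structure_2.h>
#include <CGAL/Constrained_Delaunay_triangulation_2.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
// info() holds the <int flag, VBO index> pair described above
typedef CGAL::Triangulation_vertex_base_with_info_2<std::pair<int,int>, K> Vb;
typedef CGAL::Constrained_triangulation_face_base_2<K> Fb;
typedef CGAL::Triangulation_data_structure_2<Vb, Fb> Tds;
typedef CGAL::Constrained_Delaunay_triangulation_2<K, Tds> CDT;
typedef CDT::Point Point;

int main()
{
    CDT cdt;
    // Insert each endpoint individually and attach its info.
    CDT::Vertex_handle va = cdt.insert(Point(0, 0));
    va->info() = std::make_pair(/*texture flag*/ 0, /*VBO index*/ 0);
    CDT::Vertex_handle vb = cdt.insert(Point(0, 6));
    vb->info() = std::make_pair(/*texture flag*/ 0, /*VBO index*/ 1);
    // Constrain the edge between the two existing vertices; no extra,
    // info-less copies of the points are created this way.
    cdt.insert_constraint(va, vb);
    // In the face loop, fit->vertex(i)->info().first works exactly as before.
    return 0;
}
insert_constraint(Vertex_handle, Vertex_handle) is the overload that makes this work. Note that if constraints intersect each other, new vertices without meaningful info may still appear at the intersection points.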
So basically, when I try to draw a mesh inside an FBX file, its orientation is always removed and it's scaled down. I'm not sure if the issue is caused by code or by the way I'm exporting the FBX files. I have been trying to narrow down the cause, and I am fairly sure it's not caused by the way I export the FBX (but I could be wrong), so it's either the XNA content pipeline or my drawing code.
Here are some pics I took to show my problem, where the gray background is 3ds Max as I see it and the red background is XNA:
This is how it appears in 3D Studio Max: http://i.stack.imgur.com/e0oW4.png
This is how it appears in XNA: http://i.stack.imgur.com/1vOcx.png
Both are being viewed from the same angle and direction but varying distances.
Now what is really odd is if I create another mesh in max, say a box, and export that (along with the original model), it works fine: http://i.stack.imgur.com/SIDg9.png
So long as there is more than one mesh in the FBX model, it draws properly (though I'm still suspicious about whether it's drawing with proper scaling applied, i.e. if it is 1 unit long in Max, in XNA it becomes something like 1.27 units long); if there is only one mesh, the orientation I applied to it in 3D Studio Max is removed when I draw it.
This is how I draw the model:
model.CopyAbsoluteBoneTransformsTo(boneTransforms);
foreach (ModelMesh mesh in model.Meshes)
{
    foreach (BasicEffect effect in mesh.Effects)
    {
        effect.World = boneTransforms[mesh.ParentBone.Index];
        Vector3 cameraPosition = Camera.Get.Position; // new Vector3(0, 0, 0);
        //cameraPosition.X = -Camera.Get.PosX;
        //cameraPosition.Y = Camera.Get.PosY;
        effect.View = Camera.Get.View; // Matrix.CreateLookAt(cameraPosition, cameraPosition + Camera.Get.LookDir, Camera.Get.Up);
        effect.Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4,
            BaseGame.Get.GraphicsDevice.Viewport.AspectRatio,
            0.01f, 1000000); //Matrix.CreateOrthographic(800 / 1, 480 / 1, 0, 1000000);
        //effect.TextureEnabled = true;
        effect.LightingEnabled = true;
        effect.PreferPerPixelLighting = true;
        //effect.SpecularColor = new Vector3(1, 0, 0);
    }
    mesh.Draw();
}
Obviously mesh.Draw() is called once per mesh when there is more than one mesh in the FBX file.
Generally, if you are having a problem with the position or scale of the mesh while rendering, it's likely related to the matrices. Not necessarily the exporting, but rather how you use them in the code.
I use Blender for modelling, but I know that Blender actually defines different spaces when you are creating meshes within the editor. For example, if you create a mesh while in 'object' mode, the position/rotation/scale of the object in the scene will not be exported (because that object will be the root of a new tree, centered around 0,0,0). So I would check for a similar situation in 3ds Max: make sure you are transforming the vertices in Max relative to 0,0,0, or else you may lose the 'initial' translation, and when you render in XNA all the objects will be rendered around your 0,0,0 (i.e. appear mixed together).
Failing that (I can't remember exactly off the top of my head), I think you may need to multiply the current mesh's absolute matrix transform by the parent's world matrix transform. It's been a while though, so I'm not too sure.
I am looking for a fairly simple image comparison method in AS3. I have taken an image from a web cam (with no subject) and passed it into BitmapData; then a second image is taken (this time with a subject) to compare against this data. From these two images I would like to create a mask from the pixels that match on both bitmaps. I have been scratching my head for a while and I am not really making any progress. Could anyone point me in the right direction for a pixel comparison method, something like getPixel32()?
Cheers
Jono
Use compare() to create a difference between the two, and then use threshold() to extract the parts that interest you.
Edit: actually, it is pretty straightforward. The trick is to apply the threshold once per channel using the mask parameter (otherwise the comparison makes little sense, since 0x010000 (which is almost black) is considered greater than 0x0000FF (which is anything but black)). Here's how:
var dif:BitmapData; // your original bitmapdata
var mask:BitmapData = new BitmapData(dif.width, dif.height, true, 0);
const threshold:uint = 0x20;
for (var i:int = 0; i < 3; i++)
    mask.threshold(dif, dif.rect, new Point(), ">", threshold << (i * 8), 0xFF000000, 0xFF << (i * 8));
This creates a transparent mask. The threshold is then applied for all three channels, setting the alpha channel to fully opaque wherever a channel's value exceeds the threshold value (you might want to decrease it).
You can isolate the foreground object ("the guy in front of the webcam") by copying the alpha channel from the mask to the current video image.
One of the problems here is that you want to find whether a pixel has ANY change to it, and if it does, to convert that pixel to another color (for masking). Unfortunately, a webcam's quality isn't great, so even if your scene does not change at all, the BitmapData coming from the webcam will change slightly. Therefore, when your subject steps into frame you will get pixel changes for the subject, but also noise in other areas due to lighting changes or camera quality. What you'll need to do is write a function that analyzes the result of BitmapData.compare() for change in areas larger than _____ to determine if there is enough change to warrant an actual object being there. That will help remove noise and make your mask more accurate.