Web-ifc-three: How to get the geometry by expressId only, without picking? - ifc

I have a question, please: how can I get the geometry by expressID only, without picking?
Given one or more expressIDs, I need to find the corresponding geometry, i.e. the matching Three.js object(s). I tried createSubset, but (as far as I can tell) this method returns the mesh of the whole model rather than a mesh of just the subset.
let subset = ...createSubset({
  modelID: ..., ids: [id], material: ...,
  scene: ..., removePrevious: true
});
Thank you in advance!

For performance's sake, IFC.js combines all the items of the model into a single mesh. Having each item as a separate mesh would result in the browser not being able to handle medium-sized models due to the number of draw calls.
Subsets are not exactly the whole model. Each subset shares the same position, normal, and expressID buffers with the whole model to save memory, but each subset has its own index array. Notice that both the whole model and the subset are indexed BufferGeometries.
If you want to reconstruct an individual Three.js Mesh from a subset, you can do it like I explained in this other answer. Please note that there is a reason behind this decision, and Mesh reconstruction should only be used for export purposes.
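A minimal sketch of such a reconstruction (not necessarily the exact code from the linked answer), assuming standard Three.js BufferGeometry attributes on the subset returned by createSubset:
import * as THREE from 'three';

// Build a standalone mesh from a subset. The cloned attributes still contain the
// whole model's vertex buffers; toNonIndexed() expands the subset's own index so
// that only the referenced vertices remain in the resulting geometry.
function detachSubset(subsetMesh) {
  const src = subsetMesh.geometry;
  const geometry = new THREE.BufferGeometry();
  for (const name of Object.keys(src.attributes)) {
    geometry.setAttribute(name, src.attributes[name].clone());
  }
  geometry.setIndex(src.index.clone());
  return new THREE.Mesh(geometry.toNonIndexed(), subsetMesh.material);
}
Keep in mind that this duplicates geometry data in memory, which is exactly why IFC.js avoids per-item meshes in the first place; use it only for export.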

Related

Google Earth Engine: Extract band values from pixel in each image of a collection?

Sorry for the basic question, I'm a GEE beginner.
Essentially, what I want to do is extract the value of a certain band at a pixel from each image in a collection and put it into an array.
I understand how to do this if the output is to a chart, e.g.:
print(ui.Chart.image.series(with_ndvi.select("nd"), area));
Where with_ndvi is my image collection, "nd" is the band I'm interested in, and area is a point feature.
However, I need to get these values into an array, because I need to perform a calculation on each value.
Is there an easy function to map over a collection to extract the values as numbers to work with?
Thanks for any help.
In general, in order to get particular values out of an image, you use reduceRegion. Since you have a single point there isn't really any reduction happening, but the same operation is used to get the mean, median, maximum, etc. from an area, and you need to choose a reducer to perform the operation. (ui.Chart.image.series defaults to the mean reducer if you don't specify otherwise.)
I constructed this example from the images used in the Normalized Difference example script:
var imageCollection = ee.ImageCollection('MODIS/006/MOD09GA')
    .filterDate('2019-01-01', '2019-01-31');
var ndviCollection = imageCollection.map(function (img) {
  var ndImage = img.normalizedDifference(['sur_refl_b02', 'sur_refl_b01']);
  return ee.Feature(area, ndImage.reduceRegion(ee.Reducer.mean(), area));
});
print(ndviCollection);
Runnable example link
Here, ndviCollection is a FeatureCollection where each feature has the original point as geometry (useful if you have multiple points of interest, but otherwise you could make it be null instead) and the NDVI at that point as a property named nd.
If you absolutely need a list of numbers rather than a feature collection, you can obtain that:
print(ndviCollection
    .toList(100) // set this number to the maximum number of elements you expect
    .map(function (feature) {
      return ee.Feature(feature).get('nd');
    }));
but you should not do this if you can avoid it, as lists are always held in memory as a whole rather than processed in a streaming fashion. Instead, perform your computations on the feature collection using map and/or reduceColumns.
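For example, a summary statistic over the whole collection can stay server-side. A minimal sketch using the ndviCollection from above (the choice of the mean reducer here is just an illustration):
var meanNd = ndviCollection.reduceColumns(ee.Reducer.mean(), ['nd']);
print(meanNd); // a dictionary with a 'mean' property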

Adapt geometry on printed points

I draw a vector geometry with some calibration points around it.
I print this geometry and then I physically scan the printed calibration points (I can't scan the geometry, I can only scan the calibration points).
When I acquire these points, they are no longer in their original positions because of print errors or bad printer calibration.
The question is:
Is there any algorithm that helps me adapt the original geometry based on the newly scanned points?
In practice I need to warp the geometry in order to obtain the real geometry printed on the paper, with the same print error that affects the calibration points.
The distortion comes from the physical deformation of the material (not paper but cloth) during the print process. I can't know in advance how much the material will distort during printing.
Yes, there are algorithms to help you with that. In general you need to learn/find the transformation between the two images that you have.
Typical geometric transformations are affine transformations (shift, scale, rotation, shear, reflection), which need at least three control points, or piecewise local linear / local weighted mean transformations, which need at least 4-6 control points. The more control points you have, the better in general.
Given a set of control points in one image and the corresponding set of control points in the other image, there are algorithms for finding the optimal transformation between them once you specify a class (affine or piecewise local linear). See for example fitgeotrans in Matlab. I don't know exactly how it solves the problem, but I guess by some kind of optimization. It should be easy to find implementations for other programming languages (Python, C, Java); a minimal sketch of the affine case is given at the end of this answer.
What remains is finding the correspondence between the control points in the two images. For a few images you may be able to do that by hand, but in the general case you might want to automate this as well. General image registration algorithms like imregister should do well for your images. They give you a good initial estimate for the transformation (which may already be sufficient), so that identifying the corresponding point pairs becomes trivial (always take the nearest) and allows refining.
So I advise you to first just try to register the images (grayscale data) with an identity transformation as the starting value. Then identify corresponding point pairs and refine the transformation using either an affine or a piecewise/local transformation. Then apply the transformation to the geometry to get the printed geometry. Depending on your choice of programming language you will find many implementations that do the job.
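To make the affine case concrete, here is a minimal, self-contained sketch in plain JavaScript (the function names, the solver, and the sample coordinates are illustrative, not from any library). With exactly three control point pairs the affine transform is determined exactly; with more, noisy pairs you would switch to a least-squares fit, which is what tools like fitgeotrans do for you.
// Fit an exact 2D affine transform [x', y'] = [a*x + b*y + c, d*x + e*y + f]
// from three control point pairs, then use it to warp the geometry.
function det3(m) {
  return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
       - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
       + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

// Solve a 3x3 linear system A * x = b with Cramer's rule.
function solve3(A, b) {
  const d = det3(A);
  return [0, 1, 2].map(i =>
    det3(A.map((row, r) => row.map((v, c) => (c === i ? b[r] : v)))) / d
  );
}

// src and dst are arrays of three [x, y] control points (designed vs. scanned).
function fitAffine(src, dst) {
  const A = src.map(([x, y]) => [x, y, 1]);
  const [a, b, c] = solve3(A, dst.map(p => p[0]));
  const [d, e, f] = solve3(A, dst.map(p => p[1]));
  return ([x, y]) => [a * x + b * y + c, d * x + e * y + f];
}

// Example usage with made-up coordinates: warp every vertex of the original
// geometry into the "printed" coordinate frame.
const warp = fitAffine(
  [[0, 0], [100, 0], [0, 100]],            // calibration points as designed
  [[1.2, 0.5], [101.8, 0.9], [0.4, 99.1]]  // calibration points as scanned
);
const printedVertex = warp([50, 50]);
Cloth distortion is usually not globally affine, so in practice a piecewise/local model fitted to more calibration points will follow the fabric better; the overall structure stays the same, only fitAffine is replaced.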

Tracking multiple objects with merging,dividing of objects in scenes in point clouds

I have a stream of RGBD images. After removing the plane surface, I perform Euclidean clustering on each frame, which gives me the centroids of the objects present in the scene. But the labels associated with these objects keep changing from frame to frame.
Essentially, I need to track the objects (where merging and splitting of objects can take place) and assign them consistent labels. I have the X, Y, Z of the objects for each frame. I wanted to know of any mathematical models that can do this task. I am also not able to find an example of a tracker in PCL. But since I have already extracted the centroids of the point clouds, it might be possible for me to code this up from scratch in MATLAB/Python.
Say these are two objects present in the scene:
X_1,Y_1,Z_1
X_2,Y_2,Z_2
Say object 2 now splits into two. I want to keep track of the fact that these new objects originated from object 2. In the next frame, the object coordinates will be something like this:
X_1,Y_1,Z_1
X_2_1,Y_2_1,Z_2_1
X_2_2,Y_2_2,Z_2_2

Constrained (Delaunay) Triangulation

For a university project I need to implement a computer graphics paper that was released a couple of years ago. At one point, I need to triangulate the results I get from my simulation. I guess it's easier to explain what I need by looking at a picture contained in the paper:
Let's say I already have all the information it takes to reconstruct the contour lines that you can see in the second thumbnail. Using those, I need to do a triangulation with those silhouettes as constraints. I have searched the internet for triangulation libraries like CGAL, VTK, Triangle, Triangle++, ... but I always ended up throwing my hands up in horror. I am not a good programmer and it seems impossible to me to get into one of those APIs before the deadline of this project passes.
I would appreciate any kind of help like code snippets, tips, etc.
I know that the algorithms need segments (pairs of points) as input, so let's say I have one std::vector containing all the pairs of points defining the silhouette as well as the left and right sides of the rectangle.
Can you somehow give me a code snippet for, e.g., CGAL that I could use for my purpose? First of all I just want to achieve the state of the third thumbnail. Later on I will have to do some displacement within the "cracks" and finally write the information into a VBO for OpenGL rendering.
I have started working it out with CGAL. One simple problem still drives me crazy:
It is possible to attach information (like ints) to points before adding them to the triangulation object. I do this since I need, on the one hand, an int flag that I use later on to define my texture coordinates and, on the other hand, an index that I use to create an indexed VBO.
http://doc.cgal.org/latest/Triangulation_2/Triangulation_2_2info_insert_with_pair_iterator_2_8cpp-example.html
But instead of points I only want to insert constraint edges. If I insert both, CGAL returns strange results, since the points have been fed in twice (once as a point and once as an endpoint of a constrained edge).
http://doc.cgal.org/latest/Triangulation_2/Triangulation_2_2constrained_8cpp-example.html
Is it possible to attach information to constraints in the same way as with points, so that I only need to call cdt.insert_constraint( Point(j,0), Point(j,6)); before I iterate over the resulting faces?
Later on, when I loop over the triangles, I need some way to access the int flags that I defined before. Like this, but not on actual points, but on the "ends" defined by the constraint edges:
for (CDT::Finite_faces_iterator fit = m_cdt.finite_faces_begin(); fit != m_cdt.finite_faces_end(); ++fit, ++k) {
    int j = k * 3;
    for (int i = 0; i < 3; i++) {
        indices[j + i] = fit->vertex(i)->info().first;
    }
}

Rendering massive amount of data

I have a 3D floating-point matrix; in the worst-case scenario its size could be 200000 x 1000000 x 100. I want to visualize this matrix using Qt/OpenGL.
Since the number of elements is extremely high, I want to render them in a way that, when the camera is far away from the matrix, I just show a number of interesting points that give an approximation of what the matrix looks like. When the camera gets closer, I want more detail, and hence more elements get computed.
I would like to know if there are techniques that deal with this kind of visualization.
The general idea is called level-of-detail rendering and is a whole science in itself.
For your domain I would recommend two steps:
1) Reduce the number of cells by averaging them (arithmetic mean) into cubes of different sizes and caching those cubes (on disk as well as in RAM). "Different" here means that you have the same data at multiple cube sizes, e.g. coarse-grained cubes of 10000x10000x10000 cells and finer cubes of 100x100x100 cells, resulting in multiple levels of detail. You have to organize these in a hierarchical structure (the larger ones containing multiple smaller ones), and for this I would recommend an Octree:
http://en.wikipedia.org/wiki/Octree
2) The second step is to actually render parts of this Octree:
To do this, use the distance from your camera position to the sub-cubes. Go through the cubes and decide either to descend into a sub-cube or to render the larger cube, based on this distance function and heuristically chosen or guessed threshold values; a sketch of this traversal is given below.
(2) can be further optimized, but this is optional: to optimize the rendering, organize the to-be-rendered cubes into layers. The direction of the layers (whether they are x, y, or z slices) depends on your camera viewpoint, to which they should be near-perpendicular. Then render each slice into a texture, and voila, you only have to render a single quad with that texture for each slice; 1000 quads are no problem to render.
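A minimal sketch of that traversal, written here as plain JavaScript pseudocode of the logic only (in a Qt/OpenGL application this would live in C++; node.size, node.center, node.children, renderCube and the detail factor are assumptions for illustration, not an existing API):
function renderOctree(node, cameraPos, detailFactor) {
  const d = distance(cameraPos, node.center);
  // Heuristic: refine while the node looks large relative to its distance.
  const wantsDetail = node.size > d * detailFactor;
  if (wantsDetail && node.children && node.children.length > 0) {
    for (const child of node.children) {
      renderOctree(child, cameraPos, detailFactor);
    }
  } else {
    renderCube(node); // placeholder: draw this node's averaged (coarse) cells
  }
}

function distance(a, b) {
  const dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}
Tuning detailFactor trades image quality against the number of cells that end up being drawn.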
Qt also has some ways of rendering huge numbers of elements efficiently. Check the examples/demos that ship with Qt.
