Clicking "Pictures" of a point cloud in PCL Library or Open3D - point-cloud-library

I have a point cloud and want to take "pictures" of it from various angles. Let us say I point the camera at the object from the top at a certain angle, rotate the camera around the object while keeping that orientation, and "snap" what the camera is seeing.
Next I want to change the camera orientation and repeat the process.
I am completely new to the 3D data processing domain and not very familiar with the PCL / Open3D libraries. How can I code this functionality?
Thanks in Advance.
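To make the intent concrete, something along these lines is roughly what I am after, sketched with Open3D's legacy Visualizer (the file name, window size, tilt amount and number of stops are placeholders, and rotate() takes view-control drag units rather than degrees, so the values would need tuning):

import open3d as o3d

# Placeholder input; any point cloud format Open3D can read works here.
pcd = o3d.io.read_point_cloud("cloud.pcd")

vis = o3d.visualization.Visualizer()
vis.create_window(width=800, height=600)
vis.add_geometry(pcd)
ctr = vis.get_view_control()

# Tilt the view once so the camera looks down at the object, then orbit it
# in small horizontal steps, saving an image at each stop.
ctr.rotate(0.0, 250.0)
for i in range(36):
    ctr.rotate(60.0, 0.0)
    vis.poll_events()
    vis.update_renderer()
    vis.capture_screen_image("view_%03d.png" % i, do_render=True)

vis.destroy_window()

If staying in C++ with PCL is preferred, PCLVisualizer's setCameraPosition together with saveScreenshot should allow a comparable loop.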

Related

How to calculate three.js camera matrix given projection matrix

Problem context: I'm working with the Google Maps WebGL API and its three.js wrapper to create an interactive browser game.
My understanding of the framework is that Google Maps takes control of the WebGL camera (e.g., to enable the usual Maps controls like drag-to-pan and scroll-to-zoom) and only allows client three.js code to query camera information via the following documented API:
this.camera.projectionMatrix.fromArray(
    transformer.fromLatLngAltitude(this.anchor, this.rotation, this.scale)
);
I've attempted to click on three.js objects using the following method for calculating projection rays:
raycast(
    normalizedScreenPoint: three.Vector2,
): three.Intersection[] {
    this.projectionMatrixInverse.copy(this.camera.projectionMatrix).invert();
    this.raycaster.ray.origin
        .set(normalizedScreenPoint.x, normalizedScreenPoint.y, 0)
        .applyMatrix4(this.projectionMatrixInverse);
    this.raycaster.ray.direction
        .set(normalizedScreenPoint.x, normalizedScreenPoint.y, .5)
        .applyMatrix4(this.projectionMatrixInverse)
        .sub(this.raycaster.ray.origin)
        .normalize();
    ...
}
where normalizedScreenPoint ranges from -1 to 1 and is just the X/Y coordinates within the map div.
This method generally seems to work correctly close to ground level. However, for objects at high altitude (400 m, or 400 three.js units) that are close to, but not occluded by, the camera (still entirely within the viewing frustum), my projection rays are not intersecting these objects as expected. The problem gets worse with altitude, with objects becoming nearly unselectable at 1000 m. I do not have this issue when running in a pure three.js environment using three.js's native functions for generating projection rays, which require the camera's position in three.js space to be known.
I have to believe there is some kind of coordinate mismatch between three.js Cartesian coordinates and the Google Maps azimuthal projection, or some comparable issue that leads the API to return a "bad" projection matrix. The Google Maps hooks into WebGL are closed-source, so I'm unable to dig into how the camera projection is generated, but I believe it would help to manually move the camera position up a few meters, if I were able to calculate and set it. How could I do this given its projection matrix?
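One idea I am considering, assuming the matrix really is a full view-projection matrix in the usual column-vector convention: such a matrix sends the camera centre to a clip-space point proportional to (0, 0, 1, 0), so inverting the matrix and dehomogenising that vector should recover the camera's world position. Sketched as plain linear algebra (numpy here just to show the math, not three.js API calls):

import numpy as np

def camera_position_from_view_projection(m):
    # m: 4x4 view-projection matrix, column-vector convention (clip = m @ world).
    # Any standard perspective projection P sends the camera centre C
    # (the point the view matrix maps to the origin) to P @ (0, 0, 0, 1)^T,
    # which is proportional to (0, 0, 1, 0)^T, so m @ C ~ (0, 0, 1, 0)^T.
    c = np.linalg.inv(m) @ np.array([0.0, 0.0, 1.0, 0.0])
    return c[:3] / c[3]

# Note: three.js Matrix4.elements is column-major, so build m with
# np.array(elements).reshape(4, 4).T before calling this.

With that position in hand I could presumably nudge the camera up a few meters and rebuild the rays as origin plus direction, as described above.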
The other alternative, integrating three.js myself with another map-rendering engine like Tangram to get full control, would avoid these issues with a proprietary API, but would presumably be much more time-intensive.

Can we simulate a calibrated camera on PCL visualizer?

I have a calibrated stereo camera. So I have the camera intrinsic matrix for the left camera. I have built an absolute 3D model of the scanned region which I am able to load as a mesh file.
I have recorded a video from the stereo camera's left camera as I scanned the region. I also know the position and orientation of the stereo camera at every point during the scanning process, so I recreated this motion of the stereo camera using PCL. If the position and orientation of the stereo camera in the real world match those of the PCL visualizer's camera, will the left camera's photo and the PCL-rendered view match?
I tried doing this, but it looks like the perspective projection done in the PCL visualizer is different from that of the camera. If so, is there a way to change the projection used by the PCL visualizer so that the rendered view matches the camera's image exactly?
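For reference, this is roughly what I would try if switching tools were acceptable: Open3D's view control (in reasonably recent versions) can be driven directly from pinhole intrinsics and an extrinsic pose, which makes this kind of "render what the real left camera saw" comparison fairly direct. The file name, image size, intrinsics and pose below are placeholders; note that the window size has to match the intrinsic width/height, and an off-centre principal point is only accepted with allow_arbitrary=True.

import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("scene_mesh.ply")    # placeholder mesh file
mesh.compute_vertex_normals()

width, height = 1280, 720
fx, fy, cx, cy = 700.0, 700.0, 639.5, 359.5           # placeholder calibration

vis = o3d.visualization.Visualizer()
vis.create_window(width=width, height=height)
vis.add_geometry(mesh)

params = o3d.camera.PinholeCameraParameters()
params.intrinsic = o3d.camera.PinholeCameraIntrinsic(width, height, fx, fy, cx, cy)
params.extrinsic = np.eye(4)   # world-to-camera pose of the left camera at this frame

ctr = vis.get_view_control()
ctr.convert_from_pinhole_camera_parameters(params, allow_arbitrary=True)
vis.poll_events()
vis.update_renderer()
vis.capture_screen_image("rendered_view.png", do_render=True)
vis.destroy_window()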

Apply projective transformation on plane in 3D

Scenario
I have a 3D environment which contains a 3D scene and a '2D' scene.
The 3D scene contains a cube and a perspective camera.
The '2D' scene contains 4 round objects and an orthographic camera. These round objects can be moved around by the user; the orthographic camera is used because otherwise the round objects could be moved 'in depth' (along the z-axis) and would change in size, and I want them to maintain their size.
Depending on the positioning of the round objects, the corners of the cube in the 3D scene should be aligned with the positions of the round objects, while maintaining perspective.
Edit:
What I am trying to accomplish is: based on an image of a room, a user uses those round objects to define the dimensions of the room. Based on those dimensions, a hidden cube is positioned to act as a bounding box. The next step would be to add 3D objects to the scene while maintaining the perspective of the room.
I tried explaining this scenario in a picture:
Problems
Basically I have no clue where to start.
The round objects are in a '2D' environment because of the orthographic camera, so I have no depth value, which I think I need.
I think I need some perspective transformation based on the camera positions/settings? There are all sorts of matrices that could be produced, but I don't know how to implement them.
Sources I studied
http://www.graphicsmill.com/docs/gm/affine-and-projective-transformations.htm
below is a similar situation
https://math.stackexchange.com/questions/296794/finding-the-transform-matrix-from-4-projected-points-with-javascript
Cannot post more links because of my reputation
I hope someone can make this clear or point me in the right direction.
Counting the real degrees of freedom, I would say that you don't have enough data. Imagine the projective camera of the 3D scene as an actual pinhole camera. Then the image that camera creates on its film, sensor or whatever is described by at least 9 parameters:
3 parameters for the position of the camera in space,
2 parameters for the direction the camera is looking at,
1 parameter rotating the camera + sensor around their optical axis,
1 parameter determining the distance from pinhole to sensor, and
2 parameters translating the sensor in its plane.
On the other hand, knowing a projective transformation from one plane to another, e.g. using my answer to the question you already referenced, will only yield 8 geometrically meaningful parameters. So you cannot hope to reconstruct the camera position from that, and therefore you cannot find the image of the 3D scene that would fit your markers. The Wikipedia article on 3D pose estimation writes that
Most implementations of POSIT only work on non-coplanar points (in other words, it won't work with flat objects or planes).[3]
That being said, you gave an example of where someone is actually doing this! So how do they do it? Honestly, I'm not sure, but they would have to make use of some additional knowledge or extra assumptions. For example, if they knew details about their camera (focal length, relative position between lens and sensor, or something like that), that could provide the required data. Since these apps tend to work on mobile devices, I think it rather likely that they might have either an API to request these things or a database where they can be looked up for the more common devices.
Judging from your question, you don't have that. Neither do you have all the vertical edges of the cube depicted vertically and parallel to one another, which would have been another possible way to add more information. You have to come up with one more piece of information in order to allow for a hopefully unique solution.
Of course, without more information the system is just underspecified. It's not hard to find some transformation matrix which does what you requested. In fact, the answer I referenced is set in a scenario where a 2D-to-2D map is modeled using a 3D transformation matrix; you can do the same and be done with it (a minimal numpy version of that 4-point fit is sketched below). But your users might become frustrated, since the transformation they obtain might do completely wrong things in the out-of-plane direction, and there is no knob to tune that to the correct behavior.
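For completeness, here is a numpy sketch of that 4-point, 8-parameter fit: the projective map taking 4 reference points (say, the corners of a unit square) to the 4 user-placed markers. The marker coordinates in the example are made up.

import numpy as np

def homography_from_4_points(src, dst):
    # Fit the 3x3 projective map H with H @ [x, y, 1]^T ~ [u, v, 1]^T from
    # exactly 4 source/destination pairs: 8 unknowns, with h33 fixed to 1.
    a, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        a.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        a.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(a, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(h_mat, point):
    # Map one 2D point through H and dehomogenise.
    x, y, w = h_mat @ np.array([point[0], point[1], 1.0])
    return np.array([x / w, y / w])

# Made-up example: unit square corners onto 4 marker positions.
H = homography_from_4_points([(0, 0), (1, 0), (1, 1), (0, 1)],
                             [(12, 8), (95, 14), (90, 80), (5, 70)])
print(apply_homography(H, (0.5, 0.5)))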

How to set gravity to the center of a big sphere (planet) in Babylon.js?

I made a sphere in Babylon.js at the point [0,0,0]; I want it to be like a planet with its own gravity.
Then I want another sphere (that will be the player) to be attracted to the center of the big sphere (the "planet").
Here is the demo I made.
http://www.babylonjs-playground.com/#DETZ7#1
The only solution I can think of is to update the gravity values dynamically, but I don't know if that is a best practice in this situation. If you know a better way to do it, please let me know; I started learning this today.
Thank you in advance.
You can make your "planet gravity" force in Cannon.js by applying a force on the player body on every physics tick. The force should be directed at the center of the planet. You also need to cancel the gravity force that the physics world applies every step.
This is mainly what I added to your code to implement the gravity force. Note that I also changed your makePlayer function so it returns the CANNON.Body instead of your player mesh. I also made sure to set the gravity of the world to exactly -10 in the Y direction, for simplicity.
// Listen for physics ticks
playerBody.world.addEventListener('postStep', function () {
    // Direction towards (0,0,0)
    playerBody.force.set(
        -playerBody.position.x,
        -playerBody.position.y,
        -playerBody.position.z
    ).normalize();
    // Set magnitude to 10
    playerBody.force.scale(10, playerBody.force);
    // Cancel gravity force from the world
    playerBody.force.y += 10;
});
Here's the updated playground scene:
http://www.babylonjs-playground.com/#DETZ7#4

How to achieve realistic reflection with threejs

I am trying to render as realistically as possible a scene in which a point light hits an object and bounces off with the same angle wrt the normal of the face (angle of incidence = angle of reflection) and illuminates the scene elsewhere.
Now, I know reflection in three.js is normally dealt with using a CubeCamera material, as per the examples I found online, but it doesn't quite apply to my case, for I may be observing the scene from a point from which I cannot see the reflection of the object on the mirror-like surface of another one.
Consider this example prototype I'm working on: if the box protruding from the wall in the scene had a mirror-like material (accomplished with a CubeCamera), I wouldn't be able to see the green cube's reflection on the bottom face unless the camera was at a specific position. In real life, however, if an object illuminated by a light source passes in the vicinity of another one, it will partly light it as if it were a light source itself (depending on the object's reflectivity, of course), and this phenomenon should be visible from any point of view from which the indirectly lit object is visible.
Hence I came up with the idea of adding a PointLight to the cube, but this of course produces undesirable effects on the surroundings.
I will try to illustrate my goal with the following sequence:
1) Here, the far side of what I will henceforth refer to as balcony is correctly dark, while the areas marked with a red 'x' are the consequence of the cube having a child PointLight which shines in all directions.
2) Here, the balcony's far face is still dark and the bottom one is receiving even more light as the cube passes by, which is desirable; but the wall behind the cube should actually be dark (I haven't added shadows yet, I first want to get the lighting right), as should the ground beneath it and the lamp post.
3) Finally, when the cube has passed the balcony, it is just plain wrong for the balcony's side and bottom face to be illuminated, for we all know that a reflected ray does not bounce back the way it came from. The same applies to the lamp post.
Now I realize that all these mistakes occur because the cube emits light itself; what I'm hoping you can help me with is determining a way to produce physically accurate reflected rays.
I would like to avoid using ambient light or other hacks to simulate real-life scenarios and stick to physics as much as possible; I suspect what I want to achieve is very computationally heavy to render, let alone animate in a real-time use case, but that's not an issue for I'm merely trying to develop a proof-of-concept, not something that should necessarily perform fast.
From what I gather, I should probably be writing custom vertex and fragment shaders for the materials receiving indirect illumination, right? Unfortunately I wouldn't know where to begin, can anyone point me in the right direction? Cheers.
If you do not want to go with volumetric rendering, then you have 3 options (that I know of):
ray-tracing
You have to use ray-tracing (back ray-tracing) to achieve this. It will also cover shadows, transparent materials, reflected illumination and much more if coded properly. Unless you also want precise atmospheric scattering, this is the way to go.
Back ray-tracing casts one (or 3) ray(s) per screen pixel. It is much faster but not as precise (still precise enough).
Forward ray-tracing casts one ray per 3D angular unit (steradian) of space per light source. It is slow but precise (if the ray density is high enough).
If a cast ray hits an obstacle, its color is changed (according to the obstacle's properties) and a new ray is cast as the reflected light ray. If the material is transparent then a refracted ray is cast as well... Each hit or refraction attenuates the light intensity, so you stop when the intensity falls below some threshold or at some recursion depth (limit the max number of refractions per ray) to avoid infinite loops, and you can trade performance against quality this way (the two direction updates are sketched below).
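As a small illustration of the two direction updates used in that recursion (hit testing, intensity bookkeeping and the recursion itself omitted; d and n are unit vectors, with n pointing back toward the incoming ray):

import numpy as np

def reflect(d, n):
    # Mirror the incoming direction d about the surface normal n.
    return d - 2.0 * np.dot(d, n) * n

def refract(d, n, n1, n2):
    # Snell refraction of d at a surface with normal n, going from
    # refractive index n1 into n2.  Returns None on total internal reflection.
    cos_i = -np.dot(d, n)
    eta = n1 / n2
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None
    return eta * d + (eta * cos_i - np.sqrt(k)) * n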
standard polygon rendering
With this approach (I think you are using it right now) you have to improvise. The reflection and illumination effects can be done similarly to shadowing techniques: for each reflective surface you have to render the scene in the reflected direction. The same can be done for shadows, but there you just render from the light's direction or use a shadow map instead. If you have an insane number of reflective surfaces then this approach is not the way to go; also, to achieve reflections of reflections (or of refractions) you have to render recursively, making it multiple rendering passes per polygon, which is also insane.
cubemap
You can use a cube map per object. It is similar to the previous option, but the insanity is done just once while generating the cubemaps instead of on every frame... If you have too many objects then this is not the way either. You can use cube maps only for objects with reflective surfaces to make it manageable. Also, if the objects are moving, you have to regenerate the cubemaps once in a while...
