Can we simulate a calibrated camera on PCL visualizer?

I have a calibrated stereo camera, so I have the intrinsic matrix for the left camera. I have also built a metrically accurate 3D model of the scanned region, which I can load as a mesh file.
I recorded a video from the stereo camera's left camera as I scanned the region, and I know the position and orientation of the stereo camera at every point during the scan. I recreated this camera motion in the PCL visualizer. If the position and orientation of the stereo camera in the real world match those of the PCL visualizer's camera, will the left camera's photo and the PCL-rendered view match?
I tried this, but it looks like the perspective projection done in the PCL visualizer differs from that of the real camera. If so, is there a way to change the projection used by the PCL visualizer so that the rendered view matches the camera's image exactly?
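Not with the default camera setup. The PCL visualizer renders through VTK, whose perspective camera builds a symmetric frustum from a single vertical field of view, whereas a calibrated camera generally has fx ≠ fy and a principal point away from the image center, so matching pose alone will not make the views line up pixel for pixel. PCLVisualizer does expose setCameraParameters(), including an overload that takes an intrinsic and an extrinsic matrix, which is the usual hook for this. The numpy sketch below (PCL itself is C++; the intrinsics are made-up placeholders) shows where the mismatch comes from:

```python
import numpy as np

# Made-up intrinsics for a 1280x720 left camera -- substitute your calibration.
K = np.array([[700.0,   0.0, 655.0],   # fx,  0, cx
              [  0.0, 702.0, 345.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])
width, height = 1280, 720

# The vertical FOV a symmetric frustum would need in order to match fy:
fovy_deg = np.degrees(2.0 * np.arctan(height / (2.0 * K[1, 1])))
print(f"vertical FOV: {fovy_deg:.2f} deg")

# A symmetric frustum assumes the principal point sits at the image center.
# A real calibration usually puts it elsewhere, shifting every pixel by:
offset = np.array([K[0, 2] - width / 2.0, K[1, 2] - height / 2.0])
print("principal point offset (px):", offset)

# Project a camera-frame point with the real intrinsics...
X = np.array([0.2, -0.1, 3.0])                 # meters, camera frame
u_real = (K @ X)[:2] / X[2]

# ...and with the idealized centered model that a plain FOV match implies:
K_centered = K.copy()
K_centered[0, 2], K_centered[1, 2] = width / 2.0, height / 2.0
u_centered = (K_centered @ X)[:2] / X[2]
print("pixel mismatch:", u_real - u_centered)  # equals `offset`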

Related

Clicking "Pictures" of a point cloud in PCL Library or Open3D

I have a point cloud and want to take "pictures" of it from various angles. Let us say I point my camera at the object from the top at a certain angle, rotate the camera around the object at this particular orientation, and "snap" what the camera is seeing.
Next I want to change the camera orientation and repeat the process.
I am completely new to the 3D data processing domain and not very familiar with the PCL / Open3D libraries. How can I code this functionality?
Thanks in Advance.
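In Open3D's Python API this comes down to a Visualizer plus its ViewControl. A minimal sketch, assuming a PLY file; "cloud.ply", the window size, and the step count are placeholders. One Open3D quirk: rotate() takes a mouse-drag distance in pixels, and roughly 2094 px of horizontal drag corresponds to one full 360° orbit:

```python
import open3d as o3d

# Placeholder file name -- substitute your own cloud.
pcd = o3d.io.read_point_cloud("cloud.ply")

vis = o3d.visualization.Visualizer()
vis.create_window(width=1024, height=768)
vis.add_geometry(pcd)
ctr = vis.get_view_control()

n_views = 12
for i in range(n_views):
    # Orbit horizontally at a fixed elevation, snapping at each step.
    ctr.rotate(2094.0 / n_views, 0.0)
    vis.poll_events()
    vis.update_renderer()
    vis.capture_screen_image(f"view_{i:02d}.png", do_render=True)

# To change the camera elevation before the next orbit, drag vertically, e.g.:
# ctr.rotate(0.0, 200.0)
vis.destroy_window()
```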

JavaFX 2d game engine: How to move camera as player moves?

I am creating a 2d topdown game (with canvas size 720x480) where I want the camera to move once the player reaches a certain part of the screen. For example, once the player reaches 2/3 of the way to the right, the camera should then start scrolling to the right.
For reference, here is an image of the game world.
In my code, I have every object implementing a "genericObj" class, which has a position, velocity, and dimensions. So this is what I am thinking of doing once the player reaches 2/3 of the way to the right and is continuing to move to the right:
set the player's velocity to half the original
update every object's velocity with the negative of half the player's velocity (object.velocity -= player.velocity)
check if objects are within the view of the camera
display the objects that are within the view, disregard others
The reason for using half the player's velocity for both the new player velocity and the objects is that, in my code, the player movement sprite is only updated when the player is moving. Therefore, I need the player to be moving as opposed to setting the player velocity to 0 and the velocity of every object to negative player.velocity.
Is this a good way of "moving" the camera? What are some better methods of moving the camera? Also, would performance be an issue if I used this method of moving the camera (for example if I had 50+ objects)?
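Rewriting every object's velocity works, but the usual pattern is simpler: leave all objects in world coordinates and subtract a camera offset only at draw time. The player keeps its real velocity, so the walk animation still triggers, and culling 50+ objects costs one comparison each, so performance is a non-issue. A sketch of the idea (plain Python to keep this thread in one language; the names are illustrative and the same logic ports directly to a JavaFX Canvas):

```python
VIEW_W, VIEW_H = 720, 480

class GenericObj:
    """World-space object: the camera never touches its velocity."""
    def __init__(self, x, y, w, h, vx=0.0, vy=0.0):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.vx, self.vy = vx, vy

    def update(self, dt):
        self.x += self.vx * dt
        self.y += self.vy * dt

def update_camera(cam_x, player, world_w):
    # Start scrolling once the player passes 2/3 of the visible width.
    if player.x > cam_x + (2.0 / 3.0) * VIEW_W:
        cam_x = player.x - (2.0 / 3.0) * VIEW_W
    # Clamp so the camera never shows past the edge of the world.
    return max(0.0, min(cam_x, world_w - VIEW_W))

def draw(objects, cam_x):
    for obj in objects:
        sx = obj.x - cam_x                  # world -> screen translation
        if -obj.w < sx < VIEW_W:            # cull objects outside the view
            render(obj, sx, obj.y)

def render(obj, sx, sy):
    pass  # stand-in for gc.drawImage(...) on a JavaFX Canvas
```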

How to transform an equirectangular (360 degree) video to normal field of view

I am working on 360 degree videos, where I need to render an equirectangular video in a normal 2D field of view, given any latitude and longitude coordinates on the sphere, just like a 360 degree player does, but controlled by code rather than by mouse interaction.
So my question is:
How can I orient the virtual camera within that sphere in code (Python preferred), given an equirectangular video as input, so that I can get the image for that particular angle in the sphere?
To do the above, what transformation operations need to be performed on the input (equirectangular) video so that I get the cropped, normal field of view where the virtual camera is oriented?
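Both sub-questions come down to the same remap: for every pixel of the desired pinhole view, cast a ray, rotate it by the virtual camera's yaw/pitch, convert the ray to longitude/latitude, and sample the equirectangular frame there. A numpy/OpenCV sketch for a single frame (apply it per frame for video; sign conventions may need flipping for your player):

```python
import cv2
import numpy as np

def equirect_to_perspective(equi, fov_deg, yaw_deg, pitch_deg, out_w, out_h):
    """Extract a pinhole view from one equirectangular frame."""
    H, W = equi.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2.0)   # focal length, px

    # A ray through every output pixel, in camera coordinates (y points down).
    xv, yv = np.meshgrid(np.arange(out_w) - out_w / 2.0,
                         np.arange(out_h) - out_h / 2.0)
    dirs = np.stack([xv, yv, np.full_like(xv, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Rotate the rays: pitch about x, then yaw about y.
    p, t = np.radians(pitch_deg), np.radians(yaw_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p),  np.cos(p)]])
    Ry = np.array([[ np.cos(t), 0, np.sin(t)],
                   [ 0,         1, 0        ],
                   [-np.sin(t), 0, np.cos(t)]])
    dirs = dirs @ (Ry @ Rx).T

    # Ray direction -> longitude/latitude -> source pixel in the equirect frame.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])          # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))     # [-pi/2, pi/2]
    map_x = (((lon / (2 * np.pi)) + 0.5) * W % W).astype(np.float32)
    map_y = ((lat / np.pi + 0.5) * H).astype(np.float32)
    return cv2.remap(equi, map_x, map_y, cv2.INTER_LINEAR)

# e.g.: view = equirect_to_perspective(frame, fov_deg=90, yaw_deg=30,
#                                      pitch_deg=-10, out_w=1280, out_h=720)
```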

Is it possible to get the distance of an object from a picture with a static camera?

Let me describe the situation.
A camera (Raspberry Pi) is looking at a scene containing an object whose real width and height I know. Is there any way to calculate the distance between the camera and the object from the photo? The object in the picture is not always centered. I also know the height and angle of the camera.
Yes, but it's going to be difficult. You need the intrinsic camera matrix and a depth map. If you don't have them, then forget it; with only an RGB image it won't work.
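That said, the question states the real object dimensions are known, and in that case a pinhole-camera approximation needs only the focal length in pixels: by similar triangles, distance ≈ f · real_width / pixel_width. It is approximate for off-center objects and ignores lens distortion. A minimal sketch (the focal-length figures are assumptions for a Raspberry Pi Camera v2):

```python
def distance_from_width(f_px, real_width_m, pixel_width):
    """Pinhole similar triangles: distance = f * real_width / pixel_width."""
    return f_px * real_width_m / pixel_width

# Focal length in pixels, e.g. from the sensor datasheet:
# Pi Camera v2: 3.04 mm lens, 1.12 um pixels (assumed values).
f_px = 3.04e-3 / 1.12e-6                       # ~2714 px
# Alternatively: f_px = image_width / (2 * tan(horizontal_FOV / 2))

# A 0.30 m wide object spanning 410 px would be about 2 m away:
print(distance_from_width(f_px, real_width_m=0.30, pixel_width=410))
```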

Calculate the GPS position in 2D inside a camera image depending of FOV

I have a camera with a known FOV that is located to a known GPS coord with a known orientation.
I have another GPS coord and would like to display a dot on the camera image (augmented reality) of this GPS coord.
Is it possible to do that with this information?
PS: The distance between the two GPS coords is less than a few kilometers, so perhaps we can use an approximation.
You would need to track the orientation of the camera. GPS can track the position and velocity of the receiver, but most setups cannot tell you anything about the orientation. So, without additional information, the answer is "no".
If you have 3-axis magnetic and inertial sensors (accelerometer and rate gyros) on the camera, you may be able to compute an orientation based on gravity and geomagnetic field -- although the magnetic component would be sensitive to local magnetic distortions.
If you ask the user to wave the camera around, you may be able to combine inertial sensors with GPS velocity readings to determine your orientation -- although I'm not sure the data would be good enough for a stable orientation. Using computer vision techniques on the live-view images might help to stabilize this kind of tracking, though.
In any case, you'll want to look up Kalman filtering, which is the technique used to do this kind of tracking.
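Assuming orientation is available (e.g. from the magnetometer/IMU fusion described above), the projection itself is straightforward. A hedged sketch using a flat-earth ENU approximation, which is fine for offsets of a few kilometers; it assumes the target sits at roughly the camera's altitude and ignores lens distortion:

```python
import math

R_EARTH = 6_371_000.0  # meters

def gps_to_pixel(cam_lat, cam_lon, cam_yaw_deg, cam_pitch_deg,
                 tgt_lat, tgt_lon, hfov_deg, img_w, img_h):
    # GPS difference -> local east/north offsets in meters (flat-earth).
    north = math.radians(tgt_lat - cam_lat) * R_EARTH
    east = (math.radians(tgt_lon - cam_lon)
            * R_EARTH * math.cos(math.radians(cam_lat)))

    # Direction of the target relative to where the camera points.
    bearing = math.degrees(math.atan2(east, north))        # 0 deg = true north
    rel_yaw = (bearing - cam_yaw_deg + 180.0) % 360.0 - 180.0
    rel_pitch = -cam_pitch_deg          # assumes target at the camera's height
    if abs(rel_yaw) >= 90.0:
        return None                     # behind the camera

    # Pinhole projection, focal length derived from the horizontal FOV.
    f = (img_w / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    u = img_w / 2.0 + f * math.tan(math.radians(rel_yaw))
    v = img_h / 2.0 - f * math.tan(math.radians(rel_pitch))
    if 0.0 <= u < img_w and 0.0 <= v < img_h:
        return (u, v)
    return None                         # outside the frame

# e.g.: gps_to_pixel(48.8584, 2.2945, cam_yaw_deg=90.0, cam_pitch_deg=0.0,
#                    tgt_lat=48.8606, tgt_lon=2.3376,
#                    hfov_deg=60.0, img_w=1920, img_h=1080)
```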
