Is it possible to get the distance of an object from a picture with a static camera? - math

Let me describe the situation.
A camera (Raspberry Pi) is looking at a scene containing an object. I know the real width and height of the object. Is there any way to calculate the distance between the camera and the object from the camera photo? The object in the picture is not always in the middle. I also know the height of the camera and its angle.

Yes, but it's going to be difficult. You need the intrinsic camera matrix and a depth map. If you don't have them, then forget it; with only an RGB image it won't work.
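That said, since the question states that the real width of the object is known, a rough estimate is possible from a plain RGB image once the focal length in pixels is available (e.g. from the intrinsic matrix or a one-off calibration). A minimal Python sketch of that similar-triangles estimate under the pinhole model; it assumes the object roughly faces the camera and its width in pixels has been measured, and all numbers are made up for illustration:

# Pinhole model, similar triangles: width_px / focal_px = real_width_m / distance_m
def estimate_distance(focal_px, real_width_m, width_px):
    return focal_px * real_width_m / width_px

# e.g. a 0.30 m wide object measuring 85 px with a 1250 px focal length
print(estimate_distance(1250.0, 0.30, 85.0))  # roughly 4.4 m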

Related

Clicking "Pictures" of a point cloud in PCL Library or Open3D

I have a point cloud and want to take "pictures" of it from various angles. Let us say I point my camera at the object from the top at a certain angle, rotate the camera around the object at this particular orientation, and "snap" what the camera is seeing.
Next I want to change the camera orientation and repeat the process.
I am completely new to the 3D data processing domain and not very familiar with the PCL / Open3D libraries. How can I code this functionality?
Thanks in advance.
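One possible approach (a sketch, not something from this thread) is to use Open3D's Visualizer: step its view control around the cloud and save a screenshot at each step. The file names, step size and number of views below are illustrative assumptions.

import open3d as o3d

# Load the cloud and open an interactive window.
pcd = o3d.io.read_point_cloud("cloud.ply")
vis = o3d.visualization.Visualizer()
vis.create_window()
vis.add_geometry(pcd)
ctr = vis.get_view_control()

# Orbit horizontally in 12 steps and save one image per step.
for i in range(12):
    ctr.rotate(60.0, 0.0)          # horizontal rotation step, in "dragged pixels"
    vis.poll_events()
    vis.update_renderer()
    vis.capture_screen_image("view_%02d.png" % i, do_render=True)

vis.destroy_window()

Changing the vertical component passed to rotate(), or setting the camera parameters explicitly through the view control, covers the second step of changing the camera orientation.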

Determining the position of a camera based on a trapezoid

I have a camera for which I know every parameter: field of view, etc. (I can basically construct its frustum).
With that camera, I capture a trapezoid every frame like so:
My question would be, knowing these parameters, what would be the best way to determine the position and orientation of the camera?
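One common way to approach this (a sketch, not from this thread) is to treat it as a pose-estimation problem: if the trapezoid is the projection of a rectangle of known size, the four corner correspondences plus the intrinsics you can assemble from the known field of view let OpenCV's solvePnP recover the camera rotation and translation. The rectangle size, pixel coordinates and intrinsics below are illustrative assumptions.

import numpy as np
import cv2

# Corners of a 2 m x 1 m rectangle in its own plane (Z = 0), in metres.
object_pts = np.array([[0, 0, 0], [2, 0, 0], [2, 1, 0], [0, 1, 0]], dtype=np.float64)

# Matching trapezoid corners in the image, in pixels (made-up values).
image_pts = np.array([[320, 410], [760, 430], [700, 250], [350, 240]], dtype=np.float64)

# Intrinsic matrix built from the known field of view / resolution (assumed here).
K = np.array([[800, 0, 512], [0, 800, 384], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)
camera_position = -R.T @ tvec      # camera centre expressed in the rectangle's frame
print(ok, camera_position.ravel())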

Apply projective transformation on plane in 3D

Scenario
I have a 3D environment which contains a 3D scene and a '2D' scene.
The 3D scene contains a cube and a perspective camera.
The '2D' scene contains 4 round objects and an orthographic camera. These round objects can be moved around by the user, which is why an orthographic camera is used; otherwise the round objects could be moved 'in depth' (along the z-axis) and change in size, and I want them to maintain their size.
Depending on how the round objects are positioned, the corners of the cube in the 3D scene should be aligned with the positions of the round objects, while maintaining perspective.
Edit:
What I am trying to accomplish is: based on an image of a room, the user uses those round objects to define the dimensions of the room. Based on those dimensions, a hidden cube is positioned to act as a bounding box. The next step would be to add 3D objects to the scene while maintaining the perspective of the room.
I tried explaining this scenario in a picture:
Problems
Basically I have no clue where to start.
The round objects are in a '2D' environment because of the orthographic camera, therefore I have no depth value, which I think I need.
I think I need some perspective transformation based on camera positions/settings? There are all sorts of matrices that could be produced, but I don't know how to implement them.
Sources I studied
http://www.graphicsmill.com/docs/gm/affine-and-projective-transformations.htm
Below is a similar situation:
https://math.stackexchange.com/questions/296794/finding-the-transform-matrix-from-4-projected-points-with-javascript
I cannot post more links because of my reputation.
I hope someone can make this clear or point me in the right direction.
Counting the real degrees of freedom, I would say that you don't have enough data. Imagine the projective camera of the 3D scene as an actual pinhole camera. Then the image that camera creates on its film, sensor or whatever is described by at least 9 parameters:
3 parameters for the position of the camera in space,
2 parameters for the direction the camera is looking at and
1 parameter rotating the camera + sensor around their optical axis,
1 parameter determining the distance from pinhole to sensor and
2 parameters translating the sensor in its plane.
On the other hand, a projective transformation from one plane to another, e.g. obtained using my answer to the question you already referenced, will only yield 8 geometrically meaningful parameters. So you cannot hope to reconstruct the camera position from that alone, and therefore you cannot find the image of the 3D scene that would fit your markers. The Wikipedia article on 3D pose estimation states that
Most implementations of POSIT only work on non-coplanar points (in other words, it won't work with flat objects or planes).[3]
That being said, you gave an example of where someone is actually doing this! So how do they do it? Honestly, I'm not sure, but they would have to make use of some additional knowledge or extra assumptions. For example, if they knew details about their camera (focal length, relative position between lens and sensor, or something like that), that could provide the required data. Since these apps tend to work on mobile devices, I think it rather likely that they might have either an API to request these things or a database where they can be looked up for the more common devices.
Judging from your question, you don't have that. Nor do you have the vertical edges of the cube depicted parallel to one another in the image, which would have been another possible way to add more information. You have to come up with one more piece of information in order to allow for a hopefully unique solution.
Of course, without more information the system is just underspecified. It's not hard to find a transformation matrix which does what you requested. Actually, the answer I referenced is placed in a setup where a 2D-to-2D map is to be modeled using a 3D transformation matrix. You can do the same and be done with it. But your users might become frustrated, since the transformation they obtain might do completely wrong things to the out-of-plane direction, and there is no knob to tune that to the correct behavior.
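To make the 8-parameter count concrete, here is a small sketch (my own illustration, not part of the answer above): a plane-to-plane projective map fitted from four point pairs with OpenCV is a 3x3 matrix defined only up to scale, hence 8 degrees of freedom, and it says nothing about the out-of-plane direction.

import numpy as np
import cv2

# Cube-face corners in their own 2D plane, and made-up marker positions in the image.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=np.float32)
dst = np.array([[120, 90], [400, 110], [380, 350], [100, 330]], dtype=np.float32)

H = cv2.getPerspectiveTransform(src, dst)
print(H)   # 3x3, only defined up to a common scale factor -> 8 degrees of freedom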

Unity: Ballistic Vector

This is a first-person shooter game where I am trying to launch a projectile from a moving player. The launch direction depends on where I click on the screen. The "launcher" has a fixed fire strength, which means that a projectile fired more horizontally will travel further before hitting the ground; likewise, a projectile fired in a more upwards direction will go higher but will have travelled less horizontally when it hits the ground, due to gravity. The firing vector is determined by where the finger touches the screen, and is then multiplied by a public parameter "firingstrength". Make sense so far?
What I am confused about is how to get the position of the finger on the screen, which I use to calculate the "vector" to apply to the projectile.
I have imagined doing this (I am new to Unity) by the following:
Empty object (Player): Contains
movement scripts
Camera
Invisible inverted sphere which surrounds the camera; I use this sphere to pick up the mouse clicks (i.e. when I click on the screen, the game should detect where on the inside of the sphere I clicked, in order to calculate a vector between the camera position and the point on the sphere wall that I clicked on)
Once I have the vector, I just multiply it by a "firingstrength" variable and apply it to a projectile that originates from the camera position.
Does this make sense or is there a better way to do this?
Kevin
You do not need a sphere; just use Input.mousePosition and construct a ray from it using ScreenPointToRay(). Then you just use ray.direction to get a Vector3 pointing away from your camera position.
Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
Vector3 shootVector = ray.direction;
This should do the trick.
For touch interface:
Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
Vector3 shootVector = ray.direction;

QGraphicsView: How to efficiently get the viewport coordinates of QGraphicsItems?

Is there a fast way to get the viewport coordinates of QGraphicsItems in a QGraphicsView? The only way I can think of is to call QGraphicsView::items(), and then QGraphicsItem::pos() followed by QGraphicsView::mapFromScene.
I must be missing something, though, because items are already converted to viewport coordinates to position them correctly on the QGraphicsView, so converting it to viewport coordinates again with mapFromScene seems inefficient--especially because in my case this is occurring often and for many items. Is there a more direct approach?
Probably not. A QGraphicsScene can be rendered by more than one QGraphicsView simultaneously, so it makes no sense to keep only one set of viewport coordinates.
Also, all operations between QGraphicsItems are calculated directly in scene coordinates. Events from the viewport are converted to scene coordinates before processing. Working off the viewport, which is integer-based, can also lose precision. A QGraphicsView is only a representation of the mathematical model of a scene; it is not the actual model.
Maybe you can ask a more specific question about what exactly you are trying to accomplish. There may be a better way to do it in scene coordinates.
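For reference, the mapping the question describes is short to write; a minimal sketch using the PyQt5 bindings purely for illustration (the C++ calls have the same names), restricted to items that currently intersect the viewport:

from PyQt5.QtWidgets import QApplication, QGraphicsScene, QGraphicsView

app = QApplication([])
scene = QGraphicsScene()
for i in range(5):
    item = scene.addRect(0, 0, 15, 15)   # a few dummy items
    item.setPos(i * 20, i * 10)

view = QGraphicsView(scene)
view.show()
app.processEvents()                       # let the view lay itself out

# Only items intersecting the viewport, each mapped scene -> viewport.
for item in view.items(view.viewport().rect()):
    print(view.mapFromScene(item.scenePos()))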
