QGraphicsView: How to efficiently get the viewport coordinates of QGraphicsItems? - qt

Is there a fast way to get the viewport coordinates of QGraphicsItems in a QGraphicsView? The only way I can think of is to call QGraphicsView::items(), and then QGraphicsItem::pos() followed by QGraphicsView::mapFromScene.
I must be missing something, though, because items are already converted to viewport coordinates to position them correctly on the QGraphicsView, so converting them to viewport coordinates again with mapFromScene seems inefficient, especially because in my case this happens often and for many items. Is there a more direct approach?

Probably not. A QGraphicsScene can be rendered by more than one QGraphicsView simultaneously, so it makes no sense for the scene to keep a single set of viewport coordinates.
Also, all operations between QGraphicsItems are calculated directly in scene coordinates. Events from the viewport are converted to scene coordinates before processing. Working off the viewport, which is integer-based, can also lose precision. A QGraphicsView is only a representation of the mathematical model of a scene; it is not the actual model.
Maybe you can ask a more specific question about what exactly you are trying to accomplish. There may be a better way to do it in scene coordinates.
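If all you need is the widget-space position of the items that are currently visible, a minimal sketch (assuming view is your QGraphicsView) is to query only the items intersecting the viewport and map each one with mapFromScene():

    #include <QDebug>
    #include <QGraphicsItem>
    #include <QGraphicsView>

    // Print the viewport (widget) coordinates of the currently visible items.
    // Only items intersecting the viewport rectangle are touched.
    void printViewportPositions(QGraphicsView *view)
    {
        const QList<QGraphicsItem *> visibleItems =
            view->items(view->viewport()->rect());

        for (QGraphicsItem *item : visibleItems) {
            const QPoint viewportPos = view->mapFromScene(item->scenePos());
            qDebug() << item << "is at viewport position" << viewportPos;
        }
    }

The mapFromScene() call is cheap (a small matrix multiplication), so restricting the loop to the items actually inside the viewport is usually the bigger win.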

Related

Display QML rectangles on video stream based on object recognition

I have a video stream as described in the Qt Video Overview, using the MyVideoProducer mechanics. The source images are analyzed and I have a list of connected components (x, y, width, height), and I want to overlay rectangles on the video.
Can I do this by sending a list of rectangle co-ordinates to QML and have it place the rectangles or do I need to create my own overlay images?
I looked at the QtQuick particle system but it doesn't seem to fit. Other questions have the layout of the rectangles managed by Qt/QML, but I need the rectangles to be placed according to the co-ordinates that the vision pipeline has determined in C++ and sent to the QML front-end. They will be stale/related to the video frames.
There is an example, but the overlay is unrelated to the video. I think I need an overlay that is synced to onNewVideoContentReceived(). QML won't easily be able to keep a list of rectangles in sync with the video.
I just modified the original buffer creation (debayered from a camera) to draw the rectangles myself in the RGBA format. This avoids the problem of synchronizing the video frame with the object-location data. I did not use alpha but simply replaced pixels. For my content, the number of boxes relative to the video area was small. With alpha rectangles and a lot of objects, it may be more efficient to involve a GPU. In fact, you could use fixed-size squares instead of the CCL bounding regions, and this might be significantly faster with a GPU.
A QML solution would be more elegant, but this solution works.
An alternative is QVideoFrame::setMetaData, which can tie the CCL QRect list to the frame so that the association is clear and stays with the frame. The onNewVideoContentReceived() method of MyVideoProducer could then render the rectangles from C++.
Another option is QAbstractVideoFilter, which modifies the original buffer to add additional data to the presented images. This is easy to enable/disable via the QML front-end.
All of these solutions rely on C++, so it is not easy to change the coloring, etc., from QML. For example, if the object had a recognized property such as 'male', 'female', 'cat', or 'vehicle', the QML side could update the highlighting appropriately and maintain a count of the object types.
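As a rough illustration of the pixel-replacement approach described above, here is a minimal sketch, assuming you already have the mapped RGBA frame bytes and a QList<QRect> of detections; the function name and parameters are illustrative, not part of any Qt video API:

    #include <QImage>
    #include <QList>
    #include <QPainter>
    #include <QPen>
    #include <QRect>

    // Draw detection rectangles directly into an already-mapped RGBA frame
    // buffer (plain pixel replacement, no alpha blending). 'rgbaData',
    // 'bytesPerLine' and 'boxes' are assumed to come from your pipeline.
    void drawDetections(uchar *rgbaData, int width, int height, int bytesPerLine,
                        const QList<QRect> &boxes)
    {
        // Wrap the existing frame buffer without copying it.
        QImage frame(rgbaData, width, height, bytesPerLine, QImage::Format_RGBA8888);

        QPainter painter(&frame);
        painter.setPen(QPen(Qt::red, 2));
        for (const QRect &box : boxes)
            painter.drawRect(box);
        // The painted pixels land directly in rgbaData, so the frame that is
        // delivered to the video sink already carries the overlay.
    }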

Apply projective transformation on plane in 3D

Scenario
I have a 3D environment which contains a 3D scene and a '2D' scene.
The 3D scene contains a cube and a perspective camera.
The '2D' scene contains 4 round objects and an orthographic camera. These round objects can be moved around by the user; the orthographic camera is used because otherwise the round objects could be moved 'in depth' (along the z-axis) and would change in size, and I want them to maintain their size.
Depending on where the round objects are positioned, the corners of the cube in the 3D scene should be aligned with the positions of the round objects, while maintaining perspective.
Edit:
What I am trying to accomplish is this: based on an image of a room, a user uses those round objects to define the dimensions of the room. Based on those dimensions a hidden cube is positioned to act as a bounding box. The next step would be to add 3D objects to the scene while maintaining the perspective of the room.
I tried explaining this scenario in a picture:
Problems
Basically I have no clue where to start.
The round objects are in a '2D' environment because of the orthographic camera, therefore I have no depth value, which I think I need.
I think I need some perspective transformation based on the camera positions/settings? There are all sorts of matrices that could be produced, but I don't know how to implement them.
Sources I studied
http://www.graphicsmill.com/docs/gm/affine-and-projective-transformations.htm
Below is a similar situation:
https://math.stackexchange.com/questions/296794/finding-the-transform-matrix-from-4-projected-points-with-javascript
Cannot post more links because of my reputation
I hope someone can make this clear or point me in the right direction
Counting the real degrees of freedom, I would say that you don't have enough data. Imagine the projective camera of the 3D scene as an actual pinhole camera. Then the image that camera creates on its film, sensor or whatever is described by at least 9 parameters:
3 parameters for the position of the camera in space,
2 parameters for the direction the camera is looking at and
1 parameter rotating the camera + sensor around their optical axis,
1 parameter determining the distance from pinhole to sensor and
2 parameters translating the sensor in its plane
On the other hand, knowing a projective transformation from one plane to another, e.g. using my answer to the question you already referenced, will only yield 8 geometrically meaningful parameters. So you cannot hope to reconstruct the camera position from that, and therefore you cannot find the image of the 3D scene that would fit your markers. The Wikipedia article on 3D pose estimation states that
Most implementations of POSIT only work on non-coplanar points (in other words, it won't work with flat objects or planes).[3]
That being said, you gave an example of where someone is actually doing this! So how do they do it? Honestly, I'm not sure, but they would have to make use of some additional knowledge or extra assumptions. For example, if they knew details about their camera (focal length, relative position between lens and sensor, or something like that), that could provide the required data. Since these apps tend to work on mobile devices, I think it rather likely that they might have either an API to request these things or a database where they can be looked up for the more common devices.
Judging from your question, you don't have that. Neither do you have all the vertical edges of the cube depicted vertically parallel to one another, which would have been another possible way to add more information. You have to come up with one more piece of information in order to allow for a hopefully unique solution.
Of course, without more information the system is simply underspecified. It is not hard to find some transformation matrix that does what you requested. Actually, the answer I referenced is set in a context where a 2D-to-2D map is modeled using a 3D transformation matrix. You could do the same and be done with it. But your users might become frustrated, since the transformation they obtain might do completely wrong things in the out-of-plane direction, and there is no knob to tune that to the correct behavior.
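For reference, the degrees-of-freedom counting behind the "8 geometrically meaningful parameters" claim can be written out explicitly; this is standard textbook material, not specific to the linked answer:

    % A plane-to-plane projective map (homography) in homogeneous coordinates:
    \[
    \begin{pmatrix} x' \\ y' \\ w' \end{pmatrix}
    \sim
    \begin{pmatrix}
    h_{11} & h_{12} & h_{13} \\
    h_{21} & h_{22} & h_{23} \\
    h_{31} & h_{32} & h_{33}
    \end{pmatrix}
    \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
    \]
    % H has 9 entries but is defined only up to a common scale factor:
    % 9 - 1 = 8 degrees of freedom.
    % Each point correspondence contributes 2 equations, so the 4 round
    % objects give 4 x 2 = 8 constraints: exactly enough to fix H, with
    % nothing left over to recover the (at least 9) camera parameters
    % counted above.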

Scrollable and Endless map - libgdx

I want to create a simple game with a space ship that needs to dodge asteroids and stuff along the way.
Now, I can think of several ways to spawn the obstacles in the map. My only problem is, how do I implement the idea of an endless map/scrollable map?
For instance, in Flappy Bird there is an endless map.
I just want to know what is the best approach to implement this kind of thing.
Like Alon said, for the background you can use several horizontally tileable textures; just load the next one behind the current one when its edge is almost visible. You can actually make multiple layers to create depth: for instance, a foreground layer with some clouds/nebulae the player travels behind, then some space dust behind the player traveling more slowly, and some planets traveling very slowly in the distance.
Simply create an array of tileable background textures for each layer. Make these textures a bit (or a lot) wider than the actual screen. Keep adding textures to the right side, picking them randomly from your array, and let them scroll. Of course you delete textures once they have traversed the screen and are no longer shown.
For your objects, you just spawn the asteroids off-screen and let them travel across the screen. You maintain an asteroid list, and each time you need an asteroid you generate it with a random Y position and add it to the list. When you need to draw or do collision checks, you traverse this list and do your stuff on each asteroid.
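A minimal, framework-agnostic sketch of the tile-recycling idea described above (the game itself would use libGDX/Java; the constants and names here are illustrative, shown in plain C++):

    #include <cstdlib>
    #include <deque>

    struct Tile { float x; int textureIndex; };

    const float TILE_WIDTH   = 1024.0f;   // wider than the screen
    const float SCREEN_WIDTH = 800.0f;
    const float SCROLL_SPEED = 120.0f;    // pixels per second

    std::deque<Tile> tiles;

    void updateBackground(float dt, int textureCount)
    {
        // Scroll every tile to the left.
        for (Tile &t : tiles)
            t.x -= SCROLL_SPEED * dt;

        // Drop tiles that have fully left the screen.
        while (!tiles.empty() && tiles.front().x + TILE_WIDTH < 0.0f)
            tiles.pop_front();

        // Append a randomly chosen tile just before the right edge becomes visible.
        float rightEdge = tiles.empty() ? 0.0f : tiles.back().x + TILE_WIDTH;
        while (rightEdge < SCREEN_WIDTH + TILE_WIDTH) {
            tiles.push_back({rightEdge, std::rand() % textureCount});
            rightEdge += TILE_WIDTH;
        }
    }

Each layer gets its own deque and its own scroll speed; the asteroid list works the same way, just with a random Y at spawn time.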
There are many ways to do it. I recommend this one:
Move your character in the needed direction and respawn obstacles at global positions; the camera should follow the character. For the moving background you have two options: a ParallaxBackground, which already knows how to move backgrounds, or you can just create two backgrounds and, when the character reaches the end of the first one, move the second background to the end.

QT drawing pixels to graphics scene

I need to write a std::vector<double> of values to a QGraphicsScene (values between 0 and 1, each element representing a grayscale pixel).
Later I want to access the pixels of the image to replace their color (I don't have time to replace the whole image).
Thanks for the answer!
If you want to do such low-level modification, I'd recommend taking a look at the QImage class. Members such as QImage::setPixel will give you access to individual pixels for modification.
If you need this kind of functionality on a QGraphicsScene, then you could draw to the QImage and then convert that to a QPixmap (with QPixmap::convertFromImage) for use with a QGraphicsPixmapItem, and then place the QGraphicsPixmapItem onto the scene.
You may want to take a look at the generic Qt containers, such as QVector as well.
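To make that concrete, here is a minimal sketch (the image dimensions and names are assumptions) that packs the std::vector<double> into a grayscale QImage, shows it on a scene via a QGraphicsPixmapItem, and keeps the QImage around so individual pixels can be changed later:

    #include <QApplication>
    #include <QGraphicsPixmapItem>
    #include <QGraphicsScene>
    #include <QGraphicsView>
    #include <QImage>
    #include <QPixmap>
    #include <vector>

    // Pack a row-major vector of [0,1] values into a grayscale QImage.
    QImage toGrayscaleImage(const std::vector<double> &values, int width, int height)
    {
        QImage image(width, height, QImage::Format_Grayscale8);
        for (int y = 0; y < height; ++y) {
            uchar *line = image.scanLine(y);
            for (int x = 0; x < width; ++x)
                line[x] = static_cast<uchar>(values[y * width + x] * 255.0);
        }
        return image;
    }

    int main(int argc, char **argv)
    {
        QApplication app(argc, argv);

        const int w = 256, h = 256;
        std::vector<double> values(w * h, 0.5);   // placeholder data

        QGraphicsScene scene;
        QImage image = toGrayscaleImage(values, w, h);
        auto *item = new QGraphicsPixmapItem(QPixmap::fromImage(image));
        scene.addItem(item);

        // Later, to change individual pixels: modify 'image' (e.g. via
        // setPixelColor) and call item->setPixmap(QPixmap::fromImage(image)).

        QGraphicsView view(&scene);
        view.show();
        return app.exec();
    }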

Best solution for 2D occlusion culling

In my 2D game, I have static and dynamic objects. There can be multiple cameras. My problem: Determine objects that intersect with the current camera's view rectangle.
Currently, I simply iterate over all existing objects (not caring whether they are dynamic or static) and do an AABB check against the camera's view rect for each of them. This seems acceptable for very dynamic objects, but not for static objects, of which there can be tens of thousands (static level geometry scattered over the whole scene).
I have looked into multiple data structures which could solve my problem:
Quadtree
This was the first thing I considered; however, the problem is that it would force my scenes to be of fixed size (acceptable for static objects, but not for dynamic ones).
Dynamic AABB tree
Seems good, but the overhead for rebalancing it seems just too great for many dynamic objects.
Spatial hash
The main problem here for me was that if you zoom the camera out a lot, a huge number of mostly non-existent spatial hash buckets have to be queried, causing low performance.
In general, my criteria for a good solution to this problem are:
Dynamic size: The solution must not cause the scene size to be limited, or require heavy recomputation for resizing
Good query performance (for the camera)
Good support for very dynamic objects: the computations needed to handle objects with constantly changing positions should be cheap.
The maximum sane number of dynamic objects in my game at one time is probably around 5000. Suppose they all change their position every frame. Is there even a data structure that can be faster, considering the frequent insertions and deletions, than comparing the objects' AABBs with the camera every frame?
Don't try to find a silver bullet. Just split your scene into dynamic and static parts and use different algorithms for them.
Quad trees are obviously suitable for static geometry with fixed bounds.
Spatial hashes are ideal for sets of objects with similar sizes (particle systems, for example).
AFAIK dynamic AABB trees are rarely used for occlusion culling; their main purpose is the broad phase of collision detection.
And as you noticed, brute-force culling is normal for dynamic objects if their number is not really big.
static level geometry scattered over the whole scene
If your scene is highly sparse, you can divide it into islands, i.e. create a list of scene parts with "good density".
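For the dynamic part, a minimal sketch of the brute-force camera-vs-AABB test mentioned above (plain C++, names illustrative) could look like this:

    #include <vector>

    struct AABB {
        float minX, minY, maxX, maxY;
        bool intersects(const AABB &o) const {
            return minX <= o.maxX && maxX >= o.minX &&
                   minY <= o.maxY && maxY >= o.minY;
        }
    };

    struct Object { AABB bounds; /* game data */ };

    // Return the dynamic objects visible to the given camera rectangle.
    std::vector<const Object *> cullDynamic(const std::vector<Object> &objects,
                                            const AABB &cameraRect)
    {
        std::vector<const Object *> visible;
        visible.reserve(objects.size());
        for (const Object &obj : objects)
            if (cameraRect.intersects(obj.bounds))
                visible.push_back(&obj);
        return visible;
    }

At 5000 objects this is a single tight linear pass per camera, which is usually cheaper than keeping a tree balanced under per-frame insertions and deletions.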
