How to shift the pixels of part of a QGraphicsItem? - qt

I would like to know whether it is possible to shift part of a drawing by copying its pixels rather than redrawing it.
I work in an embedded environment, where performance is a key factor. We use Qt 4.8.
I have a set of real-time data points that I want to draw. I define the following class:
class SetOfDataPoints : public QGraphicsItem
{
public:
    <constructor>
    QRectF boundingRect() const { ... }
    void paint(QPainter* painter,
               const QStyleOptionGraphicsItem* option,
               QWidget* widget = NULL) { ... }
    <other methods>
};
At regular intervals, I read a new data point, add it to the instance of SetOfDataPoints, and shift the SetOfDataPoints to the left (by calling QGraphicsItem::moveBy() on the SetOfDataPoints), so the new data point becomes visible. As a result, SetOfDataPoints::paint() gets called, and in that method I have to draw the entire set of data points. The drawing currently consists only of line segments that connect the data points, but will become more elaborate in the future.
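For concreteness, the update step described above looks roughly like the sketch below; the slot name, addPoint() and xSpacing are illustrative names, not the actual code:

// Rough sketch of the periodic update; onNewSample(), addPoint() and
// xSpacing are assumed names used for illustration only.
void GraphWidget::onNewSample(qreal value)
{
    m_dataPoints->addPoint(value);        // append the newest data point
    m_dataPoints->moveBy(-xSpacing, 0.0); // shift the whole item to the left
    // The scene marks the item dirty and SetOfDataPoints::paint() then
    // redraws every line segment.
}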
Now, it feels inefficient to redraw the whole set of data points, when most of the graph is actually just shifted to the left. I would like to shift the pixels of the unchanged part of the graph to the left, and draw only the one line segment that connects the last two points. At least I would like to try, and measure how much that improves performance.
Is there a way to do that in Qt 4.8?

This won't work in general:
Your item doesn't exist as any pixels until it's rendered. You don't know how many pixels it is drawn on, or even whether there are any, since there are zero or more views your scene is shown on, and the item might be visible to varying extents on those views.
There are transformations applied to your item. It doesn't have to be rectangular.
Your item is composited with the items below it, unless it is completely opaque.
Your item, shown on a view, is composited with the backing store of the widget the view is on.
You can optimize for special cases. If your item is not cached, then it's always painted on a view, and the widget argument of paint will point to that widget. You then have direct access to the backing store, and the painter gives you the transformation used to go from item coordinates to the backing store's device coordinates. You can then inspect the path on the widget tree from the view to the window for opacity. If all intervening widgets paint opaque, and your item has an orientation-preserving transformation, you can certainly do a blit on the image, and redraw only a small part of the item.
If your item is cached, it should then be cached in device coordinates. You can do the blitting too, as you're painting on a pixmap. That pixmap is then composited onto the backing store of the window the view is on. There's a separate cache pixmap for each view.
When blitting, you must always keep track of how much of the previously drawn pixels is still correct. For each view or cache pixmap, you should maintain a region that is known to be valid. That region should initially be empty.
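As a concrete illustration of the cached variant, here is a minimal sketch that keeps the cache inside the item itself rather than in the view: a QPixmap member is scrolled left with QPixmap::scroll() (available since Qt 4.6) when a point arrives, only the newest segment is painted into the freed strip, and paint() reduces to a single blit. The member names (m_cache, m_xStep), the white background and the toY() helper are assumptions for the sketch, not part of the original code, and it restructures things slightly: the item stays in place and the pixels move inside its cache, instead of the item being shifted with moveBy().

// Sketch only. Assumes the item owns a device-sized QPixmap m_cache that was
// cleared to white once, that consecutive points are m_xStep pixels apart,
// and that toY() is an assumed helper mapping a data value to a y coordinate.
void SetOfDataPoints::addPoint(qreal value)
{
    m_points.append(value);

    // Shift the already-rendered pixels left instead of redrawing them.
    m_cache.scroll(-m_xStep, 0, m_cache.rect());

    QPainter p(&m_cache);
    // Clear the strip exposed on the right by the scroll...
    p.fillRect(QRect(m_cache.width() - m_xStep, 0, m_xStep, m_cache.height()),
               Qt::white);
    // ...and draw only the one new line segment.
    int n = m_points.size();
    if (n >= 2) {
        p.drawLine(QPointF(m_cache.width() - 2 * m_xStep, toY(m_points[n - 2])),
                   QPointF(m_cache.width() - m_xStep,     toY(m_points[n - 1])));
    }
    update();
}

void SetOfDataPoints::paint(QPainter *painter,
                            const QStyleOptionGraphicsItem *,
                            QWidget *)
{
    // The per-segment drawing is gone; painting is now a single blit.
    painter->drawPixmap(boundingRect().topLeft(), m_cache);
}

Whether this actually beats a full redraw depends on how expensive the real drawing is, so it is worth measuring, as you planned.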

Related

In Graphic scene how to make overlapped object transparent?

In a graphics view I am setting a scene and adding objects to it (by dropping). I can move these items with the mouse. When I move one object onto another object, the moved object should become transparent. How can I do that?
I don't believe you actually want full transparency since it will make it impossible to visually recognize the transparent object later on. Reduced opacity - yes.
As for your question: each item inside your scene has a bounding rectangle (or another type of bounding area). You can easily get it by calling boundingRect() on your item. The returned QRectF (just like QRect) has a bool intersects(const QRectF &rectangle) const function, which takes another rectangle and checks whether the two overlap.
Whenever you move your mouse while dragging an item you need to iterate either through all or just a subset of all items in your scene (by subset I mean just the items in a specific region to increase the performance) and check for collision. If a collision is detected, you can alter either the item you are dragging or the item underneath it.
Of course, to make sure that one item covers another you also need to check the Z value. The easiest way to do that is to keep all items that are not currently being dragged at the same Z level and then, whenever you drag one, increase its Z value by one so that it is "above" the others.
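A minimal sketch of that idea (the class name, opacity value and flags are illustrative assumptions); it uses collidingItems(), which is the built-in way to do the bounding-area check described above, and raises the Z value while dragging:

#include <QGraphicsRectItem>
#include <QGraphicsSceneMouseEvent>

// Sketch: a movable item that becomes semi-transparent while it overlaps
// other items. Class name and opacity values are illustrative.
class DraggableItem : public QGraphicsRectItem
{
public:
    explicit DraggableItem(const QRectF &rect) : QGraphicsRectItem(rect)
    {
        setFlag(QGraphicsItem::ItemIsMovable);
        setAcceptedMouseButtons(Qt::LeftButton);
    }

protected:
    void mousePressEvent(QGraphicsSceneMouseEvent *event)
    {
        setZValue(1.0);   // lift the dragged item "above" the others
        QGraphicsRectItem::mousePressEvent(event);
    }

    void mouseMoveEvent(QGraphicsSceneMouseEvent *event)
    {
        QGraphicsRectItem::mouseMoveEvent(event);  // let Qt do the actual move

        // An empty collidingItems() list means nothing lies underneath.
        bool overlapping = !collidingItems().isEmpty();
        setOpacity(overlapping ? 0.4 : 1.0);
    }

    void mouseReleaseEvent(QGraphicsSceneMouseEvent *event)
    {
        setZValue(0.0);   // drop back to the common Z level
        setOpacity(1.0);
        QGraphicsRectItem::mouseReleaseEvent(event);
    }
};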

Collision with fixed pixel-size items on QGraphicsView

I'm using Qt's GraphicsView/GraphicsScene framework, and I have to draw some line items.
To be sure these items are always visible (independent of the zoom level) I use a cosmetic pen with a size of 3 (for example), so I always get lines drawn 3 pixels wide on screen.
But these items don't receive mouse events (such as hoverEnterEvent/hoverLeaveEvent) when I'm zoomed out a lot.
I've dug into the code, and it appears that all collision tests are done with the return value of the shape() function.
So I've tried to re-implement the shape(), contains() and collidesWithPath() methods, but I still have problems detecting collisions (because when the zoom changes, I need to re-update the shape, for example).
Are there any tricks to do that?
In an efficient way (without re-updating the item's shape on every zoom change)?
Thanks
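One possible shape of such a workaround, sketched under the assumption that the current view scale is pushed into the item whenever the zoom changes (so the shape is still re-updated on zoom, which the question hoped to avoid, but the update is cheap); the class and member names are illustrative:

#include <QGraphicsLineItem>
#include <QPainterPathStroker>
#include <QPen>

// Sketch: a line item whose shape() is widened to match a 3-pixel cosmetic
// pen. The view scale is an assumed member fed in from outside.
class FixedWidthLineItem : public QGraphicsLineItem
{
public:
    explicit FixedWidthLineItem(const QLineF &line)
        : QGraphicsLineItem(line), m_viewScale(1.0)
    {
        QPen pen;
        pen.setCosmetic(true);   // always 3 device pixels wide on screen
        pen.setWidth(3);
        setPen(pen);
        setAcceptHoverEvents(true);
    }

    // Call this whenever the view's zoom changes.
    void setViewScale(qreal scale)
    {
        prepareGeometryChange();   // shape()/boundingRect() are about to change
        m_viewScale = scale;
    }

    QRectF boundingRect() const
    {
        // Grow the rect so it always encloses the widened shape().
        qreal margin = 3.0 / m_viewScale;
        return QGraphicsLineItem::boundingRect().adjusted(-margin, -margin,
                                                          margin, margin);
    }

    QPainterPath shape() const
    {
        // Stroke the line with 3 device pixels converted to item units, so
        // hit-testing matches what the cosmetic pen draws on screen.
        QPainterPath path;
        path.moveTo(line().p1());
        path.lineTo(line().p2());
        QPainterPathStroker stroker;
        stroker.setWidth(3.0 / m_viewScale);
        return stroker.createStroke(path);
    }

private:
    qreal m_viewScale;
};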

What is the meaning of "the item's view geometry and scene geometry will be maintained separately"?

The documentation for the QGraphicsItem::ItemIgnoresTransformations flag says
The item ignores inherited transformations (i.e., its position is still anchored to its parent, but the parent or view rotation, zoom or shear transformations are ignored). This flag is useful for keeping text label items horizontal and unscaled, so they will still be readable if the view is transformed. When set, the item's view geometry and scene geometry will be maintained separately. You must call deviceTransform() to map coordinates and detect collisions in the view. By default, this flag is disabled. This flag was introduced in Qt 4.3. Note: With this flag set you can still scale the item itself, and that scale transformation will influence the item's children.
Of course I have read the QGraphicsItem details, the QGraphicsScene details, the QGraphicsView details, and the Graphics View Framework.
There are also several questions about the ItemIgnoresTransformations flag like Fixed size QGraphicsItems in multiple views?
But I still do not understand the sentence in bold face. What does it mean?
The problem that raised this concern is described in PyQt: Moving multiple items with different ItemIgnoresTransformations flags, but maybe that question was too long, or too PyQt-oriented at first sight. And it was more about moving items. So here I am trying to focus better.
Imagine a situation where the parent is rotated 45 degrees, or even has some shear. Since the current item ignores this transformation, it stays straight (not rotated).
Now ask how this impacts the item's size and position. The parent may maintain the item's geometry (for example by using a layout), but it does not take this flag into account, so some geometry (in parent units) will be set, yet the item's effective scene rect may come out different, since the item ignores the transformation and is not rotated, squeezed or zoomed along with its parent.
So from the parent's point of view the geometry has one value, but from the scene's point of view it is different.
It would be best if you try it and see how it works in practice; it is hard to describe the problem clearly.
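A small sketch that makes the difference concrete, assuming a single view and arbitrary positions: the scene geometry stays anchored where the item is placed, while the view geometry has to be obtained through deviceTransform(), as the quoted documentation says.

#include <QDebug>
#include <QGraphicsScene>
#include <QGraphicsSimpleTextItem>
#include <QGraphicsView>

// Sketch: comparing scene geometry with view geometry for an item that
// ignores transformations. Assumes one view showing the scene.
void inspectLabel(QGraphicsView *view, QGraphicsScene *scene)
{
    QGraphicsSimpleTextItem *label = scene->addSimpleText("label");
    label->setFlag(QGraphicsItem::ItemIgnoresTransformations);
    label->setPos(100, 100);          // scene geometry: anchored at (100, 100)

    view->scale(4.0, 4.0);            // zoom the view; the label stays unscaled

    // Scene geometry: still the unscaled rect anchored at the item's position.
    qDebug() << "scene rect:" << label->sceneBoundingRect();

    // View geometry: where the item really lands in device pixels is only
    // available through deviceTransform().
    QTransform toDevice = label->deviceTransform(view->viewportTransform());
    qDebug() << "device rect:" << toDevice.mapRect(label->boundingRect());
}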

Shadow Effects with QGraphicsRectItem in Qt

I have a QGraphicsRectItem over a scene. I intend to drag and drop this item over the scene. When the rect item reaches the left boundary I have to show it appearing from the right end. Currently I am using two objects and hiding and showing them by calculating the boundary of the scene, which involves a lot of calculations.
Is there any better way to achieve the same effect using just a single object?
Thank You
You could use a single item that spans the entire scene, and draw the rectangle (or two parts of it) in its paint method.
But then you would lose the optimization of the BSP tree: your item would be redrawn even if some unrelated area repaints. If it's just one item, I guess it would not have much impact.
You would need to implement your own dragging with mousemove event and the like, though this is not much code, you just need to get it right.
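A rough sketch of the wrap-around paint() that answer suggests; the class name and the m_rectPos / m_rectSize members are assumptions, and the dragging code that updates them is omitted:

// Sketch: one item spanning the scene that draws a wrapping rectangle.
// m_rectPos and m_rectSize are assumed members updated by the dragging code.
void WrappingRectItem::paint(QPainter *painter,
                             const QStyleOptionGraphicsItem *,
                             QWidget *)
{
    const QRectF sceneRect = boundingRect();   // item covers the whole scene
    qreal x = m_rectPos.x();
    qreal w = m_rectSize.width();
    qreal h = m_rectSize.height();

    if (x >= sceneRect.left()) {
        // Entirely inside: draw a single rectangle.
        painter->drawRect(QRectF(x, m_rectPos.y(), w, h));
    } else {
        // Crossing the left edge: draw the part still visible on the left
        // and the part that re-appears at the right edge.
        qreal overshoot = sceneRect.left() - x;
        painter->drawRect(QRectF(sceneRect.left(), m_rectPos.y(),
                                 w - overshoot, h));
        painter->drawRect(QRectF(sceneRect.right() - overshoot, m_rectPos.y(),
                                 overshoot, h));
    }
}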

Drawing a pixmap using QPainter::drawPixmap in qt

I am able to paint a pixmap using QPainter::drawPixmap, but I am having trouble with the sizing. The pixmap is being drawn onto many different scenes. Some of the scenes are very large, and some are very small. This results in the drawn pixmap looking either very large or very small, depending on the size of the scene (or viewport, whatever it's called). I need the pixmap to look the same size every time, regardless of the dimensions of the scene it is being placed into.
Basically, I want it to work similar to drawPoint, where you can specify the length and width of the point in pixels, so the point looks the same size every time.
The following line of code is inside my paint function of the QGraphicsItem I subclassed:
painter_p->drawPixmap( pos(), MYPIXMAP );
with pos() returning the QPointF I need to draw the pixmap at.
Can't you use QGraphicsPixmapItem? It'd do exactly what you want.
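A minimal sketch of that suggestion, combined with the ItemIgnoresTransformations flag discussed earlier on this page so the pixmap keeps its on-screen size regardless of the view's zoom; the file name and position are placeholders:

#include <QGraphicsPixmapItem>
#include <QGraphicsScene>

// Sketch: a pixmap item that keeps its on-screen pixel size at any zoom.
// "mypixmap.png" and the position are placeholders.
QGraphicsPixmapItem *addFixedSizePixmap(QGraphicsScene *scene)
{
    QGraphicsPixmapItem *item = scene->addPixmap(QPixmap("mypixmap.png"));
    item->setFlag(QGraphicsItem::ItemIgnoresTransformations);
    item->setPos(10, 10);   // scene position; the pixmap itself stays unscaled
    return item;
}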
