Like the qmmp (Qt) music player UI, these two or three windows behave as if they were a single window: there is only one dock icon, the windows move together, and they can attach to each other.
I read the source code and it seems to use QDockWidget, but I really don't know the details of how to achieve this.
When the secondary window, in this case the playlist, is manually moved, you check where the move ends, and if it ends on an edge of the primary window, you glue it there by simply binding its position to the position and dimensions of the primary window.
Since the window position and dimensions are properties, they have notification signals, so you can connect those to a function that automatically moves the glued window.
And finally, when you attempt to manually move the secondary window again, you un-glue it by disconnecting.
You can easily support offset gluing, instead of a purely horizontal or vertical one, by calculating and storing the positioning offset and applying it on every move of the primary window.
If the drop happens within a given threshold of the primary window you can snap to the edge. If you factor in the mouse position relative to the dragged window, you can even snap particular edges together.
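A minimal Qt sketch of the whole glue/un-glue idea (the WindowGluer class, its API and the snap threshold are my own inventions, not code from qmmp). Instead of property notification signals it uses an event filter on the primary window: installing it plays the role of connect(), removing it the role of disconnect().

```cpp
#include <QApplication>
#include <QEvent>
#include <QPoint>
#include <QRect>
#include <QWidget>
#include <cstdlib>

// Keeps a "glued" secondary window at a stored offset from a primary window
// by watching the primary's move events.
class WindowGluer : public QObject {
public:
    WindowGluer(QWidget *primary, QWidget *secondary)
        : m_primary(primary), m_secondary(secondary) {}

    void glue() {
        // Store the current offset so gluing works at any relative position.
        m_offset = m_secondary->pos() - m_primary->pos();
        m_primary->installEventFilter(this);
    }

    void unglue() { m_primary->removeEventFilter(this); }

    // Drop test for snapping (the 10 px threshold is arbitrary): is the
    // secondary's top edge close to the primary's bottom edge?
    bool droppedNearBottomEdge(int threshold = 10) const {
        const QRect p = m_primary->frameGeometry();
        const QRect s = m_secondary->frameGeometry();
        return std::abs(s.top() - p.bottom()) <= threshold &&
               s.left() < p.right() && s.right() > p.left();
    }

protected:
    bool eventFilter(QObject *watched, QEvent *event) override {
        if (watched == m_primary && event->type() == QEvent::Move)
            m_secondary->move(m_primary->pos() + m_offset);  // follow the primary
        return QObject::eventFilter(watched, event);
    }

private:
    QWidget *m_primary;
    QWidget *m_secondary;
    QPoint m_offset;
};

int main(int argc, char **argv) {
    QApplication app(argc, argv);
    QWidget player, playlist;
    player.resize(400, 120);
    playlist.resize(400, 300);
    player.show();
    playlist.show();

    // Snap the playlist under the player and glue it; a real implementation
    // would call unglue() when the user starts dragging the playlist again.
    playlist.move(player.pos() + QPoint(0, player.frameGeometry().height()));
    WindowGluer gluer(&player, &playlist);
    gluer.glue();
    return app.exec();
}
```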
I'm using Qt's GraphicsView/GraphicsScene framework, and I have to draw some line items.
To make sure these items are always visible (independent of the zoom level), I use a cosmetic pen with a width of 3 (for example), so I always get lines drawn 3 pixels wide on screen.
But these items don't receive mouse events (such as hoverEnterEvent/hoverLeaveEvent) when I zoom out a lot.
I've dug into the code, and it appears that all collision tests are done with the return value of the shape() function.
So I've tried to re-implement the shape(), contains() and collidesWithPath() methods, but I still have problems detecting collisions (for example, when the zoom changes, I need to re-update the shape).
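Here is roughly what one of my items looks like (simplified; the class name and the width of 3 are just examples):

```cpp
#include <QGraphicsLineItem>
#include <QPainterPath>
#include <QPainterPathStroker>
#include <QPen>

// A line item drawn with a cosmetic pen, whose shape() is widened so the
// hit area roughly matches what is painted.
class FixedWidthLine : public QGraphicsLineItem {
public:
    explicit FixedWidthLine(const QLineF &line) : QGraphicsLineItem(line) {
        QPen pen(Qt::black, 3);
        pen.setCosmetic(true);  // always 3 device pixels, whatever the zoom
        setPen(pen);
        setAcceptHoverEvents(true);
    }

    QPainterPath shape() const override {
        // Widen the line for hit testing. The width here is in *item*
        // coordinates, so when the view is zoomed far out the widened path
        // becomes tiny on screen, which is exactly the problem I'm seeing.
        QPainterPath path;
        path.moveTo(line().p1());
        path.lineTo(line().p2());
        QPainterPathStroker stroker;
        stroker.setWidth(3);
        return stroker.createStroke(path);
    }
};
```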
Are there any tricks to do that in an efficient way (without re-computing the item's shape on every zoom change)?
Thanks
I use grid layouts (and horizontal and vertical ones too). I like the fact that, when the window is resized, the layout stretches its contents to fill the entire window, but this stretching is poorly controllable. I often want to change the size of only one column in a grid layout without changing the size of the window, as in Windows Explorer: there are two columns, the list of directories on the left and their contents on the right, and I can always press the mouse button between them and drag to change the relative sizes of the two columns.
How can I do this in Qt?
You need to use a QSplitter rather than a QGridLayout in this specific case (where you just want 2 widgets shown together). QSplitters are draggable.
You are looking for QSplitter.
(The following is the procedure in Qt Designer.)
1. Group your widgets and click Lay Out Horizontally/Vertically in Splitter.
2. Put this group into another layout (a QGridLayout, for example) so that it expands automatically.
Congrats! Your layout is now draggable (from step 1) and expandable (from step 2).
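If you are building the UI in code rather than in Designer, a minimal equivalent sketch looks like this (the two view widgets are just placeholders):

```cpp
#include <QApplication>
#include <QGridLayout>
#include <QListView>
#include <QSplitter>
#include <QTreeView>
#include <QWidget>

int main(int argc, char **argv) {
    QApplication app(argc, argv);

    // Explorer-style layout: directory tree on the left, contents on the
    // right; the handle between them is draggable.
    auto *splitter = new QSplitter(Qt::Horizontal);
    splitter->addWidget(new QTreeView);
    splitter->addWidget(new QListView);
    splitter->setStretchFactor(1, 2);  // let the right pane take more space

    // Putting the splitter into a layout makes it fill the window on resize.
    QWidget window;
    auto *layout = new QGridLayout(&window);
    layout->addWidget(splitter, 0, 0);
    window.resize(600, 400);
    window.show();

    return app.exec();
}
```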
I created a SketchUp plugin that draws a wall (with length, width and height).
Now I would like to insert a "window" into that wall (with fixed length, width and height, depending on the wall). How can I:
1. Create, but not yet draw, the group containing the window, and link it to the current mouse position?
2. Constrain the current mouse position to the front plane of the wall I drew before?
3. Make it so that, when the user clicks, the window is inserted and the group is shown?
The easy way to do it, which doesn't fulfill your request 100% but does use existing SketchUp conventions, is to create a component definition and then use Model.place_component to activate SketchUp's native tool for placing a new component instance.
In order to fulfill your question 100%:
A Group is an instance; you cannot create one without placing it in the model. You can create it in step 3, when the user clicks. (Though a window sounds like a candidate for a component, since you usually have multiple copies of the same window type.)
You cannot constrain the mouse cursor itself, but if you implement a custom Tool and make use of the InputPoint class you can selectively determine what is a valid insertion point when the user clicks. You can also draw virtual lines and polygons to the viewport to give a preview of your window.
Profit!
I'm working on a simple widget system, and I'm implementing some containers right now.
Here's the situation I find myself in:
I have a Widget base class, a Container class (a widget that can contain other widgets), and several widget subclasses like Button.
I have two types of container: Container itself, which positions children absolutely, and Box, which will stack widgets next to each other, either horizontally or vertically.
Each widget draws itself at x=0, y=0. Therefore, containers need to add an offset to the drawing context before the widgets are told to draw themselves.
Each widget does its own hit testing based on its x/y position.
So far, it works fine. But it falls apart now that I'm implementing Box: I override the draw function inherited from Container to draw the children next to each other, instead of at their x/y positions. Quite simple.
But event handling is totally off now, as the widget's x/y position has become meaningless.
I think I have two options:
1. Have the widget do hit testing at x=0, y=0, like drawing, and then recalculate the mouse position to match in the Container.
2. Make each layout set x/y for its children, and make children draw themselves at their x/y position again; no more offsets for the drawing context.
The first one is a bit ugly, I think. The second one is pretty complicated to implement, since I need to react to position changes in widgets.
How do other widget systems like Qt, Gtk and wxWidgets generally tackle this? I've looked at the source of some of them, but can't quite figure it out; it's too sophisticated. I don't have any resizing or packing issues to consider.
You are trying to implement your own layout system. You should expect it to be difficult.
I would advise against the first method. The x,y coordinates of a widget are not only used by the widgets themselves, but by anyone outside of the container who wants to do something with the widget.
The second solution is what I chose when implementing custom widgets made of several smaller widgets, and it's not that hard to put together if you don't want too many features.
Just get the widgets when they are added to your container, set their position to the current free spot, and move on to the next.
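A minimal C++ sketch of that approach (all names are invented for illustration, not taken from any particular toolkit):

```cpp
#include <memory>
#include <vector>

// Base widget: draws and hit-tests at its stored absolute position.
struct Widget {
    int x = 0, y = 0, width = 0, height = 0;
    virtual ~Widget() = default;
    virtual void draw() { /* draw at (x, y) in window coordinates */ }
    virtual bool hitTest(int px, int py) const {
        return px >= x && px < x + width && py >= y && py < y + height;
    }
};

// A vertical box: each child is assigned the next free spot when added,
// so no drawing offsets are needed later.
struct VBox : Widget {
    std::vector<std::unique_ptr<Widget>> children;
    int spacing = 4;

    Widget *add(std::unique_ptr<Widget> child) {
        int nextY = y;
        if (!children.empty()) {
            Widget *last = children.back().get();
            nextY = last->y + last->height + spacing;
        }
        child->x = x;
        child->y = nextY;
        children.push_back(std::move(child));
        return children.back().get();
    }

    void draw() override {
        for (auto &c : children)
            c->draw();  // each child already knows its absolute position
    }

    bool hitTest(int px, int py) const override {
        for (auto &c : children)
            if (c->hitTest(px, py))
                return true;
        return false;
    }
};

int main() {
    VBox box;
    Widget *first = box.add(std::make_unique<Widget>());
    first->width = 100;
    first->height = 30;
    Widget *second = box.add(std::make_unique<Widget>());  // lands below `first`
    second->width = 100;
    second->height = 30;
    box.draw();
    return box.hitTest(10, 40) ? 0 : 1;  // (10, 40) falls inside `second`
}
```

The key point is that positions are assigned once, at insertion time, so drawing and hit testing both use the same stored absolute coordinates.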
When I resize a window, I can do so using the top, bottom, left or right sides, or the top-right, top-left, bottom-right or bottom-left corners.
Is there a way to know which one is used when I'm resizing?
I don't know if there is an elegant solution because different operating systems handle borders differently.
My suggestion is:
1. Compute the difference between the current and previous window size each time the window is drawn.
2. Get the mouse cursor's position.
3. If the window's width changes, the border used is probably the left or right one, whichever the mouse cursor is closest to. If the height changes, it is probably the top or bottom border the cursor is closest to.
4. If both change, the corner the mouse cursor is closest to is probably it.
A few corner cases may come up. For example, a window can be resized on some systems using the keyboard, and it can also be resized programmatically, such as when the user switches to a resolution too low to contain your window. These cases can mostly be handled by checking whether a mouse button is pressed while the resize is taking place.
Also, it is possible to resize just the width or the height from a corner. In these cases, you may have to choose a threshold for the mouse's distance from the corner to decide whether the resize is actually happening at the corner.
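A minimal Qt sketch of this heuristic (the class name and logging are mine; a real implementation would add the corner threshold mentioned above):

```cpp
#include <QApplication>
#include <QCursor>
#include <QDebug>
#include <QResizeEvent>
#include <QWidget>
#include <cstdlib>

// Compares the old and new size in resizeEvent(), then picks the edge or
// corner the mouse cursor is closest to.
class ResizeAwareWindow : public QWidget {
protected:
    void resizeEvent(QResizeEvent *event) override {
        QWidget::resizeEvent(event);
        if (!event->oldSize().isValid())  // first show, not a user resize
            return;

        const bool widthChanged  = event->size().width()  != event->oldSize().width();
        const bool heightChanged = event->size().height() != event->oldSize().height();

        const QRect geo = frameGeometry();    // global coordinates
        const QPoint mouse = QCursor::pos();  // global cursor position

        QString horizontal, vertical;
        if (widthChanged)
            horizontal = std::abs(mouse.x() - geo.left()) <
                         std::abs(mouse.x() - geo.right()) ? "left" : "right";
        if (heightChanged)
            vertical = std::abs(mouse.y() - geo.top()) <
                       std::abs(mouse.y() - geo.bottom()) ? "top" : "bottom";

        // Both set: probably a corner. Only one set: an edge.
        qDebug() << "resize came from:" << vertical + horizontal;
    }
};

int main(int argc, char **argv) {
    QApplication app(argc, argv);
    ResizeAwareWindow w;
    w.resize(400, 300);
    w.show();
    return app.exec();
}
```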