I've just added a second QGLWidget to my app (both QGLWidgets inherit from the same class). While the first one still works as expected, the second one raises GL_OUT_OF_MEMORY in the glDrawArrays() call of my paintGL() method, regardless of the data used to fill the buffers.
I managed to solve this by passing the first QGLWidget as a "share widget" when creating the second one:
http://doc.qt.io/qt-4.8/qglwidget.html#QGLWidget
However, the two QGLWidgets now seem to be linked/synchronized (especially the cameras, but only when switching from one widget to the other).
My question is therefore more general: how should I handle my two QGLWidgets to avoid conflicts, knowing that they only share the same shader code (vertex and fragment) but no data (they do not read/write the same buffers)?
EDIT: I use PyQt4
The problem comes from the fact that when I switch from one window to another (my QGLWidgets live in different windows), the paintGL() method is called and, as the context is shared, the widgets also share the same camera matrices. Thus, at the beginning of each paintGL() call I now invoke my updateCamera() method.
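For illustration, here is a minimal C++ sketch of that setup (the same applies in PyQt4; MyGLWidget and updateCamera() are placeholder names, not from the question): the second widget shares the first one's context via the shareWidget constructor argument, and every widget keeps its own camera matrix which it re-uploads at the start of paintGL(), so no stale uniform state leaks from one widget to the other.

#include <QGLWidget>
#include <QGLShaderProgram>
#include <QMatrix4x4>

class MyGLWidget : public QGLWidget
{
public:
    // Pass the first widget as shareWidget so shaders/buffers can be shared.
    explicit MyGLWidget(QWidget *parent = 0, const QGLWidget *shareWidget = 0)
        : QGLWidget(parent, shareWidget), program(0) {}

protected:
    void paintGL()
    {
        updateCamera();   // re-upload this widget's own matrices first
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... bind this widget's own VBOs and draw ...
    }

private:
    void updateCamera()
    {
        // program and viewMatrix are per-widget members in this sketch,
        // so each widget renders with its own camera.
        program->bind();
        program->setUniformValue("viewMatrix", viewMatrix);
    }

    QGLShaderProgram *program;
    QMatrix4x4 viewMatrix;
};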
Ok, it's pretty simple with a storyboard to have one viewController create a segue to another viewController. Simply control-click and drag from one viewController to the other.
How do I create a segue, either in the storyboard, or programmatically to invoke or present another instance of itself?
Why would I want to do this?
I currently have a viewController with a UICollectionView presented from data in a simple one-dimensional array of objects. When I select an item it presents a detail viewController for that item.
What I want to do is modify my data for organizational reasons, allowing the objects in the array to hold an array of objects like the original data array, like a folder with sub-folders. It seems reasonable to me that simply updating what the data source points to and calling the same viewController to display the next level should be relatively trivial. When finished, simply pop back a level and be right where you left off.
It seems nontrivial to click-drag from the viewController to itself. Prior to using storyboards and segues, this would be done by simply presenting the view controller. What is the best way to do this with storyboards and segues?
Yes, it COULD probably be done by manipulating the data source and just redrawing the current viewController, but it seems like it SHOULD be cleaner to call the viewController with a pointer into the sub-array as if it were the top-level array and re-present the same viewController, letting the view controller stack manage the individual levels without having to redraw the model from different starting points and remembering those starting points in some kind of stack.
Any advice on the best way to do this?
Apparently you cannot invoke or present your view controller on itself.
Likely this is because UIKit is not re-entrant or thread-safe. So, Apple doesn't let you use a segue, or use presentViewController with an argument of self (or anything that resolves to self).
A pity, since it would have been nice to not have to create a sub-view controller which does all the same things the viewController does.
If someone knows of a better way to do this, please post below.
I am trying to develop an application with Qt 5.5 and OpenGL. The application's basic job will be to load simple objects, modify their positions in a scene, and save them together with other attributes (material, name, parent/child relations, etc.).
The only thing I have been struggling with for a week now is that I really don't know how to approach the problem of synchronizing data. Let's say I have some kind of SceneGraph class which takes care of all SceneObjects. Those SceneGraphs should be rendered in a SceneView widget which can be used to modify its objects via transformations. Now how would I tell every SceneView that an object changed its position?
I thought about the Model/View architecture for a moment, but I am not really sure what such an implementation should look like.
What would be the best way to handle Objects like that in different Windows/Widgets but still have one single piece of data?
SceneObject:
Holds the mesh information (vertices, UVs, etc.)
Has a name (QString)
Has a material
Has a transform storing position, rotation and scaling information
(Important: these datatypes should be synchronized in all views)
SceneGraph:
Contains different SceneObjects and is passed to SceneViews
SceneView:
The QWidget responsible for drawing the Scene correctly in any QWindow.
Has its own camera to move around.
Handles user input and allows transformation of SceneObjects.
You could use signals and slots to observe position updates of SceneObjects and process them in each SceneView.
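A minimal sketch of that idea (class and member names here are illustrative, not from the question): the SceneObject emits a signal whenever its transform changes, and every SceneView connects to it and repaints.

#include <QObject>
#include <QMatrix4x4>

class SceneObject : public QObject
{
    Q_OBJECT
public:
    void setTransform(const QMatrix4x4 &t)
    {
        m_transform = t;
        emit transformChanged(m_transform);   // notify every connected view
    }

signals:
    void transformChanged(const QMatrix4x4 &newTransform);

private:
    QMatrix4x4 m_transform;
};

// In each SceneView (a QWidget subclass), connect once per object:
//   connect(object, &SceneObject::transformChanged,
//           this, &SceneView::onObjectTransformChanged);
// where the slot stores the new transform and calls update() to repaint.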
I'm implementing a drag and drop interface with Qt across X11 and Windows. The interface handles events such that it is not illegal for a user to drop a dragged object on an area which can't handle drops.
In this case, Qt::IgnoreAction should therefore not be treated as an incorrect potential action. To communicate this fact to the user I need a way to stop Qt::ForbiddenCursor from displaying if the current Qt::DropAction is Qt::IgnoreAction.
There are three ways I can see to achieve this (in order of preference):
To override the QCursor used for a drag with Qt::IgnoreAction to something other than Qt::ForbiddenCursor.
To override the bitmap used for Qt::ForbiddenCursor. This is pretty dirty but would be an acceptable solution as long as I don't have to delve into OS-specific configuration.
To override the call made by Qt when a drag leaves a valid drop area (I assume that Qt does the equivalent of QDropEvent::setDropAction(Qt::IgnoreAction) in this case).
Could anyone suggest ways to achieve any of the above?
Note: I have also attempted to use QApplication::setOverrideCursor() just before calling QDrag::exec(). This doesn't seem to have any effect.
Check whether the QDragEnterEvent is delivered to the application itself (install an event filter on the QApplication object). If it is, simply accept it and the cursor will appear normal.
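A sketch of that suggestion (assuming that accepting the drag-enter at the application level is enough to keep the forbidden cursor away on your platforms):

#include <QApplication>
#include <QDragEnterEvent>

class DragCursorFilter : public QObject
{
protected:
    bool eventFilter(QObject *watched, QEvent *event)
    {
        // A drag-enter arriving at the application object itself means no
        // widget claimed it; accepting it avoids the forbidden cursor.
        if (event->type() == QEvent::DragEnter && watched == qApp) {
            static_cast<QDragEnterEvent *>(event)->accept();
            return true;
        }
        return QObject::eventFilter(watched, event);
    }
};

// Installed once at startup:
//   qApp->installEventFilter(new DragCursorFilter);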
I have now tried for hours to make sense of how QGraphicsItem behaves with respect to children. I create a new QGraphicsItem B (actually my own subclass of it) and add it to another QGraphicsItem A as a child by invoking the setParentItem() method on B. Immediately after that, A has B as a child; I have verified this with some debug code that iterates over the children of A. Then A is added to a list of As in a Manager. Some time later in the program, in a QWidget, the list's iterator is obtained from the Manager. I then iterate over the list of As and check the children of each of them, and all of them are gone. I have verified in the debugger that the Manager is really the same instance and the list is also the same instance. Somehow this really puzzles me: who in the Qt framework decides for me that my A objects no longer need their children?
I'm a newbie to Qt and C++, but with extensive development experience from Java to Objective-C, so I have some hope it is a peculiarity of Qt I'm not aware of, not entirely my own stupidity...
Best Regards,
André
FYI: QGraphicsItemGroup is specially designed for grouping.
// Group all selected items together
QGraphicsItemGroup *group = scene->createItemGroup(scene->selectedItems());
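And if you need to dissolve the group later, QGraphicsScene provides the inverse operation:

// Ungroup: the children stay in the scene as separate items again
scene->destroyItemGroup(group);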
I'm creating a CAD-like app (Qt-based), it will be a multiple document interface and each document will contain about 5 viewports (derived from QGLWidget). As such I need my flat shader to be shared across the entire application, and then the 3D assets (models stored as VBOs) to be shared across each document i.e. the 5 viewports.
I thought that as long as I shared the shader program and VBO GLuint handles around, everything would automagically work - it doesn't. I think it is because each viewport/context has its own address space on the graphics card; if anyone knows better, please inform!
I would like to have the shader compiled at application start, but this is proving difficult as I need a valid QGLWidget to get OpenGL into a valid state beforehand. And since I need to pass a QGLWidget to the others (through their constructor) to have them share resources, one widget needs to be created and shown before the others can be instantiated. This is highly impractical, as multiple views need to be shown at once to the user.
This must be easier than I'm making it out to be, because it's hardly groundbreaking stuff, but I am really struggling - can anyone point me in the right direction?
Thanks, Cam
Here's what typical CAD/MDI applications do:
they create a shared context that serves for, well, sharing resources.
they use wglShareLists when creating a new OpenGL rendering context for giving access to the resource ids of the shared context.
wglShareLists can be used for sharing VBOs, textures, shaders, etc, not only display lists (sharing DLs is the legacy usage, hence the function name).
I don't remember if you need to create resources with the shared context or if you can create them on any context.
If you're not on Windows, see glXCreateContext. That should put you on track.
Edit:
I've looked at Qt; it looks like this is abstracted by the member function QGLContext::create().
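At the Qt level, a common pattern (sketched below; g_shareWidget and createViewport are hypothetical names) is to create one hidden QGLWidget up front, compile shaders and create VBOs in its context, and then pass it as the share widget to every visible viewport, so no visible widget has to exist first:

#include <QGLWidget>

QGLWidget *g_shareWidget = 0;   // hidden, application-wide share widget

QGLWidget *createViewport(QWidget *parent)
{
    if (!g_shareWidget) {
        g_shareWidget = new QGLWidget;   // never shown on screen
        g_shareWidget->makeCurrent();
        // ... compile shaders and create VBOs once, in this context ...
    }
    // Every viewport shares the hidden widget's context and resources.
    return new QGLWidget(parent, g_shareWidget);
}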