I can draw Qt objects to a QImage and then draw the image to an HDC or CDC, but this may hurt our application's performance. It would be great if I could draw Qt objects directly to a Win32 HDC or MFC CDC. I expect there to be a class, say QWin32Image for clarity, that I could use this way:
QWin32Image image(hdc, 100, 100, Format_ARGB32_Premultiplied);
QPainter painter(&image);
painter.drawText(....);
Is what I have in mind possible? Or is there a better way to do this?
Short answer - No. AFAIK, Qt abstracts the entire UI for platform independence at the application-code level. Qt paints all of its widgets to its own buffer and then paints that buffer to the screen.
Long answer - Qualified Yes.
Qt offers a Win/MFC integration library which allows Qt objects to interact with HDC and MFC objects. This library does work well, but I found using it somewhat confusing until I understood how it works.
What this library does is allow you to display a Qt window inside an MFC window, or an MFC window inside a Qt frame. As long as you keep this in mind, you can make it work.
Alternatively, QImage::scanLine(0) points to the entire raw bitmap, which you can use to write directly to the screen with one of the Windows functions. Even though the function is named scanLine, calling it with index 0 gives you a pointer to the start of the raw pixel buffer.
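A minimal sketch of that second approach, assuming Windows, Qt, and a valid `hdc`; `blitToHdc` is a hypothetical helper name, and the GDI calls shown are one plausible way to push the pixels (not the only one):

```cpp
// Sketch: paint with Qt into a QImage, then blit the raw pixel buffer
// to a Win32 HDC using GDI. Assumes a valid hdc.
#include <QImage>
#include <QPainter>
#include <windows.h>

void blitToHdc(HDC hdc, int width, int height)
{
    QImage image(width, height, QImage::Format_ARGB32_Premultiplied);
    image.fill(Qt::white);

    QPainter painter(&image);
    painter.drawText(10, 20, "Hello from Qt");
    painter.end();

    // Describe the QImage memory layout to GDI as a 32-bit top-down DIB.
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;   // negative height = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    // scanLine(0) points at the start of the whole pixel buffer.
    SetDIBitsToDevice(hdc, 0, 0, width, height, 0, 0, 0, height,
                      image.scanLine(0), &bmi, DIB_RGB_COLORS);
}
```

Note that ARGB32_Premultiplied matches GDI's expectation for 32-bit DIBs reasonably well; if you need per-pixel alpha composited against the destination, you would look at AlphaBlend instead.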
QML, as per my knowledge, does the same thing as OpenGL, right? So can I completely replace OpenGL with QML?
What's the basic difference between QML and OpenGL?
When do people prefer QML over OpenGL, and vice versa?
Your knowledge is incorrect: QML and OpenGL are two completely different things. The first is a declarative language, the second is a graphics API.
QtQuick, which uses QML, usually uses OpenGL for its graphics, but that's a back-end you don't have any access to (it actually became a little more accessible in recent releases, but I expect not many people will go into tweaking it, and even if they did, it would be in C++, not QML).
There is Qt3D, which has a QML API, but it covers just some basic stuff and it is high level - by no means a substitute for OpenGL, which is very low level. That means it will be much easier to set up 3D models, cameras, materials and such with Qt3D - things you'd normally not do in OpenGL directly, but with an API built on top of OpenGL.
The top-level view in the application I am working on is not Qt-based. This view has its own APIs to draw lines, pixels, etc. I would like to take a rectangular portion of this view and attach it to a QMainWindow instance. I guess there must be some mechanism within Qt that attaches a native screen (Windows, X Window System, etc.) to a QMainWindow. Can you please direct me to the abstract class that Qt uses for drawing to the actual surface? Regards.
If you're using Qt 4 there's QX11EmbedWidget, which doesn't actually seem to exist in Qt 5, and I can't find a good replacement. In terms of surface rendering, everything is done through QPaintDevice if it's subclassed from QWidget (which, as far as I know, every GUI element is).
The default raster backend draws on a QImage, so what you paint on with a QPainter in any widget is a QImage.
The backing store QImage shares the image bits with the underlying platform. On Windows, the QImage accesses a DIB section's data directly. On X11, the QImage accesses a shared memory XImage.
In all cases, assuming that your non-Qt code expects a bitmap to paint on, you can pass the data pointer from the QImage to the non-Qt code, within the paint event:
QImage * image = dynamic_cast<QImage*>(backingStore()->paintDevice());
The non-Qt code needs to properly interface to a large bitmap: it needs to accept a starting scan line to draw on, an X offset, and scanline length.
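A sketch of how that handoff could look, using a raster QWindow that owns its QBackingStore; `drawLegacy` is a hypothetical stand-in for the legacy view's own line/pixel API, and the class name is illustrative:

```cpp
// Sketch: expose the backing store's QImage pixel buffer to non-Qt
// drawing code from inside the paint/expose path.
#include <QWindow>
#include <QBackingStore>
#include <QImage>
#include <QExposeEvent>

// Hypothetical legacy drawing entry point: raw buffer, starting scan
// line, x offset within the line, and bytes per line.
void drawLegacy(uchar *buffer, int startLine, int xOffset, int bytesPerLine);

class LegacyHostWindow : public QWindow
{
public:
    LegacyHostWindow() : m_store(new QBackingStore(this)) {}

protected:
    void exposeEvent(QExposeEvent *) override { render(); }

private:
    void render()
    {
        const QRect rect(0, 0, width(), height());
        m_store->resize(rect.size());
        m_store->beginPaint(rect);

        // With the raster backend, the paint device is a QImage.
        QImage *image = dynamic_cast<QImage*>(m_store->paintDevice());
        if (image)
            drawLegacy(image->bits(), 0, 0, image->bytesPerLine());

        m_store->endPaint();
        m_store->flush(rect);
    }

    QBackingStore *m_store;
};
```

The key detail is passing `bytesPerLine()` rather than assuming `width * 4`: the backing store image can have row padding, so the legacy code must step through rows using the real stride.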
I want to use Qt 5.4 to create a window and render some stuff in that window with normal OpenGL functions. Over the last few days I have read a lot about the Qt classes, how to initialize OpenGL, and so on. I think the main classes I have to deal with are QOpenGLWindow or QOpenGLWidget, but there are QSurface and some other classes too. Now I am very unsure about what to do next and which class I should use so that I can use the plain OpenGL functions later. Can someone explain more clearly what I have to do to set up a Qt GUI in which I can use plain OpenGL?
Some other questions from me are:
At which point does Qt create a plain OpenGL context? Do I have to use the QOpenGLContext?
What is exactly the difference between a QSurface and a QOpenGLWindow? In the QOpenGLWindow example both classes are used.
Is it possible to use GLEW alongside this Qt stuff? There are some questions on here that deal with setting up GLEW with Qt, but I think I did not get the real point of why GLEW is needed.
Edit: I discussed this question with a colleague and our only conclusion was to use offscreen rendering. Does anyone know of another solution?
At which point does Qt create a plain OpenGL context? Do I have to use the QOpenGLContext?
Either where it's documented (for instance, creating a QOpenGLWidget or a QOpenGLWindow will automatically create a context), or you can create a context manually at any time by creating a QOpenGLContext object.
What is exactly the difference between a QSurface and a QOpenGLWindow? In the QOpenGLWindow example both classes are used.
A QSurface is a base class representing a "drawable surface" (onscreen or offscreen). QWindow is its onscreen implementation (representing a top-level window), so it inherits from QSurface. You can draw over a QWindow using OpenGL or a CPU-based rasterizer.
Finally, QOpenGLWindow is a QWindow subclass which offers some extra functionality and convenience by automatically creating and managing an OpenGL context (via QOpenGLContext), having an optional partial-update strategy (through the use of an FBO), etc.
Is it possible to use GLEW alongside this Qt stuff? There are some questions on here that deal with setting up GLEW with Qt, but I think I did not get the real point of why GLEW is needed.
Qt is not in your way, and it doesn't change your usage of OpenGL in any way. Just use Qt to create a window and a context (in a totally cross-platform way); then you're free to use GLEW (to resolve OpenGL function pointers, extensions, etc.) or any third-party OpenGL abstraction.
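A sketch of that ordering, assuming Qt 5 and GLEW are both available; the class name is illustrative. The one rule that matters is that glewInit() must run with a current context, which is exactly the situation inside initializeGL():

```cpp
// Sketch: let Qt create the window and QOpenGLContext, then initialize
// GLEW once the context is current, and use plain OpenGL afterwards.
#include <GL/glew.h>          // include before any other GL headers
#include <QGuiApplication>
#include <QOpenGLWindow>

class GlewWindow : public QOpenGLWindow
{
protected:
    void initializeGL() override
    {
        // Qt has already made its QOpenGLContext current here,
        // which is what glewInit() requires.
        if (glewInit() != GLEW_OK)
            qFatal("glewInit failed");
    }

    void paintGL() override
    {
        // Plain OpenGL calls from here on.
        glClearColor(0.1f, 0.2f, 0.3f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
    }
};

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    GlewWindow window;
    window.resize(640, 480);
    window.show();
    return app.exec();
}
```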
I am learning the basics of OpenGL with Qt, and it seems sample buffers are Qt-specific; I don't really understand what information they store for the screen. What is a sample buffer and what is it used for?
They are used to get multisampling in Qt. Setting up multisampling is normally platform-specific (since it requires a pixel format with multisampling support), but Qt lets you do it in a platform-independent way. To get an OpenGL context with multisampling, you pass a QGLFormat with sample buffers enabled when creating your QGLWidget.
Specifically, QGLFormat::setSampleBuffers is used to request a multisampled context and QGLFormat::setSamples is used to set the preferred number of samples.
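Put together, the setup could look like this (Qt 4-era QGLFormat API, matching the calls named above; the function name is illustrative):

```cpp
// Sketch: create a QGLWidget with a multisample-capable context.
#include <QGLFormat>
#include <QGLWidget>

QGLWidget *createMultisampledWidget()
{
    QGLFormat format;
    format.setSampleBuffers(true);  // request a multisampled pixel format
    format.setSamples(4);           // preferred number of samples per pixel

    return new QGLWidget(format);   // context is created with this format
}
```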
In your OpenGL code you also have to enable multisampling before rendering:
glEnable(GL_MULTISAMPLE);
jamvm -Dawt.toolkit=gnu.java.awt.peer.qt test
QPixmap: It is not safe to use pixmaps outside the GUI thread
I'm new to Qt and I don't know how to deal with this.
I have no experience whatsoever with JamVM, but here's a Qt doc quote that might be helpful:
Qt provides four classes for handling image data: QImage, QPixmap, QBitmap and QPicture. QImage is designed and optimized for I/O, and for direct pixel access and manipulation, while QPixmap is designed and optimized for showing images on screen.
Try using QImage instead of QPixmap and see if there is the same warning/error message.
Since QPixmap is a device-dependent representation, and many display drivers and systems aren't thread-safe, QPixmap is restricted to only being used in the main or GUI thread, which is the same thread your QApplication object should be instantiated in. You can see a brief bit in the documentation here, and read more information about it in this discussion thread.
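The usual pattern that follows from this restriction is to do all image work with QImage in the worker thread and only convert to QPixmap back on the GUI thread. A sketch, with illustrative class and signal names (none of these come from the question):

```cpp
// Sketch: load/process with QImage off the GUI thread; convert to
// QPixmap only in the GUI thread, where it is safe.
#include <QImage>
#include <QObject>
#include <QString>

class Loader : public QObject
{
    Q_OBJECT
public slots:
    void load(const QString &path)
    {
        QImage image(path);   // QImage is safe in a worker thread
        emit loaded(image);   // implicitly shared, cheap to pass by value
    }
signals:
    void loaded(const QImage &image);
};

// In the GUI thread, after moving a Loader instance to a QThread,
// connect across threads (Qt uses a queued connection automatically):
//
//   connect(loader, &Loader::loaded, label, [label](const QImage &img) {
//       label->setPixmap(QPixmap::fromImage(img));  // GUI thread only
//   });
```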