How to scroll window contents using the Direct2D API?

I would like to scroll the contents of a window whose drawing is performed with the Direct2D API through an ID2D1RenderTarget.
In GDI I could create a buffer with CreateCompatibleDC, scroll its contents with ScrollDC, redraw the exposed area, and BitBlt the buffer to the window.
I cannot see any equivalent API in Direct2D for performing the same operations. How can I achieve the same functionality without using GetDC (and GDI), and without using my own third buffer?

There is no scroll API in Direct2D. Your best option for hardware-accelerated scrolling is to use a second buffer. On the ID2D1RenderTarget that you want to scroll, call CreateCompatibleRenderTarget() to create an ID2D1BitmapRenderTarget (it's a good idea to cache this target) with the same pixel size as ID2D1RenderTarget::GetPixelSize() and the same resolution as returned by ID2D1RenderTarget::GetDpi(). Then use ID2D1BitmapRenderTarget::GetBitmap() to get the underlying ID2D1Bitmap. Next, use ID2D1Bitmap::CopyFromRenderTarget() to copy the contents, adjusted by the distance you're scrolling. Then copy that bitmap's contents back to the original render target, re-render the uncovered area, and present (via EndDraw()).
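A minimal sketch of the steps above, assuming `m_rt` is the window's render target and `m_bufferRT` is the cached compatible render target of the same pixel size and DPI (names are illustrative; error handling omitted):

```cpp
#include <d2d1.h>
#include <wrl/client.h>

// Sketch: scroll the render target contents up by `dy` device pixels.
// m_rt and m_bufferRT are assumed to already exist; m_bufferRT was
// created from m_rt via CreateCompatibleRenderTarget().
void ScrollContents(ID2D1RenderTarget* m_rt,
                    ID2D1BitmapRenderTarget* m_bufferRT,
                    UINT32 dy)
{
    Microsoft::WRL::ComPtr<ID2D1Bitmap> bitmap;
    m_bufferRT->GetBitmap(&bitmap);

    D2D1_SIZE_U size = m_rt->GetPixelSize();

    // Copy the part of the window that stays visible into the buffer,
    // shifted up by dy: the source rect starts dy rows down.
    D2D1_POINT_2U dest = D2D1::Point2U(0, 0);
    D2D1_RECT_U src = D2D1::RectU(0, dy, size.width, size.height);
    bitmap->CopyFromRenderTarget(&dest, m_rt, &src);

    // Draw the shifted contents back, then re-render only the strip
    // of height dy that was uncovered at the bottom.
    m_rt->BeginDraw();
    m_rt->DrawBitmap(bitmap.Get());
    // ... redraw the newly exposed area here ...
    m_rt->EndDraw();
}
```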

You can use translation.
MSDN: To translate a 2-D object is to move the object along the x-axis, the y-axis, or both.
m_pRenderTarget->SetTransform(D2D1::Matrix3x2F::Translation(20, 10));
More details here: http://msdn.microsoft.com/en-us/library/windows/desktop/dd756691(v=vs.85).aspx
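As a sketch, scrolling by translation means offsetting the coordinate system and re-rendering the whole scene each frame (OnRender and m_scrollY are illustrative names, not part of the original code):

```cpp
#include <d2d1.h>

// Sketch: "scroll" by redrawing everything under a translation.
// Unlike the buffered approach above, the full scene is re-rendered
// every frame; only the transform changes.
void OnRender(ID2D1HwndRenderTarget* m_pRenderTarget, float m_scrollY)
{
    m_pRenderTarget->BeginDraw();
    m_pRenderTarget->Clear(D2D1::ColorF(D2D1::ColorF::White));

    // Shift the coordinate system up by the scroll offset.
    m_pRenderTarget->SetTransform(
        D2D1::Matrix3x2F::Translation(0.0f, -m_scrollY));

    // ... draw the full scene in document coordinates here ...

    m_pRenderTarget->SetTransform(D2D1::Matrix3x2F::Identity());
    m_pRenderTarget->EndDraw();
}
```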

In DXGI 1.2 there is a new IDXGISwapChain1::Present1 API call that takes a DXGI_PRESENT_PARAMETERS parameter. It contains functionality for scrolling window contents.
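A hedged sketch of how the scroll fields of DXGI_PRESENT_PARAMETERS are filled in, assuming `swapChain` is an IDXGISwapChain1 and the contents move up by `dy` pixels:

```cpp
#include <dxgi1_2.h>

// Sketch: tell DXGI that most of the frame is the previous frame
// shifted up by dy pixels; only dirtyRect was actually redrawn.
HRESULT PresentScrolled(IDXGISwapChain1* swapChain,
                        LONG width, LONG height, LONG dy)
{
    RECT dirtyRect = { 0, height - dy, width, height }; // newly drawn strip
    RECT scrollRect = { 0, 0, width, height };          // area that moved
    POINT scrollOffset = { 0, -dy };                    // moved up by dy

    DXGI_PRESENT_PARAMETERS params = {};
    params.DirtyRectsCount = 1;
    params.pDirtyRects = &dirtyRect;
    params.pScrollRect = &scrollRect;
    params.pScrollOffset = &scrollOffset;

    return swapChain->Present1(1, 0, &params);
}
```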

Related

Using OpenGL frame buffer objects with Qt (QOpenGLWidget), how to disable multisampling when drawing to frame buffer

As the title suggests, I'm using Qt for OpenGL drawing, and with QOpenGLWidget I can turn on multisampling for the main screen buffer with QSurfaceFormat's setSamples() function. This works fine and looks pretty nice. However, I'm also drawing into a custom frame buffer (using glGenFramebuffers, glBindFramebuffer(), etc) in the background, where I don't want anti-aliasing (since it's drawing using color encoding for selection purposes), but it seems to be inheriting the multi-sampling from the main QOpenGLWidget somehow. Any ideas on how to disable that, to use multisampling in the main window but not in my own custom off-screen frame buffers?
Multisampled rendering is enabled or disabled with glEnable/glDisable(GL_MULTISAMPLE). This state is not part of the framebuffer's state; it's regular context state. As such, even when you switch framebuffers, that state will be unaffected.
Additionally, the multisample enable/disable switch doesn't mean anything if your attached images don't have multiple samples. If you're creating images for non-multisampled rendering, there's no reason to create them with multiple samples. So create single-sample images.
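A sketch of the two points above, assuming GL function pointers are already loaded (e.g. via GLEW or Qt's versioned function objects) and `selectionFBO` is a hypothetical framebuffer whose color attachment was created without multisampling:

```cpp
#include <GL/glew.h>  // assumption: any GL loader providing FBO entry points works

// Sketch: render the color-coded selection pass without multisampling.
// selectionFBO is assumed to have single-sample attachments (e.g. a
// GL_RGBA8 texture from glTexImage2D, not glTexImage2DMultisample).
void renderSelectionPass(GLuint selectionFBO)
{
    glBindFramebuffer(GL_FRAMEBUFFER, selectionFBO);

    // GL_MULTISAMPLE is context state, not framebuffer state,
    // so disable it explicitly for this pass.
    glDisable(GL_MULTISAMPLE);

    // ... draw objects with ID colors for picking ...

    glEnable(GL_MULTISAMPLE);               // restore for the main pass
    glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default (MSAA) buffer
}
```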
Well, I couldn't find a way to disable or avoid it in my OpenGL code, but if I set the default format to have 0 samples and the format of the QOpenGLWidget to have 2/4/8/whatever, the framebuffer object won't use anti-aliasing when it's created.

How to create a window in browser which has a NSWindow handle?

I am making a plugin for Safari on Mac. I am stuck at how to create a window over the browser's window upon which a video can be displayed.
Earlier, we were using the Cocoa event model, under which the window pointer received in NPWindow in the NPP_SetWindow function is null. Then we switched to the Carbon event model and got a pointer to NP_CGContext via the window pointer in the NPWindow struct, from which we got a WindowRef and then a pointer to an NSWindow, as follows:
NP_CGContext* npContext = (NP_CGContext*)npWindow->window;
WindowRef window = npContext->window;
NSWindow* browserWindow = [[[NSWindow alloc] initWithWindowRef:window] autorelease];
Our streaming engine accepts the pointer to NSWindow. We don't know how to create a window in our browser space.
So any help regarding the window creation would be appreciated.
Our streaming engine accepts the pointer to NSWindow. We don't know how to create a window in our browser space.
You should not do this, as explained in previous answers.
A streaming engine that requires an NSWindow pointer is very poorly suited to making an NPAPI plugin. You should, if at all possible, look for something that takes or vends a CALayer, or failing that, one that can draw frames into a CGContextRef (though this will be much slower in out-of-process plugins).
If you absolutely must use an NSWindow, then you'll need to make a new one in your plugin process that is completely unrelated to the browser's window, and display it somewhere on screen. The user experience will be relatively poor, because it won't move with the window, can end up behind the browser window, etc. This is explicitly discouraged by browser vendors. But if you have no choice but to use an NSWindow, then this is your only option with modern browsers.

Cocos3D - Take various screenshots in the background

Using Cocos3D, is it possible to take a screenshot of the 3D model in the background without the user knowing it?
For pre-processing and other purposes, I want to take screenshots of the 3D model at various angles. Following the render-to-texture capability, I noticed that when my scene is not visible, the drawSceneContentWithVisitor: method executes only once rather than at every rendering cycle. For obvious reasons, the CC3GLFramebuffer* won't get updated with new data, so I'm only able to take the initial screenshot.
Thanks.
In Cocos3D, you can render your 3D scene to an off-screen surface. See the CC3DemoMashUp addTelevision and drawSceneContentWithVisitor: methods for an example of how to do this.
What is important is that the 3D drawing environment has been established when you perform your drawing. The safest place to do this is inside your drawSceneContentWithVisitor: method. But if you want to render somewhere else, you need to invoke the CC3Scene open3DWithVisitor: and CC3Scene close3DWithVisitor: methods before and after rendering. See the implementations of the CC3Scene processInitializeScene and open methods for examples of how to do that.
To render your scene from multiple viewpoints, you need to add multiple cameras to your scene, and set the camera property of your drawing visitor appropriately to select a camera before rendering. See how this is done in the CC3DemoMashUpScene addTelevision and drawToTVScreen methods. The drawToTVScreen method also shows how to handle clearing the color and depth buffers of your surface.

Qt custom widget update big overhead

We are trying to use Qt 4.8.5 for some Linux-based embedded devices in our company. I use Qt Embedded without an X server. I need to plot measured data and update it very often (20-30 fps, but only a small portion of the widget). The system is ARM-based, 400 MHz, with no GPU and no FPU.

I subclassed QWidget and overrode paintEvent(). I have WA_OpaquePaintEvent and WA_StaticContents set. For testing, my paint event is empty, and I call the widget's update() function from a timer set to 50 ms. My problem is that the empty update eats up 30% of the CPU. The amount varies with the area of the update, so I think Qt may be redrawing something in the background.

I have read many posts but cannot find a solution to my problem. If I comment out the update call, the CPU usage drops to ~1% (even if I generate a sine wave in the timer to test the widget, which should be much more complex than an empty function call). My widget is rectangular, is not transparent, and I want to handle the full drawing procedure from the paint event.
Is it possible to reduce this overhead, and handle the whole painting process by my own?
The "empty update" is not empty - it repaints the whole window :)
Have you read the below?
To rapidly update custom widgets with simple background colors, such as real-time plotting or graphing widgets, it is better to define a suitable background color (using setBackgroundRole() with the QPalette::Window role), set the autoFillBackground property, and only implement the necessary drawing functionality in the widget's paintEvent().
You should also be using QWidget::scroll(), since internally it does scroll the backing store of the window, and that's much more efficient than repainting the entire thing if only a tiny slice is added to it.
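A hedged sketch of the advice above (the class and method names are illustrative, not from the question):

```cpp
#include <QWidget>
#include <QPainter>
#include <QPaintEvent>

// Sketch: a plotting widget that scrolls its backing store and
// repaints only the newly exposed strip, per the Qt docs' advice.
class PlotWidget : public QWidget
{
public:
    explicit PlotWidget(QWidget *parent = 0) : QWidget(parent)
    {
        setAttribute(Qt::WA_OpaquePaintEvent);
        setAttribute(Qt::WA_StaticContents);
        setBackgroundRole(QPalette::Window);
        setAutoFillBackground(true);
    }

    void appendColumn()
    {
        const int dx = 1;                        // new data is one pixel wide
        scroll(-dx, 0);                          // shift the backing store left...
        update(width() - dx, 0, dx, height());   // ...and repaint only the gap
    }

protected:
    void paintEvent(QPaintEvent *event)
    {
        QPainter p(this);
        // Only event->rect() needs drawing; with scroll() that is
        // just the thin strip of new data on the right edge.
        p.fillRect(event->rect(), Qt::black);
        // ... draw the newest samples into event->rect() here ...
    }
};
```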

Slow repaint underneath dragged object on X... Can Qt force drag and drop operations to be internal only?

I'm implementing Qt's drag and drop API across Windows and X. When I pick up an object in the app running on X and drag it, it leaves a white ghost trail of itself on the window underneath, as if the window underneath is being slow to repaint where the dragged object was previously obscuring part of itself.
I believe that this is symptomatic of the same problem that Qt has just solved with resizing windows causing flicker in child widgets on X windows - i.e. the dragged object is treated as a separate native window and therefore X handles the clipping from the dragged object to the window underneath. Since X does this in a different way to Qt, we get the ghosting effect.
Has anyone experienced the same problems? One solution that comes to mind is to use the same technique as detailed in the blog article linked above and stop the dragged object being treated as a native window, presumably at the cost of drag and drop being limited to my application only (I have no problem with this). Would anyone know how to force drag and drop operations to be internal only?
EDIT: I'm using QDrag::setPixmap to set the graphical representation of the dragged object - it is important that I retain this in favour of a standard drag cursor as this interface is being used on a touchscreen device and will hence have no visible cursor.
I'm now of the opinion that short of editing and then compiling my own build of Qt (not an option for me), I can't force drag and drop operations to be internal only.
Equally, I can't find any way of getting rid of the ghost trail by tweaking my window manager settings or using a compositing window manager (thanks anyway though @atomice). Using OpenGL as the renderer increases the screen repaint speed slightly, but it's not perfect and introduces its own problems (see "Starting a Qt drag operation on X11 w/ OpenGL causes screen flicker"). I would still be very interested to hear any ideas though.
I have, however, got a workaround for my problem which works on both Windows and X. Here's a simplified version:
void DoDrag()
{
    // Prepare the graphical representation of the drag: a QLabel
    // (QWidget has no setPixmap), transparent to mouse events so it
    // doesn't interfere with drop-target detection.
    mDragRepresenter = new QLabel(QApplication::activeWindow());
    mDragRepresenter->setAttribute(Qt::WA_TransparentForMouseEvents);
    mDragRepresenter->setPixmap(GenerateDragPixmap());
    RepositionDragRepresenter();
    mDragRepresenter->show();

    // Follow the cursor while the modal drag loop runs.
    QTimer UpdateTimer;
    connect(&UpdateTimer, SIGNAL(timeout()), this, SLOT(RepositionDragRepresenter()));
    UpdateTimer.start(40);

    // Start the drag (modal operation); Drag is the QDrag set up elsewhere.
    Qt::DropAction ResultingAction = Drag->exec(Qt::CopyAction);

    UpdateTimer.stop();
    delete mDragRepresenter;
}

void RepositionDragRepresenter()
{
    mDragRepresenter->move(QApplication::activeWindow()->mapFromGlobal(QCursor::pos()) - mDragRepresenterHotSpot);
}
An X11 window is only created for a drag operation if QDrag::mimeData()->hasImage() is true. If you modify your code so it doesn't use an image, you will just get a drag cursor instead, which won't trigger a repaint of the windows underneath.
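As a sketch, a drag started with text-only mime data, so hasImage() is false and only the standard drag cursor is shown (the payload string is illustrative):

```cpp
#include <QDrag>
#include <QMimeData>
#include <QWidget>

// Sketch: start a drag whose mime data carries no image, so
// (per the answer above) no separate X11 window is created for it.
void startTextOnlyDrag(QWidget *source)
{
    QDrag *drag = new QDrag(source);

    QMimeData *mimeData = new QMimeData;
    mimeData->setText("dragged-item-id");   // illustrative payload; no setImageData()
    drag->setMimeData(mimeData);

    drag->exec(Qt::CopyAction);             // modal; returns the drop action
}
```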
You don't specify what kind of object you are dragging or how you are setting up the drag operation. Can you add some code to show that?
