Qt OpenGL ES2 glBindTexture fails when picture size is large

I use Qt 4.7 with OpenGL ES2; the hardware is PowerVR, and the SDK is SGX 4.8.
glBindTexture( GL_TEXTURE_2D, bindTexture(m_myPixmapOfPic, GL_TEXTURE_2D));
When the picture size is 512*256 it works well. When the picture is 768*512, it shows black, which means it failed. I tried to find an interface to increase the size of the texture buffer, but Qt does not provide such an interface, and the OpenGL ES2 documentation also does not mention this problem.
QVector<QVector3D> vertices;   // vertices.append(...)
QVector<QVector2D> texCoords;  // texCoords.append(...)
glBindTexture( GL_TEXTURE_2D, bindTexture(m_myPixmapOfPic, GL_TEXTURE_2D));
GLSL: gl_FragColor = texture2D(texture, v_texcoord); // simple bind

This is not due to buffer size. You may need to specify a power-of-two (i.e. 2^n: 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, etc.) sized image as the texture; 768 is not a power of two, while 512 and 256 are. Alternatively, the hardware needs to support an extension for non-power-of-two (NPOT) textures.
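If the driver does not expose an NPOT extension such as GL_OES_texture_npot, one workaround is to pad the image up to the next power of two before handing it to bindTexture() and scale the texture coordinates accordingly. A minimal sketch, assuming a QImage source; nextPowerOfTwo() and padToPowerOfTwo() are illustrative helpers, not Qt API:
#include <QImage>
#include <QPainter>

static int nextPowerOfTwo(int v)
{
    int p = 1;
    while (p < v)
        p <<= 1; // 768 -> 1024, 512 -> 512
    return p;
}

QImage padToPowerOfTwo(const QImage &src)
{
    const int w = nextPowerOfTwo(src.width());
    const int h = nextPowerOfTwo(src.height());
    if (w == src.width() && h == src.height())
        return src; // already POT, e.g. 512*256

    QImage padded(w, h, QImage::Format_ARGB32);
    padded.fill(0); // transparent black
    QPainter p(&padded);
    p.drawImage(0, 0, src);
    return padded;
}
Pass the padded image to bindTexture() and sample only the sub-rectangle containing the picture, i.e. scale the texture coordinates by src.width()/w and src.height()/h.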

Related

How to use OpenCL to write directly to the Linux framebuffer with zero-copy?

I am using OpenCL to do some image processing and want to use it to write an RGBA image directly to the framebuffer. The workflow is shown below:
1) Map the framebuffer to user space.
2) Create an OpenCL buffer using clCreateBuffer with the CL_MEM_ALLOC_HOST_PTR flag.
3) Use clEnqueueMapBuffer to map the results to the framebuffer.
However, it doesn't work; nothing shows on the screen. Then I found that the virtual address mapped from the framebuffer is not the same as the virtual address mapped by OpenCL. Has anybody done a zero-copy move of data from the GPU to the framebuffer? Any help on what approach I should use?
Some key parts of the code:
if ((fd_fb = open("/dev/fb0", O_RDWR, 0)) < 0) {
    printf("Unable to open /dev/fb0\n");
    return -1;
}
fb0 = (unsigned char *)mmap(0, fb0_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd_fb, 0);
...
cmDevSrc4 = clCreateBuffer(cxGPUContext, CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR, sizeof(cl_uchar) * imagesize * 4, NULL, &status);
...
fb0 = (unsigned char*)clEnqueueMapBuffer(cqCommandQueue, cmDevSrc4, CL_TRUE, CL_MAP_READ, 0, sizeof(cl_uchar) * imagesize * 4, 0, NULL, NULL, &ciErr);
For zero-copy with an existing buffer you need to use the CL_MEM_USE_HOST_PTR flag in the clCreateBuffer() call. In addition, you need to give the pointer to the existing buffer as the second-to-last argument.
I don't know how the Linux framebuffer works internally, but it is possible that even with zero-copy from device to host, rendering still involves an extra copy of the data back to the GPU. So you might want to render the OpenCL buffer directly with OpenGL; check out the cl_khr_gl_sharing extension for OpenCL.
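A minimal sketch of that call, assuming fb0 and fb0_size are the pointer and size from the mmap() in the question (note that CL_MEM_USE_HOST_PTR has alignment requirements, and whether it is truly zero-copy still depends on the driver):
cl_int status;
/* Wrap the already-mapped framebuffer; the existing host pointer is the
   second-to-last argument. */
cl_mem fbBuffer = clCreateBuffer(cxGPUContext,
                                 CL_MEM_WRITE_ONLY | CL_MEM_USE_HOST_PTR,
                                 fb0_size,
                                 fb0,
                                 &status);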
I don't know OpenCL yet; I was just doing a search to find out about writing to the framebuffer from it and hit your post. Opening it and mmapping it as in your code looks good.
I've done that with the CPU: https://sourceforge.net/projects/fbgrad/
That doesn't always work; it depends on the computer. I'm on an old Dell Latitude D530, and not only can't I write to the framebuffer, but there's no GPU, so there is no advantage to using OpenCL over the CPU. If you have a /dev/fb0 and you can get something on the screen with
cat /dev/random > /dev/fb0
Then you might have a chance from OpenCL. With a Mali, at least, there's a way to pass a pointer from the CPU to the GPU. You may need to add some offset (true on a Raspberry Pi, I think). And it could be double-buffered by Xorg; there are lots of reasons why it might not work.

Copy a float texture with WebGL2

I can no longer find any trace of copyTexImage2D in the WebGL2 specification: https://www.khronos.org/registry/webgl/specs/latest/2.0/
A few months ago I asked how to copy a float texture. With WebGL 1.0 this was not possible with copyTexImage2D (the float type is not supported), so I made the copy with a simple shader instead.
I imagined that the restriction on the float32 type was lifted with WebGL2, but I cannot find any occurrence of the word "copyTexImage2D" in the WebGL2 specification.
How does this work? Does the WebGL2 specification describe only the new features and new function overloads relative to WebGL1?
In short, with WebGL2, is there a more efficient method to copy a texture?
(In my slow, very slow, understanding of WebGL2 I have not yet tackled the interesting novelty of transform feedback.)
WebGL2's spec just adds to WebGL1. From the WebGL2 spec, right near the beginning:
This document should be read as an extension to the WebGL 1.0 specification. It will only describe the differences from 1.0.
Similarly, it also says:
The remaining sections of this document are intended to be read in conjunction with the OpenGL ES 3.0 specification (3.0.4 at the time of this writing, available from the Khronos OpenGL ES API Registry). Unless otherwise specified, the behavior of each method is defined by the OpenGL ES 3.0 specification.
So, copyTexImage2D is still there.
Your blitFramebuffer solution works, though.
OK, I found a solution: blitFramebuffer.
Let texture1 be the texture we want to copy into texture2. We already have two framebuffers, copieFB and FBorig:
copieFB has a color attachment to texture2,
FBorig has a color attachment to texture1.
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, copieFB);
gl.bindFramebuffer(gl.READ_FRAMEBUFFER, FBorig);
gl.readBuffer(gl.COLOR_ATTACHMENT0);
gl.blitFramebuffer(0, 0, PVS, PVS, 0, 0, PVS, PVS, gl.COLOR_BUFFER_BIT, gl.NEAREST);
Old solution:
gl.bindFramebuffer(gl.FRAMEBUFFER, copieFB);
gl.viewport(0, 0, PVS, PVS);
gl.useProgram(copieShader);
gl.uniform1i(copieShader.FBorig, TEXTURE1);
gl.drawArrays(gl.POINTS, 0, NBRE);
The gain is a few FPS.
copyTex[Sub]Image2D works with floats in WebGL2 with the EXT_color_buffer_float extension.
I'll note that this also works in WebGL1 with the extensions:
OES_texture_half_float and EXT_color_buffer_half_float[1] for float16s
OES_texture_float and WEBGL_color_buffer_float[1] for float32s
Note the sometimes-confusing differences:
WEBGL_color_buffer_float is for WebGL1, and enables only RGBA32F (RGBA/FLOAT for textures)
EXT_color_buffer_half_float is for WebGL1, and enables only RGBA16F (RGBA/HALF_FLOAT_OES for textures)
EXT_color_buffer_float is for WebGL2, and enables R/RG/RGBA 16F and 32F, as well as R11F_G11F_B10F
(see the WebGL Extension Registry for more info on extensions)
blitFramebuffer also works on WebGL2, though you'll need EXT_color_buffer_float to allow float framebuffers to be complete.
[1]: EXT_color_buffer_half_float and WEBGL_color_buffer_float are not yet offered in Chrome, though enabling OES_texture_[half_]float might be enough. On Chrome, verify on each machine that checkFramebufferStatus returns FRAMEBUFFER_COMPLETE.

Convert pixel size to point size for fonts on multiple platforms

I need a way to translate between point size and pixel size across multiple platforms.
I have a Qt application that must run on multiple platforms, including embedded Linux on a type of tablet.
Users are expected to save files created by the application on a desktop (either Windows or Linux) and open them on the custom device.
The data consists of drawings and text: QGraphicsItems on a QGraphicsScene. Some text items are "rich text", so we can change the font on fragments of the text.
For normal text, including all UI text, we used pixel size instead of point size to achieve a similar look. But rich text defies me: QTextCharFormat doesn't have a pixelSize() option, only setFontPointSize() and fontPointSize(). I can use font().setPixelSize() and then setFont(), but the result is that when saving with the html() method, I lose all font information. (Must be a Qt bug?)
So, what I need is to be able to use pixel size everywhere, and then calculate the point size to set on paragraphs (and go in reverse when reading sizes).
But what is the relation between pixel size and point size? If I determine both, for a given font, on the current platform, can I establish some sort of equation to use?
Edit: I found an interesting post that seems to do what I want, but it is specific to OSX: https://stackoverflow.com/a/25929628/1217150
My target platforms are Windows / Linux / OSX but also, especially, a custom tablet running embedded Linux, and possibly Android devices in the future.
Qt 4.8
Edit: using the conversion in the answer below, the left text uses setPixelSize(20) and the right text uses setPointSize(20 * screenDpi), where
qreal screenDpi = QApplication::desktop()->physicalDpiX() / 72.;
Note that the size is not the same... (running on Windows; I have not yet tested on other platforms)
I even tried
#ifdef Q_OS_WIN32
qreal screenDpi = QApplication::desktop()->physicalDpiX() / 96.;
#else
qreal screenDpi = QApplication::desktop()->physicalDpiX() / 72.;
#endif
Yes, I think it is possible:
double ptToPx(double pt, double dpi) {
    return pt / 72 * dpi;
}
double pxToPt(double px, double dpi) {
    return px * 72 / dpi;
}
...
double dpi = QGuiApplication::primaryScreen()->physicalDotsPerInch();
qDebug() << "12 pt is" << ptToPx(12, dpi) << "px";
qDebug() << "26 px is" << pxToPt(26, dpi) << "pt";
But rich text defies me: QTextCharFormat doesn't have a pixelSize() option, only setFontPointSize() and fontPointSize().
You can set a prepared QFont with the QTextCharFormat::FontPropertiesSpecifiedOnly behavior parameter to apply only the pixel size:
QFont font;
font.setPixelSize(18);
QTextCharFormat fmt;
fmt.setFont(font, QTextCharFormat::FontPropertiesSpecifiedOnly);

Bug in IMFSinkWriter?

I implemented an encoder in two ways:
1) based on the SDK Transcoder example, which uses a topology and a transcoding profile
2) based on IMFSourceReader and IMFSinkWriter, where the source reader delivers the samples to the sink writer for transcoding
I tested both implementations on Windows 8.1 with an Nvidia (Quadro K2200) and an Intel GPU (P4600/P4700).
But bizarrely, only the topology implementation uses the GPU (on both).
In 2) I set MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, which I guess should not even be required, because 1) uses the GPU with and without this flag set for the container type.
Is there a trick to enable the GPU with IMFSinkWriter, or is this a bug in Media Foundation?
I initially ran into the same issue. You don't mention how you configured the output type of the source reader (and the input type of the sink), but I found that if you let the system handle it (by selecting RGB32 as the reader's output type), the performance will be horrible and entirely CPU-bound. (Error checking omitted for brevity.)
hr = videoMediaType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
hr = videoMediaType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32);
hr = reader->SetCurrentMediaType((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, nullptr, videoMediaType);
reader->SetStreamSelection((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, true);
And the documentation agrees, indicating that this configuration is useful for getting a single snapshot from the video. By contrast, if you configure the reader to deliver the native media type, performance is excellent, but you now have to transform the format yourself.
reader->GetNativeMediaType((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, videoMode->GetIndex(), videoMediaType);
From here, if you are dealing with a simple color conversion (like YUY2 or YUV from a webcam), there are a few options. I originally tried writing my own converter and pushing the work off to the GPU using HLSL with DirectCompute. That works very well, but in your case the format isn't as trivial.
Ultimately, creating and configuring an instance of the color converter (as an IMFTransform) works perfectly.
Microsoft::WRL::ComPtr<IMFTransform> mediaTransform;
hr = ::CoCreateInstance(CLSID_CColorConvertDMO, nullptr, CLSCTX_INPROC_SERVER, __uuidof(IMFTransform), reinterpret_cast<void**>(mediaTransform.GetAddressOf()));
// set the input type of the transform to the NATIVE output type of the reader
hr = mediaTransform->SetInputType(0u, videoMediaType.Get(), 0u);
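The transform also needs an output type before it will process samples. A hedged sketch, assuming the sink expects RGB32 (enumerating the converter's own type list rather than building one by hand):
// Sketch, not from the original code: pick the converter's RGB32 output type.
// MFVideoFormat_RGB32 is an assumption; use whatever your sink expects.
Microsoft::WRL::ComPtr<IMFMediaType> outputType;
for (DWORD i = 0u; SUCCEEDED(mediaTransform->GetOutputAvailableType(0u, i, &outputType)); ++i)
{
    GUID subtype = GUID_NULL;
    outputType->GetGUID(MF_MT_SUBTYPE, &subtype);
    if (subtype == MFVideoFormat_RGB32)
    {
        hr = mediaTransform->SetOutputType(0u, outputType.Get(), 0u);
        break;
    }
}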
Create and configure a separate sample and buffer.
IMFSample* transformSample;
hr = ::MFCreateSample(&transformSample);
IMFMediaBuffer* transformBuffer;
hr = ::MFCreateMemoryBuffer(RGB_MFT_OUTPUT_BUFFER_SIZE, &transformBuffer);
hr = transformSample->AddBuffer(transformBuffer);
MFT_OUTPUT_DATA_BUFFER* transformDataBuffer = new MFT_OUTPUT_DATA_BUFFER();
transformDataBuffer->pSample = transformSample;
transformDataBuffer->dwStreamID = 0u;
transformDataBuffer->dwStatus = 0u;
transformDataBuffer->pEvents = nullptr;
When receiving samples from the source, hand them off to the transform to be converted.
hr = mediaTransform->ProcessInput(0u, sample, 0u);
hr = mediaTransform->ProcessOutput(0u, 1u, transformDataBuffer, &outStatus);
hr = transformDataBuffer->pSample->GetBufferByIndex(0, &mediaBuffer);
Then, of course, finally hand off the transformed sample to the sink just as you do today. I am confident that this will work, and you will be a very happy person. I went from 20% CPU utilization (original implementation) down to 2% (while concurrently displaying the video). Good luck. I hope you enjoy your project.

How to exchange the width and height of W35 Touchscreen for Mini2440 FriendlyARM embedded board

I have a FriendlyARM embedded board with a W35 3.5" touchscreen. You can see the board via the following link: http://www.friendlyarm.net/sites/products/mini2440-35s.jpg. I write Qt programs for it using Qt Creator. I have to write a GUI with width x height = 240 x 320, i.e. width = 240 and height = 320. According to what I found in various online documents and pages, the dimensions of the W35 are 320 x 240, meaning width = 320 and height = 240. So when I run the program, there are large margins at the left and right, and part of the GUI is cut off at the top and bottom. How do I exchange the board's width and height?
The closest page I found on the FriendlyARM site is: http://www.friendlyarm.net/forum/topic/2881.
On that page someone mentioned there is a file s3c2410.c in the drivers/video directory, or a file mach-mini2440.c in the arch/arm/mach-s3c2440 directory, and that we should tweak some C #defines, but I have neither one in the board's kernel. What should I do?
1) Reinstall the kernel?
2) Write the program for 320 x 240 instead of 240 x 320?
3) Change the touchscreen to a similar one like the X35 or T35?
FYI, when the board starts up, Qtopia appears with the right dimensions.
TIA,
-- Saeed Amrollahi Boyouki
First of all, cross-compile your Qt with the -qt-gfx-transformed option.
Method 1:
You can rotate your Qt application using
./myApp -qws -display ":Transformed:Rot90:0"
Method 2:
You can set the display dimensions using
export QWS_DISPLAY=Transformed:Rot90:0
and start your application using
./myApp -qws
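If you prefer not to rely on the shell environment, the same transformation can be requested from main() before the QApplication is constructed. A sketch assuming Qt/Embedded 4.x built with -qt-gfx-transformed:
#include <QApplication>

int main(int argc, char *argv[])
{
    // Equivalent to "export QWS_DISPLAY=Transformed:Rot90:0"; must happen
    // before the QWS server starts. Rot90 presents the 320 x 240 panel to
    // the GUI as 240 x 320.
    qputenv("QWS_DISPLAY", "Transformed:Rot90:0");

    // GuiServer replaces the -qws command-line switch.
    QApplication app(argc, argv, QApplication::GuiServer);
    // ... create and show your 240 x 320 widgets ...
    return app.exec();
}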
What you need to look for are instructions to rotate the display from "landscape" mode to "portrait" mode. I am unsure whether there is a hardware option on the ARM processor in the FriendlyARM, but that gives you a place to start searching. I'd also look in the Qtopia forums for a similar switch, e.g. http://qt-project.org/forums/viewthread/8875.
