I want to implement cubemap convolution for IBL using a Qt widget.
When implementing the conversion from an equirectangular map to a cubemap, I ran into an error I do not understand.
Here is how I create my renderbuffer:
QOpenGLFramebufferObjectFormat format;
format.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
format.setInternalTextureFormat(GL_RGBA32F_ARB);
envTarget = new QOpenGLFramebufferObject(QSize(256, 256), format);
Here is how I create my cubemap texture:
envCubemap = new QOpenGLTexture(QOpenGLTexture::TargetCubeMap);
envCubemap->create();
envCubemap->bind();
envCubemap->setSize(256, 256, 4);
envCubemap->setFormat(QOpenGLTexture::RGBAFormat);
envCubemap->allocateStorage(QOpenGLTexture::RGB, QOpenGLTexture::Float32);
envCubemap->setMinMagFilters(QOpenGLTexture::Nearest, QOpenGLTexture::Linear);
I then proceed to render the different cubemap views to the corresponding parts of the texture:
envCubemap->bind(9);
glViewport(0, 0, 256, 256);
envTarget->bind();
for (unsigned int i = 0; i < 6; ++i)
{
    ActiveScene->ActiveCamera->View = captureViews[i];
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 9, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawBackground();
}
envTarget->release();
The drawBackground() method draws an environment sphere, which works fine with my default framebuffer.
The OpenGL error I get is 1282, which turns to 0 if I comment out the glFramebufferTexture2D line. 1282 corresponds to GL_INVALID_OPERATION, which has multiple possible causes according to the glFramebufferTexture2D documentation.
What did I get wrong? I tried varying each parameter in turn to pin down the error but did not come up with a solution. As this should be fairly standard stuff, I hope to find a solution here. :D Help?
You need to tell the framebuffer which texture to render to by passing its actual texture ID, not the literal 9:
glFramebufferTexture2D(
GL_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0,
GL_TEXTURE_CUBE_MAP_POSITIVE_X + i,
envCubemap->textureId(), // <--- The change
0);
The same goes for envCubemap->bind(9); (the 9 there is a texture unit, not a texture ID), and that call can simply be removed.
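For reference, the corrected loop might look like this (a sketch reusing the names from the question: envTarget, envCubemap, captureViews, drawBackground):

glViewport(0, 0, 256, 256);
envTarget->bind();
for (unsigned int i = 0; i < 6; ++i)
{
    ActiveScene->ActiveCamera->View = captureViews[i];
    // Attach face i of the cubemap by its real GL texture ID.
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + i,
                           envCubemap->textureId(), 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawBackground();
}
envTarget->release();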
Related
I am currently working on creating OpenGL buffers in Qt for some 3D models (cuboids of various sizes depicting buildings).
I've tried to look into some examples, but I've only found ones for 2D. I tried using the same approach for 3D, but at best I end up with blank images when I convert my buffers to images to check.
The closest I've come to success (at least an image is created) was using: https://dangelog.wordpress.com/2013/02/10/using-fbos-instead-of-pbuffers-in-qt-5-2/
My current code is something like:
QOpenGLFramebufferObject* SBuildingEditorUtils::getFboForBuilding(Building bd)
{
    glPushMatrix();

    QSurfaceFormat format;
    format.setMajorVersion(4);
    format.setMinorVersion(3);

    QWindow window;
    window.setSurfaceType(QWindow::OpenGLSurface);
    window.setFormat(format);
    window.create();

    QOpenGLContext context;
    context.setFormat(format);
    if (!context.create())
        qFatal("Cannot create the requested OpenGL context!");
    context.makeCurrent(&window);

    // TODO: fbo size = building size
    QOpenGLFramebufferObjectFormat fboFormat;
    fboFormat.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
    //fboFormat.setAttachment(QOpenGLFramebufferObject::Depth);
    auto fbo = new QOpenGLFramebufferObject(1500, 1500, fboFormat);
    auto res = glGetError();
    auto bindRet = fbo->bind();

    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0, 1500, 0, 1500, 0, 1500);
    glMatrixMode(GL_MODELVIEW);

    glBegin(GL_POLYGON);
    glVertex3d(500, 500, 500);
    glVertex3d(500, 1000, 1000);
    glVertex3d(1000, 1000, 1000);
    glVertex3d(1000, 500, 500);
    glEnd();

    glDisable(GL_DEPTH_TEST);
    fbo->toImage().save("uniqueName.png");
    fbo->release();
    glPopMatrix();
    return fbo;
}
Here I've been using the image "uniqueName.png" to test my output. As you can guess, most of the code here is just for testing. Also, this code is part of a larger code base.
Any advice on what I might be missing? While I have some experience with Qt, I lack any formal education or training in OpenGL. Any help would be appreciated.
I wish to know how to get the code to work.
Thanks.
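For comparison, below is a minimal offscreen-rendering sketch in the spirit of the linked blog post. It is not a verified fix for the code above, but it avoids two likely pitfalls: it requests a 2.1 context so the fixed-function calls (glMatrixMode, glBegin, ...) remain available (a 4.3 core profile removes them), and it uses QOffscreenSurface instead of a QWindow. It assumes a QGuiApplication already exists, as it would in the larger code base:

// Sketch: render offscreen into an FBO and return the result as a QImage.
QImage renderOffscreen()
{
    QSurfaceFormat format;
    format.setMajorVersion(2);
    format.setMinorVersion(1);

    QOpenGLContext context;
    context.setFormat(format);
    if (!context.create())
        qFatal("Cannot create the requested OpenGL context!");

    QOffscreenSurface surface; // no visible window needed
    surface.setFormat(context.format());
    surface.create();
    context.makeCurrent(&surface);

    QOpenGLFramebufferObjectFormat fboFormat;
    fboFormat.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
    QOpenGLFramebufferObject fbo(1500, 1500, fboFormat);
    fbo.bind();

    glViewport(0, 0, 1500, 1500);
    // ... set up projection/modelview and issue the draw calls here ...

    QImage result = fbo.toImage(); // read the FBO contents back
    fbo.release();
    context.doneCurrent();
    return result;
}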
I am using Qt's QOpenGLWidget, and I want to unproject my mouse click position back into 3D, so I used glReadPixels. (I also read the source code of Pangolin, a very good rotation/translation/zoom example; it uses glReadPixels as well.)
Here's part of my simple code:
void myGLWidget::initializeGL()
{
    glClearColor(0.2, 0.2, 0.2, 1.0); // background color
    glClearDepthf(1.0);               // depth value used when clearing
    glEnable(GL_DEPTH_TEST);          // enable depth test
}

void myGLWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear color and depth buffers
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(cameraView_.data()); // cameraView_ is a QMatrix4x4
    drawingTeapot();

    // reading pixels in paintGL works well!!! returns lots of 1s
    GLfloat zs[10 * 10];
    glReadPixels(0, 0, 10, 10, GL_DEPTH_COMPONENT, GL_FLOAT, &zs);
}

void myGLWidget::mousePressEvent(QMouseEvent *event)
{
    // glReadBuffer(GL_FRONT); // also tried this, nothing works
    GLfloat zs[10 * 10];
    glReadPixels(0, 0, 10, 10, GL_DEPTH_COMPONENT, GL_FLOAT, &zs);
    GLenum e = glGetError(); // this gives 1282 err code!!!
}
I'm using macOS Sierra. Pangolin works perfectly on my laptop; however, my Qt project does not!
By "not working" I mean that the output variable zs keeps random values like 0 and 1.23123e-315, and it never changes between before and after the glReadPixels call.
Why does glReadPixels only work in the paintGL() function?
I also tried a Python version; it gives me an error that says:
File "errorchecker.pyx", line 53, in OpenGL_accelerate.errorchecker._ErrorChecker.glCheckError (src/errorchecker.c:1218)
OpenGL.error.GLError: GLError(
err = 1282,
description = b'invalid operation',
baseOperation = glReadPixels,
which might be this case from the documentation:
GL_INVALID_OPERATION is generated if format is GL_DEPTH_COMPONENT and there is no depth buffer.
But I still don't know what to do.
OpenGL calls are valid only while an OpenGL context is current. That is the case inside the paintGL() method, because the framework makes the widget's context current for you before calling it. You cannot assume the context is current in other methods, such as event handlers like mousePressEvent(); those methods may run without the context being current, or even on a different thread.
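A minimal sketch of the fix, using the widget class from the question: QOpenGLWidget provides makeCurrent() and doneCurrent() for exactly this situation (with QOpenGLWidget, makeCurrent() also binds the widget's backing framebuffer):

void myGLWidget::mousePressEvent(QMouseEvent *event)
{
    makeCurrent(); // make this widget's context (and backing FBO) current

    GLfloat zs[10 * 10];
    glReadPixels(0, 0, 10, 10, GL_DEPTH_COMPONENT, GL_FLOAT, zs);
    GLenum e = glGetError(); // should now be GL_NO_ERROR

    doneCurrent(); // release the context when finished
}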
I am trying to create a kind of metaball: nice curves between two circles.
Something like the image below; the lines are drawn straight there, but they could also be more curved. I need them as a vector in Processing. Can anyone help me?
Thanks in advance!
Example in paper.js:
http://paperjs.org/examples/meta-balls/
image:
http://www.smeulders.biz/tmp/metaballs.png
void setup() {
  size(500, 500);
  ellipse(100, 250, 100, 100);
  ellipse(350, 250, 200, 200);
}

void draw() {}
With a bit of math (to work out the distances between circles) and a bit of pixel manipulation to set pixel colours based on those distances, you can render 2D metaballs, and there are plenty of examples around.
For fun, however, I decided to take a stab at a very hacky version of the example you shared by simply rendering ellipses into an image and then filtering the image at the end:
PGraphics pg; // a separate layer to render into
int dilateAmt = 3;
PImage grid;  // pixels of the grid alone, minus the 'cursor'

void setup() {
  size(400, 400);
  // create a new layer
  pg = createGraphics(width, height);
  pg.beginDraw();
  // draw a di-grid inside
  pg.background(255);
  pg.noStroke();
  pg.fill(0);
  for (int y = 0; y < 5; y++)
    for (int x = 0; x < 5; x++)
      pg.ellipse((y % 2 == 0 ? 40 : 0) + (x * 80), 40 + (y * 80), 40, 40);
  pg.endDraw();
  // grab a snapshot for later re-use
  grid = pg.get();
}

void draw() {
  pg.beginDraw();
  // draw the cached grid (no need to loop and re-render circles)
  pg.image(grid, 0, 0);
  // and the cursor into the layer
  pg.ellipse(mouseX, mouseY, 60, 60);
  pg.endDraw();
  // since PGraphics extends PImage, you can filter, so we dilate
  for (int i = 0; i < dilateAmt; i++) pg.filter(DILATE);
  // finally render the result
  image(pg, 0, 0);
}

void keyPressed() {
  if (keyCode == UP) dilateAmt++;
  if (keyCode == DOWN) dilateAmt--;
  if (dilateAmt < 1) dilateAmt = 1;
  println(dilateAmt);
}
Note that the end result is raster, not vector.
If you want to achieve the exact effect, you will need to port the example from JavaScript to Java; the source code is available.
If you like Processing, you could also write the above example in plain JavaScript using p5.js. You'll find most of the familiar Processing functions there, and you can also use the paper.js library directly.
After some trouble I've managed to correctly render to texture inside a framebuffer object in a Qt 4.8 application: I can open an OpenGL context with a QGLWidget, render to an FBO, and use that FBO as a texture.
Now I need to display the rendered texture in a QPixmap and show it in some other widget in the GUI. But... nothing is shown.
Those are some pieces of code:
// generate texture, FBO, RBO in the initializeGL
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glGenRenderbuffers(1, &rboId);
glBindRenderbuffer(GL_RENDERBUFFER, rboId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, TEXTURE_WIDTH, TEXTURE_HEIGHT);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
// now in paintGL
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
// .... render into texture code ....
if (showTextureInWidget == false) {
    showTextureInWidget = true;

    char *pixels = new char[TEXTURE_WIDTH * TEXTURE_HEIGHT * 4];
    glReadPixels(0, 0, TEXTURE_WIDTH, TEXTURE_HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels);

    QPixmap qp = QPixmap(pixels);
    QLabel *l = new QLabel();
    // /* TEST */ l->setText(QString::fromStdString("dudee"));
    l->setPixmap(qp);

    QWidget *d = new QWidget;
    l->setParent(d);
    d->show();
}
glBindFramebuffer(GL_FRAMEBUFFER, 0); // unbind
// now draw the scene with the rendered texture
I see the widget open, but there is nothing inside it. If I uncomment the test line, I see the "dudee" string, so I know the QLabel is there, but there is no image from the QPixmap.
I know that the original data is unsigned char while I'm using char, and I've tried some different color parameters (GL_RGBA, GL_RGB, etc.), but I don't think that's the point. The point is that I don't see anything.
Any advice? If I have to post more code I will do it!
Edit:
I haven't posted all the code, but the point I'd like to make clear is that the texture is correctly rendered as a texture inside a cube. I'm just not able to get it back from the GPU to the CPU.
Edit 2:
Thanks to peppe's answer I found the problem: I needed a Qt class whose constructor accepts raw pixel data. Here is the complete snippet:
uchar *pixels = new uchar[TEXTURE_WIDTH * TEXTURE_HEIGHT * 4];
for (int i = 0; i < (TEXTURE_WIDTH * TEXTURE_HEIGHT * 4); i++) {
    pixels[i] = 0;
}

glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glReadPixels(0, 0, TEXTURE_WIDTH, TEXTURE_HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

qi = QImage(pixels, TEXTURE_WIDTH, TEXTURE_HEIGHT, QImage::Format_ARGB32);
qi = qi.rgbSwapped();

QLabel *l = new QLabel();
l->setPixmap(QPixmap::fromImage(qi));
QWidget *d = new QWidget;
l->setParent(d);
d->show();
Given that this isn't all of your code and, as you say, the texture is correctly filled, there's a small mistake going on here:
glReadPixels(0, 0, TEXTURE_WIDTH, TEXTURE_HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels);
QPixmap qp = QPixmap(pixels);
The QPixmap(const char *) ctor wants an XPM image, not raw pixels. You need to use one of the QImage constructors to create a valid QImage. (You can also pass ownership of the data to the QImage, solving the fact that you're currently leaking pixels...)
Once you do that, you'll figure out that:
the image is flipped vertically, as OpenGL has its origin in the bottom-left corner, growing upwards/rightwards, while Qt assumes an origin in the top-left corner, growing downwards/rightwards;
the channels might be swapped, i.e. OpenGL returns data with the "wrong" endianness. I don't remember whether using glPixelStorei(GL_PACK_SWAP_BYTES) or GL_UNSIGNED_INT_8_8_8_8 as the type helps in this case; you may eventually need to resort to a CPU-side loop to fix up your pixel data :)
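Putting the two points together, a minimal readback sketch (assuming the TEXTURE_WIDTH / TEXTURE_HEIGHT constants from the question and Qt 4.8-era API) might look like:

// Read RGBA pixels back from the bound FBO and wrap them in a QImage.
uchar *pixels = new uchar[TEXTURE_WIDTH * TEXTURE_HEIGHT * 4];
glReadPixels(0, 0, TEXTURE_WIDTH, TEXTURE_HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

QImage img(pixels, TEXTURE_WIDTH, TEXTURE_HEIGHT, QImage::Format_ARGB32);
img = img.rgbSwapped(); // fix the channel order (RGBA bytes vs. ARGB32 layout)
img = img.mirrored();   // fix the vertical flip (GL origin bottom-left, Qt top-left)

QPixmap pm = QPixmap::fromImage(img);
// rgbSwapped()/mirrored() return deep copies, so the raw buffer can go now:
delete[] pixels;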
I need to extract frames from a video in my Qt-based application. Using the ffmpeg libraries I am able to fetch frames as AVFrames, which I need to convert to QImage to use in other parts of my application. This conversion needs to be efficient. So far it seems sws_scale() is the right function to use, but I am not sure which source and destination pixel formats should be specified.
I came up with the following two-step process: first convert a decoded AVFrame to another AVFrame in an RGB colorspace, then convert that to a QImage. It works and is reasonably fast.
src_frame = get_decoded_frame();

AVFrame *pFrameRGB = avcodec_alloc_frame(); // intermediate pframe
if (pFrameRGB == NULL) {
    ; // Handle error
}

int numBytes = avpicture_get_size(PIX_FMT_RGB24,
                                  is->video_st->codec->width, is->video_st->codec->height);
uint8_t *buffer = (uint8_t *)malloc(numBytes);
avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
               is->video_st->codec->width, is->video_st->codec->height);

int dst_fmt = PIX_FMT_RGB24;
int dst_w = is->video_st->codec->width;
int dst_h = is->video_st->codec->height;

// TODO: cache following conversion context for speedup,
// and recalculate only on dimension changes
SwsContext *img_convert_ctx_temp;
img_convert_ctx_temp = sws_getContext(
    is->video_st->codec->width, is->video_st->codec->height,
    is->video_st->codec->pix_fmt,
    dst_w, dst_h, (PixelFormat)dst_fmt,
    SWS_BICUBIC, NULL, NULL, NULL);

QImage *myImage = new QImage(dst_w, dst_h, QImage::Format_RGB32);

sws_scale(img_convert_ctx_temp,
          src_frame->data, src_frame->linesize, 0, is->video_st->codec->height,
          pFrameRGB->data, pFrameRGB->linesize);

uint8_t *src = (uint8_t *)(pFrameRGB->data[0]);
for (int y = 0; y < dst_h; y++)
{
    QRgb *scanLine = (QRgb *)myImage->scanLine(y);
    for (int x = 0; x < dst_w; x++)
    {
        scanLine[x] = qRgb(src[3 * x], src[3 * x + 1], src[3 * x + 2]);
    }
    src += pFrameRGB->linesize[0];
}
If you find a more efficient approach, let me know in the comments.
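As a side note on the caching TODO above: libswscale provides sws_getCachedContext(), which reuses an existing context when the parameters haven't changed. A sketch, assuming the same is->video_st fields as in the snippet:

// Reuse the conversion context across frames; sws_getCachedContext() returns
// the old context unchanged if all parameters still match, and otherwise
// frees it and allocates a new one.
static SwsContext *img_convert_ctx = NULL;
img_convert_ctx = sws_getCachedContext(img_convert_ctx,
    is->video_st->codec->width, is->video_st->codec->height,
    is->video_st->codec->pix_fmt,
    dst_w, dst_h, PIX_FMT_RGB24,
    SWS_BICUBIC, NULL, NULL, NULL);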
I know it's too late, but maybe someone will find this useful. From here I got the clue for doing the same conversion, which looks a bit shorter.
So I created a QImage which is reused for every decoded frame:
QImage img( width, height, QImage::Format_RGB888 );
Created frameRGB:
frameRGB = av_frame_alloc();
// Allocate memory for the pixels of a picture and set up the AVPicture fields for it.
avpicture_alloc((AVPicture *)frameRGB, AV_PIX_FMT_RGB24, width, height);
After the first frame is decoded, I create the conversion context SwsContext this way (it will be used for all subsequent frames):
mImgConvertCtx = sws_getContext( codecContext->width, codecContext->height, codecContext->pix_fmt, width, height, AV_PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL);
And finally, for every decoded frame, the conversion is performed:
if (1 == framesFinished && nullptr != mImgConvertCtx)
{
    // convert frame to frameRGB
    sws_scale(mImgConvertCtx, frame->data, frame->linesize, 0,
              codecContext->height, frameRGB->data, frameRGB->linesize);

    // fill the QImage from frameRGB
    for (int y = 0; y < height; ++y)
        memcpy(img.scanLine(y), frameRGB->data[0] + y * frameRGB->linesize[0], width * 3);
}
See the link for the specifics.
A simpler approach, I think:
void takeSnapshot(AVCodecContext *dec_ctx, AVFrame *frame)
{
    SwsContext *img_convert_ctx = sws_getContext(dec_ctx->width,
                                                 dec_ctx->height,
                                                 dec_ctx->pix_fmt,
                                                 dec_ctx->width,
                                                 dec_ctx->height,
                                                 AV_PIX_FMT_RGB24,
                                                 SWS_BICUBIC, NULL, NULL, NULL);

    AVFrame *frameRGB = av_frame_alloc();
    avpicture_alloc((AVPicture *)frameRGB,
                    AV_PIX_FMT_RGB24,
                    dec_ctx->width,
                    dec_ctx->height);

    sws_scale(img_convert_ctx,
              frame->data,
              frame->linesize, 0,
              dec_ctx->height,
              frameRGB->data,
              frameRGB->linesize);

    QImage image(frameRGB->data[0],
                 dec_ctx->width,
                 dec_ctx->height,
                 frameRGB->linesize[0],
                 QImage::Format_RGB888);
    image.save("capture.png");
}
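Note that avpicture_alloc() and the AVPicture cast are deprecated in newer FFmpeg versions; a sketch of the same allocation with the non-deprecated image API (from libavutil/imgutils.h) might be:

// Allocate the RGB destination with av_image_alloc() instead of the
// deprecated avpicture_alloc()/AVPicture cast.
AVFrame *frameRGB = av_frame_alloc();
if (av_image_alloc(frameRGB->data, frameRGB->linesize,
                   dec_ctx->width, dec_ctx->height,
                   AV_PIX_FMT_RGB24, 1 /* byte alignment */) < 0) {
    // handle allocation failure
}
// ... sws_scale() and the QImage construction as above ...
av_freep(&frameRGB->data[0]); // free the image buffer when done
av_frame_free(&frameRGB);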
Today I tested passing image->bits() directly to sws_scale(), and it finally works, so there is no need to copy to an intermediate buffer. For example:
/* 1. Get frame and QImage to show */
struct my_frame *frame = get_frame(source);
QImage *myImage = new QImage(dst_w, dst_h, QImage::Format_RGBA8888);

/* 2. Convert and write into image buffer */
uint8_t *dst[] = { myImage->bits() };
int linesizes[4];
av_image_fill_linesizes(linesizes, AV_PIX_FMT_RGBA, frame->width);
sws_scale(myswscontext, frame->data, (const int *)frame->linesize,
          0, frame->height, dst, linesizes);
I just discovered that scanLine() is just seeking through the buffer: all you need is to use AV_PIX_FMT_RGB32 for the AVFrame and QImage::Format_RGB32 for the QImage.
Then, after decoding, just do a memcpy:
memcpy(img.scanLine(0), pFrameRGB->data[0], pFrameRGB->linesize[0] * pFrameRGB->height);
I had problems with the other proposed solutions:
They did not mention freeing the AVFrame, the SwsContext, or the allocated buffers, which caused massive memory leaks (I had thousands of frames to handle). These problems couldn't all be solved easily, because QImage relies on the underlying data and does not copy it: freeing the buffer directly leaves the QImage pointing at freed data and breaks it. This could be solved by using QImage's cleanupFunction to free the buffer once the image is no longer needed, but given the other problems it wasn't a good option anyway.
In some cases the suggestion of passing QImage.bits() directly to sws_scale() would not work, because QImage scanlines are at minimum 32-bit aligned. For certain dimensions this does not match the line width expected by sws_scale, and each output line ends up shifted a little.
A third problem is that they used deprecated AVPicture APIs.
I described the problem in another question, Converting an AVFrame to QImage with conversion of pixel format, and in the end found a solution using a temporary buffer, which could be copied into the QImage and then safely freed.
See my answer there for a fully working and efficient implementation with no deprecated function calls: https://stackoverflow.com/a/68212609/7360943
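For completeness, here is a minimal sketch of the temporary-buffer pattern described above (the function name and the RGB24 choice are illustrative, not taken verbatim from the linked answer):

// Scale into a tightly packed temporary buffer, deep-copy it into a QImage,
// then free the buffer; the QImage owns its own memory afterwards.
// Requires libavutil/imgutils.h and libswscale/swscale.h.
QImage avFrameToQImage(SwsContext *ctx, AVFrame *frame, int w, int h)
{
    uint8_t *dstData[4];
    int dstLinesize[4];
    if (av_image_alloc(dstData, dstLinesize, w, h, AV_PIX_FMT_RGB24, 1) < 0)
        return QImage();

    sws_scale(ctx, frame->data, frame->linesize, 0, frame->height,
              dstData, dstLinesize);

    // Wrap the buffer, then copy() to detach from it before freeing.
    QImage img = QImage(dstData[0], w, h, dstLinesize[0],
                        QImage::Format_RGB888).copy();

    av_freep(&dstData[0]);
    return img;
}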