I've been trying to use OpenGL in Qt with shaders and a simple vertex array. I basically want a plane to be drawn in the middle of the screen, but nothing appears when I run the program. I'm basing my code on the "Textures" example of Qt; everything looks the same to me, but it's not working!
Here's the code of my glwidget.cpp:
#include "glwidget.h"
GLWidget::GLWidget(QWidget *parent):QGLWidget(parent)
{
timer.start(10);
//connect(&timer, SIGNAL(timeout()), this, SLOT(updateGL()));
Object aux;
QVector3D auxV;
auxV.setX(0.4); auxV.setY(0.4); auxV.setZ(1.0);
aux.vertices.append(auxV);
auxV.setX(0.4); auxV.setY(-0.4); auxV.setZ(1.0);
aux.vertices.append(auxV);
auxV.setX(-0.4); auxV.setY(-0.4); auxV.setZ(1.0);
aux.vertices.append(auxV);
auxV.setX(-0.4); auxV.setY(-0.4); auxV.setZ(1.0);
aux.vertices.append(auxV);
Objects.append(aux);
}
GLWidget::~GLWidget()
{
}
void GLWidget::initializeGL()
{
#define PROGRAM_VERTEX_ATTRIBUTE 0
printf("Objects Size: %d\nObj1 Size: %d\n", Objects.size(), Objects.at(0).vertices.size());
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
//printf("Version: %s\n", glGetString(GL_VERSION));
vShader= new QGLShader (QGLShader::Vertex, this);
vShader->compileSourceFile("../src/shaders/editorVshader.glsl");
fShader= new QGLShader (QGLShader::Fragment, this);
fShader->compileSourceFile("../src/shaders/editorFshader.glsl");
editor= new QGLShaderProgram (this);
editor->addShader(vShader);
editor->addShader(fShader);
editor->bindAttributeLocation("vertices", PROGRAM_VERTEX_ATTRIBUTE);
editor->link();
editor->bind();
}
void GLWidget::paintGL()
{
glClearColor(0.4765625, 0.54296875, 0.6171875, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -10.0f);
glVertexPointer(3, GL_FLOAT, 0, Objects.at(0).vertices.constData());
glEnableClientState(GL_VERTEX_ARRAY);
QMatrix4x4 auxM;
auxM.ortho(-0.5, 0.5, 0.5, -0.5, 4.0, -15.0);
auxM.translate(0.0f, 0.0f, -10.0f);
editor->setUniformValue("modelmatrix", auxM);
editor->enableAttributeArray(PROGRAM_VERTEX_ATTRIBUTE);
//editor->enableAttributeArray(editor->attributeLocation("vertices"));
//editor->setAttributeArray(editor->attributeLocation("vertices"), Objects.at(0).vertices.constData(), 3);
editor->setAttributeArray (PROGRAM_VERTEX_ATTRIBUTE, Objects.at(0).vertices.constData());
glDrawArrays(GL_QUADS, 0, 4);
}
void GLWidget::resizeGL(int w, int h)
{
int side = qMin(w, h);
glViewport((w - side) / 2, (h - side) / 2, side, side);
//glViewport(0, 0, w, h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-0.5, 0.5, -0.5, 0.5, 4.0, -15.0);
glMatrixMode(GL_MODELVIEW);
updateGL();
}
And here is my vShader:
attribute highp vec4 vertices;
attribute highp mat4x4 modelmatrix;
void main (void)
{
gl_Position= modelmatrix*vertices;
}
And my fShader:
void main(void)
{
gl_FragColor= vec4(0.0, 0.1, 1.0, 1.0);
}
Do you see the error in there?
You are mixing OpenGL ES 1.1 (e.g. calls to glOrtho, glTranslatef) and 2.0 (using shaders). Are you mixing the textures and overpainting examples? You should instead take just one example that uses OpenGL / ES 1.1 or 2.0, like http://qt-project.org/doc/qt-5.0/qtopengl/textures.html, then make your changes and see how the code works.
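If you go the 2.0 route, the fixed-function glOrtho/glTranslatef calls are replaced by matrices you build yourself and upload as a shader uniform (e.g. via QGLShaderProgram::setUniformValue). The following standalone sketch of that math uses plain C++ with no GL calls; the column-major layout matches what OpenGL and QMatrix4x4 expect, and the concrete near/far values are illustrative assumptions, not taken from the question:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat4 = std::array<float, 16>; // column-major, as OpenGL expects
using Vec4 = std::array<float, 4>;

// Orthographic projection equivalent to glOrtho(l, r, b, t, n, f).
Mat4 ortho(float l, float r, float b, float t, float n, float f) {
    Mat4 m{};
    m[0]  = 2.0f / (r - l);
    m[5]  = 2.0f / (t - b);
    m[10] = -2.0f / (f - n);
    m[12] = -(r + l) / (r - l);
    m[13] = -(t + b) / (t - b);
    m[14] = -(f + n) / (f - n);
    m[15] = 1.0f;
    return m;
}

// Translation equivalent to glTranslatef(x, y, z).
Mat4 translate(float x, float y, float z) {
    Mat4 m{};
    m[0] = m[5] = m[10] = m[15] = 1.0f;
    m[12] = x; m[13] = y; m[14] = z;
    return m;
}

// Column-major matrix product a * b.
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int c = 0; c < 4; ++c)
        for (int i = 0; i < 4; ++i)
            for (int k = 0; k < 4; ++k)
                r[c * 4 + i] += a[k * 4 + i] * b[c * 4 + k];
    return r;
}

// Matrix-vector product m * v.
Vec4 apply(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int k = 0; k < 4; ++k)
            r[i] += m[k * 4 + i] * v[k];
    return r;
}
```

The combined matrix would then be passed to the vertex shader as a single uniform instead of relying on the fixed-function matrix stack.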
I found what the problem was.
You're right, prabindh, I was mixing OpenGL versions, and the problem was related to that. After cleaning the OpenGL ES instructions out of the code, I also added the model, view and projection matrices in order to pass them to the shaders. It worked! Actually, I think the plane was there all the time, but I just couldn't see it because "the camera was aiming somewhere else".
But I wasn't mixing the examples; I based all my code on the textures example. I still can't understand why its code works and mine didn't. But anyway, everything went fine after that.
Thanks for your answers and your comments!
Related
I was using QOpenGLWidget to render a textured triangle. The code looked fine, but the triangle always rendered black. I had this problem for two days until I accidentally found out what the title says.
This is the code. The texture gets loaded into the default location, GL_TEXTURE0, and the code will not work unless I call glActiveTexture(GL_TEXTURE1) at the end. GL_TEXTURE1 is just an example; it can be any other texture slot except the one where the texture actually is. Without that call the object will be black.
QImage ready;
QImage image("C:/Users/Gamer/Desktop/New folder/ring.jpg");
ready = image.convertToFormat(QImage::Format_RGBA8888);
glGenTextures(1, &texture);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(glGetUniformLocation(program.programId(), "samp"), 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, ready.width(), ready.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, ready.constBits());
glGenerateMipmap(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE1);
I've tried some tests, creating multiple textures and displaying them all at once; the last active texture was always black unless I activated some other unoccupied slot.
I don't know what to make of this. I'm a beginner in OpenGL and Qt, but this doesn't sound right.
EDIT:
Main function
#include "mainwindow.h"
#include <QApplication>
#include <QSurfaceFormat>
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
QSurfaceFormat format;
format.setVersion(3, 3);
format.setProfile(QSurfaceFormat::CoreProfile);
format.setDepthBufferSize(24);
format.setStencilBufferSize(8);
format.setSamples(4);
format.setSwapInterval(0);
QSurfaceFormat::setDefaultFormat(format);
MainWindow w;
w.show();
return a.exec();
}
Widget code
#include "openglwidget.h"
#include <QOpenGLShaderProgram>
#include <QImage>
#include <QDebug>
OpenGLWidget::OpenGLWidget(QWidget *parent) :
QOpenGLWidget(parent)
{
}
OpenGLWidget::~OpenGLWidget()
{
glDeleteBuffers(1, &vbo);
glDeleteVertexArrays(1, &vao);
glDeleteTextures(1, &texture);
}
void OpenGLWidget::initializeGL()
{
QOpenGLFunctions_3_3_Core::initializeOpenGLFunctions();
GLfloat vertices[] = {
0.0f, 0.75f, 0.0f,
-0.75f, -0.75f, 0.0f,
0.75f, -0.75f, 0.0f,
0.5f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f
};
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
program.addShaderFromSourceFile(QOpenGLShader::Vertex, "C:/Users/Gamer/Desktop/New folder/vertex.vert");
program.addShaderFromSourceFile(QOpenGLShader::Fragment, "C:/Users/Gamer/Desktop/New folder/fragment.frag");
program.link();
program.bind();
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)36);
glEnableVertexAttribArray(1);
QImage ready;
QImage image("C:/Users/Gamer/Desktop/New folder/ring.jpg");
ready = image.convertToFormat(QImage::Format_RGBA8888);
glGenTextures(1, &texture);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(glGetUniformLocation(program.programId(), "samp"), 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, ready.width(), ready.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, ready.constBits());
glGenerateMipmap(GL_TEXTURE_2D);
// glActiveTexture(GL_TEXTURE1);
}
void OpenGLWidget::paintGL()
{
GLfloat yellow[] = {1.0, 1.0, 0.0, 0.0};
glClearBufferfv(GL_COLOR, 0, yellow);
glDrawArrays(GL_TRIANGLES, 0, 3);
}
void OpenGLWidget::resizeGL(int w, int h)
{
glViewport(0, 0, w, h);
}
And shaders
#version 330 core
layout(location = 0) in vec3 pos;
layout(location = 1) in vec2 coord;
out vec2 tc;
void main(void)
{
tc = coord;
gl_Position = vec4(pos, 1.0);
}
#version 330 core
uniform sampler2D samp;
in vec2 tc;
out vec4 color;
void main(void)
{
color = texture(samp, tc);
}
QOpenGLWidget is a rather complex abstraction with some side effects you might not expect. Quoting from the Qt 5 docs:
All rendering happens into an OpenGL framebuffer object. makeCurrent() ensures that it is bound in the context. Keep this in mind when creating and binding additional framebuffer objects in the rendering code in paintGL(). Never re-bind the framebuffer with ID 0. Instead, call defaultFramebufferObject() to get the ID that should be bound.
Now, this in itself isn't an issue. However, looking at the description for the initializeGL() method (my emphasis):
There is no need to call makeCurrent() because this has already been done when this function is called. Note however that the framebuffer is not yet available at this stage, so avoid issuing draw calls from here. Defer such calls to paintGL() instead.
Now, this in itself is still not the issue. But it means that Qt will create the FBO between initializeGL and the first paintGL. Since Qt creates a texture as the color buffer for the FBO, it will re-use the currently active texture unit and change the texture binding you established in initializeGL.
If you, on the other hand, set glActiveTexture to something other than unit 0, Qt will screw up the binding of that unit instead; but since you only use unit 0, that has no negative effect in your example.
You need to bind the texture to the texture unit before drawing. Texture unit state is not part of program state, unlike uniforms. It is unusual to try to set texture unit state during program startup; that would require allocating different texture units to each program (not out of the question, it's just not the way things are normally done).
Add the following line to paintGL, before the draw call:
glBindTexture(GL_TEXTURE_2D, texture);
I am pretty new to Qt, so sorry if this is a straightforward question.
I am using Qt 5.5 and trying to visualize a point cloud in QOpenGLWidget.
This is my header:
class PointCloudWindow : public QOpenGLWidget
{
public:
void setDepthMap(DepthMapGrabber* grabber);
protected:
void initializeGL();
void paintGL();
private:
QMatrix4x4 m_projection;
DepthMapGrabber* m_grabber;
};
and here is the corresponding cpp:
void PointCloudWindow::setDepthMap(DepthMapGrabber* grabber) {
m_grabber = grabber;
QTimer* updatePointCloud = new QTimer(this);
connect(updatePointCloud, SIGNAL(timeout()), SLOT(update()));
updatePointCloud->start();
}
void PointCloudWindow::initializeGL() {
glewInit(); // TODO: check for return value if error occured
glEnable(GL_DEPTH_TEST);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}
void PointCloudWindow::paintGL() {
m_grabber->getNextDepthFrame(); // TODO: check for return value if error occured
m_projection.setToIdentity();
m_projection.perspective(45.0f, width() / (float)height(), 0.01f, 100.0f);
if (m_grabber->getDepthMap()->cloud) {
glBegin(GL_POINTS);
glColor3f(0.8f, 0.8f, 0.8f);
for (UINT i = 0; i < m_grabber->getDepthMap()->size; ++i)
{
glVertex3f(m_grabber->getDepthMap()->cloud[i].X, m_grabber->getDepthMap()->cloud[i].Y, m_grabber->getDepthMap()->cloud[i].Z);
}
glEnd();
}
}
This is what my point cloud looks like after visualization:
My problem is that, as you can see (the monitor is cut in half, for example), if a point has a z value bigger than 1.0, it gets clipped off. I tried to set the near and far planes, but it had no effect. I searched through Google and tried several things, but was unable to figure out how this works in Qt. I managed to visualize this point cloud with OpenGL and GLUT before. Any help or explanation of how to do this in Qt would be much appreciated!
m_projection is just a member variable in your class. It's not going to automatically "jump" into the OpenGL context; you have to load it into OpenGL explicitly. Normally you'd load a matrix like that into a uniform for use in a shader. But since you're not using shaders (booo! ;-) ) and use the old, ugly and slow immediate mode (don't do that), you'll have to load it into the fixed-function projection matrix.
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(m_projection.constData());
(QMatrix4x4::constData() returns a const float*, hence the f variant.)
I'm trying to render a QImage using OpenGL wrapper classes of Qt5 and shader programs. I have the following shaders and a 3.3 core context. I'm also using a VAO for the attributes. However, I keep getting a blank red frame (red is the background clear color that I set). I'm not sure if it is a problem with the MVP matrices or something else. Using a fragment shader which sets the output color to a certain fixed color (black) still resulted in a red frame. I'm totally lost here.
EDIT-1: I also noticed that attempting to get the location of texRGB uniform from the QOpenGLShaderProgram results in -1. But I'm not sure if that has anything to do with the problem I'm having. Uniforms defined in the vertex shader for the MVP matrices have the locations 0 and 1.
Vertex Shader
#version 330
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec2 inTexCoord;
out vec2 vTexCoord;
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
void main(void)
{
gl_Position = projectionMatrix * modelViewMatrix * vec4(inPosition, 1.0);
// pass the input texture coordinates to fragment shader
vTexCoord = inTexCoord;
}
Fragment Shader
#version 330
uniform sampler2DRect texRGB;
in vec2 vTexCoord;
out vec4 fColor;
void main(void)
{
vec3 rgb = texture2DRect(texRGB, vTexCoord.st).rgb;
fColor = vec4(rgb, 0.0);
}
OGLWindow.h
#include <QOpenGLWindow>
#include <QOpenGLFunctions>
#include <QOpenGLBuffer>
#include <QOpenGLShaderProgram>
#include <QOpenGLVertexArrayObject>
#include <QOpenGLTexture>
#include <QDebug>
#include <QString>
class OGLWindow : public QOpenGLWindow, protected QOpenGLFunctions
{
public:
OGLWindow();
~OGLWindow();
// OpenGL Events
void initializeGL();
void resizeGL(int width, int height);
void paintGL();
// a method for cleanup
void teardownGL();
private:
bool isInitialized;
// OpenGL state information
QOpenGLBuffer m_vbo_position;
QOpenGLBuffer m_vbo_index;
QOpenGLBuffer m_vbo_tex_coord;
QOpenGLVertexArrayObject m_object;
QOpenGLShaderProgram* m_program;
QImage m_image;
QOpenGLTexture* m_texture;
QMatrix4x4 m_projection_matrix;
QMatrix4x4 m_model_view_matrix;
};
OGLWindow.cpp
#include "OGLWindow.h"
// vertex data
static const QVector3D vertextData[] = {
QVector3D(-1.0f, -1.0f, 0.0f),
QVector3D( 1.0f, -1.0f, 0.0f),
QVector3D( 1.0f, 1.0f, 0.0f),
QVector3D(-1.0f, 1.0f, 0.0f)
};
// indices
static const GLushort indices[] = {
0, 1, 2,
0, 2, 3
};
OGLWindow::OGLWindow() :
m_vbo_position (QOpenGLBuffer::VertexBuffer),
m_vbo_tex_coord (QOpenGLBuffer::VertexBuffer),
m_vbo_index (QOpenGLBuffer::IndexBuffer),
m_program (nullptr),
m_texture (nullptr),
isInitialized (false)
{
}
OGLWindow::~OGLWindow()
{
makeCurrent();
teardownGL();
}
void OGLWindow::initializeGL()
{
qDebug() << "initializeGL()";
initializeOpenGLFunctions();
isInitialized = true;
QColor backgroundColor(Qt::red);
glClearColor(backgroundColor.redF(), backgroundColor.greenF(), backgroundColor.blueF(), 1.0f);
// load texture image
m_image = QImage(":/images/cube.png");
m_texture = new QOpenGLTexture(QOpenGLTexture::TargetRectangle);
// set bilinear filtering mode for texture magnification and minification
m_texture->setMinificationFilter(QOpenGLTexture::Nearest);
m_texture->setMagnificationFilter(QOpenGLTexture::Nearest);
// set the wrap mode
m_texture->setWrapMode(QOpenGLTexture::ClampToEdge);
m_texture->setData(m_image.mirrored(), QOpenGLTexture::MipMapGeneration::DontGenerateMipMaps);
int imgWidth = m_image.width();
int imgHeight = m_image.height();
m_projection_matrix.setToIdentity();
m_projection_matrix.ortho(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
// m_projection_matrix.ortho(0.0, (float) width(), (float) height(), 0.0f, -1.0f, 1.0f);
m_model_view_matrix.setToIdentity();
glViewport(0, 0, width(), height());
m_program = new QOpenGLShaderProgram();
m_program->addShaderFromSourceFile(QOpenGLShader::Vertex, ":/shaders/vshader.glsl");
m_program->addShaderFromSourceFile(QOpenGLShader::Fragment, ":/shaders/fshader.glsl");
m_program->link();
m_program->bind();
// texture coordinates
static const QVector2D textureData[] = {
QVector2D(0.0f, 0.0f),
QVector2D((float) imgWidth, 0.0f),
QVector2D((float) imgWidth, (float) imgHeight),
QVector2D(0.0f, (float) imgHeight)
};
// create Vertex Array Object (VAO)
m_object.create();
m_object.bind();
// create position VBO
m_vbo_position.create();
m_vbo_position.bind();
m_vbo_position.setUsagePattern(QOpenGLBuffer::StaticDraw);
m_vbo_position.allocate(vertextData, 4 * sizeof(QVector3D));
// create texture coordinates VBO
m_vbo_tex_coord.create();
m_vbo_tex_coord.bind();
m_vbo_tex_coord.setUsagePattern(QOpenGLBuffer::StaticDraw);
m_vbo_tex_coord.allocate(textureData, 4 * sizeof(QVector2D));
// create the index buffer
m_vbo_index.create();
m_vbo_index.bind();
m_vbo_index.setUsagePattern(QOpenGLBuffer::StaticDraw);
m_vbo_index.allocate(indices, 6 * sizeof(GLushort));
// enable the two attributes that we have and set their buffers
m_program->enableAttributeArray(0);
m_program->enableAttributeArray(1);
m_program->setAttributeBuffer(0, GL_FLOAT, 0, 3, sizeof(QVector3D));
m_program->setAttributeBuffer(1, GL_FLOAT, 0, 2, sizeof(QVector2D));
// Set modelview-projection matrix
m_program->setUniformValue("projectionMatrix", m_projection_matrix);
m_program->setUniformValue("modelViewMatrix", m_model_view_matrix);
// use texture unit 0 which contains our frame
m_program->setUniformValue("texRGB", 0);
// release (unbind) all
m_object.release();
m_vbo_position.release();
m_vbo_tex_coord.release();
m_vbo_index.release();
m_program->release();
}
void OGLWindow::resizeGL(int width, int height)
{
qDebug() << "resizeGL(): width =" << width << ", height=" << height;
if (isInitialized) {
// avoid division by zero
if (height == 0) {
height = 1;
}
m_projection_matrix.setToIdentity();
m_projection_matrix.perspective(60.0, (float) width / (float) height, -1, 1);
glViewport(0, 0, width, height);
}
}
void OGLWindow::paintGL()
{
// clear
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// render using our shader
m_program->bind();
{
m_texture->bind();
m_object.bind();
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
m_object.release();
}
m_program->release();
}
void OGLWindow::teardownGL()
{
// actually destroy our OpenGL information
m_object.destroy();
m_vbo_position.destroy();
m_vbo_tex_coord.destroy();
m_vbo_index.destroy();
delete m_program;
}
EDIT-2: I'm creating the context as follows:
QSurfaceFormat format;
format.setRenderableType(QSurfaceFormat::OpenGL);
format.setProfile(QSurfaceFormat::CoreProfile);
format.setVersion(3,3);
This line in your fragment shader code is invalid:
vec3 rgb = texture2DRect(texRGB, vTexCoord.st).rgb;
texture2DRect() is not a built-in function.
Since you're using the GLSL 3.30 core profile (core is the default for the version unless compatibility is specified), you should be using the overloaded texture() function, which replaces the older type specific functions like texture2D() in the core profile.
Functions like texture2D() are still supported in GLSL 3.30 core unless a forward compatible core profile context is used. So depending on how the context is created, you can still use those functions.
However, sampler2DRect was only added as a sampler type in GLSL 1.40 as part of adding rectangular textures to the standard in OpenGL 3.1. At the time, the legacy sampling functions were already marked as deprecated, and only the new texture() function was defined for rectangular textures. This means that texture2DRect() does not exist in any GLSL version.
The correct call is:
vec3 rgb = texture(texRGB, vTexCoord.st).rgb;
Another part of your code that can prevent it from rendering anything is this projection matrix:
m_projection_matrix.perspective(60.0, (float) width / (float) height, -1, 1);
The near and far planes for a standard projection matrix both need to be positive. This call will set up a projection transformation with a "camera" on the origin, looking down the negative z-axis. The near and far values are distances from the origin. A valid call could look like this:
m_projection_matrix.perspective(60.0, (float) width / (float) height, 1.0f, 10.0f);
You will then also need to set the model matrix to transform the coordinates of the object into this range on the negative z-axis. You could for example apply a translation by (0.0f, 0.0f, -5.0f).
Or, if you just want to see something, the quad should also become visible if you simply use the identity matrix for the projection.
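The requirement that both planes be positive can be checked numerically. Below is a small standalone sketch (plain C++, no Qt or GL) of the depth mapping performed by a gluPerspective/QMatrix4x4::perspective-style matrix; only eye-space z between -near and -far ends up inside the clip volume, everything else is clipped:

```cpp
#include <cassert>
#include <cmath>

// NDC depth produced by a gluPerspective-style projection with positive
// near/far planes, for an eye-space point at depth zEye (negative, since
// the camera looks down the -z axis). Clipping keeps |zNdc| <= 1.
float ndcDepth(float nearPlane, float farPlane, float zEye) {
    float a = (nearPlane + farPlane) / (nearPlane - farPlane);        // matrix element m[2][2]
    float b = 2.0f * nearPlane * farPlane / (nearPlane - farPlane);   // matrix element m[2][3]
    float clipZ = a * zEye + b;
    float clipW = -zEye;            // perspective divide uses -zEye
    return clipZ / clipW;
}
```

This is why translating the object to something like z = -5 with near = 1 and far = 10 makes it visible, while geometry in front of the near plane or beyond the far plane disappears.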
I'm drawing the framebuffer to an image. It used to work fine, but something broke and I have no idea what.
Any help would be great.
I get the error "QGLFramebufferObject: Framebuffer incomplete, missing attachment."
It seems to work intermittently.
VoxelEditor::VoxelEditor(QWidget *parent)
: QGLWidget(QGLFormat(QGL::SampleBuffers), parent)
{
makeCurrent();
catchFbo = new QGLFramebufferObject(PICTURE_SIZE, PICTURE_SIZE);
}
void VoxelEditor::renderToImage() {
saveGLState();
const int nrPics = 360 / DEGREES_BETWEEN_PICTURES;
for (int i = 0; i < nrPics; i++) {
catchFbo->bind();
glColorMask(true, true, true, true);
glClearColor(0,0,0,0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glEnable(GL_MULTISAMPLE);
glLoadIdentity();
GLfloat x = GLfloat(PICTURE_SIZE) / PICTURE_SIZE;
glFrustum(-x, +x, -1.0, +1.0, 1.0, 1000.0);
glViewport(0, 0, PICTURE_SIZE, PICTURE_SIZE);
drawScreenshot(i);
catchFbo->release();
QImage catchImage = catchFbo->toImage();
catchImage.save("object/test" + QString::number(i) + ".png");
}
glDisable(GL_MULTISAMPLE);
restoreGLState();
}
I solved this by moving the creation of the FBO into the renderToImage call.
It seems that at creation time it was valid and had the appropriate attachment, but at execution it failed.
Perhaps creating the FBO in the initializeGL call would work as well.
Have you checked your buffer with the isValid() method? Try to release the buffer after calling the toImage() method.
Can anyone recommend a how-to guide or provide a brief overview of what's involved with integrating OpenCV with larger GUI-based programs? What are the popular ways to do it?
Particularly, processing video with OpenCV while doing video capture/preview without using HighGUI seems especially arcane. I hope someone can demystify this.
My particular configuration is with either Juce or Qt depending on what can be done. The cross platform thing is not critical -- if there is an awesome way of doing this in Windows, I might be convinced. The availability of community support is important.
I have heard that HighGUI is entirely for testing and unsuitable for real applications. Someone recommended the VideoInput library, but it is experimental.
Key points from answers:
Use Qt (because Qt is great and has a big community).
Open a new thread to run cv::VideoCapture in a loop and emit a signal after each frame capture. Use Qt's msleep mechanism, not OpenCV's. So we are still using OpenCV's highgui for capture.
Convert the cv::Mat to a QImage:
QImage qtFrame(cvFrame.data, cvFrame.size().width, cvFrame.size().height, cvFrame.step, QImage::Format_RGB888);
qtFrame = qtFrame.rgbSwapped();
Optional: render with a GLWidget. Convert the QImage to GL format with Qt's built-in method:
m_GLFrame = QGLWidget::convertToGLFormat(frame);
this->updateGL();
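The rgbSwapped() step is needed because OpenCV delivers frames as BGR while Format_RGB888 expects RGB. A minimal standalone illustration of that channel swap on a raw interleaved buffer (plain C++, no Qt or OpenCV involved):

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Swap the first and third channels of an interleaved 3-bytes-per-pixel
// buffer, i.e. the BGR -> RGB reordering that QImage::rgbSwapped()
// performs on the data an OpenCV capture delivers.
std::vector<std::uint8_t> bgrToRgb(std::vector<std::uint8_t> pixels) {
    for (std::size_t i = 0; i + 2 < pixels.size(); i += 3)
        std::swap(pixels[i], pixels[i + 2]);
    return pixels;
}
```

Skipping this step is a common cause of video that renders with red and blue visibly exchanged.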
Here is how I am doing it with Qt. You are welcome to use whatever may be useful to you :)
/// OpenCV_GLWidget.h
#ifndef OPENCV_GLWIDGET_H_
#define OPENCV_GLWIDGET_H_
#include <qgl.h>
#include <QImage>
class OpenCV_GLWidget: public QGLWidget {
public:
OpenCV_GLWidget(QWidget * parent = 0, const QGLWidget * shareWidget = 0, Qt::WindowFlags f = 0);
virtual ~OpenCV_GLWidget();
void renderImage(const QImage& frame);
protected:
virtual void paintGL();
virtual void resizeGL(int width, int height);
private:
QImage m_GLFrame;
};
#endif /* OPENCV_GLWIDGET_H_ */
/// OpenCV_GLWidget.cpp
#include "OpenCV_GLWidget.h"
OpenCV_GLWidget::OpenCV_GLWidget(QWidget* parent, const QGLWidget* shareWidget, Qt::WindowFlags f) :
QGLWidget(parent, shareWidget, f)
{
// TODO Auto-generated constructor stub
}
OpenCV_GLWidget::~OpenCV_GLWidget() {
// TODO Auto-generated destructor stub
}
void OpenCV_GLWidget::renderImage(const QImage& frame)
{
m_GLFrame = QGLWidget::convertToGLFormat(frame);
this->updateGL();
}
void OpenCV_GLWidget::resizeGL(int width, int height)
{
// Setup our viewport to be the entire size of the window
glViewport(0, 0, width, height);
// Change to the projection matrix and set orthogonal projection
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, width, height, 0, 0, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
void OpenCV_GLWidget::paintGL() {
glClearColor (0.0, 0.0, 0.0, 1.0);
glClear (GL_COLOR_BUFFER_BIT);
if (!m_GLFrame.isNull()) {
m_GLFrame = m_GLFrame.scaled(this->size(), Qt::IgnoreAspectRatio, Qt::SmoothTransformation);
glEnable(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, m_GLFrame.width(), m_GLFrame.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, m_GLFrame.bits() );
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(0, m_GLFrame.height());
glTexCoord2f(0, 1); glVertex2f(0, 0);
glTexCoord2f(1, 1); glVertex2f(m_GLFrame.width(), 0);
glTexCoord2f(1, 0); glVertex2f(m_GLFrame.width(), m_GLFrame.height());
glEnd();
glDisable(GL_TEXTURE_2D);
glFlush();
}
}
This class handles rendering the image onto a promoted QWidget. Next, I created a thread to feed the widget. (I cheated by using the Qt signal-slot architecture here because it was easy; it may not be the best performer in the book, but it should get you started.)
void VideoThread::run()
{
cv::VideoCapture video(0);
while(!m_AbortCapture)
{
cv::Mat cvFrame;
video >> cvFrame;
cv::Mat gray(cvFrame.size(), CV_8UC1);
cv::GaussianBlur(cvFrame, cvFrame, cv::Size(5, 5), 9.0, 3.0, cv::BORDER_REPLICATE);
cv::cvtColor(cvFrame, gray, CV_RGB2GRAY);
m_ThresholdLock.lock();
double localThreshold = m_Threshold;
m_ThresholdLock.unlock();
if(localThreshold > 0.0)
{
qDebug() << "Threshold = " << localThreshold;
cv::threshold(gray, gray, localThreshold, 255.0, cv::THRESH_BINARY);
}
cv::cvtColor(gray, cvFrame, CV_GRAY2BGR);
// convert the Mat to a QImage
QImage qtFrame(cvFrame.data, cvFrame.size().width, cvFrame.size().height, cvFrame.step, QImage::Format_RGB888);
qtFrame = qtFrame.rgbSwapped();
// queue the image to the gui
emit sendImage(qtFrame);
msleep(20);
}
}
Took me a bit to figure that out, so hopefully it will help you and others save some time :D
Create an OpenCV image to hold the image you have captured.
Do processing on it and then copy the data into the image you want to display (e.g. a QImage).
You can optimise things by creating the OpenCV cv::Mat image to share its memory with the QImage, but since QImage generally uses ARGB and most image-processing tasks are better done in greyscale or RGB, it's probably better to copy the images and convert between them using OpenCV's cvtColor() function.
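To illustrate the kind of conversion cvtColor() does for colour-to-grey, here is a standalone per-pixel sketch in plain C++ using the Rec. 601 luma weights that OpenCV's RGB2GRAY conversion is based on (simplified to floating point; OpenCV itself uses a fixed-point approximation of the same weights):

```cpp
#include <cassert>
#include <cstdint>

// Grey value of one pixel, in the spirit of cvtColor(..., CV_RGB2GRAY):
// Rec. 601 weighting 0.299 R + 0.587 G + 0.114 B, rounded to nearest.
std::uint8_t rgbToGray(std::uint8_t r, std::uint8_t g, std::uint8_t b) {
    return static_cast<std::uint8_t>(0.299 * r + 0.587 * g + 0.114 * b + 0.5);
}
```

The green channel dominates because the eye is most sensitive to it, which is why a naive (R+G+B)/3 average looks subtly wrong.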
Then simply include the OpenCV headers and link with the OpenCV libs - there are guides on the OpenCV wiki for your particular environment.
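As a rough qmake sketch of that linking step (the include/lib paths are placeholders for wherever OpenCV is installed on your machine):

```qmake
# myapp.pro - adjust INCLUDEPATH and LIBS to your OpenCV install location
QT += opengl
INCLUDEPATH += /usr/local/include
LIBS += -L/usr/local/lib -lopencv_core -lopencv_imgproc -lopencv_highgui
```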