Running a JOGL VBO on NetBeans 7.4

I am new to JOGL and am trying to render a rectangle using a VBO.
I am given two arrays: the first array is
float vertex[] = {-2.0f, -2.0f, -2.0f,
2.0f, -2.0f, -2.0f,
-2.0f, -2.0f, 2.0f,
2.0f, -2.0f, 2.0f
};
and the second array is
float colors[] = {1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f
};
and then I try to initialize the vertex buffers:
pointsbf = Buffers.newDirectFloatBuffer(vertex.length);
colorsbf = Buffers.newDirectFloatBuffer(colors.length);
pointsbf.put(vertex);
colorsbf.put(colors);
pointsbf.rewind();
colorsbf.rewind();
The code above is in my init() function; the code below is in my display() function:
gl.glEnableClientState(GL2.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL2.GL_COLOR_ARRAY);
gl.glVertexPointer(3, GL.GL_FLOAT, 0, pointsbf);
gl.glColorPointer(3, GL.GL_FLOAT, 0, colorsbf);
gl.glDrawArrays(GL.GL_TRIANGLES, 0, totalNumVerts);
gl.glDisableClientState(GL2.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL2.GL_COLOR_ARRAY);
but when I run it, the code shows just a black screen.

Have you modified the projection matrix and the model view matrix? If your vertices aren't in the view frustum, you won't see them.
You can use my example and modify it to use VBOs:
http://en.wikipedia.org/wiki/Java_OpenGL#Code_example
Keep in mind that the official JogAmp forum is a better place to get answers about JOGL.

QOpenGLWidget with custom framebuffer and multiple render targets

Related to my other question, I'm trying to render a segmentation mask to enable object picking. But I am not able to achieve the desired result.
Option 1 did not work at all. I was not able to retrieve the content of color attachment 1, or even to check whether it existed (I created the attachment using only native OpenGL calls).
Using this post, I was able to reproduce the green.png and red.png images by creating a custom framebuffer with a second color attachment which is then bound and drawn to (all in paintGL()).
Somehow I had to use that person's framebuffer creation code, because when I created the framebuffer myself there was always a warning saying "toImage called for missing color attachment", although I had attached the color attachment, and textures() called on the framebuffer returned two objects. I then tried to insert my rendering code after
GLfloat red[4] = { 1.0f, 0.0f, 0.0f, 1.0f };
f->glClearBufferfv(GL_COLOR, 0, red);
GLfloat green[4] = { 0.0f, 1.0f, 0.0f, 1.0f };
f->glClearBufferfv(GL_COLOR, 1, green);
but this still resulted in the red and green image. But the code renders fine when using the normal default framebuffer. I adapted the shader to (short version for testing purposes):
void main() {
    gl_FragData[0] = vec4(1.0, 1.0, 1.0, 1.0);
    gl_FragData[1] = vec4(0.0, 0.0, 0.0, 1.0);
}
Since I was able to produce the red and green image, I'm assuming there must be a way to retrieve the frag data with this custom framebuffer. The solution I have right now is a complete (!) copy of the program and a dedicated fragment shader whose sole purpose is to render the segmentation, performing all OpenGL draw calls a second time. As you can guess, this is a somewhat ugly solution, although the scene is not that large and my computer handles it easily. Has anyone got an idea/link?
If you want to write to multiple render targets in a fragment shader, then you have to declare multiple output variables:
#version 330

layout(location = 0) out vec4 fragData0;
layout(location = 1) out vec4 fragData1;

void main()
{
    fragData0 = vec4(1.0, 1.0, 1.0, 1.0);
    fragData1 = vec4(0.0, 0.0, 0.0, 1.0);
}
From GLSL version 1.10 (#version 110, OpenGL 2.0) to GLSL version 1.50 (#version 150, OpenGL 3.2), the same can be achieved by writing to the built-in fragment shader output variable gl_FragData:
void main()
{
    gl_FragData[0] = vec4(1.0, 1.0, 1.0, 1.0);
    gl_FragData[1] = vec4(0.0, 0.0, 0.0, 1.0);
}
See also Fragment Shader Outputs - Through The Ages
To use multiple render targets in Qt, a second color attachment has to be added to the framebuffer, and the list of color buffers has to be specified with glDrawBuffers:
QOpenGLShaderProgram *program;
QOpenGLFramebufferObject *fb;
int fb_width;
int fb_height;
fb = new QOpenGLFramebufferObject( fb_width, fb_height );
fb->addColorAttachment( fb_width, fb_height );
glViewport(0, 0, fb_width, fb_height);
fb->bind();
glClearColor(0, 0, 0, 1);
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, buffers);
program->bind();
// ..... do the drawing
program->release();
fb->release();
The OpenGL texture objects attached to the framebuffer can be accessed:
QVector<GLuint> fb_textures = fb->textures();

Qt-OpenGL vertex not redrawing

I have a simple OpenGL program that should redraw when I change a spinner.
When the paintGL method is invoked, the colors of my triangles change, but their number (which is based on the spinner) doesn't.
My code is the following:
void GLWidget::paintGL()
{
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_COLOR);
    glClearColor(0.0, 0.0, 0.0, 0.0);
    for (int i = 0; i < numVertex; i++) {
        glBegin(GL_TRIANGLE_FAN);
        drawTriangle(i);
        glEnd();
    }
    qDebug("numVertex %d", numVertex);
}
void GLWidget::drawTriangle(int iteraction)
{
    float theta = thetaIncrement * iteraction;
    float x = radius * qCos(theta);
    float y = radius * qSin(theta);
    double r = ((double) rand() / RAND_MAX);
    double g = ((double) rand() / RAND_MAX);
    double b = ((double) rand() / RAND_MAX);
    glColor3f(r, g, b);
    glVertex3f(0.0f, 0.0f, 0.0f);
    glVertex3f(x, y, 0.0f);
    theta = thetaIncrement * (iteraction + 1);
    x = radius * qCos(theta);
    y = radius * qSin(theta);
    glVertex3f(x, y, 0.0f);
}
Even if I don't draw anything (for example, for an even number of vertices I just return from paintGL), the already drawn vertices are still shown on the screen.
Any recommendations?
Your glClear call doesn't have a valid argument: GL_COLOR is not a valid buffer mask bit. Remove the GL_COLOR part:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) ;

Draw in QGLFramebufferObject

With Qt and OpenGL, I would like to draw into a QGLFramebufferObject.
I tried this:
QGLFramebufferObject *fbo = new QGLFramebufferObject(200, 200, QGLFramebufferObject::NoAttachment, GL_TEXTURE_2D, GL_RGBA32F);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0f, fbo->size().width(), fbo->size().height(), 0.0f, 0.0f, 1.0f);
fbo->bind();
glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
fbo->release();
fbo->toImage().save("test.jpg");
But I don't get a red image.
OTHER QUESTION
And if I want to draw with:
glBegin(GL_QUADS);
glColor3d(0.2, 0.5, 1.0);
glVertex2i( 10, 20);
glColor3d(0.7, 0.5, 1.0);
glVertex2i( 15, 20);
glColor3d(0.8, 0.4, 1.0);
glVertex2i( 15, 25);
glColor3d(0.1, 0.9, 1.0);
glVertex2i( 10, 25);
glEnd();
do I also need glClear()?
You never actually clear the framebuffer. glClearColor() only sets the color used for clearing, but does not clear anything. You will need to add the second line here:
glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
The glClear() call will clear the color buffer with the clear color you specified in the first call. The framebuffer content is initially undefined, so you should always clear the framebuffer with a glClear() call before you start drawing. The only exception is if you're certain that the primitives you render will cover every pixel of the drawing surface. Even then, on some architectures it can actually be better for performance to still call glClear() first.
It shouldn't matter yet as long as you only clear the buffer. But once you want to start drawing, you will also need to set the viewport:
glViewport(0, 0, 200, 200);

Communicating large/changing/complex sets of vertices in OpenGL?

I've got a very basic scene rendering with a vertex and color array (some code below). I see how to bind the vertices and colors to the vertex shader's attributes. Currently this vertex and color information is in a local array variable in my render function, as you can see below, and then glDrawArrays(GL_TRIANGLES, 0, n) is called to draw them for each frame.
I'm trying to picture the architecture of a larger moving scene where there are lots of models with lots of vertices that need to be loaded and unloaded.
The naïve way I imagine to extend this would be to place all the vertex/color data in one big array in main memory and then call glDrawArrays once for each frame. This seems inefficient to me: on every frame the vertex and color information changes only in parts, so arranging and reloading an entire monolithic vertex array for every frame seems wrong.
What do 3D games and so forth do about this? Do they, for each frame, place all the vertices in one big array in main memory and then call glDrawArrays once? If not, what architecture and OpenGL calls do they generally use to communicate all the vertices of the scene to the GPU? Is it possible to load vertices into GPU memory and then reuse them for several frames? Is it possible to draw multiple vertex arrays from multiple places in main memory?
static const char *vertexShaderSource =
R"(
attribute highp vec4 posAttr;
attribute lowp vec4 colAttr;
varying lowp vec4 col;
uniform highp mat4 matrix;
void main()
{
    col = colAttr;
    gl_Position = matrix * posAttr;
}
)";
static const char *fragmentShaderSource =
R"(
varying lowp vec4 col;
void main()
{
    gl_FragColor = col;
}
)";
void Window::render()
{
    glViewport(0, 0, width(), height());
    glClear(GL_COLOR_BUFFER_BIT);
    m_program->bind();

    constexpr float delta = 0.001;
    if (forward)
        eyepos += QVector3D{0, 0, +delta};
    if (backward)
        eyepos += QVector3D{0, 0, -delta};
    if (left)
        eyepos += QVector3D{-delta, 0, 0};
    if (right)
        eyepos += QVector3D{delta, 0, 0};

    QMatrix4x4 matrix;
    matrix.perspective(60, 4.0/3.0, 0.1, 10000.0);
    matrix.lookAt(eyepos, eyepos + direction, {0, 1, 0});
    matrix.rotate(timer.elapsed() / 100.0f, 0, 1, 0);
    m_program->setUniformValue("matrix", matrix);

    QVector3D vertices[] =
    {
        {0.0f, 0.0f, 0.0f},
        {1.0f, 0.0f, 0.0f},
        {1.0f, 1.0f, 0.0f},
    };
    QVector3D colors[] =
    {
        {1.0f, 0.0f, 0.0f},
        {1.0f, 1.0f, 0.0f},
        {1.0f, 0.0f, 1.0f},
    };
    m_program->setAttributeArray("posAttr", vertices);
    m_program->setAttributeArray("colAttr", colors);
    m_program->enableAttributeArray("posAttr");
    m_program->enableAttributeArray("colAttr");
    glDrawArrays(GL_TRIANGLES, 0, 3);
    m_program->disableAttributeArray("posAttr");
    m_program->disableAttributeArray("colAttr");
    m_program->release();
    ++m_frame;
}
It depends on how you want to structure things.
If you have a detailed model that needs to be moved, rotated, and transformed without changing its shape, a clear way to do it is to load that model into e.g. a VBO (I'm not sure what your setAttributeArray does internally). That upload has to happen only once, before the first frame; subsequent frames can render the model with any transformation by simply setting the model-view matrix uniform, which is a much smaller chunk of data going over the bus.
Vertex shaders can and should be used to let the GPU help with, or entirely offload, the computation and application of these kinds of transformations.

How to set the origin axis on a CC3Camera, or a CC3MeshNode

So I'm using Cocos3D in Obj-C.
Since my initial question doesn't seem to have been clear enough to get any answers, I am writing it again.
I'm making a 3D viewer; with it I want to be able to drag my finger on the screen and make my object (Dobj) rotate on itself.
-(void) startRotatingObjectOnXYAxis { saveXYAxisStartLocation = Dobj.rotation; }
-(void) rotateObjectOnXYAxisBy: (CGPoint) aMovement
{
    CC3Vector rotateVector = CC3VectorMake(aMovement.y, aMovement.x, 0.0f);
    Dobj.rotation = CC3VectorAdd(saveXYAxisStartLocation, rotateVector);
}
The problem is that when I do it this way, the object's axes also rotate, and after a few drags the X axis ends up vertical (instead of horizontal), so the rotations become very confusing.
So I would like to reset the axes to their original orientation after each drag.
Something like this:
-(void) startRotatingObjectOnXYAxis { saveXYAxisStartLocation = Dobj.rotation; }
-(void) rotateObjectOnXYAxisBy: (CGPoint) aMovement
{
    [Dobj moveMeshOriginTo: CC3VectorMake(0.0f, 0.0f, saveXYAxisStartLocation.z)];
    CC3Vector rotateVector = CC3VectorMake(aMovement.y, aMovement.x, 0.0f);
    Dobj.rotation = CC3VectorAdd(saveXYAxisStartLocation, rotateVector);
}
But it doesn't have any effect.
In my examples, aMovement is the translation value of the (UIPanGestureRecognizer *) gesture.
OK, I finally ended up with this solution:
-(void) startRotatingObjectOnXYAxis { objectXYAxisStartRotation = CC3VectorMake(0.0f, 0.0f, 0.0f); }

-(void) rotateObjectOnXYAxisBy: (CGPoint) aMovement
{
    CC3Vector rotateVector = CC3VectorMake(aMovement.y, aMovement.x, 0.0f);
    [Dobj rotateBy: CC3VectorDifference(rotateVector, objectXYAxisStartRotation)];
    objectXYAxisStartRotation = rotateVector;
}
It appears that rotateBy rotates the object only, without rotating its axes.
So the problem is solved.
