What I would like to do is draw a parallelepiped (using GL_QUADS) and its edges (using GL_LINES).
The parallelepiped is supposed to be a Squash field, and the camera will be inside it.
The problem is when I use GL_LINES, the line drawn is not visible when the camera is inside the parallelepiped.
Here are a couple of screenshots so you can see the problem:
Inside - Line not visible : http://i.stack.imgur.com/OZKy5.png
Outside - Line visible : http://i.stack.imgur.com/ah40O.png
This is what's inside my init method:
GL2 gl = drawable.getGL().getGL2(); // get the OpenGL graphics context
glu = new GLU(); // get GL Utilities
gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f); // set background (clear) color
gl.glClearDepth(1.0f); // set clear depth value to farthest
gl.glEnable(GL_DEPTH_TEST); // enables depth testing
gl.glEnable(GL_BLEND);
gl.glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
gl.glDepthFunc(GL_LEQUAL); // the type of depth test to do
gl.glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // best perspective correction
gl.glShadeModel(GL_SMOOTH); // blends colors nicely, and smoothes out lighting
gl.glEnable(GL_LINE_SMOOTH);
and this is my display method:
GL2 gl = drawable.getGL().getGL2(); // get the OpenGL 2 graphics context
gl.glClearColor(0.55f, 0.55f, 0.55f, 1.0f);
gl.glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear color and depth buffers
gl.glLoadIdentity(); // reset the model-view matrix
gl.glTranslated(-3.2, -2.82, -10); // translate into the screen
gl.glBegin(GL_QUADS); // Start Drawing The Quads
gl.glColor4ub(r, g, b, alpha);
// front wall (z = 0)
gl.glVertex3d(0, 0, 0);
gl.glVertex3d(0, 5.640, 0);
gl.glVertex3d(6.400, 5.640, 0);
gl.glVertex3d(6.400, 0, 0);
// left wall (x = 0)
gl.glVertex3d(0, 0, 0);
gl.glVertex3d(0, 5.640, 0);
gl.glVertex3d(0, 5.640, 9.750);
gl.glVertex3d(0, 0, 9.750);
// right wall (x = 6.400)
gl.glVertex3d(6.400, 0, 0);
gl.glVertex3d(6.400, 5.640, 0);
gl.glVertex3d(6.400, 5.640, 9.750);
gl.glVertex3d(6.400, 0, 9.750);
// back wall (z = 9.750)
gl.glVertex3d(0, 0, 9.750);
gl.glVertex3d(0, 5.640, 9.750);
gl.glVertex3d(6.400, 5.640, 9.750);
gl.glVertex3d(6.400, 0, 9.750);
// floor (y = 0)
gl.glVertex3d(0, 0, 0);
gl.glVertex3d(0, 0, 9.750);
gl.glVertex3d(6.400, 0, 9.750);
gl.glVertex3d(6.400, 0, 0);
// ceiling (y = 5.640)
gl.glVertex3d(0, 5.640, 0);
gl.glVertex3d(0, 5.640, 9.750);
gl.glVertex3d(6.400, 5.640, 9.750);
gl.glVertex3d(6.400, 5.640, 0);
gl.glEnd(); // Done Drawing The Quads
gl.glLineWidth(2);
gl.glBegin(GL_LINES);
gl.glColor4ub((byte)0,(byte)0,(byte)0, (byte)255);
gl.glVertex3d(0, 0, 0);
gl.glVertex3d(0, 5.640, 0);
gl.glEnd();
Thank you for your help.
Lines are drawn using the same depth buffer as the quads. When you are outside, the line is closer to the camera than the quad faces, so it passes the depth test and is drawn. When you are inside, the quad surface is at the same depth as (or closer than) the line, so the line fails the depth test and is not drawn. Because the line and the quad edge are coplanar, numerical precision errors can also make the line flicker in and out (z-fighting). Disable the depth test while drawing the lines (or push the quads back with glPolygonOffset) and the lines will always be drawn.
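A minimal numeric sketch (not the poster's code) of the depth comparison involved. The line and the wall face are coplanar, so their fragments arrive at essentially the same depth, and tiny interpolation differences decide who wins:

```cpp
#include <cassert>

// Classic depth test with GL_LEQUAL: the incoming fragment survives only if
// it is at least as close as what is already stored in the depth buffer.
bool passesDepthTest(float fragmentDepth, float storedDepth) {
    return fragmentDepth <= storedDepth;
}

// From outside, the edge of the wall is the nearest surface, so the line's
// depth equals the stored depth and the line is kept. From inside, the wall
// face fills the same pixels, and a tiny interpolation error is enough to
// push the line fragment behind it.
bool lineVisible(float lineDepth, float quadDepth) {
    return passesDepthTest(lineDepth, quadDepth);
}
```

This is why the recommended fix is to take the lines out of the depth comparison entirely (disable the depth test for them) or to offset the quads.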
How do I draw a specific transparent color with GDI+?
I tried this code:
m_image = new Gdiplus::Bitmap( img_w, img_h );
m_graphic = Gdiplus::Graphics::FromImage( m_image );
Gdiplus::Color c( 0, 255, 0, 0 ); // ARGB = 0x00FF0000
m_graphic->Clear( c );
m_image->GetPixel( 0, 0, &c ); //ARGB = 0x00000000 ?!
The color of the transparent part of the image is always black. How can I change this?
The Graphics::Clear method clears a Graphics object to a specified color.
I have tried your code:
Image m_image(L"C:\\Users\\strives\\Desktop\\apple.jpg");
Graphics *m_graphic = Graphics::FromImage(&m_image);
Gdiplus::Color c(0, 255, 0, 0); // ARGB = 0x00FF0000
m_graphic->Clear(c);
graphics.DrawImage(&m_image, 30, 20);
delete m_graphic;
The final picture is like this.
I think the problem is clear: if you call Clear with the color (0, 255, 0, 0), the alpha component is 0, so the cleared area is fully transparent and renders as black. Consequently, the pixels returned by the GetPixel call in your code are black as well.
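One plausible explanation for the lost red component (this is my assumption, not something stated in the answer): GDI+ composites in premultiplied alpha, and premultiplying any color by an alpha of 0 zeroes every channel, so the original RGB values are unrecoverable. A sketch of that arithmetic:

```cpp
#include <cassert>
#include <cstdint>

struct ARGB { uint8_t a, r, g, b; };

// Premultiply: each color channel is scaled by alpha/255. With alpha == 0
// the color information is destroyed: every channel becomes 0 (black).
ARGB premultiply(ARGB c) {
    return { c.a,
             static_cast<uint8_t>(c.r * c.a / 255),
             static_cast<uint8_t>(c.g * c.a / 255),
             static_cast<uint8_t>(c.b * c.a / 255) };
}
```

So ARGB 0x00FF0000 stores the same bits as 0x00000000 once premultiplied, which matches the GetPixel result in the question.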
I use my own QGraphicsItem-based class for drawing in a QGraphicsScene. I also use FTGL for text rendering. Everything works fine until I start using shaders. My frame-update logic is as follows: in MyGraphicsItem::paint I first update the frame with the following code (for YUV images):
glEnable(GL_MULTISAMPLE);
QGLFunctions glFuncs(QGLContext::currentContext());
glFuncs.glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE, tex0Id);
glTexSubImage2D(GL_TEXTURE_RECTANGLE, 0, 0, 0, width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, data);
glFuncs.glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_RECTANGLE, tex1Id);
glTexSubImage2D(GL_TEXTURE_RECTANGLE, 0, 0, 0, width/2, height/2, GL_LUMINANCE, GL_UNSIGNED_BYTE, data + (width * height));
glFuncs.glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_RECTANGLE, tex2Id);
glTexSubImage2D(GL_TEXTURE_RECTANGLE, 0, 0, 0, width/2, height/2, GL_LUMINANCE, GL_UNSIGNED_BYTE, data + (int)(5 * width * height) / 4);
shader->bind(width, height);
and after that I render text:
glDisable(GL_TEXTURE_RECTANGLE);
glOrtho(0, WIDTH, 0, HEIGHT, 0, 1);
glColor3f(0.0, 1.0, 0.0);
bufferFont.Render("This is a text", -1, FTPoint(150, 200, -1.0));
But instead of text there is a rectangle. I tried disabling GL_TEXTURE_2D, but it didn't work. How can I use both shaders and FTGL together?
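As an aside on the upload code above: the pointer offsets passed to glTexSubImage2D follow the standard planar YUV 4:2:0 layout, where the Y plane is width*height bytes and each chroma plane is a quarter of that. A small sketch (the helper names are my own):

```cpp
#include <cassert>
#include <cstddef>

// For a planar YUV 4:2:0 frame: Y is width*height bytes, U and V are
// (width/2)*(height/2) bytes each. These give the byte offsets used as
// "data + ..." in the three glTexSubImage2D calls.
size_t uPlaneOffset(size_t width, size_t height) {
    return width * height;           // U starts right after the Y plane
}
size_t vPlaneOffset(size_t width, size_t height) {
    return width * height * 5 / 4;   // V starts after Y + U
}
```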
I'm trying to draw a rectangle with text on it, but all I see is the rectangle, there is no text. Am I doing something wrong?
This is the part of the code that does the drawing:
CanvasImage image = PlayN.graphics().createImage(100, 50);
Canvas canvas = image.canvas();
canvas.setFillColor(color);
canvas.fillRect(0, 0, 100, 50);
canvas.setFillColor(textColor);
canvas.setStrokeColor(textColor);
canvas.drawText("test", 0, 0);
layer.surface().drawImage(image, 0, 0);
Thanks
btw, I'm running the HTML version.
The problem was that the text is drawn upward from the y coordinate (the baseline), and I was setting y to 0, so the text ended up above the visible area and wasn't shown.
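A tiny sketch of that baseline rule (hypothetical helper, not part of the PlayN API): with the baseline at y, the top of the glyphs sits at y minus the font's ascent, so y must be at least the ascent for the text to land inside the canvas.

```cpp
#include <cassert>

// drawText places the *baseline* at y; glyphs extend upward by the font's
// ascent. With y == 0 the glyphs land entirely above the canvas.
bool textVisible(float y, float ascent) {
    return y - ascent >= 0;  // top edge of the glyphs is inside the canvas
}
```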
Whatever I use for the texture coordinates, only the bottom-left pixel is ever shown (the rectangle has a solid color).
Here I set the texture coordinates:
glMatrixMode(GL_TEXTURE);
glPushMatrix();
glLoadIdentity();
glTranslatef(0.5, 0.0, 0.0); // has no effect
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
...
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glBegin(GL_QUADS);
glTexCoord2f(0, 1);
glVertex2f(0, 0);
glTexCoord2f(0, 0);
glVertex2f(0, 1);
glTexCoord2f(1, 0);
glVertex2f(1, 1);
glTexCoord2f(1, 1);
glVertex2f(1, 0);
glEnd();
Strangely, the same code is rendered in two different QGLWidgets: in one widget the texture looks fine, and in the other I get only the bottom-left pixel.
I found the mistake. I think that somewhere between the render passes of the two widgets, the GL_TEXTURE_RECTANGLE_NV flag gets enabled. I thought glEnable(GL_TEXTURE_2D) would automatically disable the GL_TEXTURE_RECTANGLE_NV flag, but it seems it does not.
So the following solved my problem:
glDisable(GL_TEXTURE_RECTANGLE_NV);
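This also explains the "only the bottom-left pixel" symptom: rectangle textures use unnormalized texel coordinates in [0, width] x [0, height], so coordinates in [0, 1] sample only the corner texel. A sketch of that coordinate difference (my own helper, modeling nearest sampling with edge clamping):

```cpp
#include <cassert>

// Which texel column a texture coordinate selects. GL_TEXTURE_RECTANGLE*
// targets take texel coordinates directly; GL_TEXTURE_2D scales normalized
// coordinates by the texture width first.
int sampledTexelX(float texCoord, int texWidth, bool rectangleTarget) {
    float x = rectangleTarget ? texCoord            // already in texels
                              : texCoord * texWidth; // normalized -> texels
    int ix = static_cast<int>(x);
    if (ix >= texWidth) ix = texWidth - 1;           // clamp to the edge
    return ix;
}
```

With the rectangle target active, the whole quad's coordinates 0..1 stay inside the first texel, which looks like a solid color.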
I am writing an OpenGL-based vector graphics renderer for my application. It needs to render to a framebuffer object rather than directly to the screen. Since I am writing the application in Qt, I use QGLFramebufferObject, which is a wrapper class for an OpenGL framebuffer object.
I created a minimal example that reproduces the wrong result I also get when rendering more complex scenes (for example, using a fragment shader that outputs colors with a non-one alpha value). I just render a red circle and a half-transparent green one on a black cleared background, first on the screen and then on the FBO:
void MainWidget::initializeGL()
{
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glClearColor(0, 0, 0, 0);
}
void MainWidget::resizeGL(int w, int h)
{
glViewport(0, 0, w, h);
}
void MainWidget::paintGL()
{
// DRAW ON THE SCREEN
{
glClear(GL_COLOR_BUFFER_BIT);
glPointSize(100);
glEnable(GL_POINT_SMOOTH);
glBegin(GL_POINTS);
glColor4f(1, 0, 0, 1);
glVertex2f(-.2, 0);
glColor4f(0, 1, 0, .5);
glVertex2f( .2, 0);
glEnd();
}
QGLFramebufferObject fbo(width(), height());
fbo.bind();
// DRAW ON THE FBO USING THE SAME CODE AND THE SAME CONTEXT
{
glClear(GL_COLOR_BUFFER_BIT);
glPointSize(100);
glEnable(GL_POINT_SMOOTH);
glBegin(GL_POINTS);
glColor4f(1, 0, 0, 1);
glVertex2f(-.2, 0);
glColor4f(0, 1, 0, .5);
glVertex2f( .2, 0);
glEnd();
}
fbo.release();
fbo.toImage().save("debug.png");
}
The result looks like this on the screen (scaled 400%):
The rendering to the QGLFramebufferObject looks like this (also scaled 400%):
Note that this image is not fully opaque, so here it is the same image with a checkerboard added behind it:
Even the area in which the two circles overlap isn't fully opaque. And the anti-aliasing looks pretty ugly.
How does this happen? And how can I fix this?
I already tried:
Different blend functions.
Explicitly disabling the depth buffer, stencil buffer, and multisampling on the QGLFramebufferObject. I'm not sure whether the default QGLFramebufferObject format adds something I don't want.
Try the following:
QGLFramebufferObjectFormat fmt;
fmt.setSamples(1); // or 4 or disable this line
fmt.setInternalTextureFormat(GL_RGBA8);
QGLFramebufferObject fbo(width(), height(), fmt);
This forces a specific pixel format and, by using multisampling, also disables rendering into a texture (otherwise Qt always renders to a texture). That might produce different results. You can also experiment with the format.
Also, what is your hardware? My maximum point size is only 64 pixels (GTX 260), and you are trying to render 100-pixel points. That might be an issue. Are any OpenGL errors generated? Does the same happen with small points?
You might also try hinting (if it's possible in Qt):
glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);
But I wouldn't expect this to change anything.
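Another likely contributor to the non-opaque FBO (my assumption, not verified against the poster's setup): the screen has no alpha channel, but the FBO does, and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) applies to the alpha channel too, so destination alpha never reaches 1 once a translucent fragment lands. A sketch of that arithmetic:

```cpp
#include <cassert>

// Destination alpha after one blended fragment, with the blend function
// applied to alpha the same way as to RGB:
//   dstA' = srcA * srcA + dstA * (1 - srcA)
// Starting from a clear alpha of 0, an opaque fragment gives alpha 1, but a
// half-transparent fragment on top pulls it back down to 0.75, which is why
// even the overlap region reads back as partially transparent.
float blendAlpha(float srcA, float dstA) {
    return srcA * srcA + dstA * (1.0f - srcA);
}
```

If this is the cause, the usual fix is glBlendFuncSeparate with GL_ONE, GL_ONE_MINUS_SRC_ALPHA for the alpha factors, so destination alpha accumulates correctly.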