GL_TEXTURE_2D vs GL_TEXTURE_RECTANGLE - jogl

I have a little program that renders a yellow triangle twice, once on the left half of a framebuffer and once on the right half.
Dump of the texture
Now, after that I render the content of this framebuffer on the screen.
It works if I use GL_TEXTURE_RECTANGLE in the framebuffer constructor:
https://github.com/elect86/Joglus/blob/master/Joglolus/src/joglus/example1/FrameBuffer.java
I bind the texture in the function renderFullScreenQuad, line 372:
https://github.com/elect86/Joglus/blob/master/Joglolus/src/joglus/example1/GlViewer.java
And using sampler2DRect in the fragment shader:
#version 330
out vec4 outputColor;
uniform sampler2DRect texture0;
void main() {
outputColor = texture(texture0, gl_FragCoord.xy);
}
But if I change all the RECTANGLE constants to 2D and use sampler2D in the fragment shader, I get a completely black image at the end of display(), although the dump of the texture still shows the correct image. I would like to know why.

Texture coordinates work differently between textures of types GL_TEXTURE_RECTANGLE and GL_TEXTURE_2D:
For GL_TEXTURE_RECTANGLE, the range of texture coordinates corresponding to the entire texture image is [0.0, width] x [0.0, height]. In other words, the unit of the texture coordinates is in pixels of the texture image.
For GL_TEXTURE_2D, the range of texture coordinates is [0.0, 1.0] x [0.0, 1.0].
With this statement in your fragment shader:
outputColor = texture(texture0, gl_FragCoord.xy);
you are using coordinates in pixel units as texture coordinates. Based on the above, this will work for the RECTANGLE texture, but not for 2D.
Since your original input coordinates in the vertex shader appear to be in the range [0.0, 1.0], the easiest approach to fix this is to pass the untransformed coordinates from vertex shader to fragment shader, and use them as texture coordinates. The vertex shader would then look like this:
#version 330
layout (location = 0) in vec2 position;
out vec2 texCoord;
uniform mat4 modelToClipMatrix;
void main() {
gl_Position = modelToClipMatrix * vec4(position, 0, 1);
texCoord = position;
}
And the fragment shader:
#version 330
in vec2 texCoord;
out vec4 outputColor;
uniform sampler2D texture0;
void main() {
outputColor = texture(texture0, texCoord);
}
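If you would rather keep addressing the texture in pixels, the way sampler2DRect does, texelFetch also works with a sampler2D, since it takes integer texel coordinates directly. A minimal sketch of such a fragment shader, shown here as a GLSL source string in C++ (the variable name is mine):
// Hypothetical fragment shader: pixel addressing of GL_TEXTURE_2D via texelFetch.
const char* pixelAddressedFragmentSrc = R"GLSL(
#version 330
out vec4 outputColor;
uniform sampler2D texture0;
void main() {
    // texelFetch takes integer texel coordinates, so gl_FragCoord.xy can be
    // reused as-is, with no normalization to [0.0, 1.0] required.
    outputColor = texelFetch(texture0, ivec2(gl_FragCoord.xy), 0);
}
)GLSL";
Note that texelFetch bypasses filtering and wrapping, which is usually what you want for a 1:1 full-screen copy of a framebuffer attachment.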

Related

Is there a way to batch render textures in Qt?

I have been trying to batch render two different pictures. I have two different QOpenGLTexture objects that I want to draw in a single draw call with batch rendering, but I am struggling. Both texture objects have IDs, but only the last texture object's image is drawn. I believe my problem is with setting up the uniform or the fragment shader.
//..............Setting up uniform...............//
const GLuint vals[] = {m_texture1->textureId(), m_texture2->textureId()};
m_program->setUniformValueArray("u_TextureID", vals, 2);
//..............frag Shader.....................//
#version 330 core
out vec4 color;
in vec2 v_textCoord; // Texture coordinate
in float v_index; // (0, 1) Vertex for which image to draw.
// 0 would draw the image of the first texture object
uniform sampler2D u_Texture[2];
void main()
{
int index = int(v_index);
color = texture(u_Texture[index], v_textCoord);
};
I've tried experimenting with the index value in the fragment shader, but it only draws the last texture's image or blacks out. I tried implementing it the way you would with plain OpenGL, but have had no luck.
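One thing worth checking, independent of the batching itself: sampler uniforms are set to texture unit indices, not to the IDs returned by textureId(). A minimal sketch of that pattern with QOpenGLTexture, reusing the names from the question (m_program, m_texture1, m_texture2) and the u_Texture array declared in the shader:
// Bind each texture to its own texture unit, then point the sampler array
// at those unit indices rather than at the texture object IDs.
m_texture1->bind(0);                                     // texture unit 0
m_texture2->bind(1);                                     // texture unit 1
const GLint units[] = {0, 1};
m_program->setUniformValueArray("u_Texture", units, 2);  // matches: uniform sampler2D u_Texture[2]
Also note that in #version 330 core, arrays of samplers can only be indexed with constant expressions (dynamically uniform indexing came in later versions), so indexing u_Texture with a value derived from the varying v_index is itself undefined; branching on the index with if/else is a common workaround.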

Hexagonal tiling of a hemisphere

I need to have a hexagonal grid on a spherical surface, like the one shown here.
Right now I am building a flat hexagonal grid
and then projecting it onto the surface of a hemisphere, like here.
But as you can see, there is an odd artifact: the hexagons near the edge are disproportionately large. There should be a better way to do this so that all the hexagons are nearly equal in size.
I tried the solution that Spektre suggested, but my code was producing the following plot.
I was using a=sqrt(x*x+y*y)/r * (pi/2) because I wanted to rescale a, whose input goes from [0,r], so that the angle a is bounded by [0,pi/2].
But with just a=sqrt(x*x+y*y)/r it works well.
New development with the task, new problem:
The problem now is that the hexagons are not equal in size throughout the shape. I want them to have a uniform shape (area-wise) across the dome and the cylinder. How can I manage this?
Here is what I have in mind:
create planar hex grid on XY plane
The center of your grid must be the center of your sphere (I chose (0,0,0)), and the grid should be at least 2*radius of your sphere in size.
convert planar coordinates to spherical
The distance from (0,0,0) to the point's coordinate in the XY plane is the arc length traveled on the surface of your sphere, so if the processed point is (x,y,z) and the sphere radius is r, then the latitude angle on the sphere is:
a=sqrt(x*x+y*y)/r;
so we can directly compute the z coordinate:
z=r*cos(a);
and scale x,y onto the surface of the sphere:
a=r*sin(a)/sqrt(x*x+y*y);
x*=a; y*=a;
If the z coordinate is negative, then you have crossed the half sphere and should handle the point differently (skip the hex, convert it to a cylinder, or whatever).
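Distilled to a single point, steps 1 and 2 amount to something like this (a sketch of the formulas above; the function name is mine, and the full grid example below does the same thing per point):
#include <math.h>
// Map a planar point (x,y,0) onto a hemisphere of radius r, treating its
// distance from the origin as arc length along the sphere's surface.
void planar_to_hemisphere(double r, double &x, double &y, double &z)
{
    double l = sqrt(x*x + y*y);       // distance from center in the XY plane
    if (l <= 0.0) { z = r; return; }  // the center maps to the pole
    double a = l / r;                 // arc length -> latitude angle
    z = r * cos(a);                   // height on the sphere
    double s = r * sin(a) / l;        // scale bringing (x,y) onto the sphere
    x *= s;
    y *= s;
}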
Here is a small OpenGL/C++ example for this:
//---------------------------------------------------------------------------
const double pi=M_PI;        // assumes <math.h> is included
const double deg=pi/180.0;   // degrees-to-radians factor used below
const int _gx=15; // hex grid size
const int _gy=15;
const int _hy=(_gy+1)<<1; // hex points size
const int _hx=(_gx+1);
double hex[_hy][_hx][3]; // hex grid points
//---------------------------------------------------------------------------
void hexgrid_init(double r) // set hex[][][] to planar hex grid points at xy plane
{
double x0,y0,x,y,z,dx,dy,dz;
double sx,sy,sz;
int i,j;
// hex sizes
sz=sqrt(8.0)*r/double(_hy);
sx=sz*cos(60.0*deg);
sy=sz*sin(60.0*deg);
// center points around (0,0)
x0=(0.5*sz)-double(_hy/4)*(sz+sx);
y0=-double(_hx)*(sy);
if (int(_gx&1)==0) x0-=sz+sx;
if (int(_gy&1)==0) y0-=sy; else y0+=sy;
for (y=y0,i=0;i<_hy;i+=2,y+=sy+sy)
for (x=x0,j=0;j<_hx;j++,x+=sz)
{
hex[i][j][0]=x;
hex[i][j][1]=y;
hex[i][j][2]=0.0;
x+=sz+sx+sx; j++; if (j>=_hx) break;
hex[i][j][0]=x;
hex[i][j][1]=y;
hex[i][j][2]=0.0;
}
for (y=y0+sy,i=1;i<_hy;i+=2,y+=sy+sy)
for (x=x0+sx,j=0;j<_hx;j++,x+=sx+sx+sz)
{
hex[i][j][0]=x;
hex[i][j][1]=y;
hex[i][j][2]=0.0;
x+=sz; j++; if (j>=_hx) break;
hex[i][j][0]=x;
hex[i][j][1]=y;
hex[i][j][2]=0.0;
}
}
//---------------------------------------------------------------------------
void hexgrid_half_sphere(double r0) // convert planar hex grid to half sphere at (0,0,0) with radius r0
{
int i,j;
double x,y,z,a,l;
for (i=0;i<_hy;i++)
for (j=0;j<_hx;j++)
{
x=hex[i][j][0];
y=hex[i][j][1];
z=hex[i][j][2];
l=sqrt(x*x+y*y); // distance from center on xy plane (arclength)
a=l/r0; // convert arclength to angle
z=r0*cos(a); // compute z coordinate (sphere)
if (z>=0.0) // half sphere
{
a=r0*sin(a)/l;
}
else{ // turn hexes above half sphere to cylinder
z=0.5*pi*r0-l;
a=r0/l;
}
x*=a;
y*=a;
hex[i][j][0]=x;
hex[i][j][1]=y;
hex[i][j][2]=z;
}
}
//---------------------------------------------------------------------------
void hex_draw(int x,int y,GLuint style) // draw hex x = <0,_gx) , y = <0,_gy)
{
y<<=1;
if ((x&1)==0) y++;
if ((x<0)||(x+1>=_hx)) return;
if ((y<0)||(y+2>=_hy)) return;
glBegin(style);
glVertex3dv(hex[y+1][x ]);
glVertex3dv(hex[y ][x ]);
glVertex3dv(hex[y ][x+1]);
glVertex3dv(hex[y+1][x+1]);
glVertex3dv(hex[y+2][x+1]);
glVertex3dv(hex[y+2][x ]);
glEnd();
}
//---------------------------------------------------------------------------
And usage:
hexgrid_init(1.5);
hexgrid_half_sphere(1.0);
int x,y;
glColor3f(0.0,0.2,0.3);
for (y=0;y<_gy;y++)
for (x=0;x<_gx;x++)
hex_draw(x,y,GL_POLYGON);
glLineWidth(2);
glColor3f(1.0,1.0,1.0);
for (y=0;y<_gy;y++)
for (x=0;x<_gx;x++)
hex_draw(x,y,GL_LINE_LOOP);
glLineWidth(1);
And preview:
For more info and ideas see related:
Make a sphere with equidistant vertices
Turning a cylinder into a sphere without pinching at the poles

Why does my GLSL shader lighting move around the scene with the objects it's shining on?

I'm following a tutorial on OpenGL ES 2.0 and combining it with a tutorial on GLSL lighting that I found, using a handy Utah teapot from developer.apple.com.
After a lot of fiddling and experimentation I have the teapot drawn moderately correctly on the screen, spinning around all three axes with the 'toon shading' from the lighting tutorial working. There are a few glitches in the geometry due to me simply drawing the whole vertex list as triangle strips (if you look in the teapot.h file there are '-1' markers embedded where I'm supposed to start new triangle strips, but this is only test data and not relevant to my problem).
The bit I am really confused about is how to position a light in the scene. In my Objective-C code I have a float3 vector that contains {0,1,0}, and I pass that into the shader to calculate the intensity of the light.
Why does the light appear to move in the scene too? What I mean is that the light acts as though it's attached to the teapot by an invisible stick, always pointing at the same side of it no matter which direction the teapot is facing.
This is the vertex shader
attribute vec4 Position;
attribute vec4 SourceColor;
attribute vec3 Normal;
uniform mat4 Projection;
uniform mat4 Modelview;
varying vec3 normal;
void main(void) {
normal = Normal;
gl_Position = Projection * Modelview * Position;
}
'Position' is set by the Obj-C code and holds the vertices of the object, 'Normal' is the list of normals, both from a vertex array (VBO); 'Projection' and 'Modelview' are calculated like this:
(A CC3GLMatrix is from the Cocos3D library, mentioned in the GLES tutorial linked above)
CC3GLMatrix *projection = [CC3GLMatrix matrix];
float h = 4.0f * self.frame.size.height / self.frame.size.width;
[projection populateFromFrustumLeft:-2 andRight:2 andBottom:-h/2 andTop:h/2 andNear:1 andFar:100];
glUniformMatrix4fv(_projectionUniform, 1, 0, projection.glMatrix);
CC3GLMatrix *modelView = [CC3GLMatrix matrix];
[modelView populateFromTranslation:CC3VectorMake(0, 0, -7)];
[modelView scaleBy:CC3VectorMake(30, 30, 30)];
_currentRotation += displayLink.duration * 90;
[modelView rotateBy:CC3VectorMake(_currentRotation, _currentRotation, _currentRotation)];
glUniformMatrix4fv(_modelViewUniform, 1, 0, modelView.glMatrix);
And I set the light in the scene by doing
float lightDir[] = {1,0,1};
glUniform3fv(_lightDirUniform, 1, lightDir);
The fragment shader looks like this
varying lowp vec4 DestinationColor; // 1
varying highp vec3 normal;
uniform highp vec3 LightDir;
void main(void) {
highp float intensity;
highp vec4 color;
intensity = dot(LightDir,normal);
if (intensity > 0.95)
color = vec4(1.0,0.5,0.5,1.0);
else if (intensity > 0.5)
color = vec4(0.6,0.3,0.3,1.0);
else if (intensity > 0.25)
color = vec4(0.4,0.2,0.2,1.0);
else
color = vec4(0.2,0.1,0.1,1.0);
gl_FragColor = color;
}
While trying to work this out I came across code that references 'gl_LightSource' and 'gl_NormalMatrix' (which don't exist in GLES), but I don't know what equivalents I have to pass into the shaders from my code. The references to 'eye space', 'camera space', 'world space' and so on are confusing; I know I should probably be converting things between them, but I don't understand why or how (and where: in code, or in the shader?).
Do I need to modify the light source every frame? The code I have for setting it looks too simplistic. I'm not really moving the teapot around, am I; instead I'm moving the entire scene, light and all?
First of all some definitions:
world space: the space your whole world is defined in. By convention it is a static space that never moves.
view space/camera space/eye space: the space your camera is defined in. It is usually a position and rotation relative to world space.
model space: the space your model is defined in. Like camera space, it is usually a position and rotation relative to world space.
light space: same as model space.
In simple examples (and I guess in yours) model space and world space are the same. In addition, OpenGL by itself doesn't have a concept of world space, which doesn't mean you cannot use one. It comes in handy when you want to have more than one object moving around independently in your scene.
Now, what you are doing with your object before rendering is creating a matrix that transforms the vertices of the model into view space, hence 'modelViewMatrix'.
With the light it's a little different in this case. The lighting calculation in your shader is done in model space (you pass the normal through untransformed), so you have to transform your light position into model space every frame.
This is done by calculating something like:
_lightDirUniform = inverseMatrix(model) * inverseMatrix(light) * lightPosition;
The light position is transformed from light space into world space and then into model space. If you don't have a separate world space, just leave out the model-space transformation and you should be fine.
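As a concrete illustration of that last step (a sketch only, not tied to Cocos3D; the helper and its names are mine): for a rigid model transform, the inverse of its rotation part is simply its transpose, so a world-space light direction can be brought into model space on the CPU before uploading it each frame.
#include <cmath>
// m: the upper-left 3x3 of the model matrix, column-major as OpenGL stores it.
// in: light direction in world space; out: the same direction in model space.
void worldDirToModelSpace(const float m[9], const float in[3], float out[3])
{
    // Multiplying by the transpose undoes the rotation (valid for rotation +
    // translation; non-uniform scale would need the full inverse-transpose).
    for (int r = 0; r < 3; ++r)
        out[r] = m[r*3 + 0]*in[0] + m[r*3 + 1]*in[1] + m[r*3 + 2]*in[2];
    // Normalize so any uniform scale in the model matrix does not change
    // the intensity computed by dot(LightDir, normal) in the fragment shader.
    float len = std::sqrt(out[0]*out[0] + out[1]*out[1] + out[2]*out[2]);
    if (len > 0.0f) { out[0] /= len; out[1] /= len; out[2] /= len; }
}
The result is what you would pass to glUniform3fv for LightDir each frame, instead of the fixed world-space value.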

Pass a Qt QImage to a glsl texture sampler

I am writing a rendering engine using Qt and am running into problems with texturing my models.
I have a very simple shader to test texturing:
vertex shader:
attribute vec4 Vertex;
attribute vec2 texcoords;
uniform mat4 mvp;
varying vec2 outTexture;
void main() {
gl_Position = mvp * Vertex;
outTexture = texcoords;
}
and fragment shader:
uniform sampler2D tex;
varying vec2 outTexture;
void main() {
vec4 color = texture2D(tex, outTexture);
gl_FragColor = color;
}
I am passing my texture coordinates to the shaders correctly.
My problem is with binding a QImage and sending it to its texture uniform.
I am using the following code to bind the texture:
const QString& filename;
GLuint m_texture;
QImage image(filename);
image = image.convertToFormat(QImage::Format_ARGB32);
glGenTextures(1, &m_texture);
glBindTexture(GL_TEXTURE_2D, m_texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, image.width(), image.height(), 0, GL_BGRA, GL_UNSIGNED_BYTE, image.bits());
glGenerateMipmap(GL_TEXTURE_2D);
glEnable(GL_TEXTURE_2D);
The shader works and I can pass a uniform for the matrix and attributes for the vertices and texture coordinates, but when I try to send a uniform for the texture in the same way:
effect->setUniformValue(effect->uniformLocation("tex", texture->m_texture));
the program crashes with an “access violation reading location” error, with glGetError() returning “invalid enumerant”.
Interestingly, when I run the program without attempting to send the texture to the sampler, the texture actually appears on the model. This makes me think the way I'm binding it has something to do with legacy texture handling, and that the texture is being bound to a particular texture unit which is being picked up by the shader. This is not the effect I want, because I want the programmer to be able to explicitly state at draw time which texture should be passed to the uniform (just as any other uniform is set).
How can I pass the texture to its sampler, and what do I need to change when binding a texture?
Change it to
effect->setUniformValue(effect->uniformLocation("tex"), texture->m_texture);
or
effect->setUniformValue("tex", texture->m_texture);
Try converting the QImage using:
image = QGLWidget::convertToGLFormat(image);
Another thought: if you are using ES 2, then GL_RGBA8 is not a valid internal format. I think GL_BGRA may be an optional extension, or not available in ES 2 at all. Hope this helps.
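For completeness, the way sampler uniforms are normally meant to be used is that they hold the index of a texture unit rather than the texture object's ID. A minimal sketch of that pattern, reusing the names from the question and assuming effect is a QGLShaderProgram/QOpenGLShaderProgram:
// Bind the texture to texture unit 0, then tell the sampler to read from unit 0.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture->m_texture);
effect->setUniformValue("tex", 0);   // 0 = texture unit index, not m_texture
This would also explain why the texture showed up without setting the uniform at all: sampler uniforms default to 0, and the texture happened to be bound while unit 0 was active.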

OpenGL ES 2.0 specifying normals for vertex shader

I am not able to get the right shading on all the faces of the cube I drew. I get a smooth transition from one face to another, which does not show the edge properly.
On a different face (where I do get the desired edge), the shading shows the two triangles which make up that face.
I believe the problem is with the normals I am specifying. I am attaching my vertex and normal arrays and my vertex and fragment shader code; the vertex and normal arrays are the same.
I guess the problem is with the normals, but I have tried almost everything with them and the effect does not change.
// the normal array and the vertex array are the same
static const float normals[]=
{
//v0,v1,v2,v3,
//v4,v5,v6,v7
0.5,0.5,0.5, -0.5,0.5,0.5, -0.5,-0.5,0.5, 0.5,-0.5,0.5,
0.5,0.5,-0.5, -0.5,0.5,-0.5, -0.5,-0.5,-0.5, 0.5,-0.5,-0.5
};
//vertex shader
attribute vec4 color;
attribute vec4 position;
attribute vec4 normal;
uniform mat4 u_mvpMatrix;
uniform vec4 lightDirection;
uniform vec4 lightDiffuseColor;
uniform float translate;
varying vec4 frontColor;
//varying vec4 colorVarying;
void main()
{
vec4 normalizedNormal = normalize(u_mvpMatrix * normal);
vec4 normalizedLightDirection = normalize(lightDirection);
float nDotL = max(dot(normalizedNormal, normalizedLightDirection), 0.0);
frontColor = color * nDotL * lightDiffuseColor;
gl_Position = u_mvpMatrix * position;
}
//fragment shader
varying lowp vec4 frontColor;
void main()
{
gl_FragColor = frontColor;
}
Please help. Thanks in advance!
Your problem is not related to your normal matrix, it is related to your input normals.
From what I read, you provide 8 normals, one for each vertex, meaning that you provide pre-averaged normals; there is no way your vertex shader can un-average them.
To have proper discontinuities in your lighting, you need discontinuous normals: for each quad you have 4 vertices, each with a normal matching the quad normal. So you end up with 6x4 = 24 normals (and as many vertices), as illustrated below.
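To make that concrete, here is what the duplicated data could look like for one face of the cube (a sketch with my own layout; the other five faces follow the same pattern, giving 24 positions and 24 normals in total):
// Front face (+Z): the four corner positions, each paired with the *face*
// normal instead of an averaged per-vertex normal.
static const float frontFacePositions[] =
{
     0.5f,  0.5f, 0.5f,   // v0
    -0.5f,  0.5f, 0.5f,   // v1
    -0.5f, -0.5f, 0.5f,   // v2
     0.5f, -0.5f, 0.5f,   // v3
};
static const float frontFaceNormals[] =
{
    0.0f, 0.0f, 1.0f,
    0.0f, 0.0f, 1.0f,
    0.0f, 0.0f, 1.0f,
    0.0f, 0.0f, 1.0f,
};
Corner positions that are shared between faces are simply repeated, once per face, each time with that face's normal.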
You may want to take a look at this for a more detailed explanation:
http://www.songho.ca/opengl/gl_vertexarray.html
