WebGL2 not writing second output of `out int[2]` result

When I read output from the fragment shader:
#version 300 es
precision highp float;
precision highp int;
out int outColor[2];
void main() {
outColor[0] = 5;
outColor[1] = 2;
}
rendered into a 32-bit integer RG texture, I find that only the 5s have been written but not the 2s. Presumably I've got some format specifier wrong somewhere, or I might be attaching the framebuffer to the wrong thing (gl.COLOR_ATTACHMENT0). I've tried varying several of the arguments, but most changes I make result in nothing coming out because the formats no longer line up; it might be that I need to change three constants in tandem.
Here's my self-contained source. The output I want is an array alternating between 5 and 2. Instead, I get an array alternating between 5 and semi-random large constants and 0.
let canvas /** @type {HTMLCanvasElement} */ = document.createElement('canvas');
let gl = canvas.getContext("webgl2");
let vertexShader = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vertexShader, `#version 300 es
in vec4 a_position;
void main() {
gl_Position = a_position;
}
`);
gl.compileShader(vertexShader);
console.assert(gl.getShaderParameter(vertexShader, gl.COMPILE_STATUS), "Vertex shader compile failed.");
let fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fragmentShader, `#version 300 es
precision highp float;
precision highp int;
out int outColor[2];
void main() {
outColor[0] = 5;
outColor[1] = 2;
}
`);
gl.compileShader(fragmentShader);
let program = gl.createProgram();
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);
gl.linkProgram(program);
let positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([-3, -1, 1, 3, 1, -1]), gl.STATIC_DRAW);
let positionAttributeLocation = gl.getAttribLocation(program, "a_position");
let vao = gl.createVertexArray();
gl.bindVertexArray(vao);
gl.enableVertexAttribArray(positionAttributeLocation);
gl.vertexAttribPointer(positionAttributeLocation, 2, gl.FLOAT, false, 0, 0);
let w = 4;
let h = 4;
let texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RG32I, w, h, 0, gl.RG_INTEGER, gl.INT, null);
let frameBuffer = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, frameBuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);
gl.useProgram(program);
gl.viewport(0, 0, w, h);
gl.drawArrays(gl.TRIANGLES, 0, 3);
let outputBuffer = new Int32Array(w*h*2);
gl.readPixels(0, 0, w, h, gl.RG_INTEGER, gl.INT, outputBuffer);
console.log(outputBuffer);

Arrayed outputs like out int outColor[2]; are used for writing to multiple render targets. In your case that means two render targets with one channel each, because you've used a scalar type.
To express a single render target with two channels, try out ivec2 outColor;.
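As a minimal sketch, assuming the rest of the question's setup stays the same (RG32I texture attached to gl.COLOR_ATTACHMENT0, read back with gl.RG_INTEGER / gl.INT), the fragment shader would become:
#version 300 es
precision highp float;
precision highp int;
// one render target with two integer channels, matching the RG32I attachment
out ivec2 outColor;
void main() {
  outColor = ivec2(5, 2);
}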

Related

Receiving denormalized output texture coordinates in Frag shader

Update
See rationale at the end of my question below
Using WebGL2 I can access a texel by its denormalized coordinates (sorry, I don't know the right lingo for this). That means I don't have to scale them down to 0-1 like I do with texture2D().
However, the input to the fragment shader is still a vec2/vec3 with normalized values.
Is there a way to declare in/out variables in the vertex and fragment shaders so that I don't have to scale the coordinates?
somewhere in vertex shader:
...
out vec2 TextureCoordinates;
somewhere in frag shader:
...
in vec2 TextureCoordinates;
I would like for TextureCoordinates to be ivec2 and already scaled.
This question, and all my other questions on WebGL, relate to general-purpose computing using WebGL. We are trying to do tensor (multi-dimensional matrix) operations using WebGL.
We map our data in a few ways to a Texture. The simplest approach we follow is -- assuming we can access our data as a flat array -- to lay it out along the texture's width and go up the texture's height until we're done.
Since our thinking, logic, and calculations are all based on tensor/matrix indices -- inside the fragment shader -- we'd have to map between the X-Y texture coordinates and those indices. The intermediate step here is to calculate an offset for a given texel position. Then from that offset we can calculate the matrix indices from its strides.
Calculating an offset in WebGL 1 for very large textures seems to take much longer than in WebGL 2 using integer coordinates. See below:
WebGL 1 offset calculation
int coordsToOffset(vec2 coords, int width, int height) {
float s = coords.s * float(width);
float t = coords.t * float(height);
int offset = int(t) * width + int(s);
return offset;
}
vec2 offsetToCoords(int offset, int width, int height) {
int t = offset / width;
int s = offset - t*width;
vec2 coords = (vec2(s,t) + vec2(0.5,0.5)) / vec2(width, height);
return coords;
}
WebGL 2 offset calculation in the presence of int coords
int coordsToOffset(ivec2 coords, int width) {
return coords.t * width + coords.s;
}
ivec2 offsetToCoords(int offset, int width) {
int t = offset / width;
int s = offset - t*width;
return ivec2(s,t);
}
It should be clear that for a series of large texture operations we're saving hundreds of thousands of operations just on the offset/coords calculation.
It's not clear why you want to do what you're trying to do. It would be better to ask something like "I'm trying to draw an image / implement post-processing glow / do ray tracing / ... and to do that I want to use un-normalized texture coordinates because ..." and then we can tell you if your solution is going to work and how to solve it.
In any case, passing int, unsigned int, ivec2/3/4, or uvec2/3/4 as a varying is supported, but interpolation is not: you have to declare them as flat.
Still, you can pass un-normalized values as float or vec2/3/4 and then convert to int or ivec2/3/4 in the fragment shader.
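For reference, here is a minimal sketch of the flat route; the attribute and varying names are made up. Note that a flat varying takes its value from the provoking vertex, so it does not vary across the triangle, which is why the example below passes a float varying and converts it instead.
Vertex shader:
#version 300 es
in vec4 position;
in ivec2 texelcoord;
flat out ivec2 v_texelcoord; // integer varyings must be declared flat
void main() {
  v_texelcoord = texelcoord;
  gl_Position = position;
}
Fragment shader:
#version 300 es
precision mediump float;
flat in ivec2 v_texelcoord; // one value per primitive, no interpolation
uniform sampler2D tex;
out vec4 outColor;
void main() {
  outColor = texelFetch(tex, v_texelcoord, 0);
}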
The other issue is that you get no filtering with texelFetch, the function that takes texel coordinates instead of normalized texture coordinates. It just returns the exact value of a single pixel. It does not support filtering like the normal texture function.
Example:
function main() {
const gl = document.querySelector('canvas').getContext('webgl2');
if (!gl) {
return alert("need webgl2");
}
const vs = `#version 300 es
in vec4 position;
in ivec2 texelcoord;
out vec2 v_texcoord;
void main() {
v_texcoord = vec2(texelcoord);
gl_Position = position;
}
`;
const fs = `#version 300 es
precision mediump float;
in vec2 v_texcoord;
out vec4 outColor;
uniform sampler2D tex;
void main() {
outColor = texelFetch(tex, ivec2(v_texcoord), 0);
}
`;
// compile shaders, link program, look up locations
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
// create buffers via gl.createBuffer, gl.bindBuffer, gl.bufferData
const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
position: {
numComponents: 2,
data: [
-.5, -.5,
.5, -.5,
0, .5,
],
},
texelcoord: {
numComponents: 2,
data: new Int32Array([
0, 0,
15, 0,
8, 15,
]),
}
});
// make a 16x16 texture
const ctx = document.createElement('canvas').getContext('2d');
ctx.canvas.width = 16;
ctx.canvas.height = 16;
for (let i = 23; i > 0; --i) {
ctx.fillStyle = `hsl(${i / 23 * 360 | 0}, 100%, ${i % 2 ? 25 : 75}%)`;
ctx.beginPath();
ctx.arc(8, 15, i, 0, Math.PI * 2, false);
ctx.fill();
}
const tex = twgl.createTexture(gl, { src: ctx.canvas });
gl.useProgram(programInfo.program);
twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
// no need to set uniforms since they default to 0
// and only one texture which is already on texture unit 0
gl.drawArrays(gl.TRIANGLES, 0, 3);
}
main();
<canvas></canvas>
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
So, in response to your updated question, it's still not clear what you want to do. Why do you want to pass varyings to the fragment shader? Can't you just do whatever math you want in the fragment shader itself?
Example:
uniform sampler2D tex;
out float result;
// sum all the values in the texture
vec4 sum4 = vec4(0);
ivec2 texDim = textureSize(tex, 0);
for (int y = 0; y < texDim.y; ++y) {
for (int x = 0; x < texDim.x; ++x) {
sum4 += texelFetch(tex, ivec2(x, y), 0);
}
}
result = sum4.x + sum4.y + sum4.z + sum4.w;
Example 2:
uniform isampler2D indices;
uniform sampler2D data;
out float result;
// sum only the values in data pointed to by indices
vec4 sum4 = vec4(0);
ivec2 texDim = textureSize(indices, 0);
for (int y = 0; y < texDim.y; ++y) {
for (int x = 0; x < texDim.x; ++x) {
ivec2 index = texelFetch(indices, ivec2(x, y), 0).xy;
sum4 += texelFetch(data, index, 0);
}
}
result = sum4.x + sum4.y + sum4.z + sum4.w;
Note that I'm also not an expert in GPGPU, but I have a hunch the code above is not the fastest way, because I believe parallelization happens based on outputs; the code above has only one output, so no parallelization? It would be easy to change it so that it takes a block ID, tile ID, or area ID as input and computes just the sum for that area. Then you'd write out a larger texture with the sum of each block, and finally sum the block sums.
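A rough sketch of that block-sum idea, in the same abbreviated style as the examples above; the block size, the single-channel float output, and the one-fragment-per-block layout are all assumptions:
// pass 1: one fragment per output texel; each fragment sums one BLOCK x BLOCK tile of the input
uniform sampler2D data;
out float blockSum;
const int BLOCK = 16;
ivec2 blockOrigin = ivec2(gl_FragCoord.xy) * BLOCK;
vec4 sum4 = vec4(0);
for (int y = 0; y < BLOCK; ++y) {
  for (int x = 0; x < BLOCK; ++x) {
    sum4 += texelFetch(data, blockOrigin + ivec2(x, y), 0);
  }
}
blockSum = sum4.x + sum4.y + sum4.z + sum4.w;
// pass 2: run the single-output sum shader above over the much smaller block-sum texture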
Also, dependent and non-uniform texture reads are a known perf issue. The first example reads the texture in order, which is cache friendly. The second example reads the texture in a random order (specified by indices), which is not cache friendly.

Why does creating a 2x8 R8 texture from a 16 byte buffer fail in webgl2?

When I try to create a 2x8 R8 texture in webgl2, I get an error. This doesn't happen for a 4x8 texture. If I double the size of the input buffer compared to what I expect, the 2x8 succeeds.
Does webgl2 have a 'column alignment' of 4 when creating/reading textures?
Here is some code that reproduces the issue. I tested it on Windows in both Chrome and Firefox:
function test_read(w) {
let gl = document.createElement('canvas').getContext('webgl2');
let h = 8;
let data = new Uint8Array(w*h);
data[5] = 5;
let texture = gl.createTexture();
let frameBuffer = gl.createFramebuffer();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.bindFramebuffer(gl.FRAMEBUFFER, frameBuffer);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.R8, w, h, 0, gl.RED, gl.UNSIGNED_BYTE, data);
if (gl.getError() !== gl.NO_ERROR) {
return 'bad w=' + w;
}
return 'good w=' + w;
}
console.log(test_read(4)); // good w=4
console.log(test_read(2)); // bad w=2
The error code coming out is 0x502 (INVALID_OPERATION). A similar issue happens when reading textures that were created by expanding the buffer: it seems to expect a 'column alignment' of 4.
You need to set gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1).
The default UNPACK_ALIGNMENT is 4, which means WebGL expects every row of pixels to be a multiple of 4 bytes. Since you're using R8 (1 byte per pixel) and a width of 2, your rows are only 2 bytes long. When you change the width to 4 it starts working. (The analogous setting for reading pixels back with gl.readPixels is gl.pixelStorei(gl.PACK_ALIGNMENT, 1).)
function test_read(w) {
let gl = document.createElement('canvas').getContext('webgl2');
let h = 8;
let data = new Uint8Array(w*h);
data[5] = 5;
// ---=== ADDED ===---
gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1);
let texture = gl.createTexture();
let frameBuffer = gl.createFramebuffer();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.bindFramebuffer(gl.FRAMEBUFFER, frameBuffer);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.R8, w, h, 0, gl.RED, gl.UNSIGNED_BYTE, data);
if (gl.getError() !== gl.NO_ERROR) {
return 'bad w=' + w;
}
return 'good w=' + w;
}
console.log(test_read(4)); // good w=4
console.log(test_read(2)); // bad w=2

Nothing gets drawn as soon as GL_DEPTH_TEST is enabled

As soon as I set glEnable(GL_DEPTH_TEST) in the following code, nothing except for the clear color gets drawn on screen.
window.cpp
void Window::initializeGL() {
makeCurrent();
initializeOpenGLFunctions();
glClearColor(0.0f, 0.03f, 0.2f, 1.0f);
Shaders::initShaders();
glEnable(GL_DEPTH_TEST);
currentLevel.addTiles();
viewMatrix.setToIdentity();
viewMatrix.translate(0.0f, 0.0f, -8.0f);
viewMatrix.rotate(45.0f, -1.0f, 0.0f, 0.0f);
}
void Window::resizeGL(int width, int height) {
const float fov = 45.0f,
zNear = 0.0f,
zFar = 1000.0f;
projectionMatrix.setToIdentity();
projectionMatrix.perspective(fov, width / float(height), zNear, zFar);
}
void Window::paintGL() {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
currentLevel.setProjectionMatrix(projectionMatrix);
currentLevel.setViewMatrix(viewMatrix);
currentLevel.draw();
}
tile.cpp:
// Constructor with custom shader
Tile::Tile(QVector3D v1, QVector3D v2, QVector3D v3, QVector3D v4, QOpenGLShaderProgram* shaderProgram) :
vbo(QOpenGLBuffer::VertexBuffer),
cbo(QOpenGLBuffer::VertexBuffer),
ibo(QOpenGLBuffer::IndexBuffer) {
// Calculate surface normal & second set of vertices
QVector3D surfaceNormal = QVector3D::normal(v1, v2, v3);
QVector3D v1_n = v1 - surfaceNormal;
QVector3D v2_n = v2 - surfaceNormal;
QVector3D v3_n = v3 - surfaceNormal;
QVector3D v4_n = v4 - surfaceNormal;
// Set up rectangular mesh from given corner vectors
vertices = {
v1, v2, v3, v4,
v1_n, v2_n, v3_n, v4_n
};
colors = {
color1, color1, color2, color2,
color1, color1, color2, color2
};
indices = {
// Face 1
0, 1, 2,
2, 3, 0,
// Face 2
0, 4, 5,
5, 1, 0,
// Face 3
4, 5, 6,
6, 7, 4
};
this->shaderProgram = shaderProgram;
cacheShaderLocations();
createBuffers();
}
Tile::~Tile() {
vbo.destroy();
vao.destroy();
ibo.destroy();
}
// Cache Uniform Locations for shaders
void Tile::cacheShaderLocations() {
positionLocation = shaderProgram->attributeLocation("position");
colorLocation = shaderProgram->attributeLocation("color");
modelLocation = shaderProgram->uniformLocation("model");
viewLocation = shaderProgram->uniformLocation("view");
projectionLocation = shaderProgram->uniformLocation("projection");
}
// Create buffers
void Tile::createBuffers() {
// Vertex Buffer Object
vbo.create();
vbo.bind();
vbo.setUsagePattern(QOpenGLBuffer::StaticDraw);
vbo.allocate(vertices.constData(), vertices.size() * sizeof(QVector3D));
vbo.release();
// Color Buffer Object
cbo.create();
cbo.bind();
cbo.setUsagePattern(QOpenGLBuffer::StaticDraw);
cbo.allocate(colors.constData(), colors.size() * sizeof(QVector3D));
cbo.release();
// Index Buffer Object
ibo.create();
ibo.bind();
ibo.setUsagePattern(QOpenGLBuffer::StaticDraw);
ibo.allocate(indices.constData(), indices.size() * sizeof(GLushort));
ibo.release();
// Vertex Array Object
vao.create();
// Setup buffer attributes
shaderProgram->bind();
vao.bind();
vbo.bind();
shaderProgram->enableAttributeArray(positionLocation);
shaderProgram->setAttributeBuffer(positionLocation, GL_FLOAT, 0, 3, 0);
cbo.bind();
shaderProgram->enableAttributeArray(colorLocation);
shaderProgram->setAttributeBuffer(colorLocation, GL_FLOAT, 0, 3, 0);
ibo.bind();
vao.release();
// Release buffers & shader program
vbo.release();
cbo.release();
ibo.release();
shaderProgram->release();
}
void Tile::draw() {
shaderProgram->bind();
// Send uniforms to shader
shaderProgram->setUniformValue(projectionLocation, projectionMatrix);
shaderProgram->setUniformValue(viewLocation, viewMatrix);
shaderProgram->setUniformValue(modelLocation, modelMatrix);
// Draw vertices
vao.bind();
glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_SHORT, 0);
vao.release();
shaderProgram->release();
}
As for the depth buffer itself, according to the Qt documentation it is enabled by default. In main.cpp I set the surface format/context like this:
// Set OpenGL Version information
QSurfaceFormat format;
format.setDepthBufferSize(24);
format.setRenderableType(QSurfaceFormat::OpenGL);
format.setProfile(QSurfaceFormat::CoreProfile);
format.setVersion(3,3);
I really have no clue why nothing gets drawn when I try to use depth testing so I would greatly appreciate any help.

Qt OpenGL shader texture coordinates

I use an OpenGL shader to apply a median filter to an image. I copy the input image into the in_fbo buffer. That all works fine.
QGLFramebufferObject *in_fbo, *out_fbo;
painter.begin(in_fbo); //Copy QImage to QGLFramebufferObject
painter.drawImage(0,0,image_in,0,0,width,height);
painter.end();
out_fbo->bind();
glViewport( 0, 0, nWidth, nHeight );
glMatrixMode( GL_PROJECTION );
glLoadIdentity();
glOrtho( 0.0, nWidth, 0.0, nHeight, -1.0, 1.0 );
glMatrixMode( GL_MODELVIEW );
glLoadIdentity( );
glEnable( GL_TEXTURE_2D );
out_fbo->drawTexture( QPointF(0.0,0.0), in_fbo->texture( ), GL_TEXTURE_2D );
But in the shader code I need to divide the vertex position by the width and height of the image, because texture coordinates are normalized to the range 0 to 1.
How do I correctly calculate the texture coordinates?
//vertex shader
varying vec2 pos;
void main( void )
{
pos = gl_Vertex.xy;
gl_Position = ftransform( );
}
//fragment shader
#extension GL_ARB_texture_rectangle : enable
uniform sampler2D texture0;
uniform int imgWidth;
uniform int imgHeight;
uniform int len;
varying vec2 pos;
#define MAX_LEN (100)
void main(){
float v[ MAX_LEN ];
for (int i = 0; i < len; i++) {
vec2 posi = pos + float(i);
posi.x = posi.x / float( imgWidth );
posi.y = posi.y / float( imgHeight );
v[i] = texture2D(texture0, posi).r;
}
//
//.... Calculating new value
//
gl_FragColor = vec4( m, m, m, 1.0 );
}
I did this in openFrameworks before, but the shader for an OF texture does not work for a Qt texture. I suppose that's because OF creates textures with textureTarget = GL_TEXTURE_RECTANGLE_ARB. Now the result of applying the shader above isn't correct: it isn't identical to the result of the old shader (there are a few pixels with different colors). I don't know how to modify the shader above :(.
Old shaders:
//vertex
#version 120
#extension GL_ARB_texture_rectangle : enable
void main() {
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
gl_TexCoord[0] = gl_MultiTexCoord0;
gl_FrontColor = gl_Color;
}
//fragment
#version 120
#extension GL_ARB_texture_rectangle : enable
uniform sampler2D texture0;
uniform int len;
void main(){
vec2 pos = gl_TexCoord[0].xy;
pos.x = int( pos.x );
pos.y = int( pos.y );
float v[ MAX_LEN ];
for (int i=0; i<len; i++) {
vec2 posi = pos + i;
posi.x = int( posi.x + 0.5 ) + 0.5;
posi.y = int( posi.y + 0.5 ) + 0.5;
v[i] = texture2D(texture0, posi).r;
}
//
//.... Calculating new value
//
gl_FragColor = vec4( m, m, m, 1.0 );
}
OpenGL code from OpenFrameworks lib
texData.width = w;
texData.height = h;
texData.tex_w = w;
texData.tex_h = h;
texData.textureTarget = GL_TEXTURE_RECTANGLE_ARB;
texData.bFlipTexture = true;
texData.glType = GL_RGBA;
// create & setup FBO
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
// Create the render buffer for depth
glGenRenderbuffersEXT(1, &depthBuffer);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthBuffer);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, texData.tex_w, texData.tex_h);
// create & setup texture
glGenTextures(1, (GLuint *)(&texData.textureID)); // could be more then one, but for now, just one
glBindTexture(texData.textureTarget, (GLuint)(texData.textureID));
glTexParameterf(texData.textureTarget, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(texData.textureTarget, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameterf(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(texData.textureTarget, 0, texData.glType, texData.tex_w, texData.tex_h, 0, texData.glType, GL_UNSIGNED_BYTE, 0);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
// attach it to the FBO so we can render to it
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, texData.textureTarget, (GLuint)texData.textureID, 0);
I do not think you actually want to use the texture's dimensions to do this. From the sounds of things, this is a simple fullscreen image filter and you really just want fragment coordinates mapped into the range [0.0, 1.0]. If this is the case, then gl_FragCoord.xy / viewport.xy, where viewport is a 2D uniform that holds the width and height of your viewport, ought to work for your texture coordinates (in the fragment shader).
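A minimal sketch of that first approach, in the same GLSL 1.20 style as the question's shaders (the viewport uniform is one you would have to add and set from the application):
uniform sampler2D texture0;
uniform vec2 viewport; // width and height of the render target, in pixels
void main( void )
{
    // gl_FragCoord.xy is in pixels and already sits on texel centers (0.5, 1.5, ...),
    // so dividing by the viewport size gives texel-centered normalized coordinates
    vec2 texCoord = gl_FragCoord.xy / viewport;
    gl_FragColor = texture2D( texture0, texCoord );
}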
vec2 texCoord = vec2 (transformed_pos.x, transformed_pos.y) / transformed_pos.w * vec2 (0.5, 0.5) + vec2 (0.5, 0.5) may also work using the same principle -- clip-space coordinates transformed into NDC and then mapped to texture-space. This approach will not properly account for texel centers ((0.5, 0.5) rather than (0.0, 0.0)), however, and can present problems when texture filtering is enabled and the wrap mode is not GL_CLAMP_TO_EDGE.
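A sketch of that second approach, again in the question's GLSL 1.20 style (the varying name is made up; the divide by w is done per fragment):
//vertex shader
varying vec4 clipPos;
void main( void )
{
    clipPos = ftransform( );
    gl_Position = clipPos;
}
//fragment shader
uniform sampler2D texture0;
varying vec4 clipPos;
void main( void )
{
    // clip space -> NDC (divide by w) -> [0, 1] texture space
    vec2 texCoord = clipPos.xy / clipPos.w * 0.5 + 0.5;
    gl_FragColor = texture2D( texture0, texCoord );
}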

GLSL, reading wrong value inside a fragment shader for a bounded depth texture

I am applying a slightly modified version of the classic depth peeling algorithm. Basically, I render all the opaque objects first and then use that depth as the minimum depth, because since they are opaque it doesn't make sense not to discard fragments deeper than them.
I first tested it on a small test case and it works flawlessly.
Now I am applying this algorithm to my main application, but for some unknown reason it doesn't work and I am going crazy. The main problem is that I keep reading the value 0 from the opaque depth texture bound in the fragment shader of the next stage.
To sum up, this is the fbo for the opaque stuff:
opaqueDepthTexture = new int[1];
opaqueColorTexture = new int[1];
opaqueFbo = new int[1];
gl3.glGenTextures(1, opaqueDepthTexture, 0);
gl3.glGenTextures(1, opaqueColorTexture, 0);
gl3.glGenFramebuffers(1, opaqueFbo, 0);
gl3.glBindTexture(GL3.GL_TEXTURE_RECTANGLE, opaqueDepthTexture[0]);
gl3.glTexImage2D(GL3.GL_TEXTURE_RECTANGLE, 0, GL3.GL_DEPTH_COMPONENT32F, width, height, 0,
GL3.GL_DEPTH_COMPONENT, GL3.GL_FLOAT, null);
gl3.glTexParameteri(GL3.GL_TEXTURE_RECTANGLE, GL3.GL_TEXTURE_BASE_LEVEL, 0);
gl3.glTexParameteri(GL3.GL_TEXTURE_RECTANGLE, GL3.GL_TEXTURE_MAX_LEVEL, 0);
gl3.glBindTexture(GL3.GL_TEXTURE_RECTANGLE, opaqueColorTexture[0]);
gl3.glTexImage2D(GL3.GL_TEXTURE_RECTANGLE, 0, GL3.GL_RGBA, width, height, 0,
GL3.GL_RGBA, GL3.GL_FLOAT, null);
gl3.glTexParameteri(GL3.GL_TEXTURE_RECTANGLE, GL3.GL_TEXTURE_BASE_LEVEL, 0);
gl3.glTexParameteri(GL3.GL_TEXTURE_RECTANGLE, GL3.GL_TEXTURE_MAX_LEVEL, 0);
gl3.glBindFramebuffer(GL3.GL_FRAMEBUFFER, opaqueFbo[0]);
gl3.glFramebufferTexture2D(GL3.GL_FRAMEBUFFER, GL3.GL_DEPTH_ATTACHMENT, GL3.GL_TEXTURE_RECTANGLE,
opaqueDepthTexture[0], 0);
gl3.glFramebufferTexture2D(GL3.GL_FRAMEBUFFER, GL3.GL_COLOR_ATTACHMENT0, GL3.GL_TEXTURE_RECTANGLE,
opaqueColorTexture[0], 0);
checkBindedFrameBuffer(gl3);
Here I just clear the depth (it defaults to 1); I even commented out the opaque rendering:
/**
* (1) Initialize Opaque FBO.
*/
gl3.glBindFramebuffer(GL3.GL_FRAMEBUFFER, opaqueFbo[0]);
gl3.glDrawBuffer(GL3.GL_COLOR_ATTACHMENT0);
gl3.glClearColor(1, 1, 1, 1);
gl3.glClear(GL3.GL_COLOR_BUFFER_BIT | GL3.GL_DEPTH_BUFFER_BIT);
gl3.glEnable(GL3.GL_DEPTH_TEST);
dpOpaque.bind(gl3);
{
// EC_Graph.instance.getRoot().renderDpOpaque(gl3, dpOpaque, new MatrixStack(), properties);
}
dpOpaque.unbind(gl3);
And I have further confirmation of this from the following:
FloatBuffer fb = FloatBuffer.allocate(1 * GLBuffers.SIZEOF_FLOAT);
gl3.glReadPixels(width / 2, height / 2, 1, 1, GL3.GL_DEPTH_COMPONENT, GL3.GL_FLOAT, fb);
System.out.println("opaque fb.get(0) " + fb.get(0));
If I change the clear depth to 0.9, for example, I read back 0.9, so this is ok.
Now I initialize the minimum depth buffer by rendering all the geometry with alpha < 1, and I bind the previous depth texture, the one used in the opaque rendering, to the
uniform sampler2D opaqueDepthTexture;
I temporarily switched the rendering of this pass to the default framebuffer
/**
* (2) Initialize Min Depth Buffer.
*/
gl3.glBindFramebuffer(GL3.GL_FRAMEBUFFER, 0);
gl3.glDrawBuffer(GL3.GL_BACK);
// gl3.glBindFramebuffer(GL3.GL_FRAMEBUFFER, blendFbo[0]);
// gl3.glDrawBuffer(GL3.GL_COLOR_ATTACHMENT0);
gl3.glClearColor(0, 0, 0, 1);
gl3.glClear(GL3.GL_COLOR_BUFFER_BIT | GL3.GL_DEPTH_BUFFER_BIT);
gl3.glEnable(GL3.GL_DEPTH_TEST);
if (cullFace) {
gl3.glEnable(GL3.GL_CULL_FACE);
}
dpInit.bind(gl3);
{
gl3.glActiveTexture(GL3.GL_TEXTURE1);
gl3.glBindTexture(GL3.GL_TEXTURE_RECTANGLE, opaqueDepthTexture[0]);
gl3.glUniform1i(dpInit.getOpaqueDepthTextureUL(), 1);
gl3.glBindSampler(1, sampler[0]);
{
EC_Graph.instance.getRoot().renderDpTransparent(gl3, dpInit, new MatrixStack(), properties);
}
gl3.glBindTexture(GL3.GL_TEXTURE_RECTANGLE, 0);
gl3.glBindSampler(1, 0);
}
dpInit.unbind(gl3);
This is the dpInit Fragment Shader:
#version 330
out vec4 outputColor;
uniform sampler2D texture0;
in vec2 oUV;
uniform sampler2D opaqueDepthTexture;
/*
* Layout {lighting, normal orientation, active, selected}
*/
uniform ivec4 settings;
const vec3 selectionColor = vec3(1, .5, 0);
const vec4 inactiveColor = vec4(.5, .5, .5, .2);
vec4 CalculateLight();
void main()
{
float opaqueDepth = texture(opaqueDepthTexture, gl_FragCoord.xy).r;
if(gl_FragCoord.z > opaqueDepth) {
//discard;
}
vec4 color = (1 - settings.x) * texture(texture0, oUV) + settings.x * CalculateLight();
if(settings.w == 1) {
if(settings.z == 1) {
color = vec4(selectionColor, color.q);
} else {
color = vec4(selectionColor, inactiveColor.w);
}
} else {
if(settings.z == 0) {
color = inactiveColor;
}
}
outputColor = vec4(color.rgb * color.a, 1.0 - color.a);
outputColor = vec4(.5, 1, 1, 1.0 - color.a);
if(opaqueDepth == 0)
outputColor = vec4(1, 0, 0, 1);
else
outputColor = vec4(0, 1, 0, 1);
}
Ignore the middle; the important part is right at the beginning, where I read the red component of the previous depth texture, and at the end, where I compare it. The geometry I obtain is red, which means the value I read from the opaqueDepthTexture is 0...
The question is why?
After the dpInit rendering, if I bind the opaqueFbo again and read the depth, it is always the clear depth: 1 by default, or .9 if I cleared it with .9. So that part works.
The problem is really that I read the wrong value in the dpInit FS from a bound depth texture. Why?
For clarification, this is the sampler:
private void initSampler(GL3 gl3) {
sampler = new int[1];
gl3.glGenSamplers(1, sampler, 0);
gl3.glSamplerParameteri(sampler[0], GL3.GL_TEXTURE_WRAP_S, GL3.GL_CLAMP_TO_EDGE);
gl3.glSamplerParameteri(sampler[0], GL3.GL_TEXTURE_WRAP_T, GL3.GL_CLAMP_TO_EDGE);
gl3.glSamplerParameteri(sampler[0], GL3.GL_TEXTURE_MIN_FILTER, GL3.GL_NEAREST);
gl3.glSamplerParameteri(sampler[0], GL3.GL_TEXTURE_MAG_FILTER, GL3.GL_NEAREST);
}
PS: checking all the components, I see that the opaqueDepthTexture always has the values (0, 0, 0, 1).
Oh god, I found it: in the dpInit FS,
uniform sampler2D opaqueDepthTexture;
should be
uniform sampler2DRect opaqueDepthTexture;
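That is because the depth texture was created with the GL_TEXTURE_RECTANGLE target, which has to be sampled through a rectangle sampler. A minimal sketch of how the corrected declaration is used, with the rest of the dpInit shader omitted; with sampler2DRect, texture() takes un-normalized texel coordinates, which is exactly what the gl_FragCoord.xy lookup already supplies:
#version 330
// GL_TEXTURE_RECTANGLE must be sampled with a rectangle sampler
uniform sampler2DRect opaqueDepthTexture;
out vec4 outputColor;
void main()
{
    // rectangle samplers take un-normalized texel coordinates,
    // so gl_FragCoord.xy can be used directly
    float opaqueDepth = texture(opaqueDepthTexture, gl_FragCoord.xy).r;
    outputColor = vec4(vec3(opaqueDepth), 1.0);
}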

Resources