Why is using isampler2D causing my shader to fail to compile in WebGL2? - webgl2

I have a fragment shader which is failing to compile in a WebGL 2 context:
#version 300 es
precision highp float;
uniform isampler2D tex_in;
out vec2 outColor;
void main() {
outColor = vec2(0.0, 0.0);
}
When I gl.compileShader this shader, gl.getShaderParameter(glFragmentShader, gl.COMPILE_STATUS) returns false. If I change isampler2D to sampler2D, the compile succeeds instead.
According to the WebGL 2 quick reference, isampler2D is a supported keyword. What am I doing wrong?

You need to specify a precision for the sampler:
uniform highp isampler2D tex_in;
Note the highp.
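Unlike sampler2D, the integer sampler types have no default precision in GLSL ES 3.00 fragment shaders, which is why the error only appears for isampler2D. Instead of qualifying each declaration, you can also declare a default precision for the type once; a minimal sketch:

#version 300 es
precision highp float;
// give every isampler2D declared after this statement highp by default
precision highp isampler2D;
uniform isampler2D tex_in;
out vec4 outColor;
void main() {
  // texelFetch returns an ivec4 for integer samplers; convert for output
  outColor = vec4(texelFetch(tex_in, ivec2(0), 0));
}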

Rather than just pointing out the specific issue: it's good practice to print the shader info log whenever gl.getShaderParameter(shader, gl.COMPILE_STATUS) is false.
example
const gl = document.createElement('canvas').getContext('webgl2');
const s = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(s, `#version 300 es
precision highp float;
uniform isampler2D tex_in;
out vec2 outColor;
void main() {
outColor = vec2(0.0, 0.0);
}`);
gl.compileShader(s);
if (!gl.getShaderParameter(s, gl.COMPILE_STATUS)) {
throw new Error(gl.getShaderInfoLog(s));
}
prints
Uncaught Error: ERROR: 0:3: 'isampler2D' : No precision specified
So the issue is that you didn't specify a precision.

Related

How do I apply a 2d shader to a webgl canvas in js?

I hope I'm not taking the wrong approach here but I feel like I'm on the right track and this shouldn't be too complicated. I want to take a simple function of x and y on the screen and return a color applied to each pixel of a webGL canvas.
For example: f(x,y) -> rgb(x/canvasWidth,y/canvasHeight) where x and y are positions on the canvas and color is that of the pixel.
My first thought is to take this example and modify it so that the rectangle fills the screen and the color is as described. I think this is achieved by modifying the vertex shader so that the rectangle covers the canvas and the fragment shader so that it implements the color, but I'm not sure how to adapt the vertex shader to the window size or how to get my x and y variables in the context of the fragment shader.
Here's the shader code for the tutorial I'm going off of. I haven't tried much besides manually changing the constant color in the fragment shader and mutating the square by changing the values in the initBuffers method.
You will need to pass in the canvasWidth and canvasHeight. Your fragment shader might look like this:
precision mediump float;
// Require resolution (canvas size) as an input
uniform vec3 uResolution;
void main() {
// Calculate relative coordinates (uv)
vec2 uv = gl_FragCoord.xy / uResolution.xy;
gl_FragColor = vec4(uv.x, uv.y, 0., 1.0);
}
And then, per #LJ's answer, if you really want the fragment to cover the entire canvas, you can modify your vertex shader to skip the usual matrix transforms:
void main() {
// Pass through each vertex position without transforming:
gl_Position = aVertexPosition;
}
The runnable example below is mostly copy-pasted from the example you linked, with minor modifications:
const canvas = document.querySelector('#glcanvas');
main();
//
// Start here
//
function main() {
const gl = canvas.getContext('webgl');
// If we don't have a GL context, give up now
if (!gl) {
alert('Unable to initialize WebGL. Your browser or machine may not support it.');
return;
}
// Vertex shader program
const vsSource = `
attribute vec4 aVertexPosition;
uniform mat4 uModelViewMatrix;
uniform mat4 uProjectionMatrix;
void main() {
// We don't need the projection:
//gl_Position = uProjectionMatrix * uModelViewMatrix * aVertexPosition;
// Instead we pass through each vertex position as-is:
gl_Position = aVertexPosition;
}
`;
// Fragment shader program
const fsSource = `
precision mediump float;
// Require resolution (canvas size) as an input
uniform vec3 uResolution;
void main() {
// Calculate relative coordinates (uv)
vec2 uv = gl_FragCoord.xy / uResolution.xy;
gl_FragColor = vec4(uv.x, uv.y, 0., 1.0);
}
`;
// Initialize a shader program; this is where all the lighting
// for the vertices and so forth is established.
const shaderProgram = initShaderProgram(gl, vsSource, fsSource);
// Collect all the info needed to use the shader program.
// Look up which attribute our shader program is using
// for aVertexPosition and look up uniform locations.
const programInfo = {
program: shaderProgram,
attribLocations: {
vertexPosition: gl.getAttribLocation(shaderProgram, 'aVertexPosition'),
},
uniformLocations: {
projectionMatrix: gl.getUniformLocation(shaderProgram, 'uProjectionMatrix'),
modelViewMatrix: gl.getUniformLocation(shaderProgram, 'uModelViewMatrix'),
resolution: gl.getUniformLocation(shaderProgram, 'uResolution'),
},
};
// Here's where we call the routine that builds all the
// objects we'll be drawing.
const buffers = initBuffers(gl);
// Draw the scene
drawScene(gl, programInfo, buffers);
}
//
// initBuffers
//
// Initialize the buffers we'll need. For this demo, we just
// have one object -- a simple two-dimensional square.
//
function initBuffers(gl) {
// Create a buffer for the square's positions.
const positionBuffer = gl.createBuffer();
// Select the positionBuffer as the one to apply buffer
// operations to from here out.
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
// Now create an array of positions for the square.
const positions = [
1.0, 1.0,
-1.0, 1.0,
1.0, -1.0,
-1.0, -1.0,
];
// Now pass the list of positions into WebGL to build the
// shape. We do this by creating a Float32Array from the
// JavaScript array, then use it to fill the current buffer.
gl.bufferData(gl.ARRAY_BUFFER,
new Float32Array(positions),
gl.STATIC_DRAW);
return {
position: positionBuffer,
};
}
//
// Draw the scene.
//
function drawScene(gl, programInfo, buffers) {
gl.clearColor(0.0, 0.0, 0.0, 1.0); // Clear to black, fully opaque
gl.clearDepth(1.0); // Clear everything
gl.enable(gl.DEPTH_TEST); // Enable depth testing
gl.depthFunc(gl.LEQUAL); // Near things obscure far things
// Clear the canvas before we start drawing on it.
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
// Create a perspective matrix, a special matrix that is
// used to simulate the distortion of perspective in a camera.
// Our field of view is 45 degrees, with a width/height
// ratio that matches the display size of the canvas
// and we only want to see objects between 0.1 units
// and 100 units away from the camera.
const fieldOfView = 45 * Math.PI / 180; // in radians
const aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
const zNear = 0.1;
const zFar = 100.0;
const projectionMatrix = mat4.create();
// note: glmatrix.js always has the first argument
// as the destination to receive the result.
mat4.perspective(projectionMatrix,
fieldOfView,
aspect,
zNear,
zFar);
// Set the drawing position to the "identity" point, which is
// the center of the scene.
const modelViewMatrix = mat4.create();
// Now move the drawing position a bit to where we want to
// start drawing the square.
mat4.translate(modelViewMatrix, // destination matrix
modelViewMatrix, // matrix to translate
[-0.0, 0.0, -6]); // amount to translate
// Tell WebGL how to pull out the positions from the position
// buffer into the vertexPosition attribute.
{
const numComponents = 2;
const type = gl.FLOAT;
const normalize = false;
const stride = 0;
const offset = 0;
gl.bindBuffer(gl.ARRAY_BUFFER, buffers.position);
gl.vertexAttribPointer(
programInfo.attribLocations.vertexPosition,
numComponents,
type,
normalize,
stride,
offset);
gl.enableVertexAttribArray(
programInfo.attribLocations.vertexPosition);
}
// Tell WebGL to use our program when drawing
gl.useProgram(programInfo.program);
// Set the shader uniforms
gl.uniformMatrix4fv(
programInfo.uniformLocations.projectionMatrix,
false,
projectionMatrix);
gl.uniformMatrix4fv(
programInfo.uniformLocations.modelViewMatrix,
false,
modelViewMatrix);
gl.uniform3f(programInfo.uniformLocations.resolution, canvas.width, canvas.height, 1.0);
{
const offset = 0;
const vertexCount = 4;
gl.drawArrays(gl.TRIANGLE_STRIP, offset, vertexCount);
}
}
//
// Initialize a shader program, so WebGL knows how to draw our data
//
function initShaderProgram(gl, vsSource, fsSource) {
const vertexShader = loadShader(gl, gl.VERTEX_SHADER, vsSource);
const fragmentShader = loadShader(gl, gl.FRAGMENT_SHADER, fsSource);
// Create the shader program
const shaderProgram = gl.createProgram();
gl.attachShader(shaderProgram, vertexShader);
gl.attachShader(shaderProgram, fragmentShader);
gl.linkProgram(shaderProgram);
// If creating the shader program failed, alert
if (!gl.getProgramParameter(shaderProgram, gl.LINK_STATUS)) {
alert('Unable to initialize the shader program: ' + gl.getProgramInfoLog(shaderProgram));
return null;
}
return shaderProgram;
}
//
// creates a shader of the given type, uploads the source and
// compiles it.
//
function loadShader(gl, type, source) {
const shader = gl.createShader(type);
// Send the source to the shader object
gl.shaderSource(shader, source);
// Compile the shader program
gl.compileShader(shader);
// See if it compiled successfully
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
alert('An error occurred compiling the shaders: ' + gl.getShaderInfoLog(shader));
gl.deleteShader(shader);
return null;
}
return shader;
}
canvas {
border: 2px solid black;
background-color: black;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/gl-matrix/2.8.1/gl-matrix-min.js"></script>
<body>
<canvas id="glcanvas" width="640" height="480"></canvas>
</body>
Same on glitch:
https://glitch.com/edit/#!/so-example-71499942
You're on the right track: to have a quad fill the screen you can skip all the matrix transforms in the vertex shader and just feed normalized device coordinates (-1 ... +1) directly into gl_Position, then use gl_FragCoord to access the position of the pixel in the fragment shader, as in the sketch below.
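A minimal GLSL ES 1.00 shader pair along those lines, as a sketch; aPosition and uResolution are placeholder names that would be wired up from the JavaScript side:

// vertex shader: the quad corners are already normalized device
// coordinates (-1 ... +1), so no matrices are needed at all
attribute vec2 aPosition;
void main() {
  gl_Position = vec4(aPosition, 0.0, 1.0);
}

// fragment shader: gl_FragCoord.xy is the pixel position, so dividing
// by the canvas resolution gives f(x,y) inputs in the 0..1 range
precision mediump float;
uniform vec2 uResolution;
void main() {
  vec2 uv = gl_FragCoord.xy / uResolution;
  gl_FragColor = vec4(uv, 0.0, 1.0);
}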

How can I transform a recursive function with two calls to itself into an iterative one? [duplicate]

I am writing a GPU-based real-time raytracing renderer using a GLSL compute shader. So far, it works really well, but I have stumbled into a seemingly unsolvable problem when it comes to having both reflections and refractions simultaneously.
My logic tells me that in order to have reflections and refractions on an object such as glass, the ray has to split into two: one ray reflects off the surface, and the other refracts through the surface. The colours of these rays would then be combined by some function and ultimately used as the colour of the pixel the ray originated from. The problem I have is that I can't split the rays in shader code, as I would have to use recursion to do so. From my understanding, functions in a shader cannot be recursive because all GLSL functions behave like inline functions in C++, for compatibility with older GPU hardware.
Is it possible to simulate or fake recursion in shader code, or can I even achieve reflection and refraction simultaneously without using recursion at all? I can't see how it can happen without recursion, but I might be wrong.
I managed to convert the back-raytracing into an iterative process suitable for GLSL with the method suggested in my comment. It is far from optimized and I do not yet have all the physical stuff implemented (no Snell's law etc.), but as a proof of concept it already works. I do all the work in the fragment shader; the CPU-side code just sends the uniform constants and the scene in the form of a 32-bit non-clamped float texture (GL_LUMINANCE32F_ARB). The rendering is just a single QUAD covering the whole screen.
Passing the scene
I decided to store the scene in a texture so that each ray/fragment has direct access to the whole scene. The texture is 2D but is used as a linear list of 32-bit floats. I chose this format:
enum _fac_type_enum
{
_fac_triangles=0, // r,g,b,a, n, triangle count, { x0,y0,z0,x1,y1,z1,x2,y2,z2 }
_fac_spheres, // r,g,b,a, n, sphere count, { x,y,z,r }
};
const GLfloat _n_glass=1.561;
const GLfloat _n_vacuum=1.0;
GLfloat data[]=
{
// r, g, b, a, n, type,count
0.2,0.3,0.5,0.5,_n_glass,_fac_triangles, 4, // tetrahedron
// px, py, pz
-0.5,-0.5,+1.0,
0.0,+0.5,+1.0,
+0.5,-0.5,+1.0,
0.0, 0.0,+0.5,
-0.5,-0.5,+1.0,
0.0,+0.5,+1.0,
0.0, 0.0,+0.5,
0.0,+0.5,+1.0,
+0.5,-0.5,+1.0,
0.0, 0.0,+0.5,
+0.5,-0.5,+1.0,
-0.5,-0.5,+1.0,
};
You can add/change any type of object. This example holds just a single semi-transparent bluish tetrahedron. You could also add transform matrices, more coefficients for material properties, etc.
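On the shader side the texels are then read sequentially. Here is a function-style sketch equivalent to the fac_get macro used in the full fragment shader below (the real code keeps st and a column counter as locals of main instead of a global cursor):

uniform sampler2D fac_txr; // scene texture, one 32-bit float per texel
uniform int fac_siz;       // square texture x,y resolution
int fac_i=0;               // global read cursor into the linear float list
float fac_get_next()
    {
    float ds=1.0/float(fac_siz-1); // texel step in texture coordinates
    // convert the linear index into row/column texture coordinates
    vec2 st=vec2(float(fac_i%fac_siz),float(fac_i/fac_siz))*ds;
    fac_i++; // advance to the next float
    return texture(fac_txr,st).r;
    }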
Architecture
The vertex shader just initializes the corner rays of the view (start position and direction), which are interpolated so that each fragment represents the start ray of the back ray tracing process.
Iterative back ray tracing
So I created a "static" list of rays and initialized it with the start ray. The iteration is done in two steps; the first is the back ray tracing:
Loop through all rays in the list, starting from the first.
Find the closest intersection with the scene.
Store the position, surface normal and material properties into the ray struct.
If an intersection was found and this is not the last "recursion" layer, add reflect/refract rays to the end of the list.
Also store their indexes in the processed ray's struct.
Now your rays should hold all the intersection info you need to reconstruct the color. To do that:
Loop through all the recursion levels backwards.
For each ray matching the actual recursion layer, compute the ray color.
Use whatever lighting equations you want; if the ray has children, add their color to the result based on the material properties (reflective and refractive coefficients).
Now the first ray should contain the color you want to output (see the sketch below).
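A condensed GLSL sketch of that loop structure; intersect_scene, make_reflect and make_refract are hypothetical placeholders for work that the full fragment shader below does inline:

struct _ray { vec3 pos,dir,nor,col; float refl,refr; int i0,i1; };
const int _lvls=5;            // recursion depth
const int _rays=(1<<_lvls)-1; // worst case: a full binary tree of rays
_ray ray[_rays]; int rays;
bool intersect_scene(inout _ray r); // hypothetical: fills nor,col,refl,refr
_ray make_reflect(_ray r);          // hypothetical: builds child ray, i0=i1=-1
_ray make_refract(_ray r);          // hypothetical: builds child ray, i0=i1=-1
void trace() // ray[0] must already hold the primary camera ray
    {
    rays=1;
    // pass 1: forward - child rays are appended to the end of the list
    for (int i=0;i<rays;i++)
        {
        if (!intersect_scene(ray[i])) continue;
        if (rays+2>_rays) continue; // recursion depth exhausted
        ray[i].i0=rays; ray[rays]=make_reflect(ray[i]); rays++;
        ray[i].i1=rays; ray[rays]=make_refract(ray[i]); rays++;
        }
    // pass 2: backward - a child always has a higher index than its
    // parent, so it is fully resolved before the parent is reached
    for (int i=rays-1;i>=0;i--)
        {
        ray[i].col*=1.0-ray[i].refl-ray[i].refr; // locally lit part
        if (ray[i].i0>=0) ray[i].col+=ray[ray[i].i0].col*ray[i].refl;
        if (ray[i].i1>=0) ray[i].col+=ray[ray[i].i1].col*ray[i].refr;
        }
    // ray[0].col now holds the final pixel color
    }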
Uniforms used:
tm_eye: view camera matrix
aspect: ys/xs aspect ratio of the view
n0: empty space refraction index (unused yet)
focal_length: camera focal length
fac_siz: resolution of the square scene texture
fac_num: number of floats actually used in the scene texture
fac_txr: texture unit for the scene texture
Preview:
The fragment shader contains my debug prints, so if they are enabled you will also need the font texture; see this QA:
GLSL debug prints
ToDo:
add matrices for objects, camera, etc.
add material properties (shininess, reflection/refraction coefficients)
Snell's law (right now the directions of the new rays are wrong)
maybe separate R,G,B into 3 start rays and combine them at the end
fake SSS (subsurface scattering) based on ray lengths
better lights (right now they are constants in the code)
more primitives (right now only triangles are supported)
[Edit1] code debug and upgrade
I removed the old source code to fit inside the 30KB limit; if you need it, dig it out of the edit history. I had some time for more advanced debugging and here is the result:
This version resolves some geometrical, accuracy and domain problems and bugs. I have implemented both reflections and refractions, as shown in this debug draw for a test ray:
In the debug view only the cube is transparent, and the last ray, which does not hit anything, is ignored. So as you can see, the ray splits... This ray ended inside the cube due to the total internal reflection angle, and I disabled all reflections inside objects for speed reasons.
The 32-bit floats are a bit noisy for intersection detection at larger distances, so you can use 64-bit doubles instead, but the speed drops considerably in that case. Another option is to rewrite the equations in relative coordinates, which are more precise in this use case.
Here is the float-precision shader source:
Vertex:
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
uniform float aspect;
uniform float focal_length;
uniform mat4x4 tm_eye;
layout(location=0) in vec2 pos;
out smooth vec2 txt_pos; // frag position on screen <-1,+1> for debug prints
out smooth vec3 ray_pos; // ray start position
out smooth vec3 ray_dir; // ray start direction
//------------------------------------------------------------------
void main(void)
{
vec4 p;
txt_pos=pos;
// perspective projection
p=tm_eye*vec4(pos.x/aspect,pos.y,0.0,1.0);
ray_pos=p.xyz;
p-=tm_eye*vec4(0.0,0.0,-focal_length,1.0);
ray_dir=normalize(p.xyz);
gl_Position=vec4(pos,0.0,1.0);
}
//------------------------------------------------------------------
Fragment:
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
// Ray tracer ver: 1.000
//------------------------------------------------------------------
in smooth vec3 ray_pos; // ray start position
in smooth vec3 ray_dir; // ray start direction
uniform float n0; // refractive index of camera origin
uniform int fac_siz; // square texture x,y resolution size
uniform int fac_num; // number of valid floats in texture
uniform sampler2D fac_txr; // scene mesh data texture
out layout(location=0) vec4 frag_col;
//---------------------------------------------------------------------------
//#define _debug_print
#define _reflect
#define _refract
//---------------------------------------------------------------------------
#ifdef _debug_print
in vec2 txt_pos; // frag screen position <-1,+1>
uniform sampler2D txr_font; // ASCII 32x8 characters font texture unit
uniform float txt_fxs,txt_fys; // font/screen resolution ratio
const int _txtsiz=64; // text buffer size
int txt[_txtsiz],txtsiz; // text buffer and its actual size
vec4 txt_col=vec4(0.0,0.0,0.0,1.0); // color interface for txt_print()
bool _txt_col=false; // is txt_col active?
void txt_decimal(vec2 v); // print vec3 into txt
void txt_decimal(vec3 v); // print vec3 into txt
void txt_decimal(vec4 v); // print vec3 into txt
void txt_decimal(float x); // print float x into txt
void txt_decimal(int x); // print int x into txt
void txt_print(float x0,float y0); // print txt at x0,y0 [chars]
#endif
//---------------------------------------------------------------------------
void main(void)
{
const vec3 light_dir=normalize(vec3(0.1,0.1,1.0));
const float light_iamb=0.1; // dot offset
const float light_idir=0.5; // directional light amplitude
const vec3 back_col=vec3(0.2,0.2,0.2); // background color
const float _zero=1e-6; // to avoid intersection with the start point of the ray
const int _fac_triangles=0; // r,g,b, refl,refr,n, type, triangle count, { x0,y0,z0,x1,y1,z1,x2,y2,z2 }
const int _fac_spheres =1; // r,g,b, refl,refr,n, type, sphere count, { x,y,z,r }
// ray scene intersection
struct _ray
{
vec3 pos,dir,nor;
vec3 col;
float refl,refr;// reflection,refraction intensity coefficients
float n0,n1,l; // refraction index (start,end), ray length
int lvl,i0,i1; // recursion level, reflect, refract
};
const int _lvls=5;
const int _rays=(1<<_lvls)-1;
_ray ray[_rays]; int rays;
vec3 v0,v1,v2,pos;
vec3 c,col;
float refr,refl;
float tt,t,n1,a;
int i0,ii,num,id;
// fac texture access
vec2 st; int i,j; float ds=1.0/float(fac_siz-1);
#define fac_get texture(fac_txr,st).r; st.s+=ds; i++; j++; if (j==fac_siz) { j=0; st.s=0.0; st.t+=ds; }
// enque start ray
ray[0].pos=ray_pos;
ray[0].dir=normalize(ray_dir);
ray[0].nor=vec3(0.0,0.0,0.0);
ray[0].refl=0.0;
ray[0].refr=0.0;
ray[0].n0=n0;
ray[0].n1=1.0;
ray[0].l =0.0;
ray[0].lvl=0;
ray[0].i0=-1;
ray[0].i1=-1;
rays=1;
// debug print area
#ifdef _debug_print
bool _dbg=false;
float dbg_x0=45.0;
float dbg_y0= 1.0;
float dbg_xs=12.0;
float dbg_ys=_rays+1.0;
dbg_xs=40.0;
dbg_ys=10;
float x=0.5*(1.0+txt_pos.x)/txt_fxs; x-=dbg_x0;
float y=0.5*(1.0-txt_pos.y)/txt_fys; y-=dbg_y0;
// inside bbox?
if ((x>=0.0)&&(x<=dbg_xs)
&&(y>=0.0)&&(y<=dbg_ys))
{
// prints on
_dbg=true;
// preset debug ray
ray[0].pos=vec3(0.0,0.0,0.0)*2.5;
ray[0].dir=vec3(0.0,0.0,1.0);
}
#endif
// loop all enqued rays
for (i0=0;i0<rays;i0++)
{
// loop through all objects
// find closest forward intersection between them and ray[i0]
// store it to ray[i0].(nor,col)
// store it to pos,n1
t=tt=-1.0; ii=1; ray[i0].l=0.0;
ray[i0].col=back_col;
pos=ray[i0].pos; n1=n0;
for (st=vec2(0.0,0.0),i=j=0;i<fac_num;)
{
c.r=fac_get; // RGBA
c.g=fac_get;
c.b=fac_get;
refl=fac_get;
refr=fac_get;
n1=fac_get; // refraction index
a=fac_get; id=int(a); // object type
a=fac_get; num=int(a); // face count
if (id==_fac_triangles)
for (;num>0;num--)
{
v0.x=fac_get; v0.y=fac_get; v0.z=fac_get;
v1.x=fac_get; v1.y=fac_get; v1.z=fac_get;
v2.x=fac_get; v2.y=fac_get; v2.z=fac_get;
vec3 e1,e2,n,p,q,r;
float t,u,v,det,idet;
//compute ray triangle intersection
e1=v1-v0;
e2=v2-v0;
// Calculate planes normal vector
p=cross(ray[i0].dir,e2);
det=dot(e1,p);
// Ray is parallel to plane
if (abs(det)<1e-8) continue;
idet=1.0/det;
r=ray[i0].pos-v0;
u=dot(r,p)*idet;
if ((u<0.0)||(u>1.0)) continue;
q=cross(r,e1);
v=dot(ray[i0].dir,q)*idet;
if ((v<0.0)||(u+v>1.0)) continue;
t=dot(e2,q)*idet;
if ((t>_zero)&&((t<=tt)||(ii!=0)))
{
ii=0; tt=t;
// store color,n ...
ray[i0].col=c;
ray[i0].refl=refl;
ray[i0].refr=refr;
// barycentric interpolate position
t=1.0-u-v;
pos=(v0*t)+(v1*u)+(v2*v);
// compute normal (store as dir for now)
e1=v1-v0;
e2=v2-v1;
ray[i0].nor=cross(e1,e2);
}
}
if (id==_fac_spheres)
for (;num>0;num--)
{
float r;
v0.x=fac_get; v0.y=fac_get; v0.z=fac_get; r=fac_get;
// compute l0 length of ray(p0,dp) to intersection with sphere(v0,r)
// where rr= r^-2
float aa,bb,cc,dd,l0,l1,rr;
vec3 p0,dp;
p0=ray[i0].pos-v0; // set sphere center to (0,0,0)
dp=ray[i0].dir;
rr = 1.0/(r*r);
aa=2.0*rr*dot(dp,dp);
bb=2.0*rr*dot(p0,dp);
cc= rr*dot(p0,p0)-1.0;
dd=((bb*bb)-(2.0*aa*cc));
if (dd<0.0) continue;
dd=sqrt(dd);
l0=(-bb+dd)/aa;
l1=(-bb-dd)/aa;
if (l0<0.0) l0=l1;
if (l1<0.0) l1=l0;
t=min(l0,l1); if (t<=_zero) t=max(l0,l1);
if ((t>_zero)&&((t<=tt)||(ii!=0)))
{
ii=0; tt=t;
// store color,n ...
ray[i0].col=c;
ray[i0].refl=refl;
ray[i0].refr=refr;
// position,normal
pos=ray[i0].pos+(ray[i0].dir*t);
ray[i0].nor=pos-v0;
}
}
}
ray[i0].l=tt;
ray[i0].nor=normalize(ray[i0].nor);
// split ray from pos and ray[i0].nor
if ((ii==0)&&(ray[i0].lvl<_lvls-1))
{
t=dot(ray[i0].dir,ray[i0].nor);
// reflect
#ifdef _reflect
if ((ray[i0].refl>_zero)&&(t<_zero)) // do not reflect inside objects
{
ray[i0].i0=rays;
ray[rays]=ray[i0];
ray[rays].lvl++;
ray[rays].i0=-1;
ray[rays].i1=-1;
ray[rays].pos=pos;
ray[rays].dir=ray[rays].dir-(2.0*t*ray[rays].nor);
ray[rays].n0=ray[i0].n0;
ray[rays].n1=ray[i0].n0;
rays++;
}
#endif
// refract
#ifdef _refract
if (ray[i0].refr>_zero)
{
ray[i0].i1=rays;
ray[rays]=ray[i0];
ray[rays].lvl++;
ray[rays].i0=-1;
ray[rays].i1=-1;
ray[rays].pos=pos;
t=dot(ray[i0].dir,ray[i0].nor);
if (t>0.0) // exit object
{
ray[rays].n0=ray[i0].n0;
ray[rays].n1=n0;
v0=-ray[i0].nor; t=-t;
}
else{ // enter object
ray[rays].n0=n1;
ray[rays].n1=ray[i0].n0;
ray[i0 ].n1=n1;
v0=ray[i0].nor;
}
n1=ray[i0].n0/ray[i0].n1;
tt=1.0-(n1*n1*(1.0-t*t));
if (tt>=0.0)
{
ray[rays].dir=(ray[i0].dir*n1)-(v0*((n1*t)+sqrt(tt)));
rays++;
}
}
#endif
}
else if (i0>0) // ignore last ray if nothing hit
{
ray[i0]=ray[rays-1];
rays--; i0--;
}
}
// back track ray intersections and compute output color col
// lvl is sorted ascending so backtrack from end
for (i0=rays-1;i0>=0;i0--)
{
// directional + ambient light
t=abs(dot(ray[i0].nor,light_dir)*light_idir)+light_iamb;
t*=1.0-ray[i0].refl-ray[i0].refr;
ray[i0].col.rgb*=t;
// reflect
ii=ray[i0].i0;
if (ii>=0) ray[i0].col.rgb+=ray[ii].col.rgb*ray[i0].refl;
// refract
ii=ray[i0].i1;
if (ii>=0) ray[i0].col.rgb+=ray[ii].col.rgb*ray[i0].refr;
}
col=ray[0].col;
// debug prints
#ifdef _debug_print
/*
if (_dbg)
{
txtsiz=0;
txt_decimal(_lvls);
txt[txtsiz]=' '; txtsiz++;
txt_decimal(rays);
txt[txtsiz]=' '; txtsiz++;
txt_decimal(_rays);
txt_print(dbg_x0,dbg_y0);
for (ii=0;ii<rays;ii++)
{
txtsiz=0;
txt_decimal(ray[ii].lvl);
txt_print(dbg_x0,dbg_y0+ii+1);
}
for (ii=0,st=vec2(0.0,0.0),i=j=0;i<fac_num;ii++)
{
c.r=fac_get; // RGBA
txtsiz=0;
txt_decimal(c.r);
txt_print(dbg_x0,dbg_y0+ii+1);
}
if (_txt_col) col=txt_col.rgb;
}
*/
if (_dbg)
{
float x=dbg_x0,y=dbg_y0;
vec3 a=vec3(1.0,2.0,3.0);
vec3 b=vec3(5.0,6.0,7.0);
txtsiz=0; txt_decimal(dot(a,b)); txt_print(x,y); y++;
txtsiz=0; txt_decimal(cross(a,b)); txt_print(x,y); y++;
if (_txt_col) col=txt_col.rgb;
}
#endif
frag_col=vec4(col,1.0);
}
//---------------------------------------------------------------------------
#ifdef _debug_print
//---------------------------------------------------------------------------
void txt_decimal(vec2 v) // print vec2 into txt
{
txt[txtsiz]='('; txtsiz++;
txt_decimal(v.x); txt[txtsiz]=','; txtsiz++;
txt_decimal(v.y); txt[txtsiz]=')'; txtsiz++;
txt[txtsiz]=0; // string terminator
}
//---------------------------------------------------------------------------
void txt_decimal(vec3 v) // print vec3 into txt
{
txt[txtsiz]='('; txtsiz++;
txt_decimal(v.x); txt[txtsiz]=','; txtsiz++;
txt_decimal(v.y); txt[txtsiz]=','; txtsiz++;
txt_decimal(v.z); txt[txtsiz]=')'; txtsiz++;
txt[txtsiz]=0; // string terminator
}
//---------------------------------------------------------------------------
void txt_decimal(vec4 v) // print vec4 into txt
{
txt[txtsiz]='('; txtsiz++;
txt_decimal(v.x); txt[txtsiz]=','; txtsiz++;
txt_decimal(v.y); txt[txtsiz]=','; txtsiz++;
txt_decimal(v.z); txt[txtsiz]=','; txtsiz++;
txt_decimal(v.w); txt[txtsiz]=')'; txtsiz++;
txt[txtsiz]=0; // string terminator
}
//---------------------------------------------------------------------------
void txt_decimal(float x) // print float x into txt
{
int i,j,c; // l is size of string
float y,a;
const float base=10;
// handle sign
if (x<0.0) { txt[txtsiz]='-'; txtsiz++; x=-x; }
else { txt[txtsiz]='+'; txtsiz++; }
// divide to int(x).fract(y) parts of number
y=x; x=floor(x); y-=x;
// handle integer part
i=txtsiz; // start of integer part
for (;txtsiz<_txtsiz;)
{
a=x;
x=floor(x/base);
a-=base*x;
txt[txtsiz]=int(a)+'0'; txtsiz++;
if (x<=0.0) break;
}
j=txtsiz-1; // end of integer part
for (;i<j;i++,j--) // reverse integer digits
{
c=txt[i]; txt[i]=txt[j]; txt[j]=c;
}
// handle fractional part
for (txt[txtsiz]='.',txtsiz++;txtsiz<_txtsiz;)
{
y*=base;
a=floor(y);
y-=a;
txt[txtsiz]=int(a)+'0'; txtsiz++;
if (y<=0.0) break;
}
txt[txtsiz]=0; // string terminator
}
//---------------------------------------------------------------------------
void txt_decimal(int x) // print int x into txt
{
int a,i,j,c; // l is size of string
const int base=10;
// handle sign
if (x<0.0) { txt[txtsiz]='-'; txtsiz++; x=-x; }
else { txt[txtsiz]='+'; txtsiz++; }
// handle integer part
i=txtsiz; // start of integer part
for (;txtsiz<_txtsiz;)
{
a=x;
x/=base;
a-=base*x;
txt[txtsiz]=int(a)+'0'; txtsiz++;
if (x<=0) break;
}
j=txtsiz-1; // end of integer part
for (;i<j;i++,j--) // reverse integer digits
{
c=txt[i]; txt[i]=txt[j]; txt[j]=c;
}
txt[txtsiz]=0; // string terminator
}
//---------------------------------------------------------------------------
void txt_print(float x0,float y0) // print txt at x0,y0 [chars]
{
int i;
float x,y;
// fragment position [chars] relative to x0,y0
x=0.5*(1.0+txt_pos.x)/txt_fxs; x-=x0;
y=0.5*(1.0-txt_pos.y)/txt_fys; y-=y0;
// inside bbox?
if ((x<0.0)||(x>float(txtsiz))||(y<0.0)||(y>1.0)) return;
// get font texture position for target ASCII
i=int(x); // char index in txt
x-=float(i);
i=txt[i];
x+=float(int(i&31));
y+=float(int(i>>5));
x/=32.0; y/=8.0; // offset in char texture
txt_col=texture(txr_font,vec2(x,y));
_txt_col=true;
}
//---------------------------------------------------------------------------
#endif
//---------------------------------------------------------------------------
The code is not optimized yet; I wanted to have the physics working correctly first. Fresnel equations are still not implemented; the refl, refr coefficients of the material are used instead.
You can also ignore the debug-print stuff (it is encapsulated by #define).
I built a small class for the geometry texture so I can easily set up scene objects. This is how the scene was initialized for the preview:
ray.beg();
// r g b rfl rfr n
ray.add_material(1.0,1.0,1.0,0.3,0.0,_n_glass); ray.add_box ( 0.0, 0.0, 6.0,9.0,9.0,0.1);
ray.add_material(1.0,1.0,1.0,0.1,0.8,_n_glass); ray.add_sphere( 0.0, 0.0, 0.5,0.5);
ray.add_material(1.0,0.1,0.1,0.3,0.0,_n_glass); ray.add_sphere( +2.0, 0.0, 2.0,0.5);
ray.add_material(0.1,1.0,0.1,0.3,0.0,_n_glass); ray.add_box ( -2.0, 0.0, 2.0,0.5,0.5,0.5);
ray.add_material(0.1,0.1,1.0,0.3,0.0,_n_glass);
ray.add_tetrahedron
(
0.0, 0.0, 3.0,
-1.0,-1.0, 4.0,
+1.0,-1.0, 4.0,
0.0,+1.0, 4.0
);
ray.end();
It is important that the computed normals face out of the objects, because that is used for detecting inside/outside object crossings, as the sketch below shows.
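In the fragment shader above this boils down to a sign test on a dot product; a minimal sketch of the check in the refraction branch:

// with outward-facing normals the sign of dot(dir,nor) tells whether
// the ray is entering or leaving the object at the hit point
float t=dot(ray[i0].dir,ray[i0].nor);
if (t>0.0) { /* exiting: refract from object n into outside n0, flip normal */ }
else { /* entering: refract from outside n0 into object n */ }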
P.S.
If you're interested, here is my volumetric 3D back ray tracer:
How to best write a voxel engine in C with performance in mind
(here is an archive for low-rep users)
And here is a newer version of this "Mesh" raytracer, supporting hemisphere objects:
Ray tracing a Hemisphere

Cannot link shaders. ERROR:LINK-9 (line - 1) Missing main function for shade

I am building a simple "hello triangle" program to start with OpenGL ES 2.0 development, and I am stuck on this tricky error: it says that I cannot link the shaders. I have tested the shader compilation in RenderMonkey and it is OK, but in my actual application it fails to link.
void COpenGLWidget::initializeGL()
{
const size_t nMaxLength = 255;
glClearColor(1.0f, 0.0f, 0.0, 1.0f);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
char lpszVertexBuffer[][nMaxLength] =
{
"uniform mat4 g_MatViewProjection;\n",
"attribute vec4 rm_Vertex;\n",
"void main(void)\n",
"{\n",
"gl_Position = rm_Vertex;\n",
"}"
};
char lpszFragmentBuffer[][nMaxLength] =
{
"precision mediump float; \n",
"void main(void)\n",
"{\n",
"gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0 );\n",
"}\n"
};
m_nVertexShader = glCreateShader(GL_VERTEX_SHADER);
m_nPixelShader = glCreateShader(GL_FRAGMENT_SHADER);
int iVertexShaderLength = 6;
int iPixelShaderLength = 5;
glShaderSource(m_nVertexShader, 1, (const char**)lpszVertexBuffer, &iVertexShaderLength);
glShaderSource(m_nPixelShader, 1, (const char**)lpszFragmentBuffer, &iPixelShaderLength);
glCompileShader(m_nVertexShader);
int iIsOk = 0;
glGetShaderiv(m_nVertexShader, GL_COMPILE_STATUS, &iIsOk);
if(!iIsOk)
{
GLint infoLen = 0;
glGetShaderiv(m_nVertexShader, GL_INFO_LOG_LENGTH, &infoLen);
if(infoLen > 1)
{
char* infoLog = (char*)malloc(sizeof(char) * infoLen);
glGetShaderInfoLog(m_nVertexShader, infoLen, NULL, infoLog);
QMessageBox::warning(this, QString("Error"),
QString(infoLog), QMessageBox::Yes | QMessageBox::Cancel, QMessageBox::Yes);
free(infoLog);
}
glDeleteShader(m_nVertexShader);
return;
}
glCompileShader(m_nPixelShader);
glGetShaderiv(m_nPixelShader, GL_COMPILE_STATUS, &iIsOk);
if(!iIsOk)
{
GLint infoLen = 0;
glGetShaderiv(m_nPixelShader, GL_INFO_LOG_LENGTH, &infoLen);
if(infoLen > 1)
{
char* infoLog = (char*)malloc(sizeof(char) * infoLen);
glGetShaderInfoLog(m_nPixelShader, infoLen, NULL, infoLog);
QMessageBox::warning(this, QString("Error"),
QString(infoLog), QMessageBox::Yes | QMessageBox::Cancel, QMessageBox::Yes);
free(infoLog);
}
glDeleteShader(m_nPixelShader);
return;
}
m_nProgram = glCreateProgram();
glAttachShader(m_nProgram, m_nVertexShader);
glAttachShader(m_nProgram, m_nPixelShader);
glBindAttribLocation(m_nProgram, 0, "rm_Vertex");
glLinkProgram(m_nProgram);
glGetProgramiv(m_nProgram, GL_LINK_STATUS, &iIsOk);
// Fail to pass status validation
if(!iIsOk)
{
GLint infoLen = 0;
glGetProgramiv(m_nProgram, GL_INFO_LOG_LENGTH, &infoLen);
if(infoLen > 1)
{
char* infoLog = (char*)malloc(sizeof(char) * infoLen);
glGetProgramInfoLog(m_nProgram, infoLen, NULL, infoLog);
QMessageBox::warning(this, QString("Error"),
QString(infoLog), QMessageBox::Yes | QMessageBox::Cancel, QMessageBox::Yes);
free(infoLog);
}
glDeleteProgram(m_nProgram);
return;
}
glUseProgram(m_nProgram);
connect(&m_Timer, SIGNAL(timeout()), this, SLOT(update()));
m_Timer.start(1);
}
You got the data structures for the shader source code wrong. glShaderSource accepts an array of pointers to char (const char**), i.e. a sequence of strings. You store your shader source code in a two-dimensional char array. Contrary to a somewhat common myth, arrays are not pointers in C/C++.
Furthermore, you are telling the GL that there is a single string with a length of 6 characters (5 for the fragment shader, respectively). The shader compiler therefore only sees the very first part of your source code, and hence reports that it can't find a main function.
It is unclear why you even try to split your shader sources into several strings; you do not get any benefit from doing so as long as you don't recombine the different bits and pieces. I suggest you just use a single string, something like:
const char *source=
"uniform mat4 g_MatViewProjection;\n"
"attribute vec4 rm_Vertex;\n"
"void main(void)\n"
"{\n"
"gl_Position = rm_Vertex;\n"
"}";
In C/C++, you can concatenate strings by simply writing them after each other, and this also works across lines.
Then you can simply take the address of your source pointer and feed it to the GL:
glShaderSource(shaderName, 1, &source, NULL);
There is also no need to pre-calculate any lengths; the GL handles null-terminated C strings just fine if you pass NULL for the length array.
If you really want to go the route of multiple source strings, I strongly recommend you learn the basics of arrays, pointers, arrays of pointers and multidimensional arrays first.

Qt5 OpenGL Texture Sampling

I'm trying to render a QImage using OpenGL wrapper classes of Qt5 and shader programs. I have the following shaders and a 3.3 core context. I'm also using a VAO for the attributes. However, I keep getting a blank red frame (red is the background clear color that I set). I'm not sure if it is a problem with the MVP matrices or something else. Using a fragment shader which sets the output color to a certain fixed color (black) still resulted in a red frame. I'm totally lost here.
EDIT-1: I also noticed that attempting to get the location of the texRGB uniform from the QOpenGLShaderProgram results in -1, but I'm not sure if that has anything to do with the problem I'm having. The uniforms defined in the vertex shader for the MVP matrices have locations 0 and 1.
Vertex Shader
#version 330
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec2 inTexCoord;
out vec2 vTexCoord;
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
void main(void)
{
gl_Position = projectionMatrix * modelViewMatrix * vec4(inPosition, 1.0);
// pass the input texture coordinates to fragment shader
vTexCoord = inTexCoord;
}
Fragment Shader
#version 330
uniform sampler2DRect texRGB;
in vec2 vTexCoord;
out vec4 fColor;
void main(void)
{
vec3 rgb = texture2DRect(texRGB, vTexCoord.st).rgb;
fColor = vec4(rgb, 0.0);
}
OGLWindow.h
#include <QOpenGLWindow>
#include <QOpenGLFunctions>
#include <QOpenGLBuffer>
#include <QOpenGLShaderProgram>
#include <QOpenGLVertexArrayObject>
#include <QOpenGLTexture>
#include <QDebug>
#include <QString>
class OGLWindow : public QOpenGLWindow, protected QOpenGLFunctions
{
public:
OGLWindow();
~OGLWindow();
// OpenGL Events
void initializeGL();
void resizeGL(int width, int height);
void paintGL();
// a method for cleanup
void teardownGL();
private:
bool isInitialized;
// OpenGL state information
QOpenGLBuffer m_vbo_position;
QOpenGLBuffer m_vbo_index;
QOpenGLBuffer m_vbo_tex_coord;
QOpenGLVertexArrayObject m_object;
QOpenGLShaderProgram* m_program;
QImage m_image;
QOpenGLTexture* m_texture;
QMatrix4x4 m_projection_matrix;
QMatrix4x4 m_model_view_matrix;
};
OGLWindow.cpp
#include "OGLWindow.h"
// vertex data
static const QVector3D vertextData[] = {
QVector3D(-1.0f, -1.0f, 0.0f),
QVector3D( 1.0f, -1.0f, 0.0f),
QVector3D( 1.0f, 1.0f, 0.0f),
QVector3D(-1.0f, 1.0f, 0.0f)
};
// indices
static const GLushort indices[] = {
0, 1, 2,
0, 2, 3
};
OGLWindow::OGLWindow() :
m_vbo_position (QOpenGLBuffer::VertexBuffer),
m_vbo_tex_coord (QOpenGLBuffer::VertexBuffer),
m_vbo_index (QOpenGLBuffer::IndexBuffer),
m_program (nullptr),
m_texture (nullptr),
isInitialized (false)
{
}
OGLWindow::~OGLWindow()
{
makeCurrent();
teardownGL();
}
void OGLWindow::initializeGL()
{
qDebug() << "initializeGL()";
initializeOpenGLFunctions();
isInitialized = true;
QColor backgroundColor(Qt::red);
glClearColor(backgroundColor.redF(), backgroundColor.greenF(), backgroundColor.blueF(), 1.0f);
// load texture image
m_image = QImage(":/images/cube.png");
m_texture = new QOpenGLTexture(QOpenGLTexture::TargetRectangle);
// set nearest filtering mode for texture magnification and minification
m_texture->setMinificationFilter(QOpenGLTexture::Nearest);
m_texture->setMagnificationFilter(QOpenGLTexture::Nearest);
// set the wrap mode
m_texture->setWrapMode(QOpenGLTexture::ClampToEdge);
m_texture->setData(m_image.mirrored(), QOpenGLTexture::MipMapGeneration::DontGenerateMipMaps);
int imgWidth = m_image.width();
int imgHeight = m_image.height();
m_projection_matrix.setToIdentity();
m_projection_matrix.ortho(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
// m_projection_matrix.ortho(0.0, (float) width(), (float) height(), 0.0f, -1.0f, 1.0f);
m_model_view_matrix.setToIdentity();
glViewport(0, 0, width(), height());
m_program = new QOpenGLShaderProgram();
m_program->addShaderFromSourceFile(QOpenGLShader::Vertex, ":/shaders/vshader.glsl");
m_program->addShaderFromSourceFile(QOpenGLShader::Fragment, ":/shaders/fshader.glsl");
m_program->link();
m_program->bind();
// texture coordinates
static const QVector2D textureData[] = {
QVector2D(0.0f, 0.0f),
QVector2D((float) imgWidth, 0.0f),
QVector2D((float) imgWidth, (float) imgHeight),
QVector2D(0.0f, (float) imgHeight)
};
// create Vertex Array Object (VAO)
m_object.create();
m_object.bind();
// create position VBO
m_vbo_position.create();
m_vbo_position.bind();
m_vbo_position.setUsagePattern(QOpenGLBuffer::StaticDraw);
m_vbo_position.allocate(vertextData, 4 * sizeof(QVector3D));
// create texture coordinates VBO
m_vbo_tex_coord.create();
m_vbo_tex_coord.bind();
m_vbo_tex_coord.setUsagePattern(QOpenGLBuffer::StaticDraw);
m_vbo_tex_coord.allocate(textureData, 4 * sizeof(QVector2D));
// create the index buffer
m_vbo_index.create();
m_vbo_index.bind();
m_vbo_index.setUsagePattern(QOpenGLBuffer::StaticDraw);
m_vbo_index.allocate(indices, 6 * sizeof(GLushort));
// enable the two attributes that we have and set their buffers
m_program->enableAttributeArray(0);
m_program->enableAttributeArray(1);
m_program->setAttributeBuffer(0, GL_FLOAT, 0, 3, sizeof(QVector3D));
m_program->setAttributeBuffer(1, GL_FLOAT, 0, 2, sizeof(QVector2D));
// Set modelview-projection matrix
m_program->setUniformValue("projectionMatrix", m_projection_matrix);
m_program->setUniformValue("modelViewMatrix", m_model_view_matrix);
// use texture unit 0 which contains our frame
m_program->setUniformValue("texRGB", 0);
// release (unbind) all
m_object.release();
m_vbo_position.release();
m_vbo_tex_coord.release();
m_vbo_index.release();
m_program->release();
}
void OGLWindow::resizeGL(int width, int height)
{
qDebug() << "resizeGL(): width =" << width << ", height=" << height;
if (isInitialized) {
// avoid division by zero
if (height == 0) {
height = 1;
}
m_projection_matrix.setToIdentity();
m_projection_matrix.perspective(60.0, (float) width / (float) height, -1, 1);
glViewport(0, 0, width, height);
}
}
void OGLWindow::paintGL()
{
// clear
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// render using our shader
m_program->bind();
{
m_texture->bind();
m_object.bind();
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
m_object.release();
}
m_program->release();
}
void OGLWindow::teardownGL()
{
// actually destroy our OpenGL information
m_object.destroy();
m_vbo_position.destroy();
m_vbo_tex_coord.destroy();
m_vbo_index.destroy();
delete m_program;
}
EDIT-2: I'm creating the context as follows:
QSurfaceFormat format;
format.setRenderableType(QSurfaceFormat::OpenGL);
format.setProfile(QSurfaceFormat::CoreProfile);
format.setVersion(3,3);
This line in your fragment shader code is invalid:
vec3 rgb = texture2DRect(texRGB, vTexCoord.st).rgb;
texture2DRect() is not a built-in function.
Since you're using the GLSL 3.30 core profile (core is the default for the version unless compatibility is specified), you should be using the overloaded texture() function, which replaces the older type specific functions like texture2D() in the core profile.
Functions like texture2D() are still supported in GLSL 3.30 core unless a forward-compatible core profile context is used, so depending on how the context is created you may still be able to use them.
However, sampler2DRect was only added as a sampler type in GLSL 1.40 as part of adding rectangular textures to the standard in OpenGL 3.1. At the time, the legacy sampling functions were already marked as deprecated, and only the new texture() function was defined for rectangular textures. This means that texture2DRect() does not exist in any GLSL version.
The correct call is:
vec3 rgb = texture(texRGB, vTexCoord.st).rgb;
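Putting it together, a corrected fragment shader might look like the sketch below. The alpha is also set to 1.0 here: the original wrote 0.0, which would make the quad fully transparent if blending were enabled:

#version 330
uniform sampler2DRect texRGB;
in vec2 vTexCoord;
out vec4 fColor;
void main(void)
{
    // sampler2DRect takes unnormalized (pixel) coordinates, which matches
    // the texture coordinates set up in the C++ code above
    vec3 rgb = texture(texRGB, vTexCoord.st).rgb;
    fColor = vec4(rgb, 1.0);
}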
Another part of your code that can prevent it from rendering anything is this projection matrix:
m_projection_matrix.perspective(60.0, (float) width / (float) height, -1, 1);
The near and far planes for a standard projection matrix both need to be positive. This call will set up a projection transformation with a "camera" on the origin, looking down the negative z-axis. The near and far values are distances from the origin. A valid call could look like this:
m_projection_matrix.perspective(60.0, (float) width / (float) height, 1.0f, 10.0f);
You will then also need to set the model matrix to transform the coordinates of the object into this range on the negative z-axis. You could for example apply a translation by (0.0f, 0.0f, -5.0f).
Or, if you just want to see something, the quad should also become visible if you simply use the identity matrix for the projection.

GLSL 1.30 is not supported

I have run my GL program successfully on my Ubuntu system with a good graphics card. However, when I run it on an old Intel machine with a Mobile 4 Series graphics chipset, I get these errors:
QGLShader::compile(Vertex): 0:1(10): error: GLSL 1.30 is not supported. Supported version are: 1.10, 1.20 and 1.00 ES
QGLShader::compile(Fragment): 0:1(10): error: GLSL 1.30 is not supported. Supported version are: 1.10, 1.20 and 1.00 ES
I think some keywords in my vertex and fragment shader files need to be changed to the older version. Could anyone suggest the old keywords?
Vertex Shader file:
#version 130
uniform mat4 mvpMatrix;
in vec4 vertex;
in vec2 textureCoordinate;
out vec2 varyingTextureCoordinate;
void main(void)
{
varyingTextureCoordinate = textureCoordinate;
gl_Position = mvpMatrix * vertex;
}
Fragment Shader file:
#version 130
uniform sampler2D texture;
in vec2 varyingTextureCoordinate;
out vec4 fragColor;
void main(void)
{
fragColor = texture2D(texture, varyingTextureCoordinate);
}
Run the feature deprecation list on page 2 of the GLSL 1.30 spec backwards:
#version 130 -> #version 120
in -> attribute
out -> varying
Remove fragColor declaration, replace fragColor usage with gl_FragColor
Vertex shader:
#version 120
uniform mat4 mvpMatrix;
attribute vec4 vertex;
attribute vec2 textureCoordinate;
varying vec2 varyingTextureCoordinate;
void main(void)
{
varyingTextureCoordinate = textureCoordinate;
gl_Position = mvpMatrix * vertex;
}
Fragment shader:
#version 120
uniform sampler2D texture;
varying vec2 varyingTextureCoordinate;
void main(void)
{
gl_FragColor = texture2D(texture, varyingTextureCoordinate);
}
