Why is my normal vector always blank? (HLSL)

I am trying to create a basic diffuse shader in HLSL, following this rbwhitaker tutorial. The shader runs without errors and can display triangles, but it seems that the Normal input to the vertex shader is never given a value, which causes the diffuse lighting to break. I tested this by setting the output color to white and then subtracting the normal, but there was never any variation in color; it stayed white. My shader code is below.
float4x4 World;
float4x4 View;
float4x4 Projection;
float4x4 WorldInverseTranspose;
float3 DiffuseLightDirection = float3(1, 0, 0);
float4 DiffuseColor = float4(1, 1, 1, 1);
float DiffuseIntensity = 1.0;
float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity = 0.1;
struct VertexShaderInput
{
float4 Position : POSITION0;
float4 Normal : NORMAL0;
};
struct VertexShaderOutput
{
float4 Position : POSITION0;
float4 Color : COLOR0;
};
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
VertexShaderOutput output;
float4 worldPosition = mul(input.Position, World);
float4 viewPosition = mul(worldPosition, View);
output.Position = mul(viewPosition, Projection);
float4 normal = mul(input.Normal, WorldInverseTranspose);
float lightIntensity = dot(normal, DiffuseLightDirection);
output.Color = saturate(DiffuseColor * DiffuseIntensity * lightIntensity);
return output;
}
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
return saturate(input.Color + AmbientColor * AmbientIntensity);
}
technique Diffuse
{
pass Pass1
{
VertexShader = compile vs_2_0 VertexShaderFunction();
PixelShader = compile ps_2_0 PixelShaderFunction();
}
}
EDIT:
I think it is probably because I don't directly pass the normal to the shader, but I don't know how to compute it if there isn't a built-in way.
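For what it's worth, there is no built-in way: normals are ordinarily computed on the CPU from the triangle data and passed in as part of each vertex. Below is a minimal sketch of the usual approach (accumulate face normals per vertex, then normalize), written in C++ purely for illustration; in an XNA project the equivalent loop would run in C# while building the vertex buffer.
#include <cmath>
#include <cstddef>
#include <vector>
struct Vec3 { float x, y, z; };
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b)
{
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
// Accumulate the face normal of every triangle into its three vertices,
// then normalize the sums to get smooth per-vertex normals.
std::vector<Vec3> computeNormals(const std::vector<Vec3>& pos,
                                 const std::vector<unsigned>& idx)
{
    std::vector<Vec3> nor(pos.size(), Vec3{0.0f, 0.0f, 0.0f});
    for (std::size_t i = 0; i + 2 < idx.size(); i += 3)
    {
        // face normal from two edges; the winding order decides its direction
        Vec3 n = cross(pos[idx[i + 1]] - pos[idx[i]], pos[idx[i + 2]] - pos[idx[i]]);
        for (int k = 0; k < 3; k++)
        {
            nor[idx[i + k]].x += n.x;
            nor[idx[i + k]].y += n.y;
            nor[idx[i + k]].z += n.z;
        }
    }
    for (Vec3& n : nor)
    {
        float l = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (l > 0.0f) { n.x /= l; n.y /= l; n.z /= l; }
    }
    return nor;
}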

Related

How do I apply a 2d shader to a webgl canvas in js?

I hope I'm not taking the wrong approach here, but I feel like I'm on the right track and this shouldn't be too complicated. I want to take a simple function of x and y on the screen and return a color applied to each pixel of a WebGL canvas.
For example: f(x,y) -> rgb(x/canvasWidth,y/canvasHeight) where x and y are positions on the canvas and color is that of the pixel.
My first thought is to take this example and modify it so that the rectangle fills the screen and the color is as described. I think this is achieved by modifying the vertex shader so that the rectangle covers the canvas and fragment shader to implement the color but I'm not sure how to apply the vertex shader based on window size or get my x and y variables in the context of the fragment shader.
Here's the shader code for the tutorial I'm going off of. I haven't tried much besides manually changing the constant color in the fragment shader and mutating the square by changing the values in the initBuffers method.
You will need to pass in the canvasWidth and canvasHeight. Your fragment shader might look like this:
precision mediump float;
// Require resolution (canvas size) as an input
uniform vec3 uResolution;
void main() {
// Calculate relative coordinates (uv)
vec2 uv = gl_FragCoord.xy / uResolution.xy;
gl_FragColor = vec4(uv.x, uv.y, 0., 1.0);
}
And then per @LJ's answer, if you really want the fragment to cover the entire canvas, you could modify your vertex shader to ignore the normal matrix transforms:
void main() {
// Pass through each vertex position without transforming:
gl_Position = aVertexPosition;
}
The runnable example below is mostly copy-pasted from the example you linked, with minor modifications:
const canvas = document.querySelector('#glcanvas');
main();
//
// Start here
//
function main() {
const gl = canvas.getContext('webgl');
// If we don't have a GL context, give up now
if (!gl) {
alert('Unable to initialize WebGL. Your browser or machine may not support it.');
return;
}
// Vertex shader program
const vsSource = `
attribute vec4 aVertexPosition;
uniform mat4 uModelViewMatrix;
uniform mat4 uProjectionMatrix;
void main() {
// We don't need the projection:
//gl_Position = uProjectionMatrix * uModelViewMatrix * aVertexPosition;
// Instead we pass through each vertex position as-is:
gl_Position = aVertexPosition;
}
`;
// Fragment shader program
const fsSource = `
precision mediump float;
// Require resolution (canvas size) as an input
uniform vec3 uResolution;
void main() {
// Calculate relative coordinates (uv)
vec2 uv = gl_FragCoord.xy / uResolution.xy;
gl_FragColor = vec4(uv.x, uv.y, 0., 1.0);
}
`;
// Initialize a shader program; this is where all the lighting
// for the vertices and so forth is established.
const shaderProgram = initShaderProgram(gl, vsSource, fsSource);
// Collect all the info needed to use the shader program.
// Look up which attribute our shader program is using
// for aVertexPosition and look up uniform locations.
const programInfo = {
program: shaderProgram,
attribLocations: {
vertexPosition: gl.getAttribLocation(shaderProgram, 'aVertexPosition'),
},
uniformLocations: {
projectionMatrix: gl.getUniformLocation(shaderProgram, 'uProjectionMatrix'),
modelViewMatrix: gl.getUniformLocation(shaderProgram, 'uModelViewMatrix'),
resolution: gl.getUniformLocation(shaderProgram, 'uResolution'),
},
};
// Here's where we call the routine that builds all the
// objects we'll be drawing.
const buffers = initBuffers(gl);
// Draw the scene
drawScene(gl, programInfo, buffers);
}
//
// initBuffers
//
// Initialize the buffers we'll need. For this demo, we just
// have one object -- a simple two-dimensional square.
//
function initBuffers(gl) {
// Create a buffer for the square's positions.
const positionBuffer = gl.createBuffer();
// Select the positionBuffer as the one to apply buffer
// operations to from here out.
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
// Now create an array of positions for the square.
const positions = [
1.0, 1.0,
-1.0, 1.0,
1.0, -1.0,
-1.0, -1.0,
];
// Now pass the list of positions into WebGL to build the
// shape. We do this by creating a Float32Array from the
// JavaScript array, then use it to fill the current buffer.
gl.bufferData(gl.ARRAY_BUFFER,
new Float32Array(positions),
gl.STATIC_DRAW);
return {
position: positionBuffer,
};
}
//
// Draw the scene.
//
function drawScene(gl, programInfo, buffers) {
gl.clearColor(0.0, 0.0, 0.0, 1.0); // Clear to black, fully opaque
gl.clearDepth(1.0); // Clear everything
gl.enable(gl.DEPTH_TEST); // Enable depth testing
gl.depthFunc(gl.LEQUAL); // Near things obscure far things
// Clear the canvas before we start drawing on it.
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
// Create a perspective matrix, a special matrix that is
// used to simulate the distortion of perspective in a camera.
// Our field of view is 45 degrees, with a width/height
// ratio that matches the display size of the canvas
// and we only want to see objects between 0.1 units
// and 100 units away from the camera.
const fieldOfView = 45 * Math.PI / 180; // in radians
const aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
const zNear = 0.1;
const zFar = 100.0;
const projectionMatrix = mat4.create();
// note: glmatrix.js always has the first argument
// as the destination to receive the result.
mat4.perspective(projectionMatrix,
fieldOfView,
aspect,
zNear,
zFar);
// Set the drawing position to the "identity" point, which is
// the center of the scene.
const modelViewMatrix = mat4.create();
// Now move the drawing position a bit to where we want to
// start drawing the square.
mat4.translate(modelViewMatrix, // destination matrix
modelViewMatrix, // matrix to translate
[-0.0, 0.0, -6]); // amount to translate
// Tell WebGL how to pull out the positions from the position
// buffer into the vertexPosition attribute.
{
const numComponents = 2;
const type = gl.FLOAT;
const normalize = false;
const stride = 0;
const offset = 0;
gl.bindBuffer(gl.ARRAY_BUFFER, buffers.position);
gl.vertexAttribPointer(
programInfo.attribLocations.vertexPosition,
numComponents,
type,
normalize,
stride,
offset);
gl.enableVertexAttribArray(
programInfo.attribLocations.vertexPosition);
}
// Tell WebGL to use our program when drawing
gl.useProgram(programInfo.program);
// Set the shader uniforms
gl.uniformMatrix4fv(
programInfo.uniformLocations.projectionMatrix,
false,
projectionMatrix);
gl.uniformMatrix4fv(
programInfo.uniformLocations.modelViewMatrix,
false,
modelViewMatrix);
gl.uniform3f(programInfo.uniformLocations.resolution, canvas.width, canvas.height, 1.0);
{
const offset = 0;
const vertexCount = 4;
gl.drawArrays(gl.TRIANGLE_STRIP, offset, vertexCount);
}
}
//
// Initialize a shader program, so WebGL knows how to draw our data
//
function initShaderProgram(gl, vsSource, fsSource) {
const vertexShader = loadShader(gl, gl.VERTEX_SHADER, vsSource);
const fragmentShader = loadShader(gl, gl.FRAGMENT_SHADER, fsSource);
// Create the shader program
const shaderProgram = gl.createProgram();
gl.attachShader(shaderProgram, vertexShader);
gl.attachShader(shaderProgram, fragmentShader);
gl.linkProgram(shaderProgram);
// If creating the shader program failed, alert
if (!gl.getProgramParameter(shaderProgram, gl.LINK_STATUS)) {
alert('Unable to initialize the shader program: ' + gl.getProgramInfoLog(shaderProgram));
return null;
}
return shaderProgram;
}
//
// creates a shader of the given type, uploads the source and
// compiles it.
//
function loadShader(gl, type, source) {
const shader = gl.createShader(type);
// Send the source to the shader object
gl.shaderSource(shader, source);
// Compile the shader program
gl.compileShader(shader);
// See if it compiled successfully
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
alert('An error occurred compiling the shaders: ' + gl.getShaderInfoLog(shader));
gl.deleteShader(shader);
return null;
}
return shader;
}
canvas {
border: 2px solid black;
background-color: black;
}
video {
display: none;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/gl-matrix/2.8.1/gl-matrix-min.js"></script>
<body>
<canvas id="glcanvas" width="640" height="480"></canvas>
</body>
Same on glitch:
https://glitch.com/edit/#!/so-example-71499942
You're on the right track. To have a quad fill the screen you can skip all the matrix transforms in the vertex shader and feed normalized device coordinates (-1 ... +1) directly into gl_Position, then use gl_FragCoord to access the position of the pixel in the fragment shader.

How to avoid pixmap cutting issues when using QPainter with rotation

label=new QLabel(this);
label->setGeometry(this->width()/2,this->height()/2,label->width(),label->height());
QPixmap myPixmapForNow;
myPixmapForNow.load("C://Users//abc//Documents//QpixMap//hub_needle.png");
label->setMinimumSize(QSize(myPixmapForNow.width(),myPixmapForNow.width()));
label->setAlignment(Qt::AlignCenter);
QPixmap rotated(label->width(),label->width());
QPainter p(&rotated);
p.setRenderHint(QPainter::Antialiasing);
p.setRenderHint(QPainter::SmoothPixmapTransform);
p.setRenderHint(QPainter::HighQualityAntialiasing);
p.translate(myPixmapForNow.size().width() / 2,
(myPixmapForNow.size().height() / 2));
qDebug()<<"before rotation width:"<<rotated.size().width()<<"height:"<<rotated.size().width();
p.rotate(arg1);
p.translate(-myPixmapForNow.size().width() / 2,
-(myPixmapForNow.size().height() / 2));
qDebug()<<"after rotation height:"<<-rotated.size().width()<<"height:"<<-rotated.size().height();[![enter image description here][1]][1]
p.drawPixmap(QRect(0,0,myPixmapForNow.width(),myPixmapForNow.height()), myPixmapForNow);
p.end();
label->setPixmap(rotated);
(Screenshots in the original post showed the pixmap before and after rotation, with the rotated result clipped.)
I must admit the OP could have explained the issue in a bit more detail. Unfortunately, the OP didn't react to comments.
However, out of curiosity, I tried to puzzle this out in a little demo. (I really like to write little Qt demos, especially with image manipulation and cat pictures.)
My first assumption was that OP has struggled with the order of transformations.
While translations are commutative (changing order doesn't change result), this is not the case for rotations (and other transformations).
However, after having wrapped the OP's code into an MCVE, I convinced myself that the order of transformations matched my expectation: a rotation about the center of the image.
So, I focused on the title
How to avoid pixmap cutting issues when using QPainter with rotation
The reason for the “cutting issue” is simple:
To paint a rotated image (rectangle), the output may require a greater range of pixels than the original.
There are two possibilities to fix this:
enlarge the QPixmap for output
scale the result to match the original size of QPixmap.
So, this leaves the task of determining the output size of the rotated image beforehand, to either make the output QPixmap correspondingly larger or to add the respective scaling.
The bounding rectangle of a rotated rectangle can be calculated with trigonometric functions (sin, cos, etc.; a sketch follows below). I decided instead (for an IMHO more naïve way) to let Qt do the work for me.
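For reference, the trigonometric version is only a few lines. This is my own sketch (not part of the demo below), assuming the angle is given in degrees:
#include <cmath>
// Axis-aligned bounding box of a w x h rectangle rotated by 'degrees':
// the projections of the two edge vectors onto each axis simply add up.
void rotatedBoundingBox(double w, double h, double degrees,
                        double &bbWidth, double &bbHeight)
{
    const double rad = degrees * 3.14159265358979323846 / 180.0;
    const double c = std::fabs(std::cos(rad));
    const double s = std::fabs(std::sin(rad));
    bbWidth  = w * c + h * s;
    bbHeight = w * s + h * c;
}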
To achieve this, the transformation has to be calculated before creating the QPixmap and QPainter. Hence, the prior
qPainter.translate(cx, cy);
qPainter.rotate(ra);
qPainter.translate(-cx, -cy);
is replaced by:
QTransform xform;
xform.translate(cx, cy);
xform.rotate(ra);
xform.translate(-cx, -cy);
which can be later applied as is:
qPainter.setTransform(xform);
I used the fact that all four corners of the rotated rectangle will touch the bounding rectangle. So, the bounding rectangle can be calculated by applying min() and max() to the x and y components of the rotated image corners:
const QPoint ptTL = xform * QPoint(0, 0);
const QPoint ptTR = xform * QPoint(w - 1, 0);
const QPoint ptBL = xform * QPoint(0, h - 1);
const QPoint ptBR = xform * QPoint(w - 1, h - 1);
QRect qRectBB(
QPoint(
min(ptTL.x(), ptTR.x(), ptBL.x(), ptBR.x()),
min(ptTL.y(), ptTR.y(), ptBL.y(), ptBR.y())),
QPoint(
max(ptTL.x(), ptTR.x(), ptBL.x(), ptBR.x()),
max(ptTL.y(), ptTR.y(), ptBL.y(), ptBR.y())));
Afterwards, the output may be adjusted using the origin and size of qRectBB.
The whole demo application testQPainterRotateCenter.cc:
#include <algorithm>
// Qt header:
#include <QtWidgets>
int min(int x0, int x1, int x2, int x3)
{
return std::min(std::min(x0, x1), std::min(x2, x3));
}
int max(int x0, int x1, int x2, int x3)
{
return std::max(std::max(x0, x1), std::max(x2, x3));
}
QPixmap rotate(
const QPixmap &qPixMapOrig, int cx, int cy, int ra,
bool fitIn, bool keepSize)
{
int w = qPixMapOrig.width(), h = qPixMapOrig.height();
QTransform xform;
xform.translate(cx, cy);
xform.rotate(ra);
xform.translate(-cx, -cy);
if (fitIn) {
// find bounding rect
const QPoint ptTL = xform * QPoint(0, 0);
const QPoint ptTR = xform * QPoint(w - 1, 0);
const QPoint ptBL = xform * QPoint(0, h - 1);
const QPoint ptBR = xform * QPoint(w - 1, h - 1);
QRect qRectBB(
QPoint(
min(ptTL.x(), ptTR.x(), ptBL.x(), ptBR.x()),
min(ptTL.y(), ptTR.y(), ptBL.y(), ptBR.y())),
QPoint(
max(ptTL.x(), ptTR.x(), ptBL.x(), ptBR.x()),
max(ptTL.y(), ptTR.y(), ptBL.y(), ptBR.y())));
qDebug() << "Bounding box:" << qRectBB;
// translate top left corner to (0, 0)
xform *= QTransform().translate(-qRectBB.left(), -qRectBB.top());
if (keepSize) {
// center align scaled image
xform *= w > h
? QTransform().translate((w - h) / 2, 0)
: QTransform().translate(0, (h - w) / 2);
// add scaling to transform
const qreal sx = qreal(w) / qRectBB.width();
const qreal sy = qreal(h) / qRectBB.height();
const qreal s = std::min(sx, sy);
xform *= QTransform().scale(s, s);
} else {
// adjust w and h
w = qRectBB.width(); h = qRectBB.height();
}
}
QPixmap qPixMap(w, h);
qPixMap.fill(Qt::gray);
{ QPainter qPainter(&qPixMap);
qPainter.setRenderHint(QPainter::Antialiasing);
qPainter.setRenderHint(QPainter::SmoothPixmapTransform);
qPainter.setRenderHint(QPainter::HighQualityAntialiasing);
qPainter.setTransform(xform);
qPainter.drawPixmap(0, 0, qPixMapOrig.width(), qPixMapOrig.height(), qPixMapOrig);
} // end of scope -> finalize QPainter
return qPixMap;
}
// main application
int main(int argc, char **argv)
{
qDebug() << "Qt Version:" << QT_VERSION_STR;
QApplication app(argc, argv);
// setup data
const QString file = QString::fromUtf8("cats.jpg");
QPixmap qPixMapOrig;
qPixMapOrig.load(file);
int cx = qPixMapOrig.width() / 2, cy = qPixMapOrig.height() / 2;
int ra = 0;
// setup GUI
QWidget qWin;
qWin.setWindowTitle(
file % QString(" (")
% QString::number(qPixMapOrig.width())
% " x " % QString::number(qPixMapOrig.height())
% ") - testQPainterRotateCenter");
QVBoxLayout qVBox;
QHBoxLayout qHBox1;
QLabel qLblCX(QString::fromUtf8("center x:"));
qHBox1.addWidget(&qLblCX);
QLineEdit qEditCX;
qEditCX.setText(QString::number(cx));
qHBox1.addWidget(&qEditCX, 1);
QLabel qLblCY(QString::fromUtf8("center y:"));
qHBox1.addWidget(&qLblCY);
QLineEdit qEditCY;
qEditCY.setText(QString::number(cy));
qHBox1.addWidget(&qEditCY, 1);
QLabel qLblRA(QString::fromUtf8("rotation angle:"));
qHBox1.addWidget(&qLblRA);
QSpinBox qEditRA;
qEditRA.setValue(ra);
qHBox1.addWidget(&qEditRA, 1);
qVBox.addLayout(&qHBox1);
QHBoxLayout qHBox2;
QCheckBox qTglFitIn(QString::fromUtf8("Zoom to Fit"));
qTglFitIn.setChecked(false);
qHBox2.addWidget(&qTglFitIn);
QCheckBox qTglKeepSize(QString::fromUtf8("Keep Size"));
qTglKeepSize.setChecked(false);
qHBox2.addWidget(&qTglKeepSize);
qVBox.addLayout(&qHBox2);
QLabel qLblImg;
qLblImg.setPixmap(qPixMapOrig);
qLblImg.setAlignment(Qt::AlignCenter);
qVBox.addWidget(&qLblImg, 1);
qWin.setLayout(&qVBox);
qWin.show();
// helper to update pixmap
auto update = [&]() {
cx = qEditCX.text().toInt();
cy = qEditCY.text().toInt();
ra = qEditRA.value();
const bool fitIn = qTglFitIn.isChecked();
const bool keepSize = qTglKeepSize.isChecked();
QPixmap qPixMap = rotate(qPixMapOrig, cx, cy, ra, fitIn, keepSize);
qLblImg.setPixmap(qPixMap);
};
// install signal handlers
QObject::connect(&qEditCX, &QLineEdit::textChanged,
[&](const QString&) { update(); });
QObject::connect(&qEditCY, &QLineEdit::textChanged,
[&](const QString&) { update(); });
QObject::connect(&qEditRA, QOverload<int>::of(&QSpinBox::valueChanged),
[&](int) { update(); });
QObject::connect(&qTglFitIn, &QCheckBox::toggled,
[&](bool) { update(); });
QObject::connect(&qTglKeepSize, &QCheckBox::toggled,
[&](bool) { update(); });
// runtime loop
return app.exec();
}
The Qt project file testQPainterRotateCenter.pro:
SOURCES = testQPainterRotateCenter.cc
QT += widgets
Output:
The rotated image without zoom to fit:
Zoomed to fit:
Zoomed to fit original size:
Notes:
While fiddling originally with a square image of 300×300 pixels, I became aware that rotating a non-square rectangle may result in a bounding box with a different aspect ratio than the original. Hence, an additional translation might be desirable to re-align the scaled output in the original bounding box. I switched to a non-square sample image of 300×200 pixels to illustrate this.
With the fit-in calculations, the translations before/after rotation are actually obsolete; the result is translated to the intended position in any case.
Instead of Qt::gray, the "background color" (i.e. the color the QPixmap is initially filled with) could be made completely transparent. I decided to stick with Qt::gray for illustration.

How can I transform a recursive function with two calls to itself into an iterative one? [duplicate]

I am writing a GPU-based real-time raytracing renderer using a GLSL compute shader. So far, it works really well, but I have stumbled into a seemingly unsolvable problem when it comes to having both reflections and refractions simultaneously.
My logic tells me that in order to have reflections and refractions on an object such as glass, the ray would have to split into two: one ray reflects off the surface, and the other refracts through it. The colours of these rays would then be combined by some function and ultimately used as the colour of the pixel the ray originated from. The problem I have is that I can't split the rays in shader code, as I would have to use recursion to do so. From my understanding, functions in a shader cannot be recursive, because all GLSL functions are like inline functions in C++ due to compatibility issues with older GPU hardware.
Is it possible to simulate or fake recursion in shader code, or can I even achieve reflection and refraction simultaneously without using recursion at all? I can't see how it can happen without recursion, but I might be wrong.
I managed to convert the back ray tracing to an iterative process suitable for GLSL with the method suggested in my comment. It is far from optimized and I do not have all the physical stuff implemented yet (no Snell's law, etc.), but as a proof of concept it already works. I do everything in the fragment shader; the CPU-side code just sends the uniforms, constants, and the scene, the latter in the form of a 32-bit non-clamped float texture (GL_LUMINANCE32F_ARB). The rendering is just a single quad covering the whole screen.
Passing the scene
I decided to store the scene in a texture so each ray/fragment has direct access to the whole scene. The texture is 2D, but it is used as a linear list of 32-bit floats. I decided on this format:
enum _fac_type_enum
{
_fac_triangles=0, // r,g,b,a, n, triangle count, { x0,y0,z0,x1,y1,z1,x2,y2,z2 }
_fac_spheres, // r,g,b,a, n, sphere count, { x,y,z,r }
};
const GLfloat _n_glass=1.561;
const GLfloat _n_vacuum=1.0;
GLfloat data[]=
{
// r, g, b, a, n, type,count
0.2,0.3,0.5,0.5,_n_glass,_fac_triangles, 4, // tetrahedron
// x, y, z (one vertex per row, three rows per triangle)
-0.5,-0.5,+1.0,
0.0,+0.5,+1.0,
+0.5,-0.5,+1.0,
0.0, 0.0,+0.5,
-0.5,-0.5,+1.0,
0.0,+0.5,+1.0,
0.0, 0.0,+0.5,
0.0,+0.5,+1.0,
+0.5,-0.5,+1.0,
0.0, 0.0,+0.5,
+0.5,-0.5,+1.0,
-0.5,-0.5,+1.0,
};
You can add/change any type of object. This example holds just a single semi-transparent bluish tetrahedron. You could also add transform matrices, more coefficients for material properties, etc.
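In case it helps, uploading such a float buffer to the GPU could look like the sketch below (my own illustration, not from the original code; it assumes desktop GL headers and the legacy ARB_texture_float format named above).
#include <GL/gl.h>
#include <GL/glext.h>
#include <cstring>
#include <vector>
// Pack the scene floats into a square 32-bit float texture so the
// fragment shader can fetch them back one texel at a time.
GLuint uploadScene(const GLfloat* data, int count, int siz)
{
    std::vector<GLfloat> padded(siz * siz, 0.0f);  // pad to siz*siz texels
    std::memcpy(padded.data(), data, count * sizeof(GLfloat));
    GLuint txr = 0;
    glGenTextures(1, &txr);
    glBindTexture(GL_TEXTURE_2D, txr);
    // exact texel fetches only, no filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, siz, siz, 0,
                 GL_LUMINANCE, GL_FLOAT, padded.data());
    return txr;
}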
Architecture
The vertex shader just initializes the corner rays of the view (start position and direction), which are interpolated so that each fragment represents the start ray of the back ray tracing process.
Iterative back ray tracing
So I created a "static" list of rays and initialized it with the start ray. The iteration is done in two steps; first the back ray tracing:
Loop through all rays in the list, starting from the first
Find the closest intersection with the scene
store the position, surface normal, and material properties in the ray struct
If an intersection was found and this is not the last "recursion" layer, add reflect/refract rays to the end of the list
also store their indexes in the processed ray struct
Now your rays should hold all the intersection info you need to reconstruct the color. To do that:
loop through all the recursion levels backwards
for each ray matching the actual recursion layer
compute the ray color
using whatever lighting equations you want; if the ray has children, add their color to the result based on the material properties (reflective and refractive coefficients ...)
Now the first ray should contain the color you want to output.
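The same control flow, distilled into a short C++ sketch of the recursion-to-iteration idea (an illustration only, not a copy of the shader below):
#include <cstddef>
#include <vector>
struct Ray
{
    int   level   = 0;              // "recursion" depth of this ray
    int   reflect = -1;             // index of the child reflection ray, -1 if none
    int   refract = -1;             // index of the child refraction ray, -1 if none
    float color   = 0.0f;           // shaded color (placeholder scalar)
    float refl = 0.0f, refr = 0.0f; // material mix coefficients
};
float trace(Ray start, int maxLevels)
{
    std::vector<Ray> rays{start};
    // forward pass: appending children replaces the two recursive calls;
    // new rays land at the end, so children always follow their parent
    for (std::size_t i = 0; i < rays.size(); i++)
    {
        // ... intersect rays[i] with the scene, fill color/refl/refr ...
        if (rays[i].level < maxLevels - 1 /* && hit a surface */)
        {
            Ray child; child.level = rays[i].level + 1;
            rays[i].reflect = int(rays.size()); rays.push_back(child);
            rays[i].refract = int(rays.size()); rays.push_back(child);
        }
    }
    // backward pass: walking from the end visits children before their parents,
    // so each parent can mix in its children's finished colors
    for (std::size_t i = rays.size(); i-- > 0;)
    {
        Ray& r = rays[i];
        if (r.reflect >= 0) r.color += rays[r.reflect].color * r.refl;
        if (r.refract >= 0) r.color += rays[r.refract].color * r.refr;
    }
    return rays[0].color;
}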
Uniforms used:
tm_eye: view camera matrix
aspect: view ys/xs aspect ratio
n0: empty-space refraction index (unused yet)
focal_length: camera focal length
fac_siz: resolution of the square scene texture
fac_num: number of floats actually used in the scene texture
fac_txr: texture unit for the scene texture
Preview:
The fragment shader contains my debug prints, so if they are used you will also need the font texture; see this QA:
GLSL debug prints
ToDo:
add matrices for objects, camera etc.
add material properties (shininess, reflection/refraction coefficient)
Snell's law (right now the directions of the new rays are wrong ...)
maybe separate R,G,B into 3 start rays and combine them at the end
fake SSS (subsurface scattering) based on ray lengths
implement lights better (right now they are constants in the code)
implement more primitives (right now only triangles are supported)
[Edit1] code debug and upgrade
I removed the old source code to fit inside the 30KB limit; if you need it, dig it out of the edit history. I had some time for more advanced debugging, and here is the result:
This version resolves some geometry, accuracy, and domain problems and bugs. Both reflections and refractions are now implemented, as shown in this debug draw for a test ray:
In the debug view only the cube is transparent, and the last ray, which does not hit anything, is ignored. As you can see, the ray splits. One ray ended inside the cube due to the total-reflection angle, and I disabled all reflections inside objects for speed reasons.
The 32-bit floats are a bit noisy for the intersection distances, so you can use 64-bit doubles instead, but the speed drops considerably in that case. Another option is to rewrite the equations to use relative coordinates, which are more precise in this use case.
Here is the shader source:
Vertex:
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
uniform float aspect;
uniform float focal_length;
uniform mat4x4 tm_eye;
layout(location=0) in vec2 pos;
out smooth vec2 txt_pos; // frag position on screen <-1,+1> for debug prints
out smooth vec3 ray_pos; // ray start position
out smooth vec3 ray_dir; // ray start direction
//------------------------------------------------------------------
void main(void)
{
vec4 p;
txt_pos=pos;
// perspective projection
p=tm_eye*vec4(pos.x/aspect,pos.y,0.0,1.0);
ray_pos=p.xyz;
p-=tm_eye*vec4(0.0,0.0,-focal_length,1.0);
ray_dir=normalize(p.xyz);
gl_Position=vec4(pos,0.0,1.0);
}
//------------------------------------------------------------------
Fragment:
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
// Ray tracer ver: 1.000
//------------------------------------------------------------------
in smooth vec3 ray_pos; // ray start position
in smooth vec3 ray_dir; // ray start direction
uniform float n0; // refractive index of camera origin
uniform int fac_siz; // square texture x,y resolution size
uniform int fac_num; // number of valid floats in texture
uniform sampler2D fac_txr; // scene mesh data texture
out layout(location=0) vec4 frag_col;
//---------------------------------------------------------------------------
//#define _debug_print
#define _reflect
#define _refract
//---------------------------------------------------------------------------
#ifdef _debug_print
in vec2 txt_pos; // frag screen position <-1,+1>
uniform sampler2D txr_font; // ASCII 32x8 characters font texture unit
uniform float txt_fxs,txt_fys; // font/screen resolution ratio
const int _txtsiz=64; // text buffer size
int txt[_txtsiz],txtsiz; // text buffer and its actual size
vec4 txt_col=vec4(0.0,0.0,0.0,1.0); // color interface for txt_print()
bool _txt_col=false; // is txt_col active?
void txt_decimal(vec2 v); // print vec2 into txt
void txt_decimal(vec3 v); // print vec3 into txt
void txt_decimal(vec4 v); // print vec4 into txt
void txt_decimal(float x); // print float x into txt
void txt_decimal(int x); // print int x into txt
void txt_print(float x0,float y0); // print txt at x0,y0 [chars]
#endif
//---------------------------------------------------------------------------
void main(void)
{
const vec3 light_dir=normalize(vec3(0.1,0.1,1.0));
const float light_iamb=0.1; // dot offset
const float light_idir=0.5; // directional light amplitude
const vec3 back_col=vec3(0.2,0.2,0.2); // background color
const float _zero=1e-6; // to avoid intersection with the start point of the ray
const int _fac_triangles=0; // r,g,b, refl,refr,n, type, triangle count, { x0,y0,z0,x1,y1,z1,x2,y2,z2 }
const int _fac_spheres =1; // r,g,b, refl,refr,n, type, sphere count, { x,y,z,r }
// ray scene intersection
struct _ray
{
vec3 pos,dir,nor;
vec3 col;
float refl,refr;// reflection,refraction intensity coefficients
float n0,n1,l; // refraction index (start,end), ray length
int lvl,i0,i1; // recursion level, reflect, refract
};
const int _lvls=5;
const int _rays=(1<<_lvls)-1;
_ray ray[_rays]; int rays;
vec3 v0,v1,v2,pos;
vec3 c,col;
float refr,refl;
float tt,t,n1,a;
int i0,ii,num,id;
// fac texture access
vec2 st; int i,j; float ds=1.0/float(fac_siz-1);
#define fac_get texture(fac_txr,st).r; st.s+=ds; i++; j++; if (j==fac_siz) { j=0; st.s=0.0; st.t+=ds; }
// enque start ray
ray[0].pos=ray_pos;
ray[0].dir=normalize(ray_dir);
ray[0].nor=vec3(0.0,0.0,0.0);
ray[0].refl=0.0;
ray[0].refr=0.0;
ray[0].n0=n0;
ray[0].n1=1.0;
ray[0].l =0.0;
ray[0].lvl=0;
ray[0].i0=-1;
ray[0].i1=-1;
rays=1;
// debug print area
#ifdef _debug_print
bool _dbg=false;
float dbg_x0=45.0;
float dbg_y0= 1.0;
float dbg_xs=12.0;
float dbg_ys=_rays+1.0;
dbg_xs=40.0;
dbg_ys=10;
float x=0.5*(1.0+txt_pos.x)/txt_fxs; x-=dbg_x0;
float y=0.5*(1.0-txt_pos.y)/txt_fys; y-=dbg_y0;
// inside bbox?
if ((x>=0.0)&&(x<=dbg_xs)
&&(y>=0.0)&&(y<=dbg_ys))
{
// prints on
_dbg=true;
// preset debug ray
ray[0].pos=vec3(0.0,0.0,0.0)*2.5;
ray[0].dir=vec3(0.0,0.0,1.0);
}
#endif
// loop all enqued rays
for (i0=0;i0<rays;i0++)
{
// loop through all objects
// find closest forward intersection between them and ray[i0]
// store it to ray[i0].(nor,col)
// store it to pos,n1
t=tt=-1.0; ii=1; ray[i0].l=0.0;
ray[i0].col=back_col;
pos=ray[i0].pos; n1=n0;
for (st=vec2(0.0,0.0),i=j=0;i<fac_num;)
{
c.r=fac_get; // RGBA
c.g=fac_get;
c.b=fac_get;
refl=fac_get;
refr=fac_get;
n1=fac_get; // refraction index
a=fac_get; id=int(a); // object type
a=fac_get; num=int(a); // face count
if (id==_fac_triangles)
for (;num>0;num--)
{
v0.x=fac_get; v0.y=fac_get; v0.z=fac_get;
v1.x=fac_get; v1.y=fac_get; v1.z=fac_get;
v2.x=fac_get; v2.y=fac_get; v2.z=fac_get;
vec3 e1,e2,n,p,q,r;
float t,u,v,det,idet;
//compute ray triangle intersection
e1=v1-v0;
e2=v2-v0;
// Calculate planes normal vector
p=cross(ray[i0].dir,e2);
det=dot(e1,p);
// Ray is parallel to plane
if (abs(det)<1e-8) continue;
idet=1.0/det;
r=ray[i0].pos-v0;
u=dot(r,p)*idet;
if ((u<0.0)||(u>1.0)) continue;
q=cross(r,e1);
v=dot(ray[i0].dir,q)*idet;
if ((v<0.0)||(u+v>1.0)) continue;
t=dot(e2,q)*idet;
if ((t>_zero)&&((t<=tt)||(ii!=0)))
{
ii=0; tt=t;
// store color,n ...
ray[i0].col=c;
ray[i0].refl=refl;
ray[i0].refr=refr;
// barycentric interpolate position
t=1.0-u-v;
pos=(v0*t)+(v1*u)+(v2*v);
// compute normal (store as dir for now)
e1=v1-v0;
e2=v2-v1;
ray[i0].nor=cross(e1,e2);
}
}
if (id==_fac_spheres)
for (;num>0;num--)
{
float r;
v0.x=fac_get; v0.y=fac_get; v0.z=fac_get; r=fac_get;
// compute l0 length of ray(p0,dp) to intersection with sphere(v0,r)
// where rr= r^-2
float aa,bb,cc,dd,l0,l1,rr;
vec3 p0,dp;
p0=ray[i0].pos-v0; // set sphere center to (0,0,0)
dp=ray[i0].dir;
rr = 1.0/(r*r);
aa=2.0*rr*dot(dp,dp);
bb=2.0*rr*dot(p0,dp);
cc= rr*dot(p0,p0)-1.0;
dd=((bb*bb)-(2.0*aa*cc));
if (dd<0.0) continue;
dd=sqrt(dd);
l0=(-bb+dd)/aa;
l1=(-bb-dd)/aa;
if (l0<0.0) l0=l1;
if (l1<0.0) l1=l0;
t=min(l0,l1); if (t<=_zero) t=max(l0,l1);
if ((t>_zero)&&((t<=tt)||(ii!=0)))
{
ii=0; tt=t;
// store color,n ...
ray[i0].col=c;
ray[i0].refl=refl;
ray[i0].refr=refr;
// position,normal
pos=ray[i0].pos+(ray[i0].dir*t);
ray[i0].nor=pos-v0;
}
}
}
ray[i0].l=tt;
ray[i0].nor=normalize(ray[i0].nor);
// split ray from pos and ray[i0].nor
if ((ii==0)&&(ray[i0].lvl<_lvls-1))
{
t=dot(ray[i0].dir,ray[i0].nor);
// reflect
#ifdef _reflect
if ((ray[i0].refl>_zero)&&(t<_zero)) // do not reflect inside objects
{
ray[i0].i0=rays;
ray[rays]=ray[i0];
ray[rays].lvl++;
ray[rays].i0=-1;
ray[rays].i1=-1;
ray[rays].pos=pos;
ray[rays].dir=ray[rays].dir-(2.0*t*ray[rays].nor);
ray[rays].n0=ray[i0].n0;
ray[rays].n1=ray[i0].n0;
rays++;
}
#endif
// refract
#ifdef _refract
if (ray[i0].refr>_zero)
{
ray[i0].i1=rays;
ray[rays]=ray[i0];
ray[rays].lvl++;
ray[rays].i0=-1;
ray[rays].i1=-1;
ray[rays].pos=pos;
t=dot(ray[i0].dir,ray[i0].nor);
if (t>0.0) // exit object
{
ray[rays].n0=ray[i0].n0;
ray[rays].n1=n0;
v0=-ray[i0].nor; t=-t;
}
else{ // enter object
ray[rays].n0=n1;
ray[rays].n1=ray[i0].n0;
ray[i0 ].n1=n1;
v0=ray[i0].nor;
}
n1=ray[i0].n0/ray[i0].n1;
tt=1.0-(n1*n1*(1.0-t*t));
if (tt>=0.0)
{
ray[rays].dir=(ray[i0].dir*n1)-(v0*((n1*t)+sqrt(tt)));
rays++;
}
}
#endif
}
else if (i0>0) // ignore last ray if nothing hit
{
ray[i0]=ray[rays-1];
rays--; i0--;
}
}
// back track ray intersections and compute output color col
// lvl is sorted ascending so backtrack from end
for (i0=rays-1;i0>=0;i0--)
{
// directional + ambient light
t=abs(dot(ray[i0].nor,light_dir)*light_idir)+light_iamb;
t*=1.0-ray[i0].refl-ray[i0].refr;
ray[i0].col.rgb*=t;
// reflect
ii=ray[i0].i0;
if (ii>=0) ray[i0].col.rgb+=ray[ii].col.rgb*ray[i0].refl;
// refract
ii=ray[i0].i1;
if (ii>=0) ray[i0].col.rgb+=ray[ii].col.rgb*ray[i0].refr;
}
col=ray[0].col;
// debug prints
#ifdef _debug_print
/*
if (_dbg)
{
txtsiz=0;
txt_decimal(_lvls);
txt[txtsiz]=' '; txtsiz++;
txt_decimal(rays);
txt[txtsiz]=' '; txtsiz++;
txt_decimal(_rays);
txt_print(dbg_x0,dbg_y0);
for (ii=0;ii<rays;ii++)
{
txtsiz=0;
txt_decimal(ray[ii].lvl);
txt_print(dbg_x0,dbg_y0+ii+1);
}
for (ii=0,st=vec2(0.0,0.0),i=j=0;i<fac_num;ii++)
{
c.r=fac_get; // RGBA
txtsiz=0;
txt_decimal(c.r);
txt_print(dbg_x0,dbg_y0+ii+1);
}
if (_txt_col) col=txt_col.rgb;
}
*/
if (_dbg)
{
float x=dbg_x0,y=dbg_y0;
vec3 a=vec3(1.0,2.0,3.0);
vec3 b=vec3(5.0,6.0,7.0);
txtsiz=0; txt_decimal(dot(a,b)); txt_print(x,y); y++;
txtsiz=0; txt_decimal(cross(a,b)); txt_print(x,y); y++;
if (_txt_col) col=txt_col.rgb;
}
#endif
frag_col=vec4(col,1.0);
}
//---------------------------------------------------------------------------
#ifdef _debug_print
//---------------------------------------------------------------------------
void txt_decimal(vec2 v) // print vec2 into txt
{
txt[txtsiz]='('; txtsiz++;
txt_decimal(v.x); txt[txtsiz]=','; txtsiz++;
txt_decimal(v.y); txt[txtsiz]=')'; txtsiz++;
txt[txtsiz]=0; // string terminator
}
//---------------------------------------------------------------------------
void txt_decimal(vec3 v) // print vec3 into txt
{
txt[txtsiz]='('; txtsiz++;
txt_decimal(v.x); txt[txtsiz]=','; txtsiz++;
txt_decimal(v.y); txt[txtsiz]=','; txtsiz++;
txt_decimal(v.z); txt[txtsiz]=')'; txtsiz++;
txt[txtsiz]=0; // string terminator
}
//---------------------------------------------------------------------------
void txt_decimal(vec4 v) // print vec4 into txt
{
txt[txtsiz]='('; txtsiz++;
txt_decimal(v.x); txt[txtsiz]=','; txtsiz++;
txt_decimal(v.y); txt[txtsiz]=','; txtsiz++;
txt_decimal(v.z); txt[txtsiz]=','; txtsiz++;
txt_decimal(v.w); txt[txtsiz]=')'; txtsiz++;
txt[txtsiz]=0; // string terminator
}
//---------------------------------------------------------------------------
void txt_decimal(float x) // print float x into txt
{
int i,j,c; // l is size of string
float y,a;
const float base=10;
// handle sign
if (x<0.0) { txt[txtsiz]='-'; txtsiz++; x=-x; }
else { txt[txtsiz]='+'; txtsiz++; }
// divide to int(x).fract(y) parts of number
y=x; x=floor(x); y-=x;
// handle integer part
i=txtsiz; // start of integer part
for (;txtsiz<_txtsiz;)
{
a=x;
x=floor(x/base);
a-=base*x;
txt[txtsiz]=int(a)+'0'; txtsiz++;
if (x<=0.0) break;
}
j=txtsiz-1; // end of integer part
for (;i<j;i++,j--) // reverse integer digits
{
c=txt[i]; txt[i]=txt[j]; txt[j]=c;
}
// handle fractional part
for (txt[txtsiz]='.',txtsiz++;txtsiz<_txtsiz;)
{
y*=base;
a=floor(y);
y-=a;
txt[txtsiz]=int(a)+'0'; txtsiz++;
if (y<=0.0) break;
}
txt[txtsiz]=0; // string terminator
}
//---------------------------------------------------------------------------
void txt_decimal(int x) // print int x into txt
{
int a,i,j,c; // l is size of string
const int base=10;
// handle sign
if (x<0.0) { txt[txtsiz]='-'; txtsiz++; x=-x; }
else { txt[txtsiz]='+'; txtsiz++; }
// handle integer part
i=txtsiz; // start of integer part
for (;txtsiz<_txtsiz;)
{
a=x;
x/=base;
a-=base*x;
txt[txtsiz]=int(a)+'0'; txtsiz++;
if (x<=0) break;
}
j=txtsiz-1; // end of integer part
for (;i<j;i++,j--) // reverse integer digits
{
c=txt[i]; txt[i]=txt[j]; txt[j]=c;
}
txt[txtsiz]=0; // string terminator
}
//---------------------------------------------------------------------------
void txt_print(float x0,float y0) // print txt at x0,y0 [chars]
{
int i;
float x,y;
// fragment position [chars] relative to x0,y0
x=0.5*(1.0+txt_pos.x)/txt_fxs; x-=x0;
y=0.5*(1.0-txt_pos.y)/txt_fys; y-=y0;
// inside bbox?
if ((x<0.0)||(x>float(txtsiz))||(y<0.0)||(y>1.0)) return;
// get font texture position for target ASCII
i=int(x); // char index in txt
x-=float(i);
i=txt[i];
x+=float(int(i&31));
y+=float(int(i>>5));
x/=32.0; y/=8.0; // offset in char texture
txt_col=texture(txr_font,vec2(x,y));
_txt_col=true;
}
//---------------------------------------------------------------------------
#endif
//---------------------------------------------------------------------------
The code is not optimized yet; I wanted to have the physics working correctly first. Fresnel equations are still not implemented; the refl, refr coefficients of the material are used instead.
Also, you can ignore the debug print stuff (it is encapsulated by #define).
I built a small class for the geometry texture so I can easily set up scene objects. This is how the scene was initialized for the preview:
ray.beg();
// r g b rfl rfr n
ray.add_material(1.0,1.0,1.0,0.3,0.0,_n_glass); ray.add_box ( 0.0, 0.0, 6.0,9.0,9.0,0.1);
ray.add_material(1.0,1.0,1.0,0.1,0.8,_n_glass); ray.add_sphere( 0.0, 0.0, 0.5,0.5);
ray.add_material(1.0,0.1,0.1,0.3,0.0,_n_glass); ray.add_sphere( +2.0, 0.0, 2.0,0.5);
ray.add_material(0.1,1.0,0.1,0.3,0.0,_n_glass); ray.add_box ( -2.0, 0.0, 2.0,0.5,0.5,0.5);
ray.add_material(0.1,0.1,1.0,0.3,0.0,_n_glass);
ray.add_tetrahedron
(
0.0, 0.0, 3.0,
-1.0,-1.0, 4.0,
+1.0,-1.0, 4.0,
0.0,+1.0, 4.0
);
ray.end();
It is important that computed normals face out of the objects, because that is used for detecting inside/outside object crossings.
P.S.
If you're interested, here is my volumetric 3D back ray tracer:
How to best write a voxel engine in C with performance in mind
(here is an archive for low-rep users)
Here is a newer version of this "Mesh" raytracer, supporting hemisphere objects:
Ray tracing a Hemisphere

Qt5 OpenGL Texture Sampling

I'm trying to render a QImage using OpenGL wrapper classes of Qt5 and shader programs. I have the following shaders and a 3.3 core context. I'm also using a VAO for the attributes. However, I keep getting a blank red frame (red is the background clear color that I set). I'm not sure if it is a problem with the MVP matrices or something else. Using a fragment shader which sets the output color to a certain fixed color (black) still resulted in a red frame. I'm totally lost here.
EDIT-1: I also noticed that attempting to get the location of texRGB uniform from the QOpenGLShaderProgram results in -1. But I'm not sure if that has anything to do with the problem I'm having. Uniforms defined in the vertex shader for the MVP matrices have the locations 0 and 1.
Vertex Shader
#version 330
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec2 inTexCoord;
out vec2 vTexCoord;
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
void main(void)
{
gl_Position = projectionMatrix * modelViewMatrix * vec4(inPosition, 1.0);
// pass the input texture coordinates to fragment shader
vTexCoord = inTexCoord;
}
Fragment Shader
#version 330
uniform sampler2DRect texRGB;
in vec2 vTexCoord;
out vec4 fColor;
void main(void)
{
vec3 rgb = texture2DRect(texRGB, vTexCoord.st).rgb;
fColor = vec4(rgb, 0.0);
}
OGLWindow.h
#include <QOpenGLWindow>
#include <QOpenGLFunctions>
#include <QOpenGLBuffer>
#include <QOpenGLShaderProgram>
#include <QOpenGLVertexArrayObject>
#include <QOpenGLTexture>
#include <QDebug>
#include <QString>
class OGLWindow : public QOpenGLWindow, protected QOpenGLFunctions
{
public:
OGLWindow();
~OGLWindow();
// OpenGL Events
void initializeGL();
void resizeGL(int width, int height);
void paintGL();
// a method for cleanup
void teardownGL();
private:
bool isInitialized;
// OpenGL state information
QOpenGLBuffer m_vbo_position;
QOpenGLBuffer m_vbo_index;
QOpenGLBuffer m_vbo_tex_coord;
QOpenGLVertexArrayObject m_object;
QOpenGLShaderProgram* m_program;
QImage m_image;
QOpenGLTexture* m_texture;
QMatrix4x4 m_projection_matrix;
QMatrix4x4 m_model_view_matrix;
};
OGLWindow.cpp
#include "OGLWindow.h"
// vertex data
static const QVector3D vertexData[] = {
QVector3D(-1.0f, -1.0f, 0.0f),
QVector3D( 1.0f, -1.0f, 0.0f),
QVector3D( 1.0f, 1.0f, 0.0f),
QVector3D(-1.0f, 1.0f, 0.0f)
};
// indices
static const GLushort indices[] = {
0, 1, 2,
0, 2, 3
};
OGLWindow::OGLWindow() :
m_vbo_position (QOpenGLBuffer::VertexBuffer),
m_vbo_tex_coord (QOpenGLBuffer::VertexBuffer),
m_vbo_index (QOpenGLBuffer::IndexBuffer),
m_program (nullptr),
m_texture (nullptr),
isInitialized (false)
{
}
OGLWindow::~OGLWindow()
{
makeCurrent();
teardownGL();
}
void OGLWindow::initializeGL()
{
qDebug() << "initializeGL()";
initializeOpenGLFunctions();
isInitialized = true;
QColor backgroundColor(Qt::red);
glClearColor(backgroundColor.redF(), backgroundColor.greenF(), backgroundColor.blueF(), 1.0f);
// load texture image
m_image = QImage(":/images/cube.png");
m_texture = new QOpenGLTexture(QOpenGLTexture::TargetRectangle);
// set nearest-neighbor filtering for texture magnification and minification
m_texture->setMinificationFilter(QOpenGLTexture::Nearest);
m_texture->setMagnificationFilter(QOpenGLTexture::Nearest);
// set the wrap mode
m_texture->setWrapMode(QOpenGLTexture::ClampToEdge);
m_texture->setData(m_image.mirrored(), QOpenGLTexture::MipMapGeneration::DontGenerateMipMaps);
int imgWidth = m_image.width();
int imgHeight = m_image.height();
m_projection_matrix.setToIdentity();
m_projection_matrix.ortho(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
// m_projection_matrix.ortho(0.0, (float) width(), (float) height(), 0.0f, -1.0f, 1.0f);
m_model_view_matrix.setToIdentity();
glViewport(0, 0, width(), height());
m_program = new QOpenGLShaderProgram();
m_program->addShaderFromSourceFile(QOpenGLShader::Vertex, ":/shaders/vshader.glsl");
m_program->addShaderFromSourceFile(QOpenGLShader::Fragment, ":/shaders/fshader.glsl");
m_program->link();
m_program->bind();
// texture coordinates
static const QVector2D textureData[] = {
QVector2D(0.0f, 0.0f),
QVector2D((float) imgWidth, 0.0f),
QVector2D((float) imgWidth, (float) imgHeight),
QVector2D(0.0f, (float) imgHeight)
};
// create Vertex Array Object (VAO)
m_object.create();
m_object.bind();
// create position VBO
m_vbo_position.create();
m_vbo_position.bind();
m_vbo_position.setUsagePattern(QOpenGLBuffer::StaticDraw);
m_vbo_position.allocate(vertexData, 4 * sizeof(QVector3D));
// create texture coordinates VBO
m_vbo_tex_coord.create();
m_vbo_tex_coord.bind();
m_vbo_tex_coord.setUsagePattern(QOpenGLBuffer::StaticDraw);
m_vbo_tex_coord.allocate(textureData, 4 * sizeof(QVector2D));
// create the index buffer
m_vbo_index.create();
m_vbo_index.bind();
m_vbo_index.setUsagePattern(QOpenGLBuffer::StaticDraw);
m_vbo_index.allocate(indices, 6 * sizeof(GLushort));
// enable the two attributes that we have and set their buffers
m_program->enableAttributeArray(0);
m_program->enableAttributeArray(1);
m_program->setAttributeBuffer(0, GL_FLOAT, 0, 3, sizeof(QVector3D));
m_program->setAttributeBuffer(1, GL_FLOAT, 0, 2, sizeof(QVector2D));
// Set modelview-projection matrix
m_program->setUniformValue("projectionMatrix", m_projection_matrix);
m_program->setUniformValue("modelViewMatrix", m_model_view_matrix);
// use texture unit 0 which contains our frame
m_program->setUniformValue("texRGB", 0);
// release (unbind) all
m_object.release();
m_vbo_position.release();
m_vbo_tex_coord.release();
m_vbo_index.release();
m_program->release();
}
void OGLWindow::resizeGL(int width, int height)
{
qDebug() << "resizeGL(): width =" << width << ", height=" << height;
if (isInitialized) {
// avoid division by zero
if (height == 0) {
height = 1;
}
m_projection_matrix.setToIdentity();
m_projection_matrix.perspective(60.0, (float) width / (float) height, -1, 1);
glViewport(0, 0, width, height);
}
}
void OGLWindow::paintGL()
{
// clear
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// render using our shader
m_program->bind();
{
m_texture->bind();
m_object.bind();
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
m_object.release();
}
m_program->release();
}
void OGLWindow::teardownGL()
{
// actually destroy our OpenGL information
m_object.destroy();
m_vbo_position.destroy();
m_vbo_tex_coord.destroy();
m_vbo_index.destroy();
delete m_program;
}
EDIT-2: I'm creating the context as follows:
QSurfaceFormat format;
format.setRenderableType(QSurfaceFormat::OpenGL);
format.setProfile(QSurfaceFormat::CoreProfile);
format.setVersion(3,3);
This line in your fragment shader code is invalid:
vec3 rgb = texture2DRect(texRGB, vTexCoord.st).rgb;
texture2DRect() is not a built-in function.
Since you're using the GLSL 3.30 core profile (core is the default for that version unless compatibility is specified), you should be using the overloaded texture() function, which replaces the older type-specific functions like texture2D() in the core profile.
Functions like texture2D() are still supported in GLSL 3.30 core unless a forward compatible core profile context is used. So depending on how the context is created, you can still use those functions.
However, sampler2DRect was only added as a sampler type in GLSL 1.40 as part of adding rectangular textures to the standard in OpenGL 3.1. At the time, the legacy sampling functions were already marked as deprecated, and only the new texture() function was defined for rectangular textures. This means that texture2DRect() does not exist in any GLSL version.
The correct call is:
vec3 rgb = texture(texRGB, vTexCoord.st).rgb;
Another part of your code that can prevent it from rendering anything is this projection matrix:
m_projection_matrix.perspective(60.0, (float) width / (float) height, -1, 1);
The near and far planes for a standard projection matrix both need to be positive. This call will set up a projection transformation with a "camera" on the origin, looking down the negative z-axis. The near and far values are distances from the origin. A valid call could look like this:
m_projection_matrix.perspective(60.0, (float) width / (float) height, 1.0f, 10.0f);
You will then also need to set the model matrix to transform the coordinates of the object into this range on the negative z-axis. You could for example apply a translation by (0.0f, 0.0f, -5.0f).
Or, if you just want to see something, the quad should also become visible if you simply use the identity matrix for the projection.
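Put together, the matrix setup could look like the following sketch (the exact translation distance is arbitrary, as long as the quad ends up between the near and far planes):
#include <QMatrix4x4>
// Positive near/far planes plus a model-view translation that moves the
// quad into the visible depth range in front of the "camera".
void setupMatrices(QMatrix4x4& projection, QMatrix4x4& modelView,
                   int width, int height)
{
    projection.setToIdentity();
    projection.perspective(60.0f, float(width) / float(height), 1.0f, 10.0f);
    modelView.setToIdentity();
    modelView.translate(0.0f, 0.0f, -5.0f); // push the quad down the -z axis
}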

CgFX geometry shader

How do I use geometry shaders with CgFX? Specifically, how do I specify a geometry shader within the 'technique'?
The listing is below; the vertex and fragment shaders compile fine, but NVIDIA FX Composer 2.5 fires "error C3001: no program defined" after I add the 'GeometryProgram' to the 'technique'.
float4x4 WorldITXf : WorldInverseTranspose;
float4x4 WorldViewProjXf : WorldViewProjection;
float4x4 WorldXf : World;
struct appdata
{
float4 Position : POSITION;
float4 Color : COLOR0;
};
struct vertexOutput
{
float4 Position : POSITION;
float4 Color : COLOR0;
};
TRIANGLE void gshader1(AttribArray<float4> pos : POSITION,
AttribArray<float4> col : COLOR0)
{
// some code will be here;
}
vertexOutput vshader1(appdata IN)
{
vertexOutput OUT;
float4 Po = float4(IN.Position.xyz,1.0f);
OUT.Position = mul(WorldViewProjXf, Po);
OUT.Color = IN.Color;
return OUT;
}
float4 fshader1(vertexOutput IN) : COLOR
{
return IN.Color;
}
technique Tec1 {
pass p0 {
GeometryProgram = compile glslg gshader1();
VertexProgram = compile glslv vshader1();
FragmentProgram = compile glslf fshader1();
}
}
Solved. Something is wrong with the 'glslg' profile; compiling the geometry shader with the 'gp4gp' profile instead (GeometryProgram = compile gp4gp gshader1();) fixes the issue.
