WebGL2: is gl_VertexID affected by the drawArrays offset?

When drawArrays is called with an offset (the "first" argument being non-zero), does gl_VertexID still start at 0, or does it start at the offset value?

Update: this appears to be a bug in ANGLE on Windows. Filed a bug:
https://github.com/KhronosGroup/WebGL/issues/2770

Let's try it:
[...document.querySelectorAll('canvas')].forEach((canvas, ndx) => {
  const vs = `#version 300 es
  void main() {
    gl_Position = vec4(float(gl_VertexID) / 10., 0, 0, 1);
    gl_PointSize = 10.0;
  }`;
  const fs = `#version 300 es
  precision mediump float;
  out vec4 outColor;
  void main() {
    outColor = vec4(1, 0, 0, 1);
  }`;
  const gl = canvas.getContext('webgl2');
  if (!gl) {
    return alert('need webgl2');
  }
  const prg = twgl.createProgram(gl, [vs, fs]);
  gl.useProgram(prg);
  gl.drawArrays(gl.POINTS, ndx * 5, 5);
});
canvas {border: 1px solid black;}
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>
<canvas></canvas>
<canvas></canvas>
Looks like the answer is that it starts at the offset value.

Related

Why is my normal vector always blank? HLSL

I am trying to create a basic diffuse shader with HLSL, following this rbwhitaker tutorial. The shader runs without errors and can display triangles, but it seems that the Normal input to the vertex shader is never given a value, which causes the diffuse lighting to break. I tested this by setting the output color to white and then subtracting the normal, but there was never any variation in color; it stayed white. My shader code is below.
float4x4 World;
float4x4 View;
float4x4 Projection;
float4x4 WorldInverseTranspose;

float3 DiffuseLightDirection = float3(1, 0, 0);
float4 DiffuseColor = float4(1, 1, 1, 1);
float DiffuseIntensity = 1.0;

float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity = 0.1;

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float4 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 Color : COLOR0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    float4 normal = mul(input.Normal, WorldInverseTranspose);
    float lightIntensity = dot(normal, DiffuseLightDirection);
    output.Color = saturate(DiffuseColor * DiffuseIntensity * lightIntensity);
    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return saturate(input.Color + AmbientColor * AmbientIntensity);
}

technique Diffuse
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
EDIT:
I think it is probably because I don't directly pass the normal to the shader, but I don't know how to compute it if there isn't a built-in way.
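There is indeed no built-in way for the vertex shader to derive a normal; normals are typically computed on the CPU from the triangle geometry and stored in the vertex buffer. As a language-neutral sketch of that computation (plain JavaScript here, to match the other snippets on this page, not XNA/MonoGame code; the function names are made up for illustration), a face normal is the normalized cross product of two triangle edges:

```javascript
// Face normal of a triangle: normalize(cross(b - a, c - a)).
// This is the CPU-side computation you would do per face before
// uploading the vertex data; vertex normals can then be averaged
// from the face normals of adjacent triangles.
function sub(p, q) { return [p[0] - q[0], p[1] - q[1], p[2] - q[2]]; }

function cross(u, v) {
  return [
    u[1] * v[2] - u[2] * v[1],
    u[2] * v[0] - u[0] * v[2],
    u[0] * v[1] - u[1] * v[0],
  ];
}

function normalize(v) {
  const len = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
}

function faceNormal(a, b, c) {
  return normalize(cross(sub(b, a), sub(c, a)));
}

// A counter-clockwise triangle in the xy-plane faces +z:
const n = faceNormal([0, 0, 0], [1, 0, 0], [0, 1, 0]);
```

In XNA/MonoGame the same math is available as Vector3.Cross and Vector3.Normalize.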

pass data between shader programs

OK, I'm going to keep this as simple as possible. I want to pass data between shader programs. I'm currently using readPixels to do that, but I suspect it may be slowing operations down, and I'm exploring faster options.
what my program does:
program1 does my rendering to the canvas.
program2 does some wonderful operations in its shaders that I want to pass to program1.
MY QUESTIONS:
Is it possible to use the VBO from program2 and pass that to program1 for rendering? From what it sounds like in the link I give below, you can't share data across contexts, meaning the data from one buffer can't be used for another. But maybe I'm missing something.
I believe the method mentioned in this article would do what I'm looking for by rendering to a canvas and then using texImage2D to update program1 (Copy framebuffer data from one WebGLRenderingContext to another?). Am I correct? If so, would this be faster than using readPixels? (I ask because if texImage2D is about the same speed, I won't bother.)
thanks in advance to anyone who answers.
The normal way to pass data from one shader to the next is to render to a texture (by attaching that texture to a framebuffer). Then pass that texture to the second shader.
function main() {
  const gl = document.querySelector('canvas').getContext('webgl2');
  if (!gl) {
    return alert('need webgl2');
  }
  const vs1 = `#version 300 es
  void main () {
    gl_Position = vec4(0, 0, 0, 1);
    gl_PointSize = 64.0;
  }
  `;
  const fs1 = `#version 300 es
  precision highp float;
  out vec4 myOutColor;
  void main() {
    myOutColor = vec4(fract(gl_PointCoord * 4.), 0, 1);
  }
  `;
  const vs2 = `#version 300 es
  in vec4 position;
  void main () {
    gl_Position = position;
    gl_PointSize = 32.0;
  }
  `;
  const fs2 = `#version 300 es
  precision highp float;
  uniform sampler2D tex;
  out vec4 myOutColor;
  void main() {
    myOutColor = texture(tex, gl_PointCoord);
  }
  `;

  // make 2 programs
  const prg1 = twgl.createProgram(gl, [vs1, fs1]);
  const prg2 = twgl.createProgram(gl, [vs2, fs2]);

  // make a texture
  const tex = gl.createTexture();
  const texWidth = 64;
  const texHeight = 64;
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA8, texWidth, texHeight, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

  // attach texture to framebuffer
  const fb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, tex, 0);

  // render to texture
  gl.viewport(0, 0, texWidth, texHeight);
  gl.useProgram(prg1);
  gl.drawArrays(gl.POINTS, 0, 1);

  // render texture (output of prg1) to canvas using prg2
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
  gl.useProgram(prg2);
  // note: the texture is already bound to texture unit 0
  // and uniforms default to 0 so the texture is already set up
  const posLoc = gl.getAttribLocation(prg2, 'position');
  const numDraws = 12;
  for (let i = 0; i < numDraws; ++i) {
    const a = i / numDraws * Math.PI * 2;
    gl.vertexAttrib2f(posLoc, Math.sin(a) * .7, Math.cos(a) * .7);
    gl.drawArrays(gl.POINTS, 0, 1);
  }
}
main();
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>
<canvas></canvas>
You can also use "transform feedback" to store the outputs of a vertex shader to one or more buffers and of course those buffers can be used as input to another shader.
// this example from
// https://webgl2fundamentals.org/webgl/lessons/resources/webgl-state-diagram.html?exampleId=transform-feedback
const canvas = document.querySelector('canvas');
const gl = canvas.getContext('webgl2');

const genPointsVSGLSL = `#version 300 es
uniform int numPoints;
out vec2 position;
out vec4 color;
#define PI radians(180.0)
void main() {
  float u = float(gl_VertexID) / float(numPoints);
  float a = u * PI * 2.0;
  position = vec2(cos(a), sin(a)) * 0.8;
  color = vec4(u, 0, 1.0 - u, 1);
}
`;
const genPointsFSGLSL = `#version 300 es
void main() {
  discard;
}
`;
const drawVSGLSL = `#version 300 es
in vec4 position;
in vec4 color;
out vec4 v_color;
void main() {
  gl_PointSize = 20.0;
  gl_Position = position;
  v_color = color;
}
`;
const drawFSGLSL = `#version 300 es
precision highp float;
in vec4 v_color;
out vec4 outColor;
void main() {
  outColor = v_color;
}
`;

const createShader = function(gl, type, glsl) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, glsl);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(shader));
  }
  return shader;
};

const createProgram = function(gl, vsGLSL, fsGLSL, outVaryings) {
  const vs = createShader(gl, gl.VERTEX_SHADER, vsGLSL);
  const fs = createShader(gl, gl.FRAGMENT_SHADER, fsGLSL);
  const prg = gl.createProgram();
  gl.attachShader(prg, vs);
  gl.attachShader(prg, fs);
  if (outVaryings) {
    gl.transformFeedbackVaryings(prg, outVaryings, gl.SEPARATE_ATTRIBS);
  }
  gl.linkProgram(prg);
  if (!gl.getProgramParameter(prg, gl.LINK_STATUS)) {
    throw new Error(gl.getProgramInfoLog(prg));
  }
  return prg;
};

const genProg = createProgram(gl, genPointsVSGLSL, genPointsFSGLSL, ['position', 'color']);
const drawProg = createProgram(gl, drawVSGLSL, drawFSGLSL);

const numPointsLoc = gl.getUniformLocation(genProg, 'numPoints');
const posLoc = gl.getAttribLocation(drawProg, 'position');
const colorLoc = gl.getAttribLocation(drawProg, 'color');

const numPoints = 24;

// make a vertex array and attach 2 buffers,
// one for 2D positions, 1 for colors.
const dotVertexArray = gl.createVertexArray();
gl.bindVertexArray(dotVertexArray);

const positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, numPoints * 2 * 4, gl.DYNAMIC_DRAW);
gl.enableVertexAttribArray(posLoc);
gl.vertexAttribPointer(
    posLoc,    // location
    2,         // size (components per iteration)
    gl.FLOAT,  // type to get from buffer
    false,     // normalize
    0,         // stride (bytes to advance each iteration)
    0,         // offset (bytes from start of buffer)
);

const colorBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer);
gl.bufferData(gl.ARRAY_BUFFER, numPoints * 4 * 4, gl.DYNAMIC_DRAW);
gl.enableVertexAttribArray(colorLoc);
gl.vertexAttribPointer(
    colorLoc,  // location
    4,         // size (components per iteration)
    gl.FLOAT,  // type to get from buffer
    false,     // normalize
    0,         // stride (bytes to advance each iteration)
    0,         // offset (bytes from start of buffer)
);

// This is not really needed but if we end up binding anything
// to ELEMENT_ARRAY_BUFFER, say we are generating indexed geometry,
// we'll change dotVertexArray's ELEMENT_ARRAY_BUFFER. By binding
// null here that won't happen.
gl.bindVertexArray(null);

// setup a transform feedback object to write to
// the position and color buffers
const tf = gl.createTransformFeedback();
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, tf);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, positionBuffer);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 1, colorBuffer);
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, null);

// above this line is initialization code
// --------------------------------------
// below is rendering code.
// --------------------------------------

// First compute points into buffers;
// no need to call the fragment shader
gl.enable(gl.RASTERIZER_DISCARD);

// unbind the buffers so we don't get errors.
gl.bindBuffer(gl.TRANSFORM_FEEDBACK_BUFFER, null);
gl.bindBuffer(gl.ARRAY_BUFFER, null);

gl.useProgram(genProg);

// generate numPoints of positions and colors
// into the buffers
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, tf);
gl.beginTransformFeedback(gl.POINTS);
gl.uniform1i(numPointsLoc, numPoints);
gl.drawArrays(gl.POINTS, 0, numPoints);
gl.endTransformFeedback();
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, null);

// turn on using fragment shaders again
gl.disable(gl.RASTERIZER_DISCARD);

// --------------------------------------

// Now draw using the buffers we just computed
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
gl.bindVertexArray(dotVertexArray);
gl.useProgram(drawProg);
gl.drawArrays(gl.POINTS, 0, numPoints);
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>
<canvas></canvas>
Also this answer might be useful.
OK, so what I was trying to do is something like the following (hopefully this helps someone else in the future). Basically I want to have one shader doing the movement calculations (program#2) for another shader which renders (program#1). I want to avoid any vector calculations in JS. This example combines @gman's transform feedback sample and the sample I provided above:
const canvas = document.querySelector('canvas');
var gl = canvas.getContext('webgl2', {preserveDrawingBuffer: true});

// ___________shaders
// ___________vs and fs #1
const genPointsVSGLSL = `#version 300 es
in vec4 aPos;
void main(void) {
  gl_PointSize = 20.0;
  gl_Position = vec4(-0.01 + aPos.x, -0.01 + aPos.y, aPos.zw);
}
`;
const genPointsFSGLSL = `#version 300 es
precision highp float;
out vec4 color;
void main() {
  discard;
  //color = vec4(0.5, 0.5, 0.0, 1.0);
}
`;
// ___________vs and fs #2
const drawVSGLSL = `#version 300 es
in vec4 position;
void main() {
  gl_PointSize = 20.0;
  gl_Position = position;
}
`;
const drawFSGLSL = `#version 300 es
precision highp float;
out vec4 outColor;
void main() {
  outColor = vec4(1.0, 0.0, 0.0, 1.0);
}
`;

// create shaders and programs code
const createShader = function(gl, type, glsl) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, glsl);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(shader));
  }
  return shader;
};
const createProgram = function(gl, vsGLSL, fsGLSL, outVaryings) {
  const vs = createShader(gl, gl.VERTEX_SHADER, vsGLSL);
  const fs = createShader(gl, gl.FRAGMENT_SHADER, fsGLSL);
  const prg = gl.createProgram();
  gl.attachShader(prg, vs);
  gl.attachShader(prg, fs);
  if (outVaryings) {
    gl.transformFeedbackVaryings(prg, outVaryings, gl.SEPARATE_ATTRIBS);
  }
  gl.linkProgram(prg);
  if (!gl.getProgramParameter(prg, gl.LINK_STATUS)) {
    throw new Error(gl.getProgramInfoLog(prg));
  }
  return prg;
};

const genProg = createProgram(gl, genPointsVSGLSL, genPointsFSGLSL, ['gl_Position']);
const drawProg = createProgram(gl, drawVSGLSL, drawFSGLSL, ['gl_Position']);

// program1 location attribute
const positionLoc = gl.getAttribLocation(drawProg, 'position');
// program2 location attribute
const aPosLoc = gl.getAttribLocation(genProg, 'aPos');

var vertizes = [0.8, 0, 0, 1,  0.8, 0.5, 0, 1];
var indizes = vertizes.length / 4;

// create buffers and transform feedback
var bufA = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, bufA);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertizes), gl.DYNAMIC_COPY);
var bufB = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, bufB);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertizes), gl.DYNAMIC_COPY);
var transformFeedback = gl.createTransformFeedback();
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, transformFeedback);

// draw
function draw() {
  gl.useProgram(genProg);
  gl.clear(gl.COLOR_BUFFER_BIT);

  // bind bufA to the input of program#2
  gl.bindBuffer(gl.ARRAY_BUFFER, bufA);
  gl.enableVertexAttribArray(aPosLoc);
  gl.vertexAttribPointer(aPosLoc, 4, gl.FLOAT, false, 0, 0);

  // run movement calculation code, aka program#2 (calculate movement
  // location and hide the results using RASTERIZER_DISCARD)
  gl.enable(gl.RASTERIZER_DISCARD);
  gl.drawArrays(gl.POINTS, 0, indizes);
  gl.disable(gl.RASTERIZER_DISCARD);
  gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, bufB);

  // move dot using rendering code and the position calculated
  // previously, which is still stored in bufA
  gl.useProgram(drawProg);
  gl.bindBuffer(gl.ARRAY_BUFFER, bufA);
  gl.enableVertexAttribArray(positionLoc);
  gl.vertexAttribPointer(positionLoc, 4, gl.FLOAT, false, 0, 0);
  gl.drawArrays(gl.POINTS, 0, indizes);

  gl.useProgram(genProg);
  // run transform feedback
  gl.beginTransformFeedback(gl.POINTS);
  gl.drawArrays(gl.POINTS, 0, indizes);
  gl.endTransformFeedback();
  gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, null);

  // switch bufA and bufB in preparation for the next draw call
  var t = bufA;
  bufA = bufB;
  bufB = t;
}
setInterval(draw, 100);
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>
<canvas></canvas>
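The update loop above is a ping-pong: each frame reads positions from bufA, writes the shifted positions into bufB via transform feedback, then swaps the two buffers so the next frame reads the previous frame's output. The buffer choreography can be sketched without any WebGL at all (the -0.01 step mirrors the genPointsVSGLSL shader above; this is a CPU-side sketch, not the actual GPU path):

```javascript
// CPU-side sketch of the ping-pong pattern: each "frame" reads bufA,
// writes the updated positions into bufB, then swaps the two buffers.
function step(positions) {
  // mirrors `gl_Position = vec4(-0.01 + aPos.x, -0.01 + aPos.y, aPos.zw)`
  return positions.map(([x, y, z, w]) => [x - 0.01, y - 0.01, z, w]);
}

let bufA = [[0.8, 0, 0, 1], [0.8, 0.5, 0, 1]]; // current positions
let bufB = bufA.map(p => p.slice());           // scratch buffer

function frame() {
  bufB = step(bufA);            // "transform feedback" writes into bufB
  // ... draw using bufA here ...
  [bufA, bufB] = [bufB, bufA];  // swap for the next frame
}

frame();
frame();
// after two frames each point has moved by -0.02 in x and y
```

The swap is what makes it safe: the GPU (or here, step) never reads and writes the same buffer in one pass.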

QT Scene Graph cannot draw a grid

I want to use the Qt scene graph to draw a grid. I haven't succeeded after a few days of research. Please help me, thank you!
The issues are:
Why can't I see the results?
Where do I call glViewport? Or is there some other way?
I have stepped through the code and found that Qt calls renderer->setViewportRect(rect) in QQuickWindowPrivate::renderSceneGraph();
but the scene graph uses the entire window as the drawing area instead of the custom QQuickItem object.
I recalculated the shader matrix, but it didn't work, and I think it is ugly.
source code
// grid_item.h
class GridItem : public QQuickItem
{
    Q_OBJECT
public:
    explicit GridItem(QQuickItem *parent = nullptr);
protected:
    QSGNode *updatePaintNode(QSGNode *oldNode, UpdatePaintNodeData *updatePaintNodeData) Q_DECL_OVERRIDE;
};

// grid_item.cpp
GridItem::GridItem(QQuickItem *parent) : QQuickItem(parent)
{
    setFlag(ItemHasContents, true);
}

QSGNode *GridItem::updatePaintNode(QSGNode *oldNode, UpdatePaintNodeData *)
{
    QRectF rect = boundingRect();
    if (rect.isEmpty()) {
        delete oldNode;
        return nullptr;
    }
    QSGGeometryNode *node = nullptr;
    QSGGeometry *geometry = nullptr;
    GridItemMaterial *material = nullptr;
    if (!oldNode)
    {
        node = new QSGGeometryNode;
        node->setFlags(QSGNode::OwnsGeometry | QSGNode::OwnsMaterial, true);
        geometry = new QSGGeometry(QSGGeometry::defaultAttributes_Point2D(), 0);
        geometry->setDrawingMode(QSGGeometry::DrawLines);
        node->setGeometry(geometry);
        material = new GridItemMaterial;
        material->setFlag(QSGMaterial::RequiresDeterminant, true);
        node->setMaterial(material);
    }
    else
    {
        node = static_cast<QSGGeometryNode *>(oldNode);
        geometry = node->geometry();
        material = static_cast<GridItemMaterial *>(node->material());
    }
    int m_xAxisSegment {10};
    int m_yAxisSegment {10};
    const int totalVertices = (m_xAxisSegment+1)*2 + (m_yAxisSegment+1)*2;
    if (geometry->vertexCount() != totalVertices)
    {
        geometry->allocate(totalVertices);
        QSGGeometry::Point2D *vertices = geometry->vertexDataAsPoint2D();
        for (int x=0; x<=m_xAxisSegment; x++)
        {
            float xPos = 1.0f*x/m_xAxisSegment;
            (*vertices++).set(xPos, 0.0f);
            (*vertices++).set(xPos, 1.0f);
        }
        for (int y=0; y<=m_yAxisSegment; y++)
        {
            float yPos = 1.0f*y/m_yAxisSegment;
            (*vertices++).set(0.0f, yPos);
            (*vertices++).set(1.0f, yPos);
        }
        node->markDirty(QSGNode::DirtyGeometry);
    }
    // calculate matrix for shader
    ConvertParameter param;
    param.windowWidth = 640;
    param.windowHeight = 480;
    param.contentX = 100;
    param.contentY = 100;
    param.contentWidth = 200;
    param.contentHeight = 200;
    param.glX = 0;
    param.glY = 0;
    param.glWidth = 1.0f;
    param.glHeight = 1.0f;
    material->m_convertParameter = param;
    return node;
}

// grid_item_material.h
class GridItemMaterial : public QSGMaterial
{
public:
    QSGMaterialType *type() const Q_DECL_OVERRIDE;
    QSGMaterialShader *createShader() const Q_DECL_OVERRIDE;
    ConvertParameter m_convertParameter;
};

// grid_item_material.cpp
QSGMaterialType *GridItemMaterial::type() const
{
    static QSGMaterialType type;
    return &type;
}

QSGMaterialShader *GridItemMaterial::createShader() const
{
    return new GridItemMaterialShader;
}

// grid_item_material_shader.h
class GridItemMaterialShader : public QSGMaterialShader
{
public:
    GridItemMaterialShader();
    const char *const *attributeNames() const Q_DECL_OVERRIDE;
    void updateState(const RenderState &state, QSGMaterial *newMaterial, QSGMaterial *oldMaterial) Q_DECL_OVERRIDE;
protected:
    void initialize() Q_DECL_OVERRIDE;
    QMatrix4x4 getConvertMatrix(const ConvertParameter &param);
private:
    int m_id_mvpMatrix {-1};
    int m_id_gridlineColor {-1};
};

// grid_item_material_shader.cpp
GridItemMaterialShader::GridItemMaterialShader()
{
    setShaderSourceFile(QOpenGLShader::Vertex, ":/shaders/gridlines.vert");
    setShaderSourceFile(QOpenGLShader::Fragment, ":/shaders/gridlines.frag");
}

const char * const *GridItemMaterialShader::attributeNames() const
{
    static char const *const names[] = { "Vertex", 0 };
    return names;
}

void GridItemMaterialShader::updateState(const RenderState &state, QSGMaterial *newMaterial, QSGMaterial *)
{
    GridItemMaterial *material = static_cast<GridItemMaterial *>(newMaterial);
    QMatrix4x4 matrix = getConvertMatrix(material->m_convertParameter);
    program()->setUniformValue(m_id_mvpMatrix, matrix);
    program()->setUniformValue(m_id_gridlineColor, QColor::fromRgbF(1, 0, 0, 1));
}

void GridItemMaterialShader::initialize()
{
    m_id_mvpMatrix = program()->uniformLocation("mvpMatrix");
    m_id_gridlineColor = program()->uniformLocation("gridlineColor");
}

QMatrix4x4 GridItemMaterialShader::getConvertMatrix(const ConvertParameter &param)
{
    QMatrix4x4 model1;
    // convert window to (-1, -1)..(+1, +1)
    model1.setToIdentity();
    model1.translate(-1, -1, 0);
    model1.scale(2.0f/param.windowWidth, 2.0f/param.windowHeight, 1.0f);
    // left-bottom
    QVector4D v3(param.contentX, param.windowHeight-param.contentY-param.contentHeight, 0, 1);
    v3 = model1 * v3;
    // right-top
    QVector4D v4(param.contentX+param.contentWidth, param.windowHeight-param.contentY, 0, 1);
    v4 = model1 * v4;
    // content area should be in (-1, -1)..(+1, +1)
    float width = v4.x() - v3.x();
    float height = v4.y() - v3.y();
    QMatrix4x4 model2;
    model2.setToIdentity();
    model2.translate(v3.x(), v3.y(), 0);
    model2.scale(width/param.glWidth, height/param.glHeight, 1);
    model2.translate(-param.glX, -param.glY, 0);
    return model2;
}

// grid_convert_parameter.h
struct ConvertParameter
{
    int windowWidth = 640;
    int windowHeight = 480;
    int contentX = 100;
    int contentY = 100;
    int contentWidth = 200;
    int contentHeight = 200;
    float glX = 3;
    float glY = 3;
    float glWidth = 4.0f;
    float glHeight = 4.0f;
};

// main.cpp
int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    qmlRegisterType<GridItem>("io.draw", 1, 0, "GridItem");
    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    QQuickWindow *window = static_cast<QQuickWindow *>(engine.rootObjects().first());
    QSurfaceFormat format = window->requestedFormat();
    format.setProfile(QSurfaceFormat::CoreProfile);
    format.setVersion(3, 3);
    window->setFormat(format);
    window->show();
    return app.exec();
}

// main.qml
import QtQuick 2.9
import QtQuick.Controls 2.4
import io.draw 1.0

ApplicationWindow {
    visible: true
    width: 640
    height: 480
    title: qsTr("Hello World")

    GridItem {
        x: 100
        y: 100
        width: 200
        height: 200
    }
}

// gridlines.vert
#version 330 core
uniform mat4 mvpMatrix;
layout(location = 0) in vec2 Vertex;
void main(void)
{
    gl_Position = mvpMatrix * vec4(Vertex, 0.0, 1.0);
}

// gridlines.frag
#version 330 core
uniform vec4 gridlineColor;
layout(location = 0) out vec4 fragColor;
void main(void)
{
    fragColor = gridlineColor;
}
I have also made a simple change based on the Qt OpenGL demo.
class OpenGLWindow : public QWindow, protected QOpenGLFunctions_3_3_Core
It does almost the same thing, except that the results are output directly to the entire window (but this is not what I want).
Another difference is that the transformation matrix changed:
QMatrix4x4 model, view, projection;
projection.ortho(0, 1, 0, 1, -10, 10);
m_program->setUniformValue(m_matrixUniform, projection*view*model);
It works properly...
Because it involves OpenGL and Qt Scene Graph, I don't know what went wrong.
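The core of getConvertMatrix above is mapping the item's pixel rectangle (y-down, origin top-left) into GL clip space (-1..+1, y-up). As a sanity check of just that math, the same mapping can be written out directly; this is a plain-math sketch with a made-up helper name, not Qt code, and it mirrors the model1 transform plus the y-flip the question applies when building v3 and v4:

```javascript
// Map a pixel coordinate (y-down, origin top-left, as in QQuickItem
// coordinates) into GL normalized device coordinates (-1..+1, y-up).
function toNDC(x, y, windowWidth, windowHeight) {
  return [
    -1 + 2 * x / windowWidth,
    -1 + 2 * (windowHeight - y) / windowHeight, // flip y
  ];
}

// The 200x200 item at (100, 100) in a 640x480 window:
const bottomLeft = toNDC(100, 100 + 200, 640, 480);
const topRight = toNDC(100 + 200, 100, 640, 480);
```

If the grid's unit square is then scaled and translated into that NDC rectangle (what model2 does), the lines should land inside the item; if they don't, the remaining suspect is the viewport/matrix state the scene graph itself sets up.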

OpenGL Does Not Render Triangle

I am following this tutorial with a few modifications and have got this code:
#define GLSL(src) "#version 330 core\n" #src

void MainWindow::initializeGL() {
    glClearColor(0, 0, 0, 1);

    // Generate buffers
    GLfloat verticies[] = {
        +0.0f, +1.0f, +0.0f,
        -1.0f, -1.0f, +0.0f,
        +1.0f, -1.0f, +0.0f,
    };
    GLuint vertexBufferID;
    glGenBuffers(1, &vertexBufferID);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verticies), verticies, GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);

    // Generate shaders
    const char *vertexShaderSrc = GLSL(
        layout(location = 0) in vec3 pos;
        void main() {
            gl_Position.xyz = pos;
            gl_Position.w = 1.0;
        }
    );
    GLuint vertexShaderID = createGLShader(GL_VERTEX_SHADER, vertexShaderSrc);
    const GLchar *fragmentShaderSrc = GLSL(
        out vec4 color;
        void main() {
            color = vec4(0.0, 1.0, 0.0, 1.0);
        }
    );
    GLuint fragmentShaderID = createGLShader(GL_FRAGMENT_SHADER, fragmentShaderSrc);
    GLuint programID = glCreateProgram();
    glAttachShader(programID, vertexShaderID);
    glAttachShader(programID, fragmentShaderID);
    glLinkProgram(programID);
    glUseProgram(programID);
}

void MainWindow::paintGL() {
    //glViewport(0, 0, width(), height());
    glClear(GL_COLOR_BUFFER_BIT);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    //glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
}

GLuint MainWindow::createGLShader(GLenum type, const GLchar* src) {
    GLuint shaderID = glCreateShader(type);
    glShaderSource(shaderID, 1, &src, 0);
    glCompileShader(shaderID);
    GLint vertexCompileStatus;
    glGetShaderiv(shaderID, GL_COMPILE_STATUS, &vertexCompileStatus);
    if (vertexCompileStatus != GL_TRUE) {
        GLint infoLogLength;
        glGetShaderiv(shaderID, GL_INFO_LOG_LENGTH, &infoLogLength);
        GLchar buffer[infoLogLength];
        glGetShaderInfoLog(shaderID, infoLogLength, 0, buffer);
        qDebug(buffer);
    }
    return shaderID;
}
This is all contained in a QGLWidget. However, when I run this code I just get a black screen. What is going wrong? I don't get an error message, so the shaders are compiling.
I set up the QGLWidget:
#include "mainwindow.h"
#include <QApplication>
#include <QGLFormat>

int main(int argc, char *argv[]) {
    QApplication a(argc, argv);
    QGLFormat glFormat;
    glFormat.setVersion(3, 3);
    glFormat.setProfile(QGLFormat::CoreProfile);
    MainWindow w(glFormat);
    w.show();
    return a.exec();
}
Staying with "pure" OpenGL code, you need (at least) a Vertex Array Object. That object needs to be bound when you configure the vertex arrays, and every time you draw from the aforementioned arrays.
So, before the calls to gl*VertexAttribArray, create and bind the VAO. Add a
GLuint m_vao;
member to your class. Then in initializeGL:
glGenVertexArrays(1, &m_vao);
glBindVertexArray(m_vao);
// now configure the arrays:
glEnableVertexAttribArray...
glVertexAttribPointer...
// now release the VAO and move on
glBindVertexArray(0);
Then in paintGL we need the VAO again:
glBindVertexArray(m_vao);
glDrawArrays(...);
glBindVertexArray(0);
And now your code with Qt 5 OpenGL enablers (didn't try to compile it, but you can get the idea). You tell me which one is more readable and less error prone.
#define GLSL(src) "#version 330 core\n" #src

void MainWindow::initializeGL() {
    glClearColor(0, 0, 0, 1);

    // Generate buffers
    GLfloat verticies[] = {
        +0.0f, +1.0f, +0.0f,
        -1.0f, -1.0f, +0.0f,
        +1.0f, -1.0f, +0.0f,
    };
    m_vertexBuffer = new QOpenGLBuffer(QOpenGLBuffer::VertexBuffer);
    m_vertexBuffer->create();
    m_vertexBuffer->setUsagePattern(QOpenGLBuffer::StaticDraw);
    m_vertexBuffer->bind();
    m_vertexBuffer->allocate(verticies, sizeof(verticies));
    m_vertexBuffer->release();

    // Generate shaders
    const char *vertexShaderSrc = GLSL(
        layout(location = 0) in vec3 pos;
        void main() {
            gl_Position.xyz = pos;
            gl_Position.w = 1.0;
        }
    );
    const GLchar *fragmentShaderSrc = GLSL(
        out vec4 color;
        void main() {
            color = vec4(0.0, 1.0, 0.0, 1.0);
        }
    );
    m_program = new QOpenGLShaderProgram;
    m_program->addShaderFromSourceCode(QOpenGLShader::Vertex, vertexShaderSrc);
    m_program->addShaderFromSourceCode(QOpenGLShader::Fragment, fragmentShaderSrc);
    m_program->link();
    // error checking missing from the last three calls. if they return false, check log()

    m_vao = new QOpenGLVertexArrayObject;
    m_vao->create();
    m_vao->bind();
    m_program->bind();
    m_vertexBuffer->bind();
    m_program->enableAttributeArray("pos");
    m_program->setAttributeBuffer("pos", GL_FLOAT, 0, 3);
    m_vertexBuffer->release();
    m_program->release();
    m_vao->release();
}

void MainWindow::paintGL() {
    glClear(GL_COLOR_BUFFER_BIT);
    m_vao->bind();
    m_program->bind();
    glDrawArrays(GL_TRIANGLES, 0, 3);
    m_program->release();
    m_vao->release();
}

QT Quick Designer Custom Components are Blank?

I'm doing some testing of Qt Quick to see if I can use it as a GUI replacement for the old Ui files. I noticed in some of the examples that custom components will populate the library view. I managed to do that (apparently they must be in a subdirectory of the QML file that uses them?). However, these components do not render in the Qt Quick design window; there is actually nothing to grab or manipulate. Upon running the program, they render correctly.
Does anyone have a solution? My source is below
import QtQuick 1.0
import Chips 1.0

Item {
    width: 100
    height: 62
    Chip
    {
    }
}
chip.cpp
#include "Chip.h"
#include <QtGui>

Chip::Chip(QDeclarativeItem *parent)
    : QDeclarativeItem(parent)
{
    x = 0;
    y = 0;
    color = QColor(0, 200, 0);
    setFlags(ItemIsSelectable | ItemIsMovable);
    setFlag(QGraphicsItem::ItemHasNoContents, false);
    setAcceptsHoverEvents(true);
}

//Chip::Chip(const QColor &color, int x, int y, QDeclarativeItem *parent)
//    : QDeclarativeItem(parent)
//{
//    this->x = x;
//    this->y = y;
//    this->color = color;
//    setZValue((x + y) % 2);
//    setFlags(ItemIsSelectable | ItemIsMovable);
//    setFlag(QGraphicsItem::ItemHasNoContents, false);
//    setAcceptsHoverEvents(true);
//}

QRectF Chip::boundingRect() const
{
    return QRectF(0, 0, 110, 70);
}

QPainterPath Chip::shape() const
{
    QPainterPath path;
    path.addRect(14, 14, 82, 42);
    return path;
}

void Chip::paint(QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget)
{
    Q_UNUSED(widget);
    QColor fillColor = (option->state & QStyle::State_Selected) ? color.dark(150) : color;
    if (option->state & QStyle::State_MouseOver)
        fillColor = fillColor.light(125);
    const qreal lod = option->levelOfDetailFromTransform(painter->worldTransform());
    if (lod < 0.2) {
        if (lod < 0.125) {
            painter->fillRect(QRectF(0, 0, 110, 70), fillColor);
            return;
        }
        QBrush b = painter->brush();
        painter->setBrush(fillColor);
        painter->drawRect(13, 13, 97, 57);
        painter->setBrush(b);
        return;
    }
    QPen oldPen = painter->pen();
    QPen pen = oldPen;
    int width = 0;
    if (option->state & QStyle::State_Selected)
        width += 2;
    pen.setWidth(width);
    QBrush b = painter->brush();
    painter->setBrush(QBrush(fillColor.dark(option->state & QStyle::State_Sunken ? 120 : 100)));
    painter->drawRect(QRect(14, 14, 79, 39));
    painter->setBrush(b);
    if (lod >= 1) {
        painter->setPen(QPen(Qt::gray, 1));
        painter->drawLine(15, 54, 94, 54);
        painter->drawLine(94, 53, 94, 15);
        painter->setPen(QPen(Qt::black, 0));
    }
    // Draw text
    if (lod >= 2) {
        QFont font("Times", 10);
        font.setStyleStrategy(QFont::ForceOutline);
        painter->setFont(font);
        painter->save();
        painter->scale(0.1, 0.1);
        painter->drawText(170, 180, QString("Model: VSC-2000 (Very Small Chip) at %1x%2").arg(x).arg(y));
        painter->drawText(170, 200, QString("Serial number: DLWR-WEER-123L-ZZ33-SDSJ"));
        painter->drawText(170, 220, QString("Manufacturer: Chip Manufacturer"));
        painter->restore();
    }
    // Draw lines
    QVarLengthArray<QLineF, 36> lines;
    if (lod >= 0.5) {
        for (int i = 0; i <= 10; i += (lod > 0.5 ? 1 : 2)) {
            lines.append(QLineF(18 + 7 * i, 13, 18 + 7 * i, 5));
            lines.append(QLineF(18 + 7 * i, 54, 18 + 7 * i, 62));
        }
        for (int i = 0; i <= 6; i += (lod > 0.5 ? 1 : 2)) {
            lines.append(QLineF(5, 18 + i * 5, 13, 18 + i * 5));
            lines.append(QLineF(94, 18 + i * 5, 102, 18 + i * 5));
        }
    }
    if (lod >= 0.4) {
        const QLineF lineData[] = {
            QLineF(25, 35, 35, 35),
            QLineF(35, 30, 35, 40),
            QLineF(35, 30, 45, 35),
            QLineF(35, 40, 45, 35),
            QLineF(45, 30, 45, 40),
            QLineF(45, 35, 55, 35)
        };
        lines.append(lineData, 6);
    }
    painter->drawLines(lines.data(), lines.size());
    // Draw red ink
    if (stuff.size() > 1) {
        QPen p = painter->pen();
        painter->setPen(QPen(Qt::red, 1, Qt::SolidLine, Qt::RoundCap, Qt::RoundJoin));
        painter->setBrush(Qt::NoBrush);
        QPainterPath path;
        path.moveTo(stuff.first());
        for (int i = 1; i < stuff.size(); ++i)
            path.lineTo(stuff.at(i));
        painter->drawPath(path);
        painter->setPen(p);
    }
}

void Chip::mousePressEvent(QGraphicsSceneMouseEvent *event)
{
    QGraphicsItem::mousePressEvent(event);
    update();
}

void Chip::mouseMoveEvent(QGraphicsSceneMouseEvent *event)
{
    if (event->modifiers() & Qt::ShiftModifier) {
        stuff << event->pos();
        update();
        return;
    }
    QGraphicsItem::mouseMoveEvent(event);
}

void Chip::mouseReleaseEvent(QGraphicsSceneMouseEvent *event)
{
    QGraphicsItem::mouseReleaseEvent(event);
    update();
}

QColor Chip::getColor() const
{
    return color;
}

int Chip::getX() const
{
    return x;
}

int Chip::getY() const
{
    return y;
}

void Chip::setColor(const QColor &color)
{
    this->color = color;
}

void Chip::setX(const int &x)
{
    this->x = x;
}

void Chip::setY(const int &y)
{
    this->y = y;
}
chip.h
#ifndef CHIP_H
#define CHIP_H

#include <QtGui/QColor>
#include <QDeclarativeItem>

class Chip : public QDeclarativeItem
{
    Q_OBJECT
    Q_PROPERTY(int x READ getX WRITE setX)
    Q_PROPERTY(int y READ getY WRITE setY)
    Q_PROPERTY(QColor color READ getColor WRITE setColor)
public:
    Chip(QDeclarativeItem *parent = 0);
    Chip(const QColor &color, int x, int y);
    QRectF boundingRect() const;
    QColor getColor() const;
    int getX() const;
    int getY() const;
    void setColor(const QColor &color);
    void setX(const int &x);
    void setY(const int &y);
    QPainterPath shape() const;
    void paint(QPainter *painter, const QStyleOptionGraphicsItem *item, QWidget *widget);
protected:
    void mousePressEvent(QGraphicsSceneMouseEvent *event);
    void mouseMoveEvent(QGraphicsSceneMouseEvent *event);
    void mouseReleaseEvent(QGraphicsSceneMouseEvent *event);
private:
    int x, y;
    QColor color;
    QList<QPointF> stuff;
};

#endif
In my investigation of this issue, I learned that you can add custom widgets to Qt Designer. I might have to check that out as well before I make my decision. Any help will be appreciated, thanks.
UPDATE: Dec/2015
You have to mark it as 'supported'
The documentation clearly states that you have to explicitly mark it as supported, otherwise you will get blank boxes:
The items of an unsupported plugin are not painted in the Qt Quick
Designer, but they are still available as empty boxes and the
properties can be edited.
To do this you must build it as a plugin and then include the keyword designersupported in a qmldir file in the same folder your plugin shared object/DLL is placed. This is a whitelist, and Qt Creator's QML puppet will run your code, so be sure not to perform long operations or crash; otherwise you will crash the puppet and make the designer useless.
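For reference, a minimal qmldir for such a plugin might look like the following (the module name matches the `import Chips 1.0` above; the plugin library name is a placeholder for whatever your build produces):

```
module Chips
plugin chipsplugin
designersupported
```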
Old and outdated answer below
The same thing happens to me. This seems to be a bug in Qt Creator. I dug into the code of the Rectangle QML item and found no special instructions, so Qt Creator must have it hard-coded or something.
See this: http://developer.qt.nokia.com/forums/viewthread/2555
They are blank; custom components created with C++ can't be shown, but they work fine when running your app. There is a bug filed against Qt for allowing that, but it seems to be low priority now as it requires internal Qt changes.
Update:
Just recently, after some discussion, a task was created to implement a mechanism for shadowing custom components, and it was marked P1, nice! See this bug report; they have even added a work-in-progress patch you can test. That's a patch for Qt, so we might (or might not) be close to having it in Qt 5.4, after which Qt Creator will have that support. Remember Qt now has a release cycle of 6 months, so it might not be that far off. Please register and vote for it if you need it.
