I use two QGLShaderProgram objects for processing the texture.
ShaderProgram1->bind(); // QGLShaderProgram
ShaderProgram2->bind();
glBegin(GL_TRIANGLE_STRIP);
...
glEnd();
ShaderProgram1->release();
ShaderProgram2->release();
The texture should be processed with ShaderProgram1 and then with ShaderProgram2. But calling ShaderProgram2->bind() automatically triggers ShaderProgram1->release(), and only one shader works. How do I bind both shaders?
You don't.
Unless these are separate shader stages (and even then they don't work that way), each rendering operation applies a single set of shaders to the rendered primitive. That means a single Vertex Shader, followed by any Tessellation Shaders, optionally followed by a single Geometry Shader, followed by a single Fragment Shader.
If you want to daisy-chain shaders, you have to do that within the shaders themselves.
I know this is a very old question, but I just wanted to add my two cents here for anyone who might stumble upon this.
If you want to run multiple shaders that use the same texture, you should set the active texture at the start of the update loop. Then run the shader programs one at a time: one has to complete before the other may begin. So instead, it would look like this:
ShaderProgram1->bind();
...
ShaderProgram1->release();
ShaderProgram2->bind();
...
ShaderProgram2->release();
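In case it helps, here is a minimal two-pass sketch of that idea, assuming the intermediate result is kept in an offscreen QGLFramebufferObject (fbo, sourceTextureId, width, height, the "source" uniform name and drawFullScreenQuad() are placeholder names, not from the original code):
QGLFramebufferObject fbo(width, height);

// Pass 1: run ShaderProgram1 over the source texture, rendering into the FBO.
fbo.bind();
ShaderProgram1->bind();
glBindTexture(GL_TEXTURE_2D, sourceTextureId);
drawFullScreenQuad();              // the GL_TRIANGLE_STRIP quad from the question
ShaderProgram1->release();
fbo.release();

// Pass 2: run ShaderProgram2 over the result of pass 1.
ShaderProgram2->bind();
ShaderProgram2->setUniformValue("source", 0);   // "source" = whatever the sampler uniform is called
glBindTexture(GL_TEXTURE_2D, fbo.texture());
drawFullScreenQuad();
ShaderProgram2->release();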
In Qt3D 5.9, I am using scene 3D to render an .obj file and display it. I also have enabled object picking, so when a user selects part of the object, I know exactly where on the model they clicked. What I would like to do is add color to that part of the obj/mesh that the user clicked on. To be more specific, for the 'y' value that the user clicked on, I want to color a line all the way around the object model on that 'y' value. I've looked around online and can't find anything to help. Unfortunately I'm not familiar when it comes to 3D objects, meshes, etc. How can I color just part of a mesh in Qt 3D 5.9?
Since you managed to load your own meshes, I suppose you understand how the GeometryRenderer and Geometry QML components work. The Geometry component takes Attributes that define (for instance) the positions and normals of your object. The names you give to these attributes allow you to retrieve them in custom shaders. You can add an Attribute to your geometry that defines a buffer in which you store vertex colors instead of positions and normals.
Then you will need a custom Material (if you don't already have one, try reading the QML docs to understand how it works; I know the docs are not really complete, but they are a good start).
This custom Material will allow you to call your own shader, in which you can retrieve the color of a vertex the same way you retrieve its position.
So to conclude, since you want to color just a part of the vertices, you will need:
A buffer containing all the colors of all vertices of your mesh
A Geometry attribute that tells how to read this buffer
A script that updates the buffer on selection
A custom material and a custom shader that uses the color buffer to paint the object
This is a not-so-easy thing to accomplish, but it is possible and should give you a better understanding of how Geometry, Materials and shaders work in QML.
If you are not familiar with these, I would suggest that you first put aside the per-vertex color buffer and try to make a custom shader that paints your whole object red. From there you will be able to go on and find out how to pass per-vertex colors to your shader.
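For reference, a rough C++ sketch of points 1 and 2 (the QML Attribute, Buffer and Geometry elements are thin wrappers over the Qt3DRender C++ classes used here; the function name, vertexCount and the "vertexColor" attribute name are illustrative choices, not anything mandated by Qt):
#include <Qt3DRender/QAttribute>
#include <Qt3DRender/QBuffer>
#include <Qt3DRender/QGeometry>

// Attach a per-vertex RGB color buffer to an existing geometry.
void addColorAttribute(Qt3DRender::QGeometry *geometry, int vertexCount)
{
    // One RGB float triple per vertex, initially all zeros (nothing highlighted).
    QByteArray colorData(vertexCount * 3 * sizeof(float), 0);

    auto *colorBuffer = new Qt3DRender::QBuffer(Qt3DRender::QBuffer::VertexBuffer, geometry);
    colorBuffer->setData(colorData);

    auto *colorAttribute = new Qt3DRender::QAttribute(geometry);
    colorAttribute->setName(QStringLiteral("vertexColor"));   // name used to fetch it in the shader
    colorAttribute->setVertexBaseType(Qt3DRender::QAttribute::Float);
    colorAttribute->setVertexSize(3);
    colorAttribute->setAttributeType(Qt3DRender::QAttribute::VertexAttribute);
    colorAttribute->setBuffer(colorBuffer);
    colorAttribute->setByteStride(3 * sizeof(float));
    colorAttribute->setCount(vertexCount);
    geometry->addAttribute(colorAttribute);

    // On selection (point 3), overwrite the colors of the vertices forming the
    // ring around the picked 'y' value and call setData() again with the new bytes.
}
In the custom material's vertex shader, an input attribute with the matching name (vertexColor here) then receives these values, which you can forward to the fragment shader (point 4).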
Good luck
I have a QAbstractListModel with custom objects as items. Each object has a QImage that is loaded from a database. I use a ListView in QML to visualize it, but I do not see any means to represent a QImage in the delegate. The Image primitive seems to accept only URLs.
Is the only way to show QImages to create a QQuickImageProvider with some custom scheme of one URL per element (which looks like total overkill)?
I think QQuickImageProvider is the proper way.
Also, I think you can only use the word 'overkill' if you know exactly how the Qt internals work. Otherwise it's just guessing.
AFAIK there is a complex caching system for images (and other data) underneath, so once an image pixmap is loaded (and doesn't change), data retrieval is immediate. So there is no overkill at all, since in any case you need to load those QImages at some point, but just once.
I believe a QQuickImageProvider provides pointers to the cached data, rather than the whole rasterized data every time. Moreover, blitting operations are nowadays performed with hardware acceleration, so it's a single operation taking a fraction of a millisecond.
In other words you end up with:
give me the image with url "image://xyz"
Qt looks it up in the cache and returns the data pointer, or performs a full load of the image if it is not found
the QML renderer passes the data array to OpenGL
a single blit operation (microseconds) and you have it on screen
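For completeness, a minimal sketch of such a provider, assuming a provider id of "modelimages" and a caller-supplied lookup from row index to QImage (both made up for illustration; the provider class itself is standard QQuickImageProvider usage):
#include <QQuickImageProvider>
#include <QImage>
#include <functional>

class ModelImageProvider : public QQuickImageProvider
{
public:
    // 'lookup' maps a row index to the QImage held by the model (hypothetical hook).
    explicit ModelImageProvider(std::function<QImage(int)> lookup)
        : QQuickImageProvider(QQuickImageProvider::Image), m_lookup(std::move(lookup)) {}

    QImage requestImage(const QString &id, QSize *size, const QSize &requestedSize) override
    {
        // 'id' is whatever follows "image://modelimages/" in the URL, here a row index.
        QImage img = m_lookup(id.toInt());
        if (size)
            *size = img.size();
        return requestedSize.isValid() ? img.scaled(requestedSize, Qt::KeepAspectRatio)
                                       : img;
    }

private:
    std::function<QImage(int)> m_lookup;
};
Register it with engine.addImageProvider("modelimages", provider), and the delegate can then use Image { source: "image://modelimages/" + index }.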
A QML ShaderEffect will bind a QImage to a GLSL sampler2D. See the list of "how properties are mapped to GLSL uniform variables" in the ShaderEffect docs. It needs a few lines of GLSL in the ShaderEffect to pass the pixels through (I will update this post with an example sometime, when I have access to my code), but the result is a fast, QSG-friendly, all-QML element which can render QImages.
I am trying to render geographical data obtained at different times with different sensors. Currently I manage (through OpenGL and a QOpenGLWidget) to render a single image (i.e. all vertices have a z=0 coordinate). However, I am wondering how to add new "images" (still with different vertices and textures) which can overlap the others in the same z=0 plane.
Sample from each texture in your fragment shader and do whatever compositing you need, such as additive blending, though for geospatial data it's probably more complex than that.
If you are using a library that does all that for you, then simply disable depth testing and render each layer, adjusting the transparency/blending function between passes.
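If it helps, here is a rough sketch of that second approach (Layer, layers and drawLayer() are placeholders for your own layer data and drawing code):
// Let the GPU blend the layers instead of compositing in a shader:
// depth testing off, alpha blending on, draw each layer back to front.
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   // standard "over" compositing

for (const Layer &layer : layers) {
    glBindTexture(GL_TEXTURE_2D, layer.textureId);
    // Optionally vary the layer opacity between passes (e.g. via a uniform
    // or glBlendColor()) before drawing its textured geometry.
    drawLayer(layer);
}

glDisable(GL_BLEND);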
I am developing a library for Qt that extends its OpenGL functionality to support modern OpenGL (3+) features like texture buffers, image load-store textures, shader storage buffers, atomic counter buffers, etc. In order to fully support features like transform feedback, I allow users to bind a buffer object to different targets at different times (regardless of which target they allocated the buffer's data with). Consider the following scenario, where I use transform feedback to advance vertex data ONCE, and then bind the buffer to a separate program for rendering (used for the rest of the application's run time):
// I attach a (previously allocated) buffer to the transform feedback target so that I
// can capture data advanced in a shader program.
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, someBufferID);
// Then I execute the shader program...
// and release the buffer from the transform feedback target.
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, 0);
// Then, I bind the same buffer containing data advanced via transform feedback
// to the array buffer target for use in a separate shader program.
glBindBuffer(GL_ARRAY_BUFFER, someBufferID);
// Then, I render something using this buffer as the data source for a vertex attrib.
// After, I release this buffer from the array buffer target.
glBindBuffer(GL_ARRAY_BUFFER, 0);
In this scenario, being able to bind a buffer object to multiple targets is useful. However, I am uncertain if there are situations in which this capability would cause problems given the OpenGL specification. Essentially, should I allow a single buffer object to be bound to multiple targets or force a target (like the standard Qt buffer wrappers) during instantiation?
Edit:
I have found that there is a problem mixing creation and binding targets with texture objects, as the OpenGL documentation for glBindTexture states:
GL_INVALID_OPERATION is generated if texture was previously created
with a target that doesn't match that of target.
However, the documentation for glBindBuffer mentions no such restriction.
Well, there can always be faulty drivers (and if you don't have "green" hardware nothing can be guaranteed, anyway). But rest assured that it is perfectly legal to bind buffers to any target you want, disregarding the target they were created with.
There might be certain subtleties, like certain targets needing to be bound in an indexed way via glBindBufferRange/Base (like uniform buffers or transform feedback buffers). But enforcing a buffer to be used with a single target only, like Qt does, is too rigid to be useful in modern OpenGL, as your very common example already shows. Maybe a good compromise would be either a per-class (like Qt does) or a per-buffer default target (which will be sufficient in most simple situations, when the buffer is always used with a single target, and can be used for class-internal binding operations if necessary), but with the option to bind it to something else.
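To make the indexed-binding point concrete, here is a rough sketch in the style of the question's snippet (the shader program setup and the draw call itself are elided):
// Transform feedback capture requires an indexed binding point, so bind the
// buffer with glBindBufferBase (or glBindBufferRange) rather than glBindBuffer.
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, someBufferID);

glBeginTransformFeedback(GL_POINTS);
// ... issue the draw call whose vertex outputs are captured into someBufferID ...
glEndTransformFeedback();

glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, 0);

// Afterwards the very same buffer can be bound to a completely different,
// non-indexed target and used as a vertex attribute source.
glBindBuffer(GL_ARRAY_BUFFER, someBufferID);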
The question is 2D-specific.
I have a constantly updating texture, which is a render target for one of my layers. The update is a whole redraw of the texture and is performed by drawing sprites and outputting text. The operation is performed frequently and consumes quite a lot of CPU, and I have, of course, optimized the number of redraws to keep it down.
Is there a way to buffer these operations in Direct3D? Currently I have to repeatedly construct the chain of sprite/text operations. Take any game performing a world update: how does it overcome this tedious work? Maybe by creating more layers?
The best thing for me would be creating a modifiable draw chain object, but I haven't found anything like this in Direct3D.
There are a few general methods you might look into:
Batching: Order and combine draws to perform as few calls as possible, and draw as many objects between state changes as you can.
Cache: Keep as much geometry in vertex buffers as you can. With 2D, this gets more interesting, since most things are textured quads. In that case...
Shaders: It may be possible to write a vertex shader that takes a float4 giving the X/Y position/size of your quad, then use that to draw 4 vertexes. You won't need to perform full matrix state changes then, just update 4 floats in your shader (skips all the view calculations, 75% less memory and math). To help make sure the right settings are being used with the shaders, ...
State Blocks: Save a state block for each type of sprite, with all the colors, modes, and shaders bound. Then simply apply the state block, bind your textures, set your coordinates, and draw. At best, you can get each sprite down to 4 calls. Even still...
Cull: It's best not to draw something at all. Do simple screen bounds-checking (which can be faster than the per-poly culling that would otherwise be done), sorting, and basic occlusion (flag sprites with transparency). With 2D, most culling checks are very, very cheap. Sort and clip and cull wherever you can.
As far as actual buffering goes, the driver will handle that for you, when and where it's appropriate. State blocks can affect buffering by delivering all the modes in a single call (I forget whether that's good or bad, though I believe they can be beneficial). Cutting the calls down to:
if (sprite.Visible && Active(sprite) && OnScreen(sprite))
{
    states[sprite.Type]->Apply();                   // state block: modes, colors, shaders
    device->BindTexture(sprite.Texture);
    device->SetVertexShaderF(sprite.PositionSize);  // the float4 position/size from above
    device->Draw(quad);
}
is very likely to help with CPU use.