Cut-out effect using a QML ShaderEffect

I'd like to achieve a "cutout" effect using a custom QML ShaderEffect item. The area that is cut out should display the pixels of the image (src), but only the pixels that are directly under the ShaderEffect item in the z order. In other words, only the pixels that exist at the same coordinates as the cutout square. The final effect would be exactly as if you had two images on top of each other and the top image were masked in an area to let the lower image show through. Like so:
Because of application-specific details, I need to achieve this using a custom vertex and fragment shader, but I am almost a complete stranger to GLSL. What I currently have in the code is this:
ShaderEffect {
    id: shader_element
    x: resizeable.x
    y: resizeable.y
    width: resizeable.width
    height: resizeable.height
    property Image src: global_image_reference // from the app's root scope
    vertexShader: "
        uniform highp mat4 qt_Matrix;
        attribute highp vec4 qt_Vertex;
        attribute highp vec2 qt_MultiTexCoord0;
        varying highp vec2 coord;
        void main() {
            coord = qt_MultiTexCoord0;
            gl_Position = qt_Matrix * qt_Vertex;
        }"
    fragmentShader: "
        varying highp vec2 coord;
        uniform sampler2D src;
        uniform lowp float qt_Opacity;
        void main() {
            gl_FragColor = texture2D(src, coord);
        }"
}
I'm passing a global reference to the underlying image (that I want to show through) to the ShaderEffect item and using that reference in the fragment shader. This works, but instead of a cutout effect I get a squish effect, where the referenced image is squished as the container is resized:
Any advice on how I need to change either my vertex shader or my fragment shader to achieve the cutout effect instead of the squish effect? I was thinking maybe something utilizing Item's mapToItem() or mapFromItem() functions, but I'm not sure how the points returned by those functions can be passed to the vertex or fragment shader.
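One possible direction, as a minimal sketch rather than a definitive fix: the squish happens because qt_MultiTexCoord0 always runs from 0 to 1 across the ShaderEffect item itself, so the whole source image is stretched into the item. Remapping the texture coordinates by the item's normalized position and size over the image yields a true cutout. The texOffset and texScale properties below are hypothetical helpers, and the sketch assumes the item and the image share the same parent coordinate system (otherwise mapToItem() would be needed to compute them):
ShaderEffect {
    id: shader_element
    x: resizeable.x
    y: resizeable.y
    width: resizeable.width
    height: resizeable.height
    property Image src: global_image_reference
    // Hypothetical helpers: this item's position and size over the image,
    // normalized into the 0..1 texture space of src.
    property point texOffset: Qt.point(x / src.width, y / src.height)
    property size texScale: Qt.size(width / src.width, height / src.height)
    fragmentShader: "
        varying highp vec2 qt_TexCoord0;
        uniform sampler2D src;
        uniform highp vec2 texOffset;
        uniform highp vec2 texScale;
        uniform lowp float qt_Opacity;
        void main() {
            // Sample only the region of src that lies under this item.
            gl_FragColor = texture2D(src, texOffset + qt_TexCoord0 * texScale) * qt_Opacity;
        }"
}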

Related

QML Image Masking

I have been working on a project where I need to apply image masking to achieve an effect like this:
Pic1: https://i.stack.imgur.com/6zI2x.jpg
Pic2: https://i.stack.imgur.com/z7IVX.jpg
Mask frame: https://i.stack.imgur.com/3syEm.jpg
Desired effect: https://i.stack.imgur.com/t2kO5.jpg
I got it to work by using OpacityMask; however, to do that I had to edit my mask frame image in Photoshop. I need to apply this effect to multiple mask frames with different shapes, so editing all of them in Photoshop seems troublesome. Moreover, the insides of the mask frame images aren't all transparent either.
Are there any ideas you can give me to solve this issue without pre-photoshopping each mask frame image? I tried to look into ShaderEffect, but I could not really understand how to use it for my purpose. I also searched for an OpacityMask-like effect that works only on the part of the mask image that has a specific color or specifically shaped area, but I could not find any.
ShaderEffect appears to be the only option, considering what you said in the comments that the frame shape could be anything.
The code examples below show how to solve your issue with ShaderEffect.
QML code
The only property on the QML side is rect, which defines the x, y, width, and height of the frame in pixels; frect is the same rectangle with each component scaled down to the 0..1 range.
Image { id: img; visible: false; source: "2.jpg" }
Image { id: frme; visible: false; source: "3.jpg" }
Image { id: back; visible: false; source: "1.jpg" }
ShaderEffect {
    width: img.sourceSize.width / 3.5
    height: img.sourceSize.height / 3.5
    property var back: back
    property var image: img
    property var frame: frme
    property vector4d rect: Qt.vector4d(width/2 - 50, height/2 - 60, 100, 120)
    readonly property vector4d frect: Qt.vector4d(rect.x/width, rect.y/height,
                                                  rect.z/width, rect.w/height)
    fragmentShader: "qrc:/shader.glsl"
}
shader.glsl
Using a color picker at different points of the frame image, I discovered that the saturation inside the frame is very different from the other areas. So, in order to decide where to mask in the image, I used saturation.
uniform highp sampler2D back;
uniform highp sampler2D image;
uniform highp sampler2D frame;
varying highp vec2 qt_TexCoord0;
uniform highp vec4 frect;
uniform highp float qt_Opacity;

// From https://gist.github.com/983/e170a24ae8eba2cd174f
vec3 rgb2hsv(vec3 c) {
    vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
    vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
    vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));
    float d = q.x - min(q.w, q.y);
    float e = 1.0e-10;
    return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
}

void main() {
    vec2 u = qt_TexCoord0;
    vec2 frameCoord = (u - frect.xy) / frect.zw;
    gl_FragColor = texture2D(back, u);
    if (frameCoord.x > 0. && frameCoord.y > 0. && frameCoord.x < 1. && frameCoord.y < 1.) {
        vec4 mask = texture2D(frame, frameCoord);
        vec3 hsv = rgb2hsv(mask.xyz);
        gl_FragColor = mask;
        // Check that the saturation is between 0 and 0.2.
        if (abs(hsv.y - 0.1) < 0.1) {
            gl_FragColor = texture2D(image, u);
        }
    }
}
Note
You can also change the last line of the shader if you want the frame's shadow to cover your image:
gl_FragColor = mix(texture2D(image, u), mask, 1. - hsv.z);
Result
If you know that the geometry of your picture frame is just a rectangle, we can just create a rectangular mask aligned within your picture frame. I worked out that your picture frame was 410x500 pixels and decided to shrink it to 50% i.e. 205x250 pixels. At that scale, I worked out that your picture frame had a border size of about 18 pixels. So I created an inner rectangle based on those dimensions and used that rectangle for the OpacityMask maskSource:
import QtQuick 2.15
import QtQuick.Controls 2.15
import QtGraphicalEffects 1.0

Page {
    Image {
        id: pic1
        anchors.fill: parent
        source: "https://i.stack.imgur.com/6zI2x.jpg"
    }
    Image {
        id: pic2
        anchors.fill: parent
        visible: false
        source: "https://i.stack.imgur.com/z7IVX.jpg"
    }
    Image {
        id: rawMask
        anchors.centerIn: parent
        width: 205
        height: 250
        source: "https://i.stack.imgur.com/3syEm.jpg"
    }
    Item {
        id: mask
        anchors.fill: parent
        Rectangle {
            id: borderMask
            x: rawMask.x + 18
            y: rawMask.y + 18
            width: rawMask.width - 36
            height: rawMask.height - 36
            color: "white"
        }
    }
    OpacityMask {
        anchors.fill: parent
        source: pic2
        maskSource: mask
    }
}

Implementing 3D Shader to 2D Object with Qt?

I am trying to implement the following shader from here:
https://gamedev.stackexchange.com/questions/68401/how-can-i-draw-outlines-around-3d-models
My base is a 2D Image that already has shaders applied.
I was unsure how to apply this:
glDrawBuffer( GL_COLOR_ATTACHMENT1 );
Vec3f clearVec( 0.0, 0.0, -1.0f );
// from normalized vector to rgb color; from [-1,1] to [0,1]
clearVec = (clearVec + Vec3f(1.0f, 1.0f, 1.0f)) * 0.5f;
glClearColor( clearVec.x, clearVec.y, clearVec.z, 0.0f );
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
So I didn't; this is what my QML code looks like:
ShaderEffect {
    id: outline
    anchors.fill: swirls
    visible: true
    property variant source: swirls
    //property variant source: mascot
    // first render target from the first pass
    property variant uTexColor: swirls
    // second render target from the first pass
    property variant uTexNormals: swirls
    property variant uResolution: Qt.vector2d(960, 640) // screen resolution
    property variant delta: Qt.size(0.1 / width, 0.2 / height)
    fragmentShader: "qrc:effects/shaders/outline.frag"
    layer.enabled: true
    layer.effect: OpacityMask {
        maskSource: swirls
    }
}
I don't know much about normal/diffuse maps, and I have no idea what
in vec2 fsInUV;
is, which seems to be important to getting this to work. I am trying to create sprite-like outlines around a circle I have made with an opacity mask + shaders (it's animated with shaders to look like water).
The original author of the shaders is inactive, and I'm not familiar with how QML implements shaders, as I'm very unfamiliar with shaders in general.
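As a pointer for the QML side (a minimal sketch, assuming the standard Qt 5 ShaderEffect conventions): the role of fsInUV is played by the built-in qt_TexCoord0 varying, which ShaderEffect's default vertex shader emits. A pass-through fragment shader using it would look like this:
// Fragment shader for a Qt 5 ShaderEffect. qt_TexCoord0 is the built-in
// texture coordinate varying, the analogue of fsInUV in the original code;
// 'source' must be declared as a property on the ShaderEffect item.
varying highp vec2 qt_TexCoord0;
uniform sampler2D source;
uniform lowp float qt_Opacity;
void main() {
    gl_FragColor = texture2D(source, qt_TexCoord0) * qt_Opacity;
}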

How to size the texture to occupy only a portion of a QQuickItem UI

I have overridden updatePaintNode in the following way to draw an OpenGL texture on a QQuickItem-derived class called MyQQuickItem here.
QSGNode *MyQQuickItem::updatePaintNode(QSGNode *oldNode, QQuickItem::UpdatePaintNodeData * /*updatePaintNodeData*/)
{
    QSGSimpleTextureNode *textureNode = static_cast<QSGSimpleTextureNode *>(oldNode);
    if (!textureNode) {
        textureNode = new QSGSimpleTextureNode();
        // Let the node delete the previous texture when a new one is set.
        textureNode->setOwnsTexture(true);
    }
    QSize size(800, 800);
    // myTextureId is a GLuint here
    QSGTexture *texture = window()->createTextureFromId(myTextureId, size);
    textureNode->setTexture(texture);
    textureNode->markDirty(QSGBasicGeometryNode::DirtyMaterial);
    QSizeF myViewport = boundingRect().size();
    qreal xOffset = 0;
    qreal yOffset = 10;
    textureNode->setRect(xOffset, yOffset, myViewport.width(), myViewport.height());
    return textureNode;
}
This renders the texture content well but covers the whole of my MyQQuickItem UI.
How can I reduce the bottom margin of the texture to, say, fit 80% of the height of MyQQuickItem?
I want to render the texture to a portion of MyQQuickItem and leave the rest blank or black. Is that possible within updatePaintNode?
Note that the texture size is not the UI window size here. My texture size is 800 by 800, whereas the UI window size is different and depends on the screen.
I found the answer to this:
Changing myViewport.height() sets where the texture ends in the Y direction; similarly, changing myViewport.width() sets where it ends in the X direction.
The 4 parameters of QSGSimpleTextureNode's setRect can stretch and fit the texture however one wishes within a portion of the QQuickItem.
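For example, a minimal sketch (the 0.8 factor is illustrative) that keeps the texture in the top 80% of the item, leaving the bottom 20% uncovered:
// Occupy only the top 80% of the item with the texture. The remaining
// 20% is simply not covered by this node, so whatever is behind the
// item (e.g. a black background rectangle) shows through there.
QSizeF myViewport = boundingRect().size();
textureNode->setRect(0, 0, myViewport.width(), myViewport.height() * 0.8);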

How to render text in QQuickFramebufferObject?

Motivation: I currently have 1000 simple items in my QML scene, and one of them animates at 60fps, so the entire scene repaints at 60fps. CPU usage averages 15% on each of my 4 virtual cores on my PC. On the target hardware the situation is even worse: 60% on each of the 4 physical cores, leading to overheating and ultimately a freeze. Note that I have implemented an optimization: via Loaders, I unload all items that are outside the (scrolling) viewport, so only ~18 items are loaded at any given time. The perf stats I report are with this optimization; without it, it's worse.
My solution is to start drawing all 1000 items in a single QQuickFramebufferObject and stop having them as actual QML Items. That way I'll avoid Qt punishing me for merely having 1000 (unloaded!) items.
Where I'm stuck though: How to draw the text parts of the items in OpenGL?
Approach 1: I know QPainter can be used to directly render text into a QOpenGLWidget, but that option seems to be absent in QQFBO.
Approach 2: Have a single, parentless Text item in QML with layer.enabled: true, set its text property, wait 1 frame (for it to render) then fetch the texture. Somewhat ugly and roundabout; also may be slow-ish.
Approach 3: Look at the source of QQuickText to see what magic it does and copy it. Might be difficult, and I'll have to comply with license restrictions.
Approach 4: Do software rendering with a QPainter to a QImage, then upload that image to a texture. Elegant, but may be too slow.
Any suggestions on a way to do it that doesn't have the problems in these approaches?
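For concreteness, approach 4 would look something like this minimal sketch (Qt 5 APIs; the image size and text are illustrative):
#include <QImage>
#include <QPainter>
#include <QOpenGLTexture>

// Software-render text with QPainter into a QImage, then upload that
// image as an OpenGL texture usable from the FBO's render() pass.
QImage image(256, 64, QImage::Format_ARGB32_Premultiplied);
image.fill(Qt::transparent);
QPainter painter(&image);
painter.setPen(Qt::white);
painter.drawText(image.rect(), Qt::AlignCenter, QStringLiteral("item label"));
painter.end();

// Mirror vertically to match OpenGL's bottom-up row order.
QOpenGLTexture *texture = new QOpenGLTexture(image.mirrored());
texture->bind();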
It's not totally clear why animating one item makes your whole scene repaint. But if only one item is animating, you might want to split your scene: the items that don't move should go into a parent item, and the one that moves can live outside it.
There is a rather easy way to render a subtree to an FBO: just render the subtree into a ShaderEffect which does nothing.
This example renders an Image with a grayscale shader (borrowed from the example in the Qt docs):
import QtQuick 2.0

Rectangle {
    width: 200; height: 100
    Row {
        Image {
            id: img
            sourceSize { width: 100; height: 100 }
            source: "qt-logo.png"
        }
        ShaderEffect {
            width: 100; height: 100
            property variant src: img
            vertexShader: "
                uniform highp mat4 qt_Matrix;
                attribute highp vec4 qt_Vertex;
                attribute highp vec2 qt_MultiTexCoord0;
                varying highp vec2 coord;
                void main() {
                    coord = qt_MultiTexCoord0;
                    gl_Position = qt_Matrix * qt_Vertex;
                }"
            fragmentShader: "
                varying highp vec2 coord;
                uniform sampler2D src;
                uniform lowp float qt_Opacity;
                void main() {
                    lowp vec4 tex = texture2D(src, coord);
                    gl_FragColor = vec4(vec3(dot(tex.rgb,
                                        vec3(0.344, 0.5, 0.156))),
                                        tex.a) * qt_Opacity;
                }"
        }
    }
}
A ShaderEffect is just a render to texture; you are seeing a rectangle filled with a picture of the object. In this case the illusion is still there, but your animated object only deals with a single textured rectangle.
I don't know if that is the solution as it seems the problem might be elsewhere. Please elaborate your problem and I might update the answer as needed.
I know this seems like your second approach, but in this case you render your whole unchanged subtree to a texture. If I may guess, it seems you have scrolling text that is batched to the GPU again and again because of a scrolling animation; if you use a ShaderEffect and a long stripe of items, you could animate just the scrolling window and always leave your text static, avoiding the re-batching.
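A minimal sketch of the "does nothing" variant (item names are illustrative): with no vertexShader or fragmentShader set, the ShaderEffect falls back to its default shaders, which simply draw the source texture, so the static subtree is rendered once to a texture and repainted as a single quad afterwards:
Item {
    id: staticSubtree
    visible: false   // hidden items can still serve as texture sources
    // ... the non-animating items go here ...
}
ShaderEffect {
    width: staticSubtree.width
    height: staticSubtree.height
    // Assigning an Item wraps it in a ShaderEffectSource automatically;
    // the default shaders just sample the 'source' texture unchanged.
    property variant source: staticSubtree
}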

Depth map for particle system using a texture map

I'm trying to generate a depth map for a particle system, but if I render the particle system using MeshDepthMaterial, every particle is rendered as only a single point for each vertex, not covering the entire area over which the texture-mapped particle is displayed.
Do I need to use MeshDepthMaterial to generate a depth map, or are there other options?
Right now there is no way to get MeshDepthMaterial to respect the size or texture of a ParticleSystem. However, it is not too hard to implement a custom ShaderMaterial that does. First, you need a vertex shader and a fragment shader.
<script type="x-shader/x-vertex" id="vertexShader">
uniform float size;
void main() {
gl_PointSize = size;
gl_Position = projectionMatrix * modelViewMatrix * vec4( position , 1.0 );
}
</script>
<script type = "x-shader/x-fragment" id="fragmentShader">
uniform sampler2D map;
uniform float near;
uniform float far;
void main() {
float depth = gl_FragCoord.z / gl_FragCoord.w;
float depthColor = 1.0 - smoothstep( near, far, depth );
vec4 texColor = texture2D( map, vec2( gl_PointCoord.x, 1.0 - gl_PointCoord.y ) );
gl_FragColor = vec4( vec3(depthColor), texColor.a);
}
</script>
The vertex shader is totally standard. The fragment shader takes the texture (sampler2D map), but instead of using it for color values, it only uses its alpha level, texColor.a. For the RGB channels, a grayscale value based on the depth is used, just like in MeshDepthMaterial. To use this shader you just need to grab the source from the HTML and create a THREE.ShaderMaterial like so:
var material = new THREE.ShaderMaterial({
    uniforms : {
        size : { type: 'f', value: 20.0 },
        near : { type: 'f', value: camera.near },
        far  : { type: 'f', value: camera.far },
        map  : { type: 't', value: THREE.ImageUtils.loadTexture( url ) }
    },
    attributes : {},
    vertexShader: vertShader,
    fragmentShader: fragShader,
    transparent: true
});
Here you have provided the shader with all the info it needs: the camera's near/far range, the size of the particle and the texture it needs to map.
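To attach it, a short usage sketch (the geometry and scene variables are assumed to exist in your setup, and the API matches the older three.js releases that still had THREE.ParticleSystem):
// Use the custom depth material in place of MeshDepthMaterial.
var particles = new THREE.ParticleSystem( geometry, material );
scene.add( particles );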
You can see a jsFiddle demo of it here.
