QML Image Masking - qt

I have been working on a project where I need to apply image masking that applies an effect like this:
Pic1: https://i.stack.imgur.com/6zI2x.jpg
Pic2: https://i.stack.imgur.com/z7IVX.jpg
Mask frame: https://i.stack.imgur.com/3syEm.jpg
Desired effect: https://i.stack.imgur.com/t2kO5.jpg
I got it to work using OpacityMask, but to do that I had to edit my mask frame image in Photoshop. I need to apply this effect to multiple mask frames with different shapes, so editing all of them in Photoshop would be troublesome. Moreover, the insides of the mask frame images aren't all transparent either.
Are there any ideas you can give me to solve this without pre-editing each mask frame image in Photoshop? I tried to look into ShaderEffect, but I could not really understand how to use it for my purpose. I also searched for an OpacityMask-like effect that works only on the part of the mask image with a specific color or a specifically shaped area, but I could not find any.

ShaderEffect appears to be the only option, considering what you said in the comments that the frame shape could be anything.
The code examples below show how to solve your issue with ShaderEffect.
QML code
The only property on the QML side is rect, which defines the x, y, width, and height of the frame; frect normalizes these values to the 0-1 range.
Image { id: img;  visible: false; source: "2.jpg" }
Image { id: frme; visible: false; source: "3.jpg" }
Image { id: back; visible: false; source: "1.jpg" }
ShaderEffect {
    width: img.sourceSize.width / 3.5
    height: img.sourceSize.height / 3.5
    property var back: back
    property var image: img
    property var frame: frme
    property vector4d rect: Qt.vector4d(width/2 - 50, height/2 - 60, 100, 120)
    readonly property vector4d frect: Qt.vector4d(rect.x / width, rect.y / height,
                                                  rect.z / width, rect.w / height)
    fragmentShader: "qrc:/shader.glsl"
}
shader.glsl
Using a color picker at different points of the frame image, I discovered that the saturation inside the frame is very different from the other areas.
So I used saturation to decide where in the image to mask.
uniform highp sampler2D back;
uniform highp sampler2D image;
uniform highp sampler2D frame;
varying highp vec2 qt_TexCoord0;
uniform highp vec4 frect;
uniform highp float qt_Opacity;

// From https://gist.github.com/983/e170a24ae8eba2cd174f
vec3 rgb2hsv(vec3 c) {
    vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
    vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
    vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));
    float d = q.x - min(q.w, q.y);
    float e = 1.0e-10;
    return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
}

void main() {
    vec2 u = qt_TexCoord0;
    vec2 frameCoord = (u - frect.xy) / frect.zw;
    gl_FragColor = texture2D(back, u);
    if (frameCoord.x > 0. && frameCoord.y > 0. && frameCoord.x < 1. && frameCoord.y < 1.) {
        vec4 mask = texture2D(frame, frameCoord);
        vec3 hsv = rgb2hsv(mask.xyz);
        gl_FragColor = mask;
        // Check that the saturation is between 0 and 0.2.
        if (abs(hsv.y - 0.1) < 0.1) {
            gl_FragColor = texture2D(image, u);
        }
    }
}
Note
You can also change the last line of code if you want the frame's shadow to cover your image.
gl_FragColor = mix(texture2D(image, u), mask, 1. - hsv.z);
Result

If you know that the geometry of your picture frame is just a rectangle, we can just create a rectangular mask aligned within your picture frame. I worked out that your picture frame was 410x500 pixels and decided to shrink it to 50% i.e. 205x250 pixels. At that scale, I worked out that your picture frame had a border size of about 18 pixels. So I created an inner rectangle based on those dimensions and used that rectangle for the OpacityMask maskSource:
import QtQuick 2.15
import QtQuick.Controls 2.15
import QtGraphicalEffects 1.0

Page {
    Image {
        id: pic1
        anchors.fill: parent
        source: "https://i.stack.imgur.com/6zI2x.jpg"
    }
    Image {
        id: pic2
        anchors.fill: parent
        visible: false
        source: "https://i.stack.imgur.com/z7IVX.jpg"
    }
    Image {
        id: rawMask
        anchors.centerIn: parent
        width: 205
        height: 250
        source: "https://i.stack.imgur.com/3syEm.jpg"
    }
    Item {
        id: mask
        anchors.fill: parent
        Rectangle {
            id: borderMask
            x: rawMask.x + 18
            y: rawMask.y + 18
            width: rawMask.width - 36
            height: rawMask.height - 36
            color: "white"
        }
    }
    OpacityMask {
        anchors.fill: parent
        source: pic2
        maskSource: mask
    }
}

Related

Cut-out effect using a QML ShaderEffect

I'd like to achieve a "cutout" effect using a custom QML ShaderEffect item. The area that is cut out should display the pixels of the image (src) but only the pixels that are directly under the ShaderEffect item in the z order. In other words, only the pixels that exist at the same coordinates as the area of cutout square. The final effect would be exactly like if you had two images on top of each other and the top image was being masked in an area to allow the lower image to show through. Like so:
Because of application-specific details, I need to achieve this using a custom vertex shader and a fragment shader, but I am almost a complete stranger to the GLSL language. What I currently have in the code is this:
ShaderEffect {
    id: shader_element
    x: resizeable.x
    y: resizeable.y
    width: resizeable.width
    height: resizeable.height
    property Image src: global_image_reference // from the app's root scope
    vertexShader: "
        uniform highp mat4 qt_Matrix;
        attribute highp vec4 qt_Vertex;
        attribute highp vec2 qt_MultiTexCoord0;
        varying highp vec2 coord;
        void main() {
            coord = qt_MultiTexCoord0;
            gl_Position = qt_Matrix * qt_Vertex;
        }"
    fragmentShader: "
        varying highp vec2 coord;
        uniform sampler2D src;
        uniform lowp float qt_Opacity;
        void main() {
            gl_FragColor = texture2D(src, coord);
        }"
}
I'm passing a global reference to the underlying image (that I want to show through) to the ShaderEffect item and using that reference in the pixel shader. This works but instead of getting a cutout effect, I get a squish effect where the referenced image is being squished when the container is resized:
Any advice on how I need to change either my vertex shader or my fragment shader to achieve the cutout effect instead of the squish effect? I was thinking maybe something utilizing Item's mapToItem() or mapFromItem() functions, but I'm not sure how the points returned by those functions can be passed to the vertex or fragment shader.
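One way to get the cutout instead of the squish is to tell the fragment shader which sub-rectangle of the source image lies under the ShaderEffect item, and remap the texture coordinate into that sub-rectangle. The sketch below is an assumption-laden illustration, not a confirmed fix: the `offset` and `ratio` property names are mine, and it assumes the item and the image share the same parent coordinate space (otherwise compute the offset with mapToItem()).

```qml
ShaderEffect {
    id: shader_element
    x: resizeable.x
    y: resizeable.y
    width: resizeable.width
    height: resizeable.height
    property Image src: global_image_reference

    // Top-left corner of this item inside the source image, normalized to 0..1.
    // Hypothetical helper properties; QML exposes point/size properties to the
    // shader as vec2 uniforms.
    property point offset: Qt.point(x / src.width, y / src.height)
    // Fraction of the source image this item covers.
    property size ratio: Qt.size(width / src.width, height / src.height)

    fragmentShader: "
        varying highp vec2 coord;
        uniform sampler2D src;
        uniform highp vec2 offset;
        uniform highp vec2 ratio;
        uniform lowp float qt_Opacity;
        void main() {
            // Remap the item's 0..1 coords into the sub-rectangle of the
            // source directly under this item, so resizing reveals more of
            // the image instead of stretching it.
            gl_FragColor = texture2D(src, offset + coord * ratio) * qt_Opacity;
        }"
}
```

Because `offset` and `ratio` are bindings on `x`, `y`, `width`, and `height`, dragging or resizing the item should update the sampled region automatically.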

Converting 2d coordinates to 3d coordinates in Qml qt3d

I have a window of size width: 1500, height: 780. I would like to render a 3D object using 2D coordinates, giving z=1 as a dummy value. Following is the QML code. (If I render at x=0, y=0, the 3D object should render exactly where a Rectangle in QML would render, i.e. in window coordinates.)
Entity {
    id: root
    property real x: 2.0
    property real y: 0.0
    property real z: 0.0
    property real scale: 1
    property var mainCam
    property var forwardRenderer
    property Material material
    components: [ trefoilMeshTransform, mesh, root.material ]
    Transform {
        id: trefoilMeshTransform
        property real userAngle: 900.0
        translation: Qt.vector3d(0, 1, 1)
        property real theta: 0.0
        property real phi: 0.0
        property real roll: 0.0
        rotation: fromEulerAngles(theta, phi, roll)
        scale: root.scale
    }
    Mesh {
        id: mesh
        source: "assets/obj/cube.obj"
    }
}
The code is exactly the same as the wireframe example, including the camera settings. I tried to use the Qt 3D unproject API in QML but was unsuccessful. Can you please give me a clue?
Please read this document about how to convert world coordinates to screen coordinates and vice versa; it is good for understanding the logic.
And also this.
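The conversion those documents describe boils down to a short matrix pipeline. The sketch below illustrates it in plain Python with NumPy (the matrix names are generic, not Qt 3D API): a world point goes through the view and projection matrices, the perspective divide, and finally a viewport mapping from normalized device coordinates to pixels.

```python
import numpy as np

def world_to_screen(world_point, view, projection, viewport_w, viewport_h):
    """Project a 3D world-space point to 2D window coordinates."""
    p = np.array([*world_point, 1.0])      # homogeneous coordinates
    clip = projection @ view @ p           # world -> clip space
    ndc = clip[:3] / clip[3]               # perspective divide -> [-1, 1]
    x = (ndc[0] + 1.0) * 0.5 * viewport_w  # NDC -> pixels (x grows right)
    y = (1.0 - ndc[1]) * 0.5 * viewport_h  # NDC -> pixels (y grows down)
    return x, y

# With identity view/projection matrices, the world origin lands in the
# centre of the 1500x780 window from the question.
identity = np.eye(4)
print(world_to_screen((0.0, 0.0, 0.0), identity, identity, 1500, 780))
```

Unprojection (screen back to world) inverts the same pipeline, which is what a ray caster does internally.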
In Qt 3D I get the 3D coordinate by using RayCaster and ScreenRayCaster. They give you the local position, the world position, and the screen x, y.
See this site from KDAB and its example:
RenderSettings {
    // all the components you need, like Viewport, InputSettings, ...
    ScreenRayCaster {
        id: screenRayCaster
        onHitsChanged: printHits("Screen hits", hits)
    }
}
MouseHandler {
    id: mouseHandler
    sourceDevice: MouseDevice {}
    onReleased: { screenRayCaster.trigger(Qt.point(mouse.x, mouse.y)) }
}
function printHits(desc, hits) {
    console.log(desc, hits.length)
    for (var i = 0; i < hits.length; i++) {
        console.log("  " + hits[i].entity.objectName, hits[i].distance,
                    hits[i].worldIntersection.x, hits[i].worldIntersection.y,
                    hits[i].worldIntersection.z)
    }
}

Implementing 3D Shader to 2D Object with Qt?

I am trying to implement the following shader from here:
https://gamedev.stackexchange.com/questions/68401/how-can-i-draw-outlines-around-3d-models
My base is a 2D Image that has pre-applied shaders.
I was unsure how to apply this part:
glDrawBuffer( GL_COLOR_ATTACHMENT1 );
Vec3f clearVec( 0.0, 0.0, -1.0f );
// from normalized vector to rgb color; from [-1,1] to [0,1]
clearVec = (clearVec + Vec3f(1.0f, 1.0f, 1.0f)) * 0.5f;
glClearColor( clearVec.x, clearVec.y, clearVec.z, 0.0f );
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
So I didn't, this is what my QML code looks like:
ShaderEffect {
    id: outline
    anchors.fill: swirls
    visible: true
    property variant source: swirls
    //property variant source: mascot
    // first render target from the first pass
    property variant uTexColor: swirls
    // second render target from the first pass
    property variant uTexNormals: swirls
    property variant uResolution: Qt.vector2d(960, 640) // screen resolution
    property variant delta: Qt.size(0.1 / width, 0.2 / height)
    fragmentShader: "qrc:effects/shaders/outline.frag"
    layer.enabled: true
    layer.effect: OpacityMask {
        maskSource: swirls
    }
}
I don't know much about normal/diffuse maps, and I have no idea what
in vec2 fsInUV;
is, which seems to be important to getting this to work. I am trying to create sprite-like outlines around a circle I have made with OpacityMask plus shaders (it's animated to look like water).
The original author of the shaders is inactive, and I'm not familiar with how QML implements shaders, as I'm very unfamiliar with shaders in general.
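For what it's worth, `fsInUV` in the linked tutorial is just the interpolated texture coordinate handed from the vertex shader to the fragment shader; QML's ShaderEffect provides the same thing as the built-in varying `qt_TexCoord0`. A rough sketch of the renaming (the actual outline edge-detection from the linked answer would replace the placeholder comment; uniform names match the QML properties in the question):

```glsl
// Tutorial's declaration:   in vec2 fsInUV;
// ShaderEffect equivalent:  varying highp vec2 qt_TexCoord0;
varying highp vec2 qt_TexCoord0;
uniform sampler2D uTexColor;
uniform sampler2D uTexNormals;
uniform highp vec2 uResolution;
uniform lowp float qt_Opacity;

void main() {
    highp vec2 fsInUV = qt_TexCoord0; // same role: per-pixel UV in 0..1
    // ... outline edge-detection from the linked answer goes here ...
    gl_FragColor = texture2D(uTexColor, fsInUV) * qt_Opacity;
}
```

Note this only addresses the UV plumbing; the tutorial's two-render-target setup (color plus normals) still has to be produced somehow, which plain ShaderEffect does not do for you.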

How to use QML Scale Element for incremental scaling with different origin

I'm trying to use a QML Scale Element to perform view scaling around a point clicked by the user, but it's not always working as documented.
To reproduce the problem, run the minimal QML example below (I'm using Qt 5.3.1 on Ubuntu 14.04 x86_64) and then:
Click in the center of the blue rectangle at the top left.
See that everything is scaled up, but the center of the blue rectangle remains at your click location. This is as documented in http://doc.qt.io/qt-5/qml-qtquick-scale.html - "[The origin] holds the point that the item is scaled from (that is, the point that stays fixed relative to the parent as the rest of the item grows)."
Now click in the center of the red rectangle.
See that everything is scaled up, but the center of the red rectangle did not remain at your click point, it was translated up and to the left. This is not as documented.
My goal is to have it always zoom correctly maintaining the click point as the origin, as stated in the documentation.
P.S. Interestingly, if you now click again in the center of the red rectangle, it scales up around that point as promised. Clicking again now on the center of the blue rectangle, you see the same unexpected translation behaviour.
P.P.S. I'm working on an application where the user can mouse-wheel / pinch anywhere on the containing rectangle, and everything inside should scale up or down around the mouse / pinch position. Many applications have exactly this behaviour; see for example Inkscape.
import QtQuick 2.2
import QtQuick.Controls 1.1

ApplicationWindow {
    visible: true
    width: 640
    height: 480
    title: qsTr("Hello World")
    Rectangle {
        x: 100
        y: 100
        width: 300
        height: 300
        transform: Scale {
            id: tform
        }
        MouseArea {
            anchors.fill: parent
            onClicked: {
                console.log(mouse.x + " " + mouse.y)
                tform.xScale += 0.5
                tform.yScale += 0.5
                tform.origin.x = mouse.x
                tform.origin.y = mouse.y
            }
        }
        Rectangle {
            x: 50
            y: 50
            width: 50
            height: 50
            color: "blue"
        }
        Rectangle {
            x: 100
            y: 100
            width: 50
            height: 50
            color: "red"
        }
    }
}
(I filed this as a Qt bug, because the behaviour does not follow the documentation. At the moment of writing, the bug seems to have been triaged as "important". https://bugreports.qt.io/browse/QTBUG-40005 - I'm still very much open to suggestions of work-arounds / fixes over here)
In fact it is not deviant behavior, just different from what you may expect after reading the doc.
When you change the Scale transform of your rectangle, the transformation is applied to the original rectangle. The point you click on stays in the same place from the original rectangle's point of view.
That's why your rectangle "moves" so much when you click on one corner and then the opposite corner.
To achieve what you want, you can't rely on the transform origin. You have to set the actual x and y coordinates of your rectangle.
Here's a working example:
ApplicationWindow {
    visible: true
    width: 640
    height: 480
    title: qsTr("Hello World")
    Rectangle {
        id: rect
        x: 100
        y: 100
        width: 300
        height: 300
        Rectangle {
            x: 50
            y: 50
            width: 50
            height: 50
            color: "blue"
        }
        Rectangle {
            x: 100
            y: 100
            width: 50
            height: 50
            color: "red"
        }
        transform: Scale {
            id: tform
        }
        MouseArea {
            anchors.fill: parent
            property double factor: 2.0
            onWheel: {
                // Zoom in or out depending on the wheel direction.
                var zoomFactor = wheel.angleDelta.y > 0 ? factor : 1 / factor
                // Cursor position in scaled (scene) coordinates.
                var realX = wheel.x * tform.xScale
                var realY = wheel.y * tform.yScale
                // Shift the rectangle so the point under the cursor stays fixed.
                rect.x += (1 - zoomFactor) * realX
                rect.y += (1 - zoomFactor) * realY
                tform.xScale *= zoomFactor
                tform.yScale *= zoomFactor
            }
        }
    }
}
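The arithmetic behind that correction can be checked independently of QML. The plain-Python sketch below (function name is mine, for illustration) verifies that after shifting `rect.x` by `(1 - zoomFactor) * realX`, the scene position of the point under the cursor is unchanged:

```python
def zoom_at(rect_x, scale, wheel_x, zoom_factor):
    """Return (new_rect_x, new_scale) after zooming at wheel_x (item coords)."""
    real_x = wheel_x * scale                      # cursor offset in scene pixels
    new_rect_x = rect_x + (1 - zoom_factor) * real_x
    return new_rect_x, scale * zoom_factor

# Scene position of the cursor before and after the zoom must match.
rect_x, scale = 100.0, 1.0
new_rect_x, new_scale = zoom_at(rect_x, scale, wheel_x=50.0, zoom_factor=2.0)
before = rect_x + 50.0 * scale          # 150.0
after = new_rect_x + 50.0 * new_scale   # also 150.0: the point stayed fixed
```

The same identity holds for the y axis, which is why the handler applies the formula to both coordinates.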

QML NumberAnimation.duration behaviour

I'm trying to make a player move smoothly towards a destination in QML. I'm using a NumberAnimation to animate the x,y position changes. The NumberAnimation's duration should be proportional to the distance the player has to travel, so that the player moves at the same speed regardless of how far away they are from their destination.
import QtQuick 1.1

Item {
    width: 720
    height: 720
    MouseArea {
        anchors.fill: parent
        onClicked: {
            var newXDest = mouse.x - player.width / 2;
            var newYDest = mouse.y - player.height / 2;
            var dist = player.distanceFrom(newXDest, newYDest);
            // Make duration proportional to distance.
            player.xMovementDuration = dist; // 1 msec per pixel
            player.yMovementDuration = dist; // 1 msec per pixel
            console.log("dist = " + dist);
            player.xDest = newXDest;
            player.yDest = newYDest;
        }
    }
    Rectangle {
        id: player
        x: xDest
        y: yDest
        width: 32
        height: 32
        color: "blue"
        property int xDest: 0
        property int yDest: 0
        property int xMovementDuration: 0
        property int yMovementDuration: 0
        function distanceFrom(x, y) {
            var xDist = x - player.x;
            var yDist = y - player.y;
            return Math.sqrt(xDist * xDist + yDist * yDist);
        }
        Behavior on x {
            NumberAnimation {
                duration: player.xMovementDuration
                // duration: 1000
            }
        }
        Behavior on y {
            NumberAnimation {
                duration: player.yMovementDuration
                // duration: 1000
            }
        }
    }
    Rectangle {
        x: player.xDest
        y: player.yDest
        width: player.width
        height: player.height
        color: "transparent"
        border.color: "red"
    }
}
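The "1 ms per pixel" rule in onClicked amounts to: duration equals the Euclidean distance to the destination, which keeps the speed constant. A quick numeric sanity check (plain Python; the function name is mine):

```python
import math

def movement_duration(x, y, dest_x, dest_y, ms_per_pixel=1.0):
    """Duration (ms) so the player covers any distance at a fixed speed."""
    dist = math.hypot(dest_x - x, dest_y - y)  # Euclidean distance in pixels
    return dist * ms_per_pixel

# A 3-4-5 triangle: 300 px right and 400 px down is 500 px -> 500 ms.
print(movement_duration(0, 0, 300, 400))
```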
My problem can be demonstrated by running the application above and following these steps:
Click on the bottom right hand corner of the screen.
Immediately click in the centre (or closer towards the top left) of the screen.
On the second click (while the rectangle is still moving), it seems that the rectangle's number animation is stopped (which is what I want) but it assumes the position of the destination (not what I want). Instead, I want the animation to stop and the rectangle to assume the position at which it was stopped, then to continue on to the new destination.
The correct behaviour - ignoring that the movement speed becomes disproportional - can be seen by setting both of the NumberAnimation.durations to 1000.
I think that you are looking for SmoothedAnimation. There are only two types of animation that deal nicely with the destination changing before the animation is completed. That is SmoothedAnimation and SpringAnimation. Both of these use the current position and velocity to determine the position in the next frame. Other animation types move the position along a predetermined curve.
Simply changing NumberAnimation to SmoothedAnimation makes your example look correct to me.
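If it helps, the swap looks like this (a sketch: the velocity value of 1000 px/s matches the question's 1-ms-per-pixel rule and is otherwise arbitrary):

```qml
Behavior on x {
    SmoothedAnimation {
        // Pixels per second; replaces the per-click duration computation,
        // since SmoothedAnimation keeps the speed constant by itself.
        velocity: 1000
    }
}
Behavior on y {
    SmoothedAnimation { velocity: 1000 }
}
```

With velocity set, the onClicked handler no longer needs to compute the distance or assign the duration properties at all.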
