MultiPointTouchArea and Canvas - qt

I am playing with MultiPointTouchArea inside a Canvas component to make a little drawing exercise. The code below works, but the onReleased event gets called twice and I don't understand why.
From the log statements below, I see it is called first with one TouchPoint, then again with two TouchPoints; the x and y positions are the same for all of them. Also, the id of these touchPoints is undefined.
I don't get it, since I define maximumTouchPoints: 1 and am testing with just one touch (on my laptop's trackpad, with a single "finger"):
Why am I getting multiple touchPoints?
Why is onReleased getting called twice?
Why is the touchPoints' id undefined, since I have defined my touchPoints?
qml: released 1
qml: undefined 386.66015625 207.6640625
qml: is this touch1? true
qml: released 2
qml: undefined 386.66015625 207.6640625
qml: is this touch1? true
qml: undefined 386.66015625 207.6640625
qml: is this touch1? true
import QtQuick 2.5
import QtQuick.Controls 1.4

ApplicationWindow {
    visible: true
    width: 640
    height: 480
    title: qsTr("Canvas")

    Canvas {
        id: canvas
        anchors.fill: parent
        property real lastX: 0
        property real lastY: 0

        onPaint: {
            var ctx = getContext("2d")
            ctx.lineWidth = 1
            ctx.strokeStyle = "blue"
            ctx.beginPath()
            ctx.moveTo(lastX, lastY)
            ctx.lineTo(touch1.x, touch1.y)
            ctx.stroke()
            canvas.lastX = touch1.x;
            canvas.lastY = touch1.y;
        }

        function clearCanvas() {
            var ctx = canvas.getContext("2d")
            ctx.clearRect(0, 0, canvas.width, canvas.height)
        }

        MultiPointTouchArea {
            anchors.fill: parent
            minimumTouchPoints: 1
            maximumTouchPoints: 1
            touchPoints: [TouchPoint { id: touch1 }]

            onPressed: {
                canvas.lastX = touch1.x;
                canvas.lastY = touch1.y;
                canvas.clearCanvas();
            }

            onReleased: {
                console.log("released", touchPoints.length); // CALLED TWICE?
                var tp;
                for (var i = 0; i < touchPoints.length; i++) {
                    tp = touchPoints[i];
                    console.log("\t", tp.id, tp.x, tp.y);
                    console.log("is this touch1?", tp === touch1);
                }
            }

            onUpdated: canvas.requestPaint();
        }
    }
}

So it seems that there are open bugs for these issues:
The "two release events" issue has been reported and is still open: https://bugreports.qt.io/browse/QTBUG-44781
The "no previousX, previousY on TouchPoint" issue is tracked there too: https://bugreports.qt.io/browse/QTBUG-41692

Related

How to perform dynamic conversion between mouse coordinates and world coordinates with QtQuick3D?

Description / Code
I have a Qt Quick 3D view and corresponding scene that was designed to be compiled with Qt 6.3.0.
import QtQuick
import QtQml
import QtQuick3D
import QtQuick3D.Helpers

Window {
    width: 800
    height: 600
    visible: true

    property var selectedItem
    property bool mousePressed: false

    function multiply_vectors(vec1, vec2) {
        return Qt.vector3d(vec1.x * vec2.x, vec1.y * vec2.y, vec1.z * vec2.z);
    }

    View3D {
        id: view
        renderMode: View3D.Inline
        camera: camera
        anchors.fill: parent
        width: 800
        height: 600
        x: 0
        y: 0

        environment: SceneEnvironment {
            clearColor: "black"
            backgroundMode: SceneEnvironment.Color
            depthTestEnabled: false
            depthPrePassEnabled: true
        }

        Model {
            id: rootEntity
            pickable: true
            source: "#Cube"
            materials: PrincipledMaterial {
                baseColor: "red"
                roughness: 0.1
            }
            position: Qt.vector3d(25.0, 15.0, -60.0)
            scale: Qt.vector3d(1.0, 1.0, 1.0)
        }

        PerspectiveCamera {
            id: camera
            position.z: 330.0
            position.y: 0.75
            eulerRotation.x: -12
            clipNear: 0.0
            clipFar: 1600.0
        }

        MouseArea {
            id: mouseArea
            acceptedButtons: Qt.LeftButton | Qt.RightButton
            anchors.fill: parent

            onPressed: function (mouse) {
                var result = view.pick(mouse.x, mouse.y);
                if (result.objectHit) {
                    selectedItem = result.objectHit;
                    mousePressed = true;
                } else {
                    mousePressed = false;
                }
            }

            onMouseXChanged: function (mouse) {
                if (mousePressed) {
                    var viewCoords = view.mapFromGlobal(mouseArea.mapToGlobal(mouse.x, mouse.y));
                    var sceneCoords = Qt.vector3d(viewCoords.x, viewCoords.y, 0);
                    var worldCoords = view.mapTo3DScene(sceneCoords);
                    worldCoords.z = selectedItem.z
                    selectedItem.position = multiply_vectors(worldCoords, Qt.vector3d(Math.abs(camera.z - selectedItem.z), Math.abs(camera.z - selectedItem.z), 1.0))
                }
            }

            onReleased: function (mouse) {
                mousePressed = false
            }
        }

        Component.onCompleted: {
            camera.lookAt(rootEntity)
        }
    }
}
Overview
The use case: whenever the mouse is pressed while pointing at the cube, moving the mouse should drag the cube along with it to the corresponding point in the 3D scene.
This works well when looking at the object from a point along the same z-axis. However, when looking at the object from a point along, say, the x-axis, the model moves along the x-axis instead of following the mouse position.
Question
How can I modify the logic in onMouseXChanged: function(mouse) { to correctly transform the coordinates (or apply an equivalent transform) so that the model consistently follows the mouse position, regardless of the camera's position relative to the Model?
If I understood you correctly, you need to move the object with the mouse, parallel to the camera plane, regardless of the camera position and model scaling? I admit that I don't have a complete solution, but what follows is still better than the original code. First of all, do not set clipNear to 0: it makes the frustum degenerate and breaks the projection math.
Secondly, I would expect the code that sets the object position to look like this:
selectedItem.position = view.mapTo3DScene(
    Qt.vector3d(mouse.x, mouse.y,
                view.mapFrom3DScene(selectedItem.position).z))
The docs say that mapFrom3DScene/mapTo3DScene interpret the z coordinate as the distance from the camera's near clip plane to the mapped position. However, when I move the object towards the sides of the window it gets larger, whereas it should get smaller.
Here's the complete code with a few corrections of mine:
import QtQuick
import QtQml
import QtQuick3D
import QtQuick3D.Helpers

Window {
    width: 800
    height: 600
    visible: true

    property var selectedItem
    property bool mousePressed: false

    View3D {
        id: view
        renderMode: View3D.Inline
        camera: camera
        anchors.fill: parent
        width: 800
        height: 600
        x: 0
        y: 0

        environment: SceneEnvironment {
            clearColor: "black"
            backgroundMode: SceneEnvironment.Color
            depthTestEnabled: false
            depthPrePassEnabled: true
        }

        Model {
            id: rootEntity
            pickable: true
            source: "#Cube"
            materials: PrincipledMaterial {
                baseColor: "red"
                roughness: 0.1
            }
            position: Qt.vector3d(25.0, 15.0, -60.0)
            scale: Qt.vector3d(2.0, 1.0, 0.5)
        }

        PerspectiveCamera {
            id: camera
            position.z: 330.0
            position.y: 100
            position.x: 700
            eulerRotation.x: -12
            // Note 1: clipNear shouldn't be 0, otherwise
            // it would break the math inside the projection matrix
            clipNear: 1.0
            clipFar: 1600.0
        }

        MouseArea {
            id: mouseArea
            acceptedButtons: Qt.LeftButton | Qt.RightButton
            anchors.fill: parent

            onPressed: function (mouse) {
                var result = view.pick(mouse.x, mouse.y);
                if (result.objectHit) {
                    selectedItem = result.objectHit;
                    mousePressed = true;
                } else {
                    mousePressed = false;
                }
            }

            onPositionChanged: function (mouse) {
                if (mousePressed) {
                    // Note 2: recalculate the position; since MouseArea has
                    // the same geometry as View3D we can use the coords directly
                    selectedItem.position = view.mapTo3DScene(
                        Qt.vector3d(mouse.x, mouse.y,
                                    view.mapFrom3DScene(selectedItem.position).z))
                }
            }

            onReleased: function (mouse) {
                mousePressed = false
            }
        }

        Component.onCompleted: {
            camera.lookAt(rootEntity)
        }
    }
}
After spending a while experimenting with different approaches, I found that mapping mouse coordinates into 3D space is not fully supported by the Qt API for the case where the mouse is not positioned over an active (pickable) object.
So instead, the workaround I came up with was to cast a new ray each time the mouse moves, store the hit offset when the mouse is first pressed, and then translate the item based on the result of the raycast, lining things up by translating along the normalized difference vector scaled by a small scalar.
onMouseXChanged: function (mouse) {
    if (mousePressed) {
        if (selectedItem != null) {
            var result = view.pick(mouse.x, mouse.y)
            if (result.objectHit) {
                if (result.objectHit == selectedItem) {
                    var mouseGlobalPos = mouseArea.mapToGlobal(mouse.x, mouse.y)
                    var mouseViewPos = view.mapFromGlobal(mouseGlobalPos)
                    var mouseScenePos = result.scenePosition
                    var resultPos = result.position
                    /* Here we subtract the starting offset (stored when the mouse was
                     * originally pressed) from the result of the new raycast, then
                     * normalize the difference and multiply it by a scalar (3) to get
                     * the amount by which the Model under the mouse should be
                     * translated. */
                    var differencePos = resultPos.minus(
                        startMousePressSelectedItemLocalDragOffset).normalized().times(3)
                    selectedItem.position = selectedItem.position.plus(differencePos)
                }
            }
        }
    }
}
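The snippet above references startMousePressSelectedItemLocalDragOffset without showing where it is set. Below is a minimal sketch of the corresponding onPressed handler, assuming the offset is simply the local hit position returned by the pick at press time; this initialization is an assumption of mine and was not shown in the original post:
property vector3d startMousePressSelectedItemLocalDragOffset

onPressed: function (mouse) {
    var result = view.pick(mouse.x, mouse.y)
    if (result.objectHit) {
        selectedItem = result.objectHit
        // Assumed: remember the local-space hit position at the moment of the press,
        // so later moves can be expressed relative to it.
        startMousePressSelectedItemLocalDragOffset = result.position
        mousePressed = true
    } else {
        mousePressed = false
    }
}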

QML Loading View during function runtime

I am attempting to create a QML button object that displays a loading screen for the duration of a function's runtime. I plan to use this loading screen when I need to parse through a larger dataset or run a slower function. Currently, this is what I have come up with.
//LoadingButton.qml
import QtQuick 2.4
import QtQuick.Controls 1.2

Item {
    id: impl

    // Empty dummy function, meant to be overwritten in the implementation.
    function callbackFunction() { console.log("This is a dummy function and needs to be overwritten in the implementation") }

    property alias style: button.style

    Button {
        id: button
        anchors.fill: parent
        onClicked: {
            loadingScreen.visible = true;
            console.log("Loading should be visible")
            impl.callbackFunction();
            loadingScreen.visible = false;
            console.log("Loading should be hidden")
        }
    }

    Rectangle {
        id: loadingScreen
        width: 500
        height: 500
        x: 0
        y: 0
        z: 60
        color: "red"
        visible: false
    }
}
This example runs callbackFunction correctly once it is overwritten in the parent object, but the visibility of the Rectangle does not change until the slower function has completed. Also, the application freezes until it finishes.
Is there any way to force the Rectangle to show/hide in the middle of a JavaScript function's execution?
The best solution is of course to move your slow function to a background thread. That way the GUI stays responsive (a WorkerScript sketch is shown after the Timer example below).
If you want to keep callbackFunction in the same thread as the GUI, you can use a Timer that delays the start of the slow function until the loading screen is shown. Please note that the GUI will still be blocked during the execution of the slow function.
import QtQuick 2.4
import QtQuick.Controls 1.2

Item {
    id: impl

    function callbackFunction() {
        console.log("This is a dummy function and needs to be overwritten in the implementation")
        var cnt = 0
        var largeNumber = 1
        while (cnt < 99999999) {
            largeNumber += largeNumber / 3
            cnt++
        }
        // Put this at the end of your slow function:
        loadingScreen.visible = false;
        console.log("Loading should be hidden")
    }

    property alias style: button.style

    Button {
        id: button
        anchors.fill: parent
        onClicked: {
            loadingScreen.visible = true;
            console.log("Loading should be visible")
            timer.start()
        }
    }

    Timer {
        id: timer
        interval: 500
        repeat: false
        onTriggered: impl.callbackFunction()
    }

    Rectangle {
        id: loadingScreen
        width: 500
        height: 500
        x: 0
        y: 0
        z: 60
        color: "red"
        visible: false

        BusyIndicator {
            anchors.centerIn: parent
            running: loadingScreen.visible
        }
    }
}
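For the first suggestion (moving the slow work off the GUI thread), a minimal WorkerScript sketch could look like the following. The file name heavy_work.js, the component name, and the message format are made up for illustration; the worker thread cannot touch GUI items, so it only reports back when the computation is done:
//LoadingButtonThreaded.qml (hypothetical name)
import QtQuick 2.4
import QtQuick.Controls 1.2

Item {
    id: impl

    Button {
        id: button
        anchors.fill: parent
        onClicked: {
            loadingScreen.visible = true
            worker.sendMessage({ count: 99999999 })  // hand the work to the worker thread
        }
    }

    WorkerScript {
        id: worker
        source: "heavy_work.js"          // assumed file name, see below
        onMessage: function (msg) {      // delivered back on the GUI thread
            console.log("result:", msg.result)
            loadingScreen.visible = false
        }
    }

    Rectangle {
        id: loadingScreen
        width: 500
        height: 500
        z: 60
        color: "red"
        visible: false
    }
}
And the worker script itself:
// heavy_work.js - runs in a separate thread, with no access to QML items
WorkerScript.onMessage = function (message) {
    var largeNumber = 1
    for (var cnt = 0; cnt < message.count; cnt++)
        largeNumber += largeNumber / 3
    WorkerScript.sendMessage({ result: largeNumber })
}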

How do I permanently delete objects from a QML canvas?

Consider the QML code below, which allows me to insert points onto a blank QML canvas with mouse clicks, and then clear all the input points and the corresponding drawings on the canvas using a button placed in the upper-left corner.
import QtQuick 2.5
import QtQuick.Window 2.2
import QtQuick.Controls 1.4

Window {
    id: root
    width: 640
    height: 480
    visible: true

    Canvas {
        id: mycanvas
        width: 1000
        height: 1000
        property var arrpoints: []

        onPaint: {
            var context = getContext("2d");
            // Delete everything drawn before?
            context.clearRect(0, 0, mycanvas.width, mycanvas.height);
            // Render all the points as small black circles
            context.strokeStyle = Qt.rgba(0, 1, 1, 0)
            // Draw all the points
            for (var i = 0; i < arrpoints.length; i++) {
                var point = arrpoints[i]
                context.ellipse(point["x"], point["y"], 10, 10)
                context.fill()
                context.stroke()
            }
        }

        // For mousing in points.
        MouseArea {
            id: mymouse
            anchors.fill: parent
            onClicked: {
                // Record mouse-position into all the input objects
                mycanvas.arrpoints.push({"x": mouseX, "y": mouseY})
                mycanvas.requestPaint()
                console.log(mycanvas.arrpoints)
            } // onClicked
        } // MouseArea
    } // Canvas

    Button {
        text: "clear input"
        onClicked: {
            mycanvas.arrpoints.length = 0
            mycanvas.requestPaint()
            console.log(mycanvas.arrpoints)
        }
    }
} // Window
This code behaves quite strangely. Suppose I input a few points onto the canvas and then click the "clear input" button. Then, as expected, all the drawings (i.e. the little circles corresponding to the points) vanish from the canvas, and the arrpoints array is set to empty.
But when I start clicking on the canvas again, the cleared drawings are redrawn alongside the new points being entered! Why should this be? After printing to the console, I can still see arrpoints = [], so the problem must be with the clearing of the canvas in the onPaint section.
How do I tell QML to erase its canvas memory completely?
If you want to clear the Canvas you must reset the context. In this case, implement a function that does that and forces the canvas to update:
import QtQuick 2.5
import QtQuick.Window 2.2
import QtQuick.Controls 1.4

Window {
    id: root
    width: 640
    height: 480
    visible: true

    Canvas {
        id: mycanvas
        width: 1000
        height: 1000
        property var arrpoints: []

        onPaint: {
            var context = getContext("2d");
            // Delete everything drawn before?
            context.clearRect(0, 0, mycanvas.width, mycanvas.height);
            // Render all the points as small black circles
            context.strokeStyle = Qt.rgba(0, 1, 1, 0)
            // Draw all the points
            for (var i = 0; i < arrpoints.length; i++) {
                var point = arrpoints[i]
                context.ellipse(point["x"], point["y"], 10, 10)
                context.fill()
                context.stroke()
            }
        }

        function clear() {
            var ctx = getContext("2d");
            ctx.reset();
            mycanvas.requestPaint();
        }

        // For mousing in points.
        MouseArea {
            id: mymouse
            anchors.fill: parent
            onClicked: {
                // Record mouse-position into all the input objects
                mycanvas.arrpoints.push({"x": mouseX, "y": mouseY})
                mycanvas.requestPaint()
                console.log(mycanvas.arrpoints)
            } // onClicked
        } // MouseArea
    } // Canvas

    Button {
        text: "clear input"
        onClicked: {
            mycanvas.arrpoints.length = 0
            mycanvas.clear()
            console.log(mycanvas.arrpoints)
        }
    }
} // Window
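An alternative explanation and fix (a sketch of mine, not part of the original answer): the old points reappear because context.ellipse() keeps appending to the same path across paint calls and the path is never cleared, so the next fill()/stroke() re-renders the accumulated path even after clearRect(). Calling beginPath() at the start of onPaint also prevents this, without needing reset(); replacing the question's onPaint handler with something like the following should behave the same way:
onPaint: {
    var context = getContext("2d");
    context.clearRect(0, 0, mycanvas.width, mycanvas.height);
    context.beginPath();   // start a fresh path, so previously added ellipses are dropped
    context.strokeStyle = Qt.rgba(0, 1, 1, 0)
    for (var i = 0; i < arrpoints.length; i++) {
        var point = arrpoints[i]
        context.ellipse(point["x"], point["y"], 10, 10)
        context.fill()
        context.stroke()
    }
}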

Updating a QML canvas

I am a QML / JavaScript noob and would like some help with the following.
I want to insert some points (represented as small black circles) onto a white QML Canvas element and then run an algorithm on them (such as finding their convex hull via an external geometry library).
Here is my QML code.
import QtQuick 2.5
import QtQuick.Window 2.2

Window {
    id: root
    width: 640
    height: 480
    visible: true

    Canvas {
        width: 1000
        height: 1000

        onPaint: {
            var context = getContext("2d");
        }

        MouseArea {
            id: mymouse
            anchors.fill: parent
            property var arrpoints: []
            onClicked: {
                // Record mouse-position
                arrpoints = arrpoints.concat([mouseX, mouseY])
                console.log(arrpoints)
            }
        }
    }
}
So far the above code opens a window with a QML canvas on it, keeps track of the positions on the canvas (via the array arrpoints) where I single-clicked with my mouse, and outputs the array of clicked points to the console.
But now, every time arrpoints changes, how do I 'tell' QML to draw a small black circle at that point immediately?
I would have thought the onPaint part of QML would trigger the rendering of the new state immediately, but it seems that part only handles the initial drawing on the canvas, before the user starts interacting with it.
You have to call the canvas's requestPaint() function to force the painting. It is also advisable to store each position as an object, e.g. {"x": x_value, "y": y_value}:
import QtQuick 2.5
import QtQuick.Window 2.2

Window {
    id: root
    width: 640
    height: 480
    visible: true

    Canvas {
        id: canvas
        width: 1000
        height: 1000

        onPaint: {
            var context = getContext("2d")
            context.strokeStyle = Qt.rgba(0, 0, 0, 1)
            context.lineWidth = 1
            for (var i = 0; i < mymouse.arrpoints.length; i++) {
                var point = mymouse.arrpoints[i]
                context.ellipse(point["x"] - 5, point["y"] - 5, 10, 10)
            }
            context.stroke()
        }

        MouseArea {
            id: mymouse
            anchors.fill: parent
            property var arrpoints: []
            onClicked: {
                arrpoints.push({"x": mouseX, "y": mouseY})
                canvas.requestPaint()
            }
        }
    }
}

QML: Bind loop detected without double assignment

As far as I know, a binding loop happens when two properties are assigned to each other. Example:
CheckBox {
    checked: Settings.someSetting
    onCheckedChanged: {
        Settings.someSetting = checked;
    }
}
But in my scenario I can't see such a "double assignment". Here is the full code:
import QtQuick 2.7
import QtQuick.Window 2.3

Window {
    visible: true;
    width: 500
    height: 500

    Rectangle {
        id: main
        anchors.fill: parent
        color: "black"
        property bool spinning: true
        property bool stopping: false

        Rectangle {
            x: 0.5 * parent.width
            y: 0.5 * parent.height
            width: 10
            height: 200
            radius: 5
            color: 'red'
            transformOrigin: Item.Top
            rotation: {
                if (main.stopping)
                {
                    main.spinning = false;
                    main.stopping = false;
                }
                return timer.angle
            }
        }

        Timer {
            id: timer
            interval: 5
            repeat: true
            running: true
            onTriggered: {
                if (main.spinning) angle += 1;
            }
            property real angle
        }

        MouseArea {
            id: control
            anchors.fill: parent
            onClicked: {
                main.stopping = true;
            }
        }
    }
}
When you click with the mouse you will get the warning:
qrc:/main.qml:17:9: QML Rectangle: Binding loop detected for property "rotation"
I don't see my mistake. I'm using flags (bool variables) to control the execution of my code. I know that in this case I could just stop the timer directly, but the actual program is more complex than this example.
The binding is in the following lines:
rotation: {
    if (main.stopping)
    {
        main.spinning = false;
        main.stopping = false;
    }
    return timer.angle
}
The re-evaluation of rotation is triggered by the change of main.stopping: say main.stopping is set by the MouseArea; that causes the rotation binding to be re-evaluated, but inside that binding there is an if block that writes main.stopping again, which in turn triggers yet another re-evaluation of rotation.
If a property in QML changes, everything that depends on it is re-evaluated.
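One way to break the loop (a sketch of mine, not part of the original answer) is to keep the rotation binding free of side effects and react to the flag in an onStoppingChanged handler instead; the Rectangle below is a drop-in replacement for the "main" Rectangle in the example:
Rectangle {
    id: main
    anchors.fill: parent
    color: "black"
    property bool spinning: true
    property bool stopping: false

    // Handle the flag change here, so the rotation binding below stays pure.
    onStoppingChanged: {
        if (stopping) {
            spinning = false;
            stopping = false;
        }
    }

    Rectangle {
        x: 0.5 * parent.width
        y: 0.5 * parent.height
        width: 10
        height: 200
        radius: 5
        color: 'red'
        transformOrigin: Item.Top
        rotation: timer.angle   // no property writes inside the binding
    }

    Timer {
        id: timer
        interval: 5
        repeat: true
        running: true
        property real angle
        onTriggered: if (main.spinning) angle += 1;
    }

    MouseArea {
        id: control
        anchors.fill: parent
        onClicked: main.stopping = true;
    }
}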
