I am trying to procedurally generate a cube with a GeometryRenderer. Why does everything work when GeometryRenderer.primitiveType is set to LineLoop, Lines, or TriangleFan, but with Triangles or TriangleStrip absolutely nothing is drawn?
PhongMaterial { id: material }

GeometryRenderer {
    id: renderer
    primitiveType: GeometryRenderer.LineLoop
    instanceCount: 1
    geometry: Geometry {
        attributes: [
            Attribute {
                name: defaultPositionAttributeName
                attributeType: Attribute.VertexAttribute
                vertexBaseType: Attribute.Float
                vertexSize: 3
                byteOffset: 0
                byteStride: 3 * 4
                count: 6
                buffer: Buffer {
                    type: Buffer.VertexBuffer
                    data: new Float32Array([
                        // #1 triangle
                        -0.9, -0.5, 0.0,
                        -0.0, -0.5, 0.0,
                        -0.45, 0.5, 0.0,
                        // #2 triangle
                        0.0, -0.5, 0.0,
                        0.9, -0.5, 0.0,
                        0.45, 0.5, 0.0,
                    ])
                }
            }
        ]
    }
}

Entity {
    components: [
        renderer,
        material
    ]
}
If I adapt your QML in the following way, then I can see two triangles:
import QtQuick 2.2
import Qt3D.Core 2.0
import Qt3D.Render 2.0
import Qt3D.Extras 2.0

Entity {
    id: root

    PhongMaterial { id: material; diffuse: Qt.rgba(0.8, 0.0, 0.0, 1.0) }

    GeometryRenderer {
        id: renderer
        primitiveType: GeometryRenderer.Triangles
        instanceCount: 1
        geometry: Geometry {
            Attribute {
                name: defaultPositionAttributeName
                attributeType: Attribute.VertexAttribute
                vertexBaseType: Attribute.Float
                vertexSize: 3
                byteOffset: 0
                byteStride: 3 * 4
                count: 6
                buffer: Buffer {
                    type: Buffer.VertexBuffer
                    data: new Float32Array([
                        // #1 triangle
                        -0.9, -0.5, 0.0,
                        -0.0, -0.5, 0.0,
                        -0.45, 0.5, 0.0,
                        // #2 triangle
                        0.0, -0.5, 0.0,
                        0.9, -0.5, 0.0,
                        0.45, 0.5, 0.0,
                    ])
                }
            }
        }
    }

    components: [ renderer, material ]
}
I suppose the error is related to a wrong hierarchy of GeometryRenderer and Entity.
To get correct lighting, you also need to give the GeometryRenderer the vertex normals.
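A minimal sketch of such a normal attribute (untested; since both triangles lie in the z = 0 plane and are wound counterclockwise, every normal can simply point along +z):

Attribute {
    name: defaultNormalAttributeName
    attributeType: Attribute.VertexAttribute
    vertexBaseType: Attribute.Float
    vertexSize: 3
    byteOffset: 0
    byteStride: 3 * 4
    count: 6
    buffer: Buffer {
        type: Buffer.VertexBuffer
        // one +z normal per vertex, matching the six positions above
        data: new Float32Array([
            0, 0, 1,  0, 0, 1,  0, 0, 1,
            0, 0, 1,  0, 0, 1,  0, 0, 1,
        ])
    }
}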
Edit:
As it is a common question whether a GeometryRenderer always needs an indexBuffer for primitiveType: Triangles, TriangleStrip, and TriangleFan, I'll give a somewhat more detailed answer.
A GeometryRenderer of primitiveType: Triangles does not necessarily need an index array (I checked the source code of Qt3D as I was unsure).
The reason you are not seeing your triangles is: You are defining the vertices in the wrong order! Change the order of the vertices in the vertexBuffer so that every three consecutive vertices form a triangle when going counterclockwise around the triangle. The triangle normal will then point at you.
Or have a look at your Entity from the opposite direction: you'll see the two triangles.
When you use huge buffers and don't want to repeat large amounts of vertices for memory/efficiency reasons, you should definitely consider using an indexBuffer, as sketched below.
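A rough sketch (untested) of the index attribute wiring in QML; with indices, each shared vertex is stored only once in the vertex buffer and referenced by number. The six-vertex example above shares nothing, so the indices here merely illustrate the plumbing:

Attribute {
    attributeType: Attribute.IndexAttribute
    vertexBaseType: Attribute.UnsignedShort
    count: 6   // number of indices to draw, not unique vertices
    buffer: Buffer {
        type: Buffer.IndexBuffer
        data: new Uint16Array([
            0, 1, 2,   // #1 triangle
            3, 4, 5    // #2 triangle
        ])
    }
}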
I'm following a tutorial by George Francis. In the tutorial, after some initial examples, he shows how to use image data to create random layouts.
I'm trying to work out how to get the image data from a canvas created using paper.js, as I need the RGB values of each individual pixel on the canvas.
Link to codepen
Unknowns:
Do I need to use the rasterize() method on the shape I've created?
Currently I am attempting the following:
// create a white rectangle the size of the view (not sure I need this, but
// doing it so that there are both white and black pixels)
const bg = new paper.Path.Rectangle({
    position: [0, 0],
    size: view.viewSize.multiply(2),
    fillColor: 'white'
})

// create a black rectangle smaller than the view size
const shape = new paper.Path.RegularPolygon({
    radius: view.viewSize.width * 0.4,
    fillColor: 'black',
    strokeColor: 'black',
    sides: 4,
    position: view.center
})

// So far so good, shapes render as expected. Next, put the shapes in a group
const group = new paper.Group([bg, shape])

// rasterize the group (thinking it needs to be rasterized to get the pixel
// data, but again, not sure?)
group.rasterize()

// iterate over each pixel on the canvas and get the image data
for (let x = 0; x < width; x++) {
    for (let y = 0; y < height; y++) {
        const { data } = view.context.getImageData(x, y, 1, 1)
        console.log(data)
    }
}
Expecting: to get an array of buffers where, if the pixel is white, it would give me
Uint8ClampedArray(4) [255, 255, 255, 255, buffer: ArrayBuffer(4), byteLength: 4, byteOffset: 0, length: 4]
    0: 255
    1: 255
    2: 255
    // (not sure if the fourth index represents the 'a' in RGBA?)
    3: 255
    buffer: ArrayBuffer(4)
    byteLength: 4
    byteOffset: 0
    length: 4
    Symbol(Symbol.toStringTag): (...)
    [[Prototype]]: TypedArray
and if the pixel is black I should get
Uint8ClampedArray(4) [0, 0, 0, 0, buffer: ArrayBuffer(4), byteLength: 4, byteOffset: 0, length: 4]
    0: 0
    1: 0
    2: 0
    3: 0
    buffer: ArrayBuffer(4)
    byteLength: 4
    byteOffset: 0
    length: 4
    Symbol(Symbol.toStringTag): (...)
    [[Prototype]]: TypedArray
i.e. either 255, 255, 255 (white) or 0, 0, 0 (black).
Instead, all the values are 0, 0, 0. Why?
I think your issue is that, at the time you are getting the image data, your scene is not yet drawn onto the canvas.
In order to make sure it's drawn, you just need to call view.update().
Here's a simple sketch demonstrating how it could be used.
Note that you don't need to rasterize your scene if you are using the Canvas API directly to manipulate the image data. But you could also rasterize it and take advantage of Paper.js helper methods like raster.getPixel().
// Draw a white background (you effectively need it, otherwise your default
// pixels will be black).
new Path.Rectangle({
    rectangle: view.bounds,
    fillColor: 'white'
});

// Draw a black rectangle covering most of the canvas.
new Path.Rectangle({
    rectangle: view.bounds.scale(0.9),
    fillColor: 'black'
});

// Make sure that the scene is drawn into the canvas.
view.update();

// Get the canvas image data.
const { width, height } = view.element;
const imageData = view.context.getImageData(0, 0, width, height);

// Loop over each pixel and store all the distinct colors to check that this works.
const colors = new Set();
const length = imageData.data.length;
for (let i = 0; i < length; i += 4) {
    const [r, g, b, a] = imageData.data.slice(i, i + 4);
    const color = JSON.stringify({ r, g, b, a });
    colors.add(color);
}
console.log('colors', [...colors]);
console.log('colors', [...colors]);
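If you prefer the rasterization route mentioned above, here is a minimal sketch (assuming the same scene has already been drawn; note that paper.Color components are in the 0..1 range, not 0..255):

// Rasterize the active layer and read pixels through Paper.js helpers
// instead of the raw Canvas API.
const raster = project.activeLayer.rasterize();
raster.visible = false; // hide the raster itself; we only read from it
const pixel = raster.getPixel(10, 10); // returns a paper.Color
console.log(pixel.red, pixel.green, pixel.blue, pixel.alpha);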
I have a simple 3D scene and a box that I want to rotate around its origin point:
Entity {
    components: [ mesh, phongMaterial, transform ]

    CuboidMesh {
        id: mesh
        yzMeshResolution: Qt.size(2, 2)
        xzMeshResolution: Qt.size(2, 2)
        xyMeshResolution: Qt.size(2, 2)
        zExtent: 1
        xExtent: 1
        yExtent: 2
    }

    PhongAlphaMaterial {
        id: phongMaterial
        property color randColor: Qt.rgba(Math.random(), Math.random(), Math.random(), 1)
        ambient: randColor
        diffuse: randColor
        specular: randColor
        shininess: 1.0
        alpha: 0.4
    }

    Transform {
        id: transform
        property real userAngle: 0.0
        scale: 1
        rotation: fromAxisAndAngle(Qt.vector3d(0, 0, 1), userAngle)
        translation: Qt.vector3d(0.0, 0.0, 0.0)
    }

    QQ2.NumberAnimation {
        target: transform
        property: "userAngle"
        duration: 2000
        loops: QQ2.Animation.Infinite
        running: true
        easing.type: QQ2.Easing.InOutQuad
        from: 0
        to: 360
    }
}
By default the origin point is in the center of the box. I need to move the origin point down, as shown in the image:
But I have no clue how to do that. I've tried playing with Transform.translation, but that just moves the shape along the axis. I've tried playing with Transform.rotateAround(point, real angle, vector3d axis), but I see no changes: I changed the point value, but the origin point remains in the center of the shape.
Transform {
    id: transform
    property real userAngle: 0.0
    matrix: rotateAround(Qt.point(1, 1), userAngle, Qt.vector3d(0.0, 0.0, 1.0))
}
OK, I've found a solution. The first parameter of rotateAround is a 3D vector, not a point. So using
matrix: rotateAround(Qt.vector3d(0.0, 1.0, 0.0), userAngle, Qt.vector3d( 0.0, 0.0, 1.0 ))
does the trick.
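For reference, a sketch of the full corrected Transform: the pivot is given in the mesh's local coordinates, so for the box above (yExtent: 2, centered on the origin), Qt.vector3d(0, -1, 0) would place the rotation origin at the bottom of the box:

Transform {
    id: transform
    property real userAngle: 0.0
    // pivot (first argument) is the local point to rotate around
    matrix: rotateAround(Qt.vector3d(0.0, -1.0, 0.0), userAngle, Qt.vector3d(0.0, 0.0, 1.0))
}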
For reasons that are more complex than this minimal testcase, I need to have an entity (childEntity, the magenta box) as a child of another entity (parentEntity, the cyan box), but childEntity should be independent of parentEntity's transform.
Therefore I add this handler:
QtQuick.Connections {
    target: parentTransform
    onMatrixChanged: {
        // cancel parent's transform
        var m = parentTransform.matrix
        var i = m.inverted()
        childTransform.matrix = i
        // debug:
        console.log(parentTransform.matrix.times(i))
    }
}
which works well for cancelling out the parent's translation and rotation, but not its scale.
When the parent's scale3D is not [1, 1, 1] and a rotation is also set, childEntity appears distorted, even though the product of parentTransform.matrix times childTransform.matrix is the 4x4 identity. Why?
Minimal testcase: (load into a QQuickView)
import QtQml 2.12 as QtQml
import QtQuick 2.12 as QtQuick
import QtQuick.Controls 2.12 as QtQuickControls
import QtQuick.Scene3D 2.0
import Qt3D.Core 2.0
import Qt3D.Render 2.0
import Qt3D.Input 2.0
import Qt3D.Extras 2.0

Scene3D {
    id: scene3d
    anchors.fill: parent
    aspects: ["render", "logic", "input"]

    function change_translation_and_rotation() {
        parentTransform.translation.x = 0.1
        parentTransform.translation.y = 0.5
        parentTransform.translation.z = 2
        parentTransform.rotationX = 30
        parentTransform.rotationY = 60
        parentTransform.rotationZ = 10
    }

    function change_rotation_and_scale() {
        parentTransform.rotationX = 30
        parentTransform.rotationY = 60
        parentTransform.rotationZ = 10
        parentTransform.scale3D.x = 0.1
        parentTransform.scale3D.y = 0.5
        parentTransform.scale3D.z = 2
    }

    function reset_transform() {
        parentTransform.translation.x = -0.5
        parentTransform.translation.y = 0
        parentTransform.translation.z = 0.5
        parentTransform.rotationX = 0
        parentTransform.rotationY = 0
        parentTransform.rotationZ = 0
        parentTransform.scale3D.x = 1
        parentTransform.scale3D.y = 1
        parentTransform.scale3D.z = 1
    }

    data: [
        QtQml.Connections {
            target: parentTransform
            onMatrixChanged: {
                // cancel parent's transform
                var m = parentTransform.matrix
                var i = m.inverted()
                childTransform.matrix = i
                // debug:
                console.log(parentTransform.matrix.times(i))
            }
        },
        QtQuick.Column {
            spacing: 5
            QtQuick.Repeater {
                id: buttons
                model: ["change_translation_and_rotation", "change_rotation_and_scale", "reset_transform"]
                delegate: QtQuickControls.Button {
                    text: modelData.replace(/_/g, ' ')
                    font.bold: focus
                    onClicked: { focus = true; scene3d[modelData]() }
                }
            }
        }
    ]

    Entity {
        id: root
        components: [
            RenderSettings { activeFrameGraph: ForwardRenderer { camera: mainCamera } },
            InputSettings {}
        ]

        Camera {
            id: mainCamera
            projectionType: CameraLens.PerspectiveProjection
            fieldOfView: 45
            aspectRatio: 16/9
            nearPlane: 0.1
            farPlane: 1000.0
            position: Qt.vector3d(-3.46902, 4.49373, -3.78577)
            upVector: Qt.vector3d(0.41477, 0.789346, 0.452641)
            viewCenter: Qt.vector3d(0.0, 0.5, 0.0)
        }

        OrbitCameraController {
            camera: mainCamera
        }

        Entity {
            id: parentEntity
            components: [
                CuboidMesh {
                    xExtent: 1
                    yExtent: 1
                    zExtent: 1
                },
                PhongMaterial {
                    ambient: "#6cc"
                },
                Transform {
                    id: parentTransform
                    translation: Qt.vector3d(-0.5, 0, 0.5)
                }
            ]

            Entity {
                id: childEntity
                components: [
                    CuboidMesh {
                        xExtent: 0.5
                        yExtent: 0.5
                        zExtent: 0.5
                    },
                    PhongMaterial {
                        ambient: "#c6c"
                    },
                    Transform {
                        id: childTransform
                        translation: Qt.vector3d(-0.5, 0, 0.5)
                    }
                ]
            }
        }
    }

    QtQuick.Component.onCompleted: reset_transform()
}
The problem is that the QTransform node does not store the transformation as a general 4x4 matrix. Rather, it decomposes the matrix into three transformations that are applied in a fixed order:
S - a diagonal scaling matrix
R - the rotation matrix
T - the translation
and then applies them to a point X in the order T * R * S * X.
The documentation for the matrix property describes this decomposition step: https://doc.qt.io/qt-5/qt3dcore-qtransform.html#matrix-prop
So when the transformation on the parent is M = T * R * S, then the inverse on the child will be M^-1 = S^-1 * R^-1 * T^-1. Setting the inverse on the child QTransform will attempt to decompose it in the same way:
M^-1 = T_i * R_i * S_i = S^-1 * R^-1 * T^-1
That doesn't work, because in particular S and R don't commute like this: for a non-uniform S, the product S^-1 * R^-1 in general contains a shear, which cannot be expressed as a rotation followed by a diagonal scale.
You can test this in your code by comparing the values of childTransform.matrix and i after you set childTransform.matrix.
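For instance, inside the onMatrixChanged handler above (a sketch):

childTransform.matrix = i
console.log(i)                      // the true inverse
console.log(childTransform.matrix)  // what QTransform re-composed from its T_i * R_i * S_i decomposition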
I think the only solution is to have three QTransforms on entities nested above the child, implementing the correct order of the inverses as S^-1 * R^-1 * T^-1.
It is pretty simple to compute the inverses S^-1, R^-1, T^-1 from the corresponding parameters of the parent QTransform.
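A rough sketch (untested) of that nesting: each Transform holds exactly one inverse factor, so QTransform's T * R * S decomposition stays trivial at every level. The quaternion conjugated() and vector3d times() methods are assumed to be available in your Qt version:

Entity { // S^-1: inverse scale only
    components: Transform {
        scale3D: Qt.vector3d(1 / parentTransform.scale3D.x,
                             1 / parentTransform.scale3D.y,
                             1 / parentTransform.scale3D.z)
    }
    Entity { // R^-1: inverse rotation only (conjugate of a unit quaternion)
        components: Transform { rotation: parentTransform.rotation.conjugated() }
        Entity { // T^-1: inverse translation only
            components: Transform { translation: parentTransform.translation.times(-1) }
            Entity {
                id: childEntity
                components: [ /* child mesh, material, ... */ ]
            }
        }
    }
}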
How can I draw the outline of an object on top of any other object in Qt3D? For instance, to highlight a selected object in a 3D editor?
If you want to draw the outline of an entity at all times, even if the entity is behind other entities, one solution is to do it in two steps:
1. Draw everything as normal.
2. Draw only the outline of the selected object.
When drawing the outline, you need to use an outline effect, which can be implemented in two render passes:
1. Render the geometry to a texture using a simple color shader.
2. Render to screen using a shader that takes each pixel in the texture and compares the surrounding pixels. If they are equal, we are inside the object and the fragment can be discarded. If they differ, we are on the edge of the object and we should draw the color.
Here is a simple implementation of the above-mentioned shader:
#version 150

uniform sampler2D color;
uniform vec2 winSize;

out vec4 fragColor;

void main()
{
    int lineWidth = 5;
    vec2 texCoord = gl_FragCoord.xy / winSize;
    vec2 texCoordUp = (gl_FragCoord.xy + vec2(0, lineWidth)) / winSize;
    vec2 texCoordDown = (gl_FragCoord.xy + vec2(0, -lineWidth)) / winSize;
    vec2 texCoordRight = (gl_FragCoord.xy + vec2(lineWidth, 0)) / winSize;
    vec2 texCoordLeft = (gl_FragCoord.xy + vec2(-lineWidth, 0)) / winSize;
    vec4 col = texture(color, texCoord);
    vec4 colUp = texture(color, texCoordUp);
    vec4 colDown = texture(color, texCoordDown);
    vec4 colRight = texture(color, texCoordRight);
    vec4 colLeft = texture(color, texCoordLeft);
    if ((colUp == colDown && colRight == colLeft) || col.a == 0.0)
        discard;
    fragColor = col;
}
Note: It might be a better idea to compare the difference between the values against a small tolerance instead of testing for exact equality.
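For example (a sketch; eps is an assumed tolerance, not part of the original shader):

float eps = 0.01;
if ((distance(colUp, colDown) < eps && distance(colRight, colLeft) < eps) || col.a == 0.0)
    discard;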
With this method, you don't have to worry about depth testing and the order in which the objects are drawn: The second time you draw, you will always draw on top of everything else.
You could do this by adding a single effect with two techniques with different filter keys. Alternatively, if you want to use materials from Qt3D.Extras, you can add another entity with the same transform and mesh and a material that uses the outline technique.
Here is an example that draws the outline on top of everything else using two render passes:
import QtQuick 2.2 as QQ2
import Qt3D.Core 2.0
import Qt3D.Render 2.0
import Qt3D.Input 2.0
import Qt3D.Extras 2.0

Entity {
    Camera {
        id: camera
        projectionType: CameraLens.PerspectiveProjection
        fieldOfView: 45
        aspectRatio: 16/9
        nearPlane: 0.1
        farPlane: 1000.0
        position: Qt.vector3d(0.0, 0.0, -40.0)
        upVector: Qt.vector3d(0.0, 1.0, 0.0)
        viewCenter: Qt.vector3d(0.0, 0.0, 0.0)
    }

    OrbitCameraController {
        camera: camera
    }

    components: [
        RenderSettings {
            activeFrameGraph: RenderSurfaceSelector {
                id: surfaceSelector
                Viewport {
                    CameraSelector {
                        camera: camera
                        FrustumCulling {
                            // branch 1: draw everything as normal
                            TechniqueFilter {
                                matchAll: [
                                    FilterKey { name: "renderingStyle"; value: "forward" }
                                ]
                                ClearBuffers {
                                    clearColor: Qt.rgba(0.1, 0.2, 0.3)
                                    buffers: ClearBuffers.ColorDepthStencilBuffer
                                }
                            }
                            // branch 2: draw the outline in two passes
                            TechniqueFilter {
                                matchAll: [
                                    FilterKey { name: "renderingStyle"; value: "outline" }
                                ]
                                // pass 1: render the selected geometry into a texture
                                RenderPassFilter {
                                    matchAny: [
                                        FilterKey { name: "pass"; value: "geometry" }
                                    ]
                                    ClearBuffers {
                                        buffers: ClearBuffers.ColorDepthStencilBuffer
                                        RenderTargetSelector {
                                            target: RenderTarget {
                                                attachments: [
                                                    RenderTargetOutput {
                                                        objectName: "color"
                                                        attachmentPoint: RenderTargetOutput.Color0
                                                        texture: Texture2D {
                                                            id: colorAttachment
                                                            width: surfaceSelector.surface.width
                                                            height: surfaceSelector.surface.height
                                                            format: Texture.RGBA32F
                                                        }
                                                    }
                                                ]
                                            }
                                        }
                                    }
                                }
                                // pass 2: detect the edges in that texture and draw them on screen
                                RenderPassFilter {
                                    parameters: [
                                        Parameter { name: "color"; value: colorAttachment },
                                        Parameter { name: "winSize"; value: Qt.size(surfaceSelector.surface.width, surfaceSelector.surface.height) }
                                    ]
                                    matchAny: [
                                        FilterKey { name: "pass"; value: "outline" }
                                    ]
                                }
                            }
                        }
                    }
                }
            }
        },
        InputSettings { }
    ]

    PhongMaterial {
        id: material
    }

    Material {
        id: outlineMaterial
        effect: Effect {
            techniques: [
                Technique {
                    graphicsApiFilter {
                        api: GraphicsApiFilter.OpenGL
                        majorVersion: 3
                        minorVersion: 1
                        profile: GraphicsApiFilter.CoreProfile
                    }
                    filterKeys: [
                        FilterKey { name: "renderingStyle"; value: "outline" }
                    ]
                    renderPasses: [
                        RenderPass {
                            filterKeys: [
                                FilterKey { name: "pass"; value: "geometry" }
                            ]
                            shaderProgram: ShaderProgram {
                                vertexShaderCode: "
                                    #version 150 core

                                    in vec3 vertexPosition;
                                    uniform mat4 modelViewProjection;

                                    void main()
                                    {
                                        gl_Position = modelViewProjection * vec4(vertexPosition, 1.0);
                                    }
                                "
                                fragmentShaderCode: "
                                    #version 150 core

                                    out vec4 fragColor;

                                    void main()
                                    {
                                        fragColor = vec4(1.0, 0.0, 0.0, 1.0);
                                    }
                                "
                            }
                        }
                    ]
                }
            ]
        }
    }

    SphereMesh {
        id: sphereMesh
        radius: 3
    }

    Transform {
        id: sphereTransform
    }

    Transform {
        id: sphereTransform2
        // TODO workaround because the transform cannot be shared
        matrix: sphereTransform.matrix
    }

    Entity {
        id: sphereEntity
        components: [ sphereMesh, material, sphereTransform ]
    }

    Entity {
        id: sphereOutlineEntity
        components: [ sphereMesh, outlineMaterial, sphereTransform2 ]
    }

    // full-screen quad that runs the edge-detection shader over the texture
    Entity {
        id: outlineQuad
        components: [
            PlaneMesh {
                width: 2.0
                height: 2.0
                meshResolution: Qt.size(2, 2)
            },
            Transform {
                rotation: fromAxisAndAngle(Qt.vector3d(1, 0, 0), 90)
            },
            Material {
                effect: Effect {
                    techniques: [
                        Technique {
                            filterKeys: [
                                FilterKey { name: "renderingStyle"; value: "outline" }
                            ]
                            graphicsApiFilter {
                                api: GraphicsApiFilter.OpenGL
                                profile: GraphicsApiFilter.CoreProfile
                                majorVersion: 3
                                minorVersion: 1
                            }
                            renderPasses: RenderPass {
                                filterKeys: FilterKey { name: "pass"; value: "outline" }
                                shaderProgram: ShaderProgram {
                                    vertexShaderCode: "
                                        #version 150

                                        in vec4 vertexPosition;
                                        uniform mat4 modelMatrix;

                                        void main()
                                        {
                                            gl_Position = modelMatrix * vertexPosition;
                                        }
                                    "
                                    fragmentShaderCode: "
                                        #version 150

                                        uniform sampler2D color;
                                        uniform vec2 winSize;

                                        out vec4 fragColor;

                                        void main()
                                        {
                                            int lineWidth = 5;
                                            vec2 texCoord = gl_FragCoord.xy / winSize;
                                            vec2 texCoordUp = (gl_FragCoord.xy + vec2(0, lineWidth)) / winSize;
                                            vec2 texCoordDown = (gl_FragCoord.xy + vec2(0, -lineWidth)) / winSize;
                                            vec2 texCoordRight = (gl_FragCoord.xy + vec2(lineWidth, 0)) / winSize;
                                            vec2 texCoordLeft = (gl_FragCoord.xy + vec2(-lineWidth, 0)) / winSize;
                                            vec4 col = texture(color, texCoord);
                                            vec4 colUp = texture(color, texCoordUp);
                                            vec4 colDown = texture(color, texCoordDown);
                                            vec4 colRight = texture(color, texCoordRight);
                                            vec4 colLeft = texture(color, texCoordLeft);
                                            if ((colUp == colDown && colRight == colLeft) || col.a == 0.0)
                                                discard;
                                            fragColor = col;
                                        }
                                    "
                                }
                            }
                        }
                    ]
                }
            }
        ]
    }
}
The result:
Is there any way to draw a half dashed circle in QML? I drew a half circle this way:
var Circle = getContext("2d");
Circle.save();

var CircleGradient = Circle.createLinearGradient(parent.width/4, parent.height, parent.width/4, 0);
CircleGradient.addColorStop(0, firstGradientPoint);
CircleGradient.addColorStop(1, secondGradientPoint);

Circle.clearRect(0, 0, parent.width, parent.height);
Circle.beginPath();
Circle.lineCap = "round";
Circle.lineWidth = 10;
Circle.strokeStyle = CircleGradient;
Circle.arc(parent.width/2, parent.height/2, canvas.radius - (Circle.lineWidth / 2), Math.PI/2, canvas.Value);
Circle.stroke();
Circle.restore();
Result:
But how can I make it dashed, like this (the result I need):
I know that this question is very outdated, but it might help someone. You can use Qt Quick Shapes (since Qt 5.10) to render what you want. It's not copy-paste code, but more of an approach:
Shape {
    ShapePath {
        id: shapePath
        strokeColor: "black"
        strokeStyle: ShapePath.DashLine
        dashPattern: [6, 8]
        fillColor: "transparent"

        PathArc {
            x: 0
            y: radiusX + radiusY
            radiusX: 100
            radiusY: 100
            useLargeArc: true
        }
    }
}
The PathArc documentation has pretty much everything you need. Here are some more Shape Examples.
I know QML a little bit but have never coded in it. But you can solve your problem with logic.
Here is the logic: draw the small arcs in a loop, with spaces in between. The code below is a rough sketch rather than drop-in code, but it will give you the idea.
// DECLARE YOUR ANGLES: each arc and each space spans pi/20 (10 arcs and 10 spaces over the half circle)
var segment = Math.PI / 20;
var startAngle = 0.0;
var q = 0;
while (q++ < 10) {
    Circle.beginPath();
    Circle.arc(parent.width/2, parent.height/2, canvas.radius - (Circle.lineWidth / 2),
               startAngle, startAngle + segment);
    Circle.stroke();
    // LEAVE A SPACE, THEN START THE NEXT ARC
    startAngle += 2 * segment;
}