I have a canvas running a shader (via glslCanvas), and I want to transform that canvas with a CSS transform. This works fine in general, but when I scale it up significantly, the shader freezes.
I tried to track down the cause, with no luck so far. It seems I can use scaleX as large as I want, but not scaleY or scale. I checked whether there is a specific resolution or scale at which it freezes, but found no fixed threshold; it depends on the size of the canvas.
Minimal example
Here is an example where the canvas is enlarged incrementally and breaks at scale(4). Starting the canvas at scale(4) instead of increasing incrementally just results in a white canvas.
setTimeout(() => document.getElementById('glsl').style.transform = "scale(2)", 2000);
setTimeout(() => document.getElementById('glsl').style.transform = "scale(3)", 5000);
setTimeout(() => document.getElementById('glsl').style.transform = "scale(4)", 8000);
setTimeout(() => document.getElementById('glsl').style.transform = "scale(5)", 12000);
<script src="https://rawgit.com/patriciogonzalezvivo/glslCanvas/master/dist/GlslCanvas.js"></script>
<canvas id="glsl" class="glslCanvas" data-fragment="
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;
void main()
{
// Normalized pixel coordinates (from 0 to 1)
vec2 uv = gl_FragCoord.xy/u_resolution.xy;
// Time varying pixel color
vec3 col = 0.5 + 0.5*cos(u_time+uv.xyx+vec3(0,2,4));
// Output to screen
gl_FragColor = vec4(col,1.0);
}"></canvas>
I'm not sure whether this is specific to the glslCanvas library, but I haven't found any issues about this topic in their repository on GitHub.
Investigating this led me to uncover an issue in glslCanvas.
The PR I submitted is here: http://github.com/patriciogonzalezvivo/glslCanvas/pull/47
Essentially, scaling can cause the code to falsely believe that the canvas is not visible, and so it stops rendering.
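The gist, paraphrased (this is my reconstruction for illustration, not the library's exact code): the render loop skips frames while it thinks the canvas is off-screen, and the visibility test mixes the CSS-transformed bounding rect with the untransformed canvas height:

// Paraphrased sketch of the problematic visibility test.
function isCanvasVisible(canvas) {
    const rect = canvas.getBoundingClientRect(); // affected by CSS transforms
    // canvas.height is the untransformed buffer height, so once scaling pushes
    // rect.top below -canvas.height this reports "not visible" even though the
    // scaled canvas still covers the viewport.
    return rect.top + canvas.height > 0 &&
           rect.top < (window.innerHeight || document.documentElement.clientHeight);
}

// Using the transformed rect's own bottom edge avoids the mismatch:
function isCanvasVisibleFixed(canvas) {
    const rect = canvas.getBoundingClientRect();
    return rect.bottom > 0 &&
           rect.top < (window.innerHeight || document.documentElement.clientHeight);
}

Since only the vertical extent is tested, this would also explain why scaleX never triggered the freeze while scaleY and scale did.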
I am trying to convert a mouse event to pixel coordinates within a video. By pixel coordinates, I mean coordinates relative to the original video resolution.
My video element has object-fit: contain, which means that the top left corner of the video is not necessarily located at position (0,0), as this picture shows:
If I click on the top-left corner of the white section in this video then I want to get (0,0), but in order to do this I need to discover the offset of the video content (white area) relative to the video element (black border).
How can I recover this offset?
I am already aware of width, height, videoWidth, and videoHeight, but these only let me account for the scaling, not the offset.
The offset can be deduced. I think this kind of code should do the trick:
// width/height: displayed size of the <video> element
// videoWidth/videoHeight: intrinsic size of the video
// scale is in video pixels per displayed pixel; offsetX/offsetY are in video
// pixels and are added after scaling (zero or negative for object-fit: contain)
if (videoHeight / height > videoWidth / width) {
    // video is relatively taller than the element: bars on the left and right
    scale = videoHeight / height;
    offsetX = (videoWidth - width * scale) / 2;
    offsetY = 0;
} else {
    // video is relatively wider than the element: bars on the top and bottom
    scale = videoWidth / width;
    offsetY = (videoHeight - height * scale) / 2;
    offsetX = 0;
}
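For example, a click handler using this convention (a sketch; the offsets are in video pixels and are added after scaling, as above):

// Sketch: map a click on the <video> to original-resolution pixel coordinates.
video.addEventListener('click', (e) => {
    const rect = video.getBoundingClientRect();
    const { videoWidth, videoHeight } = video;
    const { width, height } = rect;
    let scale, offsetX = 0, offsetY = 0;
    if (videoHeight / height > videoWidth / width) {
        scale = videoHeight / height;
        offsetX = (videoWidth - width * scale) / 2;
    } else {
        scale = videoWidth / width;
        offsetY = (videoHeight - height * scale) / 2;
    }
    const pixelX = (e.clientX - rect.left) * scale + offsetX;
    const pixelY = (e.clientY - rect.top) * scale + offsetY;
    console.log(pixelX, pixelY); // (0, 0) at the top-left of the video content
});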
I was also interested in getting the actual pixel positions from mouse or touch events when using object-fit, and this was the only result I found when searching. It is probably too late to be helpful to you, but I thought I'd answer in case anybody else comes across this in the future like I did.
Because I'm working on code with other people, I needed a robust solution that would work even if someone changed or removed the object-fit or object-position in the CSS.
The approach that I took was:
Implement the cover, contain etc algorithms myself, just functions doing math, not dependent on the DOM
Use getComputedStyle to get the element's objectFit and objectPosition properties
Use .getBoundingClientRect() to get the DOM pixel size of the element
Pass the element's current objectFit, objectPosition, its DOM pixel size, and its natural pixel size to my function to figure out where the fitted rectangle sat within the element
You then have enough information to transform the event point to a pixel location
There's more code than would comfortably fit here, but getting the size of the fitted rectangle for cover or contain is something like:
if (fitMode === 'cover' || fitMode === 'contain') {
  const wr = parent.width / child.width
  const hr = parent.height / child.height
  const ratio = fitMode === 'cover' ? Math.max(wr, hr) : Math.min(wr, hr)
  const width = child.width * ratio
  const height = child.height * ratio
  const size = { width, height }
  return size
}
// handle other object-fit modes here
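To complete the picture, transforming the event point with that fitted size is then something like this (a sketch for the contain/cover cases with a centered object-position; the names are mine, not from the library):

// Sketch: map a client-space event point to natural pixel coordinates,
// given the fitted size computed above and object-position: 50% 50%.
function eventToPixel(event, element, natural, fitted) {
  const rect = element.getBoundingClientRect()
  const offsetX = (rect.width - fitted.width) / 2
  const offsetY = (rect.height - fitted.height) / 2
  const x = (event.clientX - rect.left - offsetX) * natural.width / fitted.width
  const y = (event.clientY - rect.top - offsetY) * natural.height / fitted.height
  return { x, y }
}

For contain you may also want to clamp the result, since clicks on the letterbox bars fall outside the natural size.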
Hopefully this gives others a rough idea of how to solve this problem themselves. Alternatively, I have published the code at the link below; it supports all object-fit modes and includes examples showing how to get the actual pixel point that was clicked:
https://github.com/nrkn/object-fit-math
I'm trying to draw a scaled image on a canvas in JavaFX, using this code:
Image image = ...;
canvas.setWidth(scale * width);
canvas.setHeight(scale * height);
GraphicsContext gc = canvas.getGraphicsContext2D();
gc.drawImage(image, 0, 0, scale * width, scale * height);
// this gives the same result:
// gc.scale(scale, scale);
// gc.drawImage(image, 0, 0, width, height);
It works really fast but makes blurred images like this:
This is not what I'd like to see. Instead I want to get this picture:
Which can be drawn by manually setting each pixel color with such code:
PixelReader reader = image.getPixelReader();
PixelWriter writer = gc.getPixelWriter();
for (int y = 0; y < scale * height; ++y) {
    for (int x = 0; x < scale * width; ++x) {
        writer.setArgb(x, y, reader.getArgb(x / scale, y / scale));
    }
}
But I cannot use this approach as it's too slow: it took a couple of seconds to draw a 1 KB image scaled 8 times. So I ask: is there any way to disable this blurry effect when drawing on a canvas?
UPD 10/07/2019:
Looks like the issue is fixed! GraphicsContext now has an "image smoothing" property controlling this behavior.
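On a JavaFX version with the fix (JavaFX 12 or later, as far as I know), disabling the smoothing should look like this:

GraphicsContext gc = canvas.getGraphicsContext2D();
gc.setImageSmoothing(false); // nearest-neighbor-style scaling instead of interpolation
gc.drawImage(image, 0, 0, scale * width, scale * height);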
INITIAL ANSWER
I guess I've found the answer to my question. As this issue says, there is no way to specify filtering options in the graphics context.
Description:
When drawing an image in a GraphicsContext using the drawImage() method to enlarge a small image to a larger canvas, the image is being interpolated (possibly using a bilinear or bicubic algorithm). But there are times, like when rendering color maps (temperature, zooplankton, salinity, etc.) or some geographical data (population concentration, etc.), where we want no interpolation at all (i.e., use the nearest-neighbor algorithm instead) in order to represent accurate data and shapes.
In Java2D, this is possible by setting the appropriate RenderingHints.KEY_RENDERING on the Graphics2D at hand. Currently on JavaFX's GraphicsContext there is no such way to specify how the image is to be interpolated.
The same applies when shrinking images too.
This could be expanded to support a better form of smoothing for the "smooth" value that is available in both Image and ImageView and that does not seem to work very well currently (at least on Windows).
The issue was created in 2013 but is still untouched, so it is unlikely to be resolved soon.
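For comparison, the Java2D technique the issue refers to looks roughly like this; note that it is the KEY_INTERPOLATION hint that actually selects nearest neighbor (smallImage and g2d are placeholders, assuming a Graphics2D obtained from a BufferedImage or a paint callback):

g2d.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                     RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR);
g2d.drawImage(smallImage, 0, 0, scale * width, scale * height, null);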
Motivation: I currently have 1000 simple items in my QML scene, and one of them animates at 60 fps, so the entire scene repaints at 60 fps. CPU usage averages 15% on each of my 4 virtual cores on my PC. On the target hardware the situation is even worse: 60% on each of the 4 physical cores, leading to overheating, leading to freezes. Note that I have already implemented an optimization: via Loaders, I unload all items that are outside the (scrolling) viewport, so only ~18 items are loaded at any given time. The perf stats I report are with this optimization; without it, things are worse.
My solution is to start drawing all 1000 items in a single QQuickFramebufferObject, and stop having them as actual QML Items. That way I'll avoid Qt punishing me for merely having 1000 (unloaded!) items.
Where I'm stuck though: How to draw the text parts of the items in OpenGL?
Approach 1: I know QPainter can be used to directly render text into a QOpenGLWidget, but that option seems to be absent in QQFBO.
Approach 2: Have a single, parentless Text item in QML with layer.enabled: true, set its text property, wait 1 frame (for it to render) then fetch the texture. Somewhat ugly and roundabout; also may be slow-ish.
Approach 3: Look at the source of QQuickText to see what magic it does and copy it. Might be difficult, and I'll have to comply with license restrictions.
Approach 4: Do software rendering with a QPainter to a QImage, then upload that image to a texture (sketched below). Elegant, but may be too slow.
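For reference, approach 4 would look roughly like the sketch below (my own illustration, not a definitive implementation: it assumes it runs on the render thread, e.g. inside QQuickFramebufferObject::Renderer::render(), and the quad-drawing plumbing is elided):

#include <QImage>
#include <QPainter>
#include <QOpenGLTexture>

QOpenGLTexture *makeLabelTexture(const QString &text)
{
    // Software-render the label into a QImage...
    QImage img(256, 64, QImage::Format_ARGB32_Premultiplied);
    img.fill(Qt::transparent);

    QPainter p(&img);
    p.setPen(Qt::white);
    p.drawText(img.rect(), Qt::AlignCenter, text);
    p.end();

    // ...then upload it as an OpenGL texture. Mirror because GL's origin is
    // bottom-left. Cache the result rather than re-creating it every frame,
    // since rasterizing and uploading are the slow parts.
    return new QOpenGLTexture(img.mirrored());
}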
Any suggestions on a way to do it that doesn't have the problems in these approaches?
It's not totally clear why one animating item makes your whole scene repaint. But if only one item is animating, you might want to split your scene: the items that don't move go into a parent item, and the moving one stays outside it.
There is a rather easy way to render a subtree to an FBO: just render the subtree to a ShaderEffect which does nothing.
This example renders an Image with a grayscale shader (borrowed from the example in the Qt docs):
import QtQuick 2.0

Rectangle {
    width: 200; height: 100
    Row {
        Image {
            id: img
            sourceSize { width: 100; height: 100 }
            source: "qt-logo.png"
        }
        ShaderEffect {
            width: 100; height: 100
            property variant src: img
            vertexShader: "
                uniform highp mat4 qt_Matrix;
                attribute highp vec4 qt_Vertex;
                attribute highp vec2 qt_MultiTexCoord0;
                varying highp vec2 coord;
                void main() {
                    coord = qt_MultiTexCoord0;
                    gl_Position = qt_Matrix * qt_Vertex;
                }"
            fragmentShader: "
                varying highp vec2 coord;
                uniform sampler2D src;
                uniform lowp float qt_Opacity;
                void main() {
                    lowp vec4 tex = texture2D(src, coord);
                    gl_FragColor = vec4(vec3(dot(tex.rgb,
                                        vec3(0.344, 0.5, 0.156))),
                                        tex.a) * qt_Opacity;
                }"
        }
    }
}
A ShaderEffect is just a render to texture: you are seeing a rectangle filled with a picture of the object. In this case the illusion is still there, but your animated object is only dealing with a single textured rectangle.
I don't know if that is the solution as it seems the problem might be elsewhere. Please elaborate your problem and I might update the answer as needed.
I know this seems like your second approach, but in this case you render your whole unchanged subtree to a texture. If I may guess, it seems you have scrolling text being batched to the GPU again and again because of a scrolling animation; if you use a ShaderEffect and a long stripe of items, you could animate just the scrolling window and leave your text static, avoiding the re-batching.
My LineItem, which inherits from QGraphicsLineItem, can change its pen width.
I have created a boundingRect that uses the QGraphicsLineItem::boundingRect adjusted by pads that get calculated based on pen width and arrows. It works.
void LineItem::calculateStuff() // called on any change, including pen width
{
    qreal padLeft, padRight, padT;
    padLeft = 0.5 * m_pen.width(); // if no arrows
    padT = padLeft;
    padRight = padLeft;
    m_boundingRect = QGraphicsLineItem::boundingRect().adjusted(-padLeft, -padT, padRight, padT);
    update();
}

QRectF LineItem::boundingRect() const
{
    return m_boundingRect;
}

QPainterPath LineItem::shape() const
{
    QPainterPath p;
    p.addRect(m_boundingRect);
    return p;
}
There is only one artifact that I get:
if I increase the pen width and then decrease it, I get traces:
these of course disappear on any action, such as moving the mouse (I had a hard time getting the screenshots).
As pretty as they are (seriously, I consider them a "feature" :-) ), I am trying to eliminate them. I tried remembering the previous bounding rectangle and updating the item with it (I thought that was what that option was for), but it didn't work.
QRectF oldRect = selectedItem->boundingRect();
selectedItem->setItemPenWidth(p);
selectedItem->update(oldRect);
selectedItem->update();
My viewport has
setViewportUpdateMode(BoundingRectViewportUpdate);
If I change to
setViewportUpdateMode(FullViewportUpdate);
I don't get artifacts - but I think this will impact performance which is a major constraint.
How can I fix these artifacts, which only occur in this specific situation (decreasing the pen width, and with it the bounding rect of the line), without impacting performance?
Simple fix... I had to add
prepareGeometryChange();
in my calculateStuff() function.
I had not seen any effect from it before; this is the first time a change to my boundingRect has not updated seamlessly.
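In other words, calculateStuff() from the question just gains one line at the top:

void LineItem::calculateStuff() // called on any change, including pen width
{
    // Must be called before the bounding rect changes: it notifies the scene,
    // which then schedules the old (larger) rect for repaint as well.
    prepareGeometryChange();

    qreal padLeft, padRight, padT;
    padLeft = 0.5 * m_pen.width(); // if no arrows
    padT = padLeft;
    padRight = padLeft;
    m_boundingRect = QGraphicsLineItem::boundingRect().adjusted(-padLeft, -padT, padRight, padT);
    update();
}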
I am rendering a QPixmap inside a QThread. The code that paints is inside a function. If I declare the painter inside the drawChart function, everything seems OK, but if I declare the painter inside the run function, the image is wrong: at the boundary of a black and a white area, the pixels at the interface are blended to give grey. Does anyone know why this is so? Could it be because of the nature of the run function itself?
//This is ok
void RenderThread::run()
{
    QImage image(resultSize, QImage::Format_RGB32);
    drawChart(&image);
    emit renderedImage(image, scaleFactor);
}

void RenderThread::drawChart(QImage *image)
{
    QPainter painter(image);
    painter.doStuff();
    ...
}

//This gives an image that seems to have artifacts
void RenderThread::run()
{
    QImage image(resultSize, QImage::Format_RGB32);
    QPainter painter(&image);
    drawChart(painter);
    emit renderedImage(image, scaleFactor);
}

void RenderThread::drawChart(QPainter &painter)
{
    painter.doStuff();
    ...
}
(Screenshots omitted: the "bad" result shows grey fringing at the black/white boundary, the "good" result has crisp edges.)
From C++ GUI Programming with Qt 4 by Jasmin Blanchette and Mark Summerfield:
One important thing to understand is that the center of a pixel lies on "half-pixel" coordinates. For example, the top-left pixel covers the area between points (0, 0) and (1, 1), and its center is located at (0.5, 0.5). If we ask QPainter to draw a pixel at, say, (100, 100), it will approximate the result by shifting the coordinate by +0.5 in both directions, resulting in the pixel centered at (100.5, 100.5) being drawn.
This distinction may seem rather academic at first, but it has important consequences in practice. First, the shifting by +0.5 only occurs if antialiasing is disabled (the default); if antialiasing is enabled and we try to draw a pixel at (100, 100) in black, QPainter will actually color the four pixels (99.5, 99.5), (99.5, 100.5), (100.5, 99.5), and (100.5, 100.5) light gray, to give the impression of a pixel lying exactly at the meeting point of the four pixels. If this effect is undesirable, we can avoid it by specifying half-pixel coordinates, for example, (100.5, 100.5).
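A minimal sketch reproducing both quoted cases:

QImage image(200, 200, QImage::Format_RGB32);
image.fill(Qt::white);
QPainter painter(&image);

// Antialiasing on: an integer coordinate lands between four pixels,
// so each of them is painted light grey.
painter.setRenderHint(QPainter::Antialiasing, true);
painter.drawPoint(QPointF(100.0, 100.0));

// Aiming at the pixel center gives one crisp black pixel instead.
painter.drawPoint(QPointF(100.5, 100.5));

// Antialiasing off (the default): QPainter shifts by +0.5 for you.
painter.setRenderHint(QPainter::Antialiasing, false);
painter.drawPoint(100, 100);

If the fringing only appears in one of your two variants, it is worth checking that both code paths configure the painter's render hints identically.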