Manually invalidating a composite layer in Chrome - css

Background
I'm working on a web app built in HTML (not WebGL/Canvas) which includes 2d viewport controls for panning and zooming the page content. Something like Figma, perhaps, but rendered entirely with DOM, which is a hard technical requirement.
To achieve the viewport functionality I've made extensive use of CSS transform to power all offsets and animations, so that, as much as possible, rendering a change requires only compositing work. The "canvas" of my app contains many discrete items which can be moved and resized by the user, similar to a typical OS window manager. These 'widgets' may contain their own scrollable content.
For example, after panning 50px,25px and zooming to 1.5x, the DOM and transform values might look like this for a particular "canvas" which has a widget at (20, 100):
<div id="canvas" style="transform: scale(1.5, 1.5) translate(50px, 25px)">
<div id="widget-1" style="transform: translate(20px, 100px)" />
</div>
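To illustrate, a minimal sketch (using the element ids from the markup above) of how the pan/zoom state maps onto that transform:
const canvas = document.getElementById('canvas');
// Only the canvas carries the pan/zoom; each widget keeps its own translate().
const setViewport = (zoom, panX, panY) => {
  canvas.style.transform = `scale(${zoom}, ${zoom}) translate(${panX}px, ${panY}px)`;
};
setViewport(1.5, 50, 25); // yields the transform shown above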
After a lot of experimentation I've discovered that the most efficient way to render these items across multiple browsers is to promote each individual 'widget' to its own layer by applying will-change: transform to the outermost element. This results in a pretty reasonable framerate, even with a lot of content in the frame while panning and zooming.
Webkit Misbehavior
However, there's one catch: on WebKit-lineage browsers, when zooming (which is applied via a scale transform in CSS on the root canvas element), the contents of the widgets are not re-rasterized to accommodate the new scale value. At a zoom greater than 1x, this produces noticeable blurriness. Images with text, in particular, are basically unreadable.
(Screenshot: one image widget and one DOM text widget at 1x, native, scale.)
(Screenshot at 2x scale: you won't be able to tell the difference inline in this post, but you can see it at full resolution. Notice that the image is just as illegible as before, and the text is blurry.)
For a live reproduction of this problem, see this CodeSandbox (leave "Animation" unchecked).
Side note: this only happens on Chrome, Safari, and Edge. Blink (used by Chrome and Edge) is a fork of WebKit, so it seems like an artifact of WebKit-lineage rendering behavior. Firefox actually scales everything quite nicely, and with a faster framerate to boot.
However, the performance of this approach is otherwise desirable. After trying some other layering configurations, I decided the best option would be to force the browser to re-rasterize the widgets once a zoom animation has completed.
The Hack
The intended goal is to allow the old rasterized textures to persist during the zoom animation to make it as smooth as possible (so the blurriness seen above will be present while the viewport scales up/down), but to trigger a re-rasterization at the final scale once the animation is complete - re-draw all the widgets, taking current scale into account so that their contents are sharp and legible even at 2x zoom.
To correct this problem, I have implemented what feels like a "hack": after the end of each zoom animation, I toggle will-change on all the widgets from transform to initial for one frame, and then back again:
const rerasterize = (widgets) => {
  requestAnimationFrame(() => {
    // Drop each widget's compositor layer for one frame...
    for (const element of widgets) {
      element.style.willChange = 'initial';
    }
    requestAnimationFrame(() => {
      // ...then promote it again, forcing a fresh rasterization at the current scale.
      for (const element of widgets) {
        element.style.willChange = 'transform';
      }
    });
  });
};
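In my app this is wired to run when the spring driving the zoom settles, roughly like so (zoomSpring and its onRest hook are hypothetical names):
const widgets = document.querySelectorAll('[id^="widget-"]');
zoomSpring.onRest(() => rerasterize(widgets));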
This actually works. Now when I zoom in to 2x, the image text is legible and the DOM text is nice and sharp:
(In my app, you can even see the moment when the zoom animation "settles" and text "pops" into high-resolution.)
However, if I understand correctly, the code above is actually forcing the browser to dispose of the composite layer for each widget for 1 frame, and then recreate it. While the performance seems acceptable in practice, I would much prefer to just ask the browser to invalidate the layer which it has already constructed.
The Question
So, with all that context aside, the question is simply: is there a way to manually trigger an invalidation of a composite layer without trashing it? A magic CSS incantation, perhaps?
I'm also open to alternative approaches with respect to layer grouping which might improve behavior without harming render performance.
Other Stuff I Tried
One thing I noticed when creating the reproduction CodeSandbox is that if I add a transition property to the "canvas" element (which is being transformed to achieve the viewport changes), even if widgets are composited in different layers, it appears to fix the blurriness. You can see this by checking "Animation" in the demo. However, my animations are currently done via JS, so adding a secondary CSS transition on top of this doesn't seem like a great plan.
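For reference, a minimal sketch of what checking that box appears to toggle (duration and easing assumed):
#canvas {
  transition: transform 0.2s ease-out;
}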
I tried ripping out the JS animations entirely and relying solely on transition, but surprisingly this did not seem to help. Panning and zooming felt noticeably choppier (some of this might come down to native easing feeling less natural than JS spring-based easing). More concerningly, GPU memory usage and dropped frames were notably worse than without transition, which leads me to believe that transition causes much more GPU work than I want for my use case (perhaps invalidating layers frequently during the animation, when I would prefer them to remain intact until the transition ends).

Related

Creating a parallax effect with react-scroll-parallax and image masks

Here is the desired outcome I'm looking to achieve by scrolling using react-scroll-parallax (demo recordings for the mobile and web browser versions accompany the original post).
Description
I want to create a website with the parallax effect shown above. The key elements: a website built in React containing three pages.
While scrolling from Page 1 to Page 2, I want the mobile device mock to start halfway on the screen (so as to avoid the other content of Page 1), then move to being basically centered.
While scrolling from Page 2 to Page 3, the website and components stick and once again act like a normal website scroll.
Additionally, during the scroll from Page 1 to Page 2, I want the content inside the device mock to scroll as well.
What I tried
For starters, I was able to get nearly the effect I wanted by using a div with its z-index and absolute position set, and a parallax on translateY of -50, 125.
<div className={"absolute z-10 w-full"}>
  <Parallax translateY={[-50, 125]}></Parallax>
</div>
The problem arose, however, when I wanted to place content inside the div. Having another div within the parallax that also had a z-index set seemed to mess with the parallax effect.
Important notes
Content inside device mock
One issue I found that was tricky was trying to place the content inside the device mock. I want a parallax both on the device mock itself, and the content within it.
I'm not entirely sure how I should crop the content inside the device mock.
The device mock svg frame and device mock mask can be found here if you want to give it a try.
I tried imgs with various z-indexes, masking the div with an svg mask, and using image backgrounds. Nothing quite achieves the preferred outcome.
Scaling of device mock
I want to make sure this works well on both mobile and desktop browsers. With that said, I was trying to use margins to scale the device mock, but I had a hard time then getting the mask to work correctly for the content within the mock.
I'm not sure if using dedicated width and height sizes would be the ideal way to go, but very open to suggestions! It seems hard to scale the device frame and the mask properly.
Parallax of device and parallax of device content
I want the content inside the device mock to be HTML so that I can change it more easily than a plain image. That being said, the most important feature I want is for both the device and the content inside it to have a parallax scroll effect.
Summary
I know this is a bit much for a quick simple stack overflow issue, but I've been trying a lot to get this to work and just can't seem to nail down the little details correctly. I sincerely appreciate all help and suggestions and if there is anything else I can provide please let me know!
The trickier part of the request was blowing up the <svg>, adding new <path /> and <clipPath /> elements for the color swap inside the phone mock.
Eventually I got it working here. The part linking the clipPath transition to the scroll progress looks like this:
// useParallax comes from react-scroll-parallax
const [y, setY] = React.useState(1739);
const onProgressChange = React.useCallback(
  (progress) =>
    // Map overall scroll progress onto the clipPath's translation,
    // clamped to the svg viewBox extremes (36..1739).
    setY(Math.max(Math.min(1739, 1739 - ((progress - 0.24) / 0.0018) * 17), 36)),
  [setY]
);
const { ref } = useParallax({
  translateY: [0, 185],
  onProgressChange
});
The 1739 and 36 are max and min values for the translation and they are strictly related to the svg's viewBox. The other values allow tweaking the start, end and speed of animation, with regards to overall scroll progress.
This, together with some CSS, took care of binding the right animations to the correct scroll progress.
Feel free to tweak it some more, especially if you add more markup.
The other thing I wanted was a function activated shortly after scrolling, which would snap the scroll to certain positions. Namely, to the .page elements.
I used gsap's ScrollTrigger plugin for the task (a sketch follows the list below), for multiple reasons:
I'm somewhat familiar with it (used it before)
it's performant, light and non-obtrusive (it basically quits when it detects another user scroll)
it listens to all relevant events (touch, mouse pointer, keyboard) without me having to make sense of them, providing a unified interface
it uses inertia (if you scroll down fast from page 1 it will scroll past page 2, directly to page 3; other scroll plugins limit you to scrolling once for each page change)
it works well on mobile devices
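A rough sketch of that setup (values assumed; the demo's differ):
import { gsap } from 'gsap';
import { ScrollTrigger } from 'gsap/ScrollTrigger';

gsap.registerPlugin(ScrollTrigger);

const pages = document.querySelectorAll('.page');

ScrollTrigger.create({
  trigger: document.body,
  start: 'top top',
  end: 'max',
  // snap overall scroll progress to the nearest page: with N pages, steps of 1/(N-1)
  snap: {
    snapTo: 1 / (pages.length - 1),
    duration: { min: 0.2, max: 0.6 }, // settle shortly after the user stops scrolling
    ease: 'power1.inOut'
  }
});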
There are other libs/plugins out there for the task, you don't have to use gsap (although I do think it's awesome). If you end up including it in your project, make sure you comply with their licensing (which is not MIT).
By the way, my first choice for the parallax effect per se would also be gsap, as their timelines provide a lot of flexibility and options.
Their most advanced stuff is reserved for subscribers, but even if you limit yourself to the free plugins, you're still getting more than from alternative libs/plugins, IMHO.
See it working.

Is there a reason clip-path on a div with an image inside slows performance in Chrome?

I have a div that uses:
-webkit-clip-path: polygon(0 0, 100% 7%, 100% 100%, 0 100%);
clip-path: polygon(0 0, 100% 7%, 100% 100%, 0 100%);
And there is an image inside this div, inside another div. Is there a reason why this specific code causes Chrome performance to drop? Scrolling becomes choppy too. In Firefox everything looks normal.
Strangely enough, it only affects scrolling while that element is in view; once you scroll past it, everything looks fine again.
Clip-Path GPU Rendering
clip-path uses the GPU for rendering, so it is likely to be a graphics card/driver issue or that your system was out of resources and unable to render it effectively.
Try viewing on other machines to see if the same problem exists.
To understand the performance issues and how to debug them, these articles will help:
Debugging a Canvas Element
Chrome allows you to profile and debug canvas elements from the
Developer Tools. It can be used for both 2D and WebGL canvas projects.
To be able to do this, you need to have enabled the "Experiments" tab.
If you haven't already, navigate to chrome://flags and enable the
option marked "Enable Developer Tools experiments". You'll need to
press "Relaunch Now" button at the bottom of the page to apply your
changes. Go to the Settings panel of Chrome Developer Tools by
clicking the cog on the bottom right. Click the "Experiments" tab and
check the option "Canvas inspection".
Now visit the "Profile" tab and you will see an option called "Capture
Canvas Frame". The Developer Tools may ask you to Reload the page to
use the canvas. Pressing "Start" captures a single frame of the canvas
application. Alternatively, you can click the box below to switch to
"Consecutive Frames" which allows for capture of multiple frames.
Chrome creates a log of each call to canvas, providing a list of each
call to the context and a screenshot. You can click one of the log
items to replay the frame in the Developer Tools and see which
commands were called in the order they were called and from which
line.
Firefox has Canvas and WebGL Shader debugger, giving you features to
inspect frames, fps, modify shaders and more.
In order to enable these tools, go to Devtools settings (the cog icon
in devtools) and check "Canvas" and "Shader Editor".
Picking Your Properties
The key to animation performance is not selecting a syntax; it's
designing the animation for fast rendering. The difference between a
smooth, life-like animation and a janky, stuttery one is rarely as
simple as CSS versus JavaScript. Instead, it's often determined by
which properties or attributes you animate, on which elements.
Regardless of whether you’re changing a style property with CSS or
with SMIL or with JavaScript, the browser needs to determine which
pixels on the screen need to be updated, and how.
If the DOM and style computation steps determine that no styles or SVG
rendering attributes have changed for any elements, the browser can
stop right there.
If the changed styles don’t affect layout (only painting), or if
layout has changed for some elements but not for others, the browser
has to determine which parts it needs to repaint. This region is known
as the “dirty” rectangle of the screen. Elements elsewhere on the
screen can be skipped, their pixels unchanged for this update.
The changed element usually needs to be repainted, but maybe others do
too. Did the changed element overlap another element, which is now
revealed? If so, the browser may need to redraw that background
element.
But maybe not.
It depends on whether the browser has the original pixel data for the
background saved in memory. The graphics processing units (GPUs) in
most modern computers and smartphones can keep a certain number of
rendering layers in memory, not just the final version that appears on
screen. The main browser program may also save partial images in
memory.
Much of browser rendering optimization comes down to how it selects
which parts of the rendered document to divide into separately cached
(saved) layers.
GPUs can perform certain operations on the cached rendering layers,
and are highly optimized for the limited number of operations they can
do.
If browsers know that an element is going to change in a way that can
be efficiently calculated by the GPU, they can save that image’s pixel
data in a different GPU layer from its background (or foreground). The
animated changes can therefore be applied by sending new instructions
to the GPU for how to combine the saved pixels, instead of by
calculating new pixel values in the main processor.
Tip Most browser Dev Tools now have options to highlight the “dirty”
paint rectangles whenever they are updated. If your animation is being
GPU-optimized, you won’t see any colored rectangles flashing when you
run this Dev Tools mode.
Of course, all GPU-optimized pathways are conditional on having a
compatible GPU available—and on the browser knowing how to use it,
which may depend on the operating system. So browser performance, and
sometimes even browser bugs, will depend not just on the browser
version but also on the OS and hardware.
Most GPUs can adjust opacity of the saved layers, and translate them
to different relative positions before combining them. They can also
perform image scaling, usually including 3D perspective scaling—but
the scaling is calculated on a pixel level, not a vector level, and
can cause a visible loss in resolution. More advanced GPUs can
calculate some filter operations and blend modes, and masking of one
image layer with an alpha mask layer.
Some GPUs also have optimized vector rasterization, which can
calculate high-resolution vector shapes for use as clipping paths of
other vector levels. These “clipping paths” aren’t only used for
clip-path effects, though. Filling and stroking a shape is clipping
the paint image layer to the fill-region or stroke-region vector
outline. Similarly, CSS border-radius effects are vector clipping
paths on the content and background image layers.
But you currently can’t rely on your end users having these optimized
pathways.
The best performance, across a wide range of browsers and hardware,
comes from animations that can be broken into layers (of elements,
groups, or individual graphics) that are animated in the
following ways:
opacity changes
translational and rotational transformations
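For instance, an animation limited to those two kinds of change (element and values assumed) can run entirely on the compositor:
const box = document.querySelector('.widget'); // hypothetical element
box.animate(
  [
    { transform: 'translate(0, 0)', opacity: 1 },
    { transform: 'translate(120px, 0) rotate(45deg)', opacity: 0.6 }
  ],
  { duration: 300, easing: 'ease-out', fill: 'forwards' }
);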
Warning Currently, Chrome never divides an SVG graphic into different
GPU layers (although it does other optimizations).
To create a fully GPU-optimized animation in Chrome, you can sometimes
position separate inline elements over top of each other,
creating your own layers.
If you can’t define your animation entirely in translation and opacity
layers, consider the following guidelines:
Minimize the size of the “dirty” rectangle at each frame.
Solid-color objects are better than semi-transparent ones, since the browser doesn’t need to calculate pixel updates for shapes that can’t be seen behind a solid object. (Although this may not apply if the browser is using GPU layers for optimization.)
Moving elements around is more efficient than changing what they look like. (Although it depends on the browser whether “moving around” only applies to transform movements or also to other absolute position changes.)
Changing fill and stroke is more efficient than changing shapes and sizes.
Scaling transformations are better than changing the underlying geometry; browsers may be able to use GPU image scaling for an animated scale effect, instead of recalculating the vector image at the correct resolution at each frame.
Clipping is usually more efficient than masking.
Avoid rescaling gradient and pattern layers; this could mean using user-space effects instead of bounding-box effects, if the bounding box is changing (see the example after this list).
Avoid any changes that require a filter to be recalculated. That includes any change to the filtered element or its child content.
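For the gradient case specifically, the switch is the gradientUnits attribute; a minimal hypothetical example of a user-space gradient that won't rescale with the shape's bounding box:
<linearGradient id="g" gradientUnits="userSpaceOnUse" x1="0" y1="0" x2="200" y2="0">
  <stop offset="0" stop-color="#09f" />
  <stop offset="1" stop-color="#f90" />
</linearGradient>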

What does "Has an inline transform which causes subsequent layers to assume overlap" mean?

I'm trying to find the reason for performance problems on a mobile website (based on React and Material UI).
The page shows fairly complex content (a form), which however does not change (in respect to this question). The form itself has position: fixed, exact coordinates and even transform: translateZ(0).
Visually on top of the content there is a 48x48 pixel 100% rounded <div> (a custom FAB = Floating Action Button) that is scaled up to 25x during a transition (covering the whole screen when finished).
The FAB is basically an overlay for the whole page. It's an element outside of the form tree.
When I simplify the page by removing the complex form, the FAB animates smoothly even on low-end hardware (thanks to the GPU).
However, with the form the performance decreases significantly.
I see no reason for that and would expect the form to get its own layer and only be rendered ("painted") once.
When looking at the Chrome DevTools timeline with just the Paint option checked I see that a lot of form elements (like a simple label) get re-painted during the animation.
The reasons Chrome gives are the kind quoted in the title: "Has an inline transform which causes subsequent layers to assume overlap".
I don't really understand what that means. Why does Chrome choose to repaint these elements?
Update 1
I was able to reproduce the problem here: http://www.webpackbin.com/N18obvEBM
The problems show up when there is a MUI <Drawer>. Even when it is closed (moved out of the viewport using transform: translate), it forces the browser to re-render.
It's even worse if the Drawer is visible on screen.
Note that apparently it makes a difference if the browser window is on a HiDPI (4K) screen or not. Same test on a 1050p screen:
On the lower-res screen the circle is apparently scaled-up from the 48x48px raster rendering (edges become very blurry). That does not happen on the HiDPI screen.
Anyway, adding display: none to the Drawer layer makes the rendering perform well (but is obviously not a solution).

Modernizr replacing SVG with PNG producing STRANGE results

I'm using Modernizr to display alternate PNG background images for older browsers, where SVG is otherwise used.
The no-svg class is being applied as expected; however, there are some strange results: background-sizing properties set on the backgrounds are lost on some elements, but seem to be retained on others. Furthermore, I couldn't use the shorthand to set this (I had to write separate declarations), but I imagine that's the browser's fault.
I need to control the sizing, especially because the layout is responsive.
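For reference, the pattern in play is roughly this (selectors and sizes assumed; Modernizr adds the no-svg class to the <html> element when SVG is unsupported). Note that overriding with the background shorthand resets background-size, which may be why sizing survives on some elements and not others:
.logo {
  background-image: url('logo.svg');
  background-size: 100px 50px; /* hypothetical size, declared longhand */
}
.no-svg .logo {
  /* override only the image; the longhand background-size above still applies */
  background-image: url('logo.png');
}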
If you have any ideas, please do reply.
Thank you for your time.

Having trouble implementing -webkit-transform to scale up images in a photo gallery

I have a conceptual question about photo galleries like this:
http://www.nikesh.me/demo/image-hover.html
If you open this in a browser that supports CSS Transitions (for example Chrome), it will smoothly scale the hovered image whilst the zoomed version remains of a high quality.
This is accomplished by initially showing the non-zoomed images at a slightly smaller size than they really are; in essence, the zoom shows them at their true dimensions.
So, normal images are first scaled down:
-webkit-transform:scale(0.8);
And then upon hover scaled up:
-webkit-transform:scale(1.2);
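Put together, the pattern looks roughly like this (selectors and timing assumed; unprefixed properties added alongside the -webkit- ones):
.gallery img {
  -webkit-transform: scale(0.8);
  transform: scale(0.8);
  -webkit-transition: -webkit-transform 0.3s ease;
  transition: transform 0.3s ease;
}
.gallery img:hover {
  -webkit-transform: scale(1.2);
  transform: scale(1.2);
}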
My question: How is the initial scaling down supposed to work for browsers that do not support this method of scaling down? Try opening that gallery in IE to see what I mean, it shows the images not scaled down, which makes them too large and thereby they break the layout.
What I want:
The full effect in browsers that support it. Important is that the zoomed version remains quality.
No effect at all for browsers that do not support it, yet solid original dimensions so that no layout is broken
It should work for both image orientations and there may be variety in image widths and heights too
Anyone? Preferably an elegant solution that does not need browser sniffing or javascript, but all answers are welcome.
If you want it to work without the use of javascript, then it seems the only method you have is to forgo the initial scale-down with css. You will want to do this in the "antiquated" way of adjusting the width and height of the image in the markup.
<img src="yourImageSrc" width="80%" height="80%">
This would allow you to still keep your layout intact if the user agent is not up to date.
** I don't believe the percentage works in the literal height/width attributes in HTML5, but you can always figure out the ratio you need and plug in pixel values.
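For example, for an image whose natural size is 500×375, 80% works out to fixed pixel attributes:
<img src="yourImageSrc" width="400" height="300">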
