I am developing an iOS app with JavaFXPorts. I have a Pane that holds a ListView with countries and their flags. I have noticed that ListView scrolling is laggy when I apply the dropshadow effect to the Pane. As you can see from the videos below, without the effect the scrolling is super smooth, while with the effect applied through CSS the scrolling starts to get laggy. I would like to keep the shadow effect, as it makes the app more beautiful, so any suggestion is really appreciated.
The CSS code I am using is:
-fx-effect: dropshadow(three-pass-box, rgba(0,0,0,0.6), 5, 0.0, 0, 1);
Video: Scrolling without shadow effect
Video: Scrolling with shadow effect (Laggy)
Please note that this is running on an iPhone 6. On an iPhone 5 the results are much worse.
When adding effects, CSS, transitions, custom controls and other complex features that typically work fine on desktop, there can be a big performance loss when the app is ported to mobile.
Effects
While effects make nodes or panes look fancy, they have the highest negative impact on performance on mobile.
Try to avoid applying them to nodes that change a lot, like the cells of ListView, TableView or ComboBox controls.
Also, if you apply them to a parent that contains such children (a ListView, ...), the parent (and the effect) will be re-rendered every time the children are invalidated (after scrolling, for instance).
If you really need the effect on this parent, try to split the parent and the children.
Instead of:
parent (Pane with effect)
|-- ListView
you can do something like:
parent (StackPane without effect)
|-- Pane (with effect)
|-- ListView
Since the pane won't change much, you can enable caching on it. Typically, the cache strategy renders the node (the pane with its effect) to an image instead of recreating the node and the effect over and over again, so it is a quick win:
parent (StackPane without effect)
|-- Pane (with effect) and with Cache
|-- ListView
Conversely, don't use cache on nodes that change a lot (like the ListView).
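A minimal JavaFX sketch of that layering, assuming a StackPane root and a plain backing pane (the names and styling here are illustrative, not taken from the original app):

import javafx.scene.CacheHint;
import javafx.scene.control.ListView;
import javafx.scene.layout.Pane;
import javafx.scene.layout.StackPane;

// The backing pane carries the shadow and is cached, so the effect is
// rendered to a texture once instead of on every scroll frame.
Pane shadowPane = new Pane();
shadowPane.setStyle("-fx-background-color: white; "
        + "-fx-effect: dropshadow(three-pass-box, rgba(0,0,0,0.6), 5, 0.0, 0, 1);");
shadowPane.setCache(true);
shadowPane.setCacheHint(CacheHint.SPEED);

// The list changes constantly while scrolling, so it is NOT cached.
ListView<String> listView = new ListView<>();

// Siblings in a StackPane: the effect is no longer on the list's parent.
StackPane root = new StackPane(shadowPane, listView);

Because the cached pane never changes, scrolling the list no longer forces the shadow to be re-rendered.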
CSS
Complex CSS takes a lot of CPU time. Try to simplify it. You can even remove the whole CSS for a quick test, then decide what you can and cannot live without.
Also try replacing some of the styling with code.
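As an illustrative sketch (not from the original post), a background declared in CSS can be set directly in code, which skips the CSS parsing and applying pass for that property:

import javafx.geometry.Insets;
import javafx.scene.layout.Background;
import javafx.scene.layout.BackgroundFill;
import javafx.scene.layout.CornerRadii;
import javafx.scene.layout.Region;
import javafx.scene.paint.Color;

// Instead of: region.setStyle("-fx-background-color: #336699; -fx-background-radius: 4;");
Region region = new Region();
region.setBackground(new Background(
        new BackgroundFill(Color.web("#336699"), new CornerRadii(4), Insets.EMPTY)));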
Animations
The same goes for animations: avoid animations and transitions if possible.
Number of nodes and custom controls
The higher the number of nodes, the lower the performance, so try to keep the node count to a minimum (replacing complex content with images or a canvas when possible).
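One hedged way to do that is to snapshot a complex but static subtree into a single image and show an ImageView in its place; the helper below is just a sketch:

import javafx.scene.Node;
import javafx.scene.SnapshotParameters;
import javafx.scene.image.ImageView;
import javafx.scene.image.WritableImage;

// Renders a complex, static subtree to an image once and returns a single
// ImageView to show in its place, cutting the live node count.
static ImageView flatten(Node complexNode) {
    WritableImage snapshot = complexNode.snapshot(new SnapshotParameters(), null);
    return new ImageView(snapshot);
}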
Switching scenes
Mobile screens are smaller, and it is better to have less content on each scene than on desktop. It is also important to avoid switching stages or scenes; instead, use different nodes and swap them within the same scene.
Gluon Charm uses View nodes and provides an easy way to switch between different views: MobileApplication.getInstance().switchView("other view name").
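A rough sketch of that pattern with Gluon Charm (the view names and the build* helpers are placeholders, and View construction details vary between Charm versions):

import com.gluonhq.charm.glisten.application.MobileApplication;
import com.gluonhq.charm.glisten.mvc.View;

public class MyApp extends MobileApplication {

    static final String OTHER_VIEW = "other view name";

    @Override
    public void init() {
        // Register each view once; the factories build them lazily.
        // buildHomeView()/buildOtherView() are hypothetical helpers that
        // return a View however your Charm version constructs one.
        addViewFactory(HOME_VIEW, MyApp::buildHomeView);
        addViewFactory(OTHER_VIEW, MyApp::buildOtherView);
    }

    static View buildHomeView()  { return null; /* placeholder */ }
    static View buildOtherView() { return null; /* placeholder */ }
}

Calling MobileApplication.getInstance().switchView(MyApp.OTHER_VIEW) then swaps the content of the current scene instead of creating a new scene or stage.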
Images
Finally, when using images, either downloaded from the internet or loaded from a file, cache strategies are a must. Have a look at those provided by Gluon Charm Down.
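The class below is only an illustrative, hand-rolled sketch of the idea (Charm Down's cache service is the ready-made equivalent): load and decode each image once, then reuse it everywhere, for instance in list cells.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javafx.scene.image.Image;

// Minimal in-memory image cache: each URL is downloaded and decoded only once.
public final class ImageCache {

    private static final Map<String, Image> CACHE = new ConcurrentHashMap<>();

    public static Image get(String url) {
        // Image(url, true) loads in the background, so the UI thread is not blocked.
        return CACHE.computeIfAbsent(url, u -> new Image(u, true));
    }
}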
Related
Background
I'm working on a web app built in HTML (not WebGL/Canvas) which includes 2d viewport controls for panning and zooming the page content. Something like Figma, perhaps, but rendered entirely with DOM, which is a hard technical requirement.
To achieve the viewport functionality I've made extensive use of CSS transform to power all offsets and animations in order to reduce the work required to render changes to compositing, as much as possible. The "canvas" of my app contains many discrete items which can be moved and resized by the user, similar to any typical OS window manager. These 'widgets' may contain their own scrollable content.
For example, after panning 50px,25px and zooming to 1.5x, the DOM and transform values might look like this for a particular "canvas" which has a widget at (20, 100):
<div id="canvas" style="transform: scale(1.5, 1.5) translate(50px, 25px)">
  <div id="widget-1" style="transform: translate(20px, 100px)" />
</div>
After a lot of experimentation I've discovered that the most efficient way to render these items across multiple browsers is to promote each individual 'widget' to its own layer by applying will-change: transform to the outermost element. This results in a pretty reasonable framerate, even with a lot of content in the frame while panning and zooming.
Webkit Misbehavior
However, there's one catch - on Webkit-based browsers, when zooming (which is applied via scale transform in CSS on the root canvas element), the contents of the widgets are not re-rasterized to accommodate the new scale value. At a zoom greater than 1x, this produces noticeable blurriness. Images with text, in particular, are basically unreadable.
Above, one image widget and another DOM text widget at 1x (native) scale.
And now, at 2x scale (you won't be able to tell the difference inline in this post, but you can see it at full resolution). Notice that the image is just as illegible as before, and the text is blurry.
For a live reproduction of this problem, see this CodeSandbox (leave "Animation" unchecked).
Side note: this only happens on Chrome, Safari, and Edge - so it seems like an artifact of Webkit's rendering behavior. Firefox actually scales everything quite nicely, and with a faster framerate to boot.
However, the performance of this approach is desirable. After trying some other configurations of layering, I decided the best approach would be to try to force the browser to re-rasterize the widgets once a zoom change animation was completed.
The Hack
The intended goal is to allow the old rasterized textures to persist during the zoom animation to make it as smooth as possible (so the blurriness seen above will be present while the viewport scales up/down), but to trigger a re-rasterization at the final scale once the animation is complete - re-draw all the widgets, taking current scale into account so that their contents are sharp and legible even at 2x zoom.
To correct this problem, I have implemented what feels like a "hack": after the end of each zoom animation, I'm toggling the will-change on all the widgets from transform to initial for 1 frame, and then back again:
const rerasterize = () => {
  requestAnimationFrame(() => {
    // Dropping will-change tears down the widget's compositor layer...
    element.style.willChange = 'initial';
    requestAnimationFrame(() => {
      // ...and restoring it a frame later rebuilds the layer, rasterized
      // at the current scale.
      element.style.willChange = 'transform';
    });
  });
};
This actually works. Now when I zoom in to 2x, the image text is legible and the DOM text is nice and sharp:
(In my app, you can even see the moment when the zoom animation "settles" and text "pops" into high-resolution.)
However, if I understand correctly, the code above is actually forcing the browser to dispose of the composite layer for each widget for 1 frame, and then recreate it. While the performance seems acceptable in practice, I would much prefer to just ask the browser to invalidate the layer which it has already constructed.
The Question
So, with all that context aside, the question is simply: is there a way to manually trigger an invalidation of a composite layer without trashing it? A magic CSS incantation, perhaps?
I'm also open to alternative approaches with respect to layer grouping which might improve behavior without harming render performance.
Other Stuff I Tried
One thing I noticed when creating the reproduction CodeSandbox is that if I add a transition property to the "canvas" element (which is being transformed to achieve the viewport changes), even if widgets are composited in different layers, it appears to fix the blurriness. You can see this by checking "Animation" in the demo. However, my animations are currently done via JS, so adding a secondary CSS transition on top of this doesn't seem like a great plan.
I tried ripping out JS animations entirely and relying solely on transition, but surprisingly this did not seem to help. Panning and zooming felt noticeably choppier (some of this might come down to native easing feeling less natural than JS spring-based easing). More concerning, the GPU memory usage and dropped frames were notably worse than without transition, which leads me to believe that transition might be causing a lot more GPU work than I really want for my use case (perhaps invalidating layers frequently during animations, when I would prefer them to remain intact until the transition ends).
I'm trying to find the reason for performance problems on a mobile website (based on React and Material UI).
The page shows fairly complex content (a form), which, however, does not change (with respect to this question). The form itself has position: fixed, exact coordinates and even transform: translateZ(0).
Visually on top of the content there is a 48x48 pixel 100% rounded <div> (a custom FAB = Floating Action Button) that is scaled up to 25x during a transition (covering the whole screen when finished).
The FAB is basically an overlay for the whole page. It's an element outside of the form tree.
When I simplify the page by removing the complex form, the FAB animates smoothly even on low-end hardware (thanks to the GPU).
However, with the form the performance decreases significantly.
I see no reason for that and would expect the form to get its own layer and only be rendered ("painted") once.
When looking at the Chrome DevTools timeline with just the Paint option checked I see that a lot of form elements (like a simple label) get re-painted during the animation.
See the images below for the reasons that Chrome gives for that:
I don't really understand what that means. Why does Chrome choose to repaint these elements?
Update 1
I was able to reproduce the problem here: http://www.webpackbin.com/N18obvEBM
The problems show up when there is a MUI <Drawer>. Even when it is closed (moved out of the viewport using transform: translate) it forces the browser to re-render:
It's even worse if the Drawer is visible on screen.
Note that apparently it makes a difference if the browser window is on a HiDPI (4K) screen or not. Same test on a 1050p screen:
On the lower-res screen the circle is apparently scaled-up from the 48x48px raster rendering (edges become very blurry). That does not happen on the HiDPI screen.
Anyway, adding display: none to the Drawer layer makes the rendering perform well (but is obviously not a solution).
I'm developing an application that has a ListView whose items need complex cell layouts. The cells have variable heights, and some of them tend to be larger than the viewport height.
When the ListView is filled with items, the scroll thumb tends to resize itself while scrolling, which makes it hard to hold onto the thumb. This happens mainly when passing through cells of different sizes.
This is not a problem in Swing if I create the same kind of cell renderer for use with a JList. The problem exists in both JavaFX 2 and JavaFX 8.
Looking at the VirtualFlow, which is responsible for laying out the ListView and handling scrolling, it seems that the scrollbar thumb size (length) is calculated from the cell count and the visible cell count, which is a problem for lists with variable cell heights.
So is this the expected scroll bar behavior for JavaFX list views? Is there a solution available for this problem? Or should I try to hide the scrollbar and provide a different user interaction for scrolling?
This problem is already reported at https://javafx-jira.kenai.com/browse/RT-25059 and fixed in Java 8 to some extent. If the fix is needed on JavaFX 2, we have to backport the changes from commit http://hg.openjdk.java.net/openjfx/8/controls/rt/rev/81cc13fe6f96
To get these changes into JavaFX 2.2 you need to apply them to the FX 2.2 VirtualFlow.java class and load the patched class before jfxrt.jar is loaded. Another approach, if you don't want to mess with the jfxrt classes, is to have your own ListView that uses your own Skin together with the patched VirtualFlow (possibly under a different name), but this might require a lot more customization than the first solution.
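As a rough sketch of the second approach (the skin class name here is made up; JavaFX 2.2 wires skins to controls through the -fx-skin CSS property):

import javafx.scene.control.ListView;

// Point the control at your own skin class, which internally uses the patched
// VirtualFlow copy (PatchedListViewSkin is a hypothetical name).
ListView<String> listView = new ListView<>();
listView.setStyle("-fx-skin: \"com.example.controls.PatchedListViewSkin\";");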
More approaches are welcome :).
I'm working on a Flex 4 application and I started customizing the interface with skins to give a whole new look.
So, I've created two scrollbar skins in Flash Catalyst (one horizontal, one vertical).
It's working great when I test the application through Catalyst, so I imported it into Flash Builder, copied the components and defined the new skins in my CSS file for the HScrollbar and VScrollbar.
The skin is working and all the buttons are OK. But the scrollbar isn't resizing for some reason: it stays at the height I designed it with, regardless of the content it is bound to.
It scrolls the content the way it should, but it doesn't resize and the thumb doesn't go all the way down.
Also I've noticed the following.
I have a custom component acting as a list. It extends Group and contains a Scroller. In one place in the application the scroll thumb goes lower than in another place where the same custom list is used.
I also have to mention that this scroller works perfectly without a custom skin.
Anyone else having similar problems?
Okay, I know you posted this a while ago but I have been scouring the internet for days looking for why the scrollbar's thumb wasn't scaling like the default scrollbar.
There are a couple of things to check. First, is there a fixed height set on your thumb's skin?
If not (and this is what I was overlooking), go to your scroller skin and, at the point where you add the vertical and horizontal scrollbars, set the "fixedThumbSize" property to false.
I suppose your graphic elements are defined for every single part of the scrollbar (top arrow, bottom arrow, track, etc.): in that case you should check that the elements' dimensions are not fixed; they should be percentages so they can resize based on the container.
I have a Flex app (SDK 3.5 - FP10) that does mindmap trees. Every node is a Canvas (I'm using Canvas specific properties so I needed it). It has a shadow effect, background color and some small ui element on it (like icons, texts...). It works perfectly until it goes over ~700 nodes (Canvas). Over that number it shows grey rectangles. If I turn off the DropShadowFilter effect for the Canvas, they are also gone, so I assume it's a DropShadowFilter problem.
The effect is simple:
private static var _nodeDropShadow:DropShadowFilter = new DropShadowFilter(1, 45, 0x888888, 1, 1, 1);
_backgroundComp.filters = [_nodeDropShadow]; // filters expects an Array
Is it possible that Flex can't handle that much?
I think you're exactly right: Flex can't handle that many drop shadow filters; they're very expensive. However, if you're using the built-in skins, they create bitmap versions of the drop shadows that are less processor-intensive. You'll want to set the style "dropShadowEnabled" to true to enable this effect. You'll have less control over this type of drop shadow, but you may be able to get it to do what you want.
For more dropshadow styles, read the style list of mx:Canvas here: http://livedocs.adobe.com/flex/3/langref/mx/containers/Canvas.html
Yeah, 700 is a bit much for Flex components. At this level you're going to need to write your own custom components that do the drawing & layout manually. I also agree with using bitmap caching to make sure the drop shadow filters aren't being constantly re-rendered.