I am moving 10,000 small div elements in a CSS3 experiment from the top of the browser viewport to the bottom. For this test I use two different approaches:
With GPU acceleration, using translate3d(x, y, z) or translateZ(0)
Without GPU acceleration, by simply adjusting the top property in CSS
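Roughly, the two variants boil down to something like this (a simplified sketch; the class names are mine, not taken from the demo, and the actual movement is driven by JavaScript):
/* Variant 1: GPU-accelerated - the box is promoted to its own compositing
   layer and moved with a 3D transform. */
.box-gpu {
  -webkit-transform: translate3d(0, 500px, 0); /* the Y value is what gets animated */
  transform: translate3d(0, 500px, 0);
}
/* Variant 2: no GPU acceleration - the box stays in the normal rendering
   path and is moved by changing its top offset. */
.box-plain {
  position: absolute;
  top: 500px; /* this value gets animated instead */
}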
Using no hardware acceleration, the animation runs fairly smoothly in Google Chrome.
If I enable hardware acceleration, performance becomes a lot worse. It's so bad that the boxes aren't even spread out evenly anymore:
With GPU/hardware acceleration: (screenshot)
Without GPU/hardware acceleration: (screenshot)
Question
Why is that so? Shouldn't using the GPU improve performance?
Demo application
https://www.timo-ernst.net/misc/hwtest/
Source
https://github.com/valnub/hwtest
My hardware used for the test
Apple MacBook Pro 15" 2015 model
CPU: 2.8 GHz Intel Core i7
16 GB RAM
macOS Big Sur 11.2
Update (2014-11-13): Since this question is still attracting attention, I'd like to point out that the problem itself still seems to exist, although the mentioned stuttering might not be visible anymore in the provided demo on modern hardware. Older devices might still see performance issues.
Update II (2021-02-17): The problem still persists, but you will have to increase the number of boxes being moved in the demo depending on the hardware used. I changed the UI of the demo app so you can now adjust the number of boxes moved to create a stuttering animation on your specific hardware. To replicate the issue, I recommend creating enough boxes to see stuttering with GPU/hardware acceleration enabled. Then untick the box and run the test again without acceleration. The animation should be smoother.
I always add:
-webkit-backface-visibility: hidden;
-webkit-perspective: 1000;
when working with 3D transforms, even "fake" 3D transforms. Experience tells me that these two lines always improve performance, especially on iPad but also in Chrome.
I did test it on your example and, as far as I can tell, it works.
As for the "why" part of your question... I don't know. 3D transforms are a young standard, so implementations are choppy. That's why it's a prefixed property: for beta testing. Someone could file a bug report or a question and have the team investigate.
Edit (August 19th, 2013):
There's regular activity on this answer, and I just lost an hour finding out that IE10 also needs this.
So don't forget:
backface-visibility: hidden;
perspective: 1000;
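Put together, a rule for the animated elements might look roughly like this (a sketch only; .box is a placeholder class, and note that the unprefixed perspective property requires a length unit in modern browsers, unlike the old -webkit- form):
.box {
  -webkit-transform: translate3d(0, 0, 0);  /* promotes the element to its own layer */
  -webkit-backface-visibility: hidden;      /* skip rendering of the hidden back face */
  -webkit-perspective: 1000;                /* old prefixed form accepted a bare number */
  backface-visibility: hidden;
  perspective: 1000px;                      /* the unprefixed property needs a unit */
}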
The reason the animation was slower when you added the null transform hack (translateZ(0)) is that each null 3D transform creates a new layer. When there are too many of these layers, rendering performance suffers because the browser needs to send a lot of textures to the GPU.
The problem was noticed in 2013 on Apple's homepage, which abused the null transform hack. See http://wesleyhales.com/blog/2013/10/26/Jank-Busting-Apples-Home-Page/
The OP also correctly noticed the explanation in their comment:
Moving a few big objects is more performant than moving lots of small items when using 3D acceleration, because all the 3D-accelerated layers have to be transferred to the GPU and back. So even if the GPU does a good job, the transfer of so many objects might be a problem, so that using GPU acceleration might not be worth it.
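One way to act on that observation (a sketch of mine, not code from the demo) is to promote a single wrapper to its own layer and move the group as a whole, instead of handing each of the 10,000 boxes its own layer - assuming the boxes can move together:
/* Expensive: applying the null transform to every box means thousands of
   separate GPU layers/textures. */
.box {
  transform: translate3d(0, 0, 0);
}
/* Cheaper: promote only a wrapper; the boxes inside it are rendered into
   that single texture and the whole group is moved at once. */
.box-group {
  transform: translate3d(0, 0, 0);
}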
Interesting. I've tried playing with a few options in about:flags, specifically these ones:
GPU compositing on all pages: Uses GPU-accelerated compositing on all pages, not just those that include GPU-accelerated layers.
GPU Accelerated Drawing: Enable GPU accelerated drawing of page contents when compositing is enabled.
GPU Accelerated Canvas 2D: Enables higher performance of canvas tags with a 2D context by rendering using Graphics Processor Unit (GPU) hardware.
I enabled those, tried it, and it failed miserably with the tickbox enabled (just like it did for you). But then I noticed yet another option:
FPS counter: Shows a page's actual frame rate, in frames per second, when hardware acceleration is active.
Given the highlight in the flag description, I'd speculate that hardware acceleration was, in fact, on for me even without the ticked checkbox, since I saw the FPS counter with the options above turned on.
TL;DR: hardware acceleration is, in the end, a user preference; forcing it with a dummy translateZ(0) will introduce redundant processing overhead, giving the appearance of lower performance.
Check chrome://flags in Chrome. It says:
"When threaded compositing is enabled, accelerated CSS animations run on the compositing thread. However, there may be performance gains running with accelerated CSS animations, even without the compositor thread."
My experience is that GPUs aren't generally faster for all kinds of graphics. For very "basic" graphics they can be slower.
You might have gotten a different result if you were rotating an image - that's the kind of thing GPUs are good at.
Also consider that translateZ(0) is an operation in 3 dimensions, while changing top or left is a 2-dimensional operation.
I looked at your two demos, and I think I know the source of the confusion:
Animated elements should not use left or top to change their position; use -webkit-transform instead.
All child elements need hardware acceleration turned on, for example by using translateZ() or translate3d().
FPS measures animation fluency, and your demo averages only about 20 FPS.
The above is only a personal opinion, thank you!
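As an illustration of the first two points, moving a box down the screen with the compositor instead of with layout might look like this (a minimal sketch; the names are mine and the parent is assumed to be positioned and full-height):
/* Layout-driven: the browser recomputes layout position on every frame. */
@keyframes fall-top {
  from { top: 0; }
  to   { top: 100%; }
}
/* Compositor-driven: the element is moved as a GPU layer. */
@keyframes fall-transform {
  from { transform: translate3d(0, 0, 0); }
  to   { transform: translate3d(0, 100vh, 0); }
}
.box {
  animation: fall-transform 2s linear forwards;
}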
Related
I wonder whether it is more beneficial to use the abilities of QML for animations, or to prefer animation files (such as GIF or MNG) for simple, small-scale animations.
Examples for what I call "simple, small-scale animations" are:
turning hourglasses
the rotating dots known from video platforms while content is loading
flashing alert symbols
the "recharging buttons" known from many RPGs, used for special attacks
I don't know much about the internals of Qt, so I am unsure whether I benefit from hardware acceleration when programming the animations (e.g. image rotation) or not, and if so, whether this hardware acceleration outperforms the display of pre-calculated animations from GIF and MNG.
Greetings and thanks,
-m-
I wouldn't worry about things like this unless they visibly slow the performance of your application. Some points to consider:
The use cases you mentioned almost always involve only one "busy indicator" being visible at a time.
Both Image and AnimatedImage have the high DPI "@Nx" file look-up (e.g. image@2x.png).
Both Image and AnimatedImage support caching.
Both Image and AnimatedImage will use the Qt Quick scene graph to display the images (OpenGL textures, which should result in hardware acceleration).
AnimatedImage has to read several images, but won't require rotation.
Rotation of images is pretty cheap, as far as I know.
It's trivial to swap out one with the other, or with something else.
If you're looking for good general performance advice, read the Performance Considerations And Suggestions documentation.
Are there any reasons not to hardware-accelerate everything with
transform: translate3d(0,0,0);
using * as the selector?
What things should be hardware accelerated and what things not?
@IMUXIxD You ask a really good question, and the answer is no, you shouldn't hardware-accelerate everything. It may seem to solve an issue, but it can actually cause several other issues. It can also cause weird display issues when you're trying to z-index items, as hardware-accelerating an object lifts it onto its own compositing layer while animating, which changes how it stacks.
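As a rule of thumb, promote only the elements you actually animate instead of everything (a sketch; the selectors are placeholders of my choosing):
/* Don't: forces a compositing layer for every element on the page. */
* {
  transform: translate3d(0, 0, 0);
}
/* Do: promote just the elements you are about to animate. */
.carousel-slide,
.modal {
  transform: translateZ(0);
  will-change: transform; /* modern, more explicit way to declare the intent */
}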
I wrote an extensive article on my understanding of and tests with hardware acceleration here: http://blog.zindustriesonline.com/gpu-accelerated-animations-and-composite-layering/
It also has a very good video on the subject from Matt Seeley, an engineer at Netflix.
I hope this helps you better understand the benefits and drawbacks of using hardware acceleration and which use cases suit it best.
I'm working on a drawing application which requires high levels of accuracy, and I'm wondering which of the major browser platforms (including the HTML Canvas element, and Flash) give the best sub-pixel layout accuracy, both for drawn elements (rectangles in the Canvas or Flash, absolutely positioned DIVs in the browser), and for text.
There are a number of posts related to this, both on this site and others (see the list at the bottom), but many are quite old, and none summarises the current situation.
My understanding is that Flash has native support for sub-pixel positioning, using twips to position objects to one twentieth of a pixel, and that when the TextLayoutFramework is used, this accuracy also extends to text. There is at least one report, however, that this doesn't work properly in Chrome. Can anyone confirm this?
My understanding of the situation in the browsers is that Firefox 14+ supports sub-pixel positioning for text and drawn elements, both in page layout and within the Canvas, but I haven't been able to ascertain how accurate this is.
I understand Chrome (as of v21) does not support sub-pixel positioning at all.
I understand IE9 doesn't support sub-pixel positioning, but it appears from the MS blog post linked below that IE10 will.
I don't know if there's any Mac/PC variance in this, and I don't know also if the accuracy of Flash varies between platforms and/or browsers.
I understand a summary question like this may provoke some debate, but I believe this is specific enough for people to provide useful answers, and hope that this thread can be a reference for the state of positioning accuracy up to now.
Some references:
http://blogs.msdn.com/b/ie/archive/2012/02/17/sub-pixel-rendering-and-the-css-object-model.aspx
Sub-pixel rendering in Chrome Canvas
http://johnblackburne.blogspot.co.uk/2011/11/twips.html
http://ejohn.org/blog/sub-pixel-problems-in-css/
Sub Pixel CSS positioning
https://productforums.google.com/forum/?fromgroups=#!topic/chrome/pRt3tiVIkSI
Currently, you can expect the best rounding and sub-pixel support to come from Mozilla with IE as the runner up. IE might end up being more fine tuned, but their release cycles are so long that Mozilla is likely to stay ahead of them.
As far as doing sub-pixel layout, you may be chasing a wisp, because the sub-pixel advantage improves anti-aliasing issues, not screen location accuracy. Your image will never be more accurate than 1 pixel from the true position, regardless of sub-pixel support.
The reason why some browsers don't zoom properly has nothing to do with sub-pixel support, it is because they are not remembering the exact position and rounding correctly. In other words, they are prematurely rounding the position and that causes the image to be mis-aligned.
Short answer:
No. It is NOT possible/documented.
And even if determined experimentally, it is NOT guaranteed to remain the same in future.
Long answer:
At sub-pixel accuracies, there is a lot of variance among browsers, operating systems and hardware in how the input is captured and rendered. With hardware acceleration enabled in most modern browsers, there are a large number of variations in rendering across different PCs running different browsers on different operating systems. So much so that it is possible to identify every unique user by the slight variations in the rendered output of a common sample.
Rather than worrying about the discrepancies in the underlying frameworks, how about designing the UI of your drawing application to be independent of those problems? A couple of methods I can think of right now are:
Allow editing the image at zoomed/magnified levels.
Design a snap-to-grid method for elements.
Update:
The "zoom" operation would your custom implementation and NOT a feature of the underlying frameworks. So if you need sub-pixel accuracy to the order of 1/10th of a pixel, one would need to have a 10x_zoom() implemented as part of you web-app which would render the data from
1st pixel --> 10x10pixels at (0,0),
2nd pixel --> 10x10pixels starting from (11,11).
This way one would have a very magnified view of the data, but the framework is blissfully unaware of all this and renders accurate to the onscreen-pixel(which in our case now is 1/10th of the image pixel).
Also an important thing to note that this operation would consume a lot of memory if done for the entire image at once. Hence doing this for ONLY the visible part of the image in a "zoom-window" would be faster and a less memory intensive process.
Once implemented in your drawing web-app the sub-pixel inaccuracies in the frameworks might not turn out to be a problem for the user as he can always switch into these modes and provide accurate input.
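As an illustration only (my sketch of the zoom-window idea, not code from this answer): a magnified, non-smoothed view of a region can be built by scaling the drawing surface so each image pixel covers a 10x10 block of screen pixels. The class names are hypothetical.
/* Hypothetical zoom window. */
.zoom-window {
  overflow: hidden;            /* only the visible region is shown magnified */
}
.zoom-window .drawing-surface {
  transform: scale(10);        /* 1 image pixel -> 10x10 on-screen pixels */
  transform-origin: 0 0;
  image-rendering: pixelated;  /* keep hard pixel edges instead of smoothing */
}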
I've often been told that CSS 3D transforms are hardware accelerated in Mobile Safari which makes me wonder if the implication is that 2D transforms are not? I can think of no reason why they wouldn't be, since they can basically all be implemented as 3D transforms, but I would like to know for sure.
If it turns out that 2D transforms are not hardware accelerated, any insight as to why would be much appreciated.
You're right, CSS 2D transforms aren't hardware accelerated in Mobile Safari, but 3D transforms are. I'm not sure why it's that way, but perhaps they decided it was overkill for most 2D transforms. Using the GPU unnecessarily could adversely affect battery life.
It's very easy to convert a 2D transform to a 3D transform, so it isn't much of a problem. One trick is to use translateZ(0) as described here: http://creativejs.com/2011/12/day-2-gpu-accelerate-your-dom-elements/
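For example, the same 2D movement written so that it takes the 3D (hardware-composited) path might look like this (a sketch; the class names are illustrative):
/* 2D transform: not hardware accelerated in Mobile Safari. */
.card {
  -webkit-transform: translate(100px, 0);
}
/* Equivalent 3D form: triggers hardware compositing. */
.card-accelerated {
  -webkit-transform: translate3d(100px, 0, 0);
  /* or keep the 2D transform and append the null hack:
     -webkit-transform: translate(100px, 0) translateZ(0); */
}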
EDIT
Apple doesn't say anything about it in their documentation, so it is difficult to get an authoritative source. Here is what Dean Jackson from Apple had to say about it (from http://mir.aculo.us/2010/08/05/html5-buzzwords-in-action/):
In essence, any transform that has a 3D operation as one of its functions will trigger hardware compositing, even when the actual transform is 2D, or not doing anything at all (such as translate3d(0,0,0)). Note this is just current behaviour, and could change in the future (which is why we don’t document or encourage it). But it is very helpful in some situations and can significantly improve redraw performance.
Ariya Hidayat from Sencha wrote a post explaining hardware acceleration on mobile browsers: http://www.sencha.com/blog/understanding-hardware-acceleration-on-mobile-browsers/. Here's a snippet from the post:
The best practice of setting the CSS transformation matrix to translate3d or scale3d (even though there is no 3-D involved) comes from the fact that those types of matrix will switch the animated element to have its own layer which will then be composited together with the rest of the web page and other layers. But you should note that creating and compositing layers come with a price, namely memory allocation. It is not wise to blindly composite every little element in the web page for the sake of hardware acceleration, you’ll eat memory.
Here is an article from html5rocks.com that discusses hardware acceleration: http://www.html5rocks.com/en/tutorials/speed/html5/. Here's a snippet from it:
Currently most browsers only use GPU acceleration when they have a strong indication that an HTML element would benefit from it. The strongest indication is that a 3D transformation was applied to it. Now you might not really want to apply a 3D transformation, but still gain the benefits from GPU acceleration - no problem. Simply apply the identity transformation: -webkit-transform: translateZ(0);
Firefox and Internet Explorer already use hardware acceleration for 2D transforms, so I wouldn't be surprised if the WebKit browsers (Chrome, Safari) include it in the near future.
I've written a helpdesk monitor application that is designed to sit on a big plasma screen in a support department. The application rotates through 5 views; the content of most of those screens is different, but they have some common components: one Silverlight control and a CSS background image.
I'm worried that over a period of time these will get burnt into the screen. I've looked into techniques to prevent this, and some people suggest moving the image by one pixel every few seconds or displaying a different view.
I just don't know if these techniques are sufficient.
Does ensuring that I use a different CSS background and a bit of Silverlight animation 1-50% of the time actually fix this problem? The same image will be in the same place the remaining 50-99% of the time.
Check the documentation for the plasma screen. I did hear that many of them counter burn-in by running colour flashes at some points, and it is not as big a problem with modern plasma screens.
From what I've heard, this is a common complaint because of the annoying channel logos in the corner of screens, so manufacturers had to do something about it.
What I am saying is, I think your hardware will probably manage it anyway.
Ryan
It depends on the plasma screen you use. Some manufacturers take steps to reduce the risk of it happening. However, if it does happen, I've found that there is something called JScreenFix that can be used to remove the burn-in. The basic problem is caused by the image on the screen not changing. You can either make sure the image moves at least slightly over time or reduce the contrast to reduce the risk.
Also, if possible, you should use an LCD screen instead; LCDs are technically not susceptible to burn-in, though they sometimes suffer from image persistence, which is not permanent.
Check out these links for more detailed information:
http://www.plasmatvbuyingguide.com/plasmatv/plasmatv-burnin.html
http://www.wikihow.com/Use-JScreenFix-to-Remove-Plasma-Screen-Burn-in
http://compreviews.about.com/od/monitors/a/LCDBurnIn.htm
The comment that new plasma displays do not burn in as easily is only partly valid, since your department will probably buy the cheapest plasma it can find.
mezoid is right. Reduce brightness and contrast and turn it off at night, but I have found that burn-in isn't that serious. We have a few monitors at work for this purpose, and although there is obvious burn-in around the borders of windows, we can still see the important data very clearly.
If you are not presenting this to customers, it should be okay, although the staff may make fun of it occasionally :)
Plus, if you run JScreenFix every couple of months as mezoid proposed, you should be okay.
Just be careful with JScreenFix: note that it works merely by burning in the rest of the screen, simply changing your perception of the burn-in, and it will, over time, wash out your monitor.
There's an idea I haven't tried but that might help: if you phase the obvious static problem area through the 3 primary colours, or the 3 secondary colours, or both, you would only burn each pixel channel for 1/3rd of the time, effectively tripling the time it takes for burn-in to occur.
I think the risk of screen burn is much smaller than it used to be.
And why even bother if the screen will only be used to display the same view all the time? If the same image is kept in place all the time, it doesn't really matter if it gets burnt into the screen or not :-)
If you still would like to take measures, I would also suggest some animation or moving the image around a bit when the view rotates.
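For the CSS background specifically, a tiny periodic nudge can be enough; something along these lines (a sketch, assuming the background sits on its own element and using a class name of my choosing; in older browsers the same nudge could be driven from script instead):
/* Shift the background by one pixel and back once a minute so no pixel
   displays exactly the same value indefinitely. */
@keyframes anti-burn-nudge {
  0%, 100% { background-position: 0 0; }
  50%      { background-position: 1px 1px; }
}
.dashboard-background {
  animation: anti-burn-nudge 120s step-end infinite;
}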
[EDIT]
Forgot something... A lot depends on the time between view rotations. If you only switch the view (and image) every few hours, the risk is a lot greater than if you switch to a different view every ten minutes...
[/EDIT]
I've used this program with pretty good success. You can probably create something similar in your program.
http://www.e-motional.com/TScreenLock.htm
Plasma Screen Saver Option (TSL-PRO only): A black bar of variable width floats across the screen, preventing plasma screen burn-in. This option allows TSL to be used as a plasma screensaver.