I've seen many examples for animating the background-position of elements to produce nice looking scrolling backgrounds.
These examples also tend to script in reset counters that snap the background-position back to its original location after a certain number of pixels.
My question is: is it feasible to never reset the background-position for a tiled background? This would naturally produce very large background-position values over time, but if there is no significant difference in browser performance, it might be OK. I've tested IE, Firefox, and Chrome over an 8-hour period, and there didn't appear to be any negative impact, although my boxes are relatively fast.
To answer the "why not reset?" question, it just comes down to simplicity. I am animating many background elements, each with different X/Y ratios, so not having to calculate exactly when it would be pixel-perfect timing to switch back would make my life easier.
Anyone have thoughts on this?
This would naturally produce very large background-position values over time
Yes, that could eventually become a problem if your code is like
el.style.backgroundPosition= '0 '+n+'px';
When n reaches 1e21 (1000000000000000000000), its toString will switch to exponential representation, which would end up trying to set:
el.style.backgroundPosition= '0 1e21px';
which is an obvious error. It's possible some layout engines might bail out earlier, perhaps at 1<<31 pixels. But even so, if you were animating something by (say) 32 pixels 60 times a second, it would still take twelve days to reach that stage.
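A quick way to see the failure mode in a console (a minimal illustration, not specific to any browser):

// Below 1e21 a Number stringifies as plain digits; from 1e21 upward it
// switches to exponential notation, which is not a valid CSS length.
var n = 1e20;
console.log('0 ' + n + 'px'); // "0 100000000000000000000px" -- still parses
n = 1e21;
console.log('0 ' + n + 'px'); // "0 1e+21px" -- invalid, the rule is ignored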
not having to calculate exactly when it would be pixel-perfect timing to switch back would make my life easier.
Typically you'd use the modulo operator % separately on each counter rather than resetting the whole thing.
var framen1= 30; // item 1 has 30 frames of animation
var framen2= 50; // item 2 has 50 frames of animation
var framei1= 0, framei2= 0; // current frame indices
...
framei1= (framei1+1)%framen1; // 0 to 29, then back to 0
framei2= (framei2+1)%framen2; // 0 to 49, then back to 0
Or, for time-based animations:
var frametime1= 100; // item 1 updates 10 times a second
var frametime2= 40; // item 2 updates 25 times a second
...
var dt= new Date()-t0; // get time since started animation
var framei1= Math.floor(dt/frametime1) % framen1;
var framei2= Math.floor(dt/frametime2) % framen2;
document.getElementById('div1').style.backgroundPosition= '0 '+(framei1*24)+'px';
document.getElementById('div2').style.backgroundPosition= '0 '+(framei2*24)+'px';
...
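Putting the time-based version together, here's a minimal self-contained sketch (the element IDs, frame counts, and the 24px frame height are carried over from the fragments above):

var t0 = new Date(); // animation start time
var framen1 = 30, frametime1 = 100; // item 1: 30 frames, 10 updates/second
var framen2 = 50, frametime2 = 40;  // item 2: 50 frames, 25 updates/second

setInterval(function () {
    var dt = new Date() - t0; // milliseconds since the animation started
    // The modulo keeps each frame index bounded, so the computed
    // background-position offsets never grow without limit.
    var framei1 = Math.floor(dt / frametime1) % framen1;
    var framei2 = Math.floor(dt / frametime2) % framen2;
    document.getElementById('div1').style.backgroundPosition = '0 ' + (framei1 * 24) + 'px';
    document.getElementById('div2').style.backgroundPosition = '0 ' + (framei2 * 24) + 'px';
}, 20);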
I would have expected you to hit an overflow of some sort, but if an 8-hour test showed no negative impact, it may simply not be possible to make the browser overflow this way. That is just speculation, though.
Related
I have a TableView with around 40 rows and 4 columns. All of the 160 cells have a Rectangle with a gradient. I use Qt 5.13 with the Quick compiler enabled. Yet, when I animate all of these 160 cells at relatively large time intervals (100 ms), the UI becomes unresponsive. This means that rendering the gradients takes too long. In fact, if I only render 40 such cells, I can update at 100 ms intervals with ease.
The rectangles represent progress bars. They have gradients from top to bottom. However, the value (length) of the progress bars changes the gradients, too. This is why for each value (length) point, the gradients have to be recreated and rerendered.
Clearly, this is slow. What I would like to do is have the gradients cached for each value (length) point. They represent percentages, so I would only need to cache 101 of them. I am quite certain that this would improve the performance here.
However, how can I cache gradients (or any objects) myself in QML? The more general (or bonus) question is: how can I have a shared QML resource between multiple QML files?
You could try loading pre-rendered images instead of rendering the gradients, if you have access to enough memory. You could also try scaling SVGs.
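For the shared-resource part of the question, one standard mechanism is a shared JavaScript library: a .js file that starts with .pragma library is loaded once by the QML engine and its state is shared by every QML file that imports it. A minimal sketch (the file name and the get() helper are mine, not a Qt API):

// GradientCache.js -- hypothetical shared cache for QML files.
// `.pragma library` tells the QML engine to load this file once and
// share its state across all importers.
.pragma library

var cache = {}; // percentage point (0..100) -> cached object

function get(percent, create) {
    if (!(percent in cache))
        cache[percent] = create(percent); // build once, reuse afterwards
    return cache[percent];
}

Each QML file can then use it via import "GradientCache.js" as GradientCache.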
I want to make some indefinite animations inside my A-Frame web application. My animation must play indefinitely and must have a yo-yo effect: from an opacity of 0.25 to 0.75 and back, each leg taking 3000 milliseconds. For this I use the following code:
let box = document.createElement('a-box');
box.setAttribute('src', '#outer');
box.setAttribute('position', '0 0 -5');
box.setAttribute('color', 'red');
let anim = document.createElement('a-animation');
anim.setAttribute('attribute', 'opacity');
anim.setAttribute('from', '0.25');
anim.setAttribute('to', '0.75');
anim.setAttribute('fill', 'both');
anim.setAttribute('repeat', 'indefinite');
anim.setAttribute('dur', '3000');
anim.setAttribute('direction', 'alternate');
box.appendChild(anim);
document.getElementsByTagName('a-scene')[0].appendChild(box);
<script src="https://aframe.io/releases/0.5.0/aframe.min.js"></script>
<a-scene></a-scene>
As you can see, this is not working. It goes from 0.25 to 0.75 in 3000 milliseconds and then instantly jumps back to 0.25 to repeat again. The A-Frame documentation says this:
When we define an alternating direction, the animation will go back and forth between the from and to values like a yo-yo. Alternating directions only take effect when we repeat the animation.
If I use a number (say, x) instead of indefinite, the yo-yo effect works great but stops after it has repeated x times.
What can I do to fix this issue?
I think it's fixed by throwing out the fill attribute:
anim.setAttribute('fill', 'both');
It's supposed to handle the animation when it's not playing; I guess that when the repeat is indefinite, it's playing all the time and either tween.js or three.js doesn't like it.
Working demo based on your code:
https://codepen.io/gftruj/pen/qjdZmw
I tried setting it to 'none' and other values, but I only got it working with the attribute thrown out entirely.
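For reference, a working version is just the snippet from the question with the fill line left out (the '#outer' asset reference is also dropped here, since that asset isn't defined in the snippet):

let box = document.createElement('a-box');
box.setAttribute('position', '0 0 -5');
box.setAttribute('color', 'red');
let anim = document.createElement('a-animation');
anim.setAttribute('attribute', 'opacity');
anim.setAttribute('from', '0.25');
anim.setAttribute('to', '0.75');
// no fill attribute -- that's the whole fix
anim.setAttribute('repeat', 'indefinite');
anim.setAttribute('dur', '3000');
anim.setAttribute('direction', 'alternate');
box.appendChild(anim);
document.getElementsByTagName('a-scene')[0].appendChild(box);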
Heyho. I just came up with a tricky question.
I did something like this: http://jsfiddle.net/LspdF/1/
As you can see there are more than 4 sides. That's why I did the following:
Whenever the pic number is greater than the last pic number I turn the cube left, otherwise right. That's why the CSS can end up looking like this:
transform: rotateY(450deg)
Since a circle only has 360deg, the value keeps growing as it turns and turns, but always in the right direction.
Now I wanted to add a nice effect. Something like this: http://jsfiddle.net/p8a2t/
For this effect I need the 14th value of the 3D matrix created by the browser (the z-value of the translation). Since this value is unfortunately not the same as translateZ(), I have to use the matrix3d() attribute.
That's why I calculate the rotateY myself. But as you know, sin and cos are periodic and won't work with my 450deg. They reset the cube to 90deg, which makes the cube spin back very fast.
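For example, the sine/cosine entries that feed the matrix are identical for 90deg and 450deg:

// sin and cos have a period of 360deg, so a rotation matrix built from
// them cannot record how many full turns were taken.
function rad(deg) { return deg * Math.PI / 180; }
console.log(Math.cos(rad(90)), Math.sin(rad(90)));   // ~0, 1
console.log(Math.cos(rad(450)), Math.sin(rad(450))); // ~0, 1 -- same matrix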
My question: how do I prevent that? Is there any way to express a rotation of more than 360deg in the matrix?
PS: The effect is realized using transition. There may be code in the fiddles which isn't used, since I had to create both examples at once.
PPS: Sometimes the calculation has fatal numeric errors (near zero but not really zero). I tried to avoid that using toFixed, but for some reason that sometimes won't work. Same with Math.round. Note that you can break the second example by clicking many links while the animation is still not done. But that's not the point here :)
Any help is appreciated!
I've seen many Mandelbrot image generators that draw a low-resolution version of the fractal and then continuously improve it. Is this a tiling algorithm? Here is an example: http://neave.com/fractal/
Update: I've found this about recursively subdividing and calculating the Mandelbrot: http://www.metabit.org/~rfigura/figura-fractal/math.html. Maybe it's possible to use a kd-tree to subdivide the image?
Update 2: http://randomascii.wordpress.com/2011/08/13/faster-fractals-through-algebra/
Update 3: http://www.fractalforums.com/programming/mandelbrot-exterior-optimization/15/
Author of Fractal eXtreme and the randomascii blog post linked in the question here.
Fractal eXtreme does a few things to give a gradually improving fractal image:
Start from the middle, not from the top. This is a trivial change that many early fractal programs ignored. The center should be the area the user cares the most about. This can either be starting with a center line, or spiraling out. Spiraling out has more overhead so I only use it on computationally intense images.
Do an initial low-res pass with 8x8 blocks (calculating one pixel out of 64). This gives a coarse initial view that is gradually refined at 4x4, 2x2, then 1x1 resolutions. Note that each pass does three times as many pixels as all previous passes -- don't recalculate the original points. Subsequent passes also start at the center, because that is more important.
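A rough sketch of that pass structure (my own JavaScript illustration, not FX's code; the centre-out ordering is omitted for brevity):

// Progressive refinement: each pass samples every step-th pixel, skipping
// pixels already produced by the previous, coarser pass.
function renderProgressive(width, height, calcPixel, putBlock) {
    var steps = [8, 4, 2, 1];
    for (var s = 0; s < steps.length; s++) {
        var step = steps[s];
        for (var y = 0; y < height; y += step) {
            for (var x = 0; x < width; x += step) {
                // Already done in a coarser pass iff both coordinates are
                // multiples of the doubled step.
                if (step < 8 && x % (step * 2) === 0 && y % (step * 2) === 0)
                    continue;
                putBlock(x, y, step, calcPixel(x, y)); // fill a step-sized block
            }
        }
    }
}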
A multi-pass method lends itself well to guessing. If four pixels in two rows have the same value then the pixels in-between probably have the same value, so don't calculate them. This works extremely well on some images. A cleanup pass at the end to look for pixels that were miscalculated is necessary and usually finds a few errors, but I've never seen visible errors after the cleanup pass, and this can give a 10x+ speedup. This feature can be disabled. The success of this feature (guess percentage) can be viewed in the status window.
When zooming in (double-click to double the magnification) the previously calculated pixels can be used as a starting point so that only three quarters of the pixels need calculating. This doesn't work when the required precision increases but these discontinuities are rare.
More sophisticated algorithms are definitely possible. Curve following, for instance.
Having fast math also helps. The high-precision routines in FX are fully unwound assembly language (generated by C# code) that uses 64-bit multiplies.
FX also has a couple of checks for points within the two biggest bulbs, to avoid calculating them at all. It also watches for cycles in calculations -- if the exact same value shows up again then the orbit is repeating and will never escape, so the calculation can stop early.
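A sketch of those two shortcuts using the standard published formulas (again my illustration, not FX's actual assembly):

// Escape-time iteration with (1) main-cardioid and period-2-bulb tests,
// whose interiors never escape, and (2) periodicity checking: if z lands
// exactly on a previously saved value, the orbit is cycling forever.
function mandelbrotIterations(cx, cy, maxIter) {
    // Main cardioid: q*(q + x - 1/4) <= y^2/4, with q = (x - 1/4)^2 + y^2
    var q = (cx - 0.25) * (cx - 0.25) + cy * cy;
    if (q * (q + cx - 0.25) <= 0.25 * cy * cy) return maxIter;
    // Period-2 bulb: (x + 1)^2 + y^2 <= 1/16
    if ((cx + 1) * (cx + 1) + cy * cy <= 0.0625) return maxIter;

    var zx = 0, zy = 0, oldX = 0, oldY = 0;
    for (var i = 0; i < maxIter; i++) {
        var x2 = zx * zx, y2 = zy * zy;
        if (x2 + y2 > 4) return i; // escaped: i is the iteration count
        zy = 2 * zx * zy + cy;
        zx = x2 - y2 + cx;
        if (zx === oldX && zy === oldY) return maxIter; // cycle detected
        if ((i & 31) === 0) { oldX = zx; oldY = zy; } // refresh saved point
    }
    return maxIter;
}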
To see this in action visit http://www.cygnus-software.com/
I think that site is not as clever as you give it credit for. I think what happens on a zoom is this:
Take the previous image and scale it up using a standard interpolation method. This gives you the 'blurry' zoomed-in image. Click the zoom-in button several times to see this best.
Then, in concentric circles starting from the central point, recalculate squares of the image in full resolution for the new zoom level. This 'sharpens' the image progressively from the centre outwards. Because you're probably looking at the centre, you see the improvement straight away.
You can more clearly see what it's doing by zooming far in, then dragging the image in a diagonal direction, so that almost all the screen is undrawn. When you release the drag, you will see the image rendered progressively in squares, in concentric circles from the new centre.
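A simple way to get that centre-outwards order (a sketch; the block size and exact ordering are my assumptions about what the site does):

// Order image blocks by distance from the centre so recalculation
// appears to spread outwards from the middle of the view.
function centreOutBlocks(width, height, blockSize) {
    var blocks = [];
    for (var y = 0; y < height; y += blockSize)
        for (var x = 0; x < width; x += blockSize)
            blocks.push([x, y]);
    var cx = width / 2, cy = height / 2;
    blocks.sort(function (a, b) {
        var da = (a[0] - cx) * (a[0] - cx) + (a[1] - cy) * (a[1] - cy);
        var db = (b[0] - cx) * (b[0] - cx) + (b[1] - cy) * (b[1] - cy);
        return da - db;
    });
    return blocks; // recalculate full-resolution blocks in this order
}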
I haven't checked, but I don't think it's doing anything clever to treat in-set points differently - it's just that because an entirely-in-set square will be black both before and after rerendering, you can't see a difference.
The oldschool Mandelbrot rendering algorithm is the one that begins calculating pixels at the top-left position, goes right until it reaches the end of the screen, then moves to the beginning of the next line, like an ordinary typewriter (visually).
The linked algorithm is just calculating pixels in a different order, and when it calculates one, it quickly makes assumptions about certain neighboring pixels and later goes back to properly redraw them. That's where you see the improvement; think of it as displaying a progressive JPEG. If you zoom into the set, certain pixel values will remain the same (they don't need to be recalculated); the interim pixels will be guessed, quickly drawn, and later recalculated.
A continuously improving Mandelbrot is just for your eyes; it will never finish earlier than a properly calculating per-pixel algorithm that can detect "islands".
I am looking for a fairly simple image comparison method in AS3. I have taken an image from a webcam (with no subject) and passed it into BitmapData; then a second image is taken (this time with a subject). To compare the data from these two images, I would like to create a mask from the pixels that match on both bitmaps. I have been scratching my head for a while and am not really making any progress. Could anyone point me in the right direction for a pixel-comparison method, something like getPixel32()?
Cheers
Jono
Use compare() to create a difference between the two, and then use threshold() to extract the parts that interest you.
Edit: actually it is pretty straightforward. The trick is to apply the threshold once per channel (three times in total) using the mask parameter (otherwise the comparison makes little sense, since 0x010000 (which is almost black) is considered greater than 0x0000FF (which is anything but black)). Here's how:
import flash.display.BitmapData;
import flash.geom.Point;

var dif:BitmapData; // your original bitmapdata (the result of compare())
var mask:BitmapData = new BitmapData(dif.width, dif.height, true, 0);
const threshold:uint = 0x20;
// Run the threshold once per channel (blue, green, red): the mask
// parameter isolates that channel's bits, and pixels that exceed the
// threshold are painted opaque (0xFF000000) in the mask bitmap.
for (var i:int = 0; i < 3; i++)
    mask.threshold(dif, dif.rect, new Point(), ">", threshold << (i * 8), 0xFF000000, 0xFF << (i * 8));
This creates a transparent mask. The threshold is then applied for all three channels, setting the alpha channel to fully opaque wherever a channel's value exceeds the threshold value (you might want to decrease it).
You can isolate the foreground object ("the guy in front of the webcam") by copying the alpha channel from the mask to the current video image.
One of the problems here is that you want to find whether a pixel has ANY change to it, and if it does, convert that pixel to another color (for masking). Unfortunately, a webcam's quality isn't great, so even if your scene does not change at all, the BitmapData coming from the webcam will change slightly. Therefore, when your subject steps into frame you will get pixel changes for the subject, but also noise in other areas due to lighting changes or camera quality. What you'll need to do is write a function that analyzes the result of BitmapData.compare() for change in areas larger than _____ to determine whether there is enough change to warrant an actual object being there. That will help remove noise and make your mask more accurate.
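A sketch of that idea (shown in JavaScript with canvas ImageData rather than AS3; the minNeighbours rule is one simple stand-in for the "areas larger than _____" heuristic, and both parameters are mine):

// Diff two RGBA frames, threshold per channel, then drop isolated
// changed pixels that are likely camera noise.
function changeMask(a, b, width, height, tol, minNeighbours) {
    var raw = new Uint8Array(width * height); // 1 = pixel changed
    for (var i = 0; i < width * height; i++) {
        var o = i * 4;
        raw[i] = (Math.abs(a[o] - b[o]) > tol ||
                  Math.abs(a[o + 1] - b[o + 1]) > tol ||
                  Math.abs(a[o + 2] - b[o + 2]) > tol) ? 1 : 0;
    }
    var mask = new Uint8Array(width * height);
    for (var y = 1; y < height - 1; y++) {
        for (var x = 1; x < width - 1; x++) {
            var n = 0; // count changed pixels in the 3x3 neighbourhood
            for (var dy = -1; dy <= 1; dy++)
                for (var dx = -1; dx <= 1; dx++)
                    n += raw[(y + dy) * width + (x + dx)];
            // keep a changed pixel only when enough neighbours changed too
            mask[y * width + x] =
                (raw[y * width + x] && n - 1 >= minNeighbours) ? 1 : 0;
        }
    }
    return mask;
}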