I have a long, busy for loop that is supposed to call addElement() on the stage iteratively. Since it takes several seconds to execute the whole loop (i = 1..N), I want to refresh the stage on each iteration so that the output is displayed before the loop reaches its final point (N). Each iteration should add a displayable element before the next iteration begins.
For this I wrote the following:
for (var i:int = 0; i < 280; i++) {
    addElement(...);
    validateNow();
}
But it is not working the way I want. Does anyone have a solution?
You need to divide up this lengthy work so that it can occur over multiple frames. Flash/Flex does not update the screen while your code is executing. Have a look at the Flash "elastic racetrack" to help understand why. The Flex component life cycle adds another layer on top of that as well. By the way, calling validateNow() can be computationally expensive, possibly making your loop take even longer :)
There are a number of ways to break up the work in the loop.
Here's a simple example using callLater():
private function startWorking():void
{
    // create a queue of work (whatever you are iterating over);
    // since your loop just counts to 280, you could use a
    // simple counter variable here instead
    var queue:Array = [ a, b, c ];
    callLater(doWork, [ queue ]);
}

private function doWork(workQueue:Array):void
{
    var workItem:Object = workQueue.shift();
    // do the expensive work to create the element, then add it to the screen
    addElement(...);

    // schedule the next chunk so the player can render in between
    if (workQueue.length > 0)
        callLater(doWork, [ workQueue ]);
}
You may want to process more than one item at a time. You could also do the same thing using the ENTER_FRAME event instead of callLater() (in essence, this is what callLater() is doing).
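For reference, here is a minimal sketch of that ENTER_FRAME variant, assuming the same queue idea as above and that flash.events.Event is imported; the batch size, the workQueue member, and the handler name are illustrative:

private var workQueue:Array;

private function startWorking():void
{
    workQueue = [ a, b, c ]; // or fill it from your 280-item loop
    addEventListener(Event.ENTER_FRAME, onEnterFrame);
}

private function onEnterFrame(event:Event):void
{
    // process a small batch each frame so the player gets a chance to render in between
    var batch:int = 10;
    while (batch-- > 0 && workQueue.length > 0)
    {
        var workItem:Object = workQueue.shift();
        addElement(...); // the same expensive per-item work as before
    }
    if (workQueue.length == 0)
        removeEventListener(Event.ENTER_FRAME, onEnterFrame);
}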
References:
validateClient() docs, calling validateNow() results in a call to this expensive method
I am creating a small program that takes user input into a model and then shows that input in several views that take it through filters.
When the user clicks the button that accepts the input, the program updates the number of cells in the views and then resizes those cells as necessary so that they fit neatly in their area.
My problem is that the cell resizing doesn't seem to work for one of the views for some reason (I tried looking for differences but couldn't find a reason for what I'm experiencing).
I'm calling the cell resizing function in two places:
dataChanged slot.
resizeEvent slot.
If the cell resize function gets called twice inside dataChanged, then the view does update; however, this involves some calculations and UI access, and it is obviously not supposed to happen.
If I resize my window then the cells are resized properly.
I suspect that I'm always one update behind: the view doesn't paint until the next update starts getting calculated, and then that new update is on hold until the following calculation (since resizing happens many times in succession, it might just behave the same as the button case but be harder or impossible to notice).
I have some dirty workarounds:
As I mentioned, if I call my cell resize function again, the view updates properly.
If I remove the second "if" in this next piece of code then everything works.
I thought I'd save my computer some work by only processing when the entire input had been received. My thinking was that, although dataChanged is emitted for every single item I'm inserting, I only really need to update once it is all in:
void MainWindow::on_dataChanged()
{
    static int left_to_insert = -1;
    if ( 0 > left_to_insert )
    {
        left_to_insert = m_model.rowCount() - 1;
    }
    if ( 0 == left_to_insert )
    {
        ...
        m_matrix_proxy.resize_to_fit();
        adjust_matrix_cells_sizes();
    }
    --left_to_insert;
}
Is it bad to only process the last signal? Why?
I tried calling update() and/or repaint() on both the matrix and the main window.
I tried calling both of these on the viewport of the QTableView and tried calling them in succession from the matrix itself to the highest parent that didn't make my program crash. (ui->matrix->parentWidget()->parentWidget()...)
I tried qApp->processEvents().
I even resorted to emitting a resizeEvent, but this is overkill IMO as it makes some calculations be performed again.
Just in case it is somehow relevant: The data appears correctly. The only thing that's wrong is that the cells don't resize.
You need to emit the layoutChanged signal from your model. But be careful with large numbers of items, because handling this signal may take a lot of time.
Similar questions: one, two
The logic in the only code sample you have given is wrong, and the static keyword makes it even worse.
Actual answer:
There is a ready-made solution delivered by Qt! See the documentation of QHeaderView::ResizeMode and QHeaderView::setSectionResizeMode.
Old answer:
IMO this should look like this:
MainWindow::MainWindow()
    …
{
    …
    mNeedToResizeCells = false;
    connect(this, &MainWindow::NeedUpdateCellsSizes,
            this, &MainWindow::ResizeTableCells,
            Qt::QueuedConnection); // this is important
}

void MainWindow::on_dataChanged()
{
    if (!mNeedToResizeCells) {
        mNeedToResizeCells = true;
        emit NeedUpdateCellsSizes();
    }
}

void MainWindow::ResizeTableCells()
{
    mNeedToResizeCells = false;
    // update cell sizes here
    ui->tableView->resizeColumnsToContents();
    ui->tableView->resizeRowsToContents();
}
This way, all data updates performed in one iteration of the event loop will cause only one invocation of MainWindow::ResizeTableCells in some later iteration of the event loop.
I had an issue with setting items as selected... so I took a peek at the QGraphicsScene::selectedItems() function.
Ever since, I became really nervous about using it in loops.
In a construct like this:
foreach(QGraphicsItem* selectedItem, selectedItems())
{
    // perform action with selectedItem
}
Will every iteration re-evaluate the selectedItems() function?
(I think so, because the above becomes unstable if I change the selection inside the loop.)
I imagine this would have a big impact on my code's speed...
So I am starting to change my code everywhere to:
QList<QGraphicsItem*> sel = selectedItems();
foreach(QGraphicsItem* selectedItem, sel)
{
    // perform action with selectedItem
}
Am I correct in assuming this will speed up the program?
(Or, if I am wrong and selectedItems() doesn't really re-run all of its code each time, will it instead make things slower because of the copying, while the call in the loop would not have cost anything extra?)
I wonder what other functions should be avoided... like perhaps sceneRect() or boundingRect() for items inheriting from QGraphicsItem. Is it right to copy those to a QRectF if they are used more than once in the same function?
foreach(QGraphicsItem* selectedItem, selectedItems())
{
    // perform action with selectedItem
}
In this case, you're iterating over the returned selected items and as you've discovered, if the selection is changed, this can cause problems.
Is this better?
QList<QGraphicsItem*> sel = selectedItems();
foreach(QGraphicsItem* selectedItem, sel)
{
    // perform action with selectedItem
}
The simple answer is yes.
will it make it slower because of copying
Not necessarily. If the container is unchanged, it won't make a difference.
However, Qt implements implicit sharing, which ensures that a container such as QList shares its data in this case and only makes a copy (COW, copy on write) when one of the copies is modified.
I wouldn't worry too much over sceneRect and boundingRect, as they just return simple data types such as QRectF, rather than a container of items, so implicit sharing is not required here. Only in extreme circumstances would caching such returned values make a noticeable difference.
I have a function that takes about 10 seconds to run when called. I'd like to add a simple progress bar to show the user that something is happening, but the ProgressBar doesn't run until the function is finished.
btn.addEventListener("click", bigFunction);

private function bigFunction(event:Event):void
{
    var progress:ProgressBar = new ProgressBar();
    progress.indeterminate = true;
    progress.validateNow();
    mainPanel.addChild(progress);
    // do massive loop
}
Is there a way to force the progress bar to render before the rest of the function completes? Cheers.
The execution model for Flex/ActionScript is single-threaded. You have to take a somewhat tricky approach to handle this.
http://blogs.infosupport.com/flex-4-a-multi-threading-solution/
Take a look at this example.
I doubt this is possible. Why? Because Flash (Flex adds nothing here) is single-threaded, a 10-second function will just cause the display (and hence the browser, if you are displaying it in a browser) to freeze. Even so, you can try the following:
Force a refresh of the progress bar using
progress.invalidateDisplayList();
after the addChild().
This will add the progress bar to the redraw list.
In Flash, to repaint anything, you need a new frame to be processed by the SWF (Event.ENTER_FRAME). Your 10-second function should be split into shorter calls, or it will indeed just hang the Flash Player. If you're processing something large, do it in limited portions; you can use the getTimer() function to control the portion size.
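A rough sketch of that idea (the 30 ms budget is arbitrary, hasMoreItems() and processNextItem() are hypothetical placeholders for your own loop body, and getTimer() comes from flash.utils):

private function startLongJob():void
{
    addEventListener(Event.ENTER_FRAME, onFrame);
}

private function onFrame(event:Event):void
{
    var start:int = getTimer();
    // work for roughly 30 ms, then give the player the rest of the frame to render
    while (hasMoreItems() && getTimer() - start < 30)
    {
        processNextItem();
    }
    if (!hasMoreItems())
        removeEventListener(Event.ENTER_FRAME, onFrame);
}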
I have a component I created that works like a ViewStack, but the next indexed component slides in from one of the four sides. I've got it working well enough that it's acceptable to use, but I want to make it more efficient.
Right now I'm using a Canvas as the base component; I create a snapshot of the current view using ImageSnapshot (new Bitmap( ImageSnapshot.captureBitmapData( this ) )), and on index change I slide the new index in on top of that image.
I'm basically looking for suggestions on how to do this a better way. By taking the image after the component loads and after the slide happens, I've gotten the initial jerky moves down to a minimum, but we normally use this for transitioning grids, so it's almost always slow on the first slide or two.
Here's what some of it looks like so far:
private function creationComplete(e:Event):void
{
    tmpImage.source = new Bitmap( ImageSnapshot.captureBitmapData( this ) );
}

public function set selectedIndex(value:int):void
{
    if (_selectedIndex == value + 1)
        return;
    _selectedIndex = value + 1;

    var obj:UIComponent;
    tmpImage.height = height;
    tmpImage.width = width;
    tmpImage.visible = true;
    tmpImage.x = 0;
    //tmpImage.includeInLayout = true;

    for (var i:int = 1; i < numChildren; i++)
    {
        obj = UIComponent(getChildAt(i));
        //obj.x = width;
        if (i == _selectedIndex)
        {
            obj.visible = true;
            objDisplay = obj;
        }
        else
            obj.visible = false;
    }

    mv1.target = tmpImage;
    mv2.target = objDisplay;

    switch ( direction )
    {
        // X/Y sliding logic
    }

    parEfect.play();

    tmpImage.source = new Bitmap( ImageSnapshot.captureBitmapData( this ) );
}
If you're wondering, I'm using index 0 of the canvas for the image, and offset my custom selectedIndex by 1.
I'll post more of it if need be, but I want to keep the question down to a minimum and this pretty much sums it up.
Any help is greatly appreciated! I really want to get this component to perform better. Also, this has to be done using Flex 3.
What are mv1 and mv2? Are they Flex Effects? If so, they are notoriously slow; I recommend using TweenLite. If you absolutely need to use them, set suspendBackgroundProcessing = true on them. Last but not least, make sure you do not have a layout set on them: if you do, you are causing a re-layout every frame, which can easily bog down the animation.
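As an illustration, a minimal sketch of what the slide might look like with TweenLite (com.greensock.TweenLite from the GreenSock library; the 0.4-second duration is arbitrary, and tmpImage/objDisplay are the objects from the question):

TweenLite.to(tmpImage, 0.4, { x: -width });  // slide the snapshot out to the left
objDisplay.x = width;                        // start the new view off-screen to the right
TweenLite.to(objDisplay, 0.4, { x: 0 });     // and slide it in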
You are probably getting some memory hits from all the components being created and then immediately being converted to an image. I would definitely try adding some intelligence at creation time. Try checking the memory usage and testing it against a maximum memory load before creating the image:
http://livedocs.adobe.com/flash/9.0/ActionScriptLangRefV3/flash/system/System.html
However, I would need to look at what is being created in the object. I suspect that you are loading some pretty heavy objects in each of the views, and if you are loading data from the server for each object, there will possibly be a lag.
Set up a priority queue for creating objects within the class that is being created... e.g., if you have a menu system that is hidden by default, load the front end first, then load the menu drop-down only when a user clicks on it, or after all other immediately visible objects have been created. You will also have the advantage of being able to take a snapshot when all the immediately visible objects are in place, and before the hidden objects are created.
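For instance, a hedged sketch of deferring a hidden, heavyweight child until it is first needed (HeavyMenuView and showMenu() are hypothetical names):

// hypothetical heavy component that is hidden by default
private var menuView:HeavyMenuView;

private function showMenu():void
{
    if (!menuView)
    {
        // created only on first use, after the visible views already exist
        menuView = new HeavyMenuView();
        addChild(menuView);
    }
    menuView.visible = true;
}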
Finally, add event listeners after object creation, if you can, and remember to remove listeners asap.
Do you use Flex 3 or Flex 4?
Because if you use Flex 4, I would recommend using the AnimateFilter effect with a ShaderFilter.
Shader filters use Pixel Bender, so you can build a shader in Pixel Bender that performs the transition between your two images.
See these two videos for more info:
http://tv.adobe.com/watch/codedependent/pixel-bender-shaders-and-flex-4/
http://tv.adobe.com/watch/codedependent/shader-transitions-in-flex-4
It would be helpful to see how you're creating your Move effects, mv1 and mv2. It is possible to set combinations of the *From, *To, and/or *By attributes, or to manipulate the properties that control the tween's speed and duration, in ways that together cause "jitter" or "jerkiness" in the resulting animation.
Of course, it's also possible that you're hitting a performance barrier of some sort, but I suspect it's something more insidious. Simple translation ("x/y sliding") of any clip should perform relatively well, as long as the clip hasn't been rotated, skewed, or scaled, and as long as the processor isn't completely maxed out by some other operation going on at the same time.
In most cases, when defining a Move effect, you want to set as little information as possible, and let Flex compute the optimum values for the other things. Usually, this means setting only xTo and yTo.
Also, be sure to call end() on your tweens before you start setting up the new values (just in case any previous sequence is still running).
Finally, make sure that you're not competing with the component's layout manager during the tween. While the tween is running, you should disable layout completely (by setting autoLayout = false on your container component), or you can temporarily change the layout to an absolute layout. Either way, the tween must be allowed to move things around while it's running, and that movement must not cause the layout manager to recompute things until after it's all over. Once it's finished, you can re-enable whatever layout manager you had originally.
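Putting those last few points together, a hedged sketch (assuming mv1 and mv2 are mx.effects.Move instances, parEfect is the Parallel effect from the question, and mx.events.EffectEvent is imported):

private function playSlide():void
{
    parEfect.end();      // in case a previous slide is still running
    autoLayout = false;  // keep the layout manager out of the way during the tween

    mv1.target = tmpImage;
    mv1.xTo = -width;    // set only the *To values and let Flex work out the rest
    mv2.target = objDisplay;
    mv2.xTo = 0;

    parEfect.addEventListener(EffectEvent.EFFECT_END, onSlideEnd);
    parEfect.play();
}

private function onSlideEnd(event:EffectEvent):void
{
    parEfect.removeEventListener(EffectEvent.EFFECT_END, onSlideEnd);
    autoLayout = true;   // restore normal layout once the motion is done
}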
So I was having a debate with a fellow engineer about looping in JavaScript. The issue was about the native for loop construct versus Prototype's each() method. Now, I know there are lots of docs/blogs about for versus for-each, but this debate is somewhat different and I would like to hear what some of you think.
Let's take the following loop as an example:
Example 1
var someArray = [blah, blah, blah, ..., N];
var length = someArray.length;
for (var index = 0; index < length; index++) {
    var value = someFunction(someArray[index]);
    var someOtherValue = someComplicatedFunction(value, someArray[index]);
    // do something interesting...
}
To me, this comes as second nature, mainly because I learned how to code in C and it has carried me through. Now, I use for-each in both C# and Java (bear with me, I know we are talking about JavaScript here), but whenever I hear "for loop", I think of this guy first. Now let's look at the same example written using Prototype's each():
Example 2
var someArray = [blah, blah, blah, ..., N];
someArray.each(function(object) {
    var value = someFunction(object);
    var someOtherValue = someComplicatedFunction(value, object);
    // do something interesting...
});
In this example, right off the bat, we can see that the construct has less code; however, I think each time we loop over an object we have to create a new function to deal with the operation in question, so this would perform badly on collections with a large number of objects. My buddy's argument was that example 2 is much easier to understand and is actually cleaner than example 1 due to its functional approach. My argument is that any programmer should be able to understand example 1, since it is taught in programming 101, so the "easier" argument doesn't apply, and example 1 performs better than example 2. Then why bother with #2? After reading around, I found out that when the array size is small the overhead of example 2 is minuscule. However, people kept saying that you write fewer lines of code and that example 1 is error-prone. I still don't buy those reasons, so I wanted to know what you guys think…
You are not creating a new function on each iteration in the second example. There is only one function that keeps getting called over and over. To clarify the point that only a single function is used, consider how you would implement the each method yourself:
Array.prototype.each = function(fn) {
    for (var i = 0; i < this.length; i++) {
        // if "this" should refer to the current object, then
        // use call or apply on the fn object
        fn(this[i]);
    }
};

// prints each value (no new function is being created anywhere)
[1, 2, 3].each(function(v) { console.log(v); });
Sure, there is the overhead of calling a function in the first place, but unless you are dealing with massive collections, this is a non-issue in modern browsers.
Personally, I prefer the second approach for the simple reason that it frees me from worrying about unnecessary details that I shouldn't be concerned about in the first place. Say we are looping through a collection of employee objects: the index is just an implementation detail and in most cases, if not all, can be abstracted away from the programmer by constructs such as the each method in Prototype, which, by the way, is now standard in JavaScript, as it has been incorporated into ECMAScript 5th edition under the name forEach.
var people = [ .. ];
for (var i /* 1 */ = 0; i /* 2 */ < people.length; i++ /* 3 */) {
    people[i /* 4 */].updateRecords();
    people[i /* 5 */].increaseSalary();
}
I've marked all 5 occurrences of i inline with comments. We could so easily have eliminated the index i altogether:
people.forEach(function(person) {
    person.updateRecords();
    person.increaseSalary();
});
For me the benefit is not fewer lines of code, but removing needless details from the code. This is the same reason most programmers jumped on the Iterator in Java when it became available, and later on the enhanced for loop when it came out. The argument that the same reasoning doesn't hold for JavaScript just doesn't apply.
I'm fairly certain it does not create a new function for every iteration, but rather calls the same function on each one. It may involve a little more overhead to keep track of the iterator internally (some languages allow the list to be changed during iteration, some don't), but it shouldn't be all that different internally from the procedural version.
But any time you ask which is faster, you should ask: does it really matter? Have you put a code profiler on it and tested it with real-world data? You can spend a lot of time figuring out which is faster, but if it only accounts for 0.0001% of your execution time, who cares? Use profiling tools to find the bottlenecks that really matter, and use whichever iteration method you and your team agree is easier to use and read.
Example one is not only error-prone, it is also a poor choice for arrays of trivial length, and each (or forEach, as defined in JavaScript 1.6, though IE8 still does not support it) is definitely the better choice style-wise.
The reason for this is simple: you are telling the code what to do, not how to do it. In my tests with Firefox, the forEach method runs at about 30% of the speed of a for loop, but when you're dealing with minuscule arrays it doesn't even matter that much. It's much better to make your code cleaner and easier to understand (remember: what it's doing instead of how to do it), not only for your own sanity the next time you come back to it, but for the sanity of anyone else looking at your code.
If the only reason you're including Prototype is the .each method, you're doing it wrong. If all you want is a clean iteration method, use .forEach, but remember to define your own fallback. This is from the MDC page on forEach, a useful check that gives you a .forEach method if none exists:
if (!Array.prototype.forEach)
{
    Array.prototype.forEach = function(fun /*, thisp*/)
    {
        var len = this.length >>> 0;
        if (typeof fun != "function")
            throw new TypeError();

        var thisp = arguments[1];
        for (var i = 0; i < len; i++)
        {
            if (i in this)
                fun.call(thisp, this[i], i, this);
        }
    };
}
You can tell from how this works that a new function is not created for each item, though it is invoked once per item.
The answer is: it's subjective.
For me, example 2 is not really THAT much less code, and it also involves downloading (bandwidth) AND parsing/executing (execution time) a ~30 kB library before you can even use it. So not only is the method less efficient in and of itself, it also involves setup overhead. For me, arguing that example 2 is better is insanity; however, that's just an opinion, and many would (and are perfectly entitled to) disagree completely.
IMO the second approach is more succinct and easier to use, though it gets complicated (see the Prototype docs) if you want to use things like break or continue. See the each documentation.
So if you are doing simple iteration, using the each() function is better IMO, as it is more succinct and easier to understand, although it is less performant than a raw for loop.