Reading through Misko's excellent answer on databinding here: How does data binding work in AngularJS?, I am wondering how Angular does its dirty-checking behind the scenes, because:
I'm creating an app that prints a large number of Car objects to the DOM, each Car looking something like this:
var Car = function(settings) {
    this.name = settings.name;
    // ...many more properties
};

Car.prototype = {
    calcPrice: function() { ... },
    // ...many more methods
};

$scope.cars = [ /* lots of Cars */ ];
The linked answer above mentions a practical limit of around 2000 databound values printed in the DOM, and due to the large number of properties on each Car object, this number could very easily be exceeded in this app when looping through the cars array.
Say you end up having 2000+ values printed in the DOM through databinding, and one of these values updates. Does it affect Angular's dirty-checking performance that 2000 values are present, or does Angular somehow flag the values that change, so it only looks at the changed values when running its $digest()? In other words, does it matter that you have a lot of databound values, when only a very small number of these are likely to be updated after the initial print?
If it does matter (and since most of the values are read-only), is there some way to use the databinding syntax {{car.prop}} to get the value into the DOM once and then tell Angular not to bind to it anymore?
Would it make a difference to add getter methods to the Car object and provide its properties like this: {{car.getProp()}}?
I had the same kind of problem with an application I was working on. Having a huge data set is not a problem; the problem comes from the bindings, and ng-repeats in particular killed performance.
Part of the solution was replacing "dynamic" bindings with "static" bindings using this nice library: http://ngmodules.org/modules/abourget-angular.
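If you are on AngularJS 1.3 or later, the built-in one-time binding syntax (a leading :: in an expression) gives a similar effect without an extra library: the expression is rendered once and then removed from the watch list. A minimal sketch, assuming the cars array and properties from the question:

<!-- values prefixed with :: are rendered once and no longer dirty-checked -->
<div ng-repeat="car in ::cars">
    {{::car.name}}: {{::car.calcPrice()}}
</div>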
I'm implementing drag/drop behavior in my model, which is derived from QAbstractItemModel. My code (C++) for the drop event looks something like this:
beginInsertRows(destination_index, row, row);
destination->AcquireDroppedComponent(component);
endInsertRows();
The call to AcquireDroppedComponent can fail for a number of reasons and reject the drop, in which case no new rows will be inserted in the index stored in destination_index. My question is: will calling begin/endInsertRows cause problems if this happens? My limited testing on Windows 7 so far shows no undesirable behavior, but I want to be thorough and not rely on the specific behavior of one platform. I can check beforehand if the drop will succeed or not, but I'd like to avoid the extra code if I can. My question also applies to the other begin/end functions like beginRemoveRows, beginInsertColumns, etc.
Calling these methods without doing the actions you indicate breaks their contract. How the clients of your model will cope with that is essentially undefined.
I can check beforehand if the drop will succeed or not, but I'd like to avoid the extra code if I can.
That "extra" code is absolutely necessary.
I'd refactor your code to perform acquisition and model change separately:
if (destination->acquireDroppedComponent(component)) {
beginInsertRows(destination_index, row, row);
destination->insertDroppedComponent(component);
endInsertRows();
}
The acquireDroppedComponent would store the data of the dropped object without modifying the model, and return true if it was successful and the data is usable. You then would call insertDroppedComponent to perform the model change.
I have a ListView control which I used to populate using an ASP ObjectDataSource control. This worked fine.
However, I wanted to implement a filter that showed items in the ListView that began with the letter A, B, C, etc. To do this, I removed the ObjectDataSource control and replaced it with some code in the Page_Load event allowing me greater control over what I was passing in as the data source, similar to this:
System.Diagnostics.Debug.Print("{0:HH:mm:ss} : GET DATA", DateTime.Now);
List<MyItem> items = GetItems("A"); // Gets a list of items with a description that
// begins with A
MyListView.DataSource = items;
System.Diagnostics.Debug.Print("{0:HH:mm:ss} : BIND DATA", DateTime.Now);
MyListView.DataBind();
System.Diagnostics.Debug.Print("{0:HH:mm:ss} : DONE", DateTime.Now);
Output (times are representative of actual results):
16:00:00 : GET DATA
16:00:00 : BIND DATA
16:00:20 : DONE
Since doing this, it takes about 20 seconds to load the page in my browser, instead of around 1 second when I used the ObjectDataSource.
To load the data into my ListView rows, I use the standard <%# Eval("Description") %> method. After some searching on SO and Google, it seems some people say that Eval is inefficient.
Why does manual binding in the Page_Load event slow everything down? Is it because Eval is inefficient? How can I speed it up, and what is the correct way to do this?
It seems highly unlikely to me that the problem is the Eval statement or the fact that you're databinding in the page load, unless you're returning a very large list. Eval may be slower, but not by the amount you are seeing. There is probably another cause.
I would double check the GetItems() function. It's more likely that the selection code is somehow less efficient than it could be.
Additional things to check...
Check the properties Eval is calling. Do they do something more than just return a string? Eval will run whatever code is within those properties/methods, so make sure they are as fast as possible.
How many records are in your database? Do you have paging enabled? If so, the problem might be that the ObjectDataSource is using a more efficient method to retrieve only the number of records it intends to display, whereas your call to GetItems() is returning everything, even if it isn't being displayed. Given the huge disparity in the time to return, that is my guess as to what is happening.
If that is what is happening, you can speed it up by limiting the number of records you return. This is going to depend on your implementation of GetItems(). You'd want to write something like GetItemsPaged(int firstRecord, int pageLength) that returns only a limited amount of data, as sketched below.
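A rough sketch of what that paged method could look like (GetItemsPaged, the MyDataContext class, and the LINQ query are assumptions for illustration, not your actual data access code):

// Hypothetical paged version of GetItems: materializes only one page of matching rows.
public List<MyItem> GetItemsPaged(string startsWith, int firstRecord, int pageLength)
{
    using (var db = new MyDataContext()) // assumed data access context
    {
        return db.MyItems
                 .Where(i => i.Description.StartsWith(startsWith))
                 .OrderBy(i => i.Description)
                 .Skip(firstRecord)
                 .Take(pageLength)
                 .ToList();
    }
}

The key point is that the Skip/Take (or an equivalent TOP/ROW_NUMBER query) runs in the database, so only pageLength rows ever reach the ListView.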
I have a Flex tree control and I'm trying to select a tree node three levels down right after the dataProvider is assigned a collection object, like the following.
Basically treeItem1, treeItem2, and treeItem3 are nodes in the tree; treeItem3 is a child of treeItem2, which is a child of treeItem1. Assume treeItem1, treeItem2, and treeItem3 are referenced correctly from the collection items.
My problem is that if I wait for the whole component to load completely and then select the nodes, it opens/selects/scrolls to the index correctly. However, if I select the node right after the dataProvider is assigned, it doesn't even open or select (basically this.treeService.selectedItem is always null).
Can anyone point out what I did wrong? Is there anything that needs to happen after the dataProvider is assigned?
Thanks.
this.treeService.dataProvider = oPricingHelper.getCurrentPricingSercicesTreeSource();
this.treeService.expandItem(treeItem1, true);
this.treeService.expandItem(treeItem2, true);
this.treeService.selectedItem = treeItem3;
this.treeService.scrollToIndex(this.treeService.selectedIndex);
I have used the updateComplete event to know when a component (such as a DataGroup or List) has completed rendering after performing a simple task (such as updating the dataProvider reference). Of course, you have to be careful to remove the updateComplete listener, because it can fire a lot, unless you have a need for it to keep running.
Something like:
//...some function...
this.treeService.addEventListener(FlexEvent.UPDATE_COMPLETE, onTreeUpdateComplete);
this.treeService.dataProvider = oPricingHelper.getCurrentPricingSercicesTreeSource();
//...rest of some function...
private function onTreeUpdateComplete(event:FlexEvent):void {
this.treeService.removeEventListener(FlexEvent.UPDATE_COMPLETE, onTreeUpdateComplete);
this.treeService.expandItem(treeItem1, true);
this.treeService.expandItem(treeItem2, true);
this.treeService.selectedItem = treeItem3;
this.treeService.scrollToIndex(this.treeService.selectedIndex);
}
I'm not positive you're experiencing the same issue, but I seem to have the same type of problem with the AdvancedDataGrid. It appears that in cases where the dataProvider can be of multiple types, the components do some extra work in the background to wrap things up into something hierarchical (HierarchicalData or HierarchicalCollectionView), and in doing so the dataProvider setter call is not synchronous (it returns before actually having assigned the internal property storing the dataProvider). I've used callLater in this case with moderate success. callLater is generally a bad practice, but it basically adds a function to a list of functions to call once background processing is done; this assumes that something in the dataProvider setter called UIComponent.suspendBackgroundProcessing(), that it will subsequently call UIComponent.resumeBackgroundProcessing(), and that it will then execute the list of functions added via callLater. Alternatively you could use setTimeout(someFunction, 1000).
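A minimal sketch of that workaround, assuming this code runs inside the component that owns treeService and that selectTreeNodes is a hypothetical function containing the expand/select/scroll calls from the question:

this.treeService.dataProvider = oPricingHelper.getCurrentPricingSercicesTreeSource();
// defer the selection until the component has finished its pending processing
this.callLater(selectTreeNodes);
// or, as a cruder fallback: setTimeout(selectTreeNodes, 1000); (import flash.utils.setTimeout)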
These are both "hacks"; the real solution is to dig into the framework code and see what it's really doing when you tell it to set the dataProvider. Wherever you see that it has actually set the dataProvider, you could extend that class and dispatch an event that you could listen for, and run the function that does the selections at that point.
If anyone has a better solution, please by all means correct me (I would love to have a better answer than this).
OK, so I swear this question should be all over the place, but it's not.
I have a value object with lots of getters/setters inside. It is not a dynamic class, and I desperately need to search an ArrayCollection filled with them. The search spans all fields, and there are about 13 different types of VOs I'll be doing this with.
I've tried ObjectUtil.toString() and that works fine and all but it's slow as hell. There are 20 properties to return and ObjectUtil.toString() adds a bunch of junk to the output, not to mention the code is slow to begin with.
flash.utils.describeType() is even worse.
I'll be pleased to hear I'm missing something obvious.
UPDATE:
I ended up taking Juan's code along with the filter algorithm I use for searching and created ArrayCollectionX, which means that every ArrayCollection I use now handles its own filters. I can search through individual properties of the items in the AC, or with Juan's code it handles full-collection search like a champ. There was negligible lag compared to the same solution with external filters.
If I understand your problem correctly, what you want is a list of the getters defined for certain objects. As far as I know, you'll have to use describeType for something like this (I'm pretty sure ObjectUtil uses this method under the hood).
Calling describeType a lot is going to be slow, as you note. But for only 13 types, this shouldn't be problematic, I think. Since these types are not dynamic, you know their properties are fixed, so you can retrieve this data once and cache it. You can build your cache up front or as you find new types.
Here's is a simple way to do this in code:
// requires: import flash.utils.describeType; import flash.utils.getQualifiedClassName;
private var typePropertiesCache:Object = {};

private function getPropertyNames(instance:Object):Array {
    var className:String = getQualifiedClassName(instance);
    // return the cached list if we've already described this type
    if (typePropertiesCache[className]) {
        return typePropertiesCache[className];
    }
    var typeDef:XML = describeType(instance);
    var props:Array = [];
    // collect every readable accessor (getter/setter or getter-only)
    for each (var prop:XML in typeDef.accessor.(@access == "readwrite" || @access == "readonly")) {
        props.push(String(prop.@name));
    }
    return typePropertiesCache[className] = props;
}
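As a follow-up, here is a hedged sketch of how this could drive the all-fields search mentioned in the question; matchesSearch, searchText, and myCollection (an ArrayCollection) are assumed names, not part of the original code:

// Hypothetical filter: returns true if any readable property contains the search text.
private var searchText:String = "";

private function matchesSearch(item:Object):Boolean {
    for each (var propName:String in getPropertyNames(item)) {
        var value:* = item[propName];
        if (value != null && String(value).toLowerCase().indexOf(searchText.toLowerCase()) != -1) {
            return true;
        }
    }
    return false;
}

// usage (e.g. in a search handler):
searchText = "fiat";
myCollection.filterFunction = matchesSearch;
myCollection.refresh();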
I am having a problem with the speed of accessing an association property with a large number of records.
I have an XAF app with a parent class called MyParent.
There are 230 records in MyParent.
MyParent has a child class called MyChild.
There are 49,000 records in MyChild.
I have an association defined between MyParent and MyChild in the standard way:
In MyChild:
// MyChild (many) and MyParent (one)
[Association("MyChild-MyParent")]
public MyParent MyParent;
And in MyParent:
[Association("MyChild-MyParent", typeof(MyChild))]
public XPCollection<MyCHild> MyCHildren
{
get { return GetCollection<MyCHild>("MyCHildren"); }
}
There's a specific MyParent record called MyParent1.
For MyParent1, there are 630 MyChild records.
I have a DetailView for a class called MyUI.
The user chooses an item in one drop-down in the MyUI DetailView, and my code has to fill another drop-down with MyChild objects.
The user chooses MyParent1 in the first drop-down.
I created a property in MyUI to return the collection of MyChild objects for the selected value in the first drop-down.
Here is the code for the property:
[NonPersistent]
public XPCollection<MyChild> DisplayedValues
{
    get
    {
        MyParent theParentValue;
        XPCollection<MyChild> theChildren;

        // get the parent value selected in the first drop-down
        theParentValue = this.DropDownOne;
        if (theParentValue == null)
        {
            // nothing selected yet
            return null;
        }

        // get the child values for the parent
        theChildren = theParentValue.MyChildren;
        return theChildren;
    }
}
I marked the DisplayedValues property as NonPersistent because it is only needed for the UI of the DetailView. I don't think that persisting it will speed up the creation of the collection the first time, and after it's used to fill the drop-down, I don't need it, so I don't want to spend time storing it.
The problem is that it takes 45 seconds to call theParentValue = this.DropDownOne.
Specs:
Vista Business
8 GB of RAM
2.33 GHz E6550 processor
SQL Server Express 2005
This is too long for users to wait for one of many drop-downs in the DetailView.
I took the time to sketch out the business case because I have two questions:
How can I make the associated values load faster?
Is there another (simple) way to program the drop-downs and DetailView that runs much faster?
Yes, you can say that 630 is too many items to display in a drop-down, but this code is taking so long I suspect that the speed is proportional to the 49,000 and not to the 630. 100 items in the drop-down would not be too many for my app.
I need quite a few of these drop-downs in my app, so it's not appropriate to force the user to enter more complicated filtering criteria for each one. The user needs to pick one value and see the related values.
I would understand if finding a large number of records was slow, but finding a few hundred shouldn't take that long.
Firstly, you are right to be sceptical that this operation should take this long. XPO read operations should add only between 30% and 70% overhead, and on this tiny amount of data we should be talking milliseconds, not seconds.
Some general perf tips are available in the DevExpress forums, and centre around object caching, lazy vs. deep loads, etc., but I think in your case the issue is something else. Unfortunately it's very hard to second-guess what's going on from your question, other than to say it's highly unlikely to be a problem with XPO and much more likely to be something else. I would be inclined to look at your session creation (this also creates your object cache) and your SQL connection code (the IDataStore stuff). Connections are often slow if hosts cannot be resolved cleanly, and if you are not pooling / re-using connections this problem can be exacerbated.
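As an illustration of the "re-use the connection" point, a minimal XPO-only sketch (the server and database names are placeholders, and XAF normally wires this up for you): build the data layer once and share it, with ADO.NET pooling left enabled in the connection string.

// construct one data layer up front and re-use it for every Session
string connStr = MSSqlConnectionProvider.GetConnectionString("myServer", "MyDatabase") + ";Pooling=true";
XpoDefault.DataLayer = XpoDefault.GetDataLayer(connStr, AutoCreateOption.SchemaAlreadyExists);

using (var session = new Session(XpoDefault.DataLayer))
{
    // queries issued here re-use the pooled connection instead of opening a new one
}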
I'm unsure why you would be doing it the way you are. If you've created an association like this:
public class A : XPObject
{
    [Association("a<b", typeof(B))]
    public XPCollection<B> Bs { get { return GetCollection<B>("Bs"); } }
}

public class B : XPObject
{
    [Association("a<b"), Persistent("Aid")]
    public A a { get; set; }
}

then when you want to populate a dropdown (like a LookupEdit control):
A myA = GetSomeParticularA();
lupAsBs.Properties.DataSource = myA.Bs;
lupAsBs.Properties.DisplayMember = "WhateverPropertyName";
You don't have to load A's children; XPO will load them as they're needed, and there's no session management necessary for this at all.
Thanks for the answer. I created a separate solution and was able to get good performance, as you suggest.
My SQL connection is OK and works with other features in the app.
Given that I'm using XAF and not doing anything extra/fancy, aren't my sessions managed by XAF?
The session I use is read from the DetailView.
I'm not sure about your case; I just want to share some of my experiences with XAF.
The first time you click on a drop-down (lookup list) control in a detail view, two queries are sent to the database to populate the list. In my tests, sometimes the entire object is loaded into the source collection, not just the ID and Name properties as we thought, so depending on your objects you may want to use lighter ones for lists. You can also turn on Server Mode for the list, so that only 128 objects are loaded at a time.