I am writing a Windows 8 Store application where I have a view model that contains a rather large observable collection. Every once in a while we get an event from the model that kicks off a pretty long process to update the observable collection. It works great, except that the process currently runs on the UI thread and locks the UI for a few seconds, which is pretty bad practice for UI development.
I wanted to move the heavy calculation into a task, so that the calculations are made without blocking the UI and the updates to the observable collection are done one at a time on the UI thread. In WPF there is a mechanism that allows exactly that, using the following code in the view model constructor:
var myLock = new object();
var myList = new ObservableCollection<ItemType>();
BindingOperations.EnableCollectionSynchronization(myList, myLock);
and then wrapping each update to the observable collection in a lock block. I tried to do the same in WinRT, but it appears that BindingOperations does not have this method there.
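For reference, the wrapped updates then look roughly like this (the AddItem helper is illustrative):
private readonly object myLock = new object();
private ObservableCollection<ItemType> myList;

// Called from a background Task. WPF acquires the same lock before
// touching the collection, so cross-thread updates become safe.
private void AddItem(ItemType item)
{
    lock (myLock)
    {
        myList.Add(item);
    }
}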
Is there any acceptable best practice to accomplish the same thing in WinRT?
Thanks!
I am using the Unit of Work and Repository patterns along with EF6 in my ASP.NET web application. A DbContext object is created and destroyed on every request.
I am thinking that it is costly to create a new DbContext on every request (I have not done any performance benchmarking).
Can the cost of creating a DbContext on every request be ignored? Has anybody done some benchmarking?
Creating a new context is ridiculously cheap, on the order of about 137 ticks on average (0.0000137 seconds) in my application.
Hanging onto a context, on the other hand, can be incredibly expensive, so dispose of it often.
The more objects you query, the more entities end up being tracked in the context. Since entities are POCOs, Entity Framework has absolutely no way of knowing which ones you've modified except to examine every single one of them in the context and mark it accordingly.
Sure, once they're marked, it will only make database calls for the ones that need updating, but determining which ones need updating is expensive when there are lots of entities being tracked, because it has to check all the POCOs against known values to see if they've changed.
This change tracking when calling save changes is so expensive that, if you're just reading and updating one record at a time, you're better off disposing of the context after each record and creating a new one. The alternative, hanging onto the context, means every record you read results in a new entity in the context, and every time you call save changes it's slower by one entity.
And yes, it really is slower. If you're updating 10,000 entities for example, loading one at a time into the same context, the first save will only take about 30 ticks, but every subsequent one will take longer to the point where the last one will take over 30,000 ticks. In contrast, creating a new context each time will result in a consistent 30 ticks per update. In the end, because of the cumulative slow-down of hanging onto the context and all the tracked entities, disposing of and recreating the context before each commit ends up taking only 20% as long (1/5 the total time)!
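In code, that per-record pattern is simply a fresh context per unit of work (MyDbContext, Widgets, and the price update are illustrative):
foreach (var id in idsToUpdate)
{
    // A new context every time: only one entity is ever tracked,
    // so SaveChanges stays consistently fast.
    using (var db = new MyDbContext())
    {
        var widget = db.Widgets.Find(id);
        widget.Price += 1m;
        db.SaveChanges();
    }
}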
That's why you should really only call save changes once on a context, ever, then dispose of it. If you're calling save changes more than once with a lot of entities in the context, you may not be using it correctly. The exceptional case, obviously, is when you're doing something transactional.
If you need to perform some transactional operation, then you need to manually open your own SqlConnection and either begin a transaction on it, or open it within a TransactionScope. Then you can create your DbContext by passing it that same open connection. You can do that over and over, disposing of the DbContext object each time while leaving the connection open. Usually, DbContext handles opening and closing the connection for you, but if you pass it an already-open connection, it won't try to close it automatically.
That way, you treat the DbContext as just a helper for tracking object changes on an open connection. You create and destroy it as many times as you like on the same connection, where you can run your transaction. It's very important to understand what's going on under the hood.
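A rough sketch of that pattern, assuming a MyDbContext that exposes a constructor forwarding to DbContext(DbConnection connection, bool contextOwnsConnection):
using (var scope = new TransactionScope())
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    // First unit of work on the shared, already-open connection.
    using (var db = new MyDbContext(conn, contextOwnsConnection: false))
    {
        // ... read and modify entities ...
        db.SaveChanges();
    }

    // Second unit of work: new context, same connection and transaction.
    using (var db = new MyDbContext(conn, contextOwnsConnection: false))
    {
        // ... more changes ...
        db.SaveChanges();
    }

    scope.Complete(); // commit everything atomically
}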
Entity Framework is not thread safe, meaning you cannot use a context across more than one thread. IIS uses a thread for each request sent to the server. Given this, you have to have a context per request. Otherwise, you run a major risk of unexplained and seemingly random exceptions, and potentially of incorrect data being saved to the database.
Lastly, context creation is not that expensive an operation. If you are experiencing a slow application (not on first start, but after using the site), your issue probably lies somewhere else.
We are working on a highly interactive design tool using Meteor. Meteor has done a great job for us since we migrated to Meteor last year. It has simplified our development dramatically. But recently we found a big performance issue in our app.
After investigating deeply, I found out that it's due to how we update the database. Our app is a quite complicated design tool, and it holds a lot of data for each user project. When the user makes a change to the design, it runs the logic, then creates, updates, and removes hundreds or even thousands of small objects in the data collections. Since collection operations have stub methods on the client side, the UI is updated very fast and it feels very performant. But on the server side things can slow down dramatically, because the hundreds of DB operations generate method calls that run one by one. The server process can sit at 100% CPU usage for a few seconds even when only one user makes a small change. I used Kadira to profile the server and found that some methods spend 99% of their time waiting.
One solution would be to batch the DB operations, but Meteor doesn't support that. Any idea how to implement such a solution, or how to work around the lots-of-DB-operations problem generally? I'm kind of stuck on this issue and any suggestions are welcome. Thanks.
Meteor exposes the Node MongoDB driver connection in MongoInternals. That driver supports bulk inserts, so the following should work:
var meteorCol = new Meteor.Collection('yourCollection');
meteorCol.remove({});
console.log('Count:', meteorCol.find().count());

// Grab the underlying Node MongoDB driver collection (server side only).
var db = MongoInternals.defaultRemoteCollectionDriver().mongo.db;
var mongoCol = db.collection('yourCollection');

// Wrap the async driver insert in a synchronous function so the insert can be checked.
var mongoInsert = Meteor._wrapAsync(mongoCol.insert).bind(mongoCol);

// Batch insert: one round trip for the whole array.
mongoInsert([{hello: 'world_safe1'}, {hello: 'world_safe2'}], {w: 1});

console.log('Count:', meteorCol.find().count());
The code is running in MeteorPad if you want to play with it.
Just doing the batch inserts may not completely avoid performance problems according to this discussion.
I am writing an Adobe AIR application using PureMVC.
Imagine that I have a page-based application view (using ViewStack), and the user navigates through these pages in some way (like clicking a button or whatever).
Now, for example, I have an Account Information page which, when instantiated or shown again, needs to load data from a WebService (for example email, account balance, and username), and when the data is returned I want to show it on my Account Information page in the proper labels.
The problem is that when I execute these three web calls, each of them will return a different resultEvent at a different time. I am wondering what is the best way to know that ALL of the service calls have returned results, so that I can finally show all the results at once (and maybe play some loading screen until then).
I really don't know much about PureMVC, but the as3commons-async library is great for managing async calls and should work just fine in any framework setup:
http://as3commons.org/as3-commons-async/
In your case, you could create 3 classes implementing IOperation or IAsyncCommand (depending on if you plan to execute the operations immediately or deferred) encapsulating your RPCs.
After that is done you simply create a new CompositeCommand and add the operations to its queue.
When all is done, the CompositeCommand will fire an OperationEvent.COMPLETE.
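A rough sketch of that setup (the three command classes are yours to write, and the exact method names may differ slightly between library versions):
var composite:CompositeCommand = new CompositeCommand();
composite.addCommand(new GetEmailCommand());    // each implements IAsyncCommand
composite.addCommand(new GetBalanceCommand());  // and wraps one of the RPCs
composite.addCommand(new GetUsernameCommand());

// Show the loading screen here, then kick everything off.
composite.addEventListener(OperationEvent.COMPLETE, onAllCallsDone);
composite.execute();

function onAllCallsDone(event:OperationEvent):void
{
    // All three calls have returned; hide the loader and fill the labels.
}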
BTW, the library even includes some pre-implemented common Flex operations, such as HTTPRequest, when you download the as3commons-async-flex package as well.
I would do it in this way:
Create a proxy for each of three information entities (EMailProxy, BalanceProxy, UsernameProxy);
Create a delegate class which handles the interaction with your WebService (something like "public class WSConnector implements IResponder{...}"), which is used by the proxies to call the end ws-methods;
Create a proxy which coordinates all the three results (CoordProxy);
Choose a mediator which will coordinate all three calls (for example, this could be done by your ApplicationMediator);
Create notification constants for all proxy results (GET_EMAIL_RESULT, GET_BALANCE_RESULT, GET_USERNAME_RESULT, COORD_RESULT);
Let the ApplicationMediator get all 4 notifications;
It is important that you not only wait for all three results but are also ready for errors and their interpretation. That is why a simple counter could be too weak.
The overall workflow could look like this:
The user initiates the process;
Some mediator gets an event from your GUI-component and sends a notification like DO_TRIPLECALL;
The ApplicationMediator catches this notification, resets the state of the CoordProxy, and calls all three methods on your proxies (getEMail, getBalance, getUsername).
The responses are coming asynchronously. Each proxy gets its response from the delegate, changes its own data object and sends an appropriate notification.
The ApplicationMediator catches those notifications and changes the state of the CoordProxy. When all three responses are there (maybe not all of them successful), the CoordProxy sends a notification with the overall result, as sketched below.
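A rough sketch of the ApplicationMediator side of this (recordResult and allResponsesIn are hypothetical helpers on the CoordProxy):
override public function listNotificationInterests():Array
{
    return [GET_EMAIL_RESULT, GET_BALANCE_RESULT, GET_USERNAME_RESULT];
}

override public function handleNotification(note:INotification):void
{
    var coord:CoordProxy = facade.retrieveProxy(CoordProxy.NAME) as CoordProxy;
    // Store each result (or error) instead of just counting, so that
    // failed calls can be interpreted as well.
    coord.recordResult(note.getName(), note.getBody());
    if (coord.allResponsesIn)
    {
        sendNotification(COORD_RESULT, coord.getData());
    }
}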
I know it is not the best approach to do such interaction through mediators. The initial idea was to use commands for all "business logic" decisions, but it can be too tedious to create all that bureaucracy.
I hope it can help you. I would be glad to know your solution and discuss it here.
In my iOS application, I call a method named loadData from my viewDidLoad method.
loadData takes data from a local SQLite database and populates an array of items.
In another method, showData, I take the loaded data and show it on the actual screen.
- (void)viewDidLoad {
[super viewDidLoad];
[self loadData];
[self showData];
}
- (void)loadData {
//connects to local SQLite database
//data is loaded into a NSMutableArray called 'myData'
}
- (void)showData {
UIImage *myImage = [UIImage imageNamed:[myData objectAtIndex:0]];
self.imageView.image = myImage;
}
However, this currently does not work because it takes some time for the method loadData to populate my array.
I would like to show a custom progress indicator view that pops up on the screen with a spinning image that I made. This would appear until the method loadData completes, and then showData would be run.
Could someone point me in the right direction or link me to a way to do this? Thank you so much!
The way to do this is to perform your database retrieval asynchronously, on a background queue. Multi-threaded database operations can be tricky, and one simple solution is to create a dedicated serial background queue through which you do all of your database interactions (so the database operations themselves run on a single thread). Your background process that loads the data will dispatch database operations to this queue, as will any database interactions you would otherwise do from the main queue.
If you're using FMDB, you can do this with FMDatabaseQueue, which handles all of this for you. Or you can write your own. (I'd encourage you to check out FMDB, though. If nothing else, look at how it's doing FMDatabaseQueue and adopt this pattern in your own code.)
If you do this, dispatching the initial load to a background queue that routes all database interactions through this dedicated serial queue, just make sure to dispatch the UI updates of your progress indicator back to the main queue, because all UI interactions should take place on the main queue.
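Here's a minimal sketch of that flow, assuming a databaseQueue property (a serial dispatch queue) and a spinner outlet for your custom progress view:
- (void)viewDidLoad {
    [super viewDidLoad];
    [self.spinner startAnimating]; // show the custom progress indicator

    // databaseQueue is a serial queue; all database access goes through it.
    dispatch_async(self.databaseQueue, ^{
        [self loadData]; // SQLite work happens off the main thread

        dispatch_async(dispatch_get_main_queue(), ^{
            [self.spinner stopAnimating];
            [self showData]; // UI updates back on the main queue
        });
    });
}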
Having said that, there's absolutely no reason why database queries for a user interface should be so slow as to necessitate this. It's worth digging into why your database code is so slow; you may be able to cut the Gordian knot altogether.
A typical example of inefficient database operations is when you store images in the database. Unless you're dealing with very small thumbnail images, SQLite is notoriously inefficient when dealing with BLOB data. The typical solution is to store the images in persistent storage (e.g. your Documents folder), and only save references to those paths in the database. This offers all sorts of potential optimizations.
In your comment, you mention that the database maintains URLs of images. Well, in that case, you can refrain from retrieving the images in your loadData routine, and instead employ "lazy loading" of the images, namely loading them just in time, only as they're needed. You can achieve this using a UIImageView category that does asynchronous loading, such as SDWebImage or, if you're already using AFNetworking, its UIImageView category. If you google "lazy loading UIImage", you'll probably find tons of wonderful links, tutorials, etc. Both of these categories provide not only an easy "lazy loading" mechanism, but also caching of images, etc.
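With SDWebImage, for instance, the lazy load collapses to a single call (the exact method name varies between SDWebImage versions, and the placeholder image name is illustrative):
NSURL *imageURL = [NSURL URLWithString:[myData objectAtIndex:0]];
[self.imageView sd_setImageWithURL:imageURL
                  placeholderImage:[UIImage imageNamed:@"placeholder"]];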
But the ideal scenario, rather than designing a progress view for a slow loading process, is to refactor your design and eliminate this performance bottleneck completely. It would be a shame to go through the work of designing a progress view for an inefficient loading process.
I have a custom UITableViewCell, and when the user clicks a button I make a request to a server and update the cell. I do this with an NSURLConnection and it all works fine (this is all done inside the cell class); once it returns, it fires a delegate method and the table view controller handles this. However, when I create the cell in the table view, I use the dequeue method and reuse my cells. So if a cell has fired an asynchronous NSURLConnection and the cell gets reused while this is still going on, will that erase the current connection? I just want to make sure that if the cell is reused, the memory that was assigned to the cell is still there so the connection can fulfil its duty.
You can customize the behavior of a UITableViewCell by subclassing it and overriding the -prepareForReuse method. In this case, I would recommend destroying the connection when the cell is dequeued. If the connection should still keep going, you'll need to remove the reference to it (set it to nil) and handle its delegate methods elsewhere.
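A minimal sketch of the first option, assuming the cell keeps its NSURLConnection in a connection property:
- (void)prepareForReuse {
    [super prepareForReuse];
    [self.connection cancel]; // abandon the in-flight request
    self.connection = nil;    // (or hand the connection off and handle its
                              // delegate callbacks elsewhere if it must finish)
}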
It's never a good idea to keep in a cell a reference to a connection, or to any data that you want to display, no matter how much effort you put in afterward to work around the arising problems. Your approach will never work reliably.
In your case, if the user quickly scrolls the table view up and down, your app will start and possibly cancel dozens of connections and yet never finish loading anything. That will be an awful user experience, and it may crash the app.
Better you design your app with MVC in mind: the cell is just a means to display your model data, nothing else. It's the View in this architectural design.
For that purpose the table view's data source (usually your view controller) needs to retrieve the Model's properties to be displayed for a certain row and set up the cell. The Model encapsulates the network connection. The Controller takes the role of managing and updating change notifications and processing user input.
A couple of Apple samples provide much more detail about this topic, and there is a nice introduction to MVC; please have a look! ;)
http://developer.apple.com/library/ios/#documentation/general/conceptual/devpedia-cocoacore/MVC.html
The "Your Second iOS App: Storyboards" also has a step by step explanation to create "Data Controller Classes". Quite useful!
Now, when using an NSURLConnection that updates your model, it might become a bit more complex. You are dealing with "lazily initializing" models: when the controller accesses a property whose "real" data is not yet available, the model may hand back some "placeholder" data and meanwhile start a network request to load the real thing. When the data is eventually loaded, the model must somehow notify the Table View Controller. The tricky part here is not to mess up synchronization between model and table view: the model's properties must be updated on the main thread, and while this happens it must be guaranteed that the table view does not access them. There are a few samples which demonstrate techniques to accomplish this.