Does anyone have any definitive information on how iOS handles a UINavigationController stack if the user is allowed to keep drilling down? Will it actually start to free up memory by saving the state of the previous controllers?
This has been asked on here before, but the answers have been contradictory, so if anyone really knows how this is handled that'd be very useful.
If not, does anyone know of a subclass that will handle this?
Basically, it depends on the implementation of your controllers.
When you keep pushing controllers onto a navigation controller, memory will eventually get low. However, the controllers you pushed onto the navigation controller will not be released for you when memory gets low.
What happens is that your controllers receive a notification, which is handled in each controller's didReceiveMemoryWarning method. There you can release any objects used in your controller that are no longer necessary or that can be recreated when the user navigates back to that controller.
Memory is a critical resource in iOS, and view controllers provide built-in support for reducing their memory footprint at critical times. The UIViewController class provides some automatic handling of low-memory conditions through its didReceiveMemoryWarning method, which releases unneeded memory.
Prior to iOS 6, when a low-memory warning occurred, the UIViewController class purged its views if it knew it could reload or recreate them again later. If this happens, it also calls the viewWillUnload and viewDidUnload methods to give your code a chance to relinquish ownership of any objects that are associated with your view hierarchy, including objects loaded from the nib file, objects created in your viewDidLoad method, and objects created lazily at runtime and added to the view hierarchy. On iOS 6, views are never purged and these methods are never called. If your view controller needs to perform specific tasks when memory is low, it should override the didReceiveMemoryWarning method.
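For a small concrete example of what that override can look like, here is a rough sketch written against the Xamarin.iOS C# binding of UIKit (the native Objective-C method is didReceiveMemoryWarning; the thumbnail cache and class name are made-up examples):

using UIKit;   // Xamarin.iOS binding (older MonoTouch builds use MonoTouch.UIKit)

public class DetailViewController : UIViewController
{
    // Expensive but recreatable data built while the controller is on screen.
    UIImage[] thumbnailCache;

    public override void DidReceiveMemoryWarning()
    {
        base.DidReceiveMemoryWarning();

        // Drop anything that can be rebuilt the next time this controller
        // is navigated back to; keep only what cannot be recreated.
        thumbnailCache = null;
    }

    public override void ViewWillAppear(bool animated)
    {
        base.ViewWillAppear(animated);

        if (thumbnailCache == null)
            thumbnailCache = LoadThumbnails();   // rebuild lazily after a purge
    }

    UIImage[] LoadThumbnails()
    {
        // placeholder: load or regenerate the cached images here
        return new UIImage[0];
    }
}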
The latest recommended approach is to inject a DbContext instance directly into the MVC/Web API controller. It has a number of pros, but one question remains unanswered for me: the performance cost of creating a DbContext instance that ends up not being used.
According to this question: What happens when i instantiate a class derived from EF DbContext?, creating a DbContext is not such a cheap operation (in both memory and CPU). And it's doubly wasteful when:
Your action doesn't need the DbContext at all (so you have a mix of actions that do and don't use the database).
Some logic (e.g. a guard condition such as ModelState.IsValid) prevents the DbContext from ever being reached, so the action returns a result BEFORE the DbContext instance is accessed.
So in both cases (and maybe some others), DI creates a scoped instance of the DbContext, wastes resources on it, and then just collects it at the end of the request.
I didn't run any performance tests, just googled some articles first. I'm not saying it will definitely hurt performance. I just thought: "Hey man, why have you created an instance of the object if I will not use it at all?"
Why have you created the instance of the object if I will not use it at all?
Mark Seemann says in his book Dependency Injection in .NET: "creating an object instance is something the .NET Framework does extremely fast. Any performance bottleneck your application may have will appear in other places, so don't worry about it."
Please note that DbContext enables lazy loading by default. Just instantiating it doesn't have much impact on performance, so I would not worry about the DbContext.
However, if you have some custom classes doing heavy lifting inside the constructor, then you might want to consider refactoring those.
If you really want to compare the performance, you could wrap those dependencies in Lazy<T> and see how much performance you gain (see: Does .net core dependency injection support Lazy?).
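For illustration, a rough sketch of the Lazy wrapping with the built-in .NET Core container (MyDbContext, its Orders set, OrdersController and the connection string are placeholder names):

// Startup/Program: the context is registered as usual; the Lazy<T> wrapper defers
// its construction until .Value is touched for the first time.
services.AddDbContext<MyDbContext>(options => options.UseSqlServer(connectionString));
services.AddScoped<Lazy<MyDbContext>>(
    sp => new Lazy<MyDbContext>(() => sp.GetRequiredService<MyDbContext>()));

// Controller: actions that never read _db.Value never pay for DbContext creation.
public class OrdersController : ControllerBase
{
    private readonly Lazy<MyDbContext> _db;

    public OrdersController(Lazy<MyDbContext> db)
    {
        _db = db;
    }

    [HttpGet("{id}")]
    public IActionResult Get(int id)
    {
        if (id <= 0)
            return BadRequest();               // returns before any DbContext exists

        var order = _db.Value.Orders.Find(id); // the context is created here, on first use
        if (order == null)
            return NotFound();
        return Ok(order);
    }
}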
You could register it as Lazy, or you could do what I do and just inject an IMyDbContextFactory, then call Create() to get the DbContext when you actually need it (with its own pros and cons). If the constructor doesn't do anything, it won't be a huge hit, but keep in mind that the first time the context gets newed up, it hits the static constructor that runs all the model validation. That hit only happens once, but it is a big one.
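A rough sketch of that factory option (IMyDbContextFactory is not a framework type here, just a hand-rolled one; it assumes MyDbContext has the standard DbContextOptions constructor, and newer EF Core versions ship a built-in equivalent via AddDbContextFactory/IDbContextFactory<TContext>):

public interface IMyDbContextFactory
{
    MyDbContext Create();
}

public class MyDbContextFactory : IMyDbContextFactory
{
    private readonly DbContextOptions<MyDbContext> _options;

    public MyDbContextFactory(DbContextOptions<MyDbContext> options)
    {
        _options = options;
    }

    // A brand-new context per call; the caller owns it and disposes it.
    public MyDbContext Create()
    {
        return new MyDbContext(_options);
    }
}

// Registration: services.AddScoped<IMyDbContextFactory, MyDbContextFactory>();
// Usage inside an action, only when the database is actually needed:
using (var db = _contextFactory.Create())
{
    // query or save here
}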
I have some unmanaged resources in classes I'm injecting into controllers, and I need to dispose of them once the controller is disposed (otherwise I'll have a memory leak). I have looked at IUnityContainer and did not find a Release (or similar) method that allows me to do that.
After some trial and error (and reading), it seems to me that Unity does not keep track of what is going on with the types it creates. This is very different from Windsor, where I can call Release and the entire object graph will be released. This is actually one of the points of having a container in the first place (object lifecycle management). I should not need to call Dispose directly; the container should be able to do that for me, on the proper objects and in the proper order.
So, my question is, how can I tell Unity that an object is no longer needed and should be disposed?
If there is no way of doing that, is there a way to change the lifecycle to per web request?
As a note, changing the container is not an option. Unfortunately :(
You will have to look at the different lifetime managers in Unity. The ContainerControlledLifetimeManager will call Dispose on every item it creates. Unfortunately this manager acts as a singleton for resolved objects, so it might not be appropriate for you.
The other alternative is to create your own lifetime manager that keeps track of the objects it creates and, when the container is disposed, disposes every one of them.
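Something along these lines, as a rough sketch (IRepository/SqlRepository are placeholder types, and the overrides shown are the classic Unity 2.x LifetimeManager members; verify the exact signatures against your Unity version):

public class DisposingTransientLifetimeManager : LifetimeManager, IDisposable
{
    private readonly List<IDisposable> tracked = new List<IDisposable>();

    // Returning null forces Unity to build a new instance on every Resolve.
    public override object GetValue()
    {
        return null;
    }

    // Unity hands us the freshly built instance; remember it if it is disposable.
    public override void SetValue(object newValue)
    {
        var disposable = newValue as IDisposable;
        if (disposable != null)
        {
            tracked.Add(disposable);
        }
    }

    public override void RemoveValue()
    {
    }

    // Because the manager itself is IDisposable, Unity should dispose it together
    // with the container, which in turn disposes everything it tracked.
    public void Dispose()
    {
        foreach (var disposable in tracked)
        {
            disposable.Dispose();
        }
        tracked.Clear();
    }
}

// Registration (one manager instance per registration). For per-web-request behaviour,
// pair this with a child container created at the start of the request and disposed at the end:
// container.RegisterType<IRepository, SqlRepository>(new DisposingTransientLifetimeManager());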
We have a Flex application that connects to a proxy server which handles authentication. If the authentication has timed out, the proxy server returns a JSON-formatted error string. What I would like to do is inspect every URLRequest response, check whether there is an error message, display it in the Flex client, and then redirect back to the login screen.
So I'm wondering if it's possible to create an event listener for all URLRequests in a global fashion, without having to search through the project and add a handler to each URLRequest. Any ideas if this is possible?
Unless you're only using one service, there is no way to set a global URLRequest handler. If I were you, I'd think more about architecting your application properly by using a delegate and always checking the result through a particular service which is used throughout the app.
J_A_X has some good suggestions, but I'd take it a bit farther. Let me make some assumptions based on the limited information you've provided.
The services being scattered all over your application means that they're actually embedded in multiple Views.
If your services can all be handled by the same handler, you notionally have one service, copied many times.
Despite what you see in the Adobe examples showing their new Service generation code, it's incredibly bad practice to call services directly from Views, in part because of the very problem you are seeing--you can wind up with lots of copies of the same service code littered all over your application.
Depending on how tightly interwoven your application is (believe me, I've inherited some pretty nasty stuff, so I know this might be easier said than done), you may find that the easiest thing is to remove all of those various services and replace them by having all your Views dispatch a bubbling event that gets caught at the top level. At the top level, you respond to that event by calling one instance of your service, which is again handled in one place.
You may or may not choose to wrap that single service in a delegate, but once you have your application architected in a way where the service is decoupled from your Views, you can make that choice at any time.
Would you be able to extend the class and add an event listener in the object's constructor? I don't like this approach but it could work.
You would just have to search/replace the whole project.
I have a child window with an associated VM that gets created each time I ask the child window to open. When the child window opens, it registers a listener for an MVVM Light message. After I close the window, I'm pretty sure that I'm releasing all references to it, but I don't actually call Dispose because it does not implement IDisposable.
When I instantiate another child window of the same type and send it a different context, I can tell that I'm also receiving the message in the previous instantiation of the VM... each time I use the window, more and more VMs are listening, and the handler code runs again and again.
How can I be sure that a previous VM that registered to listen for a message has actually been released and is no longer active? Is there a deterministic way to do this?
Whenever you register for a message, you should make sure that you unregister it as well. To unregister, you can use the Cleanup method on classes deriving from ViewModelBase. In other cases, e.g. a view, you should implement a method that is called when the view is unloaded, e.g. by trapping and handling the Unloaded event on a control or view. In this method you then call Messenger.Unregister(EventTarget).
This behaviour is a quirk in the current version of the toolkit, and Laurent is aware of it.
How have you coded the handler for the message in the VM? It sounds like you're probably pointing it to a method inside the same VM that is registering for the message. This causes the Messenger class to maintain a reference to the VM and prevents its garbage collection (see here for discussion). There are two solutions: implement IDisposable and unregister all messages for your VM instance, or simply unregister all messages from the VM instance when the child dialog closes. Personally I'd do both to ensure the entire object web is released.
You can use either the Cleanup method or manually unregister the message. For more details click here.
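A rough sketch of the register/unregister pairing (MyMessage, the view-model name and the code-behind wiring are placeholders):

using GalaSoft.MvvmLight;
using GalaSoft.MvvmLight.Messaging;

public class MyMessage { }

public class ChildWindowViewModel : ViewModelBase
{
    public ChildWindowViewModel()
    {
        // The Messenger keeps a reference to this instance until it is unregistered.
        Messenger.Default.Register<MyMessage>(this, HandleMyMessage);
    }

    private void HandleMyMessage(MyMessage message)
    {
        // react to the message
    }

    public override void Cleanup()
    {
        // ViewModelBase.Cleanup unregisters this recipient from the Messenger;
        // calling Unregister explicitly as well does no harm.
        Messenger.Default.Unregister(this);
        base.Cleanup();
    }
}

// In the child window's code-behind, call the VM's Cleanup when the window closes:
// Closed += (s, e) => ((ChildWindowViewModel)DataContext).Cleanup();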
I am writing a custom Windows Workflow Foundation activity, that starts some process asynchronously, and then should wake up when an async event arrives.
All the samples I’ve found (e.g. this one by Kirk Evans) involve a custom workflow service that does most of the work and then posts an event to the activity-created queue. The main reason for that seems to be that the only method to post an event [that works from a non-WF thread] is WorkflowInstance.EnqueueItem, and activities don’t have access to workflow instances, so they can't post events from the non-WF thread where I receive the result of the async operation.
I don't like this design, as this splits functionality into two pieces, and requires adding a service to a host when a new activity type is added. Ugly.
So I wrote the following generic service that I call from the activity’s async event handler, and that can be reused by various async activities (error handling omitted):
class WorkflowEnqueuerService : WorkflowRuntimeService
{
    public void EnqueueItem(Guid workflowInstanceId, IComparable queueId, object item)
    {
        this.Runtime.GetWorkflow(workflowInstanceId).EnqueueItem(queueId, item, null, null);
    }
}
Now in the activity code I can obtain and store a reference to this service, start my async operation, and when it completes, use this service to post an event to my queue. The benefit of this: I keep all the activity-specific code inside the activity, and I don't have to add a new service for each activity type.
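To make that concrete, here is roughly what the activity side looks like (StartAsyncOperation stands in for the real async call, and I use the activity's QualifiedName as the queue name):

public class AsyncWorkActivity : Activity
{
    protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
    {
        // Create the queue the async callback will post to.
        WorkflowQueuingService queuingService = executionContext.GetService<WorkflowQueuingService>();
        if (!queuingService.Exists(QualifiedName))
            queuingService.CreateWorkflowQueue(QualifiedName, true);

        // Capture everything the non-WF callback thread will need later.
        WorkflowEnqueuerService enqueuer = executionContext.GetService<WorkflowEnqueuerService>();
        Guid instanceId = WorkflowInstanceId;
        string queueName = QualifiedName;

        // Kick off the async work; the callback runs on a non-workflow thread.
        StartAsyncOperation(result => enqueuer.EnqueueItem(instanceId, queueName, result));

        // Stay executing until the queued item arrives (register for the queue's
        // QueueItemAvailable event and close the activity in that handler).
        return ActivityExecutionStatus.Executing;
    }

    // Placeholder for whatever asynchronous call the activity actually starts.
    void StartAsyncOperation(Action<object> completed)
    {
    }
}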
But seeing the official and internet samples do this with specialized, non-reusable services, I would like to check whether this approach is OK, or whether I'm creating problems here.
There is a potential problem here with regard to workflow persistence.
If you create long-running workflows that are persisted to a database so the runtime can restart them, those workflows are not reloaded into memory until some external event reloads them. With this design, however, they are responsible for triggering that event themselves, which they cannot do until they are reloaded. And we have a catch-22 :-(
The proper way to do this is using an external service. And while this might feel like dividing the code into two places, it really isn't. The reason is that the workflow is responsible for the big picture, i.e. what should be done, while the runtime service is responsible for the actual implementation, i.e. how it should be done. That way you can change the how without changing the why and when.
A follow-up: regardless of all the reasons why it "should be done" using a service, this will be directly supported in .NET 4.0, which provides a clean way for an activity to start asynchronous work while suspending persistence of the activity.
See http://msdn.microsoft.com/en-us/library/system.activities.codeactivitycontext.setupasyncoperationblock(VS.100).aspx for details.
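In the released .NET 4 API the usual shape is an AsyncCodeActivity, roughly like this (the activity name is made up and DoWork stands in for the real long-running call; the runtime keeps the instance in a no-persist block while the work runs):

using System;
using System.Activities;

public sealed class CallBackendActivity : AsyncCodeActivity<string>
{
    protected override IAsyncResult BeginExecute(AsyncCodeActivityContext context,
                                                 AsyncCallback callback, object state)
    {
        Func<string> work = DoWork;        // the asynchronous work itself
        context.UserState = work;          // stash the delegate so EndExecute can find it
        return work.BeginInvoke(callback, state);
    }

    protected override string EndExecute(AsyncCodeActivityContext context, IAsyncResult result)
    {
        var work = (Func<string>)context.UserState;
        return work.EndInvoke(result);     // the return value becomes the activity's Result
    }

    private string DoWork()
    {
        // placeholder for the real call (web service, long computation, ...)
        return "done";
    }
}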