Linq to SQL - Retain DataContext across post-backs?

I read Rick Strahl's post about DataContext lifetime management, and some of the other related questions on Stack Overflow. If they contained an answer to my question, I must have missed it.
I generally follow the atomic approach and instantiate a DataContext for a unit of work when it is needed, and dispose it afterwards. This worked well until I hit a scenario with a complex page that contains a multi-view control with several grids and popup panels that all represent one unit of work. The data is in memory (I actually stuff the root object into the session so that the entire hierarchy is available across post-backs). Obviously, the DataContext is long gone by the time the user clicks on "Save".
Tom Brune's comment caught my eye at first, because it seemed like such an elegant approach - to use reflection to "wet" a fresh copy of the object and to update the database using a new DataContext. However, Rick's concerns about this approach are valid, and since my data structures are complex and hierarchical, I don't think I will try this.
So as far as I can see, I am left with two options:
- use Rick's suggestion to serialize/deserialize the object and re-attach it to a new context, or
- hand-code the logic that compares and updates a fresh copy of the object.
Which one should I follow, and is there a third option, i.e. can I keep the DataContext around between post-backs? If that's feasible, it would require the least amount of coding, as my root object has about a dozen children.

My suggestion would be to go with your first bullet point there: serialize/deserialize the object and then re-attach it to a new context.
I've used that approach in the past and it has worked well for me. I think you'll run into fewer issues and have an easier implementation ahead.
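As a rough sketch of what that looks like with LINQ to SQL: assuming the .dbml was generated with Serialization Mode = Unidirectional (so the entities carry DataContract attributes), you can round-trip the object through the DataContractSerializer to detach it, then attach the copy to a fresh context as modified. Customer, Customers and MyDataContext are hypothetical names here:

    using System;
    using System.Data.Linq;
    using System.IO;
    using System.Runtime.Serialization;

    public class CustomerSaver // hypothetical
    {
        // Round-trip through the DataContractSerializer to get a copy
        // with no ties to the original (long-gone) DataContext.
        private static T Clone<T>(T source)
        {
            var serializer = new DataContractSerializer(typeof(T));
            using (var stream = new MemoryStream())
            {
                serializer.WriteObject(stream, source);
                stream.Position = 0;
                return (T)serializer.ReadObject(stream);
            }
        }

        public void Save(Customer edited) // e.g. the root object kept in session
        {
            using (var db = new MyDataContext())
            {
                var detached = Clone(edited);
                // Attach as modified; this requires a timestamp/version column
                // (or UpdateCheck=Never on the members) for conflict detection.
                db.Customers.Attach(detached, true);
                db.SubmitChanges();
            }
        }
    }

Note that Attach does not cascade: the children of the root object would each need to be attached as well, which is where a deep hierarchy gets tedious.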

Related

Domain Object in Views

We've been having a discussion at work about whether to use Domain Objects in our views (ASP.NET MVC 2), or whether every view that requires data should be sent a ViewModel.
I was wondering if anyone could shed some light on the pros and cons of each approach?
Thank you
I like to segregate my Domain Objects from my Views. As far as I'm concerned, my Domain Objects are solely for the purpose of representing the domain of the application, not how the application is displayed.
The presentation layer should not contain any domain logic. Everything a view displays should be pre-determined by its controller. The ideal way to ensure this is always adhered to is to have the view receive only these flattened ViewModels.
I did ask a similar question myself. Here's a quote from the answer I accepted:
I think that there are merits to having a different design in the domain than in the presentation layer. So conceptually you are actually looking at two different models, one for the domain layer and one for the presentation layer. Each of the models is optimized for its purpose.
If I have the domain objects for Customer > Sales > Dispatch Address, then I don't want to have to deal with the object traversal in my view. I create a flattened view model that contains all of the properties. There's almost no extra work in mapping to and from this flattened view/presentation model if you use the excellent open source project AutoMapper.
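For illustration, a sketch of that flattening with AutoMapper's classic static API; all the types here are hypothetical:

    // Domain objects with traversal: customer.DispatchAddress.City
    public class Customer
    {
        public string Name { get; set; }
        public Address DispatchAddress { get; set; }
    }

    public class Address
    {
        public string City { get; set; }
    }

    // The flattened view model: AutoMapper fills DispatchAddressCity
    // from customer.DispatchAddress.City purely by naming convention.
    public class CustomerViewModel
    {
        public string Name { get; set; }
        public string DispatchAddressCity { get; set; }
    }

    // One-time configuration, e.g. in Application_Start:
    Mapper.CreateMap<Customer, CustomerViewModel>();

    // In the controller action:
    var vm = Mapper.Map<Customer, CustomerViewModel>(customer);
    return View(vm);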
Also, why would you want to pass an entire domain object back to a view if you can create an optimised representation of that model?
If you use NHibernate or similar, your domain objects will most likely be proxies, and serializing those doesn't work. You should always use a ViewModel and map your domain objects to DTOs within your ViewModel. Don't take shortcuts here. Setting the convention now will alleviate the pain you'd otherwise suffer later on.
It's a standard pattern for a reason.
It depends. In some cases it will be fine to use instances of model classes; in others a separate ViewModel is the better choice. In my experience it is perfectly acceptable to have different models in your domain and in your views, or to use the domain model in the view. Do what works best for you: do a spike for each option, see what works, and then decide. You can even choose a different option for each view (and/or partial).
There are definitely going to be simple little apps where it's fine to use the same models across all layers, generally small forms-over-data apps. But for a proper domain, my thoughts on the subject are to keep the domain models and view models separate, because you don't want them to ever impact each other when changed.
If the domain logic needs a small change to process some new business logic on the back end, you don't want to risk that altering your view. Conversely, if marketing or someone wants to make changes to a view, you don't want those changes leaking back into your domain (having to populate fields and maintain data for no other purpose than some view somewhere is going to use it).
I am in a good position to compare right now, because I'm working on two projects that use the different approaches. I'm far from stating that "this is bad and this is good" just because a pattern says so. I know patterns, I like patterns, but I never follow them blindly just to be right. I always use whatever I currently need to achieve the current goals.
In the first app, which uses domain objects in the views, development is very quick. A few changes in a few places and you have additional properties, form inputs, etc. You don't worry about the layers; you just extend or change the code and move on to the next problem.
In the second app, where there is always a separate object for use here, there and somewhere else, there are dozens of classes that look the same and do the same thing, plus a ton of conversion code between the various versions of the same objects. Worse, some developers put some logic on "this version" of a class while other logic lands on "that version". Development is very painful and requires a lot of testing afterwards. Changing a simple thing demands a lot of attention, and a lot of code has to change. I really don't like this app for that reason, because I have yet to see a business benefit from this approach, at least during the last year (and we have been in production for a year). This app is three to four times more expensive to develop and maintain than the first one.
So, my slightly facetious answer to the question is: it depends. If you work on a 10-20 person team, and you like to come in to work, drink a few coffees, chat with friends, do a few simple things and go home, then a lot of intermediate objects and conversion code will be good for you. If your goal is to be fast and cheap, if you want to focus on the business layer, new features and quick turnaround on changes, and all the more if you are in the software business and want to cash in on your project (we do all this to be finally sold, right?), then using the domain objects directly, as in the first app, would probably be better.

Displaying Flex Object References

I have a bit of a memory leak issue in my Flex application, and the short version of my question is: is there any way (in ActionScript 3) to find all live references to a given object?
What I have is a number of views with presentation models behind each of them (using Swiz). The views of interest are children of a TabNavigator, so when I close the tab, the view is removed from the stage. When the view is removed from the stage, Swiz sets the model reference in the view to null, as it should. I also removeAllChildren() from the view.
However when profiling the application, when I do this and run a GC, neither the view nor the presentation model are freed (though both set their references to each other to null). One model object used by the view (not a presenter, though) IS freed, so it's not completely broken.
I've only just started profiling today (firmly believing in not optimising too early), so I imagine there's some kind of reference floating around somewhere, but I can't see where, and what would be super helpful would be the ability to debug and see a list of objects that reference the target object. Is this at all possible, and if not natively, is there some light-weight way to code this into future apps for debugging purposes?
Cheers.
Assuming you are using Flex Builder, you could try the Profiler. In my experience, it's not so good for profiling performance, but it's been of great help for finding memory leaks.
It's not the most intuitive tool and it takes a while to get used to it (I mean, to the point where it actually becomes helpful). But, in my opinion, investing some time to at least learn the basics pays off. There's an enormous difference between just seeing how much memory the player is using globally (what System.totalMemory gives you: a very rough, imprecise and often misleading indicator) and actually tracking how many instances of each object have been created, how many are still alive, and where they were allocated (so you can find the potential leak in the code and actually fix it instead of relying on black magic).
I don't know of any good tutorials for the FB profiler, but maybe this'll help to get you started.
First, launch the profiler. Uncheck performance profiling and check everything else (Enable memory profiling, watch live memory data and generate object allocation stack traces).
When the profiler starts, you'll see stats about the app's objects, grouped by class. At this point, you might want to tweak the filters. You'll see a lot of data, and it's very easy to be overwhelmed. For now, ignore everything native to Flash and Flex, if possible, and concentrate on some object of your own that you think should be collected.
The most important figures are "cumulative instances" and "instances". The first is the total number of instances created so far; the second, the number of those instances that are still alive. So, a good starting point is to get your app to the state where the view you suspect of leaking gets created. You should see 1 for both "cumulative instances" and "instances".
Now, do whatever you need to do to get to the point where this view should be cleaned up (navigate to another part of the app, etc.) and run a GC (there's a button for that in the profiler UI). A crucial point is that you will be checking the app's behaviour against your expectations, if that makes sense. Finding leaks automatically in a garbage-collected environment is close to impossible by definition; otherwise, there would be no leaks. So keep that in mind: you test against your expectations; you are the one who knows the life cycle of your objects and can say, "at this point this object should have been collected; if it's not, there's something wrong".
Now, if the "instances" count for your view goes down to 0, there's no leak there. If you think the app still leaks, try to find other objects that might not have been disposed of properly. If the count remains at 1, your view has leaked, and you'll have to find why and where.
At this point, you should take a "memory snapshot" (the button next to the Force GC button). Open the snapshot, find the object in the grid and double-click on it. This will give you a list of all the objects that hold a reference to the selected object. It's actually a tree, and each item will probably contain in turn a number of back-references, and so on. These are the objects that are preventing your view from being collected. In the right panel you will also see an allocation trace, which shows how the selected object was created (pretty much like a stack trace).
You'll probably see a huge number of objects there. Your best bet is to concentrate on those that have a longer life cycle than the object you're examining (your view). What I mean is, look for the stage, a parent view, etc.: objects your view depends on, rather than objects that depend on your view, if that makes sense. If your view has a button and you added a listener to it, the button will have a ref to your view. In most cases this is not a problem, since the button depends on the view, and once the view is collected, so is the button. So the idea is that, since there are a lot of objects, you should stay focused or you will get nowhere. This method is rather heuristic, but in my experience it works.
Once you find the source of a leak, go back to the source and change the code accordingly (maybe this requires not just changing code but refactoring a bit). Then repeat the process and check whether your change has had the desired effect. It might take a while, depending on how big or complex your app is and how much you know about it. But if you go step by step, finding and fixing one problem at a time, you'll eventually get rid of the leaks, or at least the worst and most evident ones. So, while a bit tedious, it pays off (and as a nice aside, you'll eventually understand what a waste of time it is, in most cases, to use weak refs for every single event handler on the face of this earth, nulling out every single variable, etc.; it's an enlightening experience ;).
Hope this helps.
Flash GC uses a mix of ref counting and mark-and-sweep, so it does detect circular references. It seems, rather, that you have another reference somewhere in your object graph. The most common cause is that the objects you want disposed of still have event handlers registered on objects that are not disposed. You could try to ensure that handlers are always registered with weak references. You could also override addEventListener and removeEventListener in all (base) classes, if possible, to log which listeners are registered and whether some of them are never removed.
Also, you can write destructors for your objects that, for UI components, clear graphics and remove all children, and that, for all objects, remove references to all properties. That way, only your object itself is kept in RAM, which shouldn't require much memory (a small footprint of 20 B or so, plus 4 B per variable (8 B for a Number)).
greetz
back2dos
Also, a useful heuristic for finding memory leaks: http://www.tikalk.com/flex/solving-memory-leaks-using-flash-builder-4-profiler

Singleton pattern with Web application, Not a good idea!

I found something funny that I noticed by luck while I was debugging something else. I was applying the MVP pattern and had made a singleton controller to be shared among all presenters.
Suddenly I figured out that a certain event was being called once after the first postback, twice if there were two postbacks, 100 times if there were 100 postbacks.
That's because a singleton is based on a static variable that holds the instance, the static variable lives across postbacks, and I had wired the event assuming it would be wired once, when in fact it was being rewired on every postback.
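A minimal sketch of that failure mode (all names hypothetical):

    using System;

    public sealed class PresenterController
    {
        public static readonly PresenterController Instance = new PresenterController();
        private PresenterController() { }

        public event EventHandler Saved;

        public void Save()
        {
            if (Saved != null) Saved(this, EventArgs.Empty);
        }
    }

    public partial class EditPage : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Runs on every request, but Instance outlives the page:
            // each postback adds ANOTHER handler, so after N postbacks
            // Saved invokes OnSaved N times (and keeps N dead pages alive).
            PresenterController.Instance.Saved += OnSaved;
        }

        private void OnSaved(object sender, EventArgs e) { /* fires N times */ }
    }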
I think we should think twice before applying a singleton in a web application. Or am I missing something?
thanks
I would think twice about using a Singleton anywhere.
Many consider Singleton an anti-pattern.
Some consider it an anti-pattern, judging that it is overused, introduces unnecessary limitations in situations where a sole instance of a class is not actually required, and introduces global state into an application.
There are lots of references on Wikipedia that discuss this.
It is very rare to need a singleton and personally I hold them in the same light as global variables.
You should think twice any time you are using static objects in a multi-threaded application (not only with the singleton pattern), because of the shared state. Proper locking mechanisms should be applied in order to synchronize access to the shared state; failing to do so can produce bugs that are very difficult to find.
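As a sketch of what that means in practice (the cache here is hypothetical), every access to the shared state goes through a lock:

    using System.Collections.Generic;

    public static class RateCache // hypothetical shared, static state
    {
        private static readonly object Sync = new object();
        private static readonly Dictionary<string, decimal> Rates =
            new Dictionary<string, decimal>();

        public static decimal Get(string code)
        {
            lock (Sync) // many requests can execute this concurrently
            {
                decimal rate;
                return Rates.TryGetValue(code, out rate) ? rate : 0m;
            }
        }

        public static void Set(string code, decimal rate)
        {
            lock (Sync) { Rates[code] = rate; }
        }
    }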
I've been using Singletons in my web apps for quite some time and they have always worked out quite well for me, so to say they're a bad idea is really a pretty difficult claim to believe. The main idea, when using Singletons, is to keep all the session-specific information out of them, and to use them more for global or application data. To avoid them because they are "bad" is really not too smart because they can be very useful when applied correctly.

Dispose & Finalize for collections of properties?

I'm looking at some vb.net code I just inherited, and cannot fathom why the original developer would do this.
Basically, each "Domain" class is a collection of properties. And each one implements IDisposable.Dispose and overrides Finalize(). There is no base class, so each just extends Object.
Dispose sets each private var to Nothing, or calls _private.Dispose when the property is another domain object. There's a private var that tracks the disposed state, and the final thing in Dispose is GC.SuppressFinalize(Me).
Finalize just calls Me.Dispose and MyBase.Finalize.
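Reconstructed as a rough sketch (in C# rather than the original VB.NET, with hypothetical member names), the pattern being described looks like this:

    using System;

    public class ChildDomainThing : IDisposable
    {
        public void Dispose() { }
    }

    public class DomainThing : IDisposable
    {
        private ChildDomainThing _child; // property backing fields
        private string _name;
        private bool _disposed;

        public void Dispose()
        {
            if (_disposed) return;
            if (_child != null) _child.Dispose(); // cascade to child domain objects
            _child = null;
            _name = null;                          // null out plain members
            _disposed = true;
            GC.SuppressFinalize(this);             // skip the finalizer below
        }

        ~DomainThing()
        {
            Dispose(); // Finalize just re-runs Dispose; C# chains to base automatically
        }
    }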
Is there any benefit to this? Any harm? There are no un-managed resources, no db connections, nothing that would seem to need this.
That strikes me as being a VB6 pattern.
I would bet the guy was coming straight from VB6, maybe in the earlier days of .NET when these patterns were not widely understood.
There is also one case where setting an internal reference to Nothing is useful in a call to Dispose: when the member is declared WithEvents.
Without that, you risk having an uncollected object handling events when it really should not be doing that anymore.
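In C# terms the point looks roughly like this (all names hypothetical):

    using System;

    public class Child
    {
        public event EventHandler Changed;
        public void RaiseChanged()
        {
            if (Changed != null) Changed(this, EventArgs.Empty);
        }
    }

    public class Parent : IDisposable
    {
        private Child _child; // think: Private WithEvents _child in VB

        public Parent(Child child)
        {
            _child = child;
            _child.Changed += OnChildChanged; // what WithEvents/Handles wires up
        }

        private void OnChildChanged(object sender, EventArgs e) { }

        public void Dispose()
        {
            // Detach so the longer-lived publisher can't keep this object
            // reachable (and still handling events) after it is logically dead.
            _child.Changed -= OnChildChanged;
            _child = null;
        }
    }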
It would seem to me that this is something that is NOT needed at all, especially without un-managed resources and data connections.
If you happen to be able to sanitize and post the code we might be able to get a bit more insight, but realistically I can't see a need for it.
Depending on the size of the objects, and how often they are created/destroyed, it could be to ensure GC can happen as early as possible.
It may be that this pattern was used in other projects and it continues on without anyone understanding why it was used in the first place. (Monkey gardeners.)
The only reason that I could see for this -- and this is dubious at best -- is if these things are being created and disposed of higher in the "food chain" and there is a potential for some of these domain classes to have either a limited or unmanaged resource at some point.
Even that is sketchy... it sounds like someone came from an unmanaged background, was looking for the .NET equivalent of managing your own memory, and came across the IDisposable interface.

Using a DataContext static variable

I have recently inherited an ASP.NET app using Linq2SQL. Currently, it has its DataContext objects declared as static in every page, and I create them the first time I find they are null (a sort of singleton).
I would like comments on whether this is good or bad, both for situations where I only need to read from the DB and for situations where I need to write as well.
How about having just one DataContext instance for the entire application?
One DataContext per application would perform badly, I'm afraid. The DataContext isn't thread safe, for starters, so even using one as a static member of a page is a bad idea. As asgerhallas mentioned it is ideal to use the context for a unit of work - typically a single request. Anything else and you'll start to find all of your data is in memory and you won't be seeing updates without an explicit refresh. Here are a couple posts that talk to those two subjects: Identity Maps and Units of Work
I used to have one DataContext per request, but it depends on the scenarios you're facing.
I think the point with L2S was to use it with the unit of work pattern, where you have a context per... well, unit of work. But it doesn't work well in larger applications, as it's pretty hard to reattach entities to a new context later.
Rick Strahl has a real good introduction to the topic here:
http://www.west-wind.com/weblog/posts/246222.aspx
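A common way to get one context per request is to create it lazily in HttpContext.Items and dispose of it at the end of the request. A sketch, with MyDataContext as a hypothetical context type:

    using System.Web;

    public static class RequestDataContext
    {
        private const string Key = "__dataContext";

        public static MyDataContext Current
        {
            get
            {
                var items = HttpContext.Current.Items;
                var dc = items[Key] as MyDataContext;
                if (dc == null)
                {
                    dc = new MyDataContext();
                    items[Key] = dc; // one context, shared within this request only
                }
                return dc;
            }
        }

        // Call this from Application_EndRequest in Global.asax:
        public static void DisposeCurrent()
        {
            var dc = HttpContext.Current.Items[Key] as MyDataContext;
            if (dc != null) dc.Dispose();
        }
    }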
One thing I have had problems with in the past is having one context for both read and write scenarios. The change tracking done in the DataContext is quite an overhead when you are just reading, which is what most web apps do most of the time. You can make the DataContext read-only and it will speed things up quite a bit, but then you'll need another context for writing.
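Turning off change tracking for read-only work is a one-liner. A sketch (MyDataContext and Customers are hypothetical; ObjectTrackingEnabled is a real DataContext property):

    using (var db = new MyDataContext())
    {
        // Must be set before the first query; with tracking off the
        // context is effectively read-only and considerably lighter.
        db.ObjectTrackingEnabled = false;
        var customers = db.Customers.ToList();
    }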
