When to clear or make null ASP.NET MVC models?

I am working on an ASP.NET MVC application. I am using the model to store some of the values I need to preserve between page posts, in the form of data contexts.
Say my model looks something like this:
public SelectedUser SelectedUserDetails
{
    // get and set are backed by the data context
    get { return this.datacontext.data.SelectedUser; }
    set { this.datacontext.data.SelectedUser = value; }
}
Now, when does this model need to be cleared? I have many such models with many properties and data contexts, but I don't have an idea of when to clear them. Is there a way, or an event that can be triggered automatically, when a model has not been used for a long time?
One way I thought of: when I navigate away from a page that uses an underlying model, I can clear that model if it's no longer used anywhere and initialize it again as needed. But I would need to clear many models at many points. Is there an automatic way to clear models when they are no longer used? My code can take care of initializing them when I need them, but I don't know when to clear them once I no longer need them. I need this to avoid memory-related issues. Any thoughts or comments?

I would use the ASP.NET cache or the Session to keep data between requests. A cache timeout can be set on the object and it will automatically be removed -- note that you'll need a way to reconstitute it if it is removed before you are done using it. If you use the Session, the objects will be removed when the session times out. You could also -- by default -- remove it (or replace it) when you hit an action that starts the sequence of actions for which it is needed.
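For example, here is a minimal sketch of the cache variant, reusing the SelectedUser type from the question (the cache key scheme and the loader method are illustrative assumptions, not part of the original code):
using System;
using System.Web;
using System.Web.Caching;

public static class UserCache
{
    public static SelectedUser GetSelectedUser(string userKey)
    {
        string cacheKey = "SelectedUser:" + userKey;
        var user = HttpRuntime.Cache[cacheKey] as SelectedUser;
        if (user == null)
        {
            // reconstitute from the database on a cache miss
            user = LoadSelectedUserFromDatabase(userKey);
            HttpRuntime.Cache.Insert(
                cacheKey, user,
                null,                             // no cache dependency
                DateTime.UtcNow.AddMinutes(20),   // absolute timeout
                Cache.NoSlidingExpiration);
        }
        return user;
    }

    private static SelectedUser LoadSelectedUserFromDatabase(string userKey)
    {
        // hypothetical loader; replace with your datacontext query
        return new SelectedUser();
    }
}
The Insert overload with an absolute timeout makes the entry expire on its own; the null-check is what reconstitutes it afterwards.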

Related

UITableViewCells and NSURLConnection

I have a custom UITableViewCell, and when the user clicks a button I make a request to a server and update the cell. I do this with an NSURLConnection and it all works fine (this is all done inside the cell class); once it returns, it fires a delegate method and the table view controller handles this. However, when I create the cell in the table view, I use the dequeue method and reuse my cells. So if a cell has fired an asynchronous NSURLConnection, and the cell gets reused while this is still going on, will that erase the current connection? I just want to make sure that if the cell is reused, the memory that was assigned to the cell is still there so the connection can fulfil its duty.
You can customize the behavior of a UITableViewCell by subclassing it and overriding the -prepareForReuse method. In this case, I would recommend destroying the connection when the cell is dequeued. If the connection should keep going, you'll need to remove the cell's reference to it (set it to nil) and handle its delegate methods elsewhere.
It's never a good idea to keep a reference to a connection, or to any data that you want to display, in a cell, no matter how much effort you put in afterward to work around the problems that arise. Your approach will never work reliably.
In your case, if the user quickly scrolls the table view up and down, your app will start and possibly cancel dozens of connections and yet never finish loading anything. That will be an awful user experience, and may crash the app.
Better to design your app with MVC in mind: the cell is just a means to display your model data, nothing else. It's the View in this architectural design.
For that purpose, the Table View Delegate needs to retrieve the model's properties that are to be displayed for a certain row and set up the cell. The model encapsulates the network connection. The Controller takes the role of managing updates, handling change notifications, and processing user input.
A couple of Apple samples provide many more details about this topic, and there is a nice introduction about MVC; go take a look! ;)
http://developer.apple.com/library/ios/#documentation/general/conceptual/devpedia-cocoacore/MVC.html
The "Your Second iOS App: Storyboards" also has a step by step explanation to create "Data Controller Classes". Quite useful!
Now, when using NSURLConnection which updates your model, it might become a bit more complex. You are dealing with "lazy initializing models". That is, they may provide some "placeholder" data when the controller accesses the property instead of the "real" data when is not yet available. The model however, starts a network request to load it. When it is eventually loaded, the model must somehow notify the Table View Controller. The tricky part here is to not mess with synchronization issues between model and table view. The model's properties must be updated on the main thread, and while this happens, it must be guaranteed that the table view will not access the model's properties. There are a few samples which demonstrate a few techniques to accomplish this.

How can we make webforms in asp.net without using viewstate

I need to build a large website, but when using Web Forms in ASP.NET we have ViewState issues, as ViewState makes the site very heavy.
Do we have any alternative for this? I do not want to use MVC.
ViewState can be disabled using the EnableViewState="false" (or ViewStateMode="Disabled") attribute at the web application, page, or web control level, and ViewState data will not be stored in the HTML.
You can save your data in Session (unique for each user) or Cache (shared by all users).
Hope this helps!
Edit
Save data into session:
Session["YourKeyName"] = "Object data";
Get saved data from session:
YourType o = Session["YourKeyName"] as YourType; // where YourType is whatever type you stored
Save data into Cache:
Cache["YourAnotherKeyName"] = "Object data";
Get saved data from Cache:
YourType o = Cache["YourAnotherKeyName"] as YourType; // where YourType is whatever type you stored
There are several "fixes" here.
1. Store viewstate in session. Personally, I don't like this, but that's mainly because I don't like using session for anything.
2. Split viewstate across multiple hidden form fields. Out of the box it's stored in one gigantic field. This is handled in your web.config by setting maxPageStateFieldLength to something reasonable like 1024.
3. Go through your pages and fix those that are heavy on viewstate. The biggest offenders are grids and other repeating views, especially if you are using the built-in paging functions. Basically, get rid of the various standard .NET controls and replace them with controls that don't try to send entire data sets to the client.
4. Disable viewstate altogether. Although I like this, it's not often achievable, especially if you are using controls that absolutely depend on it (like grids with the built-in paging).
The fastest "fix" is number 2 above. That will allow you to get past issues with certain safari versions. Essentially, your biggest problem is that various browsers have limits on the amount of data that can appear in a single form field.
For a quick fix you can store the viewstate in the session very easily. There are hundreds of how-to articles on the subject.
But in the long run it is to your benefit to learn how to use the viewstate more efficiently.
If you do not want to use ViewState but want to maintain the state of your controls, you can use the Session or manually assign values to your controls after each post to the server. You can always disable ViewState for the heavier controls and enable it for the least heavy ones.
Good luck!

How can I utilize multiple databases in an entity framework solution simultaneously?

I have two unrelated databases and I need to pass data back and forth between them. Right now I have created two separate entity models, one for each database, but this is causing issues in my code because I have to wrap everything in a Using nameofcontext / End Using block, and when I try to use some of the results from the first block inside a second Using nameofcontext / End Using block it doesn't like it, because I've closed the connection to the first database!
Since this is a website, you could create one instance of each context in Global.asax's BeginRequest event, and dispose of that instance in EndRequest. Doing that means during the rest of the event lifecycle, you have contexts that will remain open and can do what you need, but you still know they're being properly disposed.
That's how I've gotten around issues like this.
Note: Don't store the context in a global shared variable, because that would share it between multiple requests and havoc would ensue. HttpContext.Current.Items lets you store something that is easy to retrieve in your code but is specific to the current request, so that's a safe place to store them.
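A minimal sketch of that pattern (the FirstDbEntities and SecondDbEntities context names are hypothetical placeholders for your two entity models):
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // open one context per database at the start of each request
        HttpContext.Current.Items["FirstDbContext"] = new FirstDbEntities();
        HttpContext.Current.Items["SecondDbContext"] = new SecondDbEntities();
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        // dispose both contexts when the request ends
        var first = HttpContext.Current.Items["FirstDbContext"] as IDisposable;
        if (first != null) first.Dispose();
        var second = HttpContext.Current.Items["SecondDbContext"] as IDisposable;
        if (second != null) second.Dispose();
    }
}
Any code that runs during the request can then pull both contexts out of HttpContext.Current.Items and use them together without either connection having been closed.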

Ways to store an object across multiple postbacks

For the sake of argument, assume that I have a webform that allows a user to edit order details. The user can perform the following functions:
Change shipping/payment details (all simple text/dropdowns)
Add/Remove/Edit products in the order - this is done with a grid
Add/Remove attachments
Products and attachments are stored in separate DB tables with foreign key to the order.
Entity Framework (4.0) is used as ORM.
I want to allow the users to make whatever changes they want to the order and only when they hit 'Save' do I want to commit the changes to the database. This is not a problem with textboxes/checkboxes etc. as I can just rely on ViewState to get the required information. However the grid is presenting a much larger problem for me as I can't figure out a nice and easy way to persist the changes the user made without committing the changes to the database. Storing the Order object tree in Session/ViewState is not really an option I'd like to go with as the objects could get very large.
So the question is - how can I go about preserving the changes the user made until ready to 'Save'.
Quick note - I have searched SO to try to find a solution, however all I found were suggestions to use Session and/or ViewState - both of which I would rather not use due to potential size of my object trees
If you have control over the schema of the database and the other applications that utilize order data, you could add a flag or status column to the orders table that differentiates between temporary and finalized orders. Then, you can simply store your intermediate changes to the database. There are other benefits as well; for example, a user that had a browser crash could return to the application and be able to resume the order process.
I think sticking to the database for storing data is the only reliable way to persist data, even temporary data. Using session state, control state, cookies, temporary files, etc., can introduce a lot of things that can go wrong, especially if your application resides in a web farm.
If using the Session is not your preferred solution, which is probably wise, the best possible solution would be to create your own temporary database tables (or as others have mentioned, add a temporary flag to your existing database tables) and persist the data there, storing a single identifier in the Session (or in a cookie) for later retrieval.
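A minimal sketch of that idea (the OrdersEntities context and the Order entity with Id/Status members are hypothetical placeholders for your EF model):
using System;
using System.Linq;
using System.Web;

public enum OrderStatus { Draft = 0, Finalized = 1 }

public static class DraftOrderStore
{
    // save the in-progress order with a Draft flag; keep only its id in Session
    public static void SaveDraft(OrdersEntities context, Order order)
    {
        order.Status = (int)OrderStatus.Draft;
        context.SaveChanges();
        HttpContext.Current.Session["DraftOrderId"] = order.Id;
    }

    // reload the draft on a later postback (or after a browser crash)
    public static Order LoadDraft(OrdersEntities context)
    {
        int id = (int)HttpContext.Current.Session["DraftOrderId"];
        return context.Orders.First(o => o.Id == id
                                      && o.Status == (int)OrderStatus.Draft);
    }
}
Finalizing the order is then just flipping the Status flag inside the same transaction that commits the user's changes.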
First, you may want to segregate your specific state management implementation into its own class so that you don't have to replicate it throughout your systems.
Second, you may want to consider a hybrid approach: use session state (or cache) for a short time to avoid unnecessary trips to a DB or other external store, and after some amount of inactivity, write the cached state out to disk or DB. The simplest way to do this is to serialize your objects (using either the built-in serialization or a library like protocol buffers). This lets you avoid creating redundant or duplicate data structures to capture the in-progress data relationally. If you don't need to query the content of this data, it's a reasonable approach.
As an aside, in the database world, the problem you describe is called a long running transaction. You essentially want to avoid making changes to the data until you reach a user-defined commit point. There are techniques you can use in the database layer, like hypothetical views and instead-of triggers to encapsulate the behavior that you aren't actually committing the change. The data is in the DB (in the real tables), but is only visible to the user operating on it. This is probably a more complicated implementation than you may be willing to undertake, and requires intrusive changes to your persistence layer and data model - but allows the application to be ignorant of the issue.
Have you considered storing the information in a JavaScript object and then sending that information to your server once the user hits save?
Use domain events to capture the user's actions and then replay those actions over a snapshot of the order model (effectively the current state of the order before the user started changing it).
Store each change as a series of events e.g. UserChangedShippingAddress, UserAlteredLineItem, UserDeletedLineItem, UserAddedLineItem.
These events can be saved after each postback and only need a link to the related order. Rebuilding the current state of the order is then as simple as replaying the events over the currently stored order objects.
When the user clicks save, you can replay the events and persist the updated order model to the database.
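A minimal sketch of the replay mechanics, using event names from above (the trimmed Order/LineItem shapes are illustrative assumptions):
using System.Collections.Generic;

// hypothetical order model, trimmed to what the events touch
public class LineItem { public int Id; public int Quantity; }
public class Order
{
    public string ShippingAddress;
    public List<LineItem> LineItems = new List<LineItem>();
}

public interface IOrderEvent
{
    void Apply(Order order);
}

public class UserChangedShippingAddress : IOrderEvent
{
    public string NewAddress;
    public void Apply(Order order) { order.ShippingAddress = NewAddress; }
}

public class UserDeletedLineItem : IOrderEvent
{
    public int LineItemId;
    public void Apply(Order order)
    {
        order.LineItems.RemoveAll(item => item.Id == LineItemId);
    }
}

public static class OrderReplay
{
    // rebuild the in-progress state: stored snapshot + recorded events, in order
    public static Order Rebuild(Order snapshot, IEnumerable<IOrderEvent> events)
    {
        foreach (var e in events)
            e.Apply(snapshot);
        return snapshot;
    }
}
Persisting each event row only needs the order id plus the serialized event, which is why the per-postback cost stays small.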
You are using the database, so no session or viewstate is required; you can therefore significantly reduce page weight and server memory load, at the expense of some page performance (if you choose to rebuild the model on each postback).
Maintenance is incredibly simple: domain events are easy to implement, so automated testing can readily be used to ensure the system behaves as you expect (while also documenting your intentions for other developers).
Because you are leveraging the database, the solution scales well across multiple web servers.
Using this approach does not require any alterations to your existing domain model, therefore the impact on existing code is minimal. Biggest downside is getting your head around the concept of domain events and how they are used and abused =)
This is effectively the same approach as described by Freddy Rios, with a little more detail about how, and some nice keywords for you to search with =)
http://jasondentler.com/blog/2009/11/simple-domain-events/ and http://www.udidahan.com/2009/06/14/domain-events-salvation/ are some good background reading about domain events. You may also want to read up on event sourcing as this is essentially what you would be doing ( snapshot object, record events, replay events, snapshot object again).
How about serializing your domain object (the contents of your grid/shopping cart) to JSON and storing it in a hidden field? ScottGu has a nice article on how to serialize objects to JSON. It scales across a server farm, and I would guess it would not add much payload to your page. Maybe you can write your own JSON serializer to do a "compact serialization" (you would not need the product name, SKU id, etc.; maybe you can just serialize the product ID and quantity).
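A minimal sketch of that compact round-trip using the built-in JavaScriptSerializer (the CartLine type and the hiddenCart field are illustrative assumptions):
using System.Collections.Generic;
using System.Web.Script.Serialization;
using System.Web.UI;
using System.Web.UI.WebControls;

// compact line: just the product id and quantity
public class CartLine
{
    public int ProductId { get; set; }
    public int Qty { get; set; }
}

public partial class OrderPage : Page
{
    // hypothetical HiddenField control (normally declared in the .aspx markup)
    protected HiddenField hiddenCart;

    protected void SaveCart(List<CartLine> lines)
    {
        hiddenCart.Value = new JavaScriptSerializer().Serialize(lines);
    }

    protected List<CartLine> LoadCart()
    {
        return new JavaScriptSerializer()
            .Deserialize<List<CartLine>>(hiddenCart.Value);
    }
}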
Have you considered using a User Profile? .NET comes with the SqlProfileProvider right out of the box. This would allow you to grab each user's profile and save the temporary data as a variable in the profile. Unfortunately, I think this does require your Order to be serializable, but I believe all of the options except Session discussed so far would require the same.
The advantage of this is that it persists through crashes, sessions, server downtime, etc., and it's fairly easy to set up. Here's a site that runs through an example. Once you set it up, you may also find it useful for storing other user information such as preferences, favorites, watched items, etc.
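A minimal sketch, assuming a profile property named PendingOrder declared in web.config's <profile> section and a serializable Order type (both hypothetical):
using System.Web;

public static class OrderProfileStore
{
    public static void SavePending(Order order)
    {
        // ProfileBase persists via the configured provider (e.g. SqlProfileProvider)
        var profile = HttpContext.Current.Profile;
        profile.SetPropertyValue("PendingOrder", order);
        profile.Save();
    }

    public static Order LoadPending()
    {
        // survives session timeouts and server restarts
        return (Order)HttpContext.Current.Profile.GetPropertyValue("PendingOrder");
    }
}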
You should be able to create a temp file and serialize the object to that, then save only the temp file name in the viewstate. Once the user successfully saves the record back to the database, you can remove the temp file.
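A minimal sketch of that approach (Order is an assumed [Serializable] placeholder for your domain type):
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using System.Web.UI;

public partial class OrderEditPage : Page
{
    // serialize the order to a temp file, keeping only the file name in ViewState
    private void SaveTempOrder(Order order)
    {
        string path = ViewState["TempOrderFile"] as string
                      ?? Path.GetTempFileName();
        using (var stream = File.Create(path))
            new BinaryFormatter().Serialize(stream, order);
        ViewState["TempOrderFile"] = path;
    }

    private Order LoadTempOrder()
    {
        string path = (string)ViewState["TempOrderFile"];
        using (var stream = File.OpenRead(path))
            return (Order)new BinaryFormatter().Deserialize(stream);
    }

    // call after the order is committed to the database
    private void DeleteTempOrder()
    {
        string path = ViewState["TempOrderFile"] as string;
        if (path != null) File.Delete(path);
        ViewState["TempOrderFile"] = null;
    }
}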
Single server: serialize to the filesystem. This also allows you to let the user resume later.
Multiple server: serialize it but store the serialized value in the db.
This is something that's for that specific user, so when you persist it to the db you don't really need all the relational stuff for it.
Alternatively, if the data set is very large and the amount of changes is usually small, you can store the history of changes made by the user instead. With this you can also show the change history and support undo.
Two approaches. The first is to create a complex AJAX application that stores everything on the client and only submits the entire package of changes to the server. I did this once a few years ago with moderate success; the application is not something I would want to maintain, though. You have a hard time syncing your client code with your server code, and passing fields that are added/deleted/changed is nightmarish.
The second approach is to store changes in the database, in a temp table or in "pending" mode. The advantage is that your code is more maintainable. The disadvantage is that you have to have a way to clean up abandoned changes caused by session timeouts, power failures, and other crashes. I would take this approach for any new development. You can have separate tables for "pending" and "committed" changes, which opens up a whole new level of features you can add: What if? What changed? etc.
I would go for viewstate, regardless of what you've said before. If you only store the stuff you need, like { id: XX, numberOfProducts: 3 }, and ditch every item that is not selected by the user at that point, the viewstate size will hardly be an issue as long as you aren't storing the whole object tree.
When storing attachments, put them in a temporary storage location and reference the filenames in your viewstate. You can have a scheduled task that cleans the temp folder of every file last saved more than a day ago, or something similar.
This is basically the approach we use for storing information when users are adding floorplan information and attachments in our backend.
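A minimal sketch of that kind of light projection (the type and property names are illustrative):
using System;
using System.Collections.Generic;
using System.Web.UI;

// keep only a light, serializable projection in ViewState, not the object tree
[Serializable]
public class SelectedProduct
{
    public int Id;
    public int NumberOfProducts;
}

public partial class EditOrderPage : Page
{
    private List<SelectedProduct> Selected
    {
        get
        {
            return (List<SelectedProduct>)ViewState["Selected"]
                   ?? new List<SelectedProduct>();
        }
        set { ViewState["Selected"] = value; }
    }
}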
Are the end-users internal or external clients? If your clients are internal users, it may be worthwhile to look at an alternative set of technologies. Instead of webforms, consider using a platform like Silverlight and implementing a rich GUI there.
You could then store complex business objects within the applet, provide persistent "in progress" edit tracking across multiple sessions via offline storage, and easily integrate with back-end services that provide saving/processing of the finalised order. All whilst maintaining access via the web (albeit closing out most *nix clients).
Alternatives include Adobe Flex or AJAX, depending on resources and needs.
How large do you consider large? If you are talking about session-state (so it doesn't go back and forth to the actual user, like view-state does), then it is often a pretty good option. Everything except the in-process state provider uses serialization, but you can influence how the data is serialized. For example, I would tend to create a local model that represents just the state I care about (plus any id/rowversion information) for that operation, rather than the full domain entities, which may have extra overhead.
To reduce the serialization overhead further, I would consider using something like protobuf-net; this can be used as the implementation for ISerializable, allowing very light-weight serialized objects (generally much smaller than BinaryFormatter, XmlSerializer, etc.) that are cheap to reconstruct on each page request.
When the page is finally saved, I would update my domain entities from the local model and submit the changes.
For info, to use a protobuf-net attributed object with the state serializers (typically BinaryFormatter), you can use:
using System;
using System.Runtime.Serialization;
using ProtoBuf;

// a simple, session-state friendly, light-weight UI model object
[Serializable, ProtoContract]
public class MyType : ISerializable
{
    [ProtoMember(1)]
    public int Id { get; set; }
    [ProtoMember(2)]
    public string Name { get; set; }
    [ProtoMember(3)]
    public double Value { get; set; }
    // etc

    public MyType() { } // default constructor

    // deserialization constructor used by BinaryFormatter
    protected MyType(SerializationInfo info, StreamingContext context)
    {
        Serializer.Merge(info, this);
    }

    // hand serialization off to protobuf-net
    void ISerializable.GetObjectData(
        SerializationInfo info, StreamingContext context)
    {
        Serializer.Serialize(info, this);
    }
}

Caching ListView data a viable option?

Here's my scenario:
1) User runs search to retrieve values for display in ListView via LinqDataSource.
2) They click on one of the items which takes them to another page where the details can be examined, further drill-down can happen, etc.
3) User wants to go back to the original ListView results to select another item for inspection.
I can see it's possible to pass the querystring params around, allowing the querying to be duplicated each time the user comes back to the ListView, but it seems like there ought to be a way to cache the results.
Since I'm using the LinqDataSource, though, I believe the actual results are fetched each time the query is run. I'm currently feeding a "select new {blah, blah}" type of IEnumerable to e.Result, which can't be turned into a List since it's populated with anonymous types.
In short:
1) Does it make sense to try to place potentially large query results in the user's session?
2) If it does, is a List the reasonable data structure?
3) Do I need to resort to something like creating a class with the correct properties to hold the anonymous data, enumerating the query results, and populating the List?
4) Is there a better option than the LinqDataSource for this type of goal?
5) Or, does it just make more sense to run the query each time they hit the ListView?
I apologize if this wasn't clear. I would really appreciate it if someone could set me straight before I nuke a bunch of my free time heading down the wrong path :)
First, I would suggest that you look into the caching mechanism that comes with ASP.NET, unless the data is private to a certain user.
Second, I would suggest that you design your application in a way so that you create natural points where you could try to get data from a cache before querying the database (and insert data into the cache, with expiration rules), but don't start putting stuff into the cache until you have verified that it will actually make a difference.
Measure how much time is actually spent retrieving data, and use caching in the cases where it makes a difference.
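As a rough sketch of such a "natural point" (the result type, cache key, and query delegate are illustrative assumptions), you could wrap the lookup like this; using a named type instead of an anonymous type also sidesteps the problem from the question:
using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

// a named type to hold what the anonymous projection held
public class SearchResultRow
{
    public int Id { get; set; }
    public string Description { get; set; }
}

public static class SearchResultCache
{
    public static List<SearchResultRow> GetResults(
        string cacheKey, Func<List<SearchResultRow>> runQuery)
    {
        var cached = HttpRuntime.Cache[cacheKey] as List<SearchResultRow>;
        if (cached != null)
            return cached;

        var results = runQuery(); // hit the database only on a cache miss
        HttpRuntime.Cache.Insert(
            cacheKey, results,
            null,                              // no cache dependency
            Cache.NoAbsoluteExpiration,
            TimeSpan.FromMinutes(10));         // sliding expiration
        return results;
    }
}
The sliding expiration means results the user keeps navigating back to stay cached, while abandoned searches fall out on their own.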
I'm not sure if resurrecting threads from the dead is cool on SO, but here is what I found to answer this question:
http://weblogs.asp.net/pwelter34/archive/2007/08.aspx
