Cost of creating a DbContext per web request in ASP.NET

I am using the Unit of Work and Repository patterns along with EF6 in my ASP.NET web application. A DbContext object is created and destroyed on every request.
I suspect that creating a new DbContext on every request is costly (I have not done any performance benchmarking).
Can the cost of creating a DbContext on every request be ignored? Has anybody done some benchmarking?

Creating a new context is ridiculously cheap, on the order of about 137 ticks on average (0.0000137 seconds) in my application.
Hanging onto a context, on the other hand, can be incredibly expensive, so dispose of it often.
The more objects you query, the more entities end up being tracked in the context. Since entities are POCOs, Entity Framework has absolutely no way of knowing which ones you've modified except to examine every single one in the context and mark it accordingly.
Sure, once they're marked, it will only make database calls for the ones that need updating, but determining which ones need updating is what gets expensive when lots of entities are being tracked, because it has to check all the POCOs against known values to see if they've changed.
This change tracking when calling save changes is so expensive, that if you're just reading and updating one record at a time, you're better off disposing of the context after each record and creating a new one. The alternative is hanging onto the context, such that every record you read results in a new entity in the context, and every time you call save changes it's slower by one entity.
And yes, it really is slower. If you're updating 10,000 entities for example, loading one at a time into the same context, the first save will only take about 30 ticks, but every subsequent one will take longer to the point where the last one will take over 30,000 ticks. In contrast, creating a new context each time will result in a consistent 30 ticks per update. In the end, because of the cumulative slow-down of hanging onto the context and all the tracked entities, disposing of and recreating the context before each commit ends up taking only 20% as long (1/5 the total time)!
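As a rough sketch of that dispose-per-unit-of-work loop (MyDbContext, Customers, and LastSeen are illustrative names, not taken from the answer):

public void TouchCustomers(IEnumerable<int> customerIds)
{
    foreach (var id in customerIds)
    {
        using (var context = new MyDbContext())
        {
            var customer = context.Customers.Find(id);
            customer.LastSeen = DateTime.UtcNow;
            context.SaveChanges();   // only one tracked entity to scan, so this stays fast
        }   // disposing here keeps the change tracker from growing with every iteration
    }
}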
That's why you should really only call save changes once on a context, ever, then dispose of it. If you're calling save changes more than once with a lot of entities in the context, you may not be using it correctly. The exceptional case, obviously, is when you're doing something transactional.
If you need to perform some transactional operation, then you need to manually open your own SqlConnection and either begin a transaction on it or open it within a TransactionScope. Then, you can create your DbContext by passing it that same open connection. You can do that over and over, disposing of the DbContext object each time while leaving the connection open. Usually, DbContext handles opening and closing the connection for you, but if you pass it an open connection, it won't try to close it automatically.
That way, you treat the DbContext as just a helper for tracking object changes on an open connection. You create and destroy it as many times as you like on the same connection, where you can run your transaction. It's very important to understand what's going on under the hood.
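For illustration, a rough sketch of that pattern with EF6 might look like the following; MyDbContext is a hypothetical context class that exposes the protected base DbContext(DbConnection, bool) constructor, and connectionString is whatever your application already uses (requires System.Data.SqlClient).

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
        // First short-lived context on the shared connection/transaction.
        using (var context = new MyDbContext(connection, contextOwnsConnection: false))
        {
            context.Database.UseTransaction(transaction);
            // ... query and modify entities ...
            context.SaveChanges();
        }

        // A second context, same open connection, same transaction.
        using (var context = new MyDbContext(connection, contextOwnsConnection: false))
        {
            context.Database.UseTransaction(transaction);
            // ... more work ...
            context.SaveChanges();
        }

        transaction.Commit();   // the connection and transaction outlive each DbContext
    }
}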

Entity Framework is not thread safe, meaning you cannot use a context across more than one thread. IIS uses a thread for each request sent to the server. Given this, you have to have a context per request. Otherwise, you run a major risk of unexplained and seemingly random exceptions, and of potentially incorrect data being saved to the database.
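As a rough sketch (not from the answer), one common way to get exactly one context per request in classic ASP.NET is to park it in HttpContext.Items and dispose it when the request ends; MyDbContext is a placeholder for your EF6 context:

public static class RequestDbContext
{
    private const string Key = "__requestDbContext";

    public static MyDbContext Current
    {
        get
        {
            var items = HttpContext.Current.Items;
            if (items[Key] == null)
                items[Key] = new MyDbContext();   // created lazily, once per request
            return (MyDbContext)items[Key];
        }
    }

    // Call from Application_EndRequest in Global.asax so the context is disposed
    // when the request finishes.
    public static void DisposeCurrent()
    {
        var context = HttpContext.Current.Items[Key] as MyDbContext;
        if (context != null)
        {
            context.Dispose();
            HttpContext.Current.Items.Remove(Key);
        }
    }
}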
Lastly, context creation is not that expensive an operation. If the application feels slow (not on first start, but after using the site for a while), your issue probably lies somewhere else.

Related

Why add a new entity just to modify another one?

I'm reading the source of an ASP.NET Core example project (MSDN) and trying to understand all of it.
There's an Edit Razor page which shows the values of an entity record in <input> fields, allowing the user to see and change different fields of a given record. It contains these lines:
Movie = await _context.Movie.FirstOrDefaultAsync(m => m.ID == id);
...
_context.Attach(Movie).State = EntityState.Modified;
I don't understand why it adds a new entity and changes its EntityState to Modified, instead of fetching the record, changing it, and then calling SaveChanges().
My guess is that their example is loading the movie in one call, passing it to the view, then in another update action passing the modified entity from the view back to the controller, which attaches it, sets its state to Modified, and then calls SaveChanges.
IMHO this is an extremely bad practice with EF for a number of reasons, and I have no idea why Microsoft uses it in examples (other than that it makes CRUD look easy-peasy).
By serializing entities to the view, you typically serialize far more data across the wire than your view actually needs, and you give malicious or curious users far more information about your system than you should.
You are bound to run into serializer errors with bi-directional references. ("A" has reference to "B" which has a reference back to "A") Serializers (like JSON) generally don't like these.
You are bound to run into performance issues with lazy loading calls as the serializer "touches" references. When dealing with collections of results, the resulting lazy load calls can blow performance completely out of the water.
Without lazy loading enabled, you can easily run into issues where referenced data comes through as null, or as potentially incomplete collections, because the context may happen to have some of the referenced data in its cache to associate with the entity, but not the complete set of child records.
By passing entities back to the controller, you expose your system to unintended modifications: an attacker can modify the entity data in ways you never intended, and when you attach it, set the state to Modified, and save, you overwrite the stored state, e.g. changing FKs or altering data that is not supported, or even displayed, by your UI.
You are bound to run into stale data issues where data can have changed between the point you read the entity initially and the point it is saved. Attaching and saving takes a brutal "last-in-wins" approach to concurrent data.
My general advice is to:
1. Leverage Select or Automapper's ProjectTo to populate ViewModel classes with just the fields your view will need. This avoids the risks of lazy loads, and minimizes the data passed to the client. (Faster, and reveals nothing extra about your system)
2. Trust absolutely nothing coming back from the client. Validate the returned view model object, then only after you're satisfied it is legit, load the entity(ies) from the context and copy the applicable fields across. This also gives you the opportunity to evaluate row versioning to handle concurrency issues.
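A rough sketch of both points, assuming an ASP.NET Core Razor Page whose _context field is the EF Core context; the MovieEditViewModel type and the Title/Price properties are illustrative, not taken from the sample project:

// Only the fields the screen actually needs.
public class MovieEditViewModel
{
    public int ID { get; set; }
    public string Title { get; set; }
    public decimal Price { get; set; }
}

[BindProperty]
public MovieEditViewModel Movie { get; set; }

// GET: project only the needed fields instead of serializing the whole entity.
public async Task OnGetAsync(int id)
{
    Movie = await _context.Movie
        .Where(m => m.ID == id)
        .Select(m => new MovieEditViewModel { ID = m.ID, Title = m.Title, Price = m.Price })
        .FirstOrDefaultAsync();
}

// POST: trust nothing from the client; load the tracked entity and copy the editable fields.
public async Task<IActionResult> OnPostAsync()
{
    if (!ModelState.IsValid) return Page();

    var movie = await _context.Movie.FirstOrDefaultAsync(m => m.ID == Movie.ID);
    if (movie == null) return NotFound();

    movie.Title = Movie.Title;           // copy only what the screen is allowed to edit
    movie.Price = Movie.Price;
    await _context.SaveChangesAsync();   // normal change tracking; no Attach/Modified needed
    return RedirectToPage("./Index");
}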
Disclaimer: You can certainly address most of the issues I pointed out while still passing entities back and forth, but you definitely leave the door wide open to vulnerabilities and bugs creeping in when someone just defaults to an attach-and-save, or lazy loads start creeping in.

Solution for previewing user changes and allowing rollback/commit over a period of time

I have asked a few questions today as I try to think through to the solution of a problem.
We have a complex data structure where all of the various entities are tightly interconnected, with almost all entities heavily reliant/dependent upon entities of other types.
The project is a website (MVC3, .NET 4), and all of the logic is implemented using LINQ-to-SQL (2008) in the business layer.
What we need to do is have a user "lock" the system while they make their changes (there are other reasons for this which I won't go into here that are not database related). While this user is making their changes we want to be able to show them the original state of entities which they are updating, as well as a "preview" of the changes they have made. When finished, they need to be able to rollback/commit.
We have considered these options:
Holding open a transaction for the length of time a user takes to make multiple changes stinks, so that's out.
Holding a copy of all the data in memory (or cached to disk) is an option but there is heck of a lot of it, so seems unreasonable.
Maintaining a set of secondary tables, or attempting to use session state to store changes, but this is complex and difficult to maintain.
Using two databases, flipping between them by connection string, and using T-SQL to manage replication, putting them back in sync after commit/rollback. I.e. switching on/off, forcing snapshot, reversing direction etc.
We're a bit stumped for a solution that is relatively easy to maintain. Any suggestions?
Our solution to a similar problem is to use a locking table that holds locks per entity type in our system. When the client application wants to edit an entity, we do a "GetWithLock" which gets the client the most up-to-date version of the entity's data as well as obtaining a lock (a GUID that is stored in the lock table along with the entity type and the entity ID). This prevents other users from editing the same entity. When you commit your changes with an update, you release the lock by deleting the lock record from the lock table. Since stored procedures are the API we use for interacting with the database, this makes it very straightforward to lock/unlock access to specific entities.
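For illustration only (the answer's actual API is stored procedures; the EntityLock table and column names here are hypothetical), the lock-acquisition step might look roughly like this in C#:

// Tries to take a lock on (entityType, entityId); returns the lock GUID on success, null if
// another user already holds it. A unique constraint on (EntityType, EntityId) keeps this race-safe.
public Guid? TryAcquireLock(string entityType, int entityId, SqlConnection connection)
{
    var lockId = Guid.NewGuid();
    const string sql = @"
        INSERT INTO EntityLock (LockId, EntityType, EntityId)
        SELECT @lockId, @entityType, @entityId
        WHERE NOT EXISTS (SELECT 1 FROM EntityLock
                          WHERE EntityType = @entityType AND EntityId = @entityId);";

    using (var cmd = new SqlCommand(sql, connection))
    {
        cmd.Parameters.AddWithValue("@lockId", lockId);
        cmd.Parameters.AddWithValue("@entityType", entityType);
        cmd.Parameters.AddWithValue("@entityId", entityId);
        return cmd.ExecuteNonQuery() == 1 ? lockId : (Guid?)null;
    }
}

// Releasing the lock on commit is just a DELETE from EntityLock keyed on LockId.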
On the client side, we implement IEditableObject on the UI model classes. Our model classes hold a reference to the instance of the service entity that was retrieved on the service call. This allows the UI to do a Begin/End/Cancel Edit and do the commit or rollback as necessary. By holding the instance of the original service entity, we are able to see the original and current data, which would allow the user to get that "preview" you're looking for.
While our solution does not implement LINQ, I don't believe there's anything unique in our approach that would prevent you from using LINQ as well.
HTH
Consider this:
Long transactions make the system less scalable. If you issue an UPDATE command, the update locks last until commit/rollback, preventing other transactions from proceeding.
Secondary tables/databases can be modified by concurrent transactions, so you cannot rely on the data in them. The only way around that is to lock them => see point 1.
Serializable transactions in some database engines use versioned data in your tables, so after the first command is executed, the transaction sees exactly the data that was available at execution time. This might help you show the changes made by the user, but it gives you no guarantee that you can save them back to storage.
DataSets contain both old and new versions of the data, but that is unfortunately outside the technology you're aiming for.
Use a set of secondary tables.
The problem is that your connection should see two versions of data while the other connections should see only one (or two, one of them being their own).
While it is possible theoretically and is implemented in Oracle using flashbacks, SQL Server does not support it natively, since it has no means to query previous versions of the records.
You can issue a query like this:
SELECT *
FROM mytable
AS OF TIMESTAMP
TO_TIMESTAMP('2010-01-17')
in Oracle but not in SQL Server.
This means that you need to implement this functionality yourself (placing the new versions of rows into your own tables).
Sounds like an ugly problem, and raises a whole lot of questions you won't be able to go into on SO. I got the following idea while reading your problem, and while it "smells" as bad as the others you list, it may help you work up an eventual solution.
First, have some kind of locking system, as described by #user580122, to flag/record the fact that one of these transactions is going on. (Be sure to include some kind of periodic automated check, to test for lost or abandoned transactions!)
Next, for every change you make to the database, log it somehow, either in the application or in a dedicated table somewhere. The idea is, given a copy of the database at state X, you could re-run the steps submitted by the user at any time.
Next up is figuring out how to use database snapshots. Read up on these in BOL; the general idea is you create a point-in-time snapshot of the database, do whatever you want with it, and eventually throw it away. (Only available in SQL 2005 and up, Enterprise edition only.)
So:
A user comes along and initiates one of these meta-transactions.
A flag is marked in the database showing what is going on. A new transaction cannot be started if one is already in process. (Again, check for lost transactions now and then!)
Every change made to the database is tracked and recorded in such a fashion that it could be repeated.
If the user decides to cancel the transaction, you just drop the snapshot, and nothing is changed.
If the user decides to keep the transaction, you drop the snapshot, and then immediately re-apply the logged changes to the "real" database. This should work, since your requirements imply that, while someone is working on one of these, no one else can touch the related parts of the database.
Yep, this sure smells, and it may not apply too well to your problem. Hopefully the ideas here help you work something out.

What is the cost of object creation?

If I have to choose between a static method and creating an instance to use an instance method, I will always choose static methods. But what is the detailed overhead of creating an instance?
For example, I saw a DAL which could have been done with static classes, but they chose to make it instance-based, so now in the BLL, at every single call, they call something like:
new Customer().GetData();
How bad can this be?
Thanks
The performance penalty should be negligible. In this blog entry someone did a short benchmark, with the result that creating 500,000 objects and adding them to a list cost about 1.5 seconds.
So, since I guess new Customer().GetData(); will be called at most a few hundred times in a single BLL function, the performance penalty can be ignored.
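If you want to sanity-check that number in your own environment, a throwaway micro-benchmark along these lines is enough (assuming Customer has a cheap parameterless constructor; this is not a substitute for a real profiler):

var sw = System.Diagnostics.Stopwatch.StartNew();
var list = new System.Collections.Generic.List<Customer>();
for (int i = 0; i < 500000; i++)
{
    list.Add(new Customer());   // pure allocation plus constructor, no I/O
}
sw.Stop();
Console.WriteLine("500,000 objects created and added in {0} ms", sw.ElapsedMilliseconds);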
As a side note, either the design or the naming of the class is broken if new Customer().GetData(); is actually used: If class Customer just provides the means to get the data, it should be called something different, like CustomerReader (for lack of a better name). On the other hand, if Customer has an instance state that actually represents a Customer, GetData should be static -- not for reasons of performance, but for reasons of consistency.
Normally one shouldn't be too concerned about object creation overhead in the CLR. The allocation of memory for the new object would be really quick (due to the memory compacting stage of the garbage collector - GC). Creating new objects will take up a bit of memory for the object and put a bit more pressure on the GC (as it will have to clean up the object), but if it's only being used for a short time then it might be collected in an early GC generation which isn't that bad performance wise. Also the performance overhead would be dwarfed by the call to a database.
In general, I'll make the decision whether to create a new object for some related methods or just use a static class (and methods) based on what I require from the object (such as the need to mock/stub it out for tests), not on the slight difference in performance.
As a side note, whether new Customer().GetData(); is the right place to put such code is questionable: based on that statement, it reads as if the data returned is directly related to a customer instance, rather than being a call to the database to retrieve data.

Ways to store an object across multiple postbacks

For the sake of argument assume that I have a webform that allows a user to edit order details. User can perform the following functions:
Change shipping/payment details (all simple text/dropdowns)
Add/Remove/Edit products in the order - this is done with a grid
Add/Remove attachments
Products and attachments are stored in separate DB tables with foreign key to the order.
Entity Framework (4.0) is used as ORM.
I want to allow the users to make whatever changes they want to the order and only when they hit 'Save' do I want to commit the changes to the database. This is not a problem with textboxes/checkboxes etc. as I can just rely on ViewState to get the required information. However the grid is presenting a much larger problem for me as I can't figure out a nice and easy way to persist the changes the user made without committing the changes to the database. Storing the Order object tree in Session/ViewState is not really an option I'd like to go with as the objects could get very large.
So the question is - how can I go about preserving the changes the user made until ready to 'Save'.
Quick note - I have searched SO to try to find a solution, however all I found were suggestions to use Session and/or ViewState - both of which I would rather not use due to potential size of my object trees
If you have control over the schema of the database and the other applications that utilize order data, you could add a flag or status column to the orders table that differentiates between temporary and finalized orders. Then, you can simply store your intermediate changes to the database. There are other benefits as well; for example, a user that had a browser crash could return to the application and be able to resume the order process.
I think sticking to the database for storing data is the only reliable way to persist data, even temporary data. Using session state, control state, cookies, temporary files, etc., can introduce a lot of things that can go wrong, especially if your application resides in a web farm.
If using the Session is not your preferred solution, which is probably wise, the best possible solution would be to create your own temporary database tables (or as others have mentioned, add a temporary flag to your existing database tables) and persist the data there, storing a single identifier in the Session (or in a cookie) for later retrieval.
First, you may want to segregate your specific state-management implementation into its own class so that you don't have to replicate it throughout your systems.
Second, you may want to consider a hybrid approach: use session state (or cache) for a short time to avoid unnecessary trips to a DB or other external store. After some amount of inactivity, write the cached state out to disk or the DB. The simplest way to do this is to serialize your objects to text (using either built-in serialization or a library like protocol buffers). This helps you avoid creating redundant or duplicate data structures to capture the in-progress data relationally. If you don't need to query the content of this data, it's a reasonable approach.
As an aside, in the database world, the problem you describe is called a long running transaction. You essentially want to avoid making changes to the data until you reach a user-defined commit point. There are techniques you can use in the database layer, like hypothetical views and instead-of triggers to encapsulate the behavior that you aren't actually committing the change. The data is in the DB (in the real tables), but is only visible to the user operating on it. This is probably a more complicated implementation than you may be willing to undertake, and requires intrusive changes to your persistence layer and data model - but allows the application to be ignorant of the issue.
Have you considered storing the information in a JavaScript object and then sending that information to your server once the user hits save?
Use domain events to capture the user's actions and then replay those actions over a snapshot of the order model (effectively the current state of the order before the user started changing it).
Store each change as a series of events e.g. UserChangedShippingAddress, UserAlteredLineItem, UserDeletedLineItem, UserAddedLineItem.
These events can be saved after each postback and only need a link to the related order. Rebuilding the current state of the order is then as simple as replaying the events over the currently stored order objects.
When the user clicks save, you can replay the events and persist the updated order model to the database.
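To make that concrete, here is a minimal sketch of the idea; the event types and the Order/LineItem members they touch are illustrative placeholders, not an existing library (requires System.Linq for FirstOrDefault):

// Each user action is captured as a small, serializable event that knows how to apply
// itself to an order. Assumes an Order with ShippingAddress and a LineItems collection.
public abstract class OrderEvent
{
    public int OrderId { get; set; }
    public abstract void Apply(Order order);
}

public class UserChangedShippingAddress : OrderEvent
{
    public string NewAddress { get; set; }
    public override void Apply(Order order) { order.ShippingAddress = NewAddress; }
}

public class UserDeletedLineItem : OrderEvent
{
    public int LineItemId { get; set; }
    public override void Apply(Order order)
    {
        var item = order.LineItems.FirstOrDefault(li => li.Id == LineItemId);
        if (item != null) order.LineItems.Remove(item);
    }
}

// The "preview" is simply the stored order with the user's saved events replayed over it.
public static Order BuildPreview(Order snapshot, IEnumerable<OrderEvent> savedEvents)
{
    foreach (var e in savedEvents)
        e.Apply(snapshot);
    return snapshot;
}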
You are using the database, so no session or viewstate is required; you can therefore significantly reduce page weight and server memory load, at the expense of some page performance (if you choose to rebuild the model on each postback).
Maintenance is incredibly simple: because domain events are so easy to implement, automated testing can readily be used to ensure the system behaves as you expect (while also documenting your intentions for other developers).
Because you are leveraging the database, the solution scales well across multiple web servers.
Using this approach does not require any alterations to your existing domain model, therefore the impact on existing code is minimal. Biggest downside is getting your head around the concept of domain events and how they are used and abused =)
This is effectively the same approach as described by Freddy Rios, with a little more detail about how and some nice keyword for you to search with =)
http://jasondentler.com/blog/2009/11/simple-domain-events/ and http://www.udidahan.com/2009/06/14/domain-events-salvation/ are some good background reading about domain events. You may also want to read up on event sourcing as this is essentially what you would be doing ( snapshot object, record events, replay events, snapshot object again).
How about serializing your domain object (the contents of your grid/shopping cart) to JSON and storing it in a hidden field? ScottGu has a nice article on how to serialize objects to JSON. It scales across a server farm and I'd guess it would not add much payload to your page. Maybe you can even write your own JSON serializer to do a "compact serialization" (you don't need the product name, SKU ID, etc.; maybe you can just serialize the product ID and quantity).
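A sketch of that compact approach using the built-in JavaScriptSerializer; CartLine and hiddenCartField are made-up names, and the real code would only carry whatever the grid needs to rebuild a row:

// Only the fields needed to rebuild a grid row.
public class CartLine
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}

// On the way out: serialize the lines to JSON and park them in a hidden field.
var lines = new List<CartLine> { new CartLine { ProductId = 42, Quantity = 3 } };
var serializer = new System.Web.Script.Serialization.JavaScriptSerializer();
hiddenCartField.Value = serializer.Serialize(lines);

// On postback: rebuild the strongly-typed list from the hidden field.
var restored = serializer.Deserialize<List<CartLine>>(hiddenCartField.Value);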
Have you considered using a User Profile? .Net comes with SqlProfileProvider right out of the box. This would allow you to, for each user, grab their profile and save the temporary data as a variable off in the profile. Unfortunately, I think this does require your "Order" to be serializable, but I believe all of the options except Session thus far would require the same.
The advantage of this is it would persist through crashes, sessions, server down time, etc and it's fairly easy to set up. Here's a site that runs through an example. Once you set it up, you may also find it useful for storing other user information such as preferences, favorites, watched items, etc.
You should be able to create a temp file and serialize the object to that, then save only the temp file name to the viewstate. Once they successfully save the record back to the database then you could remove the temp file.
Single server: serialize to the filesystem. This also allows you to let the user resume later.
Multiple server: serialize it but store the serialized value in the db.
This is something that's for that specific user, so when you persist it to the db you don't really need all the relational stuff for it.
Alternatively, if the set of data is v. large and the amount of changes is usually small, you can store the history of changes done by the user instead. With this you can also show the change history + support undo.
Two approaches. The first is to create a complex AJAX application that stores everything on the client and only submits the entire package of changes to the server. I did this once a few years ago with moderate success. The application is not something I would want to maintain, though: you have a hard time syncing your client code with your server code, and passing fields that are added/deleted/changed is nightmarish.
The second approach is to store changes in the database in a temp table or in a "pending" mode. The advantage is that your code is more maintainable. The disadvantage is that you have to have a way to clean up abandoned changes due to session timeouts, power failures, and other crashes. I would take this approach for any new development. You can have separate tables for "pending" and "committed" changes, which opens up a whole new level of features you can add. What if? What changed? etc.
I would go for viewstate, regardless of what you've said before. If you only store the stuff you need, like { id: XX, numberOfProducts: 3 }, and ditch every item that is not selected by the user at this point; the viewstate size will hardly be an issue as long as you aren't storing the whole object tree.
When storing attachments, put them in a temporary storage location and reference the filename in your viewstate. You can have a scheduled task that cleans out of the temp folder every file last saved more than a day ago, or something similar.
This is basically the approach we use for storing information when users are adding floorplan information and attachments in our backend.
Are the end-users internal or external clients? If your clients are internal users, it may be worthwhile to look at an alternate set of technologies. Instead of webforms, consider using a platform like Silverlight and implementing a rich GUI there.
You could then store complex business objects within the applet, provide persistent "in progress" edit tracking across multiple sessions via offline storage, and easily integrate with back-end services that provide saving/processing of the finalised order. All whilst maintaining access via the web (albeit closing out most *nix clients).
Alternatives include Adobe Flex or AJAX, depending on resources and needs.
How large do you consider large? If you are talking session state (so it doesn't go back and forth to the actual user, like viewstate does), then session state is often a pretty good option. Everything except the in-process state provider uses serialization, but you can influence how it is serialized. For example, I would tend to create a local model that represents just the state I care about (plus any id/rowversion information) for that operation, rather than the full domain entities, which may have extra overhead.
To reduce the serialization overhead further, I would consider using something like protobuf-net; this can be used as the implementation for ISerializable, allowing very light-weight serialized objects (generally much smaller than BinaryFormatter, XmlSerializer, etc), that are cheap to reconstruct at page requests.
When the page is finally saved, I would update my domain entities from the local model and submit the changes.
For info, to use a protobuf-net attributed object with the state serializers (typically BinaryFormatter), you can use:
// A simple, session-state friendly, light-weight UI model object.
// Requires: using System.Runtime.Serialization; and using ProtoBuf;
[Serializable, ProtoContract]
public class MyType : ISerializable
{
    [ProtoMember(1)]
    public int Id { get; set; }
    [ProtoMember(2)]
    public string Name { get; set; }
    [ProtoMember(3)]
    public double Value { get; set; }
    // etc

    public MyType() { } // default constructor

    // Custom serialization: let protobuf-net write the object into the SerializationInfo.
    void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context)
    {
        Serializer.Serialize(info, this);
    }

    // Custom deserialization: rebuild the object from the protobuf payload.
    protected MyType(SerializationInfo info, StreamingContext context)
    {
        Serializer.Merge(info, this);
    }
}

What will happen if I don't lock the dictionary when I modify it? (About the ASP.NET cache)

Sorry, I have many questions about locking and the cache.
1. About the cache: I know that the cache in ASP.NET is thread-safe. The simple code I usually use is:
IList<User> user = HttpRuntime.Cache["myCacheItem"] as IList<User>;
if (user == null)
{
    //should I have a lock here?
    //lock(some_static_var){...}
    user = GetDateFromDateBase();
    HttpRuntime.Cache["myCacheItem"] = user;
}
return user;
Should I use a lock in this code?
1.1 If I do use one, should I declare many lock items? I have seen an implementation in Community Server that uses a static dictionary to store the lock items. Is that a good idea? I am worried that there may be too many lock items in the dictionary and that it may slow down the system.
1.2 If I don't use one, what will happen? Just that two or more threads may call GetDateFromDateBase()? If that's all, I think I can give up the lock.
2. I have a generic dictionary stored in the cache which I have to modify (add/update/delete). I only use it to look up values with dic.TryGetValue(key); I don't loop over it.
2.1 Suppose I can guarantee that the modifications happen on only one thread, in a scenario like this:
a.aspx -> reads the dictionary from the cache and displays it on the page; public, for users
b.ashx -> modifies the dictionary when called (on a 5-minute loop); private use only
Should I use a lock in a/b? Lock both the reader and the writer?
2.1.1 If I don't use any lock, what will happen? Will it throw an exception when a reader and the writer access the dictionary at the same time?
2.1.2 If I only lock the writer in b.ashx, what will happen? Will the reader in a.aspx be blocked? And what's the best practice for dealing with this situation?
2.2 What if both the reader and the writer are accessed from multiple threads, i.e. both are public pages?
a.aspx -> just reads from the cache
b.aspx -> modifies the dictionary
What should I do? Lock everything?
2.3 What if I implement a new dictionary "add" function that just copies the current dictionary to a new dictionary, adds the new item (or applies the modification), and finally returns the new dictionary? Will that solve the concurrency problem?
Would a Hashtable solve these problems?
How do I determine whether an item needs to be locked? I think I'm getting these things wrong.
3. The last question: I have two web applications:
a -> shows the web site
b -> manages some settings
They each have their own cache. How can I keep the two caches consistent?
Would the approach in [2.1] be right, or something else? (I tested memcached but it was much slower than the in-process web application cache, so I just use the two separate caches.)
Thank you for reading all of this; I hope you can understand what I'm asking. :)
Update:
Thanks for Aaron's answer. After reading it, I'll try to answer my own questions. :)
1. About the cache: if the data will not be modified, it can be read into the cache up front in Application_Start (global.asax).
1.1 If I lock, I should take the lock before reading the data, not only before writing. The lock item should be static, but I still feel uncertain about 1.1.
1.2 Yes, if you can assume that the code just reads data from the database, inserts it into the cache, and never modifies it (right?). The result may be that the data is read several times from the database; I think this is not a big problem.
2. Writing to a generic dictionary is not thread-safe, so it should be locked when modified. To solve this, I can treat the dictionary as immutable.
But if I use a ConcurrentDictionary (thread-safe for reads and writes) in .NET 4.0, or implement a new dictionary myself (using a reader-writer lock), will that solve it? Do I need to lock it again when modifying? The code would be like:
ConcurrentDictionary<string, User> user = HttpRuntime.Cache["myCacheItem"] as ConcurrentDictionary<string, User>;
if (user == null)
{
    //is it safe when user is a ConcurrentDictionary?
    HttpRuntime.Cache["myCacheItem"] = GetDateFromDateBase();
}
else
{
    //is it safe when user is a ConcurrentDictionary?
    user["1"] = a_new_user;
}
3. The question is this: I have a small application, like a store, with two web applications: one is the public store site (A) and the other is the management site (B). I need to keep the two web caches in sync; for example, if I modify a product price in B, how can I notify site A to update/delete its cached entry? (I know the cache expiration in A can be set shorter, but that isn't immediate, so I want to know whether there is built-in support for this in ASP.NET, or whether I should do something like question 2.1 and expose an aspx/ashx page to call.)
Thread-safe means that you do not have to worry about multiple threads reading and writing the same individual cache item at the same time. You do not need any kind of lock for simple reads and writes.
However, if you are trying to perform what should be an atomic operation - for example, checking for the presence of an item and adding it if it doesn't exist - then you need to synchronize or serialize access. Because the lock needs to be global, it is usually best to declare a static readonly object in your global.asax and lock that before performing the atomic operation. Note that you should lock before the read and only release the lock after the write, so your hypothetical lock in your example above is actually happening too late.
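For illustration, a minimal sketch of that synchronized check-then-add pattern, reusing the cache key and data-loading method name from the question (the lock object would live in Global.asax.cs or a static cache helper):

// Declared once, e.g. in Global.asax.cs or a static cache helper class.
private static readonly object CacheLock = new object();

public static IList<User> GetUsers()
{
    var users = HttpRuntime.Cache["myCacheItem"] as IList<User>;
    if (users == null)
    {
        lock (CacheLock)                    // take the lock BEFORE re-reading
        {
            users = HttpRuntime.Cache["myCacheItem"] as IList<User>;
            if (users == null)              // double-check inside the lock
            {
                users = GetDateFromDateBase();
                HttpRuntime.Cache["myCacheItem"] = users;
            }
        }                                   // the lock is released only after the write
    }
    return users;
}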
This is why many web applications don't lazy-load the cache; instead, they perform the load in the Application_Start method. At the very least, putting an empty dictionary into the cache would save you the trouble of checking for null, and this way you wouldn't have to synchronize because you would only be accessing the cache item once (but read on).
The other aspect to the problem is, even though the cache itself is thread-safe, it does not make any items you add to it thread-safe. That means that if you store a dictionary in there, any threads that retrieve the cache item are guaranteed to get a dictionary that's fully-initialized, but you can still create race conditions and other thread-safety issues if you have multiple requests trying to access the dictionary concurrently with at least one of them modifying it.
Dictionaries in .NET can support multiple concurrent readers but not writers, so you definitely can run into issues if you have a single dictionary stored in the ASP.NET cache and have multiple threads/requests reading and writing. If you can guarantee that only one thread will ever write to the dictionary, then you can use the Dictionary as an immutable type - that is, copy the dictionary, modify the copy, and replace the original in the cache with the copy. If the dictionary is infrequently-modified, then this would indeed save you the trouble of synchronizing access, because no request will ever be trying to read from the same dictionary that is being modified. On the other hand, if the dictionary is very large and/or frequently modified, then you could run into performance issues making all those copies - the only way to be sure is profile, profile, profile!
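A rough sketch of that copy-modify-replace approach (key and value types are illustrative, and only the single writer thread runs this code):

// Writer thread only: never mutate the dictionary instance that readers may be using.
var current = HttpRuntime.Cache["myCacheItem"] as Dictionary<string, User>
              ?? new Dictionary<string, User>();

var copy = new Dictionary<string, User>(current);    // private copy
copy["42"] = updatedUser;                             // apply the change to the copy

// Swapping the cache entry is an atomic reference replacement: readers see either the
// old dictionary or the new one, never a half-modified state.
HttpRuntime.Cache["myCacheItem"] = copy;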
If you find that performance constraints don't allow you to use that approach, then the only other option is to synchronize access. If you know that only one thread will ever modify the dictionary at a time, then a reader-writer lock (i.e. ReaderWriterLockSlim) should work well for you. If you can't guarantee this, then you need a Monitor (or just a simple lock(...) block around each sequence of operations).
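If you do end up synchronizing, a bare-bones ReaderWriterLockSlim wrapper might look like this (types and names are illustrative; requires System.Threading and System.Collections.Generic):

private static readonly ReaderWriterLockSlim DictLock = new ReaderWriterLockSlim();
private static readonly Dictionary<string, User> Users = new Dictionary<string, User>();

public static User GetUser(string key)
{
    DictLock.EnterReadLock();               // many readers may hold this at once
    try
    {
        User user;
        return Users.TryGetValue(key, out user) ? user : null;
    }
    finally { DictLock.ExitReadLock(); }
}

public static void SetUser(string key, User user)
{
    DictLock.EnterWriteLock();              // writers get exclusive access
    try { Users[key] = user; }
    finally { DictLock.ExitWriteLock(); }
}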
That should answer questions 1-2; I'm sorry but I don't quite understand what you're asking in #3. ASP.NET applications are all instantiated in their own AppDomains, so there aren't really any concurrency issues to speak of because nothing is shared (unless you are actually using some method of IPC, in which case everything is fair game).
Have I understood your questions correctly? Does this help?
