How to handle concurrency control in ASP.NET Dynamic Data?

I've been quite impressed with dynamic data and how easy and quick it is to get a simple site up and running. I'm planning on using it for a simple internal HR admin site for registering people's skills/degrees/etc.
I've been watching the intro videos at www.asp.net/dynamicdata and one thing they never mention is how to handle concurrency control.
It seems that DD does not handle it right out of the box (unless there is some setting I haven't seen), as I manually generated a change-conflict exception and the app failed without any user-friendly message.
Anybody know if DD handles it out of the box? Or do you have to somehow build it into the site?

Concurrency is not handled out of the box by DD.

One approach would be to implement this on the database side, by adding a "last updated" timestamp column (or other unique stamp, such as a GUID) to each table.
You then create an update trigger for each table. For each row being updated, is the "last updated" stamp passed in the same as the one on the row in the database?
If so, update the row, but give it a new "last updated" stamp.
If not, raise a specific "Data is out of date" exception.
On the client side, for each row you update, you'd need to refresh the "last updated" stamp.
In the client code you watch for the "Data is out of date" exception and display a helpful message to the user, asking them to refresh the data and re-submit their change.
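Here is a minimal sketch of such a trigger, assuming a dbo.Employee table with an EmployeeId key and a uniqueidentifier LastUpdated column (all names are hypothetical). The client sets LastUpdated to the value it originally read; the trigger rejects the update if the row has changed since then:

CREATE TRIGGER trg_Employee_ConcurrencyCheck
ON dbo.Employee
INSTEAD OF UPDATE
AS
BEGIN
    -- The stamp the client passes in (inserted) must match the stamp
    -- currently on the row (deleted); otherwise the data is out of date.
    IF EXISTS (SELECT 1
               FROM inserted i
               JOIN deleted  d ON d.EmployeeId = i.EmployeeId
               WHERE d.LastUpdated <> i.LastUpdated)
    BEGIN
        RAISERROR('Data is out of date', 16, 1);
        ROLLBACK TRANSACTION;
        RETURN;
    END;

    -- Stamps match: apply the update and issue a new stamp.
    -- (Set every updatable column here; Name stands in for the rest.)
    UPDATE e
    SET    e.Name        = i.Name,
           e.LastUpdated = NEWID()
    FROM   dbo.Employee e
    JOIN   inserted i ON i.EmployeeId = e.EmployeeId;
END;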
Hope this helps.

It all depends on what you mean by "out of the box". Of course you have to write some code to handle concurrency, but some features help with the implementation.
My favourite model is optimistic concurrency based on the rowversion datatype of SQL Server. It is like a "last updated" timestamp, but you don't need an update trigger on each table: SQL Server automatically updates the corresponding "timestamp" column in your tables every time data in the row changes. I describe it in my old answer Concurrency handling of Sql transactrion. I hope it will be helpful for you.
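Here is a minimal sketch of that model (table and column names are hypothetical):

ALTER TABLE dbo.Employee ADD RowStamp rowversion;

-- Optimistic update: succeeds only if the row is unchanged since it was read.
-- SQL Server maintains RowStamp automatically; you never write to it.
UPDATE dbo.Employee
SET Name = @Name
WHERE EmployeeId = @EmployeeId
  AND RowStamp = @OriginalRowStamp;

IF @@ROWCOUNT = 0
    RAISERROR('Data is out of date', 16, 1); -- someone else changed the row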

I was under the impression that Dynamic Data does the update on the underlying data source. Maybe you can specify the concurrency model (pessimistic/optimistic) on the data meta-model that gets registered in the App_Init section. But you would probably just get an "unable to save changes" error, so by default it would be pessimistic: last in loses....

Sorry for the late reply. Yes, DD is very strong when it comes to fast development of a project. Not only that, it is a base part of .NET 4.0; DD has been enhanced and included in .NET 4.0.
DD mostly works on LINQ to SQL, so I suggest you have a look at that part.
In LINQ to SQL, when you go to the properties of a table's columns you will find an Update Check property that specifies whether to check the old value before updating to the new value. If you set that, I think your problem will be handled (see the sketch below).
Wishing you the best of luck.
Let's learn from each other.
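For illustration, a minimal sketch of that mapping (the Employee table, its columns, and the context variable are hypothetical). With UpdateCheck set, LINQ to SQL includes the original value in the UPDATE's WHERE clause, and a concurrent change surfaces as a ChangeConflictException, which is the exception the question ran into:

using System.Data.Linq;
using System.Data.Linq.Mapping;

[Table(Name = "dbo.Employee")]
public class Employee
{
    [Column(IsPrimaryKey = true)]
    public int EmployeeId { get; set; }

    // Check this column's old value before writing the new one.
    [Column(UpdateCheck = UpdateCheck.Always)]
    public string Name { get; set; }
}

// On submit, catch the conflict and show a friendly message
// (context is an instance of your DataContext):
try
{
    context.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException)
{
    // Ask the user to refresh the data and re-submit their change.
}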

The solution given by Binary Worrier works, and it's widely used on platforms providing a GUI to merge changes (e.g. source control programs, wiki engines, etc.). That way no user loses their changes. On the other hand, it requires a fair amount of code, or using external components or DLLs.
If you are not happy with that, another approach is just to lock the record that is being edited. Nobody else will be able to edit that record until the user commits the changes or their session expires. It has its pros and cons, but it requires little code compared with the first option.

Related

Best practice for saving application-level data to database (ASP.NET)

We have a very large HTML form (> 100 fields) that updates a SQL Server database with user-entered data. It will take the user a long time to fill out the form, but every piece of information they submit is very valuable to the business process. Even if the user gives up on the form, we want to retain everything they have entered.
We plan to attach an onblur event to each field and use jQuery/AJAX to post each piece of data back to the application server immediately. That part is pretty straightforward. The question we have is when and how to best save this application-level information to the database. Again, our priority is data retention as opposed to performance but we also want to do this as efficiently as possible.
Options as I see it are:
Have the web service immediately post each piece of data to the database server.
Store the information in a custom class on the application server, then periodically call an update method to post new data to the database.
Store the information in view or session state, then run a routine to post this information to the database server.
Something else that we haven't thought of.
Option 1 seems the most obviously failsafe, but also the most resource intensive. Option 2 seems the most elegant, but can we be absolutely certain that the custom class instance can't be destroyed without first running its update method?
Thanks for your help!
IMHO, I'd really cut the form up into sections (if possible). Since this is ASP.NET, if you are using Web Forms then look into using wizards (cut the form up into logical steps).
You can do the same without the Form Wizard, but still cut the process up into logical steps client-side. You can probably do this in pure JavaScript, but it would likely be easier with a framework (jQuery, Knockout, etc.); the concept remains the same: cut the form-entry process into sections (aka "steps"), e.g. using display toggles, a div for each step, etc.
"Retain everything even if abandoned later": this assumes the steps are "hierarchical", with the most critical inputs at the beginning. That makes the steps approach even more important: each step is a logical group of inputs you really want, so you can save that step's data to the DB in whatever fashion you deem appropriate (e.g. Ajax, or an ASP.NET post/postback).
Hth...
I would package everything up as XML or a DataSet (.GetXml()) and pass the XML to a stored procedure....
How to pass XML from C# to a stored procedure in SQL Server 2008?
And maybe put the call on a background thread.
http://code.msdn.microsoft.com/CSASPNETBackgroundWorker-dda8d7b6
The XML will be faster than sending the values row by row (RBAR).
You can save just the XML, or shred it into relational table(s).
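A minimal sketch of that idea, assuming a stored procedure dbo.SaveFormXml with a single xml parameter (the procedure name, parameter name, and surrounding variables are hypothetical):

using System.Data;
using System.Data.SqlClient;

string xml = dataSet.GetXml(); // package the captured fields as XML

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.SaveFormXml", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@FormData", SqlDbType.Xml).Value = xml;
    conn.Open();
    cmd.ExecuteNonQuery(); // the proc can store the XML as-is or shred it
}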

Is there a way, in either SQL or Web Developer, to allow deleting from a GridView all but the last row?

I have a web portal set up where customers can add company contacts, etc. to a SQL database through GridViews and DetailsViews I created with Microsoft Web Developer. I need it set up so that, although customers with rights can edit and delete contact records, they cannot delete the last one: there must always be one left in the system. I know this sounds like a strange request, but it is a needed one. Is this something that can be done through VB, SQL, or through one of Web Developer's objects such as the DetailsView?
Thank you!
Not a strange request, as I understand the need. The solution we employed was that the actual default Admin account did not show up on the delete list, as it was "system generated". That may not be a retooling you can do.
What you can't do is use the automagic drag-and-drop crap and get this type of functionality. You will have to either pare down the request to remove the last requested item, write a trigger on the database, or write custom code that will not delete the last item.
As an example of paring down: with DataSets, since they are disconnected, you can remove the last row prior to sending your delete request. With LINQ and EF, you have similar options for shaping the returned data.
Then, if you want to explicitly control your code (good), you can roll through the items and ensure the last one does not get deleted.
If you want to make this as "safe" as possible, you might consider a trigger on the table that checks whether the delete would remove the last record for a particular organization and aborts if so (see the sketch below). Then the user might say "delete all" on the web, but the database protects them.
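A minimal sketch of such a trigger, assuming a dbo.Contacts table keyed per company (all names are hypothetical):

CREATE TRIGGER trg_Contacts_KeepLast
ON dbo.Contacts
FOR DELETE
AS
BEGIN
    -- Abort if any company would be left with no contacts at all.
    IF EXISTS (SELECT 1
               FROM deleted d
               WHERE NOT EXISTS (SELECT 1
                                 FROM dbo.Contacts c
                                 WHERE c.CompanyId = d.CompanyId))
    BEGIN
        RAISERROR('Cannot delete the last contact for a company.', 16, 1);
        ROLLBACK TRANSACTION;
    END;
END;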

Solution for previewing user changes and allowing rollback/commit over a period of time

I have asked a few questions today as I try to think through to the solution of a problem.
We have a complex data structure where all of the various entities are tightly interconnected, with almost all entities heavily reliant/dependent upon entities of other types.
The project is a website (MVC3, .NET 4), and all of the logic is implemented using LINQ-to-SQL (2008) in the business layer.
What we need to do is have a user "lock" the system while they make their changes (there are other reasons for this which I won't go into here that are not database related). While this user is making their changes we want to be able to show them the original state of entities which they are updating, as well as a "preview" of the changes they have made. When finished, they need to be able to rollback/commit.
We have considered these options:
Holding open a transaction for the length of time a user takes to make multiple changes stinks, so that's out.
Holding a copy of all the data in memory (or cached to disk) is an option but there is heck of a lot of it, so seems unreasonable.
Maintaining a set of secondary tables, or attempting to use session state to store changes, but this is complex and difficult to maintain.
Using two databases, flipping between them by connection string, and using T-SQL to manage replication, putting them back in sync after commit/rollback. I.e. switching on/off, forcing snapshot, reversing direction etc.
We're a bit stumped for a solution that is relatively easy to maintain. Any suggestions?
Our solution to a similar problem is to use a locking table that holds locks per entity type in our system. When the client application wants to edit an entity, we do a "GetWithLock" which gets the client the most up-to-date version of the entity's data as well as obtaining a lock (a GUID that is stored in the lock table along with the entity type and the entity ID). This prevents other users from editing the same entity. When you commit your changes with an update, you release the lock by deleting the lock record from the lock table. Since stored procedures are the API we use for interacting with the database, this allows a very straightforward way to lock/unlock access to specific entities.
On the client side, we implement IEditableObject on the UI model classes. Our model classes hold a reference to the instance of the service entity that was retrieved on the service call. This allows the UI to do a Begin/End/Cancel Edit and do the commit or rollback as necessary. By holding the instance of the original service entity, we are able to see the original and current data, which would allow the user to get that "preview" you're looking for.
While our solution does not implement LINQ, I don't believe there's anything unique in our approach that would prevent you from using LINQ as well.
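A minimal sketch of the "GetWithLock" idea (the table, procedure, and column names here are illustrative, not our actual API):

CREATE PROCEDURE dbo.GetEmployeeWithLock
    @EntityId int,
    @LockId uniqueidentifier OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    -- Fail if someone else already holds a lock on this entity.
    IF EXISTS (SELECT 1 FROM dbo.EntityLock WITH (UPDLOCK, HOLDLOCK)
               WHERE EntityType = 'Employee' AND EntityId = @EntityId)
    BEGIN
        ROLLBACK TRANSACTION;
        RAISERROR('Entity is locked by another user.', 16, 1);
        RETURN;
    END;

    -- Record the lock and hand back the GUID that releases it later.
    SET @LockId = NEWID();
    INSERT INTO dbo.EntityLock (LockId, EntityType, EntityId, LockedAt)
    VALUES (@LockId, 'Employee', @EntityId, GETUTCDATE());

    -- Return the most up-to-date version of the entity along with the lock.
    SELECT * FROM dbo.Employee WHERE EmployeeId = @EntityId;

    COMMIT TRANSACTION;
END;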
HTH
Consider this:
Long transactions make a system less scalable. If you issue an UPDATE command, the update locks last until commit/rollback, preventing other transactions from proceeding.
Secondary tables/databases can be modified by concurrent transactions, so you cannot rely on the data in them. The only way around that is to lock them => see no. 1.
Serializable transactions in some database engines use versions of the data in your tables, so after the first command is executed the transaction sees exactly the data that was available at execution time. This might help you show the changes made by the user, but you have no guarantee of being able to save them back into storage.
DataSets contain old/new versions of data. But that is unfortunately outside your chosen technology.
Use a set of secondary tables.
The problem is that your connection should see two versions of data while the other connections should see only one (or two, one of them being their own).
While it is possible theoretically and is implemented in Oracle using flashbacks, SQL Server does not support it natively, since it has no means to query previous versions of the records.
You can issue a query like this in Oracle:
SELECT *
FROM mytable
AS OF TIMESTAMP TO_TIMESTAMP('2010-01-17')
but not in SQL Server.
This means that you need to implement this functionality yourself (placing the new versions of rows into your own tables).
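A minimal sketch of rolling your own row versions (all names are hypothetical):

CREATE TABLE dbo.EmployeeVersion (
    VersionId int IDENTITY PRIMARY KEY,
    EmployeeId int NOT NULL,
    Name nvarchar(100) NOT NULL,
    ValidFrom datetime NOT NULL,
    ValidTo datetime NULL -- NULL = current version
);

-- Reading the data "as of" a point in time then becomes:
SELECT *
FROM dbo.EmployeeVersion
WHERE EmployeeId = @EmployeeId
  AND ValidFrom <= @AsOf
  AND (ValidTo IS NULL OR ValidTo > @AsOf);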
Sounds like an ugly problem, and raises a whole lot of questions you won't be able to go into on SO. I got the following idea while reading your problem, and while it "smells" as bad as the others you list, it may help you work up an eventual solution.
First, have some kind of locking system, as described by #user580122, to flag/record the fact that one of these transactions is going on. (Be sure to include some kind of periodic automated check, to test for lost or abandoned transactions!)
Next, for every change you make to the database, log it somehow, either in the application or in a dedicated table somewhere. The idea is, given a copy of the database at state X, you could re-run the steps submitted by the user at any time.
Next up is figuring out how to use database snapshots. Read up on these in BOL; the general idea is you create a point-in-time snapshot of the database, do whatever you want with it, and eventually throw it away. (Only available in SQL 2005 and up, Enterprise edition only.)
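For reference, a minimal sketch of working with a database snapshot (the database, logical file name, and path are hypothetical):

-- Create a point-in-time snapshot of the HR database.
-- (NAME must match the logical file name of the source database.)
CREATE DATABASE HR_Snapshot
ON (NAME = HR_Data, FILENAME = 'C:\Data\HR_Snapshot.ss')
AS SNAPSHOT OF HR;

-- Query the snapshot for the "original" state of the data.
SELECT * FROM HR_Snapshot.dbo.Employee;

-- Revert the source database to the snapshot if the work is abandoned...
RESTORE DATABASE HR FROM DATABASE_SNAPSHOT = 'HR_Snapshot';

-- ...or simply discard the snapshot once the changes are kept.
DROP DATABASE HR_Snapshot;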
So:
A user comes along and initiates one of these meta-transactions.
A flag is marked in the database showing what is going on. A new transaction cannot be started if one is already in process. (Again, check for lost transactions now and then!)
Every change made to the database is tracked and recorded in such a fashion that it could be repeated.
If the user decides to cancel the transaction, you just drop the snapshot, and nothing is changed.
If the user decides to keep the transaction, you drop the snapshot, and then immediately re-apply the logged changes to the "real" database. This should work, since your requirements imply that, while someone is working on one of these, no one else can touch the related parts of the database.
Yep, this sure smells, and it may not apply too well to your problem. Hopefully the ideas here help you work something out.

Use ASP.NET Profile or not?

I need to store a few attributes of an authenticated user (I am using Membership API) and I need to make a choice between using Profiles or adding a new table with UserId as the PK. It appears that using Profiles is quick and needs less work upfront. However, I see the following downsides:
The profile values are squished into a single ntext column. At some point in the future, I will have SQL scripts that may update a user's attributes. Querying an ntext column and trying to update a value sounds a little buggy to me.
If I choose to add a new user specific property and would like to assign a default for all the existing users, would it be possible?
My first impression has been that using Profiles may cause maintenance headaches in the long run. Thoughts?
There was an article on MSDN (now on ASP.NET: http://www.asp.net/downloads/sandbox/table-profile-provider-samples) that discusses how to make a Profile table provider. The idea is to store the Profile data in a proper table, one column per property, rather than squished into a single column, making it easier to query with plain SQL.
Further to that point, SQL Server 2005/2008 provides support for getting at data via services and CLR code, so you could conceivably access the Profile data via the API instead of hitting the underlying tables directly.
As to point #2, you can set defaults on properties (see the snippet below); while this will not update existing profiles immediately, each profile picks up the default the next time it is accessed.
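For example, a default can be declared in web.config, inside system.web (the property name here is hypothetical):

<profile>
  <properties>
    <add name="Department" defaultValue="Unassigned" />
  </properties>
</profile>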
Seems to me you have answered your own question. If your point 1 is likely to happen, then a SQL table is the only sensible option.
Check out this question...
ASP.NET built in user profile vs. old stile user class/tables
The first hint that the built-in profiles are badly designed is their use of delimited data in a relational database. There are a few cases where delimited data in an RDBMS makes sense, but this is definitely not one of them.
Unless you have a specific reason to use ASP.Net Profiles, I'd suggest you go with the separate tables instead.

How do I implement "pessimistic locking" in an asp.net application?

I would like some advice from anyone experienced with implementing something like "pessimistic locking" in an asp.net application. This is the behavior I'm looking for:
User A opens order #313
User B attempts to open order #313 but is told that User A has had the order opened exclusively for X minutes.
Since I haven't implemented this functionality before, I have a few design questions:
What data should I attach to the order record? I'm considering:
LockOwnedBy
LockAcquiredTime
LockRefreshedTime
I would consider a record unlocked if the LockRefreshedTime < (Now - 10 min).
How do I guarantee that locks aren't held for longer than necessary but don't expire unexpectedly either?
I'm pretty comfortable with jQuery so approaches which make use of client script are welcome. This would be an internal web application so I can be rather liberal with my use of bandwidth/cycles. I'm also wondering if "pessimistic locking" is an appropriate term for this concept.
It sounds like you are most of the way there. I don't think you really need LockRefreshedTime though, it doesn't really add anything. You may just as well use the LockAcquiredTime to decide when a lock has become stale.
The other thing you will want to do is make sure you make use of transactions. You need to wrap the checking and setting of the lock within a database transaction, so that you don't end up with two users who think they have a valid lock.
If you have tasks that require gaining locks on more than one resource (i.e. more than one record of a given type or more than one type of record) then you need to apply the locks in the same order wherever you do the locking. Otherwise you can have a dead lock, where one bit of code has record A locked and is wanting to lock record B and another bit of code has B locked and is waiting for record A.
As to how you ensure locks aren't released unexpectedly. Make sure that if you have any long running process that could run longer than your lock timeout, that it refreshes its lock during its run.
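A minimal sketch of acquiring the lock, using the columns proposed in the question minus LockRefreshedTime (all names are hypothetical). A single conditional UPDATE does the check and the set in one atomic statement; a long-running holder refreshes its lock by re-stamping LockAcquiredTime:

BEGIN TRANSACTION;

-- Take the lock only if it is free or stale (older than 10 minutes).
UPDATE dbo.Orders
SET LockOwnedBy = @UserName,
    LockAcquiredTime = GETUTCDATE()
WHERE OrderId = @OrderId
  AND (LockOwnedBy IS NULL
       OR LockAcquiredTime < DATEADD(minute, -10, GETUTCDATE()));

-- @@ROWCOUNT = 1 means we got the lock; 0 means someone else holds it.
SELECT @@ROWCOUNT AS GotLock;

COMMIT TRANSACTION;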
The term "explicit locking" is also used to describe this time of locking.
I have done this manually:
Store the primary key of the record in a lock table, and mark the record's mode attribute as "edit".
When another user tries to select this record, indicate that it is a read-only record for them.
Have a set maximum time for which records can stay locked.
Refresh page data for locked records. While one user is allowed to make changes, all other users are only allowed to view.
The lock table should have a design similar to this:
User_ID, // who locked the record
Lock_Start_Time,
Locked_Row_ID (Entity_ID), // primary key of the locked row
Table_Name (Entity_Name) // table name of the locked row
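In T-SQL that design might look like this (a sketch only; adjust names and types to your schema):

CREATE TABLE dbo.RecordLock (
    User_ID nvarchar(50) NOT NULL, -- who locked the record
    Lock_Start_Time datetime NOT NULL,
    Locked_Row_ID int NOT NULL, -- primary key of the locked row
    Table_Name sysname NOT NULL, -- table name of the locked row
    CONSTRAINT PK_RecordLock PRIMARY KEY (Table_Name, Locked_Row_ID)
);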
The remaining logic is something you have to figure out. This is just an idea I implemented four years ago at the special request of a client. No client has asked for anything similar since, so I haven't taken the method any further.
