Suppose that I have a bank account table with columns customer_id, name, address, and balance. The balance is constantly changing because customers are depositing and withdrawing money. How can I change a customer's address without getting a DBConcurrencyException at the same time?
What is the general approach in this situation? Is it not possible for two users to change the SAME cell in a table row, but possible for them to change different cells in the same row and update the row with new values without a problem?
You should look at http://blog.sqlauthority.com/2008/08/18/sql-server-detailed-explanation-of-transaction-lock-lock-type-avoid-locks/, which has detailed information.
I'm not sure if you are using LINQ, but if you are then you should take a look at how it handles concurrency. LINQ uses optimistic concurrency: if a user attempts to update a record, and the record has been updated in the interim, a concurrency conflict occurs.
However, you can decide which properties in your LINQ table are used for concurrency checking, in which case LINQ won't care if the value in an excluded column was changed.
There are also ways to handle the exception: you could put a try block around the update and write some code to handle different scenarios.
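For illustration, a minimal sketch of both ideas with LINQ to SQL (the entity, table, and column names are hypothetical): the Balance column is excluded from concurrency checking, so a concurrent deposit or withdrawal won't conflict with an address update.

    using System;
    using System.Data.Linq;
    using System.Data.Linq.Mapping;

    [Table(Name = "Accounts")]
    public class Account
    {
        [Column(IsPrimaryKey = true)]
        public int CustomerId { get; set; }

        [Column]
        public string Address { get; set; }

        // UpdateCheck.Never: LINQ ignores this column when detecting conflicts,
        // so balance changes made by other users never block an address update.
        [Column(UpdateCheck = UpdateCheck.Never)]
        public decimal Balance { get; set; }
    }

Handling the conflict when it does occur might then look like:

    try
    {
        db.SubmitChanges(ConflictMode.ContinueOnConflict);
    }
    catch (ChangeConflictException)
    {
        // Keep our changes but merge in values other users wrote in the interim.
        db.ChangeConflicts.ResolveAll(RefreshMode.KeepChanges);
        db.SubmitChanges();
    }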
I am writing a web application that allows the user basic CRUD operations against a database. The tables being updated have fewer than 200 records, and since multiple users may be using this application, there is a need for some sort of locking mechanism to prevent two users from overwriting each other's changes.
I have looked into semaphores, but that seems to only limit the number of users executing the same code. In my data layer I have a class file for each table, so I could certainly employ this in a specific table's class file, but can I somehow limit the locking to the key fields?
Assuming that you are using a proper SQL implementation along with ASP.NET, why don't you use transactions to achieve this? Check it out here.
Additionally, you can also read up on optimistic concurrency to see if that is what you need. Basically, before saving a value, the user checks whether the value in a particular field is the same as it was when he first read it. If the value is the same, it is assumed that no one else has overwritten it, and the new value is saved to the DB; if the values are not the same, a warning message is returned instead.
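A minimal sketch of that check using plain ADO.NET (the table and column names are hypothetical): the UPDATE only succeeds if the field still holds the value the user originally read.

    using System.Data.SqlClient;

    // Returns true if the save succeeded, false if another user changed the row first.
    static bool TryUpdateEmail(string connStr, int id, string newEmail, string originalEmail)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            @"UPDATE Customers SET Email = @newEmail
              WHERE Id = @id AND Email = @originalEmail", conn))
        {
            cmd.Parameters.AddWithValue("@newEmail", newEmail);
            cmd.Parameters.AddWithValue("@id", id);
            cmd.Parameters.AddWithValue("@originalEmail", originalEmail);
            conn.Open();
            // Zero rows affected means the value changed since it was read: warn the user.
            return cmd.ExecuteNonQuery() == 1;
        }
    }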
I am designing a simple messaging service using ASP.NET MVC / Windows Azure Table Storage. I have two kinds of entities, messages and message threads. The relation between them is simple: each thread can have multiple messages, but a message can only be assigned to one thread.
Table storage is not a relational DB, so representing relations is always a bit tricky. I need to decide between 2 approaches:
1. Having one big table for threads and one for messages, with threadId as the partition key of the message entity so that messages are partitioned by thread.
2. Dynamically creating a special table for each message thread, with threadId as the name of the table.
I tend to prefer the second because it fits better into the architecture of the rest of the service. But there will obviously be a large number of tables created in a storage account.
Do you think this may be a problem?
You could also consider having just one table, that stores both Thread and Message entities. This would give you transaction support, and you could use Lucifure's hybrid approach on this table.
Creating a large number of tables may be an issue, depending on how you want to manage them. The underlying REST API for listing tables works like a query for table entities. It only returns the first 1000 tables; after that you have to use a continuation token. All of the storage explorers I've seen don't allow you to query tables based on name; they simply list the first 1000 tables. If you end up with 20000 threads, it could take you a while to get to the table you want.
One way you could mitigate this is to put your message table in its own storage account. This way your storage account with all of your other tables won't get crowded out by all of these dynamic tables that you will be creating and possibly deleting.
Deleting is actually one of the ways in which using a separate table for each thread would be easier. To delete all of the related messages you simply have to delete one table rather than iterating over each message and deleting it.
Everything else however will be more complicated than keeping all of the messages in one table. If this is core functionality to your app and you can dedicate enough time to develop it this way, one table per thread is probably a good idea. Otherwise the easy way to do things is with one big table.
You may consider a hybrid approach to keep the number of tables to a manageable level, depending on your scalability needs.
My experience has been that date-based partitioning at the table level is a very effective approach and can be leveraged across the board.
For example, you could partition tables based on date with a granularity of a day or a month. A table name like “Thread201202” could then be used for all threads started in February 2012.
Your thread id would implicitly include the “201202” and be something like “201202-myid01” although you would not need to explicitly store it in the partition key since it would be implied in the table name.
Aged threads could then be easily disposed of by deleting tables, say, more than a year old.
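A small sketch of that naming scheme (the "Thread" prefix and month granularity are just one possible choice):

    using System;
    using System.Globalization;

    // Month-granularity table name, e.g. "Thread201202" for February 2012.
    static string TableNameFor(DateTime threadStarted)
    {
        return "Thread" + threadStarted.ToString("yyyyMM", CultureInfo.InvariantCulture);
    }

    // The thread id carries the same month prefix, e.g. "201202-myid01",
    // so the table name can be derived from the id alone.
    static string ThreadIdFor(DateTime threadStarted, string id)
    {
        return threadStarted.ToString("yyyyMM", CultureInfo.InvariantCulture) + "-" + id;
    }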
I need to manage the acquisition of many records per hour, about 1,000,000 records. And I need to get, every second, the last inserted value for every primary key. It works quite well with sharding. I was thinking of using a capped collection to get only the last record for every primary key. In order to do this I make two separate inserts. Is there a way, in MongoDB, to make some kind of trigger that propagates an insert on one collection to another collection?
MongoDB does not have any support for triggers or similar behavior.
The only way to do this is to make it happen in your code. So the code that writes the first entry should also write the second.
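For example, with the current MongoDB C# driver the second write simply follows the first (the database and collection names are hypothetical, and the capped collection must be created up front):

    using System;
    using MongoDB.Bson;
    using MongoDB.Driver;

    var client = new MongoClient("mongodb://localhost");
    var db = client.GetDatabase("telemetry");

    var record = new BsonDocument
    {
        { "key", "sensor-42" },
        { "value", 17.3 },
        { "ts", DateTime.UtcNow }
    };

    // First insert: the main (sharded) collection.
    db.GetCollection<BsonDocument>("readings").InsertOne(record);

    // Second insert: the capped collection holding the latest value per key.
    // There is no trigger; the application performs both writes itself.
    db.GetCollection<BsonDocument>("latest_readings").InsertOne(record);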
People have definitely requested triggers. If they are necessary for your solution, please cast a vote on the feature request.
I disagree that triggers are needed. MongoDB was created to be very fast and to provide only basic functionality; that is the power of this solution.
I think the best approach here is to implement the trigger behavior inside your application, as part of the data access layer.
I have a scenario like this:
My environment is .NET 2.0, VS 2008, a web application.
I need to lock a record when two members are trying to access it at the same time.
We can do it in two ways:
On the front end: put the session ID and the record's unique number in a dictionary kept as a static or application variable, and release the lock when the response leaves the page, when the client disconnects, after the post button is clicked, or when the session expires.
On the back end: record locking in the DB itself (still to be studied; a team member is looking into it).
Are there any other ways to do this, and do I need to look at other approaches at each step?
Am I missing any conditions?
You do not lock records for clients, because locking a record for anything more than a few milliseconds is just about the most damaging thing one can do in a database. You should instead use optimistic concurrency: you detect whether the record was changed since the last read and re-attempt the transaction (e.g. you re-display the screen to the user). How that is actually implemented will depend on what DB technology you use (ADO.NET, DataSets, LINQ, EF etc.).
If the business domain requires lock-like behavior, that is always implemented as reservation logic in the database: when a record is displayed, it is 'reserved' so that no other users can attempt to make the same transaction. The reservation completes, times out, or is canceled. But a 'reservation' is never done using locks; it is always an explicit update of state from 'available' to 'reserved', or something similar.
This pattern is also described in P of EAA: Optimistic Offline Lock.
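A sketch of such a reservation update (the schema is hypothetical): the state transition succeeds only if the record is still available, so no long-lived database lock is ever held.

    using System.Data.SqlClient;

    // Tries to reserve a record for a user; returns false if someone else got there first.
    static bool TryReserve(string connStr, int recordId, string userId)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            @"UPDATE Records
              SET State = 'reserved', ReservedBy = @user, ReservedAt = GETUTCDATE()
              WHERE Id = @id AND State = 'available'", conn))
        {
            cmd.Parameters.AddWithValue("@user", userId);
            cmd.Parameters.AddWithValue("@id", recordId);
            conn.Open();
            // One row affected means we won the reservation; zero means it was taken.
            return cmd.ExecuteNonQuery() == 1;
        }
    }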
If you're talking about only reading data from a record in a SQL Server database, you don't need to do anything: SQL Server handles managing concurrent access to records for you. But if you want to manipulate data, you have to use transactions.
I agree with Ramus. But if you still need it, create a bit column named something like IsInUse and set it to true while someone is accessing the record. Since other users will also need the same data at the same time, you need to protect your app from failing, so everywhere the data is retrieved you have to check whether IsInUse is false.
One requirement is that when persisting my C# objects to the database I must decide the database ID (surrogate primary key) in code.
Second requirement is that the database type for the key must be int or char(x)... so no uniqueidentifier or binary(16) or the like.
These are unchangeable requirements.
What would be the best way to go about handling this?
One idea is base64-encoded GUIDs, looking like "XSiZtdXcKU68QWe7N96Dig". These are easily created in code and are, to me, acceptable in URLs if necessary. But will it be too expensive performance-wise (indexing, size) to have all primary and foreign keys be char(22)? Offhand I really like this idea.
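Generating such a key is straightforward; a common sketch (substituting the URL-unsafe characters and trimming the padding):

    using System;

    // Produces a 22-character, URL-friendly key such as "XSiZtdXcKU68QWe7N96Dig".
    static string NewShortGuid()
    {
        return Convert.ToBase64String(Guid.NewGuid().ToByteArray())
            .Replace("/", "_")
            .Replace("+", "-")
            .Substring(0, 22); // a 16-byte GUID encodes to 24 chars ending in "=="
    }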
Another idea would be to create a code version of a database sequence, generating incremented integers for me. But I don't know if this is plausible, and I would need some guidance to ensure its reliability. The sequencer must know how far it has come, and what about threads that I don't control, etc.?
I imagine that no table involved will ever exceed 1,000,000 rows... it will probably be far less.
You could have a table called "sequences". For each table there would be a row with a counter. Then, when you need another number, fetch it from the counter table and increment it. Put it in a transaction and you will have uniqueness.
However this will suffer in terms of performance, of course.
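A sketch of the idea against SQL Server (the table and column names are hypothetical); doing the increment and the read in a single statement keeps it atomic without an explicit transaction:

    using System.Data.SqlClient;

    // Atomically increments and returns the next id for the given table name.
    static int NextId(string connStr, string tableName)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            @"UPDATE Sequences
              SET Counter = Counter + 1
              OUTPUT inserted.Counter
              WHERE TableName = @name", conn))
        {
            cmd.Parameters.AddWithValue("@name", tableName);
            conn.Open();
            // OUTPUT returns the post-increment value, so no separate SELECT is needed.
            return (int)cmd.ExecuteScalar();
        }
    }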
A simple incrementing int would be the easiest way to ensure uniqueness. This is what the database will do if you let it. If you set the table column to auto_increment, the database will do this for you automatically.
There are no security issues with this, but since you will be handling it yourself instead of letting the database engine take care of it, you will need to ensure that you don't generate the same id twice. This should be simple if you are on a single threaded system, but if your program is distributed you will need to put some effort into ensuring the uniqueness.
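Within a single process, making the counter thread-safe is straightforward; a minimal sketch (distributed uniqueness needs more machinery, e.g. per-node id ranges):

    using System.Threading;

    static class IdGenerator
    {
        private static int _counter;

        // Interlocked.Increment is atomic, so two threads can never get the same value.
        public static int Next()
        {
            return Interlocked.Increment(ref _counter);
        }
    }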
Seeing that you have an ASP.NET app, you could do the following (hoping and assuming all users must authenticate themselves before using your app!):
Assign each user a unique "UserID" in your database (can be INT, or CHAR)
Assign each user a "HighestSequentialID" (INT) in your database
When the user logs on, read those values from the database and store them in e.g. a custom principal, or in a cookie, or something else
Whenever the user is about to insert a row, create a segmented ID of the form (UserID).(user's sequential number) and store it as VARCHAR(20); e.g. if your UserID is 15, this user's entries would have unique IDs of "15.00001", "15.00002" and so on (see the sketch after this list)
When the user logs off (or at any other time), update the user's new highest-used sequential ID in the database so that next time around you'll know what they used last
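Composing the segmented ID could look like this (the five-digit width is just an example):

    // Builds IDs like "15.00001" from the user id and the user's next sequential number.
    static string SegmentedId(int userId, int sequential)
    {
        return string.Format("{0}.{1:D5}", userId, sequential);
    }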
Again - you'll have to do a lot more housekeeping work yourself, and it's always prone to a mishap (assigning a duplicate user ID, or misinterpreting the highest sequential number for that user).
I would strongly recommend trying to get these requirements changed - with these in place, all solutions will be sub-optimal at best, while using the database to handle this would be totally painless.
Marc
For a table below 1,000,000 rows, I would not be too terribly concerned about a char(22) primary key. Of course, the ideal solution for a situation like this would be for each object to have something unique about it that you could leverage for the key, even if it is a multi-part key. The next-best solution would be to have the requirements changed :)