NHibernate deadlocks - ASP.NET

I'm using the following code in an ASP.NET page to create a record, then count the records to make sure I haven't exceeded a set limit, rolling back the transaction if I have.
using (var session = NhibernateHelper.OpenSession())
using (var transaction = session.BeginTransaction())
{
    session.Lock(mall, LockMode.None);

    var voucher = new Voucher();
    voucher.FirstName = firstName ?? string.Empty;
    voucher.LastName = lastName ?? string.Empty;
    voucher.Address = address ?? string.Empty;
    voucher.Address2 = address2 ?? string.Empty;
    voucher.City = city ?? string.Empty;
    voucher.State = state ?? string.Empty;
    voucher.Zip = zip ?? string.Empty;
    voucher.Email = email ?? string.Empty;
    voucher.Mall = mall;
    session.Save(voucher);

    var issued = session.CreateCriteria<Voucher>()
        .Add(Restrictions.Eq("Mall", mall))
        .SetProjection(Projections.Count("ID"))
        .UniqueResult<int>();

    if (issued >= mall.TotalVouchers)
    {
        transaction.Rollback();
        throw new VoucherLimitException();
    }

    transaction.Commit();
    return voucher;
}
However, I'm getting a ton of deadlocks. I presume this happens because I'm trying to count the records in a table I just performed an insert on and a lock is still held on the inserted row, causing the deadlock.
Can anyone confirm this?
Can anyone suggest a fix?
I've tried calling SetLockMode(LockMode.None) on the final query, but that results in a NullReferenceException that I cannot figure out.
Edit: If I run the query before I save the object, it works, but then I'm not accomplishing the goal of verifying that my insert didn't somehow go over the limit (in the case of concurrent inserts).
Edit: I found that using IsolationLevel.ReadUncommitted in the session.BeginTransaction call solves the problem, but I'm no database expert. Is this the appropriate solution to the problem or should I adjust my logic somehow?
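For reference, the workaround from the edit above looks like this (it needs a using System.Data; directive for IsolationLevel; the answer below explains why this isolation level is risky here):

using (var session = NhibernateHelper.OpenSession())
using (var transaction = session.BeginTransaction(IsolationLevel.ReadUncommitted))
{
    // ... same insert-and-count logic as above ...
}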

That design would be deadlock prone - typically (not always) one connection is unlikely to deadlock itself, but multiple connections that do inserts and aggregates against the same table are very likely to deadlock. That's because while all activity in one transaction looks complete from the point of view of the connection doing the work -- the db won't lock a transaction out of "its own" records -- the aggregate queries from OTHER transactions would attempt to lock the whole table or large portions of it at the same time, and those would deadlock.
Read Uncommitted is not your friend in this case, because it basically says "ignore locks," which at some point will mean violating the rules you've set up around the data. That is, the count of records in the table will be inaccurate, and you'll act on that inaccurate count: your count will return 10 or 13 when the real answer is 11.
The best advice I have is to rearrange your insert logic such that you capture the idea of the count, without literally counting the rows. You could go a couple of directions. One idea I have is this: literally number the inserted vouchers with a sequence and enforce a limit on the sequence itself.
Make a sequence table with columns (I am guessing) MallID, nextVoucher, maxVouchers
Seed that table with the mallids, 1, and whatever the limit is for each mall
Change the insert logic to this pseudo code:
Begin Transaction
    Sanity check the nextVoucher for the Mall in the sequence table; if too many exist, abort
    If less than MaxVouchers for the Mall then {
        check, fetch, lock and increment nextVoucher
        if the increment was successful, use the value of nextVoucher to perform your insert
        and include it in the target table.
    }
Error? Rollback
No Error? Commit
A sequence table like this hurts concurrency some, but I think not as much as counting the rows in the table constantly. Be sure to perf test.
Also, the [check, fetch, lock and increment] is important - you have to exclusively lock the row in the sequence table to prevent some other connection from using the same value in the split second before you increment it. I know the SQL syntax for this, but I'm afraid I am no NHibernate expert.
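For what it's worth, here is a rough sketch of that [check, fetch, lock and increment] step on SQL Server, issued as a native SQL query through NHibernate. The VoucherSequence table name, its columns, and the mall.ID property follow the pseudo code above and are otherwise assumptions; the single UPDATE takes an exclusive lock on the row, enforces the limit, and returns the claimed value via OUTPUT in one atomic statement:

var claimed = session.CreateSQLQuery(
        @"UPDATE VoucherSequence
          SET NextVoucher = NextVoucher + 1
          OUTPUT deleted.NextVoucher
          WHERE MallID = :mallId AND NextVoucher <= MaxVouchers")
    .SetParameter("mallId", mall.ID)
    .AddScalar("NextVoucher", NHibernateUtil.Int32)
    .UniqueResult();

if (claimed == null)
{
    // no row qualified: the voucher limit for this mall has already been reached
    transaction.Rollback();
    throw new VoucherLimitException();
}
// otherwise (int)claimed is the voucher number just reserved; save the voucher and commit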
For read uncommitted data errors, check this out: http://sqlblog.com/blogs/merrill_aldrich/archive/2009/07/29/transaction-isolation-dirty-reads-deadlocks-demo.aspx (disclaimer: Merrill Aldrich is me :-)

Two questions:
1. How frequently are vouchers deleted?
2. Any objections (beyond purity) to a db-level trigger?

Related

SQL Server Transaction rollback or commit using Petapoco?

I have a list of rows to insert into the database within a transaction, using ASP.NET, but it does not commit fully; only the last few rows are inserted.
My code is:
using (PetaPoco.Database db = new Database("Mydb"))
{
    using (var trn = db.GetTransaction())
    {
        foreach (var r in rowlist)
        {
            db.Save(r);
        }
        trn.Complete();
    }
}
For example, rowlist has 20 elements, but the first few elements are not inserted by PetaPoco. This happens very rarely, and only when the network connection is very slow.
I don't think the problem is in the transaction or PetaPoco.
Two guesses:
1. Maybe the list is not posted in full due to the slow connection?
2. Are you aware that db.Save updates or inserts depending on the object configuration and the ID value? Maybe the last records are updated over the first ones inserted.
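If the second guess is what's happening, a quick thing to try (just a sketch, and it assumes every row in rowlist is genuinely new) is to call Insert explicitly so PetaPoco never turns a row into an UPDATE:

using (var db = new PetaPoco.Database("Mydb"))
{
    using (var trn = db.GetTransaction())
    {
        foreach (var r in rowlist)
        {
            db.Insert(r);   // always INSERT; Save issues an UPDATE when the primary key already has a value
        }
        trn.Complete();
    }
}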

Avoiding inserting duplicate data into table in MS Dynamics AX

I have a custom table that I'm inserting data into. I do not want duplicate data to end up there, so I created a unique index consisting of 20ish fields that I wish to be unique. As expected, when I run my job to insert data it of course fails and tells me it was trying to insert a duplicate record and stops the job there. If I wrap a tts around it the whole thing fails.
My question is, how can I make it so that the jobs still continues and only just stops the duplicates from inserting? Note, like I mentioned above, I have 20ish fields that make up the key, it'd be cumbersome to write up something that checks for existing records with data matching all 20 fields.
I found it. Keeping the unique index on the table, I wrapped the insert() in a try/catch, which apparently has its own Exception type for this, in place of just calling insert():
try
{
    customTable.insert();
}
catch (Exception::DuplicateKeyException)
{
    // clears the last infolog message, which is created by trying to insert a duplicate
    infolog.clear(Global::infologLine() - 1);
}
Man, I wouldn't delegate the management of this to exception handling. If it's only in a job, it's OK, but if you plan to manage records in other places, be warned that if you use nested try/catch blocks, control will go to the outermost try/catch block, skipping the internal ones. Well, there are two or three exception types that don't behave this way (check the programming manual, I don't remember them now; they were related to database record blocking and so on).
I would create a static exists method on the table, and be careful to select only RecId for performance purposes. Yes, writing 20 fields in a select is a pain, but you will do that ONCE, and in the long term it's the best and most maintainable approach.
public static boolean exists(Type1 _field1, Type2 _field2 ...)
{
    MyTable myTable;
    boolean ret = false;

    if (_field1 && _field2 && ...) // mandatory fields
    {
        ret = (select firstonly RecId from myTable
                   where myTable.Field1 == _field1
                      && myTable.Field2 == _field2 ...).RecId != 0;
    }

    return ret;
}
In general I wouldn't use this method in insert() or update() unless there's a good reason for it (in that case, it can be interesting to set AllowDuplicates == Yes if performance is critical, because you're managing duplicates manually - be careful with doUpdate/doInsert or external inserts/updates). I would use this method in your job or other places to check for duplicates before inserting/updating.
Why don't you implement a validateWrite method and avoid inserting the duplicates?
if (table.validateWrite())
{
    table.insert();
}
else
{
    // log the validation failure, e.g. via the infolog
}

SQLite - Get a specific row index for a Sorted/Filtered Query

I'm creating a caching system to take data from an SQLite database table using a sorted/filtered query and display it. The tables I'm pulling from can be potentially very large and, of course, I need to minimize impact on memory by only retaining a maximum number of rows in memory at any given time. This is easily done by using LIMIT and OFFSET to load only the records I need and update the cache as needed. Implementing this is trivial. The problem I'm having is determining where the insertion index is for a new record inserted into a particular query so I can update my UI appropriately. Is there an easy way to do this? So far the ideas I've had are:
Dump the entire cache, re-count the Query results (there's no guarantee the new row will be included), refresh the cache and refresh the entire UI. I hope it's obvious why that's not really desirable.
Use my own algorithm to determine whether the new row is included in the current query, whether it is included in the current cached results, and at what index it should be inserted if it's within the current cached scope. The biggest downfall of this approach is its complexity and the risk that my own sorting/filtering algorithm won't match SQLite's.
Of course, what I want is to be able to ask SQLite: Given 'Query A' what is the index of 'Row B', without loading the entire query results. However, so far I haven't been able to find a way to do this.
I don't think it matters but this is all occurring on an iOS device, using the objective-c programming language.
More Info
The Query and subsequent cache is based off of user input. Essentially the user can re-sort and filter (or search) to alter the results they're seeing. My reticence in simply recreating the cache on insertions (and edits, actually) is to provide a 'smoother' UI experience.
I should point out that I'm leaning toward option "2" at the moment. I played around with creating my own caching/indexing system by loading all the records in a table and performing the sort/filter in memory using my own algorithms. So much of the code needed to determine whether and/or where a particular record is in the cache is already there, so I'm slightly predisposed to use it. The danger lies in having a cache that doesn't match the underlying query. If I include a record in the cache that the query wouldn't return, I'll be in trouble and probably crash.
You don't need record numbers.
Save the values of the ordered field in the first and last records of the LIMITed query result.
Then you can use these to check whether the new record falls into this range.
In other words, assuming that you order by the Name field, and that the original query was this:
SELECT Name, ...
FROM mytab
WHERE some_conditions
ORDER BY Name
LIMIT x OFFSET y
then try to get at the new record with a similar query:
SELECT 1
FROM mytab
WHERE some_conditions
AND PrimaryKey = LastInsertedValue
AND Name BETWEEN CachedMin AND CachedMax
Similarly, to find out before (or after) which record the new record was inserted, start directly after the inserted record and use a limit of one, like this:
SELECT Name
FROM mytab
WHERE some_conditions
AND Name > MyInsertedName
AND Name BETWEEN CachedMin AND CachedMax
ORDER BY Name
LIMIT 1
This doesn't give you a number; you still have to check where the returned Name is in your cache.
Typically you'd expect a cache to be invalidated if there were underlying data changes. I think dropping it and starting over will be your simplest, maintainable solution. I would recommend it unless you have a very good reason.
You could write another query that just returned the row count (example below) to see if your cache should be invalidated. That would save recreating the cache when it did not change.
SELECT name,address FROM people WHERE area_code=970;
SELECT COUNT(rowid) FROM people WHERE area_code=970;
The information you'd need from sqlite to know when your cache was invalidated would require some rather intimate knowledge of how the query and/or index was working. I would say that is fairly high coupling.
Otherwise, you'd want to know where it was inserted with regards to the sorting. You would probably key each page on the sorted field. Delete anything greater than the insert/delete field. Any time you change the sorting you'd drop everything.
Something like the below would be a start if you were using C++. I realize you aren't doing C++, but hopefully it is evident as to what I'm trying to do.
#include <set>
#include <string>
#include <vector>

struct Person {
    std::string name;
    std::string addr;
};

struct Page {
    std::string key;                // value of the sorted field at the start of this page
    std::vector<Person> persons;

    struct Less {
        bool operator()(const Page &lhs, const Page &rhs) const {
            return lhs.key.compare(rhs.key) < 0;
        }
    };
};

typedef std::set<Page, Page::Less> pages_t;
pages_t pages;

bool sql_insert(const Person &person);  // performs the actual SQLite insert, defined elsewhere

void insert(const Person &person) {
    if (sql_insert(person)) {
        // build a probe page keyed on the inserted row's sorted field
        Page probe;
        probe.key = person.name;
        pages_t::iterator drop_cache_start = pages.lower_bound(probe);
        //... drop this page and everything after it
    }
}
You'd have to do some wrangling to get different datatypes of key to work nicely, but it's possible.
Theoretically you could just leave the pages out of it and only use the objects themselves. The database would no longer "own" the data though. If you only fill pages from the database, then you'll have less data consistency worries.
This may be a bit off topic, you aren't re-implementing views are you? It doesn't cache per se, but it isn't clear if that is a requirement of your project.
The solution I came up with is not exactly simple, but it's currently working well. I realized that the index of a record in a Query Statement is also the Count of all its previous records. What I needed to do was 'convert' all the ORDER statements in the query to a series of WHERE statements that would return only the preceding records and take a count of those records. It's trickier than it sounds (or maybe not...it sounds tricky). The biggest issue I had was making sure the query was, in fact, sorted in a way I could predict. This meant I needed to have an order column in the Order Parameters that was based on a column with unique values. So, whenever a user sorts on a column, I append to the statement another order parameter on a unique column (I used a "Modified Date Stamp") to break ties.
Creating the WHERE portion of the statement requires more than just tacking on a bunch of ANDs. It's easier to demonstrate. Say you have 3 Order columns: "LastName" ASC, "FirstName" DESC, and "Modified Stamp" ASC (the tie breaker). The WHERE statement would have to look something like this ('?' = record value):
WHERE
"LastName" < ? OR
("LastName" = ? AND "FirstName" > ?) OR
("LastName" = ? AND "FirstName" = ? AND "Modified Stamp" < ?)
Each set of WHERE parameters grouped together by parentheses is a tie breaker. If, in fact, the record values of "LastName" are equal, we must then look at "FirstName", and finally "Modified Stamp". Obviously, this statement can get really long if you're sorting by a bunch of order parameters.
There's still one problem with the above solution. Comparisons against NULL values never evaluate to true, and yet when sorting, SQLite places NULL values first. Therefore, in order to deal with NULL values appropriately you've got to add another layer of complication. First, all equality operations, =, must be replaced by IS. Second, all < operations must be wrapped with an OR IS NULL to include NULL values appropriately on the < operator. This turns the above operation into:
WHERE
("LastName" < ? OR "LastName" IS NULL) OR
("LastName" IS ? AND "FirstName" > ?) OR
("LastName" IS ? AND "FirstName" IS ? AND ("Modified Stamp" < ? OR "Modified Stamp" IS NULL))
I then take a count of the RowID using the above WHERE parameter.
It turned out easy enough for me to do mostly because I had already constructed a set of objects to represent various aspects of my SQL Statement which could be assembled to generate the statement. I can't even imagine trying to manipulate a SQL statement like this any other way.
So far, I've tested using this on several iOS devices with up to 10,000 records in a table and I've had no noticeable performance issues. Of course, it's designed for single record edits/insertions so I don't really need it to be super fast/efficient.

More efficient SQL for retrieving thousands of records on a view

I am using Linq to Sql as my ORM and I have a list of Ids (up to a few thousand) passed into my retriever method, and with that list I want to grab all User records that correspond to those unique Ids. To clarify, imagine I have something like this:
List<IUser> GetUsersForListOfIds(List<int> ids)
{
    using (var db = new UserDataContext(_connectionString))
    {
        var results = (from user in db.UserDtos
                       where ids.Contains(user.Id)
                       select user);
        return results.Cast<IUser>().ToList();
    }
}
Essentially that gets translated into SQL as
select * from dbo.Users where userId in ([comma delimited list of Ids])
I'm looking for a more efficient way of doing this. The problem is that the IN clause in SQL seems to take too long (over 30 seconds).
We'll need more information on your database setup, like indexes and the type of server (Mitch Wheat's post). The type of database would help as well; some databases handle IN clauses poorly.
From a troubleshooting standpoint...have you isolated the time delay to the SQL server? Can you run the query directly on your server and confirm it's the query taking the extra time?
SELECT * can also have a bit of a performance impact...could you narrow down the result set that's being returned to just the columns you require?
edit: just saw the 'view comment' that you added...I've had problems with view performance in the past. Is it a materialized view...or could you make it into one? Recreating the view logic as a stored procedure may also help.
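To illustrate the point above about returning only the columns you require, here is a projection sketch (UserSummary and the column names are hypothetical; substitute whatever the view actually exposes):

var results = (from user in db.UserDtos
               where ids.Contains(user.Id)
               select new UserSummary { Id = user.Id, Name = user.Name })
              .ToList();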
Have you tried converting this to a list, so the application is doing this in-memory? i.e.:
List<IUser> GetUsersForListOfIds(List<int> ids)
{
    using (var db = new UserDataContext(_connectionString))
    {
        var results = (from user in db.UserDtos.ToList()
                       where ids.Contains(user.Id)
                       select user);
        return results.Cast<IUser>().ToList();
    }
}
This will obviously be memory-intensive if this is being run on a public-facing page on a hard-hit site. If this still takes 30+ seconds though in staging/development, then my guess is that the View itself takes that long to process -OR- you're transferring 10's of MB of data each time you retrieve the view. Either way, my only suggestions are to access the table directly and only retrieve the data you need, rewrite the view, or create a new view for this particular scenario.

LINQ: Cannot insert duplicate key row in object 'dbo.tblOutstandingCompletions' with unique index

I have an application (ASP.NET 3.5) that allows users to rerun a particular process if required. The process inserts records into an MS SQL table. I have the insert in a Try / Catch and ignore the catch if a record already exists (the error in the title would be valid). This worked perfectly using ADO, but after I converted to LINQ I noticed an interesting thing. If on a re-run of the process there were already records in the table, any new records would be rejected with the same error even though there was no existing record.
The code is as follows:
Dim ins = New tblOutstandingCompletion
With ins
    .ControlID = rec.ControlID
    .PersonID = rec.peopleID
    .RequiredDate = rec.NextDue
    .RiskNumber = 0
    .recordType = "PC"
    .TreatmentID = 0
End With
Try
    ldb.tblOutstandingCompletions.InsertOnSubmit(ins)
    ldb.SubmitChanges()
Catch ex As Exception
    ' An attempt to load a duplicate record will fail
End Try
The DataContext for the database was set during Page Load.
I resolved the problem by redefining the DataContext before each insert:
ldb = New CaRMSDataContext(sessionHandler.connection.ToString)
Dim ins = New tblOutstandingCompletion
While I have solved the problem I would like to know if anyone can explain it. Without the DataContext redefinition the application works perfectly if there are no duplicate records.
Regards
James
It sounds like the DataContext thinks the record was inserted the first time, so if you don't redefine the context, it rejects the second insert because it "knows" the record is already there. Redefining the context forces it to actually check the database to see if it's there, which it isn't. That's LINQ trying to save a round trip to the database. Creating a new context as you've done forces it to reset what it "knows" about the database.
I had seen a very similar issue in my code where the identity column wasn't an autoincrementing int column, but a GUID with a default value of newguid() - basically LINQ wasn't allowing the database to create the GUID, but was inserting Guid.Empty instead, and the second (or later) attempts would (correctly) throw this error.
I ended up ensuring that I generated a new GUID myself during the insert. More details can be seen here: http://www.doodle.co.uk/Blogs/2007/09/18/playing-with-linq-in-winforms.aspx
This allowed me to insert multiple records with the same DataContext.
Also, have you tried calling InsertOnSubmit multiple times (once for each new record) but only calling SubmitChanges once?
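For what it's worth, that last suggestion would look roughly like this in C# (a sketch; the context and entity names follow the question, and records stands in for whatever collection drives the inserts):

using (var ldb = new CaRMSDataContext(sessionHandler.connection.ToString()))
{
    foreach (var rec in records)
    {
        var ins = new tblOutstandingCompletion
        {
            ControlID = rec.ControlID,
            PersonID = rec.peopleID,
            RequiredDate = rec.NextDue,
            RiskNumber = 0,
            recordType = "PC",
            TreatmentID = 0
        };
        ldb.tblOutstandingCompletions.InsertOnSubmit(ins);
    }
    ldb.SubmitChanges();   // a single submit for the whole batch
}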
gfrizzle seems to be right here...
My code fails with the duplicate key error even though I've just run a stored proc to truncate the table on the database. As far as the data context knows, the previous insertion of a record with the same key is in fact a duplicate key, and an exception is thrown.
The only way that I've found around this is:
db = null;
db = new NNetDataContext();
right after the SubmitChanges() call that executes the previous InsertOnSubmit requests. Seems kind of dumb, but it's the only way that works for me other than redesigning the code.
