SQL Server Transaction rollback or commit using Petapoco? - asp.net

I have a list of rows to insert into the database within a transaction using ASP.NET, but the transaction does not commit fully: only the last few rows end up inserted.
My code is:
using (PetaPoco.Database db = new Database("Mydb"))
{
    using (var trn = db.GetTransaction())
    {
        foreach (var r in rowlist)
        {
            db.Save(r);
        }
        trn.Complete();
    }
}
For example, rowlist has 20 elements, but the first few elements are not inserted by PetaPoco. This happens very rarely, typically when the network connection is very slow.

I don't think the problem is in the transaction or in PetaPoco.
Two guesses:
Maybe the list is not posted in full because of the slow connection?
Are you aware that db.Save updates or inserts depending on the object configuration and the ID value? Maybe the last records are being updated over the first ones inserted.
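If that second guess is the culprit, one way to rule it out is to force inserts explicitly. A minimal sketch, assuming the rows are always brand new (db.Insert is part of the PetaPoco API; rowlist and the "Mydb" connection name come from the question):
using (var db = new PetaPoco.Database("Mydb"))
{
    using (var trn = db.GetTransaction())
    {
        foreach (var r in rowlist)
        {
            // Insert() always issues an INSERT, whereas Save() may issue an
            // UPDATE when it decides the primary key is already populated.
            db.Insert(r);
        }
        // Without Complete(), disposing the transaction rolls everything back.
        trn.Complete();
    }
}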

Related

Avoiding inserting duplicate data into table in MS Dynamics AX

I have a custom table that I'm inserting data into. I do not want duplicate data to end up there, so I created a unique index consisting of the 20-ish fields that I wish to be unique. As expected, when I run my job to insert data it of course fails, tells me it was trying to insert a duplicate record, and stops the job there. If I wrap a ttsBegin/ttsCommit block around it, the whole thing fails.
My question is: how can I make it so that the job still continues and only stops the duplicates from being inserted? Note, as I mentioned above, I have 20-ish fields that make up the key; it'd be cumbersome to write something that checks for existing records matching all 20 fields.
I found a solution: keeping the unique index on the table, I wrapped the insert() in a try/catch, since X++ apparently has its own exception type for this case:
try
{
    customTable.insert();
}
catch (Exception::DuplicateKeyException)
{
    // clears the last infolog message, which is created by trying to insert a duplicate
    infolog.clear(Global::infologLine() - 1);
}
Man, I wouldn't delegate the handling of this to exception control. If it's only in a job, it's OK, but if you plan to manage records elsewhere, be warned that with nested try/catch blocks inside a transaction, control goes to the outermost try/catch block, skipping the internal ones. Well, there are two or three exception types this doesn't apply to (check the programming manual, I don't remember them now; they were related to database record locking and the like).
I would create a static exists method on the table, and be careful to select only RecId for performance. Yes, writing 20 fields in a select is a pain, but you will do that ONCE, and in the long run it's the better and more maintainable approach.
public static boolean exists(Type1 _field1, Type2 _field2 /* ... */)
{
    boolean ret = false;
    MyTable myTable;

    if (_field1 && _field2) // mandatory fields
    {
        ret = (select firstonly RecId from myTable
                   where myTable.Field1 == _field1
                      && myTable.Field2 == _field2).RecId != 0;
    }
    return ret;
}
In general I wouldn't use this method in insert() or update() unless there's a good reason to (in that case, it can be interesting to set AllowDuplicates == Yes if performance is critical, because you're managing duplicates manually; be careful with doUpdate/doInsert or external inserts/updates). I would use this method in your job or in other places to check for duplicates before inserting/updating.
Why don't you implement a validateWrite method and avoid inserting the duplicates?
if (table.validateWrite())
{
    table.insert();
}
else
{
    // log or skip the duplicate record
}

More efficient SQL for retrieving thousands of records on a view

I am using LINQ to SQL as my ORM, and I have a list of IDs (up to a few thousand) passed into my retriever method. With that list I want to grab all User records that correspond to those unique IDs. To clarify, imagine I have something like this:
List<IUser> GetUsersForListOfIds(List<int> ids)
{
    using (var db = new UserDataContext(_connectionString))
    {
        var results = (from user in db.UserDtos
                       where ids.Contains(user.Id)
                       select user);
        return results.Cast<IUser>().ToList();
    }
}
Essentially that gets translated into SQL as:
select * from dbo.Users where userId in ([comma-delimited list of Ids])
I'm looking for a more efficient way of doing this. The problem is the in clause in sql seems to take too long (over 30 seconds).
We'll need more information on your database setup, like indexes and the type of server (see Mitch Wheat's post). The type of database would help as well; some databases handle IN clauses poorly.
From a troubleshooting standpoint: have you isolated the time delay to the SQL server? Can you run the query directly on your server and confirm it's the query taking the extra time?
SELECT * can also have a bit of a performance impact; could you narrow down the result set that's being returned to just the columns you require?
Edit: just saw the 'view' comment that you added. I've had problems with view performance in the past. Is it a materialized view, or could you make it into one? Recreating the view logic as a stored procedure may also help.
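As a rough sketch of the column-narrowing suggestion above (the UserSummary type and its Name property are hypothetical; UserDataContext, UserDtos and Id come from the question):
List<UserSummary> GetUserSummariesForIds(List<int> ids)
{
    using (var db = new UserDataContext(_connectionString))
    {
        // Project only the columns the page actually needs instead of
        // pulling every column of the view for every matching row.
        return (from user in db.UserDtos
                where ids.Contains(user.Id)
                select new UserSummary
                {
                    Id = user.Id,
                    Name = user.Name // hypothetical column
                }).ToList();
    }
}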
Have you tried converting this to a list, so the application is doing this in-memory? i.e.:
List<IUser> GetUsersForListOfIds(List<int> ids)
{
    using (var db = new UserDataContext(_connectionString))
    {
        var results = (from user in db.UserDtos.ToList()
                       where ids.Contains(user.Id)
                       select user);
        return results.Cast<IUser>().ToList();
    }
}
This will obviously be memory-intensive if it's being run on a public-facing page on a hard-hit site. If it still takes 30+ seconds in staging/development, then my guess is that the view itself takes that long to process, or you're transferring tens of MB of data each time you retrieve the view. Either way, my only suggestions are to access the table directly and only retrieve the data you need, rewrite the view, or create a new view for this particular scenario.
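Another common workaround, not mentioned above but worth trying if the huge generated IN clause itself is the bottleneck, is to split the id list into smaller batches. A sketch under that assumption (the batch size of 500 is arbitrary):
List<IUser> GetUsersForListOfIds(List<int> ids)
{
    const int batchSize = 500; // assumption; tune against your server
    var users = new List<IUser>();
    using (var db = new UserDataContext(_connectionString))
    {
        for (int i = 0; i < ids.Count; i += batchSize)
        {
            var batch = ids.Skip(i).Take(batchSize).ToList();
            // Each iteration produces a small IN (...) clause instead of one
            // list with several thousand values.
            users.AddRange(db.UserDtos
                .Where(u => batch.Contains(u.Id))
                .ToList()
                .Cast<IUser>());
        }
    }
    return users;
}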

nhibernate deadlocks

I'm using the following code in an ASP.NET page to create a record, then count the records to make sure I haven't exceeded a set limit and rollback the transaction if I have.
using (var session = NhibernateHelper.OpenSession())
using (var transaction = session.BeginTransaction())
{
    session.Lock(mall, LockMode.None);
    var voucher = new Voucher();
    voucher.FirstName = firstName ?? string.Empty;
    voucher.LastName = lastName ?? string.Empty;
    voucher.Address = address ?? string.Empty;
    voucher.Address2 = address2 ?? string.Empty;
    voucher.City = city ?? string.Empty;
    voucher.State = state ?? string.Empty;
    voucher.Zip = zip ?? string.Empty;
    voucher.Email = email ?? string.Empty;
    voucher.Mall = mall;
    session.Save(voucher);
    var issued = session.CreateCriteria<Voucher>()
        .Add(Restrictions.Eq("Mall", mall))
        .SetProjection(Projections.Count("ID"))
        .UniqueResult<int>();
    if (issued >= mall.TotalVouchers)
    {
        transaction.Rollback();
        throw new VoucherLimitException();
    }
    transaction.Commit();
    return voucher;
}
However, I'm getting a ton of deadlocks. I presume this happens because I'm trying to count the records in a table I just performed an insert on, and a lock is still held on the inserted row, causing the deadlock.
Can anyone confirm this?
Can anyone suggest a fix?
I've tried calling SetLockMode(LockMode.None) on the final query, but that results in a NullReferenceException that I cannot figure out.
Edit: If I run the query before I save the object, it works, but then I'm not accomplishing the goal of verifying that my insert didn't somehow go over the limit (in the case of concurrent inserts).
Edit: I found that using IsolationLevel.ReadUncommitted in the session.BeginTransaction call solves the problem, but I'm no database expert. Is this the appropriate solution to the problem, or should I adjust my logic somehow?
That design would be deadlock prone - typically (not always) one connection is unlikely to deadlock itself, but multiple connections that do inserts and aggregates against the same table are very likely to deadlock. That's because while all activity in one transaction looks complete from the point of view of the connection doing the work -- the db won't lock a transaction out of "its own" records -- the aggregate queries from OTHER transactions would attempt to lock the whole table or large portions of it at the same time, and those would deadlock.
Read Uncommitted is not your friend in this case, because it basically says "ignore locks," which at some point will mean violating the rules you've set up around the data, i.e. the count of records in the table will be inaccurate, and you'll act on that inaccurate count. Your count will return 10 or 13 when the real answer is 11.
The best advice I have is to rearrange your insert logic such that you capture the idea of the count, without literally counting the rows. You could go a couple of directions. One idea I have is this: literally number the inserted vouchers with a sequence and enforce a limit on the sequence itself.
Make a sequence table with columns (I am guessing) MallID, NextVoucher, MaxVouchers.
Seed that table with the mall IDs, 1, and whatever the limit is for each mall.
Change the insert logic to this pseudocode:
Begin transaction
Sanity-check NextVoucher for the mall in the sequence table; if too many exist, abort
If less than MaxVouchers for the mall then {
    check, fetch, lock and increment NextVoucher
    if the increment was successful, use the value of NextVoucher to perform your insert
    and include it in the target table
}
Error? Rollback
No error? Commit
A sequence table like this hurts concurrency some, but I think not as much as counting the rows in the table constantly. Be sure to perf test.
Also, the [check, fetch, lock and increment] is important - you have to exclusively lock the row in the sequence table to prevent some other connection from using the same value in the split second before you increment it. I know the SQL syntax for this, but I'm afraid I am no nHibernate expert.
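For what it's worth, here is one possible shape of that [check, fetch, lock and increment] step on SQL Server, expressed with plain ADO.NET rather than NHibernate. The VoucherSequence table and its columns follow the pseudocode above and are assumptions, not the poster's schema:
using System;
using System.Data.SqlClient;

public static class VoucherSequence
{
    // Returns the reserved voucher number, or null if the mall has hit its limit.
    public static int? ReserveVoucherNumber(SqlConnection conn, SqlTransaction tx, int mallId)
    {
        // The UPDATE takes an exclusive lock on the mall's sequence row, so two
        // concurrent callers can never reserve the same number. OUTPUT deleted.NextVoucher
        // returns the pre-increment value (the number being reserved) in the same round trip.
        const string sql = @"
            UPDATE VoucherSequence
            SET    NextVoucher = NextVoucher + 1
            OUTPUT deleted.NextVoucher
            WHERE  MallID = @mallId
              AND  NextVoucher <= MaxVouchers;";

        using (var cmd = new SqlCommand(sql, conn, tx))
        {
            cmd.Parameters.AddWithValue("@mallId", mallId);
            object result = cmd.ExecuteScalar();
            // Null means no row qualified, i.e. the limit was already reached.
            return result == null ? (int?)null : Convert.ToInt32(result);
        }
    }
}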
For read uncommitted data errors, check this out: http://sqlblog.com/blogs/merrill_aldrich/archive/2009/07/29/transaction-isolation-dirty-reads-deadlocks-demo.aspx (disclaimer: Merrill Aldrich is me :-)
Two questions:
How frequently are vouchers deleted?
Any objections (beyond purity) to a db-level trigger?

LINQ: Cannot insert duplicate key row in object 'dbo.tblOutstandingCompletions' with unique index

I have an application (ASP.NET 3.5) that allows users to rerun a particular process if required. The process inserts records into an MS SQL table. I have the insert in a Try/Catch and ignore the catch if a record already exists (the error in the title would be valid). This worked perfectly using ADO, but after I converted to LINQ I noticed an interesting thing: if on a rerun of the process there were already records in the table, any new records would be rejected with the same error even though there was no existing record.
The code is as follows:
Dim ins = New tblOutstandingCompletion
With ins
    .ControlID = rec.ControlID
    .PersonID = rec.peopleID
    .RequiredDate = rec.NextDue
    .RiskNumber = 0
    .recordType = "PC"
    .TreatmentID = 0
End With
Try
    ldb.tblOutstandingCompletions.InsertOnSubmit(ins)
    ldb.SubmitChanges()
Catch ex As Exception
    ' An attempt to load a duplicate record will fail
End Try
The DataContext for the database was set during Page_Load.
I resolved the problem by redefining the DataContext before each insert:
ldb = New CaRMSDataContext(sessionHandler.connection.ToString)
Dim ins = New tblOutstandingCompletion
While I have solved the problem, I would like to know if anyone can explain it. Without the DataContext redefinition, the application works perfectly as long as there are no duplicate records.
Regards
James
It sounds like the DataContext thinks the record was inserted the first time, so if you don't redefine the context, it rejects the second insert because it "knows" the record is already there. Redefining the context forces it to actually check the database to see if it's there, which it isn't. That's LINQ trying to save a round trip to the database. Creating a new context as you've done forces it to reset what it "knows" about the database.
I had seen a very similar issue in my code where the identity column wasn't an auto-incrementing int column but a GUID with a default value of newid(); basically LINQ wasn't allowing the database to create the GUID, inserting Guid.Empty instead, and the second (or later) attempts would (correctly) throw this error.
I ended up ensuring that I generated a new GUID myself during the insert. More details can be seen here: http://www.doodle.co.uk/Blogs/2007/09/18/playing-with-linq-in-winforms.aspx
This allowed me to insert multiple records with the same DataContext.
Also, have you tried calling InsertOnSubmit multiple times (once for each new record) but only calling SubmitChanges once?
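A small sketch of that last suggestion, written in C# while the question is VB.NET; the records collection is hypothetical, and the entity and context names come from the question:
using (var ldb = new CaRMSDataContext(sessionHandler.connection.ToString()))
{
    foreach (var rec in records) // hypothetical source collection
    {
        var ins = new tblOutstandingCompletion
        {
            ControlID = rec.ControlID,
            PersonID = rec.peopleID,
            RequiredDate = rec.NextDue,
            RiskNumber = 0,
            recordType = "PC",
            TreatmentID = 0
        };
        // Queue every new row first...
        ldb.tblOutstandingCompletions.InsertOnSubmit(ins);
    }
    // ...then send them to the database in a single change set.
    ldb.SubmitChanges();
}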
gfrizzle seems to be right here...
My code fails with the duplicate key error even though I've just run a stored proc to truncate the table on the database. As far as the data context knows, the previous insertion of a record with the same key is in fact a duplicate key, and an exception is thrown.
The only way that I've found around this is:
db = null;
db = new NNetDataContext();
right after the SubmitChanges() call that executes the previous InsertOnSubmit requests. Seems kind of dumb, but it's the only way that works for me other than redesigning the code.

Will ASP.Net remove individual tables from a cached dataset to free up memory?

I have a strange, sporadic issue.
I have a stored procedure that returns 5 small tables (e.g. IDs and text descriptions for status drop-down lists and such). The code calls this and places the returned dataset into the ASP.NET cache. Separate methods are called to retrieve individual tables from the dataset to databind to controls throughout my web app.
Only on my QA server does one table disappear. The table will be there for one testing scenario; however, the next time the same scenario is run, that one table is null. The table that goes MIA is always the same (table #4 of 5, to be precise).
If the ASP.NET worker process needs memory, can it remove an individual table from a cached dataset while keeping the dataset's indexes in place?
Below is the caching code:
public static DataSet GetDropDownLists()
{
    DataSet ds = (DataSet)HttpContext.Current.Cache["DropDownListData"];
    if (null == ds)
    {
        // - Database Connection Information Here
        ds = db.ExecuteDataSet(CommandType.StoredProcedure, "Sel_DropDownListData");
        HttpContext.Current.Cache.Add("DropDownListData", ds, null, DateTime.Now.AddMinutes(20), TimeSpan.Zero, CacheItemPriority.Normal, null);
    }
    return ds;
}
Here's an example of the method that returns the null table:
public static DataTable GetStatusList()
{
    return GetDropDownLists().Tables[3];
}
Again, this only happens on my QA server, not when my local code is wired up to the QA database, nor on a separate server I use for my own development testing.
Thanks
The ASP.NET Cache only works directly on the objects you insert into its IEnumerable list.
If you add a List<> of objects as one cache item, it will either keep the entire list or discard the entire list from the cache. It will never go through the list and remove single items from it. The same should hold for tables in a dataset.
Something else must be messing up your dataset. Are you sure table #4 is properly loaded on your QA server in the first place (i.e. it's not empty to begin with)?
The fact that the DataSet is cached is immaterial.
A table can't be removed from a DataSet other than explicitly.
You'll have to debug to find out what's going on, logically one of:
the table's been explicitly removed from the DataSet
the DataSet was removed from the cache and regenerated without the table
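To help narrow down which of those two cases it is, one option is a defensive version of GetStatusList that logs and rebuilds the cache entry when the table is missing. A sketch reusing the names from the question (the Trace call is just a placeholder for whatever diagnostics you prefer):
public static DataTable GetStatusList()
{
    DataSet ds = GetDropDownLists();
    if (ds.Tables.Count < 4)
    {
        // Record how many tables actually came back, so you can tell whether
        // the table was never loaded or was removed after caching.
        System.Diagnostics.Trace.TraceWarning(
            "DropDownListData only has {0} table(s); expected 5.", ds.Tables.Count);

        // Evict the suspect entry and rebuild it from the stored procedure.
        HttpContext.Current.Cache.Remove("DropDownListData");
        ds = GetDropDownLists();
    }
    return ds.Tables[3];
}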
