How would you go about handling lost data from a SQL connection loss in an ASP.NET application?
Let's say you're running an algorithm that adds and removes certain roles. Midway through it, the connection to the SQL database is lost. And because the connection is gone, you won't even be able to backtrack the steps already done. The whole state is lost, leaving the database in an erroneous condition.
Would you set IIS Rapid Fail Protection to shut the site down upon one exception and manually force the function to run again (after the connection has been fixed)?
Or what is the professional way of handling it? I am quite new to this. Maybe there's something I don't know about (such as IIS perhaps retrying the request, or caching).
(Using Entity Framework.)
This is not a coding problem in its own right; it is more a question of best practice for handling data loss with a SQL database on ASP.NET.
You need to run the batch of SQL operations inside a SQL transaction, so that whatever the error, a rollback happens. This is a built-in SQL feature and nothing special needs to be done.
Once you start a SQL transaction, a commit is issued only when all operations succeed. The default behavior in all other, non-success scenarios is a rollback.
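For example, since you mention Entity Framework, a minimal sketch (assuming EF 6; MyDbContext and the role operations are placeholders, not your actual code) could look like this:

using System.Data.Entity; // assumes Entity Framework 6

using (var context = new MyDbContext())
using (var transaction = context.Database.BeginTransaction())
{
    // ... add and remove the role rows through the context here ...
    context.SaveChanges();

    // Only reached if everything above succeeded. If the connection drops
    // before this point, the transaction is rolled back when it is disposed,
    // and SQL Server also discards uncommitted work when the session dies.
    transaction.Commit();
}

So even if the connection is lost mid-algorithm, nothing is half-applied; the database stays as it was before the batch started.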
If you're encountering issues with any specific logic, post the code snippet and we're glad to help.
I have a head-scratcher here. Over a year ago, I wrote a website feature/form where I could submit SQL Code that is not executed but stored in a table. This feature worked when I created it, as I was able to upload several scripts into the database. I have not needed to use this feature for several months, and recent upgrades to my website had me re-checking features. The feature stopped working ... and after some research, it was determined that our company firewall was now blocking the form from submitting due to a detection of "SQL Injection".
They swear that no changes were made to the firewall; however, this seems unlikely since the feature previously functioned. Regardless ... my confusion is that I know many websites, like this one, that allow people to post "code" using a web form interface without being flagged for SQL injection. I am sure websites (like this one) have firewalls protecting them as well.
Is there something that needs to be done when transmitting code on a page submit/postback to clear a firewall's SQL Injection checks?
For clarification ...
There is a form with a LargeTextArea control where a SQL script is entered. This SQL code is transmitted via postback to the server, and server-side code handles the saving of the script into a table. Very similar to what this website (StackOverflow) does, I would assume. We can post code here without it being intercepted and blocked by a firewall. The code we post here in our messages is eventually stored in a database on the server. That is the same behavior that I am performing.
Because of the firewall intervening between the client browser and the web server, the postback is never completed. Therefore, the server never receives the postback data to perform any processing. The client browser simply receives a "connection-reset" error.
I always thought of SQL injection as something that should be handled server-side ... the responsibility of the programmer to ensure it is not abused. Having a firewall intervene before the request even reaches the server, before any code can run to check for SQL injection ... feels wrong to me. Even if you have code that prevents SQL injection, it would not matter if the firewall is intercepting and intervening prior to any server-side logic. Am I wrong?
Your firewall rules for SQL injection are blocking parameters that "look like" SQL injection - and that can lead to false positives for code that is not executed.
The correct way to get around this is to modify the firewall rule. See this answer for a way to do that in ModSecurity.
Since this doesn't seem to be an option for you, you might consider bypassing the rule with obfuscation. For example, encrypting with a simple fixed key before putting the script in the database (and decrypting on display) would hide the code from the firewall. It would also provide some guardrails against the code being executed in the future.
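A rough sketch of that encode/decode pair (class name, key handling, and where you call it are all illustrative; this only hides the text from pattern matching, it is not real security):

using System;
using System.Security.Cryptography;
using System.Text;

public static class ScriptObfuscator
{
    // Fixed key/IV derived from hard-coded strings purely for illustration.
    private static readonly byte[] Key =
        SHA256.Create().ComputeHash(Encoding.UTF8.GetBytes("fixed-passphrase-from-config"));
    private static readonly byte[] IV = Encoding.UTF8.GetBytes("16-byte-init-vec");

    // Call before storing the script; the Base64 output no longer "looks like" SQL.
    public static string Protect(string sqlText)
    {
        using (var aes = Aes.Create())
        using (var encryptor = aes.CreateEncryptor(Key, IV))
        {
            var plain = Encoding.UTF8.GetBytes(sqlText);
            return Convert.ToBase64String(encryptor.TransformFinalBlock(plain, 0, plain.Length));
        }
    }

    // Call when displaying the stored script.
    public static string Unprotect(string storedText)
    {
        using (var aes = Aes.Create())
        using (var decryptor = aes.CreateDecryptor(Key, IV))
        {
            var cipher = Convert.FromBase64String(storedText);
            return Encoding.UTF8.GetString(decryptor.TransformFinalBlock(cipher, 0, cipher.Length));
        }
    }
}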
By bypassing security checks you are taking on a great responsibility. You should be very careful to ensure (and warn in comments) that this code is never executed. This includes executing it to check that it is correct SQL - which could also be abused for SQL injection.
Ahoy,
We have two BizTalk applications in BizTalk 2013 R2 that seem to be having random issues. Both applications follow the same process:
1. Pull data from a WCF endpoint.
2. Delete data from the database via a stored procedure.
3. Insert the new data that was pulled, via a WCF-SQL call.
Both applications worked great during our testing for quite a while. But, over time, we've had a few issues crop up with the insert via the WCF-SQL call.
A fatal error occurred while reading the input stream from the network. The session will be terminated (input error: 64, output error: 0).
This error showed up in the Sql Server logs. We had this one for about a day and then it just went away. Everything else continued to work fine on that target sql server. It was only BizTalk that had issues.
Our latest error is where the request to the WCF-SQL insert happens (the data is actually inserted), but there is never a response. So the Send Port keeps trying to send for its retries and the Orchestration just dehydrates.
We tinkered with every setting throughout the application to try and solve this, but only deleting the application and redeploying fixed it (for now, at least).
So, I guess my question is whether anyone else has had these sorts of "random" errors with BizTalk, where it works great for a while and then goes downhill like we've seen?
I'd really prefer to have something stable that is minimal maintenance. This is an enterprise product after all.
I've had issues similar to this happen when moving between environments where there were data differences, e.g. a column full of NULLs in QA and a column full of actual data in PROD. There are a few things you can try:
Use SQL Server Profiler to capture the RPC call coming from BizTalk, and try running it directly on the SQL Server that BizTalk is calling remotely (wrap it in a transaction you roll back at the end if this is production). Does it take longer than expected to run? Debug the procedure to find the pain points and optimize if possible. I've written a blog post about how to do this here: http://blog.tallan.com/2015/01/09/capturing-and-debugging-a-sql-stored-procedure-call-from-biztalk/
Up the timeout settings in the binding configuration for the send port to ensure that it is not timing out before SQL can finish doing its work.
Up the System.Transactions timeout in Machine.config to ensure that MSDTC isn't causing issues: http://blogs.msdn.com/b/madhuponduru/archive/2005/12/16/how-to-change-system-transactions-timeout.aspx and http://blog.brandt-lassen.dk/2012/11/overriding-default-10-minutes.html
If possible, do a data compare between the TEST/QA and PROD databases. Look for significant differences, especially in columns that you are using in JOIN conditions and WHERE clauses.
Having deployed a new build of an ASP.NET site in a production environment, I am logging dozens of data errors every second, almost always with the error "Cannot find table 0." We use datasets and frequently refer to Table[0], and while I understand the defensive coding practice of checking the dataset for tables before accessing Table[0], it's never been a problem in the past. A certain page will load fine one second, and then be missing one of its data-driven components the next. Just seeing if this rings a bell for anyone.
More detail: I used a different build server this time, and while I imagine the compiler settings are the same on both, I have a hard time thinking that there's a switch that makes 50% of my database calls come back with no tables. I also switched the project to VS 2008, but then reverted all of those changes when I switched back to VS 2005. I notice that the built assembly has a new MyLibrary.XmlSerializers.dll where there wasn't one before, but I also can't imagine that that's causing all the trouble. (It also doesn't fall down on calls to MyLibrary, or at least no more than at any other time.)
Updated to add: I've discovered that the troublesome build is a "Release" build, where the working build was compiled as "Debug". Could that explain it?
Rolling back to the build before these changes fixed it. (Rebooting the SQL Server, the step we tried before that, did not.)
The trouble also seems to be load-based - this cruised through our integration and QA environments without a problem, and even our smoke test environment - the one that points to production data - is fine under light load.
Does this have the distinguishing characteristics of anything you might have seen in the past?
Bumping this old question because we have encountered the same issue and perhaps our solution would give more insight in what causes this.
Essentially this problem occurs in a production environment that is under very heavy load in a Windows service that uses multiple threads to process several jobs simultaneously (100 users use the same DB via ASP.NET web app and there are about 60 transactions/second on older hardware with SQL Server 2000).
No variables are shared; that is, a connection is opened anew, a transaction is started, the operations are executed, the transaction is committed, and the connection is closed.
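Roughly the shape of each job (a sketch with placeholder names, not our actual code):

using System.Data;
using System.Data.SqlClient;

static void ProcessJob(string connectionString)
{
    // Each worker thread does its own open / begin / execute / commit / close.
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var transaction = connection.BeginTransaction())
        using (var command = new SqlCommand("dbo.ProcessJob", connection, transaction))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.ExecuteNonQuery();
            transaction.Commit();
        }
    } // the connection is returned to the pool here
}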
Under heavy load sometimes one of the following exceptions occurs:
NullReferenceException: Object reference not set to an instance of an object.
   at System.Data.SqlClient.SqlInternalConnectionTds.get_IsLockedForBulkCopy()
or
System.Data.SqlClient.SqlException: The server failed to resume the transaction. Desc:3400000178
or
New request is not allowed to start because it should come with valid transaction descriptor
or
This SqlTransaction has completed; it is no longer usable
It seems that somehow a connection within the pool becomes corrupted and remains associated with previously used transactions. Furthermore, if such a connection is retrieved from the pool, then sqlAdapter.Fill(dataset) results in an empty dataset, causing "Cannot find table 0". Because our service would retry the operation (reading the job list) on failure, and would always get the same corrupt connection from the pool, it would keep failing with this error until restarted.
We removed the issue by using SqlConnection.ClearPool(connection) on exception, to make sure the connection is discarded from the pool, and by restructuring the application so fewer threads access the same resources simultaneously.
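Roughly like this (the procedure name is a placeholder; the point is the ClearPool call in the catch block):

using System.Data;
using System.Data.SqlClient;

static DataSet ReadJobList(string connectionString)
{
    var result = new DataSet();
    using (var connection = new SqlConnection(connectionString))
    {
        try
        {
            connection.Open();
            using (var adapter = new SqlDataAdapter("dbo.GetJobList", connection))
            {
                adapter.SelectCommand.CommandType = CommandType.StoredProcedure;
                adapter.Fill(result);
            }
        }
        catch (SqlException)
        {
            // Discard the pooled connections for this connection string so the next
            // retry gets a fresh physical connection instead of the corrupt one.
            SqlConnection.ClearPool(connection);
            throw;
        }
    }
    return result;
}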
I have no clue what exactly caused this issue, so I am not sure we have really fixed it; maybe we have just made it so rare that it has not occurred again yet.
I've fought precisely this error message before. The key is that an underlying data method is swallowing a timeout exception.
You're probably doing something like this:
var table = GetEmployeeDataSet().Tables[0];
GetEmployeeDataSet is swallowing an exception, probably a timeout exception, which is why it only happens sporadically - it happens under load. You need to do the following to fix it:
Modify the underlying code so it does not swallow the exception, but rather lets it bubble up to the next level so you can identify it properly (a sketch of what this typically looks like follows below).
Identify the query (or queries) causing the problem, and then rewrite, reindex, denormalize, or throw hardware at the problem. See this for more info: System.Data.SqlClient.SqlException: Timeout expired
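A minimal illustration of the swallowing anti-pattern (method and procedure names are invented, not taken from your build):

using System.Data;
using System.Data.SqlClient;

static DataSet GetEmployeeDataSet(string connectionString)
{
    var ds = new DataSet();
    try
    {
        using (var connection = new SqlConnection(connectionString))
        using (var adapter = new SqlDataAdapter("dbo.GetEmployees", connection))
        {
            adapter.SelectCommand.CommandType = CommandType.StoredProcedure;
            adapter.Fill(ds);
        }
    }
    catch (SqlException)
    {
        // Anti-pattern: the timeout is swallowed, so the caller gets an empty
        // DataSet and later fails with "Cannot find table 0".
        // Fix: remove this catch (or log and rethrow) so the real error surfaces.
    }
    return ds;
}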
I've seen something similar. I believe our problem had to do with failed sessions being re-used (once the session object failed it went into a poor state and could not recover.) We fixed it by increasing the memory for the session pool and increasing the frequency of the web application recycling.
It also was "caused" by a new version that at first blush did not seem to have any change to cause such an effect. However, eventually it became clear that the logic of the program was opening and closing a lot more connections (maybe 20% more) than it used to. This small change pushed the limit of our prior configuration.
You might check the SQL Server logs for errors, or the web server event log. It sounds like your connection pool could be out of available connections, or your database could be out of connections.
Which database calls changed between versions?
The error is obviously telling you one of your database calls isn't returning any data on occasion; I can't think of any cases where a code/assembly issue would cause it.
I have seen something like this when doing something with nHibernate Sessions in a non-thread-safe manner. That would explain why you only see it under load. Would need to see your code to guess at what isn't thread-safe though.
I inherited an application with a lot of stored procedures, and many of them have exception handling code that inserts a row in an error table and sends a DBMail. We have ELMAH on the ASP.NET side, so I'm wondering if exception management in the stored procs is necessary. But before I rip it out, I want to ensure that I'm not making a grave mistake because of ignorance about a best practice.
Only one application uses the stored procedures.
When would one prefer using exception management in a SQL Server 2005 stored procedure over handling the exception on the ASP.NET side?
If there are other applications utilizing these stored procedures then it might make sense to retain the error handling in the stored procedures. In your edit you indicate that this is not the case so removing the exception handling is probably not a bad idea.
The MSDN article Exception Handling outlines when to catch exceptions and when to let them bubble up the stack. It can be argued that it makes sense to handle and log recoverable database exceptions in the stored procedure.
There is a principle sometimes referred to as "First Failure Data Capture" - i.e. it's the responsibility of the first "chunk of code" that identifies an error to immediately capture it for future diagnosis. In multi-tier architectures this leads to some interesting questions about who "first" actually is.
I believe that it's quite reasonable for the stored procedure to log something to the database (sending an email sounds somewhat overkill for all but the most critical of errors, but that's another issue). It cannot assume that higher layers will be well behaved; you may only have one client now, but you can't predict the future.
The stored procedure can still throw an exception as well as logging. And sometimes in difficult situations being able to correlate errors in the different layers is actually very handy.
I would take a lot of persuading to remove that error logging.
I believe that logging to a table only works for simpler systems where everything is done within a single stored procedure call.
Once the system is complex enough that you implement transactions across database calls, logging to the database within the stored procedure becomes much more of a problem.
The rollbacks undo the logging to the table.
Logic that allows rollbacks and logging, in my opinion, creates too much potential for defects.
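For what it's worth, the same tension shows up on the application side: if the error-logging insert runs inside the transaction that is being rolled back, the log row disappears with it. One hedged sketch of a workaround (table and method names are made up) is to suppress the ambient transaction around the logging call so the log row survives the rollback:

using System;
using System.Data.SqlClient;
using System.Transactions;

static void LogError(string connectionString, string message)
{
    // Suppress the ambient transaction so this insert survives a later rollback.
    using (var scope = new TransactionScope(TransactionScopeOption.Suppress))
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "INSERT INTO dbo.ErrorLog (LoggedAt, Message) VALUES (GETUTCDATE(), @msg)",
        connection))
    {
        command.Parameters.AddWithValue("@msg", message);
        connection.Open();
        command.ExecuteNonQuery();
        scope.Complete();
    }
}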
Just curious if anyone else has got this particular error and know how to solve it?
The scenario is as follows...
We have an ASP.NET web application using Enterprise Library running on Windows Server 2008 IIS farm connecting to a SQL Server 2008 cluster back end.
MSDTC is turned on. DB connections are pooled.
My suspicion is that somewhere along the line an MSDTC transaction failed, the connection was returned to the pool, and the next query on a different page picked up the misbehaving connection and got this particular error. The funny thing is we got this error on a query that has no need whatsoever for a distributed transaction (committing to two databases, etc.). We were only doing a select query (no transaction) when we got the error.
We did SQL profiling and the query got run on the SQL Server, but it never came back (since the MSDTC transaction had already been aborted on the connection).
Some other related errors to accompany this are:
New request is not allowed to start because it should come with valid transaction descriptor.
Internal .Net Framework Data Provider error 60.
MSDTC has a default 90-second timeout; if one query's execution exceeds this time limit, you will encounter this error when the transaction tries to commit.
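If the unit of work genuinely needs more time, one option (just a sketch; the timeout value is illustrative, and the machine-wide System.Transactions maxTimeout, 10 minutes by default, still caps it) is to give the TransactionScope an explicit timeout instead of relying on the default:

using System;
using System.Transactions;

static void RunLongWork()
{
    // Explicit 5-minute timeout for this scope.
    using (var scope = new TransactionScope(TransactionScopeOption.Required,
                                            TimeSpan.FromMinutes(5)))
    {
        // ... long-running database work enlisted in the transaction ...
        scope.Complete();
    }
}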
A bounty may help get the answer you seek, but you're probably going to get better answers if you give some code samples and give a better description of when the error occurs.
Does the error only intermittently occur? It sounds like it from your description.
Are you enclosing the code that you want executed as a transaction in a using TransactionScope block, as Microsoft recommends? This should help avoid weird transaction behavior. Recall that a using block makes sure that the object is always disposed regardless of exceptions thrown. See here: http://msdn.microsoft.com/en-us/library/ms172152.aspx
If you're using TransactionScope, there is an option, System.Transactions.TransactionScopeOption.RequiresNew, that tells the framework to always create a new transaction for this block of code:
Using ts As New Transactions.TransactionScope(Transactions.TransactionScopeOption.RequiresNew)
    ' Do Stuff
End Using
Also, if you're suspicious that a connection is getting faulted and then put back into the connection pool, the likely solution is to enclose the code that may fault the connection in a Try-Catch block and Dispose the connection in the catch block.
Old question ... but I ran into this issue in the past few days.
Could not find a good answer until now. Just wanted to share what I found out.
My scenario involves multiple sessions being opened by multiple session factories. I had to roll back correctly, and wait and make sure the other transactions were no longer active. It seems that just rolling back one of them will roll back everything.
But after adding a Thread.Sleep() between the rollbacks, it doesn't do the other one and continues fine with the rollback. Subsequent hits that trigger the method don't result in the "New request is not allowed to start because it should come with valid transaction descriptor." error.
https://gist.github.com/josephvano/5766488
I have seen this before and the cause was exactly what you thought. As Rice suggested, make sure that you are correctly disposing of the db-related objects to avoid this problem.