SQL Server Connection Issue - asp.net

We recently launched a new web site... there are roughly 150 users active during peak hours. During peak hours we are hitting an issue every few minutes; the exception text is listed below.
System.Web.HttpUnhandledException:
Exception of type 'System.Web.HttpUnhandledException' was thrown.
---> System.Data.SqlClient.SqlException: The client was unable to establish a connection because of an error during connection initialization process before login.
Possible causes include the following:
the client tried to connect to an unsupported version of SQL Server;
the server was too busy to accept new connections;
or there was a resource limitation (insufficient memory or maximum allowed connections) on the server. (provider: Named Pipes Provider, error: 0 - No process is on the other end of the pipe.)
Our data access layer calls various DataTableAdapters using the following syntax.
EDIT
Yes, da is the name assigned to the DataTableAdapter. There is no connection.Open() because the DataTableAdapter takes care of all that, right?
using (TheDataLayer.some.strongly.typedNameTableAdapters.suchAndSuchTableAdapter da = new TheDataLayer.some.strongly.typedNameTableAdapters.suchAndSuchTableAdapter())
{
    StronglyTyped.DataTable dt = new StronglyTyped.DataTable();
    da.FillByVariousArguments(dt, ..., ...);
    //da.Dispose();
    return something;
}
The connection string looks something like:
<add name="MyConnectionString"
connectionString="Data Source=myDBServerName;Initial Catalog=MyDB;User ID=MyUserName;Password=MyPassword"
providerName="System.Data.SqlClient" />
I'm trying to rule out the problem being in the code. Is there anything "simple" that can be done to minimize this issue?
Thanks.

Have you tried setting the "Connection Pooling" options directly in the connection string?
Example:
connectionString="....;Pooling=true;Min Pool Size=1;Max Pool Size=10;..."
You can read more info here: http://msdn.microsoft.com/en-us/library/8xx3tyca%28v=vs.71%29.aspx

Without seeing the code that actually opens and uses the connection, it's hard to say where the problem is.
Please update your question with what happens when you create that DataTableAdapter (I'm guessing that's what da is).
Also, if you're using a using statement, you shouldn't dispose of the object you created it for yourself; the using block already calls Dispose, as sketched below.
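A minimal sketch of what the compiler does with a using block (the adapter type here is a stand-in for the poster's generated adapter, not an exact signature):

using System;

class SuchAndSuchTableAdapter : IDisposable   // stand-in for the generated adapter type
{
    public void Dispose() { /* releases the underlying connection */ }
}

class Example
{
    static void Demo()
    {
        // The using block ...
        using (var da = new SuchAndSuchTableAdapter())
        {
            // ... fill the DataTable here ...
        }

        // ... is roughly equivalent to this expansion; Dispose runs even if an exception is thrown:
        var da2 = new SuchAndSuchTableAdapter();
        try
        {
            // ... fill the DataTable here ...
        }
        finally
        {
            if (da2 != null) da2.Dispose();
        }
    }
}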

We had a similar issue which only happened in our production environment, and it was particularly associated with load. During the busy time of day we would receive several of the above-mentioned exceptions.
We went through a massive investigation into why this exception occurs and made a lot of changes to fix the issue. The change that actually alleviated the problem was the connection pool setting: we set the min pool size to 1 and the max pool size to 10. (It can vary based on your situation; the question's connection string with those settings is sketched below.)
This issue will be more prevalent when you have many (i.e. thousands of) customer databases and use a default connection string (i.e. database=DBName;server=ServerName). We were not explicitly setting min/max pool size, so we got the defaults: min pool size 0 and max pool size 100.
Again, I don't have concrete proof, but the theory is that during the busy time of day, the load caused the application server to open connections to many databases at once, and the DB server was bombarded with a lot of connection requests at a single point in time. Either the application server or the DB server didn't have the capacity to handle that many connections in such a short period. It was also happening most on the server with the most databases. We did not see a lot of open connections at any one time, but the application server was unable to make connections to the databases for a short duration whenever it had a surge of requests coming in.
After we set a min pool size, the problem was alleviated: there is at least one connection to each database available at all times, so when a burst of requests needs connections to several databases, we already have at least one connection to each database before we have to request a new one.
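Applied to the connection string from the question, that looks something like this (the pool sizes are the ones that worked for us; tune them for your own load):

connectionString="Data Source=myDBServerName;Initial Catalog=MyDB;User ID=MyUserName;Password=MyPassword;Min Pool Size=1;Max Pool Size=10"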

Maybe unrelated to the actual problem you were facing, but this error is also thrown if you are trying to connect without specifying the correct port along with the database server name.

Related

dbPool object expiring

I am using the pool package to connect a Shiny application to a fairly large SQLite database (3Gb, 70M rows).
I create a pool using:
pool <- dbPool(
  drv = RSQLite::SQLite(),
  dbname = "mydb.db"
)
Everything works perfectly locally, but when I put it on my server (a DigitalOcean droplet running Shiny Server), the pool expires very quickly, after ~15 seconds, whether or not I am active on the app.
In the logs I see
Error in pool$fetch: This pool is no longer valid. Cannot fetch new objects.
I have tried changing the idleTimeout and minSize parameters when creating the pool, to no avail.
How can I prevent this? Is there a way I can check whether the pool is still valid and if not reconnect to the DB?
Also, it would be good if someone could give some insights on why this could be happening.
Just in case someone else bumps into this problem: the way I solved the issue is to check the valid attribute of the pool connection, so I prepend queries with something like:
if (!conn$valid) {        # pool has expired
  connect_to_db()         # this re-connects to the DB
}
# <do query>
This seems to have solved the issue.

EntityException: The underlying provider failed on Open. Can one server closing a db connection, make another server fail on opening?

I am experiencing database connection errors with an ASP.NET application written in VB, running on three IIS servers. The underlying database is MS Access, which sits on a shared network device. The application uses Entity Framework (code-first) with JetEntityFrameworkProvider.
The application runs stably, but approximately 1 out of 1000 attempts to open the database connection fails with one of the following two errors:
06:33:50 DbContext "Failed to open connection at 2/12/2020 6:33:50 AM +00:00 with error:
Cannot open database ''. It may not be a database that your application recognizes, or the file may be corrupt.
Or
14:04:39 DbContext "Failed to open connection at 2/13/2020 2:04:39 PM +00:00 with error:
Could not use ''; file already in use.
One second later, after a refresh (F5), the error is gone and everything works again.
Details about the environment and the code used:
Connection String
<add name="DbContext" connectionString="Provider=Microsoft.Jet.OLEDB.4.0;Data Source=x:\thedatabase.mdb;Jet OLEDB:Database Password=xx;OLE DB Services=-4;" providerName="JetEntityFrameworkProvider" />
DbContext management
The application uses a public property to access the DbContext. The DbContext is kept in the HttpContext.Current.Items collection for the lifetime of the request and is disposed at its end.
Public Shared ReadOnly Property Instance() As DbContext
    Get
        SyncLock obj
            If Not HttpContext.Current.Items.Contains("DbContext") Then
                HttpContext.Current.Items.Item("DbContext") = New DbContext()
            End If
            Return HttpContext.Current.Items.Item("DbContext")
        End SyncLock
    End Get
End Property
BasePage inits and disposes the DbContext.
Protected Overrides Sub OnInit(e As EventArgs)
    MyBase.OnInit(e)
    DbContext = Data.DbContext.Instance
    ...
End Sub

Protected Overrides Sub OnUnload(e As EventArgs)
    MyBase.OnUnload(e)
    If DbContext IsNot Nothing Then DbContext.Dispose()
End Sub
What I have tried
Many of the questions on SO which address the above error messages deal with not being able to establish a connection to the database at all. That's different from this case: the connection works 99.99% of the time.
Besides that, I have checked:
Permissions: full access is granted on the share where the .mdb (database) and .ldb (lock file) reside.
Network connection: there are no connection issues to the shared device; it's a Gigabit LAN connection.
The maximum of 255 concurrent connections is not reached.
The maximum database size is not exceeded (the db is only 5 MB).
Changed the compile option from “Any CPU” to “x86” as suggested in this MS Dev-Net post
Quote: I was getting the same "Cannot open database ''" error, but completely randomly (it seemed). The MDB file was less than 1Mb, so no issue with a 2Gb limit as mentioned a lot with this error.
It worked 100% on 32 bit versions of windows, but I discovered that the issues were on 64 bit installations.
The app was being compiled as "Any CPU".
I changed the compile option from "Any CPU" to "x86" and the problem has disappeared.
Nothing helped so far.
To gather more information, I attached an NLog logger to the DbContext which writes all database actions and queries to a log file.
Shared Log As Logger = LogManager.GetLogger("DbContext")
Me.Database.Log = Sub(s) Log.Debug(s)
Investigating the logs, I figured out that whenever one of the above errors occurred on one server, another of the servers (3 in total) had closed its db connection at exactly the same time.
Here are two examples which correspond to the errors above:
06:33:50 DbContext "Closed connection at 2/12/2020 6:33:50 AM +00:00
14:04:39 DbContext "Closed connection at 2/13/2020 2:04:39 PM +00:00
Assumption
When all connections of a DbContext have been closed, the corresponding record is removed from the .ldb lock file. When a connection to the db is opened, a record is added to the lock file. When these two events occur at exactly the same time from two different servers, there is a write conflict on the .ldb lock file, which results in one of the errors above.
Question
Can anyone confirm this or prove it wrong? Has anyone experienced this behaviour? Maybe I am missing something else. I'd appreciate your input and experience on this.
If my assumption is true, a solution could be a helper class for database access which catches this error, waits a short time and tries again.
But this feels kind of wrong, so I am also open to suggestions for a "proper" solution.
EDIT: The "proper" solution would be using a DBMS server (as stated in the comments below). I'm aware of this. For now, I have to deal with this design mistake without being responsible for it; I also can't change it in the short run.
I'm writing this as an answer because of space, but it is not really an answer.
It's for sure an OleDb provider issue.
I think it is a sharing issue.
You could try a few things:
Use a newer OleDb provider instead of Microsoft.Jet.OLEDB.4.0 (if you have tried 64-bit you may already have tried another provider, because Jet.OLEDB.4.0 is 32-bit only).
Implement a retry mechanism around new DbContext() (see the sketch after this list).
Reading your tests, this is probably not your case, but I think Dispose does not always work properly on Jet.OLEDB.4.0 connections. I noticed it in tests and solved it by using a different testing engine. Before giving up I used this piece of code:
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced, true);
GC.WaitForPendingFinalizers();
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced, true);
As you can see from this code, these were just attempts; the final solution was changing the testing engine.
If your app is not too busy, you could try to lock the db using a different mechanism (for example a lock file). This is not really different from retrying new DbContext().
In the late '90s I remember an issue related to the disk-sharing OS (I was using Novell NetWare). I don't actually have experience using .mdb files on a network share; you could try moving the .mdb to a folder shared with Windows.
I use Access databases only for tests. If you really need a single-file database, you could try other options: SQLite (you need a library, also written by me, to apply code first: https://www.nuget.org/packages/System.Data.SQLite.EF6.Migrations/) or SQL Server CE.
Use a DBMS server. This is for sure the best solution. As the writer of JetEntityFrameworkProvider, I think single-file databases are great for single-user apps (for these I suggest SQLite), for tests (for tests I think JetEntityFrameworkProvider is great), for transferring data, or for read-only applications. In other cases use a DBMS server. As you know, with EF you can switch from JetEntityFrameworkProvider to SQL Server or MySQL without much effort.
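If you go with the retry idea from the list above, a minimal sketch could look like this (C#, matching the snippet above; it assumes an EF6 code-first context, here called MyDbContext, pointing at the "DbContext" connection string from the question, and the attempt count and delay are only illustrative):

using System;
using System.Data.Entity;   // EF6
using System.Threading;

// Stand-in for the application's code-first context.
public class MyDbContext : DbContext
{
    public MyDbContext() : base("DbContext") { }   // named connection string from web.config
}

public static class ContextFactory
{
    // Retries creating/opening the context when the Jet provider throws one of its
    // transient "file already in use" / "Cannot open database ''" errors.
    public static MyDbContext OpenWithRetry(int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                var ctx = new MyDbContext();
                ctx.Database.Connection.Open();   // force the open so failures surface here
                return ctx;
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                Thread.Sleep(200 * attempt);      // short back-off before the next attempt
            }
        }
    }
}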
You went wrong at the design stage: the MS Access database engine is unfit for ASP.NET sites, and this is explicitly stated in multiple places, e.g. on the official download page under Details:
The Access Database Engine 2016 Redistributable is not intended .... To be used by ... a program called from server-side web application such as ASP.NET
If you really have to work with an Access database, you can use a helper class that retries on these common errors, but I don't recommend it.
The proper solution here is using a different RDBMS which exhibits stateless behavior. I recommend SQL Server Express; it has limitations, but if you exceed those you will be far beyond what Access supports, and it won't cause errors like this.
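If you do move to SQL Server Express, on the EF side the switch is essentially a new connection string and provider, along these lines (the server instance and database name below are illustrative):

<add name="DbContext"
     connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=MyDb;Integrated Security=True"
     providerName="System.Data.SqlClient" />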

Why I am getting Timeout Error in my live site

I am facing a problem with my live site. The issue is a SQL timeout error.
I have tried the steps below to solve the issue, but I couldn't fix it.
Steps taken:
Increased the SqlCommand CommandTimeout (tried 0 and 240)
Increased the connection timeout (tried 0)
Applied raw SQL in code to fetch the data from SQL Server
Kindly share any suggestions you have about this issue.
Thanks
It is really hard to address your issue with so little information provided.
Generally I would recommend executing your query in SQL Server Management Studio and seeing what happens.
It could be either a really long-running query or a locking issue in the database.
Also be aware, if you host your site on IIS, that apart from the SQL Server timeout, the ASP.NET/IIS request timeout also applies (see the config sketch below).
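For reference, the ASP.NET request (execution) timeout is raised in web.config; the value is in seconds, the number below is only an example, and it is enforced only when debug is off:

<system.web>
  <httpRuntime executionTimeout="240" />
</system.web>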
Example:
Table 1, Table 2, Table 3
Issue scenario:
One request reads values from those three tables while a second request is writing to one of them at the same time. Because the writing request has not completed, it holds locks on the table, the reading request is blocked, and you get the timeout error.
Solution:
Use an appropriate transaction isolation level (or row versioning) so that readers are not blocked by writers; a sketch follows below.
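One way to act on that suggestion is a reader that runs under SNAPSHOT isolation so it is not blocked by a concurrent writer. This is only a sketch, not code from the answer: it assumes ALLOW_SNAPSHOT_ISOLATION has been enabled on the database, and the connection string and table name are illustrative.

using System.Data;
using System.Data.SqlClient;

class SnapshotReadExample
{
    static void ReadWithoutBlocking()
    {
        // Illustrative connection string only.
        var connectionString = "Data Source=myDBServerName;Initial Catalog=MyDB;Integrated Security=True";

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // Requires ALLOW_SNAPSHOT_ISOLATION ON for the database.
            using (var tx = conn.BeginTransaction(IsolationLevel.Snapshot))
            using (var cmd = new SqlCommand("SELECT * FROM Table1", conn, tx))
            {
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // ... consume the row ...
                    }
                }
                tx.Commit();
            }
        }
    }
}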
Thanks.

Azure SQL Database sometimes unreachable from Azure Websites

I have an ASP.NET application deployed to Azure Websites connecting to an Azure SQL Database. This has been working fine for the last year, but last weekend I started getting errors connecting to the database, with the following stack trace.
[Win32Exception (0x80004005): Access is denied]
[SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)]
This error comes and goes: it stays for a few hours and then goes away for a few hours. The database is always accessible from my machine.
A few things that I have tried are:
Adding an "allow all" firewall rule (0.0.0.0-255.255.255.255) has no effect.
Changing the connection string to use the database owner as credential has no effect.
What does have an effect is changing the Azure hosting level to something else. This resolves the issue temporarily and the website can access the database for a few more hours.
What could have happened for this error to start showing up? The application hasn't been changed since August.
EDIT:
Azure support found that there was socket and port exhaustion on the instance. The root cause of that is still unknown.
Craig is correct in that you need to implement SQL Azure transient fault handling. You can find instructions here: https://msdn.microsoft.com/en-us/library/hh680899(v=pandp.50).aspx
From the article:
You can instantiate a RetryPolicy object and wrap the calls that you make to SQL Azure using the ExecuteAction method, using the methods shown in the previous topics. However, the block also includes direct support for working with SQL Azure through the ReliableSqlConnection class.
The following code snippet shows an example of how to open a reliable connection to SQL Azure.
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling;
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling.AzureStorage;
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling.SqlAzure;
...

// Get an instance of the RetryManager class.
var retryManager = EnterpriseLibraryContainer.Current.GetInstance<RetryManager>();

// Create a retry policy that uses a default retry strategy from the
// configuration.
var retryPolicy = retryManager.GetDefaultSqlConnectionRetryPolicy();

using (ReliableSqlConnection conn =
    new ReliableSqlConnection(connString, retryPolicy))
{
    // Attempt to open a connection using the retry policy specified
    // when the constructor is invoked.
    conn.Open();

    // ... execute SQL queries against this connection ...
}
The following code snippet shows an example of how to execute a SQL command with retries.
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling;
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling.AzureStorage;
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling.SqlAzure;
using System.Data;
...

using (ReliableSqlConnection conn = new ReliableSqlConnection(connString, retryPolicy))
{
    conn.Open();

    IDbCommand selectCommand = conn.CreateCommand();
    selectCommand.CommandText =
        "UPDATE Application SET [DateUpdated] = getdate()";

    // Execute the above query using a retry-aware ExecuteCommand method which
    // will automatically retry if the query has failed (or connection was
    // dropped).
    int recordsAffected = conn.ExecuteCommand(selectCommand, retryPolicy);
}
The answer from Azure support indicated that the connection issues I was experiencing were due to port/socket exhaustion. This was probably caused by another website on the same hosting plan.
Some answers as to why the symptoms went away when I changed the hosting service level:
Changing the hosting plan helped for a while, since this moved the virtual machine and closed all sockets.
Changing the hosting plan from level B to level S helped, since Azure limits the number of sockets on level B.

Timeout expired error in ASP.NET but not on SQL Server Studio

In my ASP.NET application, I am getting the following error:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
But I can successfully connect to the database server using SQL Server Management Studio, and I can also PING the host where SQL Server is hosted.
What can be wrong here?
Check your connection string in web.config: the connection string you are using via SQL Server Management Studio may be different from the one in web.config.
You can also increase the timeout in web.config, which is usually the better approach.
Also, whenever you get this type of error and everything looks fine in the config, first debug the code.
If the issue is on the code side, you can fix it simply by changing the logic.
If it is on the SQL Server side, capture the parameter values and the stored procedure or query, and run it in SSMS, which gives you a better idea of what is slow.
Increasing the command timeout to 120 fixed the problem for me:
adapter.SelectCommand.CommandTimeout = 120;
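As a side note, this is the command timeout (how long a query may run), which is separate from the connection timeout (how long opening the connection may take, set via Connect Timeout in the connection string). A minimal sketch with illustrative names and values:

using System.Data.SqlClient;

class TimeoutExample
{
    static int CountRows()
    {
        // Connect Timeout controls how long opening the connection may take (seconds).
        var connectionString =
            "Data Source=myDBServerName;Initial Catalog=MyDB;Integrated Security=True;Connect Timeout=30";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM SomeLargeTable", conn))
        {
            cmd.CommandTimeout = 120;   // how long the query itself may run (seconds)
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}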
