DB timeout when generating report from the web - asp.net

I've got a report in an ASP.NET app. When I try to generate it from the browser, it fails with a DB timeout error, but when I execute the exact same query in SQL Server Management Studio it returns the result set within 5 seconds.
The query is written in plain SQL in the code-behind file (no ORMs are used), and its parameters come from the web form, so I know exactly what the generated query will be.
What can be the cause of the problem?

First, use SQL Profiler to attach to the database and see exactly what query is being sent. Use that for other testing.
Second, set your connection timeout to something ridiculous like 300 seconds. Then do the same thing for the command timeout (there's a quick sketch of both settings at the end of this answer).
Third, make sure both your application and your management studio instance are talking to the same database... Preferably with the exact same user rights.
Run again. Then run it again.
It's possible that the database is taking time to do an initial load (hence the first query taking a while) and the query through Management Studio is executing while the database is still "hot", so to speak.
Finally, you say that Management Studio shows results within 5 seconds... Is that 5 seconds for it to start populating the query results window, or 5 seconds for the entire query to finish executing? Those can be radically different times.
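To put numbers on the second point, here is a minimal sketch of where the two timeouts live in ADO.NET; the connection string and the query variable are placeholders, not your actual values:

using System.Data.SqlClient;

// Connect Timeout (in the connection string) covers opening the connection;
// CommandTimeout covers execution of the query itself. Both are in seconds.
var connectionString = "Server=myServer;Database=myDb;Integrated Security=SSPI;Connect Timeout=300";
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(reportSql, connection))   // reportSql = the query captured with Profiler
{
    command.CommandTimeout = 300;   // default is 30 seconds
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        // bind the reader to the report as usual
    }
}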

Related

Report Server Reports Hanging

I'm working on an issue with heavily fragmented indexes on a large production DB. I've pretty much identified the indexes that are heavily fragmented, including those that are not really being used. I plan to rebuild some and remove others. So my next step is to devise a before-and-after timing test.
One of the symptoms of this is SSRS reports taking about an hour to render. I'm new to Reporting Services. I can see that a report is being embedded in the ASPX page using a ReportViewer control with the ServerReport ReportPath and ReportServerUrl properties set. My problem is trying to figure out how to time the display of the report from start to finish in the code-behind. I can write the start time to a file in Page_Load, but I can't figure out how to record the end time... PreRender could just hang, and I'm not sure if this is the only page lifecycle event I can tap into to record this. Should I use a Windows Service, and if so, how would I trigger/record the start and end times that way?
I'd really appreciate some feedback on if this is possible via the display page's code-behind.
Have you tried looking in the Reporting Services execution logs? They contain several timed events such as data retrieval time, render time, processing time, and the actual start and end times. Check ReportServer.dbo.ExecutionLog and ReportServer.dbo.Catalog.
To check the log settings, connect to your SSRS server using SQL Server Management Studio (not the database engine; select Reporting Services in the connection dialogue). Once connected, right-click the server and choose Properties. On the Logging tab you will see the number of days of history to retain. By default this is 60 days.
Assuming that is not zero, you can run a simple query like this to get the report execution details:
SELECT *
FROM ReportServer..ExecutionLog e
JOIN ReportServer..Catalog c
ON e.ReportID = c.ItemID
WHERE c.name = 'myReportName'
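If you'd rather capture those timings from code than from SSMS, here is a rough sketch along the same lines (it assumes the default ExecutionLog view and a report called 'myReportName'; the timing columns are reported in milliseconds):

using System;
using System.Data.SqlClient;

var sql = @"SELECT e.TimeStart, e.TimeEnd, e.TimeDataRetrieval, e.TimeProcessing, e.TimeRendering
            FROM ReportServer..ExecutionLog e
            JOIN ReportServer..Catalog c ON e.ReportID = c.ItemID
            WHERE c.Name = 'myReportName'
            ORDER BY e.TimeStart DESC";

using (var connection = new SqlConnection("Server=myReportServer;Integrated Security=SSPI"))
using (var command = new SqlCommand(sql, connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // retrieval + processing + rendering roughly accounts for TimeEnd - TimeStart
            Console.WriteLine("{0} -> {1}: retrieval {2} ms, processing {3} ms, rendering {4} ms",
                reader["TimeStart"], reader["TimeEnd"],
                reader["TimeDataRetrieval"], reader["TimeProcessing"], reader["TimeRendering"]);
        }
    }
}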

ORACLE/ASP.NET: ORA-2020 - Too many database links... what's causing this?

Here's the scenario...
We have an internal website that is running the latest version of the ODAC (Oracle Client). It opens database connections, runs a stored procedure or packaged method, then disconnects. Connection pooling is turned on, and we are currently under version 11g in both our development and test environments, but under 10gR2 in our production environment. This happens on Production.
A few days ago, a process began firing off an ORA-2020 error. The process is called from a webpage on our internal website. The user simply sets a date, hits a button, and a job is started on another system that is separate from the website. The call itself, however, uses a database link to run a function.
We've scoured the SQL and found that it only uses that one database link. And since these links are on a per-session basis and the user isn't exceeding the default limit of 4, how is it possible that we are getting an ORA-2020 error?
We have run a number of tests to try to push past the default limit of 4. ODAC, from what I recall, runs a commit after each connection, and I can't produce any errors by running 4 DB links and then a piece of SQL with 1 DB link directly after. The only way I can bring up this error is by running a query with 4 DB links, then a function or piece of dynamic SQL with a database link within it. That doesn't match our situation, and this issue is sporadic; it isn't ALWAYS happening.
Questions
Is it possible that connection pooling is allowing User B to use User A's connection after the initial process was run, thus adding to the open links number if User B runs a SQL statement with more database links?
Is this a scenario where we should up our limit past 4? What are the disadvantages of increasing the number?
Do I need to explicitly close open database links before disconnecting from the database? Oracle documentation seems to suggest it should automatically happen, but "on occasion"... doesn't.
Firstly, the simple solution: I'd double-check that in the production database the default limit on open links really is 4.
select *
from v$system_parameter
where name = 'open_links'
Assuming you're not going to get off that lightly:
Is it possible that connection pooling is allowing User B to use User A's connection after the initial process was run, thus adding to the open links number if User B runs a SQL statement with more database links?
You say that you explicitly close the session, which, according to the documentation, should mean that all links associated with that session are closed. Other than that I confess complete ignorance on this point.
Is this a scenario where we should up our limit past 4? What are the disadvantages of increasing the number?
There aren't any disadvantages that I can think of. Tom Kyte suggests, albeit a long time ago, that each open database link uses 500k of PGA memory. If you don't have much to spare then this will obviously cause a problem, but it should be more than fine for most situations.
There are, however, unintended consequences: imagine that you up this number to 100. Somebody codes something that continually opens links and draws a lot of data through all of them (select * from my_massive_table or similar). Instead of 4 sessions doing this you have 100, all attempting to transfer hundreds of gigabytes simultaneously. Your network dies under the strain...
There's probably more but you get the picture.
Do I need to explicitly close open database links before disconnecting from the database? Oracle documentation seems to suggest it should automatically happen, but "on occasion"... doesn't.
As you've noted the best answer is "probably not", which isn't much help. You don't mention exactly how you're terminating the session but if you're killing it rather than closing gracefully then definitely.
Using a database link spawns a child process on the remote server. Because your server is no longer in absolute charge of this process there's a myriad of things that could cause it to become orphaned or otherwise not close on termination of the parent process. By no means does this happen the whole time but it can and does.
I would do two things.
In your process, if an exception is encountered, e-mail the results of the following query to yourself.
select *
from v$dblink
At a minimum you will then know what database links are open in the session, which gives you some way of tracing them.
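A rough sketch of that idea with ODP.NET (Oracle.DataAccess.Client); `connection` is assumed to be the OracleConnection the job is already using, and the actual e-mail sending is left out:

using System.Text;
using Oracle.DataAccess.Client;

try
{
    // ... the call that goes over the database link ...
}
catch (OracleException)
{
    var report = new StringBuilder();
    using (var command = new OracleCommand("select db_link, logged_on, in_transaction, open_cursors from v$dblink", connection))
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            report.AppendFormat("{0}: logged_on={1}, in_transaction={2}, open_cursors={3}",
                reader["DB_LINK"], reader["LOGGED_ON"], reader["IN_TRANSACTION"], reader["OPEN_CURSORS"]);
            report.AppendLine();
        }
    }
    // e-mail report.ToString() to yourself here, then rethrow
    throw;
}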
Follow the documentation's advice; specifically the following:
"You may have occasion to close the link manually. For example, close
links when:
The network connection established by a link is used infrequently in an application.
The user session must be terminated."
The first seems to exactly fit your situation. Unless your process is time-sensitive, which doesn't seem to be the case, then what have you got to lose? The syntax is:
alter session close database link <linkname>
We ended up increasing the link amount, but we never did find the root cause.

SQL connection timeout

We have been facing weird connection timeouts on one of our websites.
Our environment is composed of an IIS 7 web server (running on Windows Server 2008 R2 Standard Edition) and an SQL Server 2008 database server.
When debugging the website functionality that provokes the timeout, we notice that the connection itself takes milliseconds to complete, but the SqlCommand, which invokes a stored procedure on the database, hangs for several minutes during execution, then raises the timeout exception.
On the other hand, when we run the stored procedure directly on the database, it takes only 2 seconds to correctly finish execution.
We already tried the following:
Modified SqlCommand timeout on the website code
Modified execution timeout on the web.config file
Modified sessionState timeout on the web.config file
Modified authorization cookie timeout on the web.config file
Modified the connection timeout on the website properties on IIS
Modified the application pool shutdown time limit on IIS
Checked the application pool idle timeout on IIS
Checked the execution timeout on the SQL Server properties (it's set to 0, unlimited)
Tested the stored procedure directly on the database with other parameters
We appreciate any help.
Nirav
I've had this same issue with a stored procedure that was a search feature for the users. I tried everything, including ARITHABORT etc. The SP joined many tables, as the users could search on anything. Many of the parameters for the SP were optional, meaning they had a default value of NULL in the SP. Nothing worked.
I "fixed" it by making sure my ADO.NET code only added parameters where the user selected a value. The SP went from many minutes to seconds in execution time. I'm assuming that SQL Server handled the execution plan better when only parameters with actual values were passed to the SP.
Note that this was for SQL Server 2000.
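For what it's worth, here is a sketch of that kind of ADO.NET change; the proc name, parameter names and form controls are invented, and `connection` is assumed to be open already:

using System.Data;
using System.Data.SqlClient;

using (var command = new SqlCommand("spSearchCustomers", connection))
{
    command.CommandType = CommandType.StoredProcedure;

    // Only add parameters the user actually filled in; anything omitted
    // falls back to its NULL default inside the stored procedure.
    if (!string.IsNullOrEmpty(txtLastName.Text))
        command.Parameters.Add("@LastName", SqlDbType.VarChar, 50).Value = txtLastName.Text;
    if (ddlCity.SelectedValue != "")
        command.Parameters.Add("@CityId", SqlDbType.Int).Value = int.Parse(ddlCity.SelectedValue);

    using (var reader = command.ExecuteReader())
    {
        // bind the results to the grid as before
    }
}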
A few years ago I had a similar problem when migrating an app from SQL2000 to SQL2008.
I added OPTION (RECOMPILE) to the end of all the stored procs in the database that was having problems. In my case it had to do with parameters that were very different between calls to the stored proc. Forcing the proc to recompile will force SQL to come up with a new execution plan instead of trying to use a cached version that may be sub-optimal for the new params.
And in case you haven't done it already, check your indexes. Nothing can kill db performance like lack of a badly needed index. Here is a good link (http://sqlfool.com/2009/04/a-look-at-missing-indexes/) on a query that will display missing indexes.
Super-super late suggestion, but it might come in handy for others. A typical issue I've seen, rather applicable to Java, is the following:
You have a query which takes a string as a parameter. That string is a search criterion on a varchar(N) column in the database. However, you submit the string parameter in the query as Unicode (nvarchar(N)). This results in a full-table scan and conversion of every single field value to Unicode for proper comparison, to avoid potential data loss (if SQL Server converted the input parameter to non-Unicode, it might lose information).
Simple test: run the query twice (for the sake of simplicity, I'm assuming it's an SP):
exec spWhatever 'input'
exec spWhatever N'input'
See how they behave. Also, you may want to take a look at the Recent Expensive Queries section on the Activity Monitor in SSMS and ask for the execution plan, to clarify the situation.
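On the ADO.NET side, the usual way this happens is AddWithValue (or adding the parameter without a declared type), which maps a .NET string to nvarchar. A sketch of declaring the type explicitly so it matches the varchar column (names, lengths, `searchText` and `connection` are made up for illustration):

using System.Data;
using System.Data.SqlClient;

using (var command = new SqlCommand("spWhatever", connection))
{
    command.CommandType = CommandType.StoredProcedure;

    // Declared as varchar(100) to match the column; AddWithValue would have
    // sent nvarchar and forced the column-side conversion described above.
    command.Parameters.Add("@input", SqlDbType.VarChar, 100).Value = searchText;

    using (var reader = command.ExecuteReader())
    {
        // ...
    }
}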
Cheers,
Erik

LINQ to SQL Stored Procedure. Exception says it times out but it is not timing out

Website using .NET Framework v3.5, SQL Server 2008, written in C#
I have a stored procedure which I have added to my DBML by dragging it across from the server explorer.
In its properties it returns an auto-generated type.
The procedure takes < 1 second to run from within SQL Mgmt Studio for all inputs.
However, from the code, for one particular input (which takes < 1 second in Mgmt Studio), it hangs and then throws:
System.Data.SqlClient.SqlException: Timeout expired.
This didn't always happen for this one input! It used to also work fine when called from the code. The last time it didn't work I deleted and re-added the same stored procedure to the DBML. This "fixed" it, and that input ran fine and in the same time as all the others. However this is not an adequate fix! It has happened again and I can't keep deleting and re-adding as required.
I made no changes to the data that's being returned during the point at which it was "fixed", so I can't think what the problem could be. Any help on this would be much appreciated!
Exception says it times out but it is not timing out
If it says it's timing out, it's timing out. The only question is "why"?
Run a SQL Server Profiler trace against your database and see what query is actually going to the server. It's possible that another query is being issued too. It's possible there is another transaction interfering in your production scenario.
It turns out that this is parameter sniffing - this is explained in another post: Executing stored proc from DotNet takes very long but in SSMS it is immediate
Also, be sure that the stored procedure is not being held up inside of a transaction, waiting for another process to complete. I just ran across this with a Linq to Sql stored procedure being called multiple times within a transaction. It gave me a timeout expired error and I just realized it was waiting for a previous call to complete, and thus timing out.
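If you suspect that kind of blocking, one way to confirm it while the call is hanging is to look for blocked requests; here is a sketch of that check against the SQL Server 2005+ blocking DMVs (you could equally run the same SQL directly in SSMS; the connection string is a placeholder):

using System;
using System.Data.SqlClient;

var sql = @"SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, t.text
            FROM sys.dm_exec_requests r
            CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
            WHERE r.blocking_session_id <> 0";

using (var connection = new SqlConnection("Server=myServer;Database=myDb;Integrated Security=SSPI"))
using (var command = new SqlCommand(sql, connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine("session {0} blocked by {1} ({2}, {3} ms): {4}",
                reader["session_id"], reader["blocking_session_id"],
                reader["wait_type"], reader["wait_time"], reader["text"]);
        }
    }
}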

Why would bulk Inserts cause an ASP.net application to become Unresponsive?

Setup: ASP.net 3.5, Linq-to-Sql. Separate Web and DB servers (each 8-core, 8GB RAM). 4 databases. I am running an insert operation with a few million records into DB4 (using Linq-to-Sql for now, though I might switch to SqlBulkCopy). Logging shows that records are being put in consistently at a rate of 600-700 per second (I am running DataContext.SubmitChanges() every 1000 records to keep the transaction size down). The insert is run during one Http Request (timeout is set pretty high).
The problem is that while this insert operation is running, the web application becomes completely unresponsive (both within different browser windows on my machine, and on other browsers in remote locations).
This insert operation is touching one table in DB4. Most pages will only touch DB1 (so I don't think that it is a locking issue - I also checked in through Management Studio, and no objects are being locked unnecessarily). I have checked out performance stats on both the Web and DB servers, and while they may spike from time to time, throughout the inserts they stay well within the "green".
Any idea about what can be causing the app to become unresponsive or suggestions about things that I should do in order to narrow down the issue?
Responses to suggestions:
Suggestion that inserts are using all DB connections: the inserts are being done off of a different connection string (and DB) than what other pages in the app use. Also, I checked in SSMS, and there is just one connection open for DB4, and one open for DB1 (so it doesn't look like it is running out of connections).
Suggestion that inserts are maxing out CPU on web server: this is the only application on the server (and less than 5 users at any one time). Performance monitor shows CPU staying in between 12%-20%. Memory is hardly being touched.
My first guess would be that you are using up available connections to the database with the insert operations that you are doing and the web applications are waiting to get a connection to the database.
You have a few options.
Look in SSMS and see what you have for open and active connections under regular load and while doing the inserts, and see whether that is a problem.
Use a profiling tool such as ANTS profiler to see what is going on with the web application at the time of the slow down, it might help pinpoint the issue.
You could also try manually executing the queries that the web application is using on the SQL Server, and see if you notice similar behavior.
The other option, a bit less likely, is that the web application doing the bulk insert is taking all of the CPU time from the other web applications on the server, preventing use. If you haven't done so already, split the application out into its own application pool so you can monitor its load.
I don't know about Linq-to-Sql, but NHibernate specifically states that using it for bulk inserts is a bad idea. I have found array binding in ADO.NET to be very fast. Here is an article explaining how to do it with Oracle, but it should work with other providers too.
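For reference, a sketch of what Oracle array binding looks like with ODP.NET's ArrayBindCount (table, columns and `connection` are placeholders); one round trip executes the insert once per array element:

using Oracle.DataAccess.Client;

var ids = new[] { 1, 2, 3 };
var names = new[] { "a", "b", "c" };

using (var command = new OracleCommand("insert into my_table (id, name) values (:id, :name)", connection))
{
    command.ArrayBindCount = ids.Length;   // number of rows inserted in this round trip
    command.Parameters.Add(new OracleParameter("id", OracleDbType.Int32) { Value = ids });
    command.Parameters.Add(new OracleParameter("name", OracleDbType.Varchar2) { Value = names });
    command.ExecuteNonQuery();
}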
It seems like a bad idea to do long operations in a web app (for example, your IIS server can restart your application for next to no reason). Split your application into a Web App and a Service App, do the long operations in the Service App, and communicate between them via WCF and named pipes.
Eventual solution: I changed the data insertion from LINQ to SQL to SqlBulkCopy via a DataTable. The first time I did this, I got an OutOfMemory exception when trying to build a DataTable with 2 million rows in memory. So now I am adding 50,000 rows at a time, loading them into the DB with SqlBulkCopy (batch size: 10,000), and then clearing the DataTable's Rows collection. I am now getting 2.1 million rows in 108 seconds (about 20,000 per second; the rate last night averaged 200 per second with L2S). With the increased data insertion performance, the app-wide unresponsiveness has gone away.
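For anyone doing the same thing, a rough sketch of that pattern (destination table, column layout, `connectionString` and the row source are placeholders):

using System.Data;
using System.Data.SqlClient;

var table = new DataTable();
table.Columns.Add("Id", typeof(int));
table.Columns.Add("Value", typeof(string));

using (var bulkCopy = new SqlBulkCopy(connectionString))
{
    bulkCopy.DestinationTableName = "dbo.MyTargetTable";
    bulkCopy.BatchSize = 10000;

    foreach (var record in sourceRecords)       // whatever produces the ~2 million rows
    {
        table.Rows.Add(record.Id, record.Value);
        if (table.Rows.Count == 50000)
        {
            bulkCopy.WriteToServer(table);      // flush 50,000 rows at a time
            table.Rows.Clear();                 // keep the DataTable from growing unbounded
        }
    }
    if (table.Rows.Count > 0)
        bulkCopy.WriteToServer(table);          // final partial batch
}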
It's possible that you have a lock statement somewhere in your web application that is blocking some important resource during the whole time you are loading your data into the DB.
