There are a couple of reports which stopped working, and I'm getting the error "An existing connection was forcibly closed by the remote host." When I try to look at the reports they take forever to run, and in the event log there were a couple of timeout errors... so I'm guessing I'm getting that error due to a timeout.
Now the problem is figuring out why the reports are running so slowly. I've already changed the proc to prevent parameter sniffing... but basically:
Run the proc through SSMS: 1:42
Run the report through the Report Server: 6:45
Run the report through the ASP.NET ReportViewer control: 13:00
So the real mystery here for me is why it's twice as slow through the ReportViewer control as through SSRS itself? (I can deal with the report being slower than the proc later...)
EDIT:
Ran some profiling as suggested in the comments. The stored procedure runs at normal speed (55 seconds) when called from the report itself. So the problem is either the SSRS server, the ReportViewer control, both... or the network between the ReportViewer and the SSRS server.
Also, if I run the report on my desktop PC (over VPN) in Visual Studio, it works just fine.
Also, there are some other, shorter reports that are running just fine. I'm wondering if it's just that they pull so much less data, though.
One last thing I've noticed: the query seems to run multiple times when the report is executed through the ReportViewer control.
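For reference, here's roughly how the control is wired up (the server URL, report path, and parameter below are placeholders, not our real ones); one thing I'm going to check is whether this block re-runs on postbacks, since re-applying the path and parameters is one way to force another execution:

using System;
using Microsoft.Reporting.WebForms;

public partial class ReportPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Only configure the report on the first request; re-applying the path
        // and parameters on every postback makes the server process the report again.
        if (!IsPostBack)
        {
            ReportViewer1.ProcessingMode = ProcessingMode.Remote;
            ReportViewer1.ServerReport.ReportServerUrl = new Uri("http://ourserver/reportserver");
            ReportViewer1.ServerReport.ReportPath = "/Reports/SlowReport";
            ReportViewer1.ServerReport.SetParameters(
                new[] { new ReportParameter("StartDate", DateTime.Today.ToShortDateString()) });
        }
    }
}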
EDIT Again:
Looked at the ExecutionLog tables, and the time is definitely going to rendering the report; the time spent retrieving the data is pretty consistent. Rendering also takes 6 minutes longer on the production server than on the test server (even when running the query against the production database), so it's definitely something in Reporting Services.
It's a shot in the dark, but whenever we run into this problem, one of the first things we do is drop the statistics on the tables involved and let the DBMS recreate them as needed. We have also noticed that WITH RECOMPILE does not seem to help.
Related
In a hybrid ASP.NET web application (framework 4.5.1) using LINQ to SQL (not Entity Framework), I'm getting the exception
"An unhandled exception of type 'System.StackOverflowException' occurred in System.Data.Linq.dll"
on any call to DataContext.SubmitChanges().
Every call to SubmitChanges() causes the error; it does not matter which specific entity is being altered. The error is thrown immediately (unlike most StackOverflowExceptions, which normally take a few seconds to occur while the errant code overflows the stack).
The ASP.NET web application is running on localhost in IIS Express using Visual Studio 2013. The database is SQL Server 2005.
My question is: how does one debug a StackOverflowException in this environment? Right now the above error message is all I get.
The Event Viewer notes that the browser crashed (it happens in both IE 11 and Chrome) but says nothing about the LINQ to SQL exception.
The SQL Server process monitor does not register any database call.
I have a log hooked up to my DataContext but it records nothing.
It appears the stack overflow is happening inside System.Data.Linq.dll before any database call occurs and before anything can be logged.
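For what it's worth, the hookup is just the standard LINQ to SQL TextWriter property; a minimal version of what I have (the context name is a placeholder):

using System;
using System.IO;

// "AppDataContext" stands in for our generated LINQ to SQL context.
var log = new StringWriter();
using (var db = new AppDataContext())
{
    db.Log = log;            // DataContext.Log accepts any TextWriter
    // ...modify an entity here...
    db.SubmitChanges();      // throws the StackOverflowException first
}
Console.Write(log);          // never reached; the writer stays empty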
This suddenly started happening several hours ago, after a Windows update ran and the machine rebooted. That might be a coincidence.
Something else extremely odd: we have four developers in our shop, all using Visual Studio 2013. Two of us suddenly began having this problem, and two of us never did. We're all running identical code and hitting the same database. The two of us having the problem rebooted; the problem disappeared on one machine but is still occurring on mine.
In addition to rebooting, I've deleted the project from my machine and pulled it down from source control so that it is identical to what my three co-workers have, deleted all temporary internet files on my machine, and deleted all of my AppData\Local\Temp files for my login.
Is there any way to debug this issue?
Clip of the call stack when the exception occurs (the calls to VisitExpression etc. repeat many dozens of times until it ends).
The unsatisfying "answer" in this case was to delete the *.dbml file and re-create it. That fixed the stack overflow error.
My comment in reply to @GertArnold above was not accurate. Only one DataContext was throwing the stack overflow exception. It was throwing for every entity in that DataContext, but other DataContexts in the application were working properly.
This particular *.dbml file had been growing over the years to gargantuan size. While re-creating it I was careful to add only the database objects that are actually referenced, which resulted in a much smaller *.dbml file; that alone might have fixed the problem.
Thanks a lot Tom for the info!
Just in case other people hit the same problem, here is some extra info from my case. I got a very similar issue after my PC received a batch of Windows updates yesterday, covering the Windows 10 OS, VS2013/VS2015, etc. I primarily use VS2013. Some differences from Tom's case:
it only pops up when updating one particular entity; other entities in the same DataContext work fine
it only affects my ASP.NET Web API project; console applications are fine, even though all the app projects reference the same unit-of-work data layer project (where the .dbml file sits)
replacing the .dbml file didn't work for me; I finally solved it by opening the solution in VS2015, debugging, closing VS2015, and reopening the solution in VS2013, at which point the problem just disappeared
I have an XtraReport with 62,976 records and 71 columns. I tried to load this report in a report viewer control and got an "Out of Memory" exception after several minutes. Is there any way to load this report in the viewer? I am using DevExpress XtraReports.
My code consists of a dataset that is filled with these 62,976 records. I do some record manipulation using a foreach over all those records, and the output of that foreach is fed to the report viewer. I have tried debugging, and the debugger passes all the code lines without failure. Finally I see the report viewer with the loading image spinning (which implies a large amount of data is being processed), and after around 60 seconds the exception is thrown.
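The shape of the code is roughly this (the query, table, and column names are placeholders, not my real ones):

using System.Data;
using System.Data.SqlClient;

// Rough shape of the failing code; connectionString and all names are placeholders.
var ds = new DataSet();
using (var adapter = new SqlDataAdapter("SELECT * FROM BigResultSet", connectionString))
{
    adapter.Fill(ds, "Results");                    // all 62,976 rows x 71 columns buffered here
}

foreach (DataRow row in ds.Tables["Results"].Rows)  // the per-record manipulation pass
{
    row["Total"] = (decimal)row["Qty"] * (decimal)row["UnitPrice"];
}

var report = new MyReport();                        // hypothetical XtraReport subclass
report.DataSource = ds.Tables["Results"];           // the whole table is handed to the report,
                                                    // and the OOM surfaces while it loads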
Things I've tried:
Clearing the temp folder
Restarting visual studio
Restarting my machine
Some of the DevExpress report controls seem to have a lot of memory leak issues. We had an issue with the XtraRichEditControl and had to remove it from the project entirely. However, around 63,000 records is a huge number depending on your system configuration; reports do consume a lot of resources.
You can try using a memory profiler to find out which class/object is using the most memory. We used ANTS Memory Profiler.
The other option is to use WinDbg with SOS.
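For the WinDbg route, the usual first steps look something like this (the SOS module name depends on your runtime version, and the address is whichever object you want to chase):

$$ Load SOS from the runtime already in the process (.NET 2.0/3.5; use clr on 4.x)
.loadby sos mscorwks
$$ Managed heap usage grouped by type, biggest consumers listed last
!dumpheap -stat
$$ Find out what is keeping a suspect object alive
!gcroot <object address>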
I have been given an ad hoc reporting tool from another individual that has successfully deployed it to the field. He uses Web Logic servers and an Oracle database.
I tried to deploy the same application in my local environment (WAS 7 and Oracle). The first report runs flawlessly. However, when I run the second (or third or fourth) report, I get a very strange error: the second report is appended to the first report.
There is nothing in the code to account for this. This problem can be temporarily solved by stopping and starting the servers every time a report is run (obviously not a real solution). I think this has something to do with data sources and cached information. I then took a step back and tried to deploy it to a Tomcat server. It works perfectly, just like it does in the field. So my question is: are there any known issues between WAS 7 and Oracle 11g that could be causing this kind of problem? Any information would be very helpful.
Please ask about any specifics you may want to know and I will do my best to provide that information.
Thank you for your time.
EDIT: For anyone else looking into this, the problem was due to an incompatibility between the proprietary Oracle calls and WebSphere. Once the application was edited to use only standard JDBC calls, everything worked perfectly. Thanks.
This ended up being an incompatibility between proprietary Oracle calls and WebSphere. It was fixed by changing all of the proprietary calls to standard JDBC calls.
We have been working on our application for about a year now, and today we performed a manual stress test with about 70 users. Our SQL Server and WinForms application ran smoothly; however, once the web application hit around 20 users, the server started acting strangely.
One error we received multiple times occurred when a stored procedure executed and loaded a DataTable: it would report "Column '{dataColumn}' does not belong to table {dataTable}". The odd thing was that after you received the error, you could refresh the page, the error would go away, and the page would work correctly.
One of our questions is whether this could be caused by having IIS running on a multi-processor server. If so, is there a server setting or a code modification that can resolve this error?
It appears that we may have resolved this error by removing the Shared/Static access modifier from the functions that use database functionality. Will follow up after more testing.
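To illustrate why the Shared/Static version is suspect under concurrent load, here is a stripped-down sketch of the kind of pattern we removed (all names are made up):

using System.Data;
using System.Data.SqlClient;

// Before: one static field shared by every concurrent request (not thread-safe).
public static class ReportData
{
    static DataTable shared;

    public static DataTable Load(string connStr)
    {
        shared = new DataTable();
        using (var da = new SqlDataAdapter("GetReportData", connStr))
            da.Fill(shared);                  // request B can refill this field while
        return shared;                        // request A is still reading its columns
    }
}

// After: per-call state, so concurrent requests cannot see each other's tables.
public class ReportDataSafe
{
    public DataTable Load(string connStr)
    {
        var table = new DataTable();
        using (var da = new SqlDataAdapter("GetReportData", connStr))
            da.Fill(table);
        return table;
    }
}

With the static field, two simultaneous requests share one DataTable, so one request can read column metadata from a table the other has just replaced, which matches the intermittent "Column does not belong to table" error.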
I have deployed an application written in ASP.NET 2.0 into production and it's experiencing some latency issues. Pages are taking about 4-5 seconds to load, and GridView refreshes take around the same time.
The app runs fine on the development box. I did the following investigation on the server:
Checked the available memory... 80% used.
Checked the processor... 1%.
Checked disk I/O from perfmon... less than 15%.
The server config is:
Windows Server 2003 SP2
Dual 2.0 GHz CPUs
2 GB RAM
Running SQL Server 2005 and IIS only
Is there anything else I can troubleshoot? I also checked the event log for errors; it's clean.
EDITED ~ The only difference I've just picked up is that on the DEV box I am using IE7 while the clients are using IE6 - could this be an issue?
UPDATE ~ I updated all clients to IE8 and noticed a 30% improvement in performance. I finally found out I had left debug=true in the web.config file. Setting that to false got the app back to stable performance... I still can't believe I did that.
First thing I would do is enable tracing (see: https://web.archive.org/web/20210324184141/http://www.4guysfromrolla.com/webtech/081501-1.shtml).
Then add trace points to your page generation code to give you an idea of how long each part of the page build takes:
System.Diagnostics.Trace.Write("Starting Page init", "TraceCheck");
// init page
System.Diagnostics.Trace.Write("End Page init", "TraceCheck");

System.Diagnostics.Trace.Write("Starting Data Fetch", "TraceCheck");
// get data
System.Diagnostics.Trace.Write("End Data Fetch", "TraceCheck");
// ...and so on for each stage of the page build
This way you can see exactly how long each stage takes and then target that area.
Double check that your application is not running in debug mode: in your web.config file, check that the debug attribute under system.web/compilation is set to false.
Besides making the application run slower and use more system memory, you will also experience slow page loading, since nothing is cached in debug mode.
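For reference, the attribute lives here (minimal fragment, everything else omitted):

<configuration>
  <system.web>
    <!-- debug="true" disables batch compilation and caching; never leave it on in production -->
    <compilation debug="false" />
  </system.web>
</configuration>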
Also check your page size. A developer friend of mine once loaded an entire table into ViewState; a 12-megabyte page will slip by when developing on your local machine but becomes immediately noticeable in production.
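A quick way to rule that out is to keep heavy data controls out of ViewState and re-bind on each request; a minimal sketch, with the control and fetch method as placeholders:

// Goes in Page_Load or wherever you currently bind the grid.
GridView1.EnableViewState = false;   // stop serializing every row into the page
GridView1.DataSource = GetData();    // hypothetical fetch; re-query and re-bind per request
GridView1.DataBind();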
Are you running against the same SQL Server as in your tests or a different one?
In order to find out where the time's coming from you could add some trace statements to your page load, and then load the page with tracing turned on. That might help point to the problem.
Also, what are the specs of your development box? The same?
Depending on the version of Visual Studio you have, the Team Developer edition has a Performance Wizard you might want to investigate.
Also, if you use IE 8, it has a profiler that will let you see how long the site takes to load in the browser itself. One of the first things to determine is whether the time is being spent client side or server side.
If client side, start looking at what JavaScript you have and optimize it or get rid of it.
If server side, you need to look at all of the performance counters (perfmon). For example, we had an app that crawled on the production servers due to a tremendous amount of JIT compilation going on.
You also need to look at the communication between the web and database servers. How long are queries taking? Are the boxes thrashing the disk drives? And so on.