Beginner: ASP.NET page HTML output takes a long time - asp.net

The web site I have run for a long time sometimes has speed issues, but after we clean up the MSSQL data it works fine again.
This time that didn't help: we always get a Timeout error, and IIS drives the CPU very high.
We took out some features and the site came back, but it runs slowly, though without errors.
For example, when we do a search with fewer than 10 results, the page output is really fast.
When we have more than 200 results, the page is very slow, taking about 15 to 20 seconds to output the whole page.
I know that outputting more data naturally takes more time, but we used to return more than 500 results and the output was still fast.
Do you know where I should look to solve this speed problem?

You need to look at the code to see what is executed when displaying those results. Implement some logging, or step through the execution of a search with a debugger.
If you have a source control system, now is the time to review what changes were made between the fast code and the now-slow code.
10 results could be taking 1 second to display, which is barely tolerable, but as you say, 200 results takes 20 seconds. That scaling is far worse than linear, so the problem is some bad code somewhere; I suspect someone has made a code change. A crude way to confirm where the time goes is to time each stage yourself, as in the sketch below.
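A minimal sketch, assuming hypothetical RunSearch, BindResults, and Log stand-ins for your own search, rendering, and logging code:

using System.Diagnostics;

// Time each stage separately so you can see which one blows up with
// the result count. All three called names here are placeholders.
var sw = Stopwatch.StartNew();
var results = RunSearch(query);                  // stage 1: the database query
Log("Query: " + sw.ElapsedMilliseconds + " ms");

sw.Restart();
BindResults(results);                            // stage 2: binding/rendering
Log("Render: " + sw.ElapsedMilliseconds + " ms");

If the query time is flat but the render time explodes at 200 rows, the database is off the hook and the problem is in the page code.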

I would start by breaking the issue down, for example into SQL Server time and IIS time. You can separate different parts of the code and measure their execution times.
SQL Server Profiler is a good tool to start with, and for ASP.NET you can start with simple page tracing, for example:
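With tracing enabled for the page (Trace="true" in the @ Page directive, or a <trace> element in web.config), you can drop markers into the code-behind and read the timings on the page or via trace.axd. A sketch, assuming a hypothetical RunSearch call and a results grid in a Web Forms code-behind:

// Each Trace.Write shows up in the trace output with a timestamp, so the
// gaps between markers tell you where the request time actually goes.
protected void Page_Load(object sender, EventArgs e)
{
    Trace.Write("Search", "Starting query");
    var results = RunSearch(Request.QueryString["q"]);  // hypothetical search call
    Trace.Write("Search", "Query done; binding " + results.Count + " rows");

    resultsGrid.DataSource = results;
    resultsGrid.DataBind();
    Trace.Write("Search", "DataBind done");
}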
Here is some more info about testing and performance.

Related

Performance issue with JavaFX: simultaneous updates of TextAreas in multiple tabs

I'm relatively new to JavaFX and have written a small applet which launches a number of (typically between 3 and 10) sub-processes. Each process has a dedicated tab displaying current status and a large TextArea to which the process output is appended. For simplicity all tabs are generated on startup, and each output line is appended with:
javafx.application.Platform.runLater(() -> logTextArea.appendText(line))
The applet works fine when the workload on the sub-processes is low to moderate (not many logs), but starts to freeze when the sub-processes are heavily used and generate a decent amount of logging output (a few hundred lines per second in total).
I looked into binding the TextArea to the output, but my understanding is it effectively calls the Platform.runLater() method so there will still be hundreds of calls to JavaFX application thread per second.
Batching logging outputs isn't an ideal solution either because I'd like to keep the displayed log as real-time as possible.
The only solution which I think might solve the problem seems to be dynamic loading of individual tabs. This would definitely prevent unnecessary calls to update logging textareas that aren't currently visible, but before I go ahead to make the changes, I'd like to get some helpful advice from you here. Thanks!
Thanks for all your suggestions. I finally got around to implementing a fix today.
The issue was fixed by using a buffer coupled with a secondary check for time elapsed (flush at a maximum of 20 lines or 100 ms, whichever comes first).
In addition, I also implemented rolling output to limit the total process output to 1,000 lines.
Thanks again for your invaluable contribution!
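For anyone curious what that pattern looks like, here is a minimal sketch, written in C# to match the rest of this page (in the JavaFX version the flushToUi callback would wrap Platform.runLater): producers call Append() from any thread, and the UI receives at most one batched update per 20 lines or per 100 ms.

using System;
using System.Collections.Generic;
using System.Threading;

// Buffered appends: many producer calls collapse into few UI updates.
class BufferedLog
{
    private readonly List<string> buffer = new List<string>();
    private readonly object gate = new object();
    private readonly Action<string> flushToUi;
    private readonly Timer timer;

    public BufferedLog(Action<string> flushToUi)
    {
        this.flushToUi = flushToUi;
        // Time-lapse check: flush whatever has accumulated every 100 ms.
        timer = new Timer(_ => Flush(), null, 100, 100);
    }

    public void Append(string line)
    {
        lock (gate)
        {
            buffer.Add(line);
            if (buffer.Count < 20) return;   // batch until 20 lines pile up
        }
        Flush();                             // size check tripped: flush now
    }

    private void Flush()
    {
        string chunk;
        lock (gate)
        {
            if (buffer.Count == 0) return;
            chunk = string.Join(Environment.NewLine, buffer) + Environment.NewLine;
            buffer.Clear();
        }
        flushToUi(chunk);                    // one UI call per batch, not per line
    }
}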

MiniProfiler not logging all steps - only with active breakpoints

When using MiniProfiler to hunt down a performance issue, I encountered a situation where MiniProfiler would log only a few of the calls to MiniProfiler.Step().
This is the code:
The breakpoints are set to only count the number of hits (313 per run); they are not interrupting the execution. Notice that they are deactivated in the screenshot above. After running the application, I get a very incomplete log from MiniProfiler which, from run to run, has a differing number of entries, usually 2 to 5.
However, when I activate the breakpoints, the log is complete. Remember that the breakpoints still do not interrupt the execution.
Is this a bug in the MiniProfiler?
Click show trivial - I suspect the ones you're not seeing are "trivial". When you hit the breakpoints, you make them appear not-so-trivial by increasing their execution time.
EDIT:
... as I was poking around their code I stumbled on this
public bool IsTrivial...
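For reference, the body is essentially the following. This is paraphrased from memory of the MiniProfiler source, so treat the exact setting name as an assumption:

// A timing counts as "trivial" when its duration falls below a configurable
// threshold; trivial rows are hidden in the UI until "show trivial" is clicked.
public bool IsTrivial
{
    get { return DurationMilliseconds <= MiniProfiler.Settings.TrivialDurationThresholdMilliseconds; }
}

That explains the behavior: hitting the (non-interrupting) breakpoints adds just enough overhead to push each step's duration over the trivial threshold, so the steps appear in the log.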

"random" kernel crash after running for minutes.... HEP!? -- [same question posted on Khronos]

I have a thoroughly complex kernel processing audio input data. It will run for a couple of minutes, 60 times a second, and then hang. That's on the GPU; on the CPU it will run for hours. The input data are constantly changing, but each variable is always within prescribed ranges. I have inserted test code before uploading the inputs to the kernel each frame; in this test code, I can force these inputs to be well below their valid input range, but it will still eventually crash. (Say the valid range for a particular input is 0->400; I can force it to 0->1 and it will STILL eventually crash. I can force it to be below 0.1 and it will still ultimately bite the dust.) However, if I force the input variables to zero, the GPU will happily dance for hours. Of course, that input-free dance is not so particularly interesting.
I'm at a loss so far, though I have clues. I can make it crash much faster than 2 minutes if an input variable is high in its approved range. I can make it crash in less than 10 seconds under the right circumstances. BUT, I can't seem to _back_off_of_ those certain circumstances such that they go away. As said above, I can force the input vars into ridiculously small portions of their valid range, and the kernel (let's call him Harlan Sanders) will eventually go belly-up. BUT, if they're forced to actual zero, no problems puppy, we can run all day long.
To repeat, I'm a bit at a loss - although I have things that look like clues, I have not yet figured out what they are hinting at, though I've been trying for a few days. Frankly, I do not expect to find a real solution by asking here; whenever I stumble over a problem in opencl it seems that my fate is to be the first to articulate that particular problem. I guess this is part of the fun of being in on a technology during its infancy!!!!!!!!!! BUT, I want to do some serious, sustainable work with this "baby" (or, maybe, "toddler").
Op details: MacBook Pro 2010, OS 10.6.8, nv 330M GPU, xcode 3.2.5, shorts, teeshirt.
bonus P.S. for those who've read this far, including a related question:
My laptop, soldier that it has proved to be, is not powerful enough for the next stage. I must sell some stocks/bonds and purchase a Mac Pro. I'm looking at the ATI 5870. So, PERHAPS my problem will simply go away when I compile the .cl for the ATI??? Maybe I have run into a bug in the nV implementation. Maybe my kernel is so complex that I'm running into undetected resource limits (it's 1300 lines of code). So, SINCE I run fine on the CPU, perhaps I'll have no bugs, or different bugs, on the ATI card???
Any thoughts?
Thanks, guys & dolls --
Dave
Use "cl_" data types on the CPU side, because maybe you are not coping data the right way, or it is not being understood by the GPU. This could lead to GPU hangs on invalid pointers while handing the data.
You should also try -Werror, and read the error output. You can be doing smt wrong.
Without any code, we can only guess. But I haven't found any bug in the actual OpenCL NV or ATI implementations.
Make sure you release all resources. Events returned by Enqueue functions must be released. This error sometimes occurs after accessing buffers out of range.

ASP.NET SQL Server 2005 timeout issue

Hi,
We are getting timeouts in our ASP.NET application. We are using SQL Server 2005 as the DB.
The queries run very fast in Query Analyzer. However, when we check the time through the Profiler, it shows a time many times greater than what we get in Query Analyzer.
(Parameter sniffing is not the cause.)
Any help is much appreciated.
Thanks
We are on a SAN
Cleared the counters. The new counters (the columns are presumably wait type, waiting tasks, wait time ms, max wait time ms, and signal wait time ms, as in sys.dm_os_wait_stats) are:
ASYNC_NETWORK_IO    540    9812    375    78
WRITELOG             70    1828    328     0
The timeout happens only on a particular SP with a particular set of params. If we change the params and access the app, it works fine. We ran the Profiler and found that the SP's BatchCompleted event shows up in the trace only after the timeout happens on the ASP.NET side. If we restart the server, everything works fine.
If we remove the plan from the cache, the app works fine. However, we have already taken parameter sniffing into consideration in the SP. What else could be the reason?
If I were to take a guess, I would assume that the background database load from the webserver is escalating locks and causing the whole thing to slow down. Then you take a large-ish query, run it, and that causes lock (and resource) contention.
I see this ALL THE TIME with companies complaining of performance problems with their client-server applications when going from one SQL server to a cluster. In the web world, we hit these issues much earlier.
The solution (most times) to lock issues is one of the following (see the sketch after this list):
* Refactor your queries to work better (storing SCOPE_IDENTITY() once instead of calling it 5 times, for example)
* Use the NOLOCK hint everywhere it makes sense.
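A sketch of the second point, using a hypothetical Orders table and connection string: WITH (NOLOCK) reads without taking shared locks, so the query does not queue behind writers, at the cost of possible dirty reads. Only use it where that trade-off is acceptable.

using System.Data.SqlClient;

// WITH (NOLOCK) skips shared locks for this read, reducing contention with
// concurrent writers; the price is that uncommitted ("dirty") rows may be
// returned. Table and parameters here are placeholders.
static int CountOrders(string connectionString, int customerId)
{
    const string sql = @"SELECT COUNT(*)
                         FROM dbo.Orders WITH (NOLOCK)
                         WHERE CustomerId = @customerId";

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@customerId", customerId);
        conn.Open();
        return (int)cmd.ExecuteScalar();
    }
}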
EDIT:
Also, try viewing the server with the new 2008 SQL Management Studio 'Activity Monitor'. You can find it by right-clicking on your server and selecting 'Activity Monitor'.
Go to the Processes section and look at how many processes are 'waiting'. Your wait time should be near 0. If you see a lot of entries under 'Wait Type', post a screenshot and I can give you an idea of what the next step is.
Go to the Resource Waits section and see what the numbers look like there. Your waiters should always be near 0.
And 'Recent Expensive Queries' is awesome to look at to find out what you can do to improve your general performance.
Edit #2:
How much slower is it? Your SAN seems to be taking up about 10 seconds' worth, but if you are talking 20 seconds vs. 360 seconds, then that would not be relevant, and there are no waits for locks, so I guess I am drawing a blank. If the difference is between 1 second and 10 seconds, then it seems to be network related.
Run the following script to create this stored proc:
CREATE PROC [dbo].[dba_SearchCachedPlans]
    @StringToSearchFor VARCHAR(255)
AS
/*----------------------------------------------------------------------
Purpose: Inspects cached plans for a given string.
------------------------------------------------------------------------
Parameters: @StringToSearchFor - string to search for e.g. '%<MissingIndexes>%'.
Revision History:
03/06/2008 Ian_Stirk@yahoo.com Initial version
Example Usage:
1. exec dba_SearchCachedPlans '%<MissingIndexes>%'
2. exec dba_SearchCachedPlans '%<ColumnsWithNoStatistics>%'
3. exec dba_SearchCachedPlans '%<TableScan%'
4. exec dba_SearchCachedPlans '%CREATE PROC%MessageWrite%'
-----------------------------------------------------------------------*/
BEGIN
    -- Do not lock anything, and do not get held up by any locks.
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

    SELECT TOP 100
        st.TEXT AS [SQL],
        cp.cacheobjtype,
        cp.objtype,
        DB_NAME(st.dbid) AS [DatabaseName],
        cp.usecounts AS [Plan usage],
        qp.query_plan
    FROM sys.dm_exec_cached_plans cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
    WHERE CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE @StringToSearchFor
    ORDER BY cp.usecounts DESC
END
Then execute:
exec dba_SearchCachedPlans '%<MissingIndexes>%'
And see if you are missing any recommended indexes.
When SQL Server creates a plan it saves it, along with any recommended indexes. Just click on the query_plan column text to open the graphical plan. At the top there will be recommended indexes you should implement.
I don't have the answer for you, because I'm not a guru. But I do remember reading on some SQL blogs recently that SQL 2008 has some extra things you can add to the query/stored procedure so it calculates things differently. One thing you could try searching for is called 'hints'. Also, how SQL Server uses the current 'statistics' makes a difference too; look that up. And the execution plan is only generated on the first run; if that plan doesn't suit different parameter values because there would be a vast difference in what is searched/returned, it can produce this behavior, I think.
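One concrete hint worth knowing here (it is available in SQL Server 2005 as well) is OPTION (RECOMPILE), added to the problematic statement inside the SP. A sketch with a hypothetical query, shown as it might sit in your data-access code:

// OPTION (RECOMPILE) makes SQL Server compile a fresh plan for this statement
// on every execution, so a plan built for the first parameter value is never
// reused for a very different one (the classic parameter-sniffing workaround).
const string sql = @"SELECT OrderId, Status
                     FROM dbo.Orders
                     WHERE CustomerId = @customerId
                     OPTION (RECOMPILE)";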
Sorry I can't be more helpful. I'm just getting my feet wet with SQL Server performance at this level. I bet if you asked someone like Brent Ozar he could point you in the right direction.
I've had this exact same issue a couple of times before. It seemed to happen to me when a particular user was on the site when it was deployed. When that user ran certain stored procedures with their ID, they would time out. When others ran them, or I ran them from the DB, they would run in no time. We had our DBAs watch everything they could, and they never had an answer. In the end, everything was fixed whenever I re-deployed the site while that user was not already logged in.
I've had similar issues, and in my case it had to do with the SP recompiling. Specifically, it was my use of temp tables vs. table variables.

Performance monitor shows 4294967293 sessions active

I have an ASP.Net 3.5 website running in IIS 6 on Windows Server 2003 R2. It is a relatively small internal application that probably serves less than ten users at any given time. The server has 4 Gig of memory and shows that 3+ Gig is available while the site is active.
Just minutes after restarting the web application, Performance Monitor shows a whopping 4,294,967,293 active sessions! I am fairly certain that this number is incorrect; at the time of this reading there had been only 100 requests to the website.
Has anyone else experienced this kind of odd behavior from Perfmon? Any ideas on how to get an accurate reading?
UPDATE: After running for about an hour the number of active sessions has dropped by 4. So it does seem to be responding to sessions timing out.
Could be an overflow, but my money's on an underflow. I think that the program started with 0 people, someone logged off, and then the number of sessions went negative.
Well, 2^32 = 4,294,967,296, so it sounds like there's some kind of overflow occurring. Can't say exactly why.
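The arithmetic supports the underflow theory: 4,294,967,293 is exactly 2^32 - 3, i.e. an unsigned 32-bit counter that was decremented three more times than it was incremented. A few lines of C# show the wraparound:

using System;

// An unsigned 32-bit counter decremented below zero wraps around:
// 0 - 3 == 2^32 - 3 == 4,294,967,293, the value the counter reports.
uint activeSessions = 0;
unchecked
{
    activeSessions -= 3;
}
Console.WriteLine(activeSessions); // 4294967293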
We have the same problem. It looks like MS has a Hotfix available: http://support.microsoft.com/kb/969722
Update 9/10/2009: Our IT department contacted MS for the Hotfix. It fixed our issue. We are running .NET 2.0 if it matters any.
I am also showing a high number, currently 4,294,967,268.
Every time I abandon a session, the Sessions Abandoned count goes up by 1, and the Sessions Active count decreases by 1. Currently my abandoned session count = 16, so this number probably started at 4,294,967,284.
Is there a fix for this?
My counters were working fine, but one morning I logged in remotely to the production server and the counter was at this huge number (which, as somebody mentioned, is very close to 2^32, indicating an underflow). The only difference from the day before, when everything worked, was that during the night Windows had installed updates.
So for some reason these updates caused this pretty annoying error.
Observing the counter a little more, I found out that whenever the application is restarted - after some time with no traffic, the counter starts correctly at zero. When users start logging on, it increments fine. When they start logging off again, it still decrements fine until it reaches what is supposed to be zero. At that point it goes bananas...
Sigh...
If you have to use your existing statistics, I opened the log file in Excel and used a formula to derive a more accurate value. I cannot guarantee its accuracy, but the results did look okay:
If B2 is the (aspnet_wp)\Sessions Active value, and the formula sits in C2:
/* This one is quicker as it doesn't have to do the extra calculations */
=IF(B2>1073741824,4294967296-B2,B2)
Or
/* This one is clearer what is going on */
=IF(B2>power(2,30),(4*power(2,30))-B2,B2)
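The same correction expressed in C#, for anyone post-processing the Perfmon log outside Excel (the 2^30 threshold is the author's heuristic for "implausibly large"):

// Mirrors the Excel formula: raw samples above 2^30 are assumed to be small
// negative values that wrapped around, so report their distance from 2^32.
static long CorrectSessionsActive(uint raw)
{
    return raw > (1u << 30) ? (1L << 32) - raw : raw;
}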
P.S. (I feel your pain - I have to explain why they have 4.2 billion sessions opening whereas a second earlier they had 0!)
