ASP.NET PreRenderComplete taking a long time to finish

I'm having an issue with occasional slow performance on button click events on a particular page. There are times it performs well within normal parameters, but whenever the server is under even moderate load (meaning I can reproduce the issue in our production environment but not in our dev or test environments) it seems to just hang. After enabling tracing, I can see that the request stalls between Begin PreRenderComplete and End PreRenderComplete; it just sits there for close to 30 seconds. I don't have any code that executes in that event. My understanding was that this event is supposed to be a non-event in the life cycle, since it only signals that the PreRender phase has finished. The page has a large number of controls and consequently a sizable viewstate, but my understanding is that viewstate is handled in the LoadState and SaveState phases, which don't seem to be where the time is going.
I've run perfmon against the server, and at the times when I can reproduce this behavior, system resources look normal and requests aren't queuing. I'm trying to understand what might be happening behind the scenes to cause this slowness.

Are there any asynchronous actions on the page? I believe asynchronous page tasks complete just before that event fires, so if some of them are taking a while, such as a slow database or network call, that might cause the delay you're seeing.
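For example, work registered via Page.RegisterAsyncTask runs by default after PreRender and finishes before PreRenderComplete, so its latency gets charged to that phase in the trace. A minimal sketch (slowService and its Begin/End methods are hypothetical stand-ins for whatever slow call is involved):

// Requires Async="true" in the @Page directive.
protected void Page_Load(object sender, EventArgs e)
{
    Page.RegisterAsyncTask(new PageAsyncTask(BeginSlowCall, EndSlowCall, null, null));
}

private IAsyncResult BeginSlowCall(object sender, EventArgs e, AsyncCallback cb, object state)
{
    return slowService.BeginGetData(cb, state); // e.g. a slow database or web service call
}

private void EndSlowCall(IAsyncResult result)
{
    slowService.EndGetData(result);
}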

I think I've found the problem. Through more in-depth profiling, it appears my ScriptManager control was causing the delay while trying to combine scripts. Apparently this only becomes a problem under load, and most of the combining work takes place during the PreRenderComplete event. Setting the CombineScripts="false" attribute seems to have cleared the issue.
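If the control in question is the AJAX Control Toolkit's ToolkitScriptManager (an assumption on my part; that's where the CombineScripts attribute is defined), the change looks like this:

<ajaxToolkit:ToolkitScriptManager ID="ScriptManager1" runat="server"
    CombineScripts="false" />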

Related

Do I always have to "wait" for page loads when using selenium on non-ajax pages?

I'm writing some BDD tests using Cucumber, Selenium and xUnit for a legacy ASP.NET application. The way the pages are designed, every "click" leads to a new page being fetched from the server. If I have to automate the tests for a particular page, should I have a line similar to the following after every "click"?
WebDriverWait wait = new WebDriverWait(driver, TimeSpan.FromSeconds(timeout));
wait.Until(d => d.FindElement(By.Id("someElementId")).Displayed); // wait until something about the page is true
I'm not sure whether Selenium waits implicitly for page loads without my having to state this explicitly every time. What is the recommended pattern for handling this scenario?
It's cumbersome to always have to come up with "some element" to put in the Until method, and that leads to brittle tests. The ASP.NET pages are littered with dynamic controls and a whole slew of page refreshes, which makes the test code quite unreadable.
My proposed solution: write an extension method that does the waiting implicitly and takes an element id to wait on, sketched below. But that just refactors the above problem into a more manageable place; I still have an explicit wait being performed. Is there no way to eliminate it? Does Selenium have some sensible default that handles this case without the need for such an extension method, or is this really the natural way of doing it?
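Something along these lines (a rough sketch; the method and element names are placeholders):

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public static class WebDriverExtensions
{
    // Wait until an element with the given id is present, then return it.
    public static IWebElement WaitFor(this IWebDriver driver, string elementId, int timeoutSeconds = 10)
    {
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(timeoutSeconds));
        return wait.Until(d => d.FindElement(By.Id(elementId)));
    }
}

// Usage after a click:
// driver.FindElement(By.Id("btnSave")).Click();
// driver.WaitFor("lblSaveConfirmation");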
If you want your tests to be reliable and to wait only as long as actually needed, then yes, Explicit Waits with WebDriverWait are a perfect solution. It's actually a very "natural" solution, too: think about how you, as a user, decide that a page has loaded. It's usually when you see the desired content, correct? While a page is loading, you constantly re-evaluate its state, checking whether the desired content has appeared yet. Explicit Waits follow the same logic: by default, the wait checks every 500 ms whether the expected condition is true, for no longer than the X seconds you configured when instantiating the WebDriverWait.
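For reference, both the timeout and the polling interval are configurable on the .NET WebDriverWait (the title check here is just a hypothetical readiness condition):

var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10))
{
    PollingInterval = TimeSpan.FromMilliseconds(500) // 500 ms is the default
};
wait.Until(d => d.Title.Contains("Dashboard"));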
If you need wait.Until() calls often and want to follow the DRY principle, think about applying "Extract Method" or other refactorings.
You can set an implicit wait, which is applied on every element search, or introduce hard-coded "artificial" delays, but neither is reliable and both waste time: you'll end up waiting longer than needed and still see occasional test failures.

Detect session hang and kill it

I have an ASP.NET page that runs a certain algorithm and returns its output. I was wondering what happens, and how to handle the case, where the algorithm goes into an infinite loop due to a bug. It will hog the CPU, and other sessions will be served very slowly.
I would love to have a way to tell IIS: if processing Algo.aspx takes more than 5 seconds, kill it, or something along those lines.
Thanks in advance
There is no such thing in IIS. What you can do instead is perform the work in a background thread, measure how long that background task takes to complete, and simply kill the thread if the wait takes longer than expected.
You may take a look at the WaitHandle.WaitOne method, which allows you to specify a timeout while waiting for an event to be signaled from a background thread, for example.
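A rough sketch of that approach (RunAlgorithm stands in for the page's algorithm; note that Thread.Abort is a blunt instrument and cleanup is not guaranteed):

using System;
using System.Threading;

private void RunAlgorithmWithTimeout()
{
    var done = new ManualResetEvent(false);
    var worker = new Thread(() =>
    {
        try { RunAlgorithm(); }   // the potentially looping algorithm
        finally { done.Set(); }   // signal completion either way
    }) { IsBackground = true };
    worker.Start();

    // WaitOne returns false if the timeout elapses before Set() is called.
    if (!done.WaitOne(TimeSpan.FromSeconds(5)))
    {
        worker.Abort();           // last resort
        throw new TimeoutException("Algo.aspx exceeded its 5 second budget.");
    }
}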
Set the ScriptTimeout property. It will abort the page if the time is exceeded. It only takes effect when the compilation debug attribute is set to false, though.
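For reference, a sketch of setting it per request (the 5 second value mirrors the question):

// e.g. in Page_Load; the value is ignored when <compilation debug="true"> is set
Server.ScriptTimeout = 5; // seconds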

Flex ActionScript timeout issue

Our Flex (flare) application keeps timing out when rendering large datasets. Is there any way to prevent this? We have tried increasing the timeout in the Application tag and in the compiler settings, without much success.
Any other thoughts?
regards
Sameer
You can organize the rendering work in chunks and, after processing each chunk, give control back to the system. There are many possible implementations; for instance, start a timer that fires an event every 500 ms and process a small chunk of the dataset in the event handler.
As a bonus, processing a large dataset in chunks makes it easy to give the user the option to cancel rendering.
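The pattern itself is not Flex-specific. Here is a sketch in C# for concreteness (in ActionScript 3 the same shape works with flash.utils.Timer; allItems and Render are hypothetical stand-ins for the dataset and the draw call):

using System;
using System.Collections.Generic;

var items = new Queue<object>(allItems);
var timer = new System.Timers.Timer(500);   // fire every 500 ms
timer.Elapsed += (s, e) =>
{
    for (int i = 0; i < 200 && items.Count > 0; i++)  // bounded chunk per tick
        Render(items.Dequeue());
    if (items.Count == 0)
        timer.Stop();  // done; this is also a natural place to handle a cancel request
};
timer.Start();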
Increasing the timeout is not advisable, because under high network traffic it will bother users even more. The only real solution is to page long datasets: always load just 50 to 100 items at a time and let the user navigate between pages using pager controls.

Practical value for concurrent-request-timeout parameter or options for avoiding concurrent access to conversation exception

In the Seam Reference Guide, one can find this paragraph:
We can set a sensible default for the concurrent request timeout (in ms) in components.xml:
<core:manager concurrent-request-timeout="500" />
However, we found that 500 ms is not nearly enough time for most of the cases we had to deal with, especially given the severe restrictions Seam places on conversation access.
In our application we have a combination of page scoped ajax requests (triggered by various user actions), some global scoped polling notification logic (part of the header, so included in every page) and regular links that invoke actions and/or navigate to other pages.
Therefore, we get the dreaded concurrent access to conversation exception way too often, even without any significant load on the site.
After researching the options for quite a bit, we ended up bumping this value to several seconds (we're debating whether to bump it up to 10s), as none of the recommended solutions seemed able to solve our issue completely (even forcing a global queue for all the ajax requests would still leave us exposed to a user deciding to click a link right when one of our polling calls was in progress). And we'd much rather have the users wait for a second or two instead of getting an error page just because they clicked a link at the wrong moment.
And now to the question: is there something obvious we're missing (like a way to allow concurrent access to conversations while taking care of the needed locking ourselves, for instance :)? How do people solve this problem (ajax requests mixed with user-driven interaction) in Seam? Disabling all the links on the page while ajax requests are in progress (as suggested by one blog post) is really not a viable option.
Any other suggestions?
TIA,
Andrei
We use 60000 or 120000 (1-2 minutes). concurrent-request-timeout is designed to avoid deadlocks, but historically we have had far more problems with timeouts than with deadlocks. A better approach is to use a client-side queue (<a4j:ajaxQueue> if you're using RichFaces) to serialize requests and drop duplicates as much as possible, then set the timeout high enough to avoid any remaining problems.
There are several serious issues that result from Seam's concurrent request timeouts:
1. The last request is the one that gets the ConcurrentRequestTimeoutException. If the user double-clicks or reloads the page, only the last request matters; why should he get an error?
2. Usually the ConcurrentRequestTimeoutException is suppressed, and only secondary NullPointerExceptions and @In injection failures are shown, which makes debugging difficult.
3. Seam 2.2.1 has a severe problem where transactions, ThreadLocals, and locks may leak after a timeout occurs, especially when used with <spring:spring-transaction/>. Look at SeamPhaseListener.afterRestoreView: there is no finally block to clean up after restoreConversation fails!
In my opinion there are many poor aspects to this design, so it's best to use a much higher timeout and try to avoid the issues.
This is what we have and it works fine for us:
<core:manager concurrent-request-timeout="5000"
conversation-timeout="120000" conversation-id-parameter="cid"
parent-conversation-id-parameter="pid" />
We also use a much higher value for the concurrent-request-timeout.
At least for duplicate events you can use settings on the a4j components to filter and delay them: eventsQueue, requestDelay and ignoreDupResponses="true".
(For the last point, see http://docs.jboss.org/seam/2.0.1.GA/reference/en/html/conversations.html)
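A sketch of those attributes on an a4j component (RichFaces 3.x; the event, queue name and action are made up):

<a4j:support event="onkeyup"
             eventsQueue="myQueue"
             requestDelay="300"
             ignoreDupResponses="true"
             action="#{bean.search}" />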
Can you analyse which types of request are taking a long time? Is there a particular type whose request time you could reduce by doing the "work" asynchronously and picking the result up in your poll?
In my opinion, ajax requests should always complete fairly quickly; you can then estimate a maximum concurrent request time as (request time * maximum number of requests likely to be initiated).

QTimer firing issue in QGIS (Quantum GIS)

I have been involved in building a custom QGIS application in which live data is shown in the application's viewer.
The IPC mechanism being used is Unix message queues.
The data is to be refreshed at a specified interval, say 3 seconds.
Now the problem I am facing is that processing the data to be shown takes more than 3 seconds. What I have done is: before the app starts processing data for the next update, the refresh QTimer is stopped, and after the data is processed I restart the QTimer. The app should work in such a way that after an update/refresh (during which the app goes unresponsive) the user gets ample time to keep working in the app while still seeing the data update. I am able to get acceptable pauses for the user to work in, in one scenario.
But on a different OS (RHEL 5.0 versus RHEL 5.2) the situation is something else. The timer goes wild and keeps firing without any pause between successive updates, effectively entering an infinite loop. Handling this update data definitely takes longer than 3 seconds, but that is exactly why I stop and restart the timer around the processing, and the same logic works in one scenario but not the other. The other thing I have observed is that when this rapid firing happens, the refresh function returns very quickly, in about 300 ms, so the stop and start of the timer that I placed at the beginning and end of this function happen within that short window. Before the actual processing of the data finishes, there are already 3-4 timer starts queued up waiting to be executed, and from that point the infinite-loop problem gets worse with every successive update.
The important thing to note is that for the same code and the same amount of data, the refresh time is reported as around 4000 ms on one OS (the actual processing time) and 300 ms on the other.
Maybe this has something to do with newer libraries on the updated OS, but I don't know how to debug it; I can't find any clues as to why it behaves this way. Has something related to pthreads changed between the OS versions?
So my question is: is there any way to guarantee that some processing in my app runs on a timer, independent of the OS, without using QTimer? I think QTimer may not be a good option for what I want.
What options are there? pthreads or Boost threads? Which would be better if I were to use threads as an alternative? And how can I ensure at least a 3 second gap between successive updates, no matter how long the update processing takes?
Kindly help.
Thanks.
If I was trying to get an acceptable, longer-term solution, I would investigate updating your display in a separate thread. In that thread, you could paint the display to an image, updating as often as you desire... although you might want to throttle the thread so it doesn't take all of the processing time available. Then in the UI thread, you could read that image and draw it to screen. That could improve your responsiveness to panning, since you could be displaying different parts of the image. You could update the image every 3 seconds based on a timer (just redraw from the source), or you could have the other thread emit a signal whenever the new data is completely refreshed.
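Another OS-independent way to guarantee the 3 second gap is to use a one-shot timer that is re-armed only after processing completes, instead of a free-running periodic timer; in Qt that means QTimer::setSingleShot(true) with start() called at the end of the slot. A sketch of the pattern in C#, since the idea carries over directly (ProcessData stands in for your refresh work):

using System;
using System.Threading;

class RefreshLoop
{
    private Timer _timer;

    public void Start()
    {
        // One-shot: due in 3 s, period Infinite, so it can never fire
        // while ProcessData() is still running.
        _timer = new Timer(Refresh, null, TimeSpan.FromSeconds(3), Timeout.InfiniteTimeSpan);
    }

    private void Refresh(object state)
    {
        ProcessData(); // however long this takes...
        // ...the next tick is only scheduled afterwards,
        // guaranteeing at least a 3 second gap between updates.
        _timer.Change(TimeSpan.FromSeconds(3), Timeout.InfiniteTimeSpan);
    }

    private void ProcessData() { /* drain the message queue, update the view */ }
}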
