While using MiniProfiler to hunt down a performance issue, I encountered a situation where it would log only a few of the calls to MiniProfiler.Step().
This is the code:
The breakpoints are set to only count the number of hits (313 per run); they do not interrupt execution. Notice that they are deactivated in the screenshot above. After running the application, I get a very incomplete log from MiniProfiler which, from run to run, has a differing number of entries, usually 2 to 5.
However, when I activate the breakpoints, the log is complete. Remember that the breakpoints still do not interrupt execution.
Is this a bug in the MiniProfiler?
Click "show trivial" - I suspect the steps you're not seeing are marked "trivial". When you hit the breakpoints, you make them appear not-so-trivial by increasing their execution time.
EDIT:
... as I was poking around their code, I stumbled on this:

public bool IsTrivial { get { return DurationMilliseconds <= MiniProfiler.Settings.TrivialDurationThresholdMilliseconds; } }
I'm relatively new to JavaFX and have written a small applet which launches a number of sub-processes (typically between 3 and 10). Each process has a dedicated tab displaying its current status and a large TextArea to which the process output is appended. For simplicity, all tabs are generated on startup. Each line of output is forwarded to the UI thread with:

javafx.application.Platform.runLater(() -> logTextArea.appendText(line))
The applet works fine when load on the sub-processes is low to moderate (not much logging), but it starts to freeze when the sub-processes are heavily used and generate a decent amount of logging output (a few hundred lines per second in total).

I looked into binding the TextArea to the output, but my understanding is that binding effectively calls Platform.runLater() under the hood, so there would still be hundreds of calls to the JavaFX application thread per second.

Batching the logging output isn't an ideal solution either, because I'd like to keep the displayed log as close to real time as possible.

The only solution I can think of that might solve the problem is dynamic loading of individual tabs. That would definitely prevent unnecessary updates to the logging TextAreas that aren't currently visible, but before I go ahead and make the changes, I'd like to get some advice from you here. Thanks!
Thanks for all your suggestions. Finally got around to implementing a fix today.
The issue was fixed by batching output through a buffer with a secondary check for elapsed time: the buffer is flushed to the TextArea after at most 20 lines or 100 ms, whichever comes first.
In addition, I implemented rolling output to cap the total displayed process output at 1,000 lines (see the sketch below).
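For anyone interested, here is a minimal sketch of the approach (class and member names are illustrative, not the actual code):

import javafx.application.Platform;
import javafx.scene.control.TextArea;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Batches log lines so the JavaFX application thread gets one update per
// batch (at most 20 lines or 100 ms apart) instead of one runLater() per line.
class BatchedLogAppender {
    private static final int MAX_BATCH = 20;         // flush after 20 lines...
    private static final int MAX_DELAY_MS = 100;     // ...or after 100 ms
    private static final int MAX_TOTAL_LINES = 1000; // rolling output limit

    private final TextArea target;
    private final LinkedBlockingQueue<String> pending = new LinkedBlockingQueue<>();
    private final ScheduledExecutorService flusher =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "log-flusher");
                t.setDaemon(true);
                return t;
            });

    BatchedLogAppender(TextArea target) {
        this.target = target;
        // Time-based flush covers the quiet case where fewer than MAX_BATCH
        // lines arrive within 100 ms.
        flusher.scheduleAtFixedRate(this::flush, MAX_DELAY_MS, MAX_DELAY_MS,
                TimeUnit.MILLISECONDS);
    }

    // Called from the process-reader threads, one line at a time.
    void append(String line) {
        pending.add(line);
        if (pending.size() >= MAX_BATCH) {
            flush(); // size-based flush
        }
    }

    private void flush() {
        List<String> batch = new ArrayList<>();
        pending.drainTo(batch);
        if (batch.isEmpty()) {
            return;
        }
        String chunk = String.join("\n", batch) + "\n";
        Platform.runLater(() -> {
            target.appendText(chunk); // one UI update for the whole batch
            trimToLimit();
        });
    }

    // Rolling output: drop the oldest lines once the limit is exceeded.
    // Runs on the JavaFX application thread (called from runLater above).
    private void trimToLimit() {
        String text = target.getText();
        int lines = 0;
        for (int i = 0; i < text.length(); i++) {
            if (text.charAt(i) == '\n') {
                lines++;
            }
        }
        int excess = lines - MAX_TOTAL_LINES;
        if (excess <= 0) {
            return;
        }
        int cut = 0;
        while (excess-- > 0) {
            cut = text.indexOf('\n', cut) + 1;
        }
        target.deleteText(0, cut);
    }
}

The process-reader threads call append() in place of the per-line Platform.runLater() call, so the UI thread does one append per batch regardless of log volume.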
Thanks again for your invaluable contribution!
I'm having an issue with occasional slow performance on button click events on a particular page. There are times when it performs well within normal parameters, but whenever the server is under even moderate load (meaning I can reproduce this issue in our production environment but not in our dev or test environments), it seems to just hang. After enabling tracing, I see that it hangs between Begin PreRenderComplete and End PreRenderComplete; it just sits there for close to 30 seconds. I don't have any specific code that executes in that event.

My understanding was that this event is supposed to be a non-event in the life cycle, since it just signals that the PreRender phase has finished. The page has a large number of controls and consequently a sizable viewstate, but my understanding is that viewstate is handled in the LoadState and SaveState events, which don't seem to be the phases eating all of my time.
I've run perfmon against the server, and at the times when I am able to reproduce this behavior, system resources look normal and requests aren't queuing. I'm trying to understand what actions might be taking place behind the scenes to cause this slowness.
Are there any asynchronous actions on the page? I think that async calls will complete prior to that event, so if some of them are taking a while, such as a slow database or slow network call, that might cause the delay you're seeing.
I think I've found the problem. Through more in-depth profiling, it appears that my ScriptManager control was causing the delay while trying to combine scripts. Apparently this only becomes a problem under load, and most of that work takes place in the PreRenderComplete event. Setting the CombineScripts="false" attribute seems to have cleared the issue.
The web site has run for a long time. Sometimes it has speed issues, but after we clean up the MSSQL data it works fine again.
This time that doesn't help any more: we keep getting timeout errors, and IIS drives the CPU very high.
We took out some features and the site runs OK again, but slowly, without errors.
For example, when we do a search with fewer than 10 results, the page output is really fast.
With more than 200 results, the page is very slow, taking about 15 to 20 seconds to output the whole page.
I know that more data naturally takes more time to output, but we used to return more than 500 results and the output was very fast then too.
Do you know where I should look to solve this speed problem?
You need to look at the code and see what is executed when displaying those results. Implement some logging, or step through the execution of a search with a debugger.
If you have a source control system, now is the time to review what has changed between the fast code and the now-slow code.
10 results could be taking 1 second to display, which is barely tolerable, but as you say, 200 results takes 20 seconds. So the problem is some bad code somewhere; I think someone has made a code change.
I would start by breaking the issue down - for example, into SQL Server time and IIS time. You can separate different parts of the code and measure their execution times, etc.
SQL Server Profiler is a good tool to start with, and for ASP.NET you can start with a simple trace and page tracing.
Some more info about testing and performance
... and by "unresponsive" I mean that after the first three successful connections, the fourth connection is initiated and nothing happens: no crashes, no delegate methods called, no data sent out (according to Wireshark)... it just sits there?!
I've been beating my head against this for a day and a half...
iOS 4.3.3
Latest Xcode; it happens the same way on a real device as in the simulator.
I've read all the NSURLConnection posts in the Developer Forums... I'm at a loss.
From my application delegate, I kick off an async NSURLConnection according to the Apple docs, using the app delegate as the delegate for the NSURLConnection.
From my applicationDidFinishLaunching... I trigger the initial two queries, which successfully return XML that I then pass off to an NSOperationQueue to be parsed.
I can even loop, repeating these queries with no issues; I repeated them 10 times and they worked just fine.
The next series of five queries is triggered via user input. The first query runs successfully and returns the correct response; then the next query is created, and the NSURLConnection created from it (just like all the others) just sits there?!
The normal delegate calls that I see on all the other queries never happen.
Nothing goes over the wire, according to Wireshark.
I've reordered the queries, and regardless of which query it is, the one after the first fails (fails as in does nothing: no errors or aborts, it just sits there).
It's obviously in my code, but I am blind to it.
So what other tools can I use to debug the async NSURLConnection? How can I tell what it's doing, if anything?
Any suggestions for debugging an NSURLConnection, or other ways to accomplish the same thing an NSURLConnection does?
Thanks for any help you can offer...
OK tracked it down...
I was watching the stack dump in each thread as I was about to kick off each NSURLConnection. The first three were all on the main thread, as expected... the fourth one ended up on a new thread?! On one of my NSOperation threads?!?!
As it turns out, I had inadvertently added logic(?) that started one of my NSURLConnections from the last NSOperation's call to didFinishParsing:, so the NSURLConnection was started asynchronously and then the NSOperation terminated... >.< Since an async NSURLConnection delivers its delegate callbacks on the run loop of the thread that started it, a connection started from a thread that then exits never gets any callbacks - which matches the "just sits there" behavior exactly.
So I'll move the NSURLConnection out of didFinishParsing: so it is started on the main thread, and I should be good!
I have been involved in building a custom QGIS application in which live data is to be shown in the application's viewer.
The IPC mechanism being used is Unix message queues.
The data is to be refreshed at a specified interval, say 3 seconds.
Now the problem I am facing is that processing the data to be shown takes more than 3 seconds. So before the app starts to process data for the next update, the refresh QTimer is stopped, and after the data is processed I restart the QTimer. The app should work in such a way that after an update/refresh (during which the app goes unresponsive), the user gets ample time to continue working in the app apart from seeing the data being updated. I am able to get acceptable pauses for the user to work in - in one scenario.

But on a different OS (RHEL 5.0 vs. RHEL 5.2) the behavior changes. The timer goes wild and continues to fire without any pause between successive updates, going into an effectively infinite loop. Handling the update data definitely takes longer than 3 seconds, but for that very reason I stop and restart the timer around the processing... and the same logic works in one scenario but not the other. The other thing I have observed is that when this rapid firing of the timer happens, the refresh function exits very quickly, in about 300 ms, so the stop and restart of the timer that I placed at the start and end of this function both happen within that short time. So before the actual processing of the data finishes, there are 3-4 starts of the timer queued up waiting to be executed, and from that point the infinite looping gets worse with every successive update.
The important thing to note is that for the same code and the same amount of data, the refresh function is measured at around 4000 ms on one OS (the actual processing time) but at 300 ms on the other.

Maybe this has something to do with newer libs on the updated OS, but I don't know how to debug it because I am not getting any clues as to why it happens... maybe something related to pthreads changed between the OS versions?

So my question is: is there any way to ensure that some processing in my app runs on a timer (independent of the OS) without using QTimer, since I don't think QTimer is a good fit for what I want?

What options are there - pthreads or Boost threads? Which would be better if I were to use threads as an alternative? And how can I make sure there is at least a 3-second gap between successive updates, no matter how long the update processing takes?
Kindly help.
Thanks.
If I were trying to get an acceptable, longer-term solution, I would investigate updating your display in a separate thread. In that thread, you could paint the display to an image, updating as often as you desire - although you might want to throttle the thread so it doesn't take all of the available processing time. Then in the UI thread, you could read that image and draw it to the screen. That could also improve your responsiveness to panning, since you could be displaying different parts of the image. You could update the image every 3 seconds based on a timer (just redraw from the source), or you could have the other thread emit a signal whenever the new data is completely refreshed. A sketch of the scheduling side of this idea follows.
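To make the "at least 3 seconds between updates" guarantee concrete, here is a minimal sketch of the pattern. It happens to be in Java, whose ScheduledExecutorService has ready-made fixed-delay semantics (the delay is measured from the end of one run to the start of the next); in Qt you can get the same guarantee with a single-shot timer that the update handler re-arms when it finishes. All class and member names below are illustrative, not your actual QGIS code.

import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
import java.util.function.Supplier;

// scheduleWithFixedDelay() measures its delay from the END of one run to
// the START of the next, so successive updates are always at least
// 3 seconds apart, no matter how long a single update takes.
final class FixedDelayRefresher {
    private final ScheduledExecutorService worker =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "refresh-worker");
                t.setDaemon(true);
                return t;
            });
    private final Supplier<byte[]> renderSnapshot; // slow: paints current state to an image
    private final Executor uiThread;               // wraps your toolkit's "run on UI thread" call
    private final Consumer<byte[]> drawSnapshot;   // cheap: blits the finished image to screen

    FixedDelayRefresher(Supplier<byte[]> renderSnapshot,
                        Executor uiThread,
                        Consumer<byte[]> drawSnapshot) {
        this.renderSnapshot = renderSnapshot;
        this.uiThread = uiThread;
        this.drawSnapshot = drawSnapshot;
    }

    void start() {
        // All slow work happens on the worker thread; only the cheap draw
        // is posted to the UI thread, so the UI stays responsive while a
        // refresh is in progress.
        worker.scheduleWithFixedDelay(() -> {
            byte[] snapshot = renderSnapshot.get(); // may take more than 3 s; that's fine
            uiThread.execute(() -> drawSnapshot.accept(snapshot));
        }, 0, 3, TimeUnit.SECONDS);
    }
}

The key point is the fixed-delay rather than fixed-rate semantics: nothing is ever queued while an update is still running, which rules out exactly the runaway-timer behavior described in the question.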