CADisplayLink on iPhone 3GS and iPhone 4 drops from 60 to 40 FPS randomly

I am using CADisplayLink to call a function named gameLoop at 60 FPS, and at first the application runs perfectly. Then at a random time (could be 5 seconds or 1 minute in) the FPS drops to around 40.
After days of searching forums, debugging, optimizing my rendering, profiling, and analyzing my app, I removed everything from gameLoop except a few lines that calculate how long it has been since the last call. I cannot figure out why CADisplayLink calls gameLoop at 60 FPS for a while and then at 40 FPS thereafter, when gameLoop does almost nothing.
I implemented a pause/unpause feature which invalidates the display link and recreates it. When I restart the display link, the app runs at 60 FPS again until the rate drops randomly once more.
Thanks in advance to anyone who can give me some insight as to why this is happening.

I decided to try OpenGL ES 2.0 and the GLKit API to see if that would fix the issue. After reading the Apple docs and some tutorials I ported the code and tested it with the Xcode analyzer and the Performance Detective. The GLKViewController's update and drawInRect methods get called at 60 FPS and everything runs perfectly.

If you're on iOS 5, make sure to set your view's opaque property to YES. I'm guessing the slowdown comes from compositing the screen with other Core Animation layers, and that the GLKit API does this in its setup.
http://www.cocos2d-iphone.org/forums/topic/app-often-but-not-always-runs-at-40-fps-at-startup-rather-than-60-fps/

Related

HERE SDK PositionSimulator: surprising behaviour if the signal is temporarily lost in the file

I'm using a GPX file in which one point has the network source and the other points come from the GPS satellite source. When the file plays and reaches this 'network point', the onPositionChanged listener is not triggered for 20 seconds (it should treat the signal as unchanged). I lose the next points, and my app considers the signal lost for those 20 seconds.
This behaviour comes up when playing the file without navigation, and also while navigating, but it does not occur when navigation has been launched and then stopped. In that case, the positions after the network point arrive normally, without the 20-second delay.
HERE Developer Support, could you please investigate?
Please try the new 3.14 version; it fixes some delays in PositioningManager. A suggestion: if possible, use only LocationMethod.GPS during navigation and LocationMethod.GPS_NETWORK in all other cases. This applies to PositionSimulator too.

Performance Issue with JavaFX multiple tabs simultaneous updates of TextArea

I'm relatively new to JavaFX and have written a small applet which launches a number of sub-processes (typically between 3 and 10). Each process has a dedicated tab displaying current status and a large TextArea to which the process output is appended, line by line:
javafx.application.Platform.runLater(() -> logTextArea.appendText(line));
The applet works fine when the load on the sub-processes is low to moderate (not many logs), but starts to freeze when the sub-processes are heavily used and generate a fair amount of logging output (a few hundred lines per second in total).
I looked into binding the TextArea to the output, but my understanding is that binding effectively calls Platform.runLater() as well, so there would still be hundreds of calls to the JavaFX application thread per second.
Batching logging outputs isn't an ideal solution either because I'd like to keep the displayed log as real-time as possible.
The only solution I think might work is dynamic loading of individual tabs. That would prevent unnecessary updates to logging TextAreas that aren't currently visible, but before I go ahead and make the changes, I'd like to get some advice from you here. Thanks!
Thanks for all your suggestions. Finally got around to implementing a fix today.
The issue was fixed by using a buffer coupled with a time-based flush: the buffer is flushed at 20 lines or after 100 ms, whichever comes first.
In addition, I also implemented rolling output to limit the total process output to 1,000 lines.
Thanks again for your invaluable contribution!
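The buffering fix described above can be sketched as follows. This is a minimal illustration, not the poster's actual code: the class and parameter names are invented, and the JavaFX wiring (a single Platform.runLater wrapping TextArea.appendText for the whole batch) is replaced by a pluggable flush callback so the batching logic stands alone.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Buffers log lines and flushes when either maxLines have accumulated
// or maxDelayMillis have passed since the last flush. In the real app
// the flush callback would wrap the whole batch in ONE
// Platform.runLater(() -> textArea.appendText(String.join("\n", batch)))
// instead of one runLater per line.
class LogBuffer {
    private final int maxLines;
    private final long maxDelayMillis;
    private final Consumer<List<String>> flushCallback;
    private final List<String> pending = new ArrayList<>();
    private long lastFlush = System.currentTimeMillis();

    LogBuffer(int maxLines, long maxDelayMillis, Consumer<List<String>> flushCallback) {
        this.maxLines = maxLines;
        this.maxDelayMillis = maxDelayMillis;
        this.flushCallback = flushCallback;
    }

    synchronized void append(String line) {
        pending.add(line);
        long now = System.currentTimeMillis();
        if (pending.size() >= maxLines || now - lastFlush >= maxDelayMillis) {
            flushCallback.accept(new ArrayList<>(pending)); // hand off a copy
            pending.clear();
            lastFlush = now;
        }
    }
}
```

Note that in this sketch a time-based flush is only checked on the next append; a quiet stream would need a small scheduled task to flush leftovers, and the rolling 1,000-line cap mentioned above would be applied on the TextArea side when the batch is appended.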

WMV decoder inconsistent sample timestamp

I am running into a weird issue. I have a pipeline with a custom renderer written in-house. The pipeline is built with IMFMediaSession, and the renderer uses the Media Session presentation clock to schedule samples. This all works. I am testing the pipeline with various videos, and one WMV video acts strangely.
The issue happens during seeking. If MediaSession->Start is called with any time other than 0, the first sample received is usually way ahead of the clock. For example, the video is 163 seconds long. Passing 138x10^7 (presentation time is in 100-nanosecond units) to the Start method sets the clock at 138 seconds, yet the first frame I get has a timestamp of 162 seconds. Another example: starting at 38 seconds results in frames starting at 78 seconds. That is differences of 24 and 40 seconds respectively.
I tested this with TopoEdit.exe (both the installed binary and the source code from the SDK) and the behaviour is consistent. In TopoEdit, some seeks take around 8-10 seconds before drawing starts, while the clock keeps progressing.
I did not see this happening with .mov or .mp4 files. Has anyone run into this issue before?
Update: .mov files also give me frames 1 to 5 seconds ahead of the clock.

Beginner: ASP.NET page HTML output takes a long time

The web site has run fine for a long time; occasionally it develops speed issues, but after we clean up the MSSQL data it works fine again.
This time that no longer helps: we always get a Timeout error, and IIS drives the CPU very high.
We took out some features and the site came back, slow but without the error.
For example, when a search returns fewer than 10 results, the page output is really fast.
With more than 200 results the page is very slow, taking about 15 to 20 seconds to output the whole page.
I know that more data naturally takes more time to output, but we used to have more than 500 results and the output was still very fast.
Where should I look to solve this speed problem?
You need to look at the code to see what is executed when displaying those results. Add some logging, or step through the display of a result with a debugger.
If you have a source control system, now is the time to review what has changed between the fast code and the now-slow code.
10 results might take 1 second to display, which is barely tolerable, but as you say 200 results take 20 seconds. So the problem is some bad code somewhere; I suspect someone has made a code change.
I would start by breaking the issue down, for example separating SQL Server time from IIS time. You can isolate different parts of the code and measure their execution times.
SQL Server Profiler is a good tool to start with, and for ASP.NET you can start with simple tracing and page tracing.
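The "measure execution times of different parts" advice can be sketched with a tiny stage timer. The sketch below is in Java purely for illustration (the exact .NET equivalent would be System.Diagnostics.Stopwatch around each stage); the stage names are placeholders for the page's query and render steps.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Times named stages of a request so you can see which one grows
// with the result count (e.g. "sql-query" vs "render-page").
class StageTimer {
    private final Map<String, Long> timings = new LinkedHashMap<>();

    // Runs the work and records its duration in milliseconds.
    void time(String stage, Runnable work) {
        long start = System.nanoTime();
        work.run();
        timings.put(stage, (System.nanoTime() - start) / 1_000_000);
    }

    Map<String, Long> timings() {
        return timings;
    }
}
```

Logging the resulting map for a 10-result and a 200-result search should show immediately whether the time goes into the database call or into building the output.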

QTimer firing issue in QGIS (Quantum GIS)

I have been building a custom QGIS application in which live data is shown in the application's viewer.
The IPC mechanism used is UNIX message queues.
The data is to be refreshed at a specified interval, say 3 seconds.
The problem I am facing is that processing the data to be shown takes more than 3 seconds. So, before the app starts processing data for the next update, I stop the refresh QTimer, and after the data is processed I restart it. The app should work so that after an update/refresh (during which the app goes unresponsive) the user gets ample time to keep working in the app, apart from seeing the data update. I do get acceptable pauses for the user in one scenario.
But on a different OS (RHEL 5.0 versus RHEL 5.2) the situation changes: the timer goes wild and keeps firing with no pause between successive updates, effectively an infinite loop. Handling the update data definitely takes longer than 3 seconds, which is exactly why I stop and restart the timer around the processing; the same logic works on one OS but not the other.
I have also observed that when this rapid firing happens, the refresh function exits very quickly (about 300 ms), so the timer stop/restart I placed at the start and end of that function happens within that small window. Before the actual data processing finishes, there are 3-4 timer firings queued waiting to be handled, so the looping gets worse with every successive update.
The key observation is that for the same code and the same amount of data, the refresh time is reported as around 4000 ms on one OS but 300 ms on the other.
Maybe this has something to do with newer libraries on the updated OS, but I don't know how to debug it because I can't find any clue as to why it happens. Maybe something related to pthreads changed between the OS versions?
So my question is: is there a way to run some processing in my app on a timer, independent of the OS, without using QTimer? I am starting to think QTimer is not a good option for what I want.
What alternatives are there: pthreads or Boost threads? Which would be better if I switch to threads? And how can I guarantee at least a 3-second gap between successive updates, no matter how long the update processing takes?
Kindly help.
Thanks.
If I were looking for an acceptable, longer-term solution, I would investigate updating your display in a separate thread. In that thread you could paint the display to an image, updating as often as you like, although you might want to throttle the thread so it doesn't take all of the available processing time. Then in the UI thread you could read that image and draw it to the screen. That could also improve responsiveness to panning, since you could display different parts of the image. You could update the image every 3 seconds based on a timer (just redrawing from the source), or have the other thread emit a signal whenever the new data is completely refreshed.
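The "at least N seconds between updates, however long processing takes" requirement maps onto a fixed-delay schedule, which measures the gap from the end of one run to the start of the next and therefore can never queue up extra firings. Here is a minimal sketch using Java's ScheduledExecutorService as a language-neutral analogue (in Qt, a single-shot QTimer restarted at the end of each update gives the same guarantee); the class name and the refresh callback are illustrative.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class FixedGapRefresher {
    // scheduleWithFixedDelay measures gapMillis from the END of one run
    // to the START of the next, so a slow refresh cannot cause queued
    // timer firings: there is always at least gapMillis between updates.
    // (scheduleAtFixedRate would NOT give this guarantee.)
    static ScheduledExecutorService start(long gapMillis, Runnable refresh) {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleWithFixedDelay(refresh, 0, gapMillis, TimeUnit.MILLISECONDS);
        return scheduler;
    }
}
```

Usage would be something like `FixedGapRefresher.start(3000, this::refreshViewerData)`, where `refreshViewerData` is a placeholder for the message-queue read plus redraw; since it then runs off the UI thread, the results would need to be handed back to the UI thread for drawing, as the answer above suggests.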
