Actionscript Flex getTimer() maximum - apache-flex

Documentation on the flex getTimer() method states:
int - The number of milliseconds since the runtime was initialized (while processing ActionScript 2.0), or since the virtual machine started (while processing ActionScript 3.0). If the runtime starts playing one SWF file, and another SWF file is loaded later, the return value is relative to when the first SWF file was loaded.
The maximum value for an int is 2,147,483,647, which in milliseconds is a bit less than 25 days. If someone were to leave the Flash application running for an extended period of time, does anyone know what happens when this method reaches the maximum value for an int? Does it reset to zero?

When an int reaches its maximum value of 2,147,483,647 and 1 is added to it, it wraps around to the minimum value, -2,147,483,648. The behaviour is cyclic, so the function should not fail.
EDIT: Code sample added
private function intcheck():void
{
    var a:int = 2147483647;
    var b:int = 1;
    var c:int = a + b; // wraps around to -2147483648 when coerced back to int
    Alert.show(c.toString());
}
Hope that helps.

I don't know the answer for sure, but I would assume the number would roll over. However, if you're concerned about the roll-over, you might want to look at the Timer class, or just use a good ol' timestamp with new Date().getTime() and then do a comparison between the times.

Related

Qt measuring rendering time during which application is frozen

I have a Qt application where, among other things, there is a function render which goes through a list of objects and creates for each a corresponding (subclassed) QGraphicsPathItem, which it then adds as a child to a (subclassed) QGraphicsScene. (In the code below this is done through a visitor GGObjConstructor, which is initialized with the variable scene, i.e. the scene to which the items are to be added.)
XTimer timer;
timer.start();
gobjlist gobjlis = ogc._gobjectlis;
GGObjConstructor ggoc(scene, z_value, bkground_color);
for (auto obj : gobjlis) {
    obj->exec(&ggoc);
}
timer.stop();
My class XTimer is used in an obvious way to measure the time for this procedure.
Now the problem is: only the time spent in the loop where all the items are prepared and inserted into the scene is measured by timer. For a typical example with ~165000 items this gives about 7.5 sec as the timer value when timer.stop() is reached. But the application is still frozen after these 7.5 sec: the window in which the scene is to be displayed is still invisible, and only after about 25 sec (hand-timed) does the display window suddenly appear with all the items.
Now of course I would like to measure this "freeze time" (or the time until the application becomes responsive again, or maybe the time until the display window appears). But I found no way to do this, although I spent some time searching Stack Overflow and the net in general. The best hint I found was
stackoverflow question
The answer there seemed to imply that it would not be really simple to achieve (overriding the paintEvent method and such things).
Question: Is this true? Or is there a simple way to measure the time till the application becomes responsive again/the image is really displayed?
I had a similar problem with an application once, where I wanted to measure how long the app froze, in order to figure out with logging what was causing the freezes. What I came up with was measuring how long the event loop of the main thread was not responding, because that directly corresponds to a frozen app.
The basic idea is not to run a plain QApplication, but to inherit from QApplication and override the notify() function. Some apps do this anyway to catch exceptions which would otherwise break the event loop. Here is a sketch which should bring the idea across:
bool MyApplication::notify(QObject *receiver, QEvent *event)
{
    // m_lastEvent is a member of type std::chrono::system_clock::time_point,
    // holding the time at which the previous event was dispatched.
    auto now = std::chrono::system_clock::now();
    auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(now - m_lastEvent);
    if (elapsed.count() > 1000) {
        // log something like: "Main thread responds again after <elapsed.count()> ms"
    }
    m_lastEvent = now;
    return QApplication::notify(receiver, event);
}
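For completeness, here is a minimal sketch (my addition, not part of the original answer; class and member names are just illustrative) of how such a subclass could be declared and used in place of a plain QApplication:

#include <QApplication>
#include <chrono>

class MyApplication : public QApplication
{
public:
    MyApplication(int &argc, char **argv)
        : QApplication(argc, argv), m_lastEvent(std::chrono::system_clock::now()) {}

    bool notify(QObject *receiver, QEvent *event) override; // defined as shown above

private:
    std::chrono::system_clock::time_point m_lastEvent;
};

int main(int argc, char *argv[])
{
    MyApplication app(argc, argv); // use the subclass instead of QApplication
    // ... create and show the main window as usual ...
    return app.exec();
}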
Note 1: If your app does not continuously run through notify(), you can for testing purposes introduce a dummy QTimer which triggers faster than the logging time threshold.
Note 2: If you use multiple threads, especially QThreads, it could be necessary to filter the receiver object and perform that code only if the receiver lives in the main thread.
With this code you can log every freeze of the main thread (frozen GUI) and determine the length of the freeze. With proper logging you can figure out what's causing the freezes.
Keep in mind that this will only log after the freeze has resolved!
Addition:
It is more complicated and slower, but for debugging/investigation purposes you can store the last event and the object tree of the receiver and log that as well. Then you even know which event triggered the freeze and which object received it.
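A rough sketch of what that extra bookkeeping could look like inside the same notify() override (my own illustration; the members m_lastEventType and m_lastReceiverPath are made-up names that would have to be added to the class):

bool MyApplication::notify(QObject *receiver, QEvent *event)
{
    // ... freeze detection as above; when a freeze is detected, log what was
    // recorded for the event that preceded it, e.g.:
    //   qWarning() << "last event type:" << m_lastEventType
    //              << "receiver path:" << m_lastReceiverPath;

    // Remember the current event and the receiver's object tree for next time.
    m_lastEventType = event->type();
    m_lastReceiverPath.clear();
    for (QObject *obj = receiver; obj != nullptr; obj = obj->parent())
        m_lastReceiverPath.prepend(obj->objectName() + QLatin1Char('/'));

    return QApplication::notify(receiver, event);
}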

QThreadStorage vs C++11 thread_local

As the title suggests: what are the differences? I know that QThreadStorage existed long before the thread_local keyword, which is probably why it exists in the first place - but the question remains: is there any significant difference between the two in what they do (apart from the extended API of QThreadStorage)?
I am assuming the normal use case of a static variable - using QThreadStorage as a non-static variable is possible, but not recommended.
static thread_local int a;
// vs
static QThreadStorage<int> b;
Well, since Qt is open source you can basically figure out the answers from the Qt sources. I am currently looking at 5.9, and here are some things that I'd classify as significant:
1) Looking at qthreadstorage.cpp, the get/set methods both have this block in the beginning:
QThreadData *data = QThreadData::current();
if (!data) {
    qWarning("QThreadStorage::set: QThreadStorage can only be used with threads started with QThread");
    return 0;
}
So (and maybe this has changed/will change) you can't mix QThreadStorage with anything other than QThread, while the thread_local keyword does not have similar restrictions.
2) Quoting cppreference on thread_local:
thread storage duration. The object is allocated when the thread begins and deallocated when the thread ends.
Looking at qthreadstorage.cpp, the QThreadStorage class actually does not contain any storage for T; the actual storage is allocated upon the first get call:
QVector<void *> &tls = data->tls;
if (tls.size() <= id)
    tls.resize(id + 1);
void **v = &tls[id];
So this means that the lifetime is quite different between the two - with QThreadStorage you actually have more control, since you create the object yourself and set it with the setter, or have it default-initialized by the getter. E.g. if the constructor for the type could throw, with QThreadStorage you could catch that; with thread_local I am not even sure.
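As a rough illustration of that difference in control (my own sketch, not taken from the Qt sources; the Expensive type and the helper function are made up):

#include <QThreadStorage>

struct Expensive
{
    explicit Expensive(int n) : value(n) {} // imagine this constructor could throw
    int value;
};

// thread_local: constructed automatically (typically on first use in the thread);
// there is no place here to catch a throwing constructor.
static thread_local Expensive tlsValue{42};

// QThreadStorage: nothing is constructed until you decide to do it yourself.
static QThreadStorage<Expensive *> storage;

Expensive *expensiveForThisThread()
{
    if (!storage.hasLocalData()) {
        try {
            storage.setLocalData(new Expensive(42)); // QThreadStorage takes ownership
        } catch (...) {
            return nullptr; // construction failure handled explicitly
        }
    }
    return storage.localData();
}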
I assume these two are significant enough not to go any deeper.

Adobe AIR issue

I have an Adobe AIR application with a chart component displaying some online data.
The chart component has to display a bunch of parameters, about 10 of them.
The application has a timer to display current system time.
The chart component is invoked from the timer (using a bindable object) as shown below. A separate bindable object is used to prevent delay in the timer.
private function onTimerEvent(event:TimerEvent):void
{
    // Update time on screen
    oCurrDate = new Date();
    curTime = oDateformatter.format(oCurrDate).toString();
    oCurrDate = null;
    // oUpdate - Bindable object to inform chart about update
    // Call chart every 250 ms
    oUpdate.bUpdate = !oUpdate.bUpdate;
}
I am using a callLater() call in the chart's update event handler, as shown below. updateParameter() updates the dataProvider of each parameter and draws it.
public function onUpdateEvent(evt:PropertyChangeEvent):void
{
    // aPlotIndex set to 0 for first parameter
    aPlotIndex.push(0);
    this.callLater(updateParameter, aPlotIndex);
}
The problem is that the timer and chart updates stop after the application has been running for 20 to 21 days. After that, the update happens only on mouse move: when I move the mouse, the displayed time and the data in the chart are updated.
I profiled the application and found no memory leak issues. I am logging all errors, but I didn't find any error in the log either. Everything seems weird.
Is it because of using the callLater function?
Is the timer still running but the frame update not happening?
I am using Flex SDK 3.3 and AIR version 2.0.4.13090.
Please guide me to resolve this issue.
Please read the documentation for Timer.
The total number of times the timer is set to run. If the repeat count is set to 0, the timer continues indefinitely, up to a maximum of 24.86 days, or until the stop() method is invoked or the program stops. If the repeat count is nonzero, the timer runs the specified number of times. If repeatCount is set to a total that is the same or less than currentCount, the timer stops and will not fire again.
The maximum is due to the maximum value of an int, which is what the timer uses to store its current count in milliseconds. An int has a maximum value of 2,147,483,647, and 2,147,483,647 ms ÷ (1000 ms × 60 s × 60 min × 24 h) ≈ 24.86 days (give or take, depending on how the docs' number is rounded), which aligns with the fact that the timer internally stores its count in milliseconds.
So you cannot set an option to fix this. Instead, you can work around it, assuming nothing actually relies on your timer's value. Each time the timer runs, call
timer.reset();
timer.start();
That will reset the timer to 0 and start it back up again, meaning you will never hit that 24.86-day limit, unless your delay is longer than that (which it cannot possibly be).
Additionally, I'd like to point out that the Flex and AIR versions you are running are severely out of date. Both are nearly 5 years old at this point. AIR may not even run properly on newer versions of Windows and OS X because of it (and won't run on mobile at all). Not a big deal if everything works, but always good to know (many posters on SO do not realize how out of date their Flex and AIR SDKs are)
References: Timer.repeatCount, int.MAX_VALUE

How do game trainers change an address in memory that's dynamic?

Let's assume I am a game and I have a global int* that contains my health. A game trainer's job is to modify this value to whatever is needed in order to achieve god mode. I've looked up tutorials on game trainers to understand how they work, and the general idea is to use a memory scanner to try to find the address of a certain value, and then modify that address by injecting a DLL or whatever.
But I made a simple program with a global int*, and its address changes every time I run the app, so I don't get how game trainers can hard-code these addresses. Or is my example wrong?
What am I missing?
The way this is usually done is by tracing the pointer chain from a static variable up to the heap address containing the variable in question. For example:
#include <vector>

struct CharacterStats
{
    int health;
    // ...
};

class Character
{
public:
    CharacterStats* stats;
    // ...
    void die();
    void hit(int damage)
    {
        stats->health -= damage;
        if (stats->health <= 0)
            die();
    }
};

class Game
{
public:
    Character* main_character;
    std::vector<Character*> enemies;
    // ...
};

Game* game;

int main()
{
    game = new Game();
    game->main_character = new Character();
    game->main_character->stats = new CharacterStats;
    // ...
}
In this case, if you follow mikek3332002's advice and set a breakpoint inside the Character::hit() function and nop out the subtraction, it would cause all characters, including enemies, to be invulnerable. The solution is to find the address of the "game" variable (which should reside in the data segment or a function's stack), and follow all the pointers until you find the address of the health variable.
Some tools, e.g. Cheat Engine, have functionality to automate this, and attempt to find the pointer chain by themselves. You will probably have to resort to reverse-engineering for more complicated cases, though.
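To make the pointer-chain idea concrete, here is a rough Windows-only sketch from the trainer's side (my addition, not from the answer above; the process id, module base and all offsets are hypothetical values that a scanner/disassembler would have to find first):

#include <windows.h>
#include <cstdint>

// Hypothetical offsets, found beforehand with a memory scanner / disassembler.
const uintptr_t GAME_PTR_OFFSET  = 0x1A2B30; // offset of "game" inside the exe image
const uintptr_t CHARACTER_OFFSET = 0x08;     // Game::main_character
const uintptr_t STATS_OFFSET     = 0x04;     // Character::stats
const uintptr_t HEALTH_OFFSET    = 0x00;     // CharacterStats::health

int main()
{
    DWORD pid = 1234; // process id of the game (normally found via EnumProcesses etc.)
    HANDLE proc = OpenProcess(PROCESS_VM_READ | PROCESS_VM_WRITE | PROCESS_VM_OPERATION,
                              FALSE, pid);
    if (!proc)
        return 1;

    // Base address of the game executable in the target process; with ASLR this can
    // change every run, which is why only the offsets are hard-coded.
    uintptr_t moduleBase = 0x400000; // e.g. found via EnumProcessModules/GetModuleBaseName

    // Follow the chain: exe+offset -> Game* -> Character* -> CharacterStats*
    uintptr_t gamePtr = 0, characterPtr = 0, statsPtr = 0;
    ReadProcessMemory(proc, (LPCVOID)(moduleBase + GAME_PTR_OFFSET), &gamePtr, sizeof(gamePtr), nullptr);
    ReadProcessMemory(proc, (LPCVOID)(gamePtr + CHARACTER_OFFSET), &characterPtr, sizeof(characterPtr), nullptr);
    ReadProcessMemory(proc, (LPCVOID)(characterPtr + STATS_OFFSET), &statsPtr, sizeof(statsPtr), nullptr);

    // Finally write a new health value ("god mode").
    int newHealth = 9999;
    WriteProcessMemory(proc, (LPVOID)(statsPtr + HEALTH_OFFSET), &newHealth, sizeof(newHealth), nullptr);

    CloseHandle(proc);
    return 0;
}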
Discovery of the access pointers is quite cumbersome and static memory values are difficult to adapt to different compilers or game versions.
With API hooking of malloc(), free(), etc. there is a different method than following pointers. Discovery starts with recording all dynamic memory allocations and doing memory search in parallel. The found heap memory address is then reverse matched against the recorded memory allocations. You get to know the size of the object and the offset of your value within the object. You repeat this with backtracing and get the jump-back code address of a malloc() call or a C++ constructor. With that information you can track and modify all objects which get allocated from there. You dump the objects and compare them and find a lot more interesting values. E.g. the universal elite game trainer "ugtrain" does it like this on Linux. It uses LD_PRELOAD.
Adaptation works by "objdump -D"-based disassembly and simply searching for the library function call with the known memory size in it.
See: http://en.wikipedia.org/wiki/Trainer_%28games%29
Ugtrain source: https://github.com/sriemer/ugtrain
The malloc() hook looks like this:
#define _GNU_SOURCE   /* for RTLD_NEXT */
#include <stdbool.h>
#include <stddef.h>
#include <dlfcn.h>

/* provided elsewhere by the preload library */
void postprocess_malloc(size_t size, void *mem_addr);

static __thread bool no_hook = false;

void *malloc (size_t size)
{
    void *mem_addr;
    static void *(*orig_malloc)(size_t size) = NULL;

    /* handle malloc() recursion correctly */
    if (no_hook)
        return orig_malloc(size);

    /* get the libc malloc function */
    no_hook = true;
    if (!orig_malloc)
        *(void **) (&orig_malloc) = dlsym(RTLD_NEXT, "malloc");

    mem_addr = orig_malloc(size);

    /* real magic -> backtrace and send out spied information */
    postprocess_malloc(size, mem_addr);
    no_hook = false;

    return mem_addr;
}
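As a usage note (an assumption about the typical setup, not something stated in the answer): a hook like this is normally built as a shared library, e.g. gcc -shared -fPIC -o libmemhack.so hook.c -ldl, and injected with LD_PRELOAD=./libmemhack.so ./game so that the hooked malloc() is resolved before the libc one.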
But if the found memory address is located within the executable or a library in memory, then ASLR is likely the cause of it being dynamic. On Linux, libraries are PIC (position-independent code), and with the latest distributions all executables are PIE (position-independent executables) as well.
EDIT: Never mind, it seems it was just good luck; however, the last 3 digits of the pointer seem to stay the same. Perhaps this is ASLR kicking in and changing the base image address or something?
Ahhh, my bad, I was using %d with printf to print the address and not %p. After using %p the address stayed the same.
#include <stdio.h>

int *something = NULL;

int main()
{
    something = new int;
    *something = 5;
    fprintf(stdout, "Address of something: %p\nValue of something: %d\nPointer Address of something: %p",
            (void *)&something, *something, (void *)something);
    getchar();
    return 0;
}
Example of a dynamically allocated variable
The value I want to find is the number of lives, so I can stop my lives from being reduced to 0 and getting game over.
Play the game and search for the location of the lives variable in this instance.
Once found, use a disassembler/debugger to watch that location for changes.
Lose a life.
The debugger should have reported the address at which the decrement occurred.
Replace that instruction with no-ops (see the sketch below).
Got this pattern from the program called tsearch.
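The final step, replacing the found instruction with no-ops, could look roughly like this from an external trainer process on Windows (my own sketch; the address and instruction length are placeholders, not values from the answer):

#include <windows.h>
#include <cstdint>
#include <cstring>

// Patch `length` bytes at `address` in the target process with NOP (0x90) opcodes,
// e.g. to overwrite the "decrement lives" instruction found with the debugger.
bool nopOutInstruction(HANDLE proc, uintptr_t address, size_t length)
{
    unsigned char nops[16];
    if (length > sizeof(nops))
        return false;
    std::memset(nops, 0x90, length); // 0x90 = x86 NOP

    DWORD oldProtect = 0;
    if (!VirtualProtectEx(proc, (LPVOID)address, length, PAGE_EXECUTE_READWRITE, &oldProtect))
        return false;

    bool ok = WriteProcessMemory(proc, (LPVOID)address, nops, length, nullptr) != 0;

    // Restore the original protection and make sure the CPU sees the new code.
    VirtualProtectEx(proc, (LPVOID)address, length, oldProtect, &oldProtect);
    FlushInstructionCache(proc, (LPCVOID)address, length);
    return ok;
}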
A few related websites found from researching this topic:
http://deviatedhacking.com/index.php?/topic/75-dynamic-memory-allocation/
http://www.edgeofnowhere.cc/viewforum.php?f=183
http://www.oldschoolhack.de/tutorials/Theories%20and%20methods%20of%20code-caves.htm
http://webcache.googleusercontent.com/search?q=cache:4wzMzFIZx54J:gamehacking.com/forums/tutorials-beginners/11597-c-making-game-trainer.html+reading+a+dynamic+memory+address+game+trainer&cd=2&hl=en&ct=clnk&gl=au&client=firefox-a (A google cache version)
http://www.codeproject.com/KB/cpp/codecave.aspx
The way things like Gameshark codes were figured out was by dumping the memory image of the application, doing one thing, then looking to see what changed. There might be a few things changing, but there should be patterns to look for. E.g. dump memory, shoot, dump memory, shoot again, dump memory, reload; then look for changes and get an idea of where/how ammo is stored. For health it'll be similar, but a lot more things will be changing (since you'll be moving at the very least). It'll be easiest, though, to do it while minimizing the "external effects": e.g. don't try to diff memory dumps during a firefight, because a lot is happening; do your diffs while standing in lava, or falling off a building, or something of that nature.
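A toy sketch of that diffing idea (my own illustration, not from the answer): given two raw snapshots of the same memory region taken before and after an action, keep only the offsets whose 32-bit value changed, and repeat the process until only a few candidates remain.

#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Offsets whose 32-bit value differs between two snapshots of the same region.
// Repeatedly intersecting such candidate sets (shoot, snapshot, diff, ...) narrows
// the search down to the location that actually holds the ammo/health value.
std::vector<size_t> changedOffsets(const std::vector<uint8_t> &before,
                                   const std::vector<uint8_t> &after)
{
    std::vector<size_t> candidates;
    size_t n = std::min(before.size(), after.size());
    for (size_t off = 0; off + sizeof(uint32_t) <= n; off += sizeof(uint32_t)) {
        uint32_t a, b;
        std::memcpy(&a, &before[off], sizeof(a));
        std::memcpy(&b, &after[off], sizeof(b));
        if (a != b)
            candidates.push_back(off);
    }
    return candidates;
}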

Memory usage in Flash / Flex / AS3

I'm having some trouble with memory management in a flash app. Memory usage grows quite a bit, and I've tracked it down to the way I load assets.
I embed several raster images in a class Embedded, like this
[Embed(source="/home/gabriel/text_hard.jpg")]
public static var ASSET_text_hard_DOT_jpg : Class;
I then instantiate the assets this way:
var pClass : Class = Embedded[sResource] as Class;
return new pClass() as Bitmap;
At this point, memory usage goes up, which is perfectly normal. However, nulling all the references to the object doesn't free the memory.
Based on this behavior, it looks like the Flash player creates an instance of the class the first time I request it, but never ever releases it - not when all references are gone, not when calling System.gc(), not with the double LocalConnection trick, and not when calling dispose() on the BitmapData objects.
Of course, this is very undesirable - memory usage would grow until everything in the SWFs is instanced, regardless of whether I stopped using some asset long ago.
Is my analysis correct? Can anything be done to fix this?
Make sure you run your tests again in the non-debug player. Debug player doesn't always reclaim all the memory properly when releasing assets.
Also, because you're using an Embedded rather than loaded asset, the actual data might not ever be released. As it's part of your SWF, I'd say you could reasonably expect it to be in memory for the lifetime of your SWF.
Why do you want to use this?
var pClass : Class = Embedded[sResource] as Class;
return new pClass() as Bitmap;
Sometimes dynamic resource assignment is buggy and fails to be freed up. I had similar problems before with the Flash player and Flex, e.g. loading and unloading the same external SWF: the memory was increasing constantly with the size of the loaded SWF, without going down even when I called System.gc() after unloading the SWF.
So my suggestion is to skip this approach and use the first case you have described.
UPDATE 1
<?xml version="1.0" encoding="utf-8"?>
<s:Application
xmlns:fx = "http://ns.adobe.com/mxml/2009"
xmlns:s = "library://ns.adobe.com/flex/spark"
xmlns:mx = "library://ns.adobe.com/flex/mx"
creationComplete = "creationComplete()">
<fx:Script>
<![CDATA[
import flash.utils.Dictionary;
[Embed(source="/assets/logo1w.png")]
private static var asset1:Class;
[Embed(source="/assets/060110sinkhole.jpg")]
private static var asset2:Class;
private static var _dict:Dictionary = new Dictionary();
private static function initDictionary():void
{
_dict["/assets/logo1w.png"] = asset1;
_dict["/assets/060110sinkhole.jpg"] = asset2;
}
public static function getAssetClass(assetPath:String):Class
{
// if the asset is already in the dictionary then just return it
if(_dict[assetPath] != null)
{
return _dict[assetPath] as Class;
}
return null;
}
private function creationComplete():void
{
initDictionary();
var asset1:Class = getAssetClass("/assets/logo1w.png");
var asset2:Class = getAssetClass("/assets/060110sinkhole.jpg");
var asset3:Class = getAssetClass("/assets/logo1w.png");
var asset4:Class = getAssetClass("/assets/060110sinkhole.jpg");
var asset5:Class = getAssetClass("/assets/logo1w.png");
}
]]>
</fx:Script>
</s:Application>
At this point, memory usage goes up, which is perfectly normal. However, nulling all the references to the object doesn't free the memory.
That's also perfectly normal. It's rare for any system to guarantee that the moment you stop referencing an object in the code is the same moment that the memory for it is returned to the operating system. That's exactly why methods like System.gc() exist: to let you force a clean-up when you need one. Usually the application may implement some sort of pooling to keep objects and memory around for efficiency purposes (as memory allocation is typically slow). And even if the application does return the memory to the operating system, the OS might still consider it assigned to the application for a while, in case the app needs to request more shortly afterwards.
You only need to worry if that freed memory is not getting reused. For example, you should find that if you create an object, free it, and repeat this process, memory usage should not grow linearly with the number of objects you create, as the memory for previously freed objects gets reallocated into the new ones. If you can confirm that does not happen, edit your question to say so. :)
It turns out I was keeping references to the objects that didn't unload - very tricky ones. The GC does work correctly in all the cases I described, contrary to what I had suspected.
More precisely, loading a SWF, instantiating lots of classes defined in it, adding them to the display list, manipulating them, and then removing all references and unloading the SWF leaves me with almost exactly the same memory I started with.
