heap memory release policy in Arduino

#include <Arduino.h>
#include "include/MainComponent.h"
/*
 Turns an LED on for one second, then off for one second, repeatedly.
*/
MainComponent* mainComponent;
void setup()
{
   mainComponent = new MainComponent();
   mainComponent->beginComponent();
}
void loop()
{
   mainComponent->runComponent();
}
Is there any callback to release memory in Arduino (e.g. to call delete mainComponent)?
Or will this happen automatically when loop() ends?
What is the strategy for ensuring that the memory allocated in this snippet is freed?
SCENARIO: "I wanted to access the object in both methods, so the object is declared at global scope and then instantiated in setup()."
What happens when loop() terminates? Will mainComponent still remain in memory?
Under an OS the answer would be no: the process would terminate and its resources would be deallocated.
So how can I achieve the SCENARIO above in Arduino, while ensuring the memory is deallocated when the controller is switched off?

What is confusing you is that the main() function is hidden by the basic Arduino IDE. Your program has a main() function just like on any other platform, and it has the same lifecycle as a program run on a computer with an OS. If you look under arduino___\hardware\cores\arduino, you will find a file main.cpp, which is compiled into your binary:
int main(void)
{
    init();
    //...
    setup();
    for (;;) {
        loop();
        if (serialEventRun) serialEventRun();
    }
    return 0;
}
Looking at this file you can see that when you return from loop(), it is simply called again; your program never exits. In general, your best pattern on a microcontroller is to new objects once and never delete them, exactly as you have done here. If you are new'ing and delete'ing objects repeatedly on a microcontroller, you are not managing lifecycles and resources wisely.
So
"is the new'd object deleted at return from loop()?" No, the program is still running and it stays on the heap.
"What happens at power off? Is there a way to clean up?" The moment the supply voltage drops too low, the microcontroller will stop executing instructions. Power supervisor circuitry prevents the controller from doing anything erratic as the voltage drops (should prevent) When the voltage is conpletely drained, all the RAM is lost. Without adding circuitry, you have no way to execute any clean up at power off.
"Do I need to clean up?" No, at power up, everything is reset to a known state. Operation cannot be affected by anything left behind in RAM (presumes you initialize all your variables).

Related

Device-side enqueue causes CL_OUT_OF_RESOURCES

I have a program utilizing OpenCL 2.0 because I want to take advantage of device-side enqueue. I have a test program that performs the following tasks on the host side:
1. Allocates 16 kilobytes of floating-point memory on the device and zeros it out.
2. Builds the OpenCL program below and creates a kernel for masterKernel().
3. Sets the first argument of masterKernel() (heap) to the memory allocated in step 1.
4. Enqueues masterKernel() via clEnqueueNDRangeKernel() with a work_dim of 1 and a global work size of 1 (so it runs only once, with get_global_id(0) always zero).
5. Reads the memory back to the host and displays it.
Here is the OpenCL code:
//This function was stripped down to nothing for testing purposes.
kernel void childKernel(global float* heap)
{
}

//Enqueues the child kernel.
kernel void masterKernel(global float* heap)
{
    ndrange_t ndRange = ndrange_1D(16); //Arbitrary, could be any number.
    if (get_global_id(0) == 0)
    {
        enqueue_kernel(get_default_queue(), 0, ndRange,
                       ^{ childKernel(heap); });
    }
}
The program builds successfully. However, when I try to run masterKernel(), the call to enqueue_kernel() causes the host-side call to clEnqueueNDRangeKernel() to fail with error code CL_OUT_OF_RESOURCES. OpenCL's documentation says enqueue_kernel() should return CL_SUCCESS or CL_ENQUEUE_FAILURE depending on whether the block enqueues successfully; it does not say that clEnqueueNDRangeKernel() itself should fail. Here are some other things I've tried:
Commenting out the call to enqueue_kernel() makes the program succeed.
Adding a line that sets heap[0] to some number causes the host-side program to reflect that change, so I know it's not a problem with how I'm passing the arguments.
Modifying the if statement to something impossible like if(get_global_id(0) == 6000) still causes the error. This tells me the error is not caused by enqueue_kernel() actually executing (I verified get_global_size(0) == 1), but merely by its presence in the program.
Modifying the if statement to if(0) does make the error go away.
Making childKernel() actually do something does not make the error go away.
I am not really sure what to try next. I know my device, an AMD Radeon R9 380 graphics card, supports OpenCL 2.0. I do not have access to any other OpenCL 2.0-capable hardware to test on.
I ended up figuring this one out. The issue happened because I did not create a device-side queue (one with the flags CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE | CL_QUEUE_ON_DEVICE | CL_QUEUE_ON_DEVICE_DEFAULT).
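A minimal sketch of creating that device-side default queue with the standard clCreateCommandQueueWithProperties() call (assuming context and device already exist; error handling omitted, and the 16 KB queue size is just an arbitrary example):

// Create the device-side default queue that enqueue_kernel() needs.
// get_default_queue() in the kernel will then return this queue.
cl_queue_properties props[] = {
    CL_QUEUE_PROPERTIES,
    (cl_queue_properties)(CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE |
                          CL_QUEUE_ON_DEVICE |
                          CL_QUEUE_ON_DEVICE_DEFAULT),
    CL_QUEUE_SIZE, 16 * 1024,   // optional
    0
};
cl_int err;
cl_command_queue device_queue =
    clCreateCommandQueueWithProperties(context, device, props, &err);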

How to use Serial as an interrupt for other functions

I'm making an LED control program (using FastLED, of course) and using Serial (with the Serial monitor) to control it. I just connected it via USB, and for the most part it works just fine. However, I noticed that with the long flashing routines I couldn't stop them and make the LEDs do something else. The flash routine in my code is very simple:
void flash(CRGB color, int count, int del) {
    for (int i = 0; i < count; i++) {
        if (pause) {
            break;
        }
        fillLeds(color.r, color.g, color.b);
        milliDelay(del);
        fillLeds(0, 0, 0);
        milliDelay(del);
    }
}
Here fillLeds(r, g, b) is a for loop that sets all the LEDs to a certain color, and milliDelay() is a delay built on millis() rather than the blocking delay() function.
I need to be able to pause not just this but other functions as well (probably using break;) and then execute other code. It seems easy, right? Well, I've noticed that when I send a byte over Serial, it goes into a queue, if you will, and is read sequentially.
I can't have this happen. I need the next byte arriving on Serial to trigger some kind of event that pauses the running flash() function and is then acted upon. I have implemented this like:
void loop()
{
    if (Serial.available() > 0)
    {
        int x = Serial.read();
        Serial.print(x);
        handleRequest(x);
    }
    FastLED.show();
    FastLED.delay(1000 / UPDATES_PER_SECOND);
}
Where handleRequest(x); is just a long switch statement with calls to the flash method, with different colors being used, etc.
How can I make the Arduino pause other loops whenever a new byte is received, instead of adding it to this queue to be acted upon later? If this is not possible, thanks for reading anyway. I've tried using serialEvent(), which doesn't appear to work.
I think you need two loops. You have one, your main loop, and you can add another (a bit like multithreading) with the TimerOne library, like this:
Timer1.initialize(period);            // period in microseconds
Timer1.attachInterrupt(yourFunction); // runs from the timer interrupt
Then you can have an if statement with a flag variable in your second loop to pause some function, and update the flag from your first loop, or something like that; see the sketch below.
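A minimal sketch of that idea; checkSerial() and the 10 ms period are made-up examples, and pause is the flag the question's flash() routine already tests:

#include <TimerOne.h>

volatile bool pause = false;

// Runs in the timer interrupt every 10 ms. Keep ISRs short:
// only set a flag here and do the real serial handling in loop().
void checkSerial()
{
    if (Serial.available() > 0) {
        pause = true;
    }
}

void setup()
{
    Serial.begin(9600);
    Timer1.initialize(10000);            // 10 ms, in microseconds
    Timer1.attachInterrupt(checkSerial);
}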
Presuming you want interrupt-like functionality when a new byte arrives:
Unfortunately, serialEvent() is not a true interrupt. It only runs at the end of loop(), if serial data is available.
However, serialEvent() is just a function, and there isn't any reason you can't call it in your own code as often as you like. This is effectively polling for new serial data as often as possible. So, while your loops are running, call serialEvent() during your delays and handle the serial data there.
You may need to restructure your code to avoid recursion, though. If flash() calls serialEvent(), which calls flash(), which calls serialEvent(), and so on, you may end up overflowing the stack.
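One recursion-free way to structure the polling is to have milliDelay() itself watch the serial buffer and only set the pause flag, leaving the byte in the buffer for loop() to read and dispatch. A sketch (assumes the pause flag and milliDelay() from the question):

volatile bool pause = false;

// Non-blocking delay that polls for serial data while waiting.
// It only sets a flag, so nothing here can recurse into flash().
void milliDelay(unsigned long ms)
{
    unsigned long start = millis();
    while (millis() - start < ms) {
        if (Serial.available() > 0) {
            pause = true;   // flash() tests this flag and breaks out
            return;         // the byte stays buffered for loop()
        }
    }
}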

Restore serial port attributes even after control-C?

When using a serial port via POSIX, it's recommended to save the original attributes using tcgetattr() before changing them with tcsetattr(), and then restore them before closing the port. What about when a program is terminated by pressing control-C or when the program receives SIGINT? I haven't seen this covered in any of the serial tutorials.
Apparently an atexit() function wouldn't be sufficient, because it isn't called by the default SIGINT handler. So it seems it would be necessary to install a signal handler that restores the attributes of any serial ports still open. Is it even safe to call tcsetattr() from a signal handler?
One might simply dismiss this issue as insignificant, but it's common to terminate a program with control-C, especially one that can take tens of seconds to complete its operations. If it's OK not to preserve serial port settings in this case, then there seems little reason to preserve them at all. If anything, it might be better not to bother at all than to do it inconsistently.
I found some examples of source code doing the above, but nothing well-documented. I guess I'm interested in some discussion of whether this is a good idea. Thanks.
After further research I think I've answered this to my satisfaction.
First, in the man page for signal I noticed that a signal handler is specifically allowed to call tcsetattr(), along with a few others:
The signal handler routine must be very careful, since processing elsewhere was interrupted at some arbitrary point. POSIX has the concept of "safe function". If a signal interrupts an unsafe function, and handler calls an unsafe function, then the behavior is undefined. Safe functions are listed explicitly in the various standards. The POSIX.1-2003 list is ... `raise()` ... `signal()` ... `tcsetattr()` [trimmed to relevant ones]
This strongly suggests that the POSIX committee had exactly this kind of thing in mind, and it leads to a straightforward approach: change the SIGINT handler once you've opened the serial port and saved its attributes; then, in your handler, restore the attributes and the old SIGINT handler, and re-raise the signal:
#include <signal.h>
#include <termios.h>

static void (*prev_sigint)( int );
static struct termios saved_attr;
static int fd;

static void cleanup( int ignored )
{
    tcsetattr( fd, TCSANOW, &saved_attr );  /* restore serial attributes */
    signal( SIGINT, prev_sigint );          /* restore previous handler */
    raise( SIGINT );                        /* re-raise for default behavior */
}

int main( void )
{
    open_serial_and_save_attrs();
    prev_sigint = signal( SIGINT, cleanup );
    ...
}

How do game trainers change an address in memory that's dynamic?

Let's assume I am a game, and I have a global int* that contains my health. A game trainer's job is to modify this value to whatever is needed in order to achieve god mode. I've looked up tutorials on game trainers to understand how they work, and the general idea is to use a memory scanner to find the address of a certain value, then modify that address by injecting a DLL or whatever.
But I made a simple program with a global int*, and its address changes every time I run the app, so I don't get how game trainers can hard-code these addresses. Or is my example wrong?
What am I missing?
The way this is usually done is by tracing the pointer chain from a static variable up to the heap address containing the variable in question. For example:
#include <vector>

struct CharacterStats
{
    int health;
    // ...
};

class Character
{
public:
    CharacterStats* stats;
    // ...
    void hit(int damage)
    {
        stats->health -= damage;
        if (stats->health <= 0)
            die();
    }
};

class Game
{
public:
    Character* main_character;
    std::vector<Character*> enemies;
    // ...
};

Game* game;

int main()
{
    game = new Game();
    game->main_character = new Character();
    game->main_character->stats = new CharacterStats;
    // ...
}
In this case, if you follow mikek3332002's advice and set a breakpoint inside the Character::hit() function and NOP out the subtraction, it would make all characters, including enemies, invulnerable. The solution is to find the address of the game variable (which resides in the data segment or on a function's stack) and follow all the pointers until you find the address of the health variable.
Some tools, e.g. Cheat Engine, have functionality to automate this, and attempt to find the pointer chain by themselves. You will probably have to resort to reverse-engineering for more complicated cases, though.
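On Windows, once a chain of offsets is known, a trainer typically resolves it at runtime with ReadProcessMemory(). A rough sketch; the function name and the idea of passing the offsets as an array are illustrative, not taken from any particular tool:

#include <windows.h>
#include <cstddef>
#include <cstdint>

// Follow a pointer chain: read a pointer at the current address,
// then add the next offset, and repeat. Returns 0 on failure.
uintptr_t resolveChain(HANDLE proc, uintptr_t base,
                       const uintptr_t* offsets, size_t count)
{
    uintptr_t addr = base;
    for (size_t i = 0; i < count; ++i) {
        uintptr_t next = 0;
        if (!ReadProcessMemory(proc, (LPCVOID)addr, &next,
                               sizeof(next), NULL))
            return 0;
        addr = next + offsets[i];
    }
    return addr; // e.g. the final address of stats->health
}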
Discovering the access pointers is quite cumbersome, and hard-coded memory addresses are difficult to adapt to different compilers or game versions.
With API hooking of malloc(), free(), etc., there is an alternative to following pointers. Discovery starts with recording all dynamic memory allocations while running a memory search in parallel. The found heap address is then matched back against the recorded allocations. This tells you the size of the object and the offset of your value within it. Repeating this with backtracing gives you the code address of the malloc() call or C++ constructor that allocated it. With that information you can track and modify all objects allocated from that spot; you dump the objects, compare them, and find a lot more interesting values. The universal elite game trainer "ugtrain" does it like this on Linux, using LD_PRELOAD.
Adaptation works by disassembling with "objdump -D" and searching for the library-function call with the known memory size in it.
See: http://en.wikipedia.org/wiki/Trainer_%28games%29
Ugtrain source: https://github.com/sriemer/ugtrain
The malloc() hook looks like this:
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdbool.h>
#include <stdlib.h>

static __thread bool no_hook = false;

void *malloc (size_t size)
{
    void *mem_addr;
    static void *(*orig_malloc)(size_t size) = NULL;

    /* handle malloc() recursion correctly */
    if (no_hook)
        return orig_malloc(size);

    /* get the libc malloc function */
    no_hook = true;
    if (!orig_malloc)
        *(void **) (&orig_malloc) = dlsym(RTLD_NEXT, "malloc");
    mem_addr = orig_malloc(size);

    /* real magic -> backtrace and send out spied information */
    postprocess_malloc(size, mem_addr);
    no_hook = false;
    return mem_addr;
}
But if the found memory address is located within the executable or a library in memory, then ASLR is likely the cause of the changing addresses. On Linux, libraries are PIC (position-independent code), and on recent distributions all executables are PIE (position-independent executables) as well.
EDIT: Never mind, it seems it was just good luck; however, the last 3 digits of the pointer seem to stay the same. Perhaps this is ASLR kicking in and changing the base image address or something?
Ah, my bad, I was using %d with printf to print the address rather than %p. After switching to %p, the address stayed the same.
#include <stdio.h>

int *something = NULL;

int main()
{
    something = new int;
    *something = 5;
    fprintf(stdout, "Address of something: %p\n"
                    "Value of something: %d\n"
                    "Pointer Address of something: %p\n",
            (void*)&something, *something, (void*)something);
    getchar();
    return 0;
}
Example of a dynamically allocated variable.
The value I want to find is the number of lives, to stop my lives from being reduced to 0 and getting a game over.
Play the game and search for the location of the lives variable in this instance.
Once found, use a disassembler/debugger to watch that location for changes.
Lose a life.
The debugger should report the address where the decrement occurred.
Replace that instruction with no-ops.
I got this pattern from the program called tsearch.
A few related websites found from researching this topic:
http://deviatedhacking.com/index.php?/topic/75-dynamic-memory-allocation/
http://www.edgeofnowhere.cc/viewforum.php?f=183
http://www.oldschoolhack.de/tutorials/Theories%20and%20methods%20of%20code-caves.htm
http://webcache.googleusercontent.com/search?q=cache:4wzMzFIZx54J:gamehacking.com/forums/tutorials-beginners/11597-c-making-game-trainer.html+reading+a+dynamic+memory+address+game+trainer&cd=2&hl=en&ct=clnk&gl=au&client=firefox-a (A google cache version)
http://www.codeproject.com/KB/cpp/codecave.aspx
The way things like GameShark codes were figured out was by dumping the memory image of the application, doing one thing, then looking to see what changed. There might be a few things changing, but there should be patterns to look for. E.g. dump memory, shoot, dump memory, shoot again, dump memory, reload; then look for changes and get an idea of where and how ammo is stored. For health it'll be similar, but a lot more things will be changing (since you'll be moving, at the very least). It's easiest, though, to do it while minimizing external effects: don't try to diff memory dumps during a firefight, because a lot is happening; do your diffs while standing in lava, or falling off a building, or something of that nature.
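The diffing step itself is straightforward: given two snapshots of the same memory region, report the offsets whose bytes changed. A toy sketch (in a real trainer the snapshots would come from dumping the target before and after the in-game action):

#include <cstdio>

// Print every offset whose byte differs between two memory snapshots.
void diffDumps(const unsigned char* before, const unsigned char* after,
               size_t len)
{
    for (size_t i = 0; i < len; ++i) {
        if (before[i] != after[i])
            printf("offset %zu: %u -> %u\n", i, before[i], after[i]);
    }
}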

How can I terminate a QThread

Recently I came across the problem mentioned in the title.
I have tried using QThread::terminate(), but I just can NOT stop
the thread, which is stuck in an endless loop (let's say, while(1)).
Thanks a lot.
Terminating the thread is the easy solution to stopping an async operation, but it is usually a bad idea: the thread could be doing a system call or could be in the middle of updating a data structure when it is terminated, which could leave the program or even the OS in an unstable state.
Try to transform your while(1) into while( isAlive() ) and make isAlive() return false when you want the thread to exit.
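Qt 5.2 and later has this pattern built in as QThread::requestInterruption() / isInterruptionRequested(). A minimal sketch (Worker is a made-up name):

#include <QThread>

class Worker : public QThread
{
protected:
    // Replace while(1) with a check of the interruption flag.
    void run() override
    {
        while (!isInterruptionRequested()) {
            // ... do one bounded chunk of work ...
        }
    }
};

// To stop it cleanly from another thread:
//   worker.requestInterruption();
//   worker.wait();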
QThreads can deadlock if they finish "naturally" during termination.
For example in Unix, if the thread is waiting on a "read" call, the termination attempt (a Unix signal) will make the "read" call abort with an error code before the thread is destroyed.
That means the thread can still reach its natural exit point while being terminated. When it does, a deadlock occurs, since some internal mutex is already locked by the terminate() call.
My workaround is to actually make sure that the thread never returns if it was terminated.
while( read(...) > 0 ) {
    // Do stuff...
}
while( wasTerminated )
    sleep(1);
return;
wasTerminated is actually implemented a bit more elaborately here, using atomic ints:
enum {
    Running, Terminating, Quitting
};
QAtomicInt _state; // Initialized to Running

void myTerminate()
{
    if( _state.testAndSetAcquire(Running, Terminating) )
        terminate();
}

void run()
{
    [...]
    while( read(...) > 0 ) {
        [...]
    }
    if( !_state.testAndSetAcquire(Running, Quitting) ) {
        for(;;) sleep(1); // never return if we were terminated
    }
}
Have you tried exit or quit?
Did the thread call QThread::setTerminationEnabled(false)? That would cause thread termination to delay indefinitely.
EDIT: I don't know what platform you're on, but I checked the Windows implementation of QThread::terminate. Assuming the thread was actually running to begin with, and termination wasn't disabled via the above function, it's basically a wrapper around TerminateThread() in the Windows API. This function accepts disrespect from no thread, and tends to leave a mess behind with resource leaks and similar dangling state. If it's not killing the thread, you're either dealing with zombie kernel calls (most likely blocked I/O) or have even bigger problems somewhere.
To use unnamed pipes:
int gPipeFdTest[2]; // create a global integer array
Then, where you intend to create the pipe, use:
if( pipe(gPipeFdTest) < 0 )
{
    perror("Pipe failed");
    exit(1);
}
The above code creates a pipe with two ends: gPipeFdTest[0] for reading and gPipeFdTest[1] for writing. What you can do is set up your run() function to read from the pipe using the select() system call, and wherever you want run() to exit, write to the pipe using the write() system call. I used select() to monitor the read end of the pipe because it suits my implementation; try to figure all this out for your case. If you need any more help, give me a buzz.
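A sketch of how run() might watch the pipe with select() (MyThread is a placeholder name, and the 100 ms timeout is arbitrary):

#include <QThread>
#include <sys/select.h>
#include <unistd.h>

extern int gPipeFdTest[2]; // created with pipe() as shown above

class MyThread : public QThread
{
protected:
    void run() override
    {
        for (;;) {
            fd_set readSet;
            FD_ZERO(&readSet);
            FD_SET(gPipeFdTest[0], &readSet);

            struct timeval tv = { 0, 100000 }; // wake up every 100 ms
            int rc = select(gPipeFdTest[0] + 1, &readSet, NULL, NULL, &tv);
            if (rc > 0 && FD_ISSET(gPipeFdTest[0], &readSet))
                break; // a quit request was written to the pipe
            // timed out: do the thread's normal chunk of work here
        }
    }
};

// To stop the thread from elsewhere:
//   char c = 'q';
//   write(gPipeFdTest[1], &c, 1);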
Edit:
My problem was just like yours. I had a while(1) loop, and the other things I tried needed mutexes and other fancy multithreading mumbo jumbo, which added complexity and made debugging a nightmare. Using pipes absolved me of those complexities and simplified the code. I'm not saying it's the best option, but in my case it turned out to be the best and cleanest alternative. I was bugged by my hung application before this solution.
