Cannot link Arduino project to include Simulink-generated code

At work, I recently took training on MATLAB/Simulink, including the Simulink Coder that can generate C code for embedded applications. I wanted to try my hand at it, so I bought an Arduino, and dove in. I am able to write simple sketches with no problem, but have been hitting a brick wall when trying to integrate the code generated by Simulink.
I initially used the Arduino IDE, then Eclipse with the Arduino plug-in, and finally Xcode with the embedXcode templates. (My work machine with Simulink is a PC, but I'm not allowed to install "unauthorized software", so I did the rest on my home Mac.) All three use the same avr-gcc compiler.
All three had the same point of failure: "undefined reference" errors on the generated function calls. I believe this to be a linker issue rather than basic syntax or header inclusion, as code completion in Eclipse and Xcode works fine, and if I change the call signature in any way, the error changes. I can reference the data structures OK.
As far as I can tell, the default makefiles are set up to compile and link any files within the folder. A "mass_model2.o" file is being created, at least with Xcode. Finally, if I manually write a separate "myFunction.c" and "MyFunction.h" file with a simple function call, this does compile and run on the device as expected.
In desperation, I copied the entire contents of the generated ".c" file and pasted them into the main sketch file after my setup() and loop() functions, keeping the same ".h" references, and removed the ".c" file from the project. This actually did compile and run! However, I should not have to touch the generated code in order to use it.
What do I need to do to get this to compile and link properly?
The Simulink code is quite verbose, so here are the key parts:
mass_model2.h excerpts:
#include "rtwtypes.h"
#include "mass_model2_types.h"
/* External inputs (root inport signals with auto storage) */
typedef struct {
    int16_T PotPos;                    /* '<Root>/PotPos' */
} ExternalInputs_mass_model2;
/* External outputs (root outports fed by signals with auto storage) */
typedef struct {
    int16_T ServoCmd;                  /* '<Root>/ServoCmd' */
} ExternalOutputs_mass_model2;
/* External inputs (root inport signals with auto storage) */
extern ExternalInputs_mass_model2 mass_model2_U;
/* External outputs (root outports fed by signals with auto storage) */
extern ExternalOutputs_mass_model2 mass_model2_Y;
/* Model entry point functions */
extern void mass_model2_initialize(void);
extern void mass_model2_step(void);
mass_model2.c excerpts:
#include "mass_model2.h"
#include "mass_model2_private.h"
/* External inputs (root inport signals with auto storage) */
ExternalInputs_mass_model2 mass_model2_U;
/* External outputs (root outports fed by signals with auto storage) */
ExternalOutputs_mass_model2 mass_model2_Y;
/* Model step function */
void mass_model2_step(void)
{
    // lots of generated code here
}
/* Model initialize function */
void mass_model2_initialize(void)
{
    // generated code here
}
The other referenced headers, "rtwtypes.h" and "mass_model2_private.h" define specific types that are used by the generated code, like int16_T. These files are included in the project, and I do not receive any errors associated with them.
In my sketch file, the setup() function calls mass_model2_initialize(). loop() reads my input (a potentiometer), sets the value in mass_model2_U.PotPos, and calls mass_model2_step(). It then gets mass_model2_Y.ServoCmd and writes the value to a servo for output, and finally has a delay().

You can use this download from the MATLAB Central File Exchange, http://www.mathworks.com/matlabcentral/fileexchange/24675, with Simulink, Simulink Coder and Embedded Coder. Make sure you have the correct version numbers of each tool.

The #include "Arduino.h" statement is required in the main sketch.

Related

How to layout arduino code files with external unit tests

I have three code files for an arduino project:
main.ino <-- main sketch
helper.cpp <-- helper functions, no avr code
helper_test.cpp <-- unit test for helpers
Arduino will attempt to include helper_test.cpp, and will be confused by its inclusion of the unit test library header files (which happen to be google test). If the unit test contains a C main function it will skip everything in main.ino and try to use only that.
I know there are dedicated arduino unit test frameworks, these are just regular c++ unit tests for math functions that don't touch any avr-related code.
How can I lay these files out or make this so arduino won't try to include helper_test.cpp but will include helper.cpp?
Just let it include the file, but exclude all the code inside. I mean:
File test_enablers.h
#define INCLUDE_HELPER_TEST
File helper_test.cpp
#include "test_enablers.h"
#ifdef INCLUDE_HELPER_TEST
... your helper test code
#endif /* INCLUDE_HELPER_TEST */
You can add as many test files as you want; just add the corresponding flags in the .h file.
If you want to disable a test file, just comment out its #define and the file won't be compiled.
If you want to exclude some code in other files (e.g. the setup and loop functions in the main sketch, since you are already implementing them in the test), just write:
File main.ino
#include "test_enablers.h"
#ifndef INCLUDE_HELPER_TEST
... your production setup and loop
#endif /* INCLUDE_HELPER_TEST */
(note the ifndef in place of the ifdef)
If you want to unit test, simply create code that executes one task at a time. You can buy a logic analyzer and examine the port outputs to see whether the right signals are being delivered. Unit testing has to be done differently in most cases due to the nature of Arduinos. Hope this helps.
I don't think the Arduino compiler (and related systems) is set up to handle unit testing, so I had to build my own framework for this, which I've written about in this answer to a related Arduino unit-testing question.
For this to work, you'd need to put the bulk of your code in a library, and the unit tests would be stored as a subdirectory of that library called test/. You could test the complete sketches by putting them in the examples/ directory of the library, but only compilation can be tested on those.
You need to create two separate projects. One project will contain your production code. It will have these files:
main.ino <-- main sketch
helper.h <-- header for helper functions
helper.cpp <-- implementation for helper functions
and all other files necessary to create your arduino app. You will build it when you want to create the app itself. The other project will be a standard executable that will run unit tests:
main_test.cpp <-- main
helper_test.cpp <-- unit test for helpers
../arduino_project/helper.h <-- header for helper functions
../arduino_project/helper.cpp <-- implementation for helper functions
Every time you add a new function or change a helper function, you build this executable and run it to see if all tests pass. As you can see, this project uses files (containing the functions under test) from your main project. The best way is to add the location of the source files from the main project as additional include directories for the unit test project.

Arduino: issue including ShiftPWM library in custom library

I recently came across ShiftPWM, which is optimized to control RGB LEDs using shift registers.
I downloaded the source code and was able to run the example provided on my breadboard no problem. I even tweaked the sketch a bit and feel comfortable using the functions in the ShiftPWM library.
I would like to include the ShiftPWM functionality in a current project I'm working on, which already has a ton of library files. I'd like to #include "ShiftPWM.h" in my Registers.h file, so that I can use my own syntax and object-oriented hierarchy to light up my LEDs.
In the example arduino sketch, I need to define a few constants before including the ShiftPWM library file:
#define SHIFTPWM_NOSPI
const int ShiftPWM_latchPin = 2;
const int ShiftPWM_dataPin = 4;
const int ShiftPWM_clockPin = 3;
const bool ShiftPWM_invertOutputs = true;
const bool ShiftPWM_balanceLoad = false;
#include "ShiftPWM.h"
So I figured I could simply copy these lines into my Registers.h file, as well as the setup code and other functions that go about lighting up the LEDs.
But, when I do this and try to compile my arduino sketch, which only includes Registers.h, my compiler gives me the following (abbreviated) errors:
multiple definitions of '__vector_11'
.../ShiftPWM/ShiftPWM.h:175 first defined here
multiple definitions of 'ShiftPWM'
.../ShiftPWM/ShiftPWM.h:175 first defined here
The line it is referring to is this:
ISR(TIMER1_COMPA_vect) {
ShiftPWM_handleInterrupt();
}
My research indicates this line of code has something to do with the timers the Arduino uses to run interrupts. People often get the 'multiple definition of __vector_11' error when they try to use the Arduino Servo and Tone libraries together, for instance, because both attempt to use Timer1.
I am wondering why I am getting this error simply by attempting to link to the ShiftPWM library from my own custom library. Nowhere in my code do I manually set any sort of timer. I am assuming it stems from some linking issue, but I really can't figure out what that may be.
Any help is appreciated.

Changing function reference in Mach-o binary

I need to change the reference of a function in a Mach-O binary to a custom function defined in my own dylib. The process I am now following is:
Replacing references to the old function with the new one, e.g. _fopen to _mopen, using sed.
Opening the Mach-O binary in MachOView to find the address of the entities I want to change, then manually changing the information in the binary using a hex editor.
Is there a way I can automate this process, i.e. write a program to read the symbols and dynamic loading info and then change them in the executable? I was looking at the Mach-O header files at /usr/include/mach-o but am not entirely sure how to use them to get this information. Are there any existing libraries, C or Python, that help do the same?
Interesting question; I am trying to do something similar with a static lib. See if this helps.
varrunr - you can easily achieve most if not all of the functionality using DYLD's interposition. You create your own library, and declare your interposing functions, like so
// This is the expected interpose structure
// This is the expected interpose structure
typedef struct interpose_s {
    void *new_func;
    void *orig_func;
} interpose_t;

static const interpose_t interposing_functions[] \
    __attribute__ ((section("__DATA, __interpose"))) = {
    { (void *)my_open, (void *)open }
};
... and you just implement your own open(). In the interposing functions, all references to the original will still work, which makes this ideal for wrappers. And you can insert your dylib forcefully using DYLD_INSERT_LIBRARIES (the same principle as LD_PRELOAD on Linux).

Testing with Qt's QTestLib module

I started writing some tests with Qt's unit testing system.
How do you usually organize the tests? It is one test class per one module class, or do you test the whole module with a single test class? Qt docs suggest to follow the former strategy.
I want to write tests for a module. The module provides only one class that is going to be used by the module user, but there is a lot of logic abstracted in other classes, which I would also like to test, besides testing the public class.
The problem is that Qt's proposed way to run tests involves the QTEST_MAIN macro:
QTEST_MAIN(TestClass)
#include "test_class.moc"
and eventually one test program is capable of testing just one test class. And it kinda sucks to create test projects for every single class in the module.
Of course, one could take a look at the QTEST_MAIN macro, rewrite it, and run other test classes. But is there something, that works out of the box?
So far I do it by hand:
#include "one.h"
#include "two.h"
int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    TestOne one;
    QTest::qExec(&one, argc, argv);
    TestTwo two;
    QTest::qExec(&two, argc, argv);
}
Related to the answer posted by @cjhuitt:
This is an example that removes the need to call each test object manually.
I TRY TO AVOID THINGS LIKE THIS:
MyTestClass1 t1; t1.run();
MyTestClass2 t2; t2.run();
//etc...
My solution is to let the test objects inherit from a base class that adds each instance to a static list.
The main program then executes all the test objects in that list. That way, none of the supporting framework code needs to change; the only things that change are the test classes themselves.
Here is how I do it:
qtestsuite.h - base class for the test objects
#ifndef QTESTSUITE_H
#define QTESTSUITE_H
#include <QObject>
#include <vector>
class QTestSuite : public QObject
{
    Q_OBJECT
public:
    static std::vector<QObject*> m_suites;

public:
    explicit QTestSuite();
};
#endif // QTESTSUITE_H
#endif // QTESTSUITE_H
qtestsuite.cpp
#include "qtestsuite.h"
#include <iostream>
std::vector<QObject*> QTestSuite::m_suites;
QTestSuite::QTestSuite() : QObject()
{
    m_suites.push_back(this);
}
testall.cpp - runs the tests
#include "qtestsuite.h"
#include <QtTest/QtTest>
#include <iostream>
int main(int, char**)
{
    int failedSuitesCount = 0;
    std::vector<QObject*>::iterator iSuite;
    for (iSuite = QTestSuite::m_suites.begin(); iSuite != QTestSuite::m_suites.end(); iSuite++)
    {
        int result = QTest::qExec(*iSuite);
        if (result != 0)
        {
            failedSuitesCount++;
        }
    }
    return failedSuitesCount;
}
mytestsuite1.cpp - an example test object, create more of these
#include "qtestsuite.h"
#include <QtTest/QtTest>
class MyTestSuite1: public QTestSuite
{
    Q_OBJECT
private slots:
    void aTestFunction();
    void anotherTestFunction();
};

void MyTestSuite1::aTestFunction()
{
    QString str = "Hello";
    QVERIFY(str.toUpper() == "this will fail");
}

void MyTestSuite1::anotherTestFunction()
{
    QString str = "Goodbye";
    QVERIFY(str.toUpper() == "GOODBYE");
}
static MyTestSuite1 instance; //This is where this particular test is instantiated, and thus added to the static list of test suites
#include "mytestsuite1.moc"
Also, to create the .pro file:
qmake -project "CONFIG += qtestlib"
In our setup with QTest, we did a few things to make it nicer.
Define a subclass of QObject that is used as a base class for any new unit-test class.
In the constructor for that class, we add the instance of the test to a static list of tests, and in the destructor we remove it.
We then have a static function that loops through the tests and runs them using QTest::qExec(). (We accumulate the values returned each time, and return that from our function.)
main() calls this function, and returns the result as the success/failure.
Finally, in the compilation unit of the specific test itself, we usually include a static instance of that class.
This setup means that the class will be instantiated before main() is run, so it will be added to the list of classes to test when main runs. The framework requires that you just need to inherit your class properly, and instantiate a static instance if you always want it run.
We also occasionally create other optional tests, that are added based on command line switches.
Yeah, QTest forces a somewhat strange test structure and is generally inferior to the Google Test/Mock framework. For one project I'm forced to use QTest (a client requirement), and here's how I use it:
I compile all test together as a subdir template project
To make creating new tests easier, I share a lot of project configuration by using common.pri file I include in every test .pro file
If possible I share the object files directory to speed up compilation
I run them all using a batch+awk+sed script.
Setting up these four points is very easy and makes using QTest almost pleasant. Do you have some problem with running multiple tests that is not solved by the configuration described above?
PS: running tests the way you do, i.e. calling QTest::qExec() multiple times, causes problems with the -o command-line switch: you get results only for the last tested class.
Usually I organize tests with one test executable per class under test.
and eventually one test program is capable of testing just one test class.
This is a good thing. It isolates your tests from each other, preventing things like a crash in one test from blocking all your other tests. That crash could be caused by a common component in several classes under test. The pattern of the failures would then tip you off to the underlying origin of the problem. Basically, you have better diagnostic information for failures if your tests are independent of each other.
Make it easy to set up multiple executables and run each test separately. Use a test runner to spawn off all the test processes.
Update:
I've changed my mind on this somewhat. Once you have a big program with lots of tests, linking hundreds of test executables becomes very slow. My new preference is to put all the tests for a library into an executable and choosing which tests to invoke using command-line arguments passed to the test executable.
That cuts down the number of executables from hundreds to dozens, but retains the advantages of running tests separately.

ld linker question: the --whole-archive option

The only real use of the --whole-archive linker option that I have seen is in creating shared libraries from static ones. Recently I came across Makefiles which always use this option when linking with in-house static libraries. This of course causes the executables to unnecessarily pull in unreferenced object code. My reaction was that this is plain wrong; am I missing something here?
The second question has to do with something I read regarding the --whole-archive option but couldn't quite parse: something to the effect that --whole-archive should be used when linking with a static library if the executable also links with a shared library which in turn has (in part) the same object code as the static library. That is, the shared library and the static library overlap in terms of object code. Using this option would force all symbols (regardless of use) to be resolved in the executable, which is supposed to avoid object-code duplication. This is confusing: if a symbol is referenced in the program, it must be resolved uniquely at link time, so what is this business about duplication? (Forgive me if this paragraph is not quite the epitome of clarity.)
Thanks
There are legitimate uses of --whole-archive when linking executable with static libraries. One example is building C++ code, where global instances "register" themselves in their constructors (warning: untested code):
handlers.h
typedef void (*handler)(const char *data);
void register_handler(const char *protocol, handler h);
handler get_handler(const char *protocol);
handlers.cc (part of libhandlers.a)
#include <map>
#include <string>
#include "handlers.h"

// std::string keys compare by content; const char* keys would compare
// pointer addresses and break lookups across translation units.
typedef std::map<std::string, handler> HandlerMap;
static HandlerMap m;

void register_handler(const char *protocol, handler h) {
    m[protocol] = h;
}

handler get_handler(const char *protocol) {
    HandlerMap::iterator it = m.find(protocol);
    if (it == m.end()) return nullptr;
    return it->second;
}
http.cc (part of libhttp.a)
#include <handlers.h>
class HttpHandler {
public:
    HttpHandler() { register_handler("http", &handle_http); }
private:
    static void handle_http(const char *) { /* whatever */ }
};
HttpHandler h; // registers itself at startup, before main() runs!
main.cc
#include <handlers.h>
int main(int argc, char *argv[])
{
    for (int i = 1; i < argc - 1; i += 2) {
        handler h = get_handler(argv[i]);
        if (h != nullptr) h(argv[i+1]);
    }
}
Note that there are no symbols in http.cc that main.cc needs. If you link this as
g++ main.cc -lhttp -lhandlers
you will not get an http handler linked into the main executable, and will not be able to call handle_http(). Contrast this with what happens when you link as:
g++ main.cc -Wl,--whole-archive -lhttp -Wl,--no-whole-archive -lhandlers
The same "self registration" style is also possible in plain-C, e.g. with the __attribute__((constructor)) GNU extension.
Another legitimate use for --whole-archive is for toolkit developers to distribute libraries containing multiple features in a single static library. In this case, the provider has no idea what parts of the library will be used by the consumer and therefore must include everything.
Another scenario where --whole-archive is well-used is when dealing with static libraries and incremental linking.
Let us suppose that:
libA implements the a() and b() functions.
Some portion of the program has to be linked against libA only, e.g. due to some function wrapping using --wrap (a classical example is malloc)
libC implements the c() functions and uses a()
the final program uses a() and c()
Incremental linking steps could be:
ld -r -o step1.o module1.o --wrap malloc --whole-archive -lA
ld -r -o step2.o step1.o module2.o --whole-archive -lC
cc step2.o module3.o -o program
Failing to insert --whole-archive would strip function c(), which is needed by program, preventing the final link from succeeding.
Of course, this is a particular corner case in which incremental linking must be done to avoid wrapping all calls to malloc in all modules, but is a case which is successfully supported by --whole-archive.
I agree that using --whole-archive to build executables is probably not what you want (it links in unneeded code and creates bloated software). If they had a good reason to do so, they should have documented it in the build system; as it is, you are left guessing.
As to the second part of your question: if an executable links both a static library and a dynamic library that has (in part) the same object code as the static library, then --whole-archive will ensure that at link time the code from the static library is preferred. This is usually what you want when you do static linking.
Old query, but on your first question ("Why"), I've seen --whole-archive used for in-house libraries as well, primarily to sidestep circular references between those libraries. It tends to hide poor architecture of the libraries, so I'd not recommend it. However it's a fast way of getting a quick trial working.
For your second query, if the same symbol was present in a shared object and a static library, the linker will satisfy the reference with whichever library it meets first.
If the shared library and static library have an exact sharing of code, this may all just work. But where the shared library and the static library have different implementations of the same symbols, your program will still compile but will behave differently based on the order of libraries.
Forcing all symbols to be loaded from the static library is one way of removing confusion as to what is loaded from where. But in general this sounds like solving the wrong problem; you mostly won't want the same symbols in different libraries.
