Blackfin gcc toolchain linking error with math functions like atan2: undefined reference to ... within w_atan2.o - math

I have a problem linking the classic math functions into my bare-metal program with the Blackfin toolchain linker. I have tried many things, but I cannot see why libm.a doesn't provide the definitions for the functions it uses. Do I need to add an extra library? If yes, which one?
I've included the linker's verbose output with the linked libraries, and an example of the linking errors I get.
Thanks,
William
bfin-elf-ld -v -o test_ad1836_driver -T coreb_test_ad1836_driver.lds --just-symbol ../../icc_core/icc queue.o ezkit_561.o heap_2.o port.o tasks.o test_ad1836_driver.o list.o croutine.o user_isr.o bfin_isr.o app_c.o context_sl_asm.o cycle_count.o CFFT_Rad4_NS_NBRev.o fir_decima.o fir_decima_spl.o math_tools.o -Ttext 0x3c00000 -L /opt/uClinux/bfin-elf/bfin-elf/lib -lbffastfp -lbfdsp -lg -lc -lm -Map=test_ad1836_driver.map
argv[0] = 'bfin-elf-ld'
bindir = '/opt/uClinux/bfin-elf/bin/'
tooldir = '/opt/uClinux/bfin-elf/bin/../bfin-elf/bin/'
linker = '/opt/uClinux/bfin-elf/bin/../bfin-elf/bin/ld.real'
elf2flt = '/opt/uClinux/bfin-elf/bin/../bfin-elf/bin/elf2flt'
nm = '/opt/uClinux/bfin-elf/bin/../bfin-elf/bin/nm'
objdump = '/opt/uClinux/bfin-elf/bin/bfin-elf-objdump'
objcopy = '/opt/uClinux/bfin-elf/bin/bfin-elf-objcopy'
ldscriptpath = '/opt/uClinux/bfin-elf/bin/../bfin-elf/bin/../lib'
Invoking: '/opt/uClinux/bfin-elf/bin/../bfin-elf/bin/ld.real' '-v' '-o' 'test_ad1836_driver' '-T' 'coreb_test_ad1836_driver.lds' '--just-symbol' '../../icc_core/icc' 'queue.o' 'ezkit_561.o' 'heap_2.o' 'port.o' 'tasks.o' 'test_ad1836_driver.o' 'list.o' 'croutine.o' 'user_isr.o' 'bfin_isr.o' 'app_c.o' 'context_sl_asm.o' 'cycle_count.o' 'CFFT_Rad4_NS_NBRev.o' 'fir_decima.o' 'fir_decima_spl.o' 'math_tools.o' '-Ttext' '0x3c00000' '-L' '/opt/uClinux/bfin-elf/bfin-elf/lib' '-lbffastfp' '-lbfdsp' '-lg' '-lc' '-lm' '-Map=test_ad1836_driver.map'
GNU ld version 2.17
/opt/uClinux/bfin-elf/bfin-elf/lib/libm.a(w_atan2.o): In function `atan2':
/usr/src/packages/BUILD/blackfin-toolchain-2010R1/gcc-4.3/newlib/libm/math/w_atan2.c:96: undefined reference to `__eqdf2'
/usr/src/packages/BUILD/blackfin-toolchain-2010R1/gcc-4.3/newlib/libm/math/w_atan2.c:96: relocation truncated to fit: R_BFIN_PCREL24 against undefined symbol `__eqdf2'
.....
/opt/uClinux/bfin-elf/bfin-elf/lib/libm.a(e_sqrt.o): In function `_ieee754_sqrt':
/usr/src/packages/BUILD/blackfin-toolchain-2010R1/gcc-4.3/newlib/libm/math/e_sqrt.c:110: undefined reference to `__muldf3'
/usr/src/packages/BUILD/blackfin-toolchain-2010R1/gcc-4.3/newlib/libm/math/e_sqrt.c:110: undefined reference to `__adddf3'
.....
/opt/uClinux/bfin-elf/bfin-elf/lib/libm.a(s_atan.o): In function `atan':
/usr/src/packages/BUILD/blackfin-toolchain-2010R1/gcc-4.3/newlib/libm/math/s_atan.c:169: undefined reference to `__muldf3'
/usr/src/packages/BUILD/blackfin-toolchain-2010R1/gcc-4.3/newlib/libm/math/s_atan.c:170: undefined reference to `__muldf3'
/usr/src/packages/BUILD/blackfin-toolchain-2010R1/gcc-4.3/newlib/libm/math/s_atan.c:172: undefined reference to `__muldf3'

Add -lgcc. You need the functions that compare, add and multiply C double values: __eqdf2, __adddf3 and __muldf3, respectively.
Usually I'd recommend using the compiler driver (gcc) instead of linking directly with ld, even for firmware/kernel-type outputs, because the driver takes care of the necessary startup files and compiler runtime libraries.
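As a rough illustration (the function below is mine, not from the project): the Blackfin has no hardware floating-point unit, so the compiler lowers double arithmetic to helper calls that normally live in libgcc, and libm.a is full of such operations.
/* Illustrative sketch only: on a soft-float target like the Blackfin,
 * each plain double operation below becomes a call into libgcc, which
 * is exactly what the linker reports as undefined without -lgcc. */
double scale(double a, double b)
{
    if (a == b)        /* lowered to a call to __eqdf2  */
        return a + b;  /* lowered to a call to __adddf3 */
    return a * b;      /* lowered to a call to __muldf3 */
}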

Hi, I think I know the problem: the Blackfin is not really compatible with the standard math library, which is why the math functions are re-implemented in the VDSP version. To solve my problem I converted the VDSP math library to gcc, and it compiles fine now.
Thanks

Actually, I found a better answer:
the Blackfin does in fact support the standard math library; I just had some library flags in the wrong order.
For the linker, use the following library flag order and it should work:
/opt/uClinux/bfin-elf/bin/../bfin-elf/bin/ld.real' '-v' '-o' .... '-L' '/opt/uClinux/bfin-elf/lib/gcc/bfin-elf/4.3.5/' '-lgcc' '-L' '/opt/uClinux/bfin-elf/bfin-elf/lib' '-lbfdsp' '-lg' '-lm' '-lbffastfp' '-lc'

Related

Is it possible to uniquely identify dynamically imported functions by their name?

I used
readelf --dyn-sym my_elf_binary | grep FUNC | grep UND
to display the dynamically imported functions of my_elf_binary, from the dynamic symbol table in the .dynsym section to be precise. Example output would be:
[...]
3: 00000000 0 FUNC GLOBAL DEFAULT UND tcsetattr#GLIBC_2.0 (3)
4: 00000000 0 FUNC GLOBAL DEFAULT UND fileno#GLIBC_2.0 (3)
5: 00000000 0 FUNC GLOBAL DEFAULT UND isatty#GLIBC_2.0 (3)
6: 00000000 0 FUNC GLOBAL DEFAULT UND access#GLIBC_2.0 (3)
7: 00000000 0 FUNC GLOBAL DEFAULT UND open64#GLIBC_2.2 (4)
[...]
Is it safe to assume that the names associated with these symbols, e.g. tcsetattr or access, are always unique? Or is it possible, or reasonable*), to have a dynamic symbol table (filtered for FUNC and UND) which contains two entries with the same associated string?
The reason I am asking is that I am looking for a unique identifier for dynamically imported functions ...
*) Wouldn't the dynamic linker resolve all "UND FUNC symbols" with the same name to the same function anyway?
Yes, given a symbol name and the set of libraries an executable is linked against, you can uniquely identify the function. This behavior is required for linking and dynamic linking to work.
An illustrative example
Consider the following two files:
librarytest1.c:
#include <stdio.h>

int testfunction(void)
{
    printf("version 1");
    return 0;
}
and librarytest2.c:
#include <stdio.h>

int testfunction(void)
{
    printf("version 2");
    return 0;
}
Both compiled into shared libraries:
% gcc -fPIC -shared -Wl,-soname,liblibrarytest.so.1 -o liblibrarytest.so.1.0.0 librarytest1.c -lc
% gcc -fPIC -shared -Wl,-soname,liblibrarytest.so.2 -o liblibrarytest.so.2.0.0 librarytest2.c -lc
Note that we cannot put two functions with the same name into a single shared library:
% gcc -fPIC -shared -Wl,-soname,liblibrarytest.so.0 -o liblibrarytest.so.0.0.0 librarytest1.c librarytest2.c -lc
/tmp/cctbsBxm.o: In function `testfunction':
librarytest2.c:(.text+0x0): multiple definition of `testfunction'
/tmp/ccQoaDxD.o:librarytest1.c:(.text+0x0): first defined here
collect2: error: ld returned 1 exit status
This shows that symbol names are unique within a shared library, but do not have to be unique among a set of shared libraries.
% readelf --dyn-syms liblibrarytest.so.1.0.0 | grep testfunction
12: 00000000000006d0 28 FUNC GLOBAL DEFAULT 10 testfunction
% readelf --dyn-syms liblibrarytest.so.2.0.0 | grep testfunction
12: 00000000000006d0 28 FUNC GLOBAL DEFAULT 10 testfunction
Now let's link our shared libraries with an executable. Consider linktest.c:
int testfunction(void);

int main()
{
    testfunction();
    return 0;
}
We can compile and link this against either shared library:
% gcc -o linktest1 liblibrarytest.so.1.0.0 linktest.c
% gcc -o linktest2 liblibrarytest.so.2.0.0 linktest.c
And run each of them (note I'm setting the dynamic library path so the dynamic linker can find the libraries, which are not in a standard library path):
% LD_LIBRARY_PATH=. ./linktest1
version 1%
% LD_LIBRARY_PATH=. ./linktest2
version 2%
Now let's link our executable to both libraries. Each exports the same symbol testfunction, and each library has a different implementation of that function.
% gcc -o linktest0-1 liblibrarytest.so.1.0.0 liblibrarytest.so.2.0.0 linktest.c
% gcc -o linktest0-2 liblibrarytest.so.2.0.0 liblibrarytest.so.1.0.0 linktest.c
The only difference is the order in which the libraries are passed to the compiler.
% LD_LIBRARY_PATH=. ./linktest0-1
version 1%
% LD_LIBRARY_PATH=. ./linktest0-2
version 2%
Here is the corresponding ldd output:
% LD_LIBRARY_PATH=. ldd ./linktest0-1
linux-vdso.so.1 (0x00007ffe193de000)
liblibrarytest.so.1 => ./liblibrarytest.so.1 (0x00002b8bc4b0c000)
liblibrarytest.so.2 => ./liblibrarytest.so.2 (0x00002b8bc4d0e000)
libc.so.6 => /lib64/libc.so.6 (0x00002b8bc4f10000)
/lib64/ld-linux-x86-64.so.2 (0x00002b8bc48e8000)
% LD_LIBRARY_PATH=. ldd ./linktest0-2
linux-vdso.so.1 (0x00007ffc65df0000)
liblibrarytest.so.2 => ./liblibrarytest.so.2 (0x00002b46055c8000)
liblibrarytest.so.1 => ./liblibrarytest.so.1 (0x00002b46057ca000)
libc.so.6 => /lib64/libc.so.6 (0x00002b46059cc000)
/lib64/ld-linux-x86-64.so.2 (0x00002b46053a4000)
Here we can see that while symbols are not unique, the way the linker resolves them is well defined (it appears that it always resolves to the first symbol it encounters). Note that this is a bit of a pathological case, as you normally wouldn't do this. In the cases where you would go in this direction, there are better ways of handling symbol naming so that the exported symbols are unique (symbol versioning, etc.).
In summary, yes, you can uniquely identify the function given its name. If there happen to be multiple symbols by that name, you identify the proper one using the order in which the libraries are resolved (from ldd or objdump, etc.). Yes, in this case you need a bit more information than just its name, but it is possible if you have the executable to inspect.
Note that in your case, the name of the first function import is not just tcsetattr but tcsetattr#GLIBC_2.0. The # is how the readelf program displays a versioned symbol import.
GLIBC_2.0 is a version tag that glibc uses to stay binary compatible with old binaries in the (unusual but possible) case that the binary interface to one of its functions needs to change. The original .o file produced by the compiler will just import tcsetattr with no version information, but during static linking the linker notices that the actual symbol exported by libc.so carries a GLIBC_2.0 tag, and so it creates a binary that insists on importing the particular tcsetattr symbol that has version GLIBC_2.0.
In the future there might be a libc.so that exports one tcsetattr#GLIBC_2.0 and a different tcsetattr#GLIBC_2.42, and the version tag will then be used to find which one a particular ELF object refers to.
It is possible that the same process may also use tcsetattr#GLIBC_2.42 at the same time, such as if it uses another dynamic library which was linked against a libc.so new enough to provide it. The version tags ensure that both the old binary and the new library get the function they expect from the C library.
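As an aside, and purely as an illustration (this snippet is not from the question): with the GNU toolchain you can also bind a reference to a specific symbol version yourself through the assembler's .symver directive; the static linker does essentially the same thing automatically when it records tcsetattr#GLIBC_2.0.
/* Illustrative sketch only: explicitly requesting the GLIBC_2.0 version
 * of tcsetattr rather than whatever version the current libc exports by
 * default. Requires the GNU toolchain, and the requested version tag
 * must actually exist in the libc you link against. */
#include <termios.h>

__asm__(".symver tcsetattr, tcsetattr@GLIBC_2.0");

int restore_terminal(int fd, const struct termios *saved)
{
    return tcsetattr(fd, TCSANOW, saved);
}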
Most libraries don't use this mechanism and instead just rename the entire library if they need to make breaking changes to their binary interfaces. For example, if you dump /usr/bin/pngtopnm you'll find that the symbols it imports from libnetpbm and libpng are not versioned. (Or at least that's what I see on my machine).
The cost of this is that you can't have a binary that links against one version of libpng and also links against another library that itself links against a different libpng version; the exported names from the two libpng's would clash.
In most cases this is manageable enough through careful packaging practice that maintaining the library source to produce useful version tags and stay backwards compatible is not worth the trouble.
But in the particular case of the C library and a few other vital system libraries, changing the name of the library would be so extremely painful that it makes sense for the maintainers to jump through some hoops in order to ensure it will never need to happen again.
Although in most cases every symbol is unique, there are a handful of exceptions. My favorite is the multiple identical symbol imports used by PAM (Pluggable Authentication Modules) and NSS (Name Service Switch). In both cases, all modules written for either interface expose a standard interface with standard names. A common and frequently used example is what happens when you call gethostbyname: the NSS machinery will call the same function in multiple libraries to get an answer. A common configuration calls the same function in three libraries! I have seen the same function called in five different libraries from one function call, and that was not the limit, just what was useful. Special calls to the dynamic linker are needed to do this, and I have not familiarised myself with the mechanics, but there is nothing special about the linking of a library module that is loaded this way.
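A minimal sketch of that kind of dispatch, assuming hypothetical module names and a hypothetical entry point called module_lookup (neither comes from PAM or NSS themselves): each module is opened with dlopen and the same symbol name is then looked up through each handle, so the identically named functions never clash.
/* Illustrative sketch only; the module paths and the module_lookup entry
 * point are invented. Build with something like: gcc resolver.c -ldl */
#include <dlfcn.h>
#include <stdio.h>

typedef int (*lookup_fn)(const char *name);

int main(void)
{
    const char *modules[] = { "./mod_files.so", "./mod_dns.so" };

    for (unsigned i = 0; i < sizeof modules / sizeof modules[0]; i++) {
        void *handle = dlopen(modules[i], RTLD_NOW | RTLD_LOCAL);
        if (handle == NULL) {
            fprintf(stderr, "%s\n", dlerror());
            continue;
        }
        /* Every module exports a function with this same name; resolving
         * it through the module's own handle keeps the copies separate. */
        lookup_fn lookup = (lookup_fn) dlsym(handle, "module_lookup");
        if (lookup != NULL)
            printf("%s answered %d\n", modules[i], lookup("example.org"));
        dlclose(handle);
    }
    return 0;
}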

ld : 0711-224 warning : Duplicate symbol (Overriding a System Call in AIX)

I read this article (http://qasim.zaidi.me/2009/05/overriding-system-call-in-aix.html) about overriding a system call in AIX.
I wrote two kernel extensions, just as the article says: "The first kernel extension would merely re-export the original system call with a different name. The second will actually override the syscall by redefining it, and then call the original one as exported by the first module."
But I get an error when I build the second extension:
1> gcc -O2 -maix64 -ffreestanding -o my_syscall.o -c my_syscall.c -D_KERNEL
1> ld -b64 -o my_syscall my_syscall.o -e my_syscall_init -bI:/home/rabbitte/output/test_system/my_syscall.exp -bI:/usr/lib/kernex.exp -lsys -lcsys
1>ld : 0711-224 warning : Duplicate symbol: .getpid
1> ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information.
The file "/home/rabbitte/output/test_system/my_syscall.exp" is the export file of the first extension.
I don't understand what "Duplicate symbol: .getpid" means. Could you tell me how to solve this problem?
Thanks very much.
The reason is that the file "kernex.exp" also includes the symbol "getpid". I had to comment out the getpid entry in kernex.exp.
I've never done any of this, so this might not be sufficient to solve your problem, but your linker line has two -bI options. According to the docs you point to, it should be -bI for the AIX kernex.exp and -bE for your own export file.

Clang + AVR Compilation error: '=w' in asm

I got the error below and I cannot find a solution. Does anybody know how to fix this error?
rafael#ubuntu:~/avr/projeto$ clang -fsyntax-only -Os -I /usr/lib/avr/include -D__AVR_ATmega328P__ -DARDUINO=100 -Wno-ignored-attributes -Wno-attributes serial_tree.c
In file included from serial_tree.c:3:
In file included from /usr/lib/avr/include/util/delay.h:43:
/usr/lib/avr/include/util/delay_basic.h:108:5: error: invalid output constraint '=w' in asm
    : "=w" (__count)
      ^
1 error generated.
util/delay.h is a header from avr-libc, and the feature you are using (some of the delay stuff) is implemented with inline asm. avr-libc is written to be used with avr-gcc and relies on avr-gcc features such as machine-dependent constraints in inline asm. llvm does not recognize the "w" constraint, so you have to use a different approach, such as using avr-gcc or porting that code to llvm.
avr-gcc also provides the built-in function __builtin_avr_delay_cycles for wasting a specified number of clock cycles. If llvm properly mimics avr-gcc, then you can use that function as a starting point.
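A small hedged sketch of that starting point (the wrapper name and the cycle count are mine, and whether a given clang/llvm build accepts this built-in would still need to be verified):
/* Illustrative sketch only: wasting a fixed number of cycles with the
 * avr-gcc built-in instead of the _delay_loop_* helpers from
 * <util/delay_basic.h>. The argument must be a compile-time constant. */
#ifdef __AVR__
void wait_roughly_1ms_at_16mhz(void)
{
    __builtin_avr_delay_cycles(16000UL); /* 16000 cycles at 16 MHz is about 1 ms */
}
#endif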

CMake, Qt, gcc and precompiled headers

I'm (once again) struggling with the creation of precompiled headers in conjunction with gcc and Qt on the Apple platform.
When creating my precompiled header, I now use a code section (based on good old "PCHSupport_26.cmake") to extract the compile flags as follows:
STRING(TOUPPER "CMAKE_CXX_FLAGS_${CMAKE_BUILD_TYPE}" _flags_var_name)
SET(_args ${CMAKE_CXX_FLAGS} ${${_flags_var_name}})
GET_DIRECTORY_PROPERTY(DIRINC INCLUDE_DIRECTORIES)
FOREACH(_item ${DIRINC})
    LIST(APPEND _args "-I${_item}")
ENDFOREACH(_item)
GET_DIRECTORY_PROPERTY(_defines_global COMPILE_DEFINITIONS)
LIST(APPEND _defines ${_defines_global})
STRING(TOUPPER "COMPILE_DEFINITIONS_${CMAKE_BUILD_TYPE}" _defines_for_build_name)
GET_DIRECTORY_PROPERTY(_defines_build ${_defines_for_build_name})
LIST(APPEND _defines ${_defines_build})
FOREACH(_item ${_defines})
    LIST(APPEND _args "-D${_item}")
ENDFOREACH(_item ${_defines})
LIST(APPEND _args -c ${CMAKE_CURRENT_SOURCE_DIR}/${PRECOMPILED_HEADER} -o ${_gch_filename})
SEPARATE_ARGUMENTS(_args)
Unfortunately, the above compiler flags miss two important parameters that CMake does generate when using the built-in compiler rules:
-DQT_DEBUG
and when compiling with the generated precompiled header, I get errors as follows:
file.h: not used because QT_DEBUG is defined.
I would need your help with the following:
Is the above way to retrieve the compiler flags correct?
Is there a better, easier, simpler way to do this?
Why does -DQT_DEBUG not show up in COMPILE_DEFINITIONS or COMPILE_DEFINITIONS_${CMAKE_BUILD_TYPE}?
If you are using Xcode, simply:
SET_TARGET_PROPERTIES(${target} PROPERTIES XCODE_ATTRIBUTE_GCC_PRECOMPILE_PREFIX_HEADER YES)
SET_TARGET_PROPERTIES(${target} PROPERTIES XCODE_ATTRIBUTE_GCC_PREFIX_HEADER "${target}/std.h")
I'm trying to use precompiled headers myself with raw gcc on Linux through CMake, but I haven't figured it out yet. It looks like it's working, but I don't see any speed improvement.
Edit:
I finally managed to use pch with gcc using the macro you can find here: http://www.mail-archive.com/cmake#cmake.org/msg04394.html

UNIX 'comm' utility allows for case insensitivity in BSD but not Linux (via -i flag). How can I get it in Linux?

I'm using the excellent UNIX 'comm' command-line utility in an application which I developed on a BSD platform (OSX). When I deployed to my Linux production server I found out that, sadly, Ubuntu Linux's 'comm' utility does not take the -i flag to indicate that the lines should be compared case-insensitively. Apparently the POSIX standard does not require the -i option.
So... I'm in a bind. I really need the -i option that works so well on BSD. I've gone so far as to try to compile the BSD comm.c source code on the Linux box, but I got:
http://svn.freebsd.org/viewvc/base/user/luigi/ipfw3-head/usr.bin/comm/comm.c?view=markup&pathrev=200559
me#host:~$ gcc comm.c
comm.c: In function ‘getline’:
comm.c:195: warning: assignment makes pointer from integer without a cast
comm.c: In function ‘wcsicoll’:
comm.c:264: warning: assignment makes pointer from integer without a cast
comm.c:270: warning: assignment makes pointer from integer without a cast
/tmp/ccrvPbfz.o: In function `getline':
comm.c:(.text+0x421): undefined reference to `reallocf'
/tmp/ccrvPbfz.o: In function `wcsicoll':
comm.c:(.text+0x691): undefined reference to `reallocf'
comm.c:(.text+0x6ef): undefined reference to `reallocf'
collect2: ld returned 1 exit status
Does anyone have any suggestions as to how to get a version of comm on Linux that supports 'comm -i'?
Thanks!
You can add the following in comm.c:
void *reallocf(void *ptr, size_t size)
{
    void *ret = realloc(ptr, size);

    if (ret == NULL) {
        free(ptr);
    }
    return ret;
}
You should then be able to compile it. Make sure comm.c has #include <stdlib.h> in it (it probably does already).
The reason your compilation fails is that BSD's comm.c uses reallocf(), which is not a standard C function. But it is easy to write.
@OP, there's no need to go to such lengths as compiling your own source code. Here's an alternative suggestion: since you want case-insensitive comparison, you can just convert both files to lower (or upper) case using another tool such as tr before you pass them to comm.
tr '[A-Z]' '[a-z]' <file1 > temp1
tr '[A-Z]' '[a-z]' <file2 > temp2
comm temp1 temp2
You could try concatenating both files, sorting the result, and piping it to uniq -c -i. It will show all lines from both files, with the number of occurrences in the first column. As long as the original files don't contain repeated lines, every line with a count greater than 1 is a line common to both files.
Hope it helps!
Does anyone have any suggestions as to how to get a version of comm on Linux that supports 'comm -i'?
Not quite that; but have you checked whether your requirements could be satisfied by the join utility? It does have the -i option on Linux...
