I need to programmatically inspect the library dependencies of a given executable. Is there a better way than running the ldd (or objdump) commands and parsing their output? Is there an API available which gives the same results as ldd?
I am going to assume that you are using an ELF system (probably Linux).
Dynamic library dependencies of an executable or a shared library are encoded as a table of Elf{32,64}_Dyn entries in the PT_DYNAMIC segment of the library or executable. ldd (indirectly, but that's an implementation detail) interprets these entries and then uses various details of the system configuration and/or the LD_LIBRARY_PATH environment variable to locate the needed libraries.
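For reference, each entry is a tag/value pair; the 64-bit variant is declared in <elf.h> roughly like this (DT_NEEDED entries store an offset into the dynamic string table in d_val):

typedef struct {
    Elf64_Sxword d_tag;      /* entry type: DT_NEEDED, DT_STRTAB, ... */
    union {
        Elf64_Xword d_val;   /* integer value (e.g. a string-table offset) */
        Elf64_Addr  d_ptr;   /* virtual address (e.g. for DT_STRTAB) */
    } d_un;
} Elf64_Dyn;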
You can print the contents of PT_DYNAMIC with readelf -d a.out. For example:
$ readelf -d /bin/date
Dynamic section at offset 0x19df8 contains 26 entries:
Tag Type Name/Value
0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
0x000000000000000c (INIT) 0x3000
0x000000000000000d (FINI) 0x12780
0x0000000000000019 (INIT_ARRAY) 0x1a250
0x000000000000001b (INIT_ARRAYSZ) 8 (bytes)
0x000000000000001a (FINI_ARRAY) 0x1a258
0x000000000000001c (FINI_ARRAYSZ) 8 (bytes)
0x000000006ffffef5 (GNU_HASH) 0x308
0x0000000000000005 (STRTAB) 0xb38
0x0000000000000006 (SYMTAB) 0x358
0x000000000000000a (STRSZ) 946 (bytes)
0x000000000000000b (SYMENT) 24 (bytes)
0x0000000000000015 (DEBUG) 0x0
0x0000000000000003 (PLTGOT) 0x1b000
0x0000000000000002 (PLTRELSZ) 1656 (bytes)
0x0000000000000014 (PLTREL) RELA
0x0000000000000017 (JMPREL) 0x2118
0x0000000000000007 (RELA) 0x1008
0x0000000000000008 (RELASZ) 4368 (bytes)
0x0000000000000009 (RELAENT) 24 (bytes)
0x000000006ffffffb (FLAGS_1) Flags: PIE
0x000000006ffffffe (VERNEED) 0xf98
0x000000006fffffff (VERNEEDNUM) 1
0x000000006ffffff0 (VERSYM) 0xeea
0x000000006ffffff9 (RELACOUNT) 170
0x0000000000000000 (NULL) 0x0
This tells you that the only library needed for this binary is libc.so.6 (the NEEDED entry).
If your real question is "what other libraries does this ELF binary require", then that is pretty easy to obtain: just look for DT_NEEDED entries in the dynamic section (the PT_DYNAMIC segment). Doing this programmatically is rather easy (a sketch in C follows the steps below):
Locate the table of program headers (the e_phoff field of the ELF file header tells you where it starts).
Iterate over them to find the one whose .p_type is PT_DYNAMIC.
That segment contains a table of fixed-size Elf{32,64}_Dyn records.
Iterate over those records, looking for ones with .d_tag == DT_NEEDED.
Voila.
P.S. There is a bit of a complication: the strings, such as libc.so.6 are not part of the PT_DYNAMIC. But there is a pointer to where they are in the .d_tag == DT_STRTAB entry. See this answer for example code.
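In case it helps, here is a minimal sketch of those four steps in C. It assumes a 64-bit, native-endian ELF file, reads the file directly rather than a loaded process image, and omits most error checking:

#include <elf.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    /* Step 1: the ELF header tells us where the program headers are. */
    Elf64_Ehdr eh;
    fread(&eh, sizeof eh, 1, f);
    Elf64_Phdr *ph = malloc(eh.e_phnum * sizeof *ph);
    fseek(f, eh.e_phoff, SEEK_SET);
    fread(ph, sizeof *ph, eh.e_phnum, f);

    /* Step 2: find the program header whose p_type is PT_DYNAMIC. */
    Elf64_Phdr *dynph = NULL;
    for (int i = 0; i < eh.e_phnum; i++)
        if (ph[i].p_type == PT_DYNAMIC)
            dynph = &ph[i];
    if (!dynph) { fprintf(stderr, "no PT_DYNAMIC (statically linked?)\n"); return 1; }

    /* Step 3: read the Elf64_Dyn records of that segment. */
    size_t ndyn = dynph->p_filesz / sizeof(Elf64_Dyn);
    Elf64_Dyn *dyn = malloc(dynph->p_filesz);
    fseek(f, dynph->p_offset, SEEK_SET);
    fread(dyn, sizeof *dyn, ndyn, f);

    /* DT_STRTAB holds a virtual address; translate it to a file offset via
       the PT_LOAD segment that contains it (the complication mentioned above). */
    Elf64_Addr straddr = 0;
    for (size_t i = 0; i < ndyn; i++)
        if (dyn[i].d_tag == DT_STRTAB)
            straddr = dyn[i].d_un.d_ptr;
    Elf64_Off stroff = 0;
    for (int i = 0; i < eh.e_phnum; i++)
        if (ph[i].p_type == PT_LOAD &&
            straddr >= ph[i].p_vaddr &&
            straddr <  ph[i].p_vaddr + ph[i].p_filesz)
            stroff = ph[i].p_offset + (straddr - ph[i].p_vaddr);

    /* Step 4: print the string attached to every DT_NEEDED entry. */
    for (size_t i = 0; i < ndyn; i++) {
        if (dyn[i].d_tag == DT_NEEDED) {
            char name[256] = {0};
            fseek(f, stroff + dyn[i].d_un.d_val, SEEK_SET);
            fread(name, 1, sizeof name - 1, f);
            printf("NEEDED: %s\n", name);
        }
    }
    fclose(f);
    return 0;
}

Run against /bin/date, this should print NEEDED: libc.so.6, matching the readelf output above.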
I am very new to Frama-C and ran into an issue when trying to open a C source file.
The error shows as
"fatal error: event.h: No such file or directory. Compilation terminated".
[kernel] Parsing FRAMAC_SHARE/libc/__fc_builtin_for_normalization.i (no preprocessing)
[kernel] Parsing WorkSpace/bipbuffer.c (with preprocessing)
[kernel] user error: failed to run: gcc -E -C -I. -dD -D__FRAMAC__ -nostdinc -D__FC_MACHDEP_X86_32 -I/usr/share/frama-c/libc -o '/tmp/bipbuffer.ce6d077.i' '/home/xxx/WorkSpace/bipbuffer.c' you may set the CPP environment variable to select the proper preprocessor command or use the option "-cpp-command".
[kernel] user error: stopping on file "/home/xxx/WorkSpace/bipbuffer.c" that has errors. Add'-kernel-msg-key pp' for preprocessing command.
So basically I am trying to open a C source file but it returns an error like this. I also tried other very simple C files, like hello world and other slicing functions, and they work well.
I thought it was because I didn't have the dependencies of 'event.h', but it still returns these errors after I installed the libevent dependencies. I am not sure if I need to manually set some path to the dependencies for Frama-C.
Here is part of the C file (Source link: https://memcached.org/) that I would like to open:
#include "stdio.h"
#include <stdlib.h>
/* for memcpy */
#include <string.h>
#include "bipbuffer.h"
static size_t bipbuf_sizeof(const unsigned int size)
{
    return sizeof(bipbuf_t) + size;
}

int bipbuf_unused(const bipbuf_t* me)
{
    if (1 == me->b_inuse)
        /* distance between region B and region A */
        return me->a_start - me->b_end;
    else
        return me->size - me->a_end;
}
......
Thanks,
Compilers and other tools working with C source code need to know where to find header files. There are some standard places where they look automatically, but Frama-C has fewer of those than (and different ones from) a normal compiler.
You need to find out where event.h is installed, then pass something like -cpp-extra-args "-I /path/to/directory/" to Frama-C. Pass the directory name only, not including the name event.h itself.
In addition to Isabelle Newbie's answer, I'd like to point out that the Chlorine version of Frama-C, whose beta has been recently announced, features a new option -json-compilation-database that attempts to read the arguments to be passed to the pre-processor from a compilation database.
Such a database can be generated directly by cmake, but there are solutions for make-based projects such as the one you refer to, in particular bear, which intercepts the commands launched by make to build the database.
Here's a detailed summary of how you could proceed, using the new -json-compilation-database option from Frama-C 17 Chlorine, plus an extra script list_files.py (which is not in the beta, but will be available in the final 17 release, and can be downloaded here):
1. Get the source files you want to analyze with Frama-C, run ./configure, and if possible try to disable optional dependencies on external libraries; some code bases enable such dependencies based on the availability of libraries/system features, but have fallback options (resorting to standard C library or POSIX functions). The more of the code Frama-C can actually see, the better the chances of analyzing it well, so if such external libraries are not essential, excluding them might help you get more "POSIXy" code. This is typically visible in config.h files, in macros commonly named HAVE_*.
2. Compile and install Build EAR or some equivalent tool to obtain a compile_commands.json file.
3. Run bear make (or cmake with the CMAKE_EXPORT_COMPILE_COMMANDS flag) to get the compile_commands.json file.
4. Run the aforementioned list_files.py in the directory containing compile_commands.json to obtain the list of C sources used during compilation.
5. Run Frama-C (17 Chlorine or newer), giving it the list of sources found in the previous step, plus the option -json-compilation-database . to parse compile_commands.json and, hopefully, get the appropriate preprocessing flags.
Ideally, this should suffice, but in practice, this is rarely enough. In particular due to the presence of external libraries and non-C99, non-POSIX functions, the following steps are always needed.
6. Inclusion of external libraries
At this step, Frama-C will complain about the lack of event.h. You'll have to include the headers of this library yourself. Note: copying headers directly from your /usr/include is not likely to work, due to several architecture-specific definitions, especially files such as bits/*.h.
Instead, consider downloading the external libraries and preparing them (e.g. running ./configure at least). Then manually add the extra include directory via -cpp-extra-args="-I <path/to/your/sources/for/libevent.h>/include".
7. Inclusion of missing non-POSIX headers
Some other headers may be missing, in particular GNU- or BSD-specific headers (e.g. sysexits.h). Get these headers and add them where necessary. The error message in this case comes from the preprocessor (gcc) and is similar to this:
memcached.c:51:10: fatal error: sysexits.h: No such file or directory
#include <sysexits.h>
^~~~~~~~~~~~
compilation terminated.
8. Definition of missing non-POSIX types and constants
At this point, all necessary headers should be available, but parsing with Frama-C may still fail, due to the use of non-POSIX type definitions (e.g. caddr_t, struct linger) and non-POSIX constants (e.g. MAXPATHLEN, SOCK_NONBLOCK, NI_MAXSERV). Error messages typically resemble the following:
[kernel] memcached.c:3261: Failure: Cannot resolve variable MAXPATHLEN
Constants are often easy to provide manually, by grepping what's available in your /usr/include.
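For example, a missing constant can be dropped into a small stub header (the file name here is made up, and the value is the one a typical Linux <sys/param.h> uses; check your own headers) and force-included via -cpp-extra-args="-include frama_c_stubs.h":

/* frama_c_stubs.h (hypothetical): constants the analyzed code expects
   but Frama-C's libc does not provide. */
#ifndef MAXPATHLEN
#define MAXPATHLEN 4096   /* value taken from <sys/param.h> on a typical Linux */
#endif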
Type definitions, on the other hand, may require some copy-pasting in the right places, especially if they depend on other types which are also missing. This step is hard to automate, but relatively straightforward once you get used to the specific error messages.
For instance, the following error message is related to a missing type definition (caddr_t):
[kernel] Parsing memcached.c (with preprocessing)
[kernel] memcached.c:1074:
syntax error:
Location: line 1074, between columns 38 and 47, before or at token: c
1072 *hdr++ = 0;
1073 *hdr++ = 0;
1074 assert((void *) hdr == (caddr_t)c->msglist[i].msg_iov[0].iov_base + UDP_HEADER_SIZE);
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1075 }
1076
Note that the token just before c is (caddr_t), a type which is never defined in these sources (it is usually a typedef for either void * or char *).
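A one-line definition in the same kind of stub header is usually enough here; char * matches what glibc uses, but verify against your own /usr/include:

typedef char *caddr_t;   /* definition borrowed from <sys/types.h> on glibc */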
The following error message is related to an incomplete type, i.e., a struct used somewhere but never defined:
[kernel] memcached.c:5811: User Error:
variable `ling' has initializer but incomplete type
It means that variable ling's type, which is struct linger (non-POSIX), has never been defined. In this case, we can copy it from our /usr/include/bits/socket.h:
struct linger
{
    int l_onoff;    /* Nonzero to linger on close. */
    int l_linger;   /* Time to linger. */
};
Note: if there are POSIX constants/definitions missing from Frama-C's libc, consider notifying its developers, or proposing pull requests on Frama-C's GitHub.
9. Fixing incompatible and missing function prototypes
Parsing is likely to succeed after the previous step, but it may still fail due to incompatible function prototypes. For instance, you may get:
[kernel] User Error: Incompatible declaration for usleep:
different integer types int and unsigned int
First declaration was at assoc.c:238
Current declaration is at items.c:1573
This is the consequence of a warning emitted earlier:
[kernel:typing:implicit-function-declaration] slabs.c:1150: Warning:
Calling undeclared function usleep. Old style K&R code?
It means that function usleep is called, but it does not have a prototype, therefore Frama-C uses the pre-C99 convention of "implicit int": it generates such a prototype, but later in the code, an actual declaration of usleep is found, and its type is not int. Hence the error.
To prevent this, you need to ensure usleep's prototype is properly included. Since it is not part of POSIX.1-2008, you need to either define/undefine the appropriate feature macros (see unistd.h), or add your own prototype.
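For instance, a sketch of the second option, declaring the prototype yourself in a stub header (the signature below is the POSIX.1-2001 one from <unistd.h>, where useconds_t is an unsigned integer type):

/* usleep() was withdrawn in POSIX.1-2008, so Frama-C's libc may not declare it.
   Declaring it explicitly prevents the implicit "int usleep()" prototype. */
extern int usleep(unsigned int usec);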
At the end, this should allow Frama-C to parse the files and build an AST.
However, several prototypes are still missing; we were just lucky that none of them conflicted with actual declarations. Ideally, you should consider the parsing stage done when there are no more implicit-function-declaration messages or similar warnings.
Some of the missing prototypes in memcached, such as getsubopt, are POSIX and should be integrated into Frama-C's standard library. Others might become part of a small library of non-standard stubs, to be reused for other software.
Contributing with results for future reuse
Successful conclusion of the parsing stage for such open source libraries is enough to consider them for integration into this repository of open source case studies, so that future users can start their analyses without having to redo all of these steps. (The repository is oriented towards Eva, but not exclusively: parsing is useful for all of Frama-C plug-ins.)
I want to get the entry point address of a mach-o executable.
I have read that otool (-l option) command is able to show us the mach-o entry point.
I have tried, but I do not see the entry point. I've tried on both 32-bit and 64-bit executables.
If I print the address of the main function, I see that the last 3 digits are the same between two executions, but the other digits change...
Try Using "Hopper" application. This is very useful for displaying the Contents of a Mach-O executable and sections of its code. https://www.hopperapp.com
I'm confused about the RISC-V ABI Register Names. For example, Table 18.2 in the "RISC-V Instruction Set Manual, Volume I: User-Level ISA, Version 2.0" at page 85 specifies that the stack pointer sp is register x14. However, the instruction
addi sp,zero,0
is compiled to 0x00000113 by riscv64-unknown-elf-as (-m32 does not make a difference). In binary:
000000000000 00000 000 00010 0010011
^imm ^rs1 ^f3 ^rd ^opcode
So here sp seems to be x2. Then I googled a bit and found the RISC-V Linux User's Manual. This document states that sp is x30.
So what is it? Are there different ABIs? Can I set the ABI with a command line option to riscv64-unknown-elf-*? Is there a comprehensive table somewhere?
The stack pointer is now x2.
Here is the current ABI documentation, which has been moved out of the User-Level ISA specification, which now contains that same link.
The ABI was modified to better accommodate the new RISC-V compressed spec, which puts the 8 most-used registers next to each other in x8-x15.
Note: do not trust ANY non-riscv.org webpage. Quan Nguyen makes this clear in his introduction: the "RISC-V Linux User's Manual" documents the porting process, and its accuracy is NOT guaranteed.
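If you want to double-check what the assembler emitted, here is a small sketch (plain C, no special tools assumed) that decodes the rd field of the instruction word from the question and confirms that sp is x2:

#include <stdio.h>

int main(void)
{
    unsigned insn   = 0x00000113u;          /* addi sp, zero, 0 */
    unsigned opcode = insn & 0x7fu;         /* bits 6:0  -> 0x13 (OP-IMM) */
    unsigned rd     = (insn >> 7) & 0x1fu;  /* bits 11:7 -> destination register */
    printf("opcode=0x%x rd=x%u\n", opcode, rd);  /* prints: opcode=0x13 rd=x2 */
    return 0;
}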
I used
readelf --dyn-sym my_elf_binary | grep FUNC | grep UND
to display the dynamically imported functions of my_elf_binary, from the dynamic symbol table in the .dynsym section to be precise. Example output would be:
[...]
3: 00000000 0 FUNC GLOBAL DEFAULT UND tcsetattr#GLIBC_2.0 (3)
4: 00000000 0 FUNC GLOBAL DEFAULT UND fileno#GLIBC_2.0 (3)
5: 00000000 0 FUNC GLOBAL DEFAULT UND isatty#GLIBC_2.0 (3)
6: 00000000 0 FUNC GLOBAL DEFAULT UND access#GLIBC_2.0 (3)
7: 00000000 0 FUNC GLOBAL DEFAULT UND open64#GLIBC_2.2 (4)
[...]
Is it safe to assume that the names associated to these symbols, e.g. the tcsetattr or access, are always unique? Or is it possible, or reasonable*), to have a dynamic symbol table (filtered for FUNC and UND) which contains two entries with the same associated string?
The reason I am asking is that I am looking for a unique identifier for dynamically imported functions ...
*) Wouldn't the dynamic linker resolve all "UND FUNC symbols" with the same name to the same function anyway?
Yes, given a symbol name and the set of libraries an executable is linked against, you can uniquely identify the function. This behavior is required for linking and dynamic linking to work.
An illustrative example
Consider the following two files:
librarytest1.c:
#include <stdio.h>
int testfunction(void)
{
printf("version 1");
return 0;
}
and librarytest2.c:
#include <stdio.h>
int testfunction(void)
{
printf("version 2");
return 0;
}
Both compiled into shared libraries:
% gcc -fPIC -shared -Wl,-soname,liblibrarytest.so.1 -o liblibrarytest.so.1.0.0 librarytest1.c -lc
% gcc -fPIC -shared -Wl,-soname,liblibrarytest.so.2 -o liblibrarytest.so.2.0.0 librarytest2.c -lc
Note that we cannot put two functions with the same name into a single shared library:
% gcc -fPIC -shared -Wl,-soname,liblibrarytest.so.0 -o liblibrarytest.so.0.0.0 librarytest1.c librarytest2.c -lc
/tmp/cctbsBxm.o: In function `testfunction':
librarytest2.c:(.text+0x0): multiple definition of `testfunction'
/tmp/ccQoaDxD.o:librarytest1.c:(.text+0x0): first defined here
collect2: error: ld returned 1 exit status
This shows that symbol names are unique within a shared library, but do not have to be among a set of shared libraries.
% readelf --dyn-syms liblibrarytest.so.1.0.0 | grep testfunction
12: 00000000000006d0 28 FUNC GLOBAL DEFAULT 10 testfunction
% readelf --dyn-syms liblibrarytest.so.2.0.0 | grep testfunction
12: 00000000000006d0 28 FUNC GLOBAL DEFAULT 10 testfunction
Now let's link our shared libraries with an executable. Consider linktest.c:
int testfunction(void);
int main()
{
testfunction();
return 0;
}
We can compile and link this against either shared library:
% gcc -o linktest1 liblibrarytest.so.1.0.0 linktest.c
% gcc -o linktest2 liblibrarytest.so.2.0.0 linktest.c
And run each of them (note I'm setting the dynamic library path so the dynamic linker can find the libraries, which are not in a standard library path):
% LD_LIBRARY_PATH=. ./linktest1
version 1%
% LD_LIBRARY_PATH=. ./linktest2
version 2%
Now let's link our executable to both libraries. Each exports the same symbol testfunction, and each library has a different implementation of that function.
% gcc -o linktest0-1 liblibrarytest.so.1.0.0 liblibrarytest.so.2.0.0 linktest.c
% gcc -o linktest0-2 liblibrarytest.so.2.0.0 liblibrarytest.so.1.0.0 linktest.c
The only difference is the order in which the libraries are passed to the compiler.
% LD_LIBRARY_PATH=. ./linktest0-1
version 1%
% LD_LIBRARY_PATH=. ./linktest0-2
version 2%
Here is the corresponding ldd output:
% LD_LIBRARY_PATH=. ldd ./linktest0-1
linux-vdso.so.1 (0x00007ffe193de000)
liblibrarytest.so.1 => ./liblibrarytest.so.1 (0x00002b8bc4b0c000)
liblibrarytest.so.2 => ./liblibrarytest.so.2 (0x00002b8bc4d0e000)
libc.so.6 => /lib64/libc.so.6 (0x00002b8bc4f10000)
/lib64/ld-linux-x86-64.so.2 (0x00002b8bc48e8000)
% LD_LIBRARY_PATH=. ldd ./linktest0-2
linux-vdso.so.1 (0x00007ffc65df0000)
liblibrarytest.so.2 => ./liblibrarytest.so.2 (0x00002b46055c8000)
liblibrarytest.so.1 => ./liblibrarytest.so.1 (0x00002b46057ca000)
libc.so.6 => /lib64/libc.so.6 (0x00002b46059cc000)
/lib64/ld-linux-x86-64.so.2 (0x00002b46053a4000)
Here we can see that while symbols are not unique, the way the linker resolves them is well defined (it appears to always resolve to the first symbol it encounters). Note that this is a bit of a pathological case, as you normally wouldn't do this. In the cases where you would, there are better ways of handling symbol naming so that exported names are unique (symbol versioning, etc.).
In summary: yes, you can uniquely identify the function given its name. If there happen to be multiple symbols by that name, you identify the proper one using the order in which the libraries are resolved (from ldd or objdump, etc.). Yes, in this case you need a bit more information than just the name, but it is possible if you have the executable to inspect.
Note that in your case, the name of the first function import is not just tcsetattr but tcsetattr#GLIBC_2.0. The # is how the readelf program displays a versioned symbol import.
GLIBC_2.0 is a version tag that glibc uses to stay binary compatible with old binaries in the (unusual but possible) case that the binary interface to one of its functions needs to change. The original .o file produced by the compiler will just import tcsetattr with no version information, but during static linking, the linker notices that the actual symbol exported by libc.so carries a GLIBC_2.0 tag, and so it creates a binary that insists on importing the particular tcsetattr symbol that has version GLIBC_2.0.
In the future there might be a libc.so that exports one tcsetattr@GLIBC_2.0 and a different tcsetattr@GLIBC_2.42, and the version tag will then be used to find out which one a particular ELF object refers to.
It is possible that the same process may also use tcsetattr#GLIBC_2.42 at the same time, such as if it uses another dynamic library which was linked against a libc.so new enough to provide it. The version tags ensure that both the old binary and the new library get the function they expect from the C library.
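For completeness, here is a sketch of the GNU mechanism behind such tags; the version-node names (VERS_1, VERS_2) are made up for illustration, and building the library also requires a linker version script declaring those nodes, passed via -Wl,--version-script:

/* One shared library exporting two versions of the same symbol, which is
   essentially what glibc does for tcsetattr@GLIBC_2.0 and friends. */
int testfunction_v1(void) { return 1; }
int testfunction_v2(void) { return 2; }

/* Bind each implementation to a version node; "@@" marks the default
   version that newly linked programs will resolve to. */
__asm__(".symver testfunction_v1, testfunction@VERS_1");
__asm__(".symver testfunction_v2, testfunction@@VERS_2");

readelf --dyn-syms on the resulting library would then show testfunction twice, once per version tag.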
Most libraries don't use this mechanism and instead just rename the entire library if they need to make breaking changes to their binary interfaces. For example, if you dump /usr/bin/pngtopnm you'll find that the symbols it imports from libnetpbm and libpng are not versioned. (Or at least that's what I see on my machine).
The cost of this is that you can't have a binary that links against one version of libpng and also links against another library that itself links against a different libpng version; the exported names from the two libpng's would clash.
In most cases this is manageable enough through careful packaging practice that maintaining the library source to produce useful version tags and stay backwards compatible is not worth the trouble.
But in the particular case of the C library and a few other vital system libraries, changing the name of the library would be so extremely painful that it makes sense for the maintainers to jump through some hoops in order to ensure it will never need to happen again.
Although in most cases every symbol is unique, there are a handful of exceptions. My favorite is the use of multiple identical symbol imports by PAM (Pluggable Authentication Modules) and NSS (Name Service Switch). In both cases, all modules written for either interface export a standard set of functions with standard names. A common and frequently used example is what happens when you call gethostbyname: the NSS library will call the same function in multiple libraries to get an answer. A common configuration calls the same function in three libraries! I have seen the same function called in five different libraries from one function call, and that was not the limit, just what was useful. Special calls to the dynamic linker are needed to do this, and I have not familiarised myself with the mechanics, but there is nothing special about the linking of the library modules that are loaded this way.
Assuming the following are correct...
stdin, stdout, and stderr are streams
streams are file descriptors
file descriptors are numbers/indexes in the kernel representing open files
Questions:
a. Does it follow by transitivity that stdin/out/err involve open files? So if I do ls /dir, does ls output the results to a file referred to by stdout(2)?
b. Where does the above file live? In /proc/<pid>/? Or is that where the FD lives?
c. What is /dev/stdout? If I do vim /dev/stdout, vim tells me it is not a file. I see there's a series of links that lead to /dev/pts/27. What is going on? I tried to cat /dev/stdout but nothing happens.
d. In general, how is it that "files" in Linux are actually NOT files?
Some of your assumptions are incorrect. For example, stdin is of type FILE*; it's not a "file descriptor".
stdin, stdout, and stderr are macros defined in <stdio.h>. (Yes, they're required to be macros, not just variable names). They expand to expressions of type FILE*, and they point to the FILE objects associated with the standard input, output, and error streams.
A "file descriptor" is a small integer value representing a POSIX stream. On UNIX-like systems, FILE* values are generally associated with file descriptors (you can use the fileno and fdopen functions to go from one to the other), but they're not the same thing.
Basically, there are two distinct I/O systems, one built on top of the other. The lower level system uses numeric file descriptors, manipulated via the open, read, write, and close functions, and so forth. The higher level, as defined by the ISO C standard, uses pointers of type FILE*, manipulated with fopen, fread, fwrite, fprintf, putchar, fclose, and so forth.
As I mentioned, on UNIX-like system, the C standard layer is generally implemented on top of the POSIX layer. On non-POSIX systems (like MS Windows), the C standard layer may be implemented on top of some other system-specific interface.
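As a small illustration of the two layers (a sketch for a POSIX system), the same text can be written both through the FILE* stream and through the file descriptor underneath it:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* ISO C layer: a buffered FILE* stream. */
    fprintf(stdout, "via stdio\n");
    fflush(stdout);                    /* flush the stdio buffer first */

    /* POSIX layer: the small-integer descriptor underneath that stream. */
    int fd = fileno(stdout);           /* usually 1 */
    write(fd, "via write\n", 10);      /* write(2) bypasses stdio buffering */

    return 0;
}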
Linux and other UNIX-like systems try (incompletely) to follow an "everything is a file" philosophy. There are a number of file-like entities under /proc. These are not physical files stored on disk; they're entities that can be accessed using either the POSIX or ISO C I/O layers. Neither layer requires the "files" it deals with to be actual disk files, so there's nothing inconsistent about this.
man proc for more information on what's under the /proc directory (there's far more detail than I can put in this answer).