I want to analyze a file from a large project to create a Program Dependence Graph using Frama-C, but keep getting odd errors such as:
/usr/include/bits/fcntl-linux.h:305:[kernel] user error: Length of array is zero. This extension is unsupported
If I try to use the libc implementation provided by Frama-C, compilation fails due to missing headers such as sys/file.h.
I am trying to analyze files from the Lynx project, specifically src/WWW/Library/Implementation/HTTP.c, using GCC version 4.8.1.
What I really need is to be able to generate a PDG for this source file (which of course has various dependencies), but I think that if I could get even a somewhat incomplete graph by skipping over undefined functions, that would be a great first step.
You need to provide your own "file.h" file in a directory "sys" placed anywhere in the path GCC searches when pre-processing for Frama-C.
For reference, here is the implementation of sys/file.h on another system. You may also be interested in this other StackOverflow question about sys/file.h.
For Frama-C's value analysis, assigns clauses alongside the prototypes go a long way:
/*@ assigns *f \from ui, s, *fo; */
void finit(struct file *f, u_int ui, short s, void *p, struct fileops *fo);
Note that I have no idea what function finit() does and whether the above is a correct assigns clause for it. In fact, this is the whole point: neither does Frama-C out of the box, and since this lowish-level, lessish-portable system call is used in the code you wish to analyze, someone will have to know. I am afraid it is going to have to be you. On the plus side, you only need to provide the types, macros and function prototypes that the code you wish to analyze uses.
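As an illustration, a minimal stub sys/file.h might look like the sketch below. The LOCK_* constants and the flock() prototype follow the usual BSD interface; the assigns clause is an assumption that you would need to check against what HTTP.c actually expects:

/* sys/file.h -- minimal stub for analysis purposes only (a sketch, not a real libc header) */
#ifndef _SYS_FILE_H
#define _SYS_FILE_H

/* Operation flags for flock(), as in the usual BSD interface. */
#define LOCK_SH 1  /* shared lock */
#define LOCK_EX 2  /* exclusive lock */
#define LOCK_NB 4  /* do not block when locking */
#define LOCK_UN 8  /* unlock */

/*@ assigns \result \from fd, operation; */
extern int flock(int fd, int operation);

#endif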
I'm modifying code written in C++ in order to add several features required by my company. I need to modify this code as little as possible, because it is public code obtained from a Git repository, and we want to avoid deviating from the original source in case we need to synchronize with new upstream versions in the future.
In this code, a structure is initialized with a call to std::memset, and I needed to add a shared pointer to this structure.
I notice no issue with that: the code compiles, links, and works as expected, and I don't even get warnings during compilation.
But is it safe to do it this way? Can a std::shared_ptr be correctly initialized if it is part of a structure initialized with std::memset? Or are there side effects or hazards that rule this out?
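A minimal sketch of the pattern in question (the struct and its members are invented for illustration):

#include <cstring>
#include <memory>

struct Config {
    int flags;
    std::shared_ptr<int> data;  // the newly added non-trivial member
};

int main() {
    Config c;
    // Zeroing a struct that contains a std::shared_ptr bypasses the
    // shared_ptr's constructor; formally this is undefined behavior,
    // even if an all-zero shared_ptr happens to look like a null one
    // on typical implementations.
    std::memset(&c, 0, sizeof c);
    c.data = std::make_shared<int>(42);
    return 0;
}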
Problem Solved!!! See below for solution
I was about to post this question and decided to check the web one more time. This site https://www.freepascal.org/docs-html/prog/progsu40.html
has this statement: The {$I filename} or {$INCLUDE filename} directive tells the compiler to read further statements from the file filename. The statements read there will be inserted as if they occurred in the current file.
This is exactly what I want to do with Arduino. How do I do it?
My Skill Set:
Writing code since 1967. Yes, I survived Y2K, which was a real thing; so I'm not new to programming/debugging. Mainframes and PCs. Very solid COBOL and SAS skills. Good skills with Borland/Lazarus Object Pascal. Weak C/C++ skills.
Background:
I have two working Arduino programs that are used on a model railroad. Prog1 uses infrared sensors to light LEDs that indicate the position of a train in a tunnel. I built the IRSensor class to handle a single sensor. Prog2 uses push-buttons to set a route through several track switches. Each track switch is set via a servo. I extended the Servo class to TOServo, which encapsulates most of the commonality in each track switch.
Now I'm working on a different model railroad and need to merge Prog1 and Prog2 into a single program. Building Prog3 via copy/paste from programs 1 and 2 has proved unwieldy.
Problem:
How do I tell the Arduino pre-processor/compiler to "insert filename here; do not compile, pre-compile, or otherwise process the filename unless it is wrapped around the file asking for the insertion"?
What I've tried:
I built Prog3 by separating the code for Prog2 into three sections: the main program storage and code, plus two include statements (storage definitions and executable code for TOServo). These include statements pull in code that defines or accesses an array of TOServo. I've tried several suffix combinations (.h/.ino, .h/.cpp, and .c/.c), and they all generate 'not declared in this scope' errors.
Finally:
Thanks for your help.
SOLUTION
My .ino file had grown large and unwieldy. The 'solution' was to move a large segment of code and the matching declarations to external .h/.cpp files, and to access those files via #include statements. The program would not compile (undefined variables); they were, in fact, defined, but the compiler couldn't find them. After many attempts to fix or rearrange the code, two things finally dawned on me.
1) The Arduino pre-compiler changes (rearranges?) my code so that C++ and the Arduino CPU can work together. This means that the code I see is not always the code the compiler sees.
2) My .h/.cpp files define and manage an array of servo objects. I could convert those files into an object that I access from the main .ino file, as sketched below.
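A minimal sketch of that refactoring (the class name, pin numbers, and switch count are invented for illustration; the real code would manage TOServo objects):

// TurnoutBank.h -- owns the array of turnout servos, so the main
// sketch never touches the storage directly
#ifndef TURNOUT_BANK_H
#define TURNOUT_BANK_H

#include <Servo.h>

class TurnoutBank {
  public:
    // Attach each servo to its pin; call this from setup().
    void begin() {
      for (int i = 0; i < COUNT; i++) {
        servos[i].attach(FIRST_PIN + i);
      }
    }
    // Throw one track switch to the given angle.
    void set(int id, int angle) {
      if (id >= 0 && id < COUNT) servos[id].write(angle);
    }
  private:
    static const int COUNT = 4;      // number of track switches (assumed)
    static const int FIRST_PIN = 2;  // first servo pin (assumed)
    Servo servos[COUNT];
};

#endif

// Prog3.ino -- the main sketch sees only the object, not its internals
#include "TurnoutBank.h"

TurnoutBank turnouts;

void setup() {
  turnouts.begin();
}

void loop() {
  // read buttons, then e.g. turnouts.set(0, 90);
}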
So I've solved my problem. Thanks to all those who have posted in many forums/sites, especially to Tarick Welling who stayed with me to the end.
I faced a situation where I need some tables to be filled in one source file (for example fill.cu) and then used in different kernels in different source files.
I tried declaring a pointer __device__ float *myTable; as 'extern' in the fill.h header file, including that header in others.cpp, defining the pointer in fill.cu, and allocating and filling it there.
This way, I got a linker error indicating that myTable had already been defined in fill.cpp.
After many unsuccessful tries, I decided to put all kernels that need this table in the same source file; this way everything worked fine, until I added a cudaMalloc in the main function before allocating my table in fill.cpp.
Then I noticed that the table values and the data allocated in main overlapped, and using the CUDA debugging tools of MS Visual Studio 2015, I found that the two allocated pointers were the same!
Please advise how to declare a global pointer in CUDA without conflict.
The traditional CUDA linkage model requires that all device symbols, textures, functions, etc. are defined and used within the scope of the same translation unit. It sounds like your code structure is violating this requirement.
You have two choices:
Continue with the same code structure, but provide wrapper functions which your main can call to perform operations on statically declared device variables, rather than directly manipulating device symbols with the CUDA API from other code.
Use separate compilation. Here, you define the device symbol you want to access in exactly one file and declare the same symbol as extern everywhere else you need to use it. You must explicitly use several nvcc options to compile your device code and use a separate device-code linking stage.
Both approaches are well documented.
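As an illustration of the second approach, here is a minimal sketch (file names follow the question; the table size is arbitrary; build with relocatable device code, e.g. nvcc -rdc=true fill.cu use.cu -o app):

// fill.h -- declaration visible to every translation unit
#ifndef FILL_H
#define FILL_H
extern __device__ float *myTable;  // declared extern here...
void fillTable();                  // host-side wrapper that allocates and fills it
#endif

// fill.cu -- the one and only definition
#include "fill.h"
__device__ float *myTable;         // ...defined exactly once here

void fillTable() {
    float *d;
    cudaMalloc(&d, 256 * sizeof(float));
    // store the device pointer into the __device__ symbol
    cudaMemcpyToSymbol(myTable, &d, sizeof(d));
}

// use.cu -- another translation unit referring to the same symbol
#include "fill.h"
__global__ void readTable(float *out) {
    out[threadIdx.x] = myTable[threadIdx.x];
}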
I have a rather big library with a significant set of APIs that I need to expose. In fact, I'd like to expose the whole thing. There is a lot of namespacing going on, like:
FooLibrary.Bar
FooLibrary.Qux.Rumps
FooLibrary.Qux.Scrooge
..
Basically, what I would like to do is make sure that the user can access that whole namespace. I have had a whole bunch of trouble with this, and I'm totally new to Closure, so I thought I'd ask for some input.
First, I need closurebuilder.py to send the full list of files to the Closure Compiler. This doesn't seem supported: --namespace Foo does not include Foo.Bar, and --input only allows a single file, not a directory. Nor can I simply send my list of files to the compiler directly, because my code also requires things like goog.asserts, so I do need the dependency resolver.
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()s everything. Surely that can't be right?
This is my main issue.
However, the Closure Compiler, with ADVANCED_OPTIMIZATIONS on, will later optimize all these names away. I can fix that by adding @export all over the place, which I am not happy about, but it should work. I suppose it would also be valid to use an extern here. Or I could simply disable advanced optimizations.
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
Finally, for working in source mode, I need to do goog.require() for every namespace I am using. This is merely an inconvenience, though I mention it because it is sort of related to my trouble above. I would prefer to be able to do:
goog.requireRecursively('FooLibrary')
in order to pull all the child namespaces as well; thus, recreating with a single command the environment that I have when I am using the compiled version of my library.
I feel like I am possibly misunderstanding some things, or how Closure is supposed to be used. I'd be interested in looking at other Closure-based libraries to see how they solve this.
You are discovering that Closure-compiler is built more for the end consumer and not as much for the library author.
If you are exporting basically everything, then you would be better off with SIMPLE_OPTIMIZATIONS. I would still highly encourage you to maintain compatibility of your library with ADVANCED_OPTIMIZATIONS so that users can compile the library source with their project.
First, I need closurebuilder.py to send the full list of files to the Closure Compiler. ...
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()s everything. Surely that can't be right?
You would need to specify a --root flag for your source folder and specify the namespaces of the leaf nodes of your file dependency tree. You may have better luck with the now-deprecated CalcDeps.py script; I still use it for some projects.
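For example, an invocation along these lines (paths and output file name are illustrative; the namespaces follow the question):

python closurebuilder.py \
  --root=closure-library/ \
  --root=src/ \
  --namespace=FooLibrary.Bar \
  --namespace=FooLibrary.Qux.Rumps \
  --namespace=FooLibrary.Qux.Scrooge \
  --output_mode=compiled \
  --compiler_jar=compiler.jar \
  > foolibrary.min.js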
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
You can't do that because it only makes sense based on the final usage. You as the library writer wish to export everything, but perhaps a consumer of your library wishes to include the source (uncompiled) version and have more dead code elimination. Library authors are stuck in a kind of middle ground between SIMPLE and ADVANCED optimization levels.
What I have done for this case is maintain a separate exports file for my namespace that exports everything. When compiling a standalone version of my library for distribution, the exports file is included in the compilation. However I can still include the library source (without the exports) into a project and get full dead code elimination. The work/payoff balance of this though must be weighed against just using SIMPLE_OPTIMIZATIONS for the standalone library.
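A sketch of such an exports file (the FooLibrary names follow the question; the frob method is invented for illustration; goog.exportSymbol and goog.exportProperty are the standard Closure Library helpers for this):

// foolibrary_exports.js -- compiled into the standalone distribution
// only, so consumers who include the source still get full dead code
// elimination.
goog.require('FooLibrary.Bar');
goog.require('FooLibrary.Qux.Rumps');

goog.exportSymbol('FooLibrary.Bar', FooLibrary.Bar);
goog.exportSymbol('FooLibrary.Qux.Rumps', FooLibrary.Qux.Rumps);
// one export per public entry point; prototype methods need
// goog.exportProperty:
goog.exportProperty(FooLibrary.Bar.prototype, 'frob',
                    FooLibrary.Bar.prototype.frob);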
My GeolocationMarker library has an example of this strategy.
I have the following requirement: to find out whether my binary has changed or not.
My source code is unchanged, yet when I recompile the binary, I notice that it has changed: not in size, but in contents.
On debugging a little, I found there is something called "Link Time" inside the binary file: the actual timestamp of when the binary was linked. Since each compile gives a different timestamp, my binary contents are always different, even though they should be the same.
Can somebody suggest a way of finding out whether the binary has actually changed due to a change in the source code, and not anything else?
Thanks
Unlike on Windows (where every .obj file has a compile timestamp in its file header), UNIX object files, and in particular ELF files, do not encode any kind of timestamp.
However, if your source uses __TIME__ and __DATE__ macros, then the object file produced by compilation will obviously change. Also, all kinds of information, including compilation timestamp could be recorded as part of the debug info, if you are building -g binaries.
Finally, it's possible that the linker you are using does record the link timestamp (as a vendor extension).
Your first task should be to understand where the differences from one build to the next come from.
If from __DATE__ and __TIME__, eliminate them from your source.
If from debug info, compare the binaries after passing them through strip -g.
If from a vendor linker extension, see if there is a flag to disable such timestamps. If there isn't one, you'll have to write a tool that compares only the parts you are interested in. E.g. you could use readelf -x .text a.out, etc., to compare only the .text section (you'll also want to compare .data, .rodata, and likely many others), as in the sketch below.
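For instance, a small shell sketch along those lines (the section list and the old/ and new/ paths are illustrative; extend them to whatever matters for your binary):

#!/bin/sh
# Compare two builds section by section, ignoring link timestamps
# and other metadata outside the dumped sections.
for sec in .text .data .rodata; do
    readelf -x "$sec" old/a.out > old.dump
    readelf -x "$sec" new/a.out > new.dump
    cmp -s old.dump new.dump || echo "section $sec differs"
done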