Ada dependency graph - dictionary

I need to create a dependency graph for a software suite that I am working on. In the past the company I work for has always done this manually, but I am guessing that there is a tool somewhere that will do what we need.
The software I am working with is written in Ada 95 and has about 200 code modules/files, organized into about 40 packages. I need to create a map that traces every output, individually, back to each input or constant that affects it. Does anybody know of a tool that would accomplish this, or even just partially accomplish it?

AdaCore's GPS (available from http://libre.adacore.com) comes with a command-line tool named gnatinspect. You can use this tool to load all the cross-reference information generated by the compiler (assuming you are compiling with GNAT). This creates an sqlite database (gnatinspect.db) which contains all the information you need. gnatinspect itself provides a number of pre-made queries that might get you at least partially to where you want to go.
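A rough sketch of that workflow, assuming a GNAT project file named my_project.gpr (check gnatinspect --help for the exact switch names):

    # Build first so the compiler emits the .ali cross-reference files,
    # then let gnatinspect load them into gnatinspect.db.
    gprbuild -P my_project.gpr
    gnatinspect -P my_project.gpr --db=gnatinspect.db

    # Run one of the built-in queries, e.g. all references to an entity.
    gnatinspect -P my_project.gpr --command="refs Some_Constant:pack.ads:12"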
You could also look at ASIS as a way to run this kind of query directly on the code. I am told it is not so easy to use the first time around, though.
There is also an older tool provided with GNAT (gnatxref) which does something similar, although it is being superseded by gnatinspect.
Finally, you could look at gnat2xml as an alternative to ASIS if you are more comfortable parsing XML files.

Related

Data version control (DVC): editing files in place results in a cyclic dependency

We have a large dataset and several preprocessing scripts.
These scripts alter the data in place.
When I try to register a step with dvc run, it complains about cyclic dependencies (the input is the same as the output).
I would assume this is a very common use case.
What is the best practice here?
I tried googling around, but I did not see any solution to this (besides creating another folder for the output).
Usually, we split input and output into separate files rather than modifying everything in place, not only for the separation-of-concerns principle but also to make things fit with tools like DVC.
Hope you can try this way instead.
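For example, once the stage reads from one file and writes to another, the dependency and the output are distinct and DVC no longer sees a cycle (stage, script, and file names below are placeholders):

    dvc run -n preprocess \
            -d scripts/preprocess.py \
            -d data/raw.csv \
            -o data/clean.csv \
            python scripts/preprocess.py data/raw.csv data/clean.csv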

How can I create a copy of a notebook with all (or most) code cells emptied?

I want to create Jupyter notebooks for teaching, which shall be delivered in two versions:
A full “textbook” with explanations in markdown cells and example code in code cells.
As above, but with most code cells being empty such that the students have to type all the code themselves.
Obviously, I do not want to do this manually in Jupyter, so I need a way to automatically clear those code cells (exceptions are rare and can be marked somehow).
Given that notebooks are stored as JSON files, I could write a simple script to directly modify them.
However, this feels like I am re-inventing the wheel instead of using some existing, dedicated method – which is what I am seeking in this question.
I briefly considered using NBGrader. However, while I am quite confident that it could solve my problem, it seems overkill for this purpose and would require extra effort to make things work.
Have you checked out Chris Holdgraf's nbclean?
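If you do end up writing the simple script yourself instead, nbformat keeps it short. A minimal sketch (the keep-code tag used to mark the rare exceptions is just a made-up convention):

    import nbformat

    # Load the full "textbook" notebook.
    nb = nbformat.read("textbook.ipynb", as_version=4)

    # Blank out every code cell unless it is explicitly tagged to be kept.
    for cell in nb.cells:
        if cell.cell_type == "code" and "keep-code" not in cell.metadata.get("tags", []):
            cell.source = ""
            cell.outputs = []
            cell.execution_count = None

    # Write the student version alongside the original.
    nbformat.write(nb, "student.ipynb")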

Control over OpenAPI 3.0 package generation for jersey-jaxrs

I'm using openapi-generator for jersey-jaxrs (OpenAPI 3.0). I'd like to control the package where my code is being generated.
I'm setting the api-package, model-package, package-name, and invoker-package options, all to a xxx.yyy.zzz value.
My problem is that most of the code is generated under gen.xxx.yyy.zzz, and it's not discoverable by the part of the code generated under xxx.yyy.zzz. Implicitly, gen is prepended to the package name. I understand this is convenient in many cases, but not mine. Is there any generator option to avoid this?
I've learned a bit about the Mustache templates and they seem like a possible solution, but maybe a bit too much for my requirements.
Ultimately, I can move the code in gen to the other (non-gen) package manually, and it works, but this is quite inconvenient.
Finally, I found out that you can mark folders in IntelliJ IDEA as "generated sources root", which makes it discoverable to the rest of the project's code.
This doesn't answer my question as asked, but it does solve the problem that gave rise to it.
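For reference, the package options from the question map onto the CLI roughly like this (the generator id is spelled jaxrs-jersey in current openapi-generator releases; the spec file name is a placeholder):

    openapi-generator generate \
        -i openapi.yaml \
        -g jaxrs-jersey \
        --api-package xxx.yyy.zzz.api \
        --model-package xxx.yyy.zzz.model \
        --invoker-package xxx.yyy.zzz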

Closure: --namespace Foo does not include Foo.Bar, and related issues

I have a rather big library with a significant set of APIs that I need to expose. In fact, I'd like to expose the whole thing. There is a lot of namespacing going on, like:
FooLibrary.Bar
FooLibrary.Qux.Rumps
FooLibrary.Qux.Scrooge
..
Basically, what I would like to do is make sure that the user can access that whole namespace. I have had a whole bunch of trouble with this, and I'm totally new to Closure, so I thought I'd ask for some input.
First, I need closurebuilder.py to send the full list of files to the Closure Compiler. This doesn't seem supported: --namespace Foo does not include Foo.Bar, and --input only allows a single file, not a directory. Nor can I simply send my list of files to the compiler directly, because my code also requires things like goog.asserts, so I do need the resolver.
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()s everything. Surely that can't be right?
This is my main issue.
However, later the Closure Compiler, with ADVANCED_OPTIMIZATIONS on, will optimize all these names away. I can fix that by adding @export all over the place, which I am not happy about but should work. I suppose it would also be valid to use an extern here. Or I could simply disable advanced optimizations.
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
Finally, for working in source mode, I need to call goog.require() for every namespace I am using. This is merely an inconvenience, though I am mentioning it because it is somewhat related to my trouble above. I would prefer to be able to do:
goog.requireRecursively('FooLibrary')
in order to pull in all the child namespaces as well, thus recreating with a single command the environment that I have when using the compiled version of my library.
I feel like I am possibly misunderstanding some things, or how Closure is supposed to be used. I'd be interested in looking at other Closure-based libraries to see how they solve this.
You are discovering that Closure-compiler is built more for the end consumer and not as much for the library author.
If you are exporting basically everything, then you would be better off with SIMPLE_OPTIMIZATIONS. I would still highly encourage you to maintain compatibility of your library with ADVANCED_OPTIMIZATIONS so that users can compile the library source with their project.
First, I need closurebuilder.py to send the full list of files to the Closure Compiler. ...
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()s everything. Surely that can't be right?
You would need to specify a --root pointing at your source folder and specify the namespaces of the leaf nodes of your file dependency tree. You may have better luck with the now-deprecated CalcDeps.py script. I still use it for some projects.
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
You can't do that because it only makes sense based on the final usage. You as the library writer wish to export everything, but perhaps a consumer of your library wishes to include the source (uncompiled) version and have more dead code elimination. Library authors are stuck in a kind of middle ground between SIMPLE and ADVANCED optimization levels.
What I have done for this case is maintain a separate exports file for my namespace that exports everything. When compiling a standalone version of my library for distribution, the exports file is included in the compilation. However, I can still include the library source (without the exports) in a project and get full dead-code elimination. The work/payoff balance of this, though, must be weighed against just using SIMPLE_OPTIMIZATIONS for the standalone library.
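Such an exports file is plain JavaScript built from goog.exportSymbol and goog.exportProperty calls. A minimal sketch using the namespaces from the question (the frob method is hypothetical):

    goog.require('FooLibrary.Bar');
    goog.require('FooLibrary.Qux.Rumps');

    // Make the public constructors reachable under their original names
    // even after ADVANCED_OPTIMIZATIONS renaming.
    goog.exportSymbol('FooLibrary.Bar', FooLibrary.Bar);
    goog.exportSymbol('FooLibrary.Qux.Rumps', FooLibrary.Qux.Rumps);

    // Prototype members must be exported individually.
    goog.exportProperty(FooLibrary.Bar.prototype, 'frob',
                        FooLibrary.Bar.prototype.frob);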
My GeolocationMarker library has an example of this strategy.

Finding duplicate source code

I'm analyzing some legacy code. It is about 80,000 lines of old PL/SQL. At first glance there is quite some duplication in the source, which needs to be removed. Instead of doing diffs manually and looking at each file, there must be some tool or command-line incantation out there to detect duplicated lines of source code.
My goal is to make an educated guess about the minimal size of a rewrite and about how much actual knowledge is captured in this program. I wrote a basic static code analyzer to count the control statements (IF, ELSE, FOR, etc.) and functions in each file.
But duplicated code still needs to be removed from my statistics.
Have you looked at Simian (Similarity Analyser)? (Just checked, and it's no longer free, but it is available for a 15-day evaluation period.)
Simian (Similarity Analyser) identifies duplication in Java, C#, C, C++, COBOL, Ruby, JSP, ASP, HTML, XML, Visual Basic, Groovy source code and even plain text files. In fact, Simian can be used on any human-readable files such as ini files, deployment descriptors, you name it.
I have used it in practice and it does work well.
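It ships as a single jar and takes file patterns on the command line; an invocation might look like this (the jar version is whatever you download, and -threshold sets the minimum run of duplicated lines to report):

    java -jar simian-2.5.10.jar -threshold=6 "plsql_src/**/*.sql"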
Sonar has duplication detection and claims to support PL/SQL, though I've never used it for that.
You would need to beg/borrow/steal/write a PL/SQL parser and compare the resulting abstract syntax trees. With the size of the code base you have, that might be worthwhile. There would be other uses for the parser once you're done.
How about this:
http://sourceforge.net/projects/sddforeclipse/
It is open source, and is said to be used by commercial software. It is an Eclipse plugin, by the way.
