Get a list of compiled modules within a ModelSim library

I've been scouring the ModelSim Command Reference and have been unable to find this. I'm trying to find a command or a file that will give me the list of already-compiled modules within a given library. I looked at Dave's answer from #85431, and while it sounds reasonable, my libraries do not contain the _primary.dat files he refers to. When everything is compiled, my directory looks like this:
work
| _info
| _lib.qdb
| _lib1_1.qdb
| _lib1_1.qpg
| _lib1_1.qtl
| _vmake
From what I can make of these files (which isn't much), my best target is probably the _info file, since I could run a regex search over it to pull out the module names.
However, I'm hoping there is a command that will list these for me in some way. I suspect the regex approach could produce false positives, reporting a module as belonging to the library when it was actually just a dependency of a higher-level module in the hierarchy. I can't say this for certain, as I'm not sure I'm reading the file correctly.
If there's a command out there that someone knows about, or if I'm just missing something here, I would greatly appreciate any clarification.
Edit: Also, to be clear, I'm trying to do this just by examining the library without actually loading a simulation/design.

I found it! I feel dumb too, but here she is in all her glory:
vdir
This command lists the contents of a design library and checks the compatibility of a vendor library. If vdir cannot read a vendor-supplied library, the library may not be compatible with ModelSim.
This command provides additional information with the -help switch.
Syntax
vdir [-l | [-prop <prop>]] [-r] [-all | [-lib <library_name>]] [<design_unit>]
[-modelsimini <path/modelsim.ini>]
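For example, at the ModelSim prompt (or from a shell), something along these lines works; the exact report format varies between versions, but each compiled design unit is listed on its own line together with its kind, and "my_module" below is just a placeholder name:
# Contents of the default work library
vdir
# Contents of an explicitly named library
vdir -lib work
# Details for a single design unit
vdir -lib work my_module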

Related

Ada `Gprbuild` Shorter File Names, Organized into Directories

Over the past few weeks I have been getting into Ada for various reasons, though my personal reasons for using it are out of scope for this question.
The other day I started using the gprbuild command that comes with the Windows version of GNAT, in order to manage my applications on a project basis; that is, to define certain attributes per project rather than setting up the compile phase manually myself.
Currently my file names follow what seems to be the default gprbuild/GNAT naming convention, although I could be wrong: periods in the package hierarchy become a - in the file name, and underscores stay as _. So a package named App.Test.File_Utils would have the file names app-test-file_utils.ads and .adb accordingly.
In the .gpr project file I have specified:
for Source_Dirs use ("app/src/**");
so that I am allowed to use multiple directories for storing my files, rather than needing to have them all in the same directory.
The Problem
The problem, however, is that file names tend to get very long. Since I am already placing files in directories based on the package they contain, I was wondering whether there is a way to make the compiler understand that the package name can be derived from the file's directory name.
That is, rather than naming the file for App.Test.File_Utils app-test-file_utils, I would like it to reside under the app/test directory with the name file_utils.
Is this doable, or will I be stuck with the horror of eventually having to name my files along the lines of app-test-some-then-one-has-more_files-another_package-knew-test-more-important_package.ads? That is assuming I have not missed something about how an Ada application should actually be structured.
What I have tried
I tried looking for answers in the Naming package configuration for .gpr files in the documentation, but to no avail. I have also been browsing the web for information, but decided it might be better to ask on Stack Overflow, so that other people who might struggle with this in the future (granted it is a problem in the first place) can also find help.
Any pointers in the right direction would be very helpful!
In the top-secret GNAT documentation there is a description of how to use non-default file names. It's a great deal of effort. You will probably give up, use the default names, and put them all in a single directory.
You can also simplify much of the effort by using GPS and letting it build your project file as you add files to your source directories.
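For reference, a minimal sketch of what those per-unit overrides look like in a .gpr file, using the unit name from the question (the project name App and the directory layout are only illustrative, and every renamed unit has to be listed explicitly, which is where the effort comes in):
project App is
   for Source_Dirs use ("app/src/**");

   package Naming is
      --  Map one unit to non-default file names; repeat for every such unit.
      for Spec ("App.Test.File_Utils") use "file_utils.ads";
      for Body ("App.Test.File_Utils") use "file_utils.adb";
   end Naming;
end App;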

Prevent Atom Editor from auto creating files

At the moment I'm experimenting a little with Atom for writing API documentation in RAML. Everything works fine except one damn thing:
Every time I type a file path (e.g. !include schemas/file.schema), Atom auto-creates the file if I'm not quick enough with my typing. So in some cases I end up with a whole bunch of zombie files in my schema folder. That's kind of annoying.
My setup is standard Atom on a MacBook with the api-workbench plugin, which includes a linter as well. I've already looked at all the settings concerning auto-completion and found nothing there. Google doesn't turn up any hints either. Any tips?
Best regards,
Chris
It looks like this is a defect in the api-workbench package:
API Workbench creates new schemas while I type their paths. In the example below, I can see two or three files created while I type the full name:
E.g:
schemas:
- myschema: !include schemas/myschema.json
Will create following files:
schemas/my
schemas/mysche
schemas/myschema
schemas/myschema.json - this file already exists; I created it before. All the other files are redundant and I have to delete them.
The bug is not reproduced with examples, which I can also include in my document. I'm seeing these issues while editing RAML 0.8 files.
If you want to help the package maintainers fix the defect, may I suggest you put together a minimal but complete example that reproduces the issue? This will make it easier for them to identify and resolve it.

cmake: qt resources inside a module

I have this tree structure:
repository/modules/module1
repository/modules/module2
repository/modules/module..
repository/apps/application1
repository/apps/application2
repository/apps/application..
where the applications use some of the modules.
Now I'd like to put some resources inside a module (like colorful icons inside a widget used by several applications), but something goes wrong.
Inside the module's CMakeLists.txt, if I use only:
set(${MODULE_NAME}_RCS
    colors.qrc
)
...
qt4_add_resources (${MODULE_NAME}_RHEADERS ${${MODULE_NAME}_RCS})
no qrc_colors.cxx is created anywhere. So I've tried to add:
ADD_EXECUTABLE (${MODULE_NAME}
    ${${MODULE_NAME}_RHEADERS}
)
but I get this weird error:
CMake Error at repo/modules/ColorModule/CMakeLists.txt:51 (ADD_EXECUTABLE):
add_executable cannot create target "ColorModule" because another
target with the same name already exists. The existing target is a static
library created in source directory
"repo/modules/ColorModule". See documentation for
policy CMP0002 for more details.
(I've changed the paths in the error, of course.)
So I don't know what to think, because I'm new to both CMake and Qt.
What can I try?
EDIT:
If I add ${MODULE_NAME}_RHEADERS and ${MODULE_NAME}_RCS to the add_library command, qrc_colors.cxx is created, BUT it ends up in repository/modules/module1/built and is not copied into the application's build directory...
There are at least two errors in your code.
1) It is usually not necessary to use ${MODULE_NAME} everywhere like that; often plain MODULE_NAME will do. The difference is the raw string versus the variable's value, and it is usually recommended to avoid double variable dereferences (such as ${${MODULE_NAME}_RCS}) if possible.
2) More importantly, you seem to be using ${MODULE_NAME}, which is "ColorModule" according to the error output, as the name of more than one target; it already names the static library, so add_executable cannot create a second target with it. Each binary needs its own unique target name.
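For the resource problem itself, here is a minimal sketch of the module's CMakeLists.txt under the assumptions in the question (Qt4 macros, a static library named by MODULE_NAME; ${MODULE_NAME}_SRCS stands for whatever source list the module already has). The generated qrc_colors.cxx is added to the existing library target instead of being handed to a second target with the same name:
set(${MODULE_NAME}_RCS
    colors.qrc
)

# Generates qrc_colors.cxx in the module's build directory
qt4_add_resources(${MODULE_NAME}_RHEADERS ${${MODULE_NAME}_RCS})

# Add the generated source to the existing library target
# rather than calling add_executable with the same name
add_library(${MODULE_NAME} STATIC
    ${${MODULE_NAME}_SRCS}
    ${${MODULE_NAME}_RHEADERS}
)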
Also, the focus on the resource file is a bit of a red herring here. There are several other issues with your project.
You have CMake files named CmakeLists.txt instead of CMakeLists.txt, which inherently causes issues on case-sensitive systems such as my Linux box.
You use Findfoo.cmake, and find_package(foo) for that matter, rather than the usual FindFoo.cmake convention alongside find_package(Foo).
Your FindFoo.cmake is quite odd and should probably be rewritten.
Most importantly, you should use config files rather than find modules; a rough sketch of that approach is at the end of this answer.
Documentation and examples can be found at these places:
http://www.cmake.org/Wiki/CMake/Tutorials#CMake_Packages
https://projects.kde.org/projects/kde/kdeexamples/repository/revisions/master/show/buildsystem
When you would like to use a find module, you need to have it at hand already. It tells you what to look for, where things are, or whether they are missing. It is not something that you should have to write yourself; you just reuse existing find modules for projects that do not use CMake, which is why find modules are shipped separately.
Writing a find module for your own project is a bit like putting the treasure map right next to the treasure. Do you see the irony? :) Once you have found the map, you already have the treasure, i.e. you would not need to look for it anymore.
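As a rough illustration of the config-file approach (a sketch only, with placeholder names based on your module; not a drop-in fix for your project): the module installs an export set that generates a config file, and each application then consumes it with find_package in CONFIG mode.
# In the module's CMakeLists.txt
install(TARGETS ColorModule EXPORT ColorModuleTargets
        ARCHIVE DESTINATION lib)
install(EXPORT ColorModuleTargets
        FILE ColorModuleConfig.cmake
        DESTINATION lib/cmake/ColorModule)

# In an application's CMakeLists.txt (application1 is a placeholder target)
find_package(ColorModule CONFIG REQUIRED)
target_link_libraries(application1 ColorModule)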

Compile Finite State Machine to UML(-like) Diagram

Every Python developer knows tools like Sphinx. You write some text in a markup language, run make in the shell, and let the compilers do their job. In the end you get beautiful HTML or a PDF.
I am looking for something like that, just for finite state machines: e.g. I put SCXML into a file (with a GUI or manually in Vim, as I prefer), start a compiler, and out comes a picture file that I can use however I please and that looks good even if I don't know what I am doing. Example:
$ vim my-fsm.scxml
$ scxml2svg my-fsm.scxml
writing file...
finished writing my-fsm.svg
$
The closest I have got so far is various Eclipse plugins (years ago; I dislike huge IDEs), draw.io, or what I am using now: UMLet. Even UMLet has problems, though. For example, it doesn't support the workflow I am used to (write text files, start a compiler, see a beautiful result), and the results are often suboptimal because the layout engine is actually quite simple. But everywhere I look for a more useful alternative (the Python wiki, other SO questions, tools) I still don't find a simple compiler.
I would be really happy if anybody knew of such a compiler. Failing that, a FOSS GUI editor with PNG/SVG export would also be okay.
GraphViz has a plain-text file format (DOT) which can be written by hand and compiled to different picture formats.
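For example, a small state machine written by hand in the DOT language (the states and events here are made up) and compiled to SVG with a single command:
// my-fsm.dot -- compile with: dot -Tsvg my-fsm.dot -o my-fsm.svg
digraph fsm {
    rankdir=LR;
    node [shape=circle];
    Idle -> Running [label="start"];
    Running -> Idle [label="stop"];
}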
I wrote some tools to do this: http://goo.gl/V97ft

Closure: --namespace Foo does not include Foo.Bar, and related issues

I have a rather big library with a significant set of APIs that I need to expose. In fact, I'd like to expose the whole thing. There is a lot of namespacing going on, like:
FooLibrary.Bar
FooLibrary.Qux.Rumps
FooLibrary.Qux.Scrooge
..
Basically, what I would like to do is make sure that the user can access that whole namespace. I have had a whole bunch of trouble with this, and I'm totally new to Closure, so I thought I'd ask for some input.
First, I need closurebuilder.py to send the full list of files to the Closure Compiler. This doesn't seem to be supported: --namespace Foo does not include Foo.Bar, and --input only allows a single file, not a directory. Nor can I simply send my list of files to the compiler directly, because my code also requires things like goog.asserts, so I do need the dependency resolver.
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()s everything. Surely that can't be right?
This is my main issue.
However, the Closure Compiler, with ADVANCED_OPTIMIZATIONS on, will later optimize all these names away. I can fix that by adding @export annotations all over the place, which I am not happy about, but it should work. I suppose it would also be valid to use an extern here. Or I could simply disable advanced optimizations.
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
Finally, for working in source mode, I need to call goog.require() for every namespace I am using. This is merely an inconvenience, though I mention it because it is somewhat related to my trouble above. I would prefer to be able to do:
goog.requireRecursively('FooLibrary')
in order to pull all the child namespaces as well; thus, recreating with a single command the environment that I have when I am using the compiled version of my library.
I feel like I am possibly misunderstanding some things, or how Closure is supposed to be used. I'd be interested in looking at other Closure-based libraries to see how they solve this.
You are discovering that Closure-compiler is built more for the end consumer and not as much for the library author.
If you are exporting basically everything, then you would be better off with SIMPLE_OPTIMIZATIONS. I would still highly encourage you to maintain compatibility of your library with ADVANCED_OPTIMIZATIONS so that users can compile the library source with their project.
First, I need closurebuilder.py to send the full list of files to the closure compiler. ...
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()s everything. Surely that can't be right?
You would need to specify a --root for your source folder and specify the namespaces of the leaf nodes of your file dependency tree. You may have better luck with the now-deprecated CalcDeps.py script; I still use it for some projects.
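As a sketch only (the paths and the leaf namespaces are placeholders for your own), a closurebuilder.py invocation along these lines collects the transitive dependencies and feeds them to the compiler:
python closure/bin/build/closurebuilder.py \
    --root=closure-library/ \
    --root=foolibrary/src/ \
    --namespace="FooLibrary.Qux.Rumps" \
    --namespace="FooLibrary.Qux.Scrooge" \
    --output_mode=compiled \
    --compiler_jar=compiler.jar \
    > foolibrary.min.js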
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
You can't do that because it only makes sense based on the final usage. You as the library writer wish to export everything, but perhaps a consumer of your library wishes to include the source (uncompiled) version and have more dead code elimination. Library authors are stuck in a kind of middle ground between SIMPLE and ADVANCED optimization levels.
What I have done for this case is maintain a separate exports file for my namespace that exports everything. When compiling a standalone version of my library for distribution, the exports file is included in the compilation. However I can still include the library source (without the exports) into a project and get full dead code elimination. The work/payoff balance of this though must be weighed against just using SIMPLE_OPTIMIZATIONS for the standalone library.
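A sketch of what such an exports file can look like, using the namespaces from the question (the frob method is made up; goog.exportSymbol and goog.exportProperty are what keep the public names stable under ADVANCED_OPTIMIZATIONS):
// foolibrary_exports.js -- compiled in only for the standalone build
goog.require('FooLibrary.Qux.Rumps');

goog.exportSymbol('FooLibrary.Qux.Rumps', FooLibrary.Qux.Rumps);
goog.exportProperty(FooLibrary.Qux.Rumps.prototype, 'frob',
    FooLibrary.Qux.Rumps.prototype.frob);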
My GeolocationMarker library has an example of this strategy.
