Closure: --namespace Foo does not include Foo.Bar, and related issues - google-closure-compiler

I have a rather big library with a significant set of APIs that I need to expose. In fact, I'd like to expose the whole thing. There is a lot of namespacing going on, like:
FooLibrary.Bar
FooLibrary.Qux.Rumps
FooLibrary.Qux.Scrooge
..
Basically, what I would like to do is make sure that the user can access that whole namespace. I have had a whole bunch of trouble with this, and I'm totally new to closure, so I thought I'd ask for some input.
First, I need closurebuilder.py to send the full list of files to the closure compiler. This doesn't seem supported: --namespace Foo does not include Foo.Bar, and --input only allows a single file, not a directory. Nor can I simply send my list of files to the closure compiler directly, because my code also requires things like "goog.asserts", so I do need the dependency resolver.
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()s everything. Surely that can't be right?
This is my main issue.
However, the closure compiler, with ADVANCED_OPTIMIZATIONS on, will then optimize all these names away. I can fix that by adding "@export" all over the place, which I am not happy about, but it should work. I suppose it would also be valid to use an extern here. Or I could simply disable advanced optimizations.
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
Finally, for working in source mode, I need to do goog.require() for every namespace I am using. This is merely an inconvenience, though I mention it because it is somewhat related to my trouble above. I would prefer to be able to do:
goog.requireRecursively('FooLibrary')
in order to pull all the child namespaces as well; thus, recreating with a single command the environment that I have when I am using the compiled version of my library.
I feel like I am possibly misunderstanding some things, or how Closure is supposed to be used. I'd be interested in looking at other Closure-based libraries to see how they solve this.

You are discovering that Closure-compiler is built more for the end consumer and not as much for the library author.
If you are exporting basically everything, then you would be better off with SIMPLE_OPTIMIZATIONS. I would still highly encourage you to maintain compatibility of your library with ADVANCED_OPTIMIZATIONS so that users can compile the library source with their project.
First, I need closurebuilder.py to send the full list of files to the closure compiler. ...
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()s everything. Surely that can't be right?
You would need to specify a --root for your source folder and specify the namespaces of the leaf nodes of your file dependency tree. You may have better luck with the now deprecated CalcDeps.py script. I still use it for some projects.
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
You can't do that because it only makes sense based on the final usage. You as the library writer wish to export everything, but perhaps a consumer of your library wishes to include the source (uncompiled) version and have more dead code elimination. Library authors are stuck in a kind of middle ground between SIMPLE and ADVANCED optimization levels.
What I have done for this case is maintain a separate exports file for my namespace that exports everything. When compiling a standalone version of my library for distribution, the exports file is included in the compilation. However, I can still include the library source (without the exports) in a project and get full dead code elimination. The work/payoff balance of this, though, must be weighed against just using SIMPLE_OPTIMIZATIONS for the standalone library.
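For illustration, such an exports file might look roughly like this (a hedged sketch; FooLibrary.Bar, FooLibrary.Qux.Rumps and the method name are placeholders taken from or modeled on the question):

goog.provide('FooLibrary.exports');

goog.require('FooLibrary.Bar');
goog.require('FooLibrary.Qux.Rumps');

// Export the constructors and any public methods so that
// ADVANCED_OPTIMIZATIONS keeps their names in the standalone build.
goog.exportSymbol('FooLibrary.Bar', FooLibrary.Bar);
goog.exportProperty(FooLibrary.Bar.prototype, 'doSomething',
    FooLibrary.Bar.prototype.doSomething);
goog.exportSymbol('FooLibrary.Qux.Rumps', FooLibrary.Qux.Rumps);

This file is only passed to the compiler for the standalone distribution build; consumers compiling the source into their own project simply omit it.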
My GeolocationMarker library has an example of this strategy.

Related

Benefits of using pyuic vs uic.loadUi

I am currently working with Python and Qt, which is kind of new for me coming from the C++ version, and I realised that the official documentation says a UI file can either be loaded directly from the .ui file or converted into a Python class in a .py file.
I get the benefit of using .ui: it is dynamically loaded, so there is no need to convert it into a Python file with every change. But what are the benefits of converting it? Do you get any improvement in run time? Is it something else?
Thanks
Well, this question is dangerously close to the "Opinion-based" flag, but it's also a common one and I believe it deserves at least a partial answer.
Conceptually, the pyuic approach and the uic.loadUi() method are the same and behave in very similar ways, but with some slight differences.
To better explain all this, I'll use the documentation about using Designer as a reference.
pyuic approach, or the "python object" method
This is probably the most popular method, especially amongst beginners. What it does is create a Python object that is used to create the ui and, if used following the "single inheritance" approach, it also behaves as an "interface" to the ui itself, since the ui object it creates has all widgets available as its attributes: if you create a push button, it will be available as ui.pushButton, the first label will be ui.label and so on.
In the first example of the documentation linked above, that ui object is stand-alone; that's a very basic example (I believe it was given just to demonstrate its usage, since it wouldn't provide a lot of interaction besides the connections created within Designer) and is not very useful, but it's very similar to the single inheritance method: the button would be self.ui.pushButton, etc.
IF the "multiple inheritance" method is used, the ui object will coincide with the widget subclass. In that case, the button will be self.pushButton, the label self.label, etc.
This is very important from the Python point of view, because it means that those attribute names will overwrite any other instance attribute that uses the same name: if you have a function named "saveFile" and you name the button "saveFile", you won't have any [direct] access to that instance method any more once setupUi has returned. In this case, using the single inheritance method might be helpful - but, in reality, you could just be more careful about function and object names.
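To make the two variants concrete, here is a rough sketch (assuming PyQt5 and a ui_mainwindow.py module generated by pyuic that defines Ui_MainWindow with a pushButton; all names are placeholders):

from PyQt5 import QtWidgets
from ui_mainwindow import Ui_MainWindow  # module generated by pyuic

# single inheritance: widgets are reached through the separate ui object
class SingleWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        self.ui = Ui_MainWindow()
        self.ui.setupUi(self)
        self.ui.pushButton.clicked.connect(self.saveFile)

    def saveFile(self):
        print('saving...')

# multiple inheritance: widgets become attributes of the window itself
class MultiWindow(QtWidgets.QMainWindow, Ui_MainWindow):
    def __init__(self):
        super().__init__()
        self.setupUi(self)
        self.pushButton.clicked.connect(self.saveFile)

    def saveFile(self):
        print('saving...')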
Finally, if you don't know what the pyuic-generated file does and what it's for, you might be inclined to use it to create your program. That is wrong for a lot of reasons, but, most importantly, because you will almost certainly realize at some point that you have to edit your ui, and merging the new changes with your modified code is clearly a PITA you don't want to face.
I recently answered a related question, trying to explain what happens when setupUi() is called in much more depth.
Using uic.loadUi
I'd say that this is a more "modular" approach, mostly because it's much more direct: as already pointed out in the question, you don't have to constantly regenerate the ui files each time they're modified.
But, there's a catch.
First of all: obviously the loading, parsing and building of a UI from an XML file is not as fast as creating the ui directly from code (which is exactly what the pyuic file does within setupUi()).
Then, there is at least one relatively small bug about layout contents margins: when using loadUi, the default system/form margins might be completely ignored and set to 0 if not explicitly set. There is a workaround about that, explained in Size of verticalLayout is different in Qt Designer and PyQt program (thanks to eyllanesc).
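A minimal sketch of this approach (assuming PyQt5 and a mainwindow.ui file saved next to the script; the widget and method names are placeholders):

import os
from PyQt5 import QtWidgets, uic

class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        # build the path relative to this file, not the current working directory
        ui_path = os.path.join(os.path.dirname(__file__), 'mainwindow.ui')
        uic.loadUi(ui_path, self)
        # widgets defined in Designer are now attributes of the window
        self.pushButton.clicked.connect(self.saveFile)

    def saveFile(self):
        print('saving...')

Building the path from __file__, as above, also sidesteps the relative-path issue mentioned in the comparison below.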
A comparison
pyuic approach
Pros:
it's faster; in a very simple test with a hundred buttons and a tablewidget with more than 1200 items I measured the following bests:
pyuic loading: 33.2ms
loadUi loading: 51.8ms
this ratio is obviously not linear for a multitude of reasons, but you can get the idea
if used with the single inheritance method, it can prevent accidental instance attribute overwriting, and it also means a more "contained" object structure
using python imports ensures a more coherent project structure, especially in the deployment process (having non-python files is a common source of problems)
the contents of those files are actually instructive, especially for beginners
Cons:
you always must remember to regenerate the python files every time you update a ui; we all know how easy it is to forget an apparently meaningless step like this, especially after hours of coding: I've seen plenty of situations in which people were banging their heads on desks (hopefully both theirs) for hours because of untraceable issues, before realizing that they just forgot to run pyuic or didn't run it on the right files; my own forehead still hurts ;-)
file tracking: you have to keep track of two files for each ui, and you might forget one of them along the way when migrating/forking/etc; if you lose a .ui file, it possibly means that you have to recreate it completely from scratch
n00b alert: beginners are commonly led to think that the generated python file is the one to be used to create their programs, which is obviously wrong; unfortunately, the # WARNING! message is not clear enough (I've been almost begging the head PyQt developer about this); while this is obviously not an actual problem of the approach itself, in practice it ends up being one
some of the contents of a pyuic-generated file are usually unnecessary (most importantly, the object name, which is used only in specific cases), and that's pretty obvious, since it's automatically generated ("you might need that, so better safe than sorry"); also, related to the issue above, people might be led to think that everything pyuic creates is actually needed for a GUI, resulting in unnecessary code that decreases its readability
loadUi method
Pros:
it's direct and immediate: you edit your ui on Designer, you save it (or, at least, you remember to do it...), and when you run your code it's already there; no fuss, no muss, and desks/foreheads are safe(r)
file tracking and deployment: it's just one file per ui, you can put all those ui files in a separate folder, you don't have to do anything else and you don't risk forgetting something along the way
direct access to widgets (but this can be achieved using the multiple inheritance approach also)
Cons:
the layout issue mentioned above
possible instance attribute overwriting and no "ui" object "containment"
slightly slower loading
path and deployment: loading is done using relative paths and system separators, so if you put the ui in a directory different from the py file that loads that .ui you'll have to account for that; also, some package managers tend to compress everything, resulting in access errors unless paths are correctly managed
In my opinion, all things considered, the loadUi method is usually the better choice. It doesn't distract me, it allows better conceptual compartmentalization (which is usually good and also follows a pattern much closer to MVC, conceptually speaking) and I strongly believe it to be far less prone to programmer errors, for a multitude of reasons.
But that's clearly a matter of choice.
We should also always remember that, like every other choice we make, using ui files is an option.
There are people who avoid them completely (as there are people who use them for literally everything), but, like everything else, it all always depends on the context.
A big benefit of using pyuic is that code autocompletion will work.
This can make programming much easier and faster.
Then there's the fact that everything loads faster.
pyuic6-Tool can be used to automate the call of pyuic6 when the application is run and only convert .ui files when they change.
It takes a little longer to set up than just using uic.loadUi, but the autocompletion is well worth it if you use something like PyCharm.

Ada dependency graph

I need to create a dependency graph for a software suite that I am working on. In the past the company I work for has always done this manually, but I am guessing that there is a tool somewhere that will do what we need.
The software I am working with is Ada95, and has about 200 code modules/files, with about 40 packages. I need to create a map that will trace every output, individually, back to each input or constant that will have an impact on the output. Does anybody know of a tool that would accomplish this? Or even just partially accomplish it?
AdaCore's GPS (available from http://libre.adacore.com) comes with a command line tool named gnatinspect. You can use this tool to load all cross-reference information generated by the compiler (assuming you are compiling with GNAT). This creates a sqlite database (gnatinspect.db) which contains all information you need. gnatinspect itself provides a number of pre-made queries that might get you at least partially to where you want to go.
You could also look at ASIS, as a way to do this kind of queries directly on the code. I am told this is not so easy to use the first time around though.
There is also an older tool provided with GNAT (gnatxref) which does something similar, although it is being superseded by gnatinspect.
Finally, you could look at gnat2xml as an alternative to ASIS if you are more comfortable parsing XML files.

what is the fastest way to get all attributes from a class?

I'm working on a tool that is supposed to generate some Java code to speed up part of the development, based on a Swing input dialog... there is no need to go any further into it, so I'll get to my problem...
I need to retrieve all the attributes from a class to check whether it is necessary to add a new one. I tried to use reflection, but things started getting complicated. In order to use reflection I need to compile the class whose attributes I want to get, as it does not work directly from the .java file; a .class is required for it.
The problem is that many of the classes have a lot of dependencies! Due to some design flaws, some classes are highly coupled, so if I want to dynamically compile and load a class A with a class loader, I would have to retrieve and compile all its dependencies, and then retrieve all the possible dependencies of class A's dependency classes!
I made a test running an existing ant file to compile the whole project instead of the above approach, but it takes about 9 minutes to finish! From the final user's perspective, waiting 9 minutes on every run is not acceptable!
Does anyone here know a better solution?
If you want to avoid working with reflection and bytecode, it means that you will have to parse the .java files yourself with a grammar and, well, a parser based on this grammar. It is possible (especially if you do not implement the whole grammar, because many Java features might be useless within the scope of your project), but I reckon this is no easy task.
There is an Apache commons Sandbox package called ClassScan. It is capable of doing the kind of source parsing you appear to require. http://commons.apache.org/sandbox/commons-classscan/. Note that it is in the Sandbox, so not part of the Commons Proper.
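As an illustration of the source-parsing route (not part of either answer above): a third-party parser such as JavaParser can list the fields declared in a .java file without compiling it or loading any of its dependencies. A rough sketch, assuming the JavaParser library is on the classpath and "A.java" stands in for one of your classes:

import com.github.javaparser.StaticJavaParser;
import com.github.javaparser.ast.CompilationUnit;
import com.github.javaparser.ast.body.FieldDeclaration;
import java.io.File;

public class ListFields {
    public static void main(String[] args) throws Exception {
        // parse the source file directly; no .class files or classpath resolution needed
        CompilationUnit cu = StaticJavaParser.parse(new File("A.java"));
        for (FieldDeclaration field : cu.findAll(FieldDeclaration.class)) {
            field.getVariables().forEach(v -> System.out.println(v.getNameAsString()));
        }
    }
}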

cmake: qt resources inside a module

I have this tree structure:
repository/modules/module1
repository/modules/module2
repository/modules/module..
repository/apps/application1
repository/apps/application2
repository/apps/application..
where the applications are using some modules.
Now, I'd like to put some resources inside a module (like some very colorful icons inside a widget used by several applications), but something goes wrong.
Inside the module CMakeLists.txt, if I use only:
set(${MODULE_NAME}_RCS
colors.qrc
)
...
qt4_add_resources (${MODULE_NAME}_RHEADERS ${${MODULE_NAME}_RCS})
no qrc_colors.cxx is created anywhere. So I've tried to add:
ADD_EXECUTABLE (${MODULE_NAME}
${${MODULE_NAME}_RHEADERS}
)
but I get this weird error:
CMake Error at repo/modules/ColorModule/CMakeLists.txt:51 (ADD_EXECUTABLE):
add_executable cannot create target "ColorModule" because another
target with the same name already exists. The existing target is a static
library created in source directory
"repo/modules/ColorModule". See documentation for
policy CMP0002 for more details.
(I've changed the path of the error of course)
So, I don't know what to think, because I'm new to both CMake and Qt.
What can I try?
EDIT:
If I add the ${MODULE_NAME}_RHEADERS and ${MODULE_NAME}_RCS to the add_library command, the qrc_colors.cxx is created, BUT it is in repository/modules/module1/built and not copied into the application build directory...
There are at least two errors in your code.
1) It is usually not necessary to use ${MODULE_NAME} everywhere like that, just "MODULE_NAME". You can see that the difference is the raw string vs. variable. It is usually recommended to avoid double variable value dereference if possible.
2) More importantly, you seem to be using the same ${MODULE_NAME}, which is "ColorModule" according to the error output, for more than one target. You should have individual names for different binaries.
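For illustration, a hedged sketch of how the module's CMakeLists.txt might attach the generated resource sources to the existing library target rather than declaring a second target with the same name (the variable names come from the question; the source file list is a placeholder):

set(${MODULE_NAME}_SRCS
    colorwidget.cpp
)
set(${MODULE_NAME}_RCS
    colors.qrc
)
qt4_add_resources(${MODULE_NAME}_RHEADERS ${${MODULE_NAME}_RCS})

# one target per module: the generated qrc_colors.cxx is compiled into the
# static library itself, so no extra add_executable() is needed
add_library(${MODULE_NAME} STATIC
    ${${MODULE_NAME}_SRCS}
    ${${MODULE_NAME}_RHEADERS}
)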
Also, the resource file focus is a bit of a red herring here. There are several other issues with your project.
You name some cmake files CmakeLists.txt instead of CMakeLists.txt, which inherently causes issues on case-sensitive systems such as my Linux box.
You use Findfoo.cmake, and find_package(foo) for that matter, rather than the usual FindFoo.cmake convention alongside find_package(Foo).
Your FindFoo.cmake is quite odd, and it should probably be rewritten.
Most importantly, you should use config files rather than find modules.
Documentation and examples can be found at these places:
http://www.cmake.org/Wiki/CMake/Tutorials#CMake_Packages
https://projects.kde.org/projects/kde/kdeexamples/repository/revisions/master/show/buildsystem
When you would like to use a find module, you need to have it at hand already. It tells you what to look for, where things are, or whether they are not anywhere they need to be. It is not something that you should write yourself. You should just reuse existing ones for projects that do not use cmake, which is why the find modules are added separately.
It is a bit like putting the treasure map just next to the treasure. Do you understand the irony? :) Once you find the map, you would automatically have the treasure as well. i.e. you would not look for it anymore.

ASDF optional system dependencies

I have a system I wrote that uses lparallel when possible, and otherwise works around it. Problem is, I'm now trying to test it on ECL, which errors upon merely loading lparallel.
Is there a way (other than #-ecl) to specify system dependencies parameterized by implementation type? I've looked at Madeira Port, but it seems to work only for subcomponents, not system dependencies. I wish to keep my .asd file as a simple machine-readable form, without reader conditionals etc.
(Aside: lparallel loads fine on a current ECL; mine was just outdated.)
To my knowledge there is no direct way to do that in ASDF apart from reader conditionals. You could use XCVB instead, or write a defsystem* macro that adds new syntax, or (maybe) hook into the existing defsystem as madeira does.
Without knowing your motivation, I wonder why the simple solution of #-ecl is being avoided. Do you wish to store metadata that links ECL to the nonexistence of lparallel? Note #-lparallel and #+lparallel may be used in lisp files.
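For reference, the reader-conditional version the question hopes to avoid is short; a minimal sketch (the system and component names are placeholders):

;;; my-system.asd
(asdf:defsystem #:my-system
  :depends-on (#-ecl #:lparallel)   ; skip lparallel entirely on ECL
  :components ((:file "main")))

;; inside main.lisp, code paths can then be guarded with #+lparallel / #-lparallel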
By the way lparallel loads fine for me in ECL. Are you running the latest ECL? There is a known bug in ECL that causes the lparallel tests to eventually hang, however this may not affect all platforms.
