If I have several applications that are compiled separately but all have the same library compiled into them, does Closure Compiler produce a set of compilation data that lets you compile the library separately from the individual applications, and then include that data when compiling each application, so that the library part doesn't need to be recompiled every time?
If so, how does that work?
I've noticed that grunt-openui5 provides a task to generate a library-preload.json file, but I can't find any information on how to build a fully functional SAPUI5 library.
I'm thinking of a gulp/grunt build that generates all the necessary library metafiles, compiles the Less sources, and combines multiple .less files into one library.css.
I am trying to achieve what a resourceGenerator in Runtime would do: create a resource that is available on the classpath at runtime, but that is not packaged under the main configuration.
In my specific case, I am trying to create an sbt plugin that facilitates dealing with JNI native libraries. The above-mentioned resource would be a "fat" jar containing a shared library; thus it is not required for compilation but only at runtime.
My goal in the end is to publish the standard jar (in the Compile configuration) and publish the fat jar as an extra artifact (in the Runtime configuration). However, during local testing, I would like the shared libraries to be available on the classpath when simply calling run from sbt.
I tried implementing a resourceGenerator in Runtime, but with no success. An alternative approach I could imagine would be to modify runtime:exportedProducts or to alter runtime:managedClasspath directly; however, I first wanted to know whether there is already a way to include resources only in the runtime configuration.
I have created a program using a number of statically linked libraries. My question is: are these libraries required to be present when running the executable? It seems that the libraries are being accessed, because the program will not run if they are not present and their path is not included in the LIBPATH environment variable. I had the impression that, since they were statically linked, they would not be required at runtime.
No, static linking means they are included in the binary you build (and so they are "loaded" when you compile and link, if you will).
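To illustrate the difference, here's a minimal sketch assuming a GNU-style C toolchain (the file name and commands are only examples, not anything from your setup):

    /* hello.c - a trivial program that pulls sqrt() from the math library */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        printf("%f\n", sqrt(2.0));
        return 0;
    }

    /*
     * Statically linked: the library code is copied into the executable,
     * so the library file itself is not needed at runtime:
     *
     *     cc -static hello.c -lm -o hello
     *
     * Dynamically linked: the executable only records a reference, and
     * the shared library must be found at runtime (e.g. via LIBPATH):
     *
     *     cc hello.c -lm -o hello
     */

If your executable won't run without the library files present, that suggests they are actually being linked dynamically, not statically.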
I have to create a dynamically linked library on z/OS. What are the options to be passed to the compiler?
Also, how do I check whether a library on z/OS is dynamically linked against (dependent on) other libraries?
We have ldd on Linux, which shows this linkage. Do we have an ldd equivalent in z/OS land?
You don't say it directly, but I assume you mean a C/C++ DLL. You can do shared libraries in other languages as well (even assembler), but the steps would be different.
First, you need to decide what you want to export. A lot of the IBM examples use the compiler EXPORTALL directive, but be aware this can lead to very slow executables, depending on your coding style. If you don't do EXPORTALL, you'll need #pragma export for anything (code or data) you want to export. Don't forget you can export data (variables) as well as executable functions...sometimes you'll need this to share data with DLL functions.
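For example, a minimal sketch of what the export pragmas might look like in z/OS XL C (the names here are invented purely for illustration):

    /* taxdll.c - sketch of a DLL source file; add_tax and tax_rate
       are made-up names used only for illustration */

    #pragma export(add_tax)    /* export an executable function      */
    #pragma export(tax_rate)   /* export a data item (variable) too  */

    double tax_rate = 0.07;    /* exported data, shared with callers */

    double add_tax(double amount)
    {
        return amount * (1.0 + tax_rate);
    }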
Then, you need to set your compile options on both client (caller) and DLL to use the DLL linkage...this is the -Wc,DLL compile option and when enabled, it generates extra logic in your program to load and manage the DLL. It's a good idea to also include #pragma csect for your exported functions if you think you'll ever have the need to update the DLL without replacing it entirely.
When you link your DLL, be sure to specify the -Wl,DLL option (there are lots of ways...this part is different if you link in batch - I'm assuming you're building in a make file of some sort). The link will generate the actual DLL, as well as a "side deck" containing "IMPORT" statements for all of your exported functions. You'll need these to link any of the client-side programs that you expect to call the DLL. For example, if your imports are in a file called AAA.x, c89 -Wc,DLL myapp.c AAA.x would compile the calling code, with awareness that functions in AAA.x are off in some sort of DLL.
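As a sketch, the client side might look like this (continuing the invented add_tax example, and assuming the side deck was written to AAA.x):

    /* myapp.c - caller sketch; add_tax lives in the DLL and is
       resolved through the side deck at link time, e.g.:
           c89 -Wc,DLL myapp.c AAA.x
    */
    #include <stdio.h>

    extern double add_tax(double amount);  /* definition is in the DLL */

    int main(void)
    {
        printf("total: %f\n", add_tax(100.0));
        return 0;
    }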
To your point about DLLs calling other DLLs, don't forget that a DLL can both "serve" and "consume" functions...by including the side deck for functions in other DLLs, you can have a DLL that provides some functions while calling other DLLs to access others.
The actual DLL itself can be in several places depending on the nature of your app. If you're UNIX Services friendly, it's just an executable in LIBPATH. It can also be STEPLIB, LNKLST, LPA and so forth.
If you need to, you can access your DLLs explicitly at runtime using dlopen(), dlsym() and so forth. Generally, this lets you control exactly which DLL you're using (sometimes handy if the user can provide one himself), and it gives you what amounts to function pointers that are resolved within the DLL.
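A minimal sketch of the explicit approach (again with the invented names, and error handling trimmed to the essentials):

    /* dlclient.c - explicit runtime loading sketch; "mydll" and
       "add_tax" are assumed names, not from any real system */
    #include <stdio.h>
    #include <dlfcn.h>

    int main(void)
    {
        /* pick the DLL yourself at runtime */
        void *handle = dlopen("mydll", RTLD_NOW);
        if (handle == NULL) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        /* dlsym() hands back what amounts to a function pointer
           resolved within the DLL */
        double (*add_tax)(double) =
            (double (*)(double))dlsym(handle, "add_tax");
        if (add_tax == NULL) {
            fprintf(stderr, "dlsym failed: %s\n", dlerror());
            dlclose(handle);
            return 1;
        }

        printf("total: %f\n", add_tax(100.0));
        dlclose(handle);
        return 0;
    }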
There are some other basic things to consider when linking, such as ensuring that your code is reentrant. Most of these are spelled out in the IBM documentation, and if you build with things like "c89" (or equivalent), the correct options are usually set up for you automatically (in fact, to get a good idea of what's going on, turn on the verbose output and see all the parameters for yourself).
If you need to build up a cross reference of what calls what, the UNIX Services "nm" command can give you that information. If you produce detailed link-edit listings, all the data is in there too when you're building your DLLs.
Good luck!
Our application has over 15 different top-level mxml files to create individual controls that are used in our pages.
We are using Ant to do our automated builds and are calling the mxmlc task for each mxml file separately (see question 78230 for a similar example). Running the compiler separately for each mxml file, however, is already adding up to a considerable amount of time. Our build time is approaching 10 minutes: 5 minutes for compiling our Flex apps, and 5 minutes for compiling hundreds of Java classes, building jars, the installer, etc. Each Flex compile run is reasonably quick (15-20 seconds), but they add up.
Is there a way to compile all of them with one call to mxmlc?
One way to speed it up a bit is to compile everything but the top-level classes into one big SWC using compc, then compile the top-level classes using the SWC as a library. That way, classes that are used by more than one application will only be compiled once.
However, a large contributor to the time it takes to compile a Flex application is the JVM startup time, and each compile will start up its own JVM (plus one for the Ant process). One way to avoid this is to use the Flex Compiler Shell (fcsh) instead of Ant, but that has its downsides, of course. Another way is to try HellFire, which runs the compiler in a separate always-on process, meaning no more waiting for the JVM to start.
Why not use Flex Builder to compile everything automatically?
Are the mxml files components and/or modules? I think that mxmlc compiles them automatically if they're linked to by the main application, but I'm not sure...