Prevent compiler from moving code from one chunk to another? - google-closure-compiler

I read on this answer the following statement:
"Keep in mind that the compiler can and does move code from one chunk into other chunk output files if it determines that it is only used by that chunk."
Is there any way to switch that off?
I have a 'main' chunk and an 'optional' chunk, and I'm finding that the code from the optional chunk is being moved entirely into the main one.
My optional code will only be called from the main code, and only if it's determined that we actually want to load the optional stuff (based on a flag that's external to both).
I want to minimize the size of the main code for cases where the optional stuff isn't needed, but it doesn't seem to be possible with closure as far as I can see.
EDIT:
To split the code I use the -chunk options on the (Java) command line. For the 'main' chunk I point at several folders ('src/Infra/*.js' etc.) and use 'auto' for the chunk's numFiles. For the 'optional' chunk I point at three specific files, no wildcard, and specify 3 as numFiles.
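Roughly like this (a simplified sketch, not my verbatim build; the folder names besides src/Infra are made up for illustration):

java -jar closure-compiler.jar \
  --js 'src/Infra/*.js' --js 'src/Core/*.js' \
  --js src/opt/a.js --js src/opt/b.js --js src/opt/c.js \
  --chunk main:auto \
  --chunk optional:3:main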
To load the 'optional' script the 'main' writes a script tag to the page and has a Promise resolve when it loads. 'optional' is supposed to instantiate the class it defines, and push a reference to that instance to an array in the global namespace, then main reads the ref from the array, and calls an init() method on it, passing in some dependencies.
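In code, the handshake is roughly this (a simplified sketch; the real names differ):

// Global array the 'optional' chunk pushes its instance into.
window.optionalInstances = window.optionalInstances || [];

function loadOptional() {
  return new Promise(function(resolve, reject) {
    var script = document.createElement('script');
    script.src = 'optional.js';
    script.onload = function() {
      // By now 'optional' has pushed its instance into the global array.
      resolve(window.optionalInstances.pop());
    };
    script.onerror = reject;
    document.head.appendChild(script);
  });
}

// In 'main', once we decide the optional stuff is needed:
loadOptional().then(function(instance) {
  instance.init(/* dependencies */);
});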
Is there a better-supported (and equally compact) way of doing it?
EDIT2: In case anyone has a similar issue, I resolved it using the "nameCache" feature of UglifyJS, so the separate components don't necessarily need to be compiled at the same time.
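The gist (command lines abbreviated; check the UglifyJS docs for your version) is to share one name cache file across the separate runs so mangled names stay consistent between the two outputs:

uglifyjs main.js --compress --mangle --name-cache cache.json -o main.min.js
uglifyjs optional.js --compress --mangle --name-cache cache.json -o optional.min.js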

The compiler does not move code "up" the module graph. What's happening is the compiler somehow believes that symbols defined in your optional chunk are directly required.
This most frequently occurs when you are using dependency management and modules. When the compiler sorts dependencies, if any of the "optional" files are directly imported (via require for CommonJS, import for ES6, or goog.require for Closure), the compiler adds them to the main module.
To be more specific, I'd actually have to see code.
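For illustration only (hypothetical file names), a pattern like this anywhere in the main chunk's sources is enough to pull the optional code into main:

// main.js: a static import like this makes './optional.js' a hard
// dependency, so its code gets allocated to the main chunk:
import {OptionalFeature} from './optional.js';
new OptionalFeature();

Keeping 'optional' out of main's static import graph (e.g. loading it via an injected script tag, as you describe) avoids that.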

Related

Is it safe to initialize a struct containing a std::shared_ptr with std::memset?

I'm modifying code written in C++ in order to add several features required by my company. I need to modify this code as little as possible, because it's public code obtained from a Git repository, and we want to avoid deviating from the original source in case we need to synchronize with new upstream versions in the future.
In this code, a structure is initialized with a call to std::memset, and I needed to add a shared pointer to this structure.
I noticed no issues: the code compiles, links, and works as expected, and I don't even get any warnings during compilation.
But is it safe to do it this way? Can a std::shared_ptr be correctly initialized if it is part of a structure initialized with std::memset? Or are there side effects or hazards that make this unsafe?
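A minimal sketch of the pattern in question (the names are made up):

#include <cstring>
#include <memory>

struct Config {
    int flags;
    std::shared_ptr<int> data;  // the newly added member
};

int main() {
    Config c;                        // members are default-constructed here
    std::memset(&c, 0, sizeof(c));   // existing code zero-fills the struct
    // This appears to work because a null shared_ptr happens to be all-zero
    // bytes in common implementations, but shared_ptr is not trivially
    // copyable, so overwriting it with memset is undefined behavior rather
    // than guaranteed initialization.
    c.data = std::make_shared<int>(42);
}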

How do I save a dynamically generated Lisp system in external files?

Basically, I want to be able to generate class definitions, compile the system, and save it for reuse. Would that involve a code walker, or is there a simpler option?
(save-lisp-and-die "isn't going to work for me")
Expanding to explain. I'm generating systems based on OpenAPI definitions, so a system roughly corresponds to an API client.
There will be dozens, if not hundreds, of these.
The idea is to NOT keep them all in the image, but to load them at run time as required.
I see two possible routes here, and to some extent I suspect they mainly differ in "the last mile" (as it were):
1. The route you seem to have settled on: run-time definition of classes and functions.
2. A route whereby you generate your function/class forms, but don't go all the way to getting them "live" in the image, and instead emit the form(s) to a file.
I suspect that it would be possible to have most of the generating code shared between the two: the first route would have a wrapping macro that effectively returns a PROGN, and the second would call a function that pretty-prints what the macro would have returned to a stream (see the sketch below).
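A rough sketch of that shape (all names are hypothetical, and the generator is stubbed out):

;; Hypothetical generator: builds DEFCLASS/DEFUN forms from a spec plist.
(defun generate-client-forms (spec)
  (let ((name (intern (string-upcase (getf spec :name)))))
    (list `(defclass ,name () ())
          `(defun ,(intern (format nil "MAKE-~A" name)) ()
             (make-instance ',name)))))

;; Route 1: a wrapping macro that gets the definitions live in the image.
(defmacro define-client (spec)
  `(progn ,@(generate-client-forms spec)))

;; Route 2: emit the same forms to a file for later COMPILE-FILE / LOAD.
(defun write-client (spec pathname)
  (with-open-file (out pathname :direction :output :if-exists :supersede)
    (with-standard-io-syntax
      (let ((*print-case* :downcase))
        (dolist (form (generate-client-forms spec))
          (pprint form out))))))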
That said, building a tailored environment and saving it to a "core" file is a pretty good way of getting excellent startup times.

Adding flow definition to typescript library

I have written a library called redux-async-action-reducer in TypeScript. I want to add a Flow definition to it.
Is there any way I can keep it along with my library rather than creating a separate definition and putting it in flow-typed?
Something like d.ts, but for Flow definition files?
You could ship your library with a .js.flow file alongside your package entry point. In your case (since your package entry point is dist/index.js) you would create a file at dist/index.js.flow.
Flow will then treat this like a normal source file. You'll have to remember to put // @flow at the top. You can either write functions and classes with stubbed-out implementations, or use declare (e.g. declare export function foo(x: string): string;, and similarly for classes).
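For instance, dist/index.js.flow might look like this (the exported names here are invented for illustration; yours would mirror the library's real API):

// @flow

// Hand-written interface file, shipped next to dist/index.js.
declare export function createAsyncActionReducer<S>(
  actionType: string,
  initialState: S
): (state: S | void, action: {type: string, payload?: S}) => S;

declare export class ActionCache {
  constructor(size: number): void;
  get(key: string): string | void;
}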
Note that this will actually be different than a library definition file -- Flow will treat it like source code.
flow-typed is the preferred way to distribute libdefs, since .js.flow files can lead to issues when Flow makes breaking changes between versions. However, since you will be distributing a hand-curated interface rather than shipping your entire library source as .js.flow files, that issue is mitigated.

cuda global pointer allocation in different source file

I'm facing a situation where I need some tables to be filled in one source file (for example fill.cu) and then used by different kernels in different source files.
I tried declaring a pointer __device__ float *myTable; as extern in the fill.h header, including that header in others.cpp, and defining the pointer in fill.cu, where I allocate and fill it.
This way, I got a linker error indicating that myTable had already been defined in fill.cpp.
After many unsuccessful tries, I decided to put all the kernels that need this table in the same source file. That worked fine until I added a cudaMalloc in the main function before allocating my table in fill.cpp.
Then I noticed that the table values and the data allocated in main overlapped, and using the CUDA debugging tools of MS Visual Studio 2015, I found that the two allocated pointers are the same!
Please advise how to declare a global pointer in CUDA without conflict.
The traditional CUDA linkage model requires that all device symbols, textures, functions, etc. are defined and used within the scope of the same translation unit. It sounds like your code structure is violating this requirement.
You have two choices:
Continue with the same code structure, but provide wrapper functions which your main can call to perform operations on statically declared device variables, rather than directly manipulating device symbols with the CUDA API from other code.
Use separate compilation. Here, you define the device symbol you want to access in exactly one file and declare the same symbol as extern everywhere else you need to use it. You must explicitly use several nvcc options to compile your device code as relocatable device code and use a separate device-code linking stage (see the sketch below).
Both approaches are well documented.
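A minimal sketch of the second approach (file and variable names are illustrative):

// fill.cu -- the single definition of the device symbol
__device__ float *myTable;

__global__ void fillKernel(float *t, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) t[i] = (float)i;
}

void fillTable(int n)
{
    float *d;
    cudaMalloc(&d, n * sizeof(float));
    // Store the device pointer into the device symbol.
    cudaMemcpyToSymbol(myTable, &d, sizeof(float *));
    fillKernel<<<(n + 255) / 256, 256>>>(d, n);
}

// use.cu -- every other translation unit only declares the symbol
extern __device__ float *myTable;

__global__ void scaleKernel(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = myTable[i] * 2.0f;
}

Then compile with relocatable device code, e.g. nvcc -rdc=true fill.cu use.cu -o app, and nvcc performs the device link step for you.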

Maintaining same piece of code in multiple files

I have an unusual environment in a project where I have many files that are each independent standalone scripts. All of the code required by the script must be in the one file and I can't reference outside files with includes etc.
There is a common function in all of these files that does authorization and is the last function in each file. If this function changes at all (as it does now and then) it has to be changed in all the files, and there are plenty of them.
Initially I was thinking of keeping the authorization function in a separate file and running a batch process that produced the final files by combining the auth file with each of the others. However, this is extremely cumbersome when debugging because the auth function needs to be in the main file for this purpose. So I'd always be testing and debugging in the folder with the combined file and then have to copy changes back to the uncombined files.
Can anyone think of a way to solve this problem? i.e. maintain an identical fragment of code in multiple files.
I'm not sure what you mean by "the auth function needs to be in the main file for this purpose", but a typical Unix solution might be to use make(1) and cpp(1) here.
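For instance (file names hypothetical): keep the auth function in auth.inc, write each script as a .in template containing #include "auth.inc", and let cpp produce the standalone files:

# Makefile -- regenerate standalone scripts whenever auth.inc changes
SCRIPTS := $(patsubst %.in,%.js,$(wildcard *.in))

all: $(SCRIPTS)

%.js: %.in auth.inc
	cpp -P $< -o $@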
Not sure what environment/editor you're using, but one thing you can do is use prebuild events. Create a start tag/end tag pair which defines the import region, and then in the prebuild event copy the common code between the tags and then compile...
//$start-tag-common-auth
..... code here .....
//$end-tag-common-auth
In your prebuild event just find those tags, and replace them with the import code and then finish compiling.
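A prebuild step along those lines might look like this (a Node.js sketch; the file names are placeholders):

// prebuild.js -- splice the current common auth code between the tags.
// Usage: node prebuild.js script1.js script2.js ...
const fs = require('fs');

const START = '//$start-tag-common-auth';
const END = '//$end-tag-common-auth';
const auth = fs.readFileSync('common-auth.js', 'utf8');

for (const file of process.argv.slice(2)) {
  const src = fs.readFileSync(file, 'utf8');
  const a = src.indexOf(START);
  const b = src.indexOf(END);
  if (a === -1 || b === -1) continue;  // no markers in this file
  fs.writeFileSync(
    file,
    src.slice(0, a + START.length) + '\n' + auth + '\n' + src.slice(b));
}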
VS supports pre-/post-build events which can call external processes (like batch files or scripts), but those processes do not directly interact with the environment.
Instead of keeping the authentication code in a separate file, designate one of your existing scripts as the primary or master script. Use this one to edit/debug/work on the authentication code. Then add a build/batch process like you are talking about that copies the authentication code from the master script into all of the other scripts.
That way you can still debug and work with the master script at any time, you don't have to worry about one more file, and your build/deploy process keeps everything in sync.
You can use a technique like @Priyank Bolia suggested to make it easy to find/replace the required bit of code.
An ugly way I can think of:
Have the original code in all the files, and surround it with markers like:
///To be replaced automatically by the build process to the latest code
String str = "my code copy that can be old";
///Marker end.
This code block can be replaced automatically by the build process, from one common code file.
