Laziness of parsers of inputTasks - sbt

I've noticed that the completions for all inputTask parsers are computed while sbt is loading, and some of them are a little slow.
Why isn't this done lazily? It would improve sbt's load speed, since many of the inputTasks are never invoked.
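The eager-vs-lazy distinction the question is about can be illustrated generically. This is not sbt code, just a minimal Python sketch, with made-up names, of computing completions once on first use instead of at load time:

```python
# Generic sketch (not sbt code): defer an expensive completion
# computation until the parser is actually used, then cache it.
# In sbt the parser would come from a setting; a plain closure
# stands in for it here.

class LazyCompletions:
    def __init__(self, compute):
        self._compute = compute   # expensive, e.g. scanning the project
        self._cached = None
        self._evaluated = False

    def get(self):
        if not self._evaluated:   # work happens only on first use
            self._cached = self._compute()
            self._evaluated = True
        return self._cached

calls = 0

def expensive_scan():
    global calls
    calls += 1
    return ["compile", "test", "run"]

completions = LazyCompletions(expensive_scan)

print(calls)                 # 0 -- nothing computed at "load time"
print(completions.get())     # first use triggers the scan
print(completions.get())     # cached; the scan does not run again
print(calls)                 # 1
```

If most inputTasks are never invoked in a session, the expensive work is never done at all, which is the load-time saving the question asks about.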

Related

Why is it not possible to execute compile-file in parallel?

I have a project with a lot of files that are not managed by ASDF and are compiled manually. These files are completely independent, so compiling them in parallel looked like a way to reduce compilation time. My plan was to compile the files in parallel and then load the produced FASL files sequentially. But after I parallelized the compilation, I saw literally zero performance improvement. I then went into the SBCL sources and found that compile-file takes a world lock, which effectively serializes compilation.
My question is, what's the reason that compile-file takes this lock? While loading FASLs in parallel could indeed lead to some race conditions, it seemed to me that compilation of Lisp files should be independent and parallelizable.
The compiler is accessible from the language: you can do compile-time programming, have compiler macros, etc. Just as an illustration, there is (eval-when (:compile-toplevel) …). You cannot rule out compile-time effects in general, and this would have to be thread-safe everywhere. I suspect the effort to make this robust is much bigger than anyone was willing to invest.
You might be able to start multiple Lisp images for parallel compilation, though. You just need to handle the dependency graph while orchestrating that.
UPDATE: I just stumbled upon a conversation that seems to imply that SBCL is closer to getting rid of that lock than I thought: https://sourceforge.net/p/sbcl/mailman/message/36473674/
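The multiple-images approach boils down to scheduling: group files into waves so that each wave depends only on earlier waves, compile each wave's files in parallel (e.g. one sbcl process per file), then load the FASLs sequentially. A rough Python sketch of the wave computation; the file names are hypothetical and the sbcl invocation in the comment is only indicative:

```python
def compile_waves(deps):
    """Group files into waves: every file's dependencies land in
    strictly earlier waves, so files within one wave can be
    compiled in parallel (e.g. one sbcl process each).
    deps maps file -> set of files it depends on."""
    remaining = {f: set(d) for f, d in deps.items()}
    waves = []
    while remaining:
        ready = [f for f, d in remaining.items() if not d]
        if not ready:
            raise ValueError("dependency cycle")
        waves.append(sorted(ready))
        for f in ready:
            del remaining[f]
        for d in remaining.values():
            d.difference_update(ready)
    return waves

# Hypothetical project: util.lisp has no deps; a.lisp and b.lisp
# depend on util.lisp; main.lisp depends on both.
deps = {
    "util.lisp": set(),
    "a.lisp": {"util.lisp"},
    "b.lisp": {"util.lisp"},
    "main.lisp": {"a.lisp", "b.lisp"},
}

for wave in compile_waves(deps):
    # Each wave could be compiled in parallel, roughly:
    #   sbcl --non-interactive --eval '(compile-file "a.lisp")'
    print(wave)
# → ['util.lisp'], then ['a.lisp', 'b.lisp'], then ['main.lisp']
```

Since each file is compiled by its own Lisp image, the world lock is never contended; only the final sequential load happens in one image.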

How to use Common Lisp sort of like a smalltalk image

Goal
I would like my Common Lisp environment (SBCL + GNU Emacs + SLIME) to be sort of like a Smalltalk image, in that I want a big ball of mud of all my code, organized in packages and preferably projects. To that end, I have messed about a bit with save-lisp-and-die and setting the Lisp in Emacs to bring up the saved image. Where I get lost is the appropriate way to make it work with Swank.
Problem
I believe it is necessary to load the Swank hooks into my Lisp image before save-lisp-and-die. But this seems fragile: whenever either my SBCL version or my SLIME version changes, I get a version mismatch.
Question
Am I missing something? Do people work this way, or do they tend to treat each project as a separate, loadable set of packages under ASDF?
I really miss the Smalltalk way, and per-project ASDF feels a bit clunkier and more rooted in the file system. In comparison, it reminds me too much of every other language and their app/project orientation. On the other hand, it seems a bit more stable with respect to versions of depended-upon packages. (Versioning hell across languages is another matter entirely.)
Any hints how to do what I want or why it isn't such a good idea would be much appreciated.
Images
Common Lisp implementations like SBCL support images. The idea of saved memory appeared early in Lisp, in the 1960s.
Smalltalk took that idea from Lisp. In many Smalltalk implementations, images can be portable across OS and runtime, especially when machine-independent byte code is used. SBCL, on the other hand, compiles to native machine code.
Managed source code
Smalltalk added the idea of managed source code: Smalltalk often uses a simple database plus a change log to store sources. One Lisp doing something similar was Xerox Interlisp, though with slightly different approaches.
Other Lisp implementations and IDEs don't support managed source code in that way - only the Xerox Interlisp variants do, AFAIK.
DEFSYSTEM
In Common Lisp, the use of defsystem facilities like ASDF, together with IDEs like GNU Emacs + SLIME, is much more file-system based. Code resides in multiple systems: files in a directory, plus a system description.
It's not even clear that it's meaningful to load a newer version of a system into a Lisp where an older version is already loaded. One might be able to arrange that, but nothing prevents you from messing it up.
Updating Lisp
Updating a Lisp like SBCL from one version to another might:
- make the saved image incompatible with the runtime
- make the compiled code in FASL files incompatible with the runtime
You might save an image with the runtime included/bundled. That way you have the right combination of image and runtime.
But when you update the runtime, you usually need to regenerate a new, compatible image with your code loaded.
Since SBCL makes a release roughly once a month, there is a temptation to update regularly. Other implementations use different strategies: LispWorks, for example, releases much less often and publishes patches between releases, which are loaded into the released version.
Updating SLIME
I have no idea whether it is possible to update a loaded SLIME (one whose earlier version is already loaded into a Lisp image) by loading a new version on top. It is probably a good idea to check with the SLIME maintainers.

Experimental support for keeping the Scala compiler resident with sbt.resident.limit?

I've just noticed in the code of sbt.BuiltinCommands that there's a Java system property, sbt.resident.limit, documented as follows (quoting "Experimental or In-progress"):
Experimental support for keeping the Scala compiler resident. Enable
by passing -Dsbt.resident.limit=n to sbt, where n is an integer
indicating the maximum number of compilers to keep around.
Should an end user know about the switch? Where could it be useful? Is the feature headed for mainstream use, or is it so specialized that it's of almost no use?
We experimented with keeping Scala compiler instances in memory to reduce the time incremental compilation takes. We found that the speed improvements were not as large as we expected, and the complexity of resident compilation is considerable, due to issues like memory leaks and sound symbol-table invalidation.
I think it's very unlikely we'll finish that experimental feature in the foreseeable future, so I think we should remove all references to resident compilation mode from the sbt sources.
I created an sbt ticket for tracking it: https://github.com/sbt/sbt/issues/1287
Feel free to grab it. I'm happy to help with any questions about cleaning the resident-compilation code out of sbt.

Why do my modules keep getting rebuilt?

I have a flex project, which has a main application, and then a number of small modules (17 of them).
For reasons I have not been able to figure out, when I do a 'debug-compile' to test, it frequently (but not always) decides to rebuild the modules, though nothing within them has changed in any way. Without the modules recompiling, it takes about 5 seconds to build the app; with it, it's upwards of 2 minutes.
I assume it's that something all the modules need is getting changed, but for the life of me, I can't figure out what.
How can I resolve this?
I find the Flash Builder compiler has some terrible linking issues. I highly recommend looking at HFCD (HellFire Compiler Daemon). It is an out-of-process, multi-threaded MXML compiler that, quite frankly, kicks some serious ass. We saw our compile times drop astronomically, though our project is huge. Additionally, since compilation no longer happens inside Flash Builder, Flash Builder itself is far more responsive.
http://stopcoding.wordpress.com/2009/10/20/hfcd-pick-your-compiler-speed-how-about-10x/

LLVM JIT speed up choices?

It's kind of subjective, but I'm having trouble getting the LLVM JIT up to speed. JITting large bodies of code takes 50 times as long as just interpreting them, even with lazy compilation turned on.
So I was wondering: how can I speed JITting up? What kinds of settings can I use?
Any other recommendations?
I am sorry to say that LLVM just isn't very fast as a JIT compiler; it is better as an AOT/static compiler.
I ran into the same speed issues in my llvm-lua project. What I did was disable JIT compilation of large Lua functions. llvm-lua doesn't turn lazy compilation on, since LLVM requires too much C-stack space to run from Lua coroutines.
Also, if you use this in your program's main() function:
llvm::cl::ParseCommandLineOptions(argc, argv, 0, true);
it will expose a lot of LLVM's command-line options, like '-time-passes', which enables timing of LLVM's passes so you can see which parts of JIT compilation take the most time.
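The "skip JITting large functions" workaround described above can be sketched generically. This is not llvm-lua's actual code; the threshold, the size metric (instruction count), and all names are made up for illustration:

```python
# Generic sketch of the workaround above: JIT-compile only functions
# below a size threshold and keep interpreting the rest, so huge
# bodies of code never pay the compilation cost. The limit of 200
# instructions is a hypothetical number, not llvm-lua's real value.

JIT_SIZE_LIMIT = 200  # hypothetical cutoff, in IR instructions

def choose_backend(fn_sizes, limit=JIT_SIZE_LIMIT):
    """Map function name -> 'jit' or 'interpret' based on size."""
    return {
        name: ("jit" if size <= limit else "interpret")
        for name, size in fn_sizes.items()
    }

plan = choose_backend({"hot_loop": 40, "huge_init": 5000, "helper": 180})
print(plan)
# → {'hot_loop': 'jit', 'huge_init': 'interpret', 'helper': 'jit'}
```

A refinement would be to also require a minimum call count before JITting, so rarely-called small functions stay interpreted too.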
