How to generate a list of the files compiled by a GNU make incremental build?

Here's what I'm trying to achieve:
Every time make is called, my Makefile invokes a script that runs a lint tool. However, I want to avoid telling lint to search the entire code base each time; instead, it should run only against the files that make rebuilt during the incremental build. If a file was not re-compiled, there is no need to run lint on it again.
I don't know how make decides which files need to be re-compiled on incremental builds, and I don't think that information is stored as a list anywhere either.
Short of wrapping make in a script that logs its output and later analyses the stdout to see which files were re-compiled, is there any other way to get this incremental-build list?

It's not clear to me why you want to run lint only on the files that were out of date on this particular invocation of make. What if you run make five times and only re-run the lint tool during the fifth? Then you'd lint only the files rebuilt during the fifth run of make, but not the files rebuilt during the first four runs.
It seems to me like you really want to run lint on all the files that have been modified since the last time you ran lint. This is trivial to do:
run-lint: $(SOURCES)
lint $?
@touch $@
The $? automatic variable expands to just those prerequisites that are newer than the target file; after the lint is complete we touch the target file so it's now up to date. Every time you run make run-lint, it will lint all the files that have changed since the last time you ran it.
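For context, here is a minimal sketch of how that target might sit in a complete Makefile; the src/*.c wildcard and the lint command are placeholders for whatever your project actually uses, and recipe lines must be indented with a tab:
SOURCES := $(wildcard src/*.c)

# run-lint is a real stamp file, not a .PHONY target, so its timestamp
# records when lint last ran.
run-lint: $(SOURCES)
        lint $?      # $? = only the sources newer than the stamp file
        @touch $@    # update the stamp so these files are not re-linted next time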

Related

Yocto: how to remove a layer without rebuilding everything

I'm playing with a Yocto project that has in its conf/bblayers.conf file the following line:
ADDONSLAYERS += "${@'${OEROOT}/layers/meta-qt5' if os.path.isfile('${OEROOT}/layers/meta-qt5/conf/layer.conf') else ''}"
I partially bitbaked the project but now I want to try to disable the whole meta-qt5 layer.
After commenting out the line above, how can I remove the already-built files from the output folder and carry on building the others?
I tried with bitbake -c cleansstate meta-qt5 but it doesn't work. I guess it works with recipes only, and not with whole layers.
The easiest way to clean a build is to remove the TMPDIR temporary folder (by default <build>/tmp).
That will remove previous compilation results, but those are also kept in the SSTATE_DIR cache folder, so the next build will not rebuild everything; it will reuse the cached results to speed things up.
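As a rough sketch (paths are the defaults, and core-image-minimal is just an example image name):
# From the build directory, after sourcing oe-init-build-env
rm -rf tmp                    # remove TMPDIR; the sstate cache is left intact
bitbake core-image-minimal    # rebuild; unchanged recipes are restored from sstate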
Then you can purge obsolete entries from the cache folder with the sstate-cache-management.sh script:
# Example of usage (after sourcing oe-init-build-env)
sstate-cache-management.sh --cache-dir=../sstate-cache -d -y

Using Grunt to run modified scripts

I have an EXE, let's call it MyApp.exe
I then have a folder containing a list of unit tests, e.g.
-MyTest1.txt
-MyTest2.txt
-MyTest3.txt
-MyTest4.txt
I'd like to run:
MyApp.exe --File=MyTest1.txt
any time MyTest1.txt or one of the other files is modified (obviously, if it was MyTest2.txt that was modified, I'd want that to be the input instead).
What's the simplest way to do this with Grunt?
Turned out to be pretty simple.
See: https://www.npmjs.org/package/grunt-contrib-watch
Search the page for:
"A very common request is to only compile files as needed. Here is an example that will only lint changed files with the jshint task:"

Grunt concat includes a file that it should ignore; why is it ignoring Gruntfile.js?

I have a Grunt task to concatenate three JS files into a single plugins.js file. I now no longer require one of the files (let's call it unrequired.js), so I've removed it from the list of source files in Gruntfile.js. However, whenever I run grunt concat, the output file still contains unrequired.js. The only way around it is to trash unrequired.js and then run the task again. Is there some kind of caching feature at play here? Does the concat task ignore changes made to the Gruntfile? What am I missing?

Making multiple files from multiple files with one command in gnu make

Assume 1000 files with extension .xhtml are in directory input, and that a certain subset of those files (with output paths in $(FILES), say) need to be transformed via xslt to files with the same name in directory output. A simple make rule would be:
$(FILES): output/%.xhtml : input/%.xhtml
saxon -s:$< -o:$@ foo.xslt
This works, of course, doing the transform one file at a time. The problem is that I want to use saxon's batch processing to do all the files at one time, since, given the number of files, that would be much faster, considering the overhead of loading java and saxon for each file. Saxon allows the -s (source) option to be a directory and processes all files in that directory, placing the results with the same name in the directory specified in the -o: option.
I'm aware of the well-known technique to get GNU make to do a single command to update multiple files by using pattern rules:
output/%.xhtml: input/%.xhtml
saxon -s:input -o:output foo.xslt
But in my case this suffers from two problems. First, it will run the transform on all files in the input directory, not just the ones that have changed; and second, it will not limit the transform to the subset of files specified in $(FILES). The GNU make feature of running a recipe given in a pattern rule only once for all matched targets does not apply to so-called "static pattern rules" (see [here]), which is what the rule at the top of this post is.
In order to use the saxon batching feature, I need to create a temporary directory, copy to it only those files to be processed, then run the transform with that temporary directory as the input directory. I tried creating a temporary directory, and remember its name using a target-specific variable for future use, using
$(FILES): TMPDIR:=$(shell mktemp -d)
but this creates a new temporary directory for every single target that is out-of-date. In any case, I'm not sure how to structure the rule that would then copy the necessary files into that directory. I don't want to create the temporary directory at the time the makefile is parsed, since I have a non-recursive make system that will parse all make files, even those not related to the current top-level target, and don't want to create the temporary directory for situations in which it is not necessary/will not be used.
I'm well aware that many questions have been asked on SO in the past about creating multiple files from a single input; one solution is (non-static) pattern rules; other solutions involve phony targets. However, in this case I'm stuck as to how to put all this together.
I can identify the files that changed and copy them using the static pattern rule
$(FILES): output/%.xhtml : input/%.xhtml
TMPDIR=`mktemp -d`
cp $< $(TMPDIR)
but actually I would prefer to copy the files with a single cp command, whereas this copies them one by one. Perhaps there is some application here of cp -u?
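(For instance, hanging the copy off a stamp target would give a single cp of just the inputs that changed since the last copy, via $?; something like the sketch below, where tmpdir and the stamp file name are invented for illustration:)
tmpdir/copied: $(patsubst output/%,input/%,$(FILES))
        mkdir -p tmpdir
        cp $? tmpdir/    # $? = only the inputs newer than the stamp file
        touch $@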
I also considered using an ad-hoc extension for those files needing updating but could not see how to get this to work either. I'm about to give up and just run the saxon transform on all files when any of them have changed, but is there any better way?
Personally, I wouldn't try to do this from the command line. That's partly because I'm not a shell scripting wizard. I'm not an Ant wizard either, but because the requirement is to avoid re-processing files that haven't changed, this seems to fall very much into Ant territory. On the other hand, Ant will recompile the stylesheet for each transformation, which is an overhead you might want to avoid; if that's the case then your best bet is probably to write a little Java application. It's probably only 100 lines or so.
Final possibility is to do the processing within Saxon: that is, a single transformation that reads multiple input files using the collection() function and generates multiple result files using xsl:result-document. Saxon (commercial editions) offers an extension function last-modified that allows you to filter the files to be processed. With 1000 files you might also want the extension function saxon:discard-document() to prevent the heap filling.
Personally, I like your original one-compilation-per-file formulation. Doesn't this work well with make's -j n flag?
You can of course batch up files by copying, and then running saxon at the end. Recursive make (ugh!) can sort out the ordering. Something like:
.PHONY: all
all:
rm -rf tmpdir
mkdir tmpdir
${MAKE} tmpdir/sentinel
saxon -s:tmpdir -o:output foo.xslt
tmpdir/sentinel: $(FILES) ; touch $@
$(FILES): output/%.xhtml: input/%.xhtml
ln $< $(patsubst input/%,tmpdir/%,$<)
This does work, though I am very queasy about lying to make (the static pattern rule purports to create the target in output/, but in fact does its dirty deed in tmpdir/).
Note that in the recipe for tmpdir/sentinel, $? is correctly set to the list of output files that are out of date. This might be useful if you can pass a bunch of files to saxon rather than a folder.
I think one issue here is that saxon supports either one file or all files in a directory, so it isn't suitable for batch processing without copying to a temporary directory.
Otherwise, this is quite simple to do by using a timestamp marker file as a proxy target. For example:
output/.timestamp : $(FILES)
mkdir -p $(@D)
$(COMMAND) -outputdir=output $?
touch $@
The three commands are:
Ensure that the output directory exists.
Run the batch command on files newer than the timestamp file.
Update the timestamp file (creating it if necessary).
Remember that each line of the recipe is executed in its own subshell, and that if any command line fails, subsequent lines are not invoked.
This approach is useful with Java builds.
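(For instance, the same proxy-target pattern applied to a Java build might look like the sketch below; JAVA_SOURCES and the classes directory are illustrative names, and recompiling only the changed sources ignores dependencies between classes, so treat it as a sketch of the pattern rather than a complete Java build:)
classes/.timestamp : $(JAVA_SOURCES)
        mkdir -p $(@D)
        javac -d classes $?     # compile only the sources newer than the stamp
        touch $@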

Qt qmake - how to stop it adding rules to delete the target

I am trying to add a unit test to a group of other tests. All the tests are in their own subdirectories, each with its own .pro file and the .cpp file which contains the tests themselves. Running qmake in one of the subdirectories creates a Makefile, and then running make runs the compiler to build the TARGET. The tests are actually run by the 'check' target, i.e. with 'make check'.
The test I'm trying to add is different, but it is trying to pretend to behave the same way.
It is different because it is a perl script and so doesn't need to be compiled. It does, however, need to be run - so 'make check' needs to work.
I had a .pro file working for the most part - 'qmake', 'make', 'make check', and 'make clean' would work, but 'make distclean' removed my script (since it assumes it can be regenerated by compiling something).
So, the question is, how do I stop it from removing my script?
Perhaps there's some other approach I should be taking. I had tried the 'subdirs' TEMPLATE, but that does more than just remove the line in Makefile that deletes the TARGET.
Ideas?
Using Ubuntu Linux with Qt 4.6.0.
I would look into the custom target capabilities for your script. Maybe something like this:
check.commands = <scriptname>
check.depends = <any dependencies>
QMAKE_EXTRA_TARGETS += check
Doing things this way will run the check command when the dependencies change, but as long as you don't specify check.target then it shouldn't remove anything. (If your script does produce output, then perhaps that should be in check.target.) Also, since it is specified as an "extra" command, qmake shouldn't create commands to delete your script in a distclean.
This is assuming that your script is in its own subdirectory (which you state), and is the only "check" command that needs run in that subdirectory (kind of implied by the question, but not directly stated).
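(Assuming the snippet above lives in the test subdirectory's .pro file, the workflow would then look something like this:)
qmake            # regenerate the Makefile from the .pro file
make check       # runs <scriptname> whenever its dependencies have changed
make distclean   # should remove generated files but leave the script alone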
