What could be wrong with my premake5 script that it takes so long to build a solution?

I am using premake5 version 0.06 to generate a vs2012 project that contains 3000+ files in a directory tree that goes about 2 levels deep.
The project contains 6 configurations and 3 platforms.
It takes approximately 2 minutes to bake the configurations and then about 10 seconds to process the action and write out the solution and project files.
Is this the expected time for this number of files, or can I optimise my premake scripts to improve the bake times?
I make use of a number of overrides, and I include my files using wildcards:
files {
    path.join(includeDir, "**.h"),
    path.join(includeDir, "**.inl"),
    path.join(srcDir, "**.h"),
    path.join(srcDir, "**.inl"),
    path.join(srcDir, "**.c"),
    path.join(srcDir, "**.cpp"),
}
Is it better to put all options under one filter?
For convenience of setup I have options set up by different functions, so I effectively list the same filter multiple times for different options, e.g.
setupOption1 = function(args)
    filter("platforms:win")
    -- set up option 1
end
setupOption2 = function(args)
    filter("platforms:win")
    -- set up option 2
end
-- within the project
project("myProject")
    -- global setup
    language "C++"
    kind "WindowedApp"
    -- individual options
    setupOption1(args)
    setupOption2(args)

That does sound a little long, but as this is still an alpha build, performance isn't being monitored that closely right now. There is an open pull request to reduce memory usage that might help?
In general, fewer filters should help, but I would be surprised if it made a dramatic difference (unless you really have a lot).
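For illustration, one way to cut down on repeated filter calls would be to group the per-option settings under a single filter block (a sketch; setupWinOptions and the defines shown are hypothetical placeholders, not from the original post):
setupWinOptions = function(args)
    filter("platforms:win")
        defines { "OPTION1" }  -- settings formerly in setupOption1
        defines { "OPTION2" }  -- settings formerly in setupOption2
    filter {}  -- reset so later statements apply unconditionally
end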

I found that using ** wildcards in a files filter slows the build right down.
filter {"files:**_win.cpp", "platforms:not win"}
flags "ExcludeFromBuild"
filter {"files:**_xone.cpp", "platforms:not xone"}
flags "ExcludeFromBuild"
filter {"files:**_ps4.cpp", "platforms:not ps4" }
flags "ExcludeFromBuild"
If I comment out these filters, the configuration now takes about 30 seconds to build.
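A possible alternative (a sketch, not from the original answer, and hedged: the behaviour and performance of removefiles inside filter blocks may vary across premake5 alpha versions) is to exclude the platform-specific sources from the file list entirely instead of flagging them per file:
filter "platforms:not win"
    removefiles "**_win.cpp"
filter "platforms:not xone"
    removefiles "**_xone.cpp"
filter "platforms:not ps4"
    removefiles "**_ps4.cpp"
filter {}  -- reset the filter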

Related

DataStage Sequence job: how to process one file at a time when the files are in 7 different folders

DataStage - There are 7 folders in a path, and each folder contains 2 files. For example, the 2 files are in the following format: filename = test_s1_YYYYMMDD.txt, test_s1_YYYYMMDD.done. The paths for these files are:
user/test/test_s1/
user/test/test_s2/
...
user/test/test_s7/
Here s1, s2, ..., s7 represent the different folders. Both of the above-mentioned files are present in each of these folders. How can I process each file in a sequence job?
First you need a job to process a file, and the filename needs to be a parameter of that job.
At the Sequence level you need two levels: an inner one for the two files within each folder and an outer one for the different directories.
For the inner one you can either build a loop with two iterations or simply add the processing job twice to the sequence (which reduces complexity if it will always be exactly two files).
The outer Sequence is a loop where you parameterize the path so that the loop counter can be used to generate the variable s1-s7 part of the path.
Check out more details on loops here
You can use the loop counter (stage_label.$Counter) to parameterize your job.
Depending on what you want to do with the files, how you process them is an important decision. Starting a job (or more) in a sequence for each file can lead to heavy overhead just from starting the jobs. Try loading all files at once in a parallel job using the Sequential File Stage.
In the Sequential File Stage, set the appropriate Format. You can also set everything to none to just put each row in one column and process that in a later job. This will make the reading very flexible and forgiving. If your files are all the same structure, define your columns as needed.
To select the files, use File Patterns. In the Options of the Sequential File Stage, choose to have a File Name Column so you can process the filenames in a later job. You might also want to add a Row Number Column.
This method works pretty fast.
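As a concrete illustration of the file-pattern idea (hedged: the exact pattern depends on your layout), a File Pattern such as
user/test/test_s*/test_s*_*.txt
should pick up the .txt files from all seven folders in a single read; a second pattern ending in .done would cover the other file type.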

Randomize Make goals for a target

I have a C++ library and it has a few C++ static objects. The library could suffer from the C++ static initialization order fiasco. I'm trying to vet unforeseen translation unit dependencies by randomizing the order of the *.o files during a build.
I visited 2.3 How make Processes a Makefile in the GNU manual and it tells me:
Goals are the targets that make strives ultimately to update. You can override this behavior using the command line (see Arguments to Specify the Goals) ...
I also followed the link to 9.2 Arguments to Specify the Goals, but a treatment of randomization was not provided. That did not surprise me.
Is it possible to have Make randomize its goals? If so, then how do I do it?
If not, are there any alternatives? This is in a test environment, so I have more tools available to me than just GNUmake.
Thanks in advance.
This is really implementation-defined, but GNU Make will process targets from left to right.
Say you have an OBJS variable with the objects you want to randomize; you could write something like this (using e.g. shuf):
RAND_OBJS := $(shell shuf -e -- $(OBJS))
random_build: $(RAND_OBJS)
This holds as long as you're not using parallel make (the -j option). If you are, the order will still be randomized, but it will also depend on the number of jobs, system load, current phase of the moon, etc.
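Putting it together, a minimal self-contained sketch (the object list, library name, and compile rule are illustrative; assumes GNU make and coreutils' shuf):
# Illustrative object list; in a real tree this would come from your sources.
OBJS := a.o b.o c.o d.o

# Shuffle once at parse time so each make invocation links in a fresh order.
RAND_OBJS := $(shell shuf -e -- $(OBJS))

# $^ preserves the (randomized) prerequisite order when linking the library.
libfoo.a: $(RAND_OBJS)
	$(AR) rcs $@ $^

%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $< -o $@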
The next release of GNU make will have a --shuffle mode. It will allow you to execute prerequisites in random order to shake out missing dependencies by running $ make --shuffle.
The feature was recently added in https://savannah.gnu.org/bugs/index.php?62100 and so far is available only in GNU make's git tree.

Gulp — how to get lazy, ‘make’-like building?

I am using gulp for CSS and JS processing. Sometimes I miss the good old laziness of the Unix make command:
only generate transformed (in whatever way, e.g. compiled) files from original files that have actually changed (based on time stamps).
This is true from stage 1 to 2 (.cpp -> .o), from stage 2 to 3 (linking or other stuff), whatever your dependency graph gives...
Make is not limited to source code: you can do image manipulation in several steps (efficiently 'lazy' generation of downscaled thumbs, for example) or much else. All based on the fairly simple rule: "is at least one of the source files newer than the current output file(s)?"
Unlike gulp, every step generates (more or less temporary) files, not a continuous pipe.
Is there a way to get the same kind of laziness in gulp, i.e. when generating CSS?
only transform those (less|sass|stylus) files ➝ CSS if something changed (in that specific file)
same for adding in browser prefixes, concat, minify
Admittedly, beyond the first 1 or 2 steps, the output is most likely already a single stream, so any change means 'touched'. Still, when playing for example with minify options, I'd rather be lazy about the early transpile, prefixing and concat stages (drawing prior results from a temp file). The same applies on the JavaScript side (TypeScript, ...).
lazypipe and gulp-cache sound tempting but are something else, if I understand correctly. .watch() is also only a partial answer, and only for the very first stage.
Is there a more generic approach?
If you're set on using Gulp, then this would seem to be the way to do it. It involves the gulp-cached and gulp-remember plugins.
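A minimal sketch of that approach (assuming a LESS pipeline; the task name, globs, and the gulp-less/gulp-concat plugins are illustrative):
var gulp = require('gulp');
var cached = require('gulp-cached');
var remember = require('gulp-remember');
var less = require('gulp-less');
var concat = require('gulp-concat');

gulp.task('css', function () {
    return gulp.src('src/**/*.less')
        .pipe(cached('less'))     // let only changed files through
        .pipe(less())             // transform just those
        .pipe(remember('less'))   // re-add the unchanged, already-built files
        .pipe(concat('all.css'))
        .pipe(gulp.dest('dist'));
});
Note that both plugins keep their caches in memory, so the laziness holds within a running watch session, not across separate gulp invocations.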

Log rotation with Plone

Log rotation for a Plone installation would be a nice feature. What are the current best practices for integrating log rotation into Plone?
I found this article: http://encolpe.wordpress.com/2010/06/17/how-to-get-log-files-rotate-in-zope-with-buildout/ but as there is no documentation on plone.org, I'd like to ask the community for the known best practices, so that people don't fill up their hard disks.
ZConfig has support for the standard library RotatingFileHandler and TimedRotatingFileHandler. Taking an example from the ZConfig tests:
<eventlog>
  <logfile>
    path /path/to/file.log
    level debug
    when D
    interval 3
    old-files 11
  </logfile>
</eventlog>
This will roll over the logs every three days, keeping 11 old files.
You place these config snippets in your buildout using the event-log-custom/access-log-custom parameters of your instance recipe; see plone.recipe.zope2instance.
Similar to what Laurence said above, but this keeps the size under 10 MB and saves only 1 old file:
<eventlog>
  level INFO
  <logfile>
    path /path/to/plone4/var/log/client1.log
    max-size 10mb
    old-files 1
  </logfile>
</eventlog>
plone.recipe.zope2instance can generate this now. For example, you can specify the following options:
event-log-max-size = 10mb
event-log-old-files = 3
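For context, a sketch of where those options sit in a buildout (the part name and the other settings are illustrative):
[instance]
recipe = plone.recipe.zope2instance
user = admin:admin
http-address = 8080
event-log-max-size = 10mb
event-log-old-files = 3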
Here's what we do; it's simple but works:
In your buildout you add this part:
[logrotate]
recipe = collective.recipe.template
input = ${buildout:directory}/templates/logrotate.conf
output = ${buildout:directory}/etc/logrotate.conf
And in templates/logrotate.conf:
rotate 4
weekly
create
compress
delaycompress
missingok

${buildout:directory}/var/log/instance1.log ${buildout:directory}/var/log/instance1-Z2.log {
    sharedscripts
    postrotate
        /bin/kill -USR2 $(cat ${buildout:directory}/var/instance1.pid)
    endscript
}

${buildout:directory}/var/log/instance2.log ${buildout:directory}/var/log/instance2-Z2.log {
    sharedscripts
    postrotate
        /bin/kill -USR2 $(cat ${buildout:directory}/var/instance2.pid)
    endscript
}
Add whatever other log rotations you need. Then it's a matter of linking /etc/logrotate.conf to the generated file.
Mikko, you should ask me directly ;)
My blog article is still good, and this feature is still undocumented in Zope2. It can be used with Plone 4 and Plone 5. The extension iw.rotatezlogs was designed for Plone 3 only.
Using logrotate is not a good fit because it doesn't handle the case of log writes happening during the rotation. It can make your instance crash silently, writing logs in RAM until you restart the instance.
I've been using iw.rotatezlogs since at least early Plone 3, very successfully. I see from your link that I should no longer need iw.rotatezlogs.
I don't like to rely on logrotate, since it's not usable on the one Windows server I have to deploy to.
As near as I can tell, the pure-Zope solution (which I still haven't tested) doesn't do logfile compression (at least I can't see it in ZConfig/components/logger/handlers.xml, which is where I think it would be defined), so I suspect I'll be staying with iw.rotatezlogs.

cmake: Working with multiple output configurations

I'm busy porting my build process from msbuild to cmake, to better be able to deal with the gcc toolchain (which generates much faster code for some of the numeric stuff I'm doing).
Now, I'd like cmake to generate several versions of the output: one version with SSE2, another for x64, and so on. However, cmake seems to work most naturally if you simply have a bunch of flags (say, "sse2_enable" and "platform") and then generate one output based on those flags.
What's the best way to work with multiple output configurations like this? Intuitively, I'd like to iterate over a large number of flag combinations and rerun the same CMakeLists.txt files for each combination - but of course, you can't express that within the CMakeLists.txt files (AFAIK).
The recommended way to do this is to simply have multiple build directories. From each one you simply call cmake with the required settings.
For example you could do, starting in the base source directory (using Linux shell syntax but the idea is the same):
mkdir build-sse2 && cd build-sse2
cmake .. -DENABLE_SSE2=ON   # or whatever enables it in your CMakeLists.txt
make
cd ..
mkdir build-x64 && cd build-x64
cmake .. -DENABLE_X64=ON    # or whatever again...
make
This way, the build directories are completely separate from each other.
This allows you to have one directory for Debug, another for Release and another for cross-compiling.
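If the number of configurations grows, the same dance can be scripted; a sketch (the flag names are illustrative, and cmake -S/-B requires a reasonably recent CMake):
for cfg in sse2 x64; do
    mkdir -p "build-$cfg"
    cmake -S . -B "build-$cfg" -DENABLE_$(echo "$cfg" | tr a-z A-Z)=ON
    cmake --build "build-$cfg"
done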
There hasn't been much activity here, so I've come up with a workable solution myself. It's probably not ideal, so if you have a better idea, please do add it!
Now, it's hard to iterate over build configs in cmake because cmake's crucial variables don't live in function scope: for instance, if you call include_directories(X), the directory X will remain in the include list even after the function exits.
Directories do have scope - and while normally each input directory corresponds to one output directory, you can have multiple output directories.
So, my solution looks like this:
project(FooAllConfigs)

set(FooVar 2)
set(FooAnotherVar b)
add_subdirectory("project_dir" "out-2b")

set(FooVar 5)
set(FooAnotherVar c)
add_subdirectory("project_dir" "out-5c")

set(FooVar 3)
set(FooAnotherVar b)
add_subdirectory("project_dir" "out-3b")

set(FooVar 3)
set(FooAnotherVar c)
add_subdirectory("project_dir" "out-3c")
The normal project dir then contains a CMakeLists.txt file with code to set up the appropriate includes and compiler options based on the global variables set in the FooAllConfigs project. It also determines a build suffix that is appended to all build outputs: every output, even an indirectly included one (e.g. as generated by add_executable), must have a unique name.
This works fine for me.
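For illustration, the inner project_dir/CMakeLists.txt could look something like this (a sketch; the target name, source file, and option mapping are hypothetical):
# Build a suffix from the globals set by the parent FooAllConfigs lists file.
set(BUILD_SUFFIX "${FooVar}${FooAnotherVar}")
# Every target carries the suffix so outputs stay unique across the configs.
add_executable(foo-${BUILD_SUFFIX} main.cpp)
# Map the config variables onto compiler options (hypothetical mapping).
if(FooVar EQUAL 2)
    target_compile_options(foo-${BUILD_SUFFIX} PRIVATE -msse2)
endif()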
