sbt resolution taking too long

I have a Scala project which is built on a Jenkins farm. Sometimes the build finishes very quickly, but sometimes dependency resolution takes far too long:
00:02:26.699 [info] Resolving org.objenesis#objenesis;2.6 ...
00:02:46.700 [info] Resolving org.objenesis#objenesis;2.6 ...
00:03:02.563 [info] Resolving org.objenesis#objenesis-parent;2.6 ...
00:03:30.563 [info] Resolving org.objenesis#objenesis-parent;2.6 ...
What could be the reason for resolution taking so long?
I am using sbt 0.13.12.
My sbt command looks like this:
sbt -Dsbt.repository.config=proxy_repositories -Dsbt.override.build.repos=true '-Dhttp.nonProxyHosts=localhost|127.0.0.1|*.abc.com' -no-share clean test project_A/assembly
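For reference, the file passed via -Dsbt.repository.config uses the same [repositories] syntax as ~/.sbt/repositories; a minimal sketch (the proxy name and URL here are hypothetical):

[repositories]
  local
  company-proxy: http://nexus.abc.com/content/groups/public/
  maven-central

With -Dsbt.override.build.repos=true, sbt resolves only against the repositories listed in this file, so a slow or overloaded proxy here directly slows down every Resolving step.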

Dataflow from Colab issue

I'm trying to run a Dataflow job from Colab and getting the following worker error:
sdk_worker_main.py: error: argument --flexrs_goal: invalid choice: '/root/.local/share/jupyter/runtime/kernel-1dbd101c-a79e-432e-89b3-5ba68df104d7.json' (choose from 'COST_OPTIMIZED', 'SPEED_OPTIMIZED')
I haven't provided the flexrs_goal argument, and if I do it doesn't fix this issue. Here are my pipeline options:
beam_options = PipelineOptions(
    runner='DataflowRunner',
    project=...,
    job_name=...,
    temp_location=...,
    subnetwork='regions/us-west1/subnetworks/default',
    region='us-west1'
)
My pipeline is very simple; it's just:
with beam.Pipeline(options=beam_options) as pipeline:
    (pipeline
     | beam.io.ReadFromBigQuery(
           query=f'SELECT column FROM {BQ_TABLE} LIMIT 100')
     | beam.Map(print))
It looks like the command line args for the sdk worker are getting polluted by jupyter somehow. I've rolled back to the past two apache-beam library versions and it hasn't helped. I could move over to Vertex Workbench but I've invested a lot in this Colab notebook (plus I like the easy sharing) and I'd rather not migrate.
Figured it out. The PipelineOptions constructor will pull in sys.argv if no parameter is given for the first argument (called flags). In my case it was pulling in the command line args that my jupyter notebook was started with and passing them as Beam options to the workers.
I fixed my issue by doing this:
beam_options = PipelineOptions(
    flags=[],
    ...
)
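A minimal sketch of the underlying behaviour (the kernel path shown in the comment is illustrative):

import sys
from apache_beam.options.pipeline_options import PipelineOptions

# In a notebook, sys.argv holds the kernel's launch arguments, e.g.
# ['ipykernel_launcher.py', '-f', '/root/.local/share/jupyter/runtime/kernel-....json']
leaky_options = PipelineOptions()          # no flags given: parses sys.argv[1:] as Beam flags
clean_options = PipelineOptions(flags=[])  # parses nothing; only the keyword arguments apply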

What are the differences between runghc and ghc?

I ran a short program that seemed to compile fine with both, except that runghc gave me the following error and plain ghc did not:
error:
Variable not in scope: main :: IO a0
Perhaps you meant `min' (imported from Prelude)
It seems that things run with runghc need a main function, while things compiled with plain ghc don't?
Is that all?
runghc is used to run programs directly, not to compile them. A file without a main IO action can't be run because, by definition, there is nothing to run.
What ghc does with a file not containing a main is compile it as a module, to be imported by other Haskell programs/modules; these are, of course, not runnable.
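A minimal illustration (the module and file names are made up): this file compiles as a module with ghc but fails under runghc, because there is no main to run.

-- Double.hs: no `main`, so this is a library module, not a program
module Double where

double :: Int -> Int
double x = 2 * x

Here ghc Double.hs produces Double.hi and Double.o without linking, while runghc Double.hs reports the "Variable not in scope: main" error from the question.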

Randomize Make goals for a target

I have a C++ library and it has a few C++ static objects, so the library could suffer from the C++ static initialization order fiasco. I'm trying to vet unforeseen translation-unit dependencies by randomizing the order of the *.o files during a build.
I visited 2.3 How make Processes a Makefile in the GNU manual and it tells me:
Goals are the targets that make strives ultimately to update. You can override this behavior using the command line (see Arguments to Specify the Goals) ...
I also followed the link to 9.2 Arguments to Specify the Goals, but it does not cover this. That did not surprise me.
Is it possible to have Make randomize its goals? If so, then how do I do it?
If not, are there any alternatives? This is in a test environment, so I have more tools available to me than just GNU Make.
Thanks in advance.
This is really implementation-defined, but GNU Make will process targets from left to right.
Say you have an OBJS variable with the objects you want to randomize; you could write something like this (using e.g. shuf):
RAND_OBJS := $(shell shuf -e -- $(OBJS))
random_build: $(RAND_OBJS)
This holds as long as you're not using parallel make (the -j option). If you are, the order will still be randomized, but it will also depend on the number of jobs, system load, the current phase of the moon, etc.
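Putting it together, a minimal sketch (the object list and binary name are hypothetical; note that the link line consumes the shuffled order, which is what exercises static initialization):

OBJS      := a.o b.o c.o d.o
RAND_OBJS := $(shell shuf -e -- $(OBJS))

random_build: $(RAND_OBJS)
	$(CXX) -o init_order_test $(RAND_OBJS)

(The recipe line must start with a tab.)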
The next release of GNU Make will have a --shuffle mode. It will allow you to execute prerequisites in random order, to shake out missing dependencies, by running $ make --shuffle.
The feature was recently added in https://savannah.gnu.org/bugs/index.php?62100 and so far is available only in GNU Make's git tree.

What could be wrong with my premake5 script that it takes so long to build a solution

I am using premake5 version 0.06 to generate a vs2012 project that contains 3000+ files in a directory tree that goes about 2 levels deep.
The project contains 6 configurations and 3 platforms.
It takes approximately 2 minutes to bake the configurations and then about 10 seconds to process the action and write out the solution and project files.
I am wondering if this is the expected time for this number of files or whether I can optimise my premake scripts to improve the bake times?
I make use of a number of overrides, and I include my files using wildcards:
files {
    path.join(includeDir, "**.h"),
    path.join(includeDir, "**.inl"),
    path.join(srcDir, "**.h"),
    path.join(srcDir, "**.inl"),
    path.join(srcDir, "**.c"),
    path.join(srcDir, "**.cpp"),
}
Is it better to put all options under one filter?
For convenience of setup I have options set up by different functions, and so effectively list the same filter multiple times for different options, e.g.:
setupOption1 = function(args)
    filter "platforms:win"
    -- set up option1
end

setupOption2 = function(args)
    filter "platforms:win"
    -- set up option2
end

-- within the project
project("myProject")

-- global setup
language "C++"
kind "WindowedApp"

-- individual options
setupOption1(args)
setupOption2(args)
That does sound a little long, but as this is still an alpha build, performance isn't being monitored as closely right now. There is an open pull request to reduce memory usage that might help?
In general, fewer filters should help, but I would be surprised if it made a dramatic difference (unless you really have a lot).
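For instance, the two option functions above could be collapsed into a single filter scope (a sketch reusing the same hypothetical options):

setupWinOptions = function(args)
    filter "platforms:win"
    -- set up option1
    -- set up option2
    filter {} -- clear the filter again
end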
I found that using ** wildcards in a files filter slows the build right down.
filter { "files:**_win.cpp", "platforms:not win" }
    flags "ExcludeFromBuild"

filter { "files:**_xone.cpp", "platforms:not xone" }
    flags "ExcludeFromBuild"

filter { "files:**_ps4.cpp", "platforms:not ps4" }
    flags "ExcludeFromBuild"
If I comment out these filters, the configuration now takes about 30 seconds to build.

cmake: Working with multiple output configurations

I'm busy porting my build process from msbuild to cmake, to better be able to deal with the gcc toolchain (which generates much faster code for some of the numeric stuff I'm doing).
Now, I'd like cmake to generate several versions of the output: one version with sse2, another for x64, and so on. However, cmake seems to work most naturally if you simply have a bunch of flags (say, "sse2_enable" and "platform") and then generate one output based on those flags.
What's the best way to work with multiple output configurations like this? Intuitively, I'd like to iterate over a large number of flag combinations and rerun the same CMakeLists.txt files for each combination - but of course, you can't express that within the CMakeLists.txt files (AFAIK).
The recommended way to do this is to have multiple build directories. From each one you call cmake with the required settings.
For example you could do, starting in the base source directory (using Linux shell syntax but the idea is the same):
mkdir build-sse2 && cd build-sse2
cmake .. -DENABLE_SSE2=ON   # or whatever to enable it in your CMakeLists.txt
make
cd ..

mkdir build-x64 && cd build-x64
cmake .. -DENABLE_X64=ON    # or whatever again...
make
This way, each build directory is completely separated from each other.
This allows you to have one directory for Debug, another for Release and another for cross-compiling.
There hasn't been much activity here, so I've come up with a workable solution myself. It's probably not ideal, so if you have a better idea, please do add it!
Now, it's hard to iterate over build configs in cmake because cmake's crucial variables don't live in function scope; so, for instance, if you do include_directories(X), the X directory will remain in the include list even after the function exits.
Directories do have scope - and while normally each input directory corresponds to one output directory, you can have multiple output directories.
So, my solution looks like this:
project(FooAllConfigs)

set(FooVar 2)
set(FooAnotherVar b)
add_subdirectory("project_dir" "out-2b")

set(FooVar 5)
set(FooAnotherVar c)
add_subdirectory("project_dir" "out-5c")

set(FooVar 3)
set(FooAnotherVar b)
add_subdirectory("project_dir" "out-3b")

set(FooVar 3)
set(FooAnotherVar c)
add_subdirectory("project_dir" "out-3c")
The project_dir directory then contains a CMakeLists.txt file with code to set up the appropriate includes and compiler options, given the global variables set in the FooAllConfigs project. It also determines a build suffix that is appended to all build outputs; every output, even indirectly included ones (e.g. those generated by add_executable), must have a unique name.
This works fine for me.
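For illustration, the inner project_dir/CMakeLists.txt might look roughly like this (the target name and sources are hypothetical; the suffix is derived from the globals set above):

# project_dir/CMakeLists.txt
set(BUILD_SUFFIX "${FooVar}${FooAnotherVar}")

include_directories(${CMAKE_CURRENT_SOURCE_DIR}/include)
add_definitions(-DFOO_VAR=${FooVar})

# Every output name must embed the suffix so the four configs don't collide.
add_executable(foo-${BUILD_SUFFIX} main.cpp)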
