In a multi-project SBT build, how can I run one main program?

I have a multi-project build. One of my subprojects is called "core", and it contains source files in the normal project layout (main/scala/...).
Using the sbt command line, how can I run a main program defined in Hello.scala (an object extending App)?
Tried:
> run my.package.path.Hello
> run Hello
> run

If you want to run a task scoped to only one subproject ("module"), you can do so with the project/task syntax, e.g.
core/run
See the documentation on Scopes.
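For example, assuming the subproject is named core and the main class is my.package.path.Hello (the names from the question), either of the following should work in the sbt shell; runMain is the one to use when a project defines more than one main class:
> core/run
> core/runMain my.package.path.Hello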

Related

How to run multiple sbt commands in interactive mode

I want to run several sbt commands within the sbt interactive mode, i.e. without leaving the sbt "shell".
(Note:
some questions explain how to pass arguments to sbt commands from the standard shell; that is not what I want here.)
Example: I am in the sbt interactive shell, and I want to run "test:compile", then "test".
I know test will trigger the required compilation, but in this example I want to compile all sub-projects before any test is started.
To run commands sequentially within the sbt shell, use ; to chain them:
> ;test:compile ;test
Note, however, that running the test task will compile your sources if necessary, without your having to run the compile task explicitly.
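If you use the same sequence often, sbt's built-in alias command can give it a name; a minimal sketch (the alias name testAll is made up):
> alias testAll=;test:compile ;test
> testAll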

JWrapper Build Fails with OutOfMemoryError

I am running this build through the JWrapperApp. Whenever I try to build my 86.7 MB .jar file, it gets to the step of adding the .jar file to the archive and then fails with a java.lang.OutOfMemoryError. I have tried building for only one OS at a time, as well as adjusting some other settings, but none of this seems to have an effect.
I have successfully been able to build the sample-app and I know my .jar file is in working order.
I would suggest you try starting the JWrapperApp packager with more memory by passing it the JVM option -Xmx1024m or greater. This entails starting the packager from the command line/shell using the following command:
java -Xmx1024m -jar JWrapperApp.jar

Cannot run 'make clean' in cmd for a qmake project

I'm reading Foundations of Qt Development by Johan Thelin.
Here is a quote from page 450 about building a QMake project.
If you choose to create a Makefile using QMake, you can build your project using a simple make command (or nmake if you’re using Visual Studio). You can clean up your intermediate files using make clean. The slightly more brutal step is to run make distclean, which cleans up all generated files, including the Makefile. You will have to run QMake again to get a Makefile for make.
I tried to clean the files using 'make clean', but cmd displays the message: 'make' is not recognized as an internal or external command, operable program or batch file.
I searched here and tried to find make inside the Qt directory to add it to PATH, but was not successful. Then, following this solution, I also tried mingw32-make, with the same result.
Can anyone help me?
If you are using MinGW, try mingw32-make clean. Remember, you must add MinGW's bin directory to your user PATH environment variable for this command to work: follow "My Computer" > "Properties" > "Advanced" > "Environment Variables" > "Path" and append ;C:\Qt\Tools\mingw492_32\bin
OR
use the command: setx PATH "%PATH%;C:\Qt\Tools\mingw492_32\bin"
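Once the PATH change has taken effect (open a new cmd window first), a quick way to verify the whole cycle, assuming the MinGW path above matches your Qt installation:
where mingw32-make
mingw32-make clean
mingw32-make distclean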

run a "source" bash shell script in qmake

I want to use the Intel compiler with Qt, but using the Intel compiler requires running the script
$ source /opt/intel/bin/compilervars.sh intel64
Of course, I could add this to ~/.bashrc, but that would not take effect inside Qt Creator, which still complains about a missing icpc. So I want it to be part of the main mkspec qmake file.
How can I execute that full bash command in qmake?
Short Answer: Using QMAKE_EXTRA_TARGETS and PRE_TARGETDEPS, you can execute source /opt/intel/bin/compilervars.sh intel64, but simply sourcing the script will not solve your issue.
Long Answer: The QMake file is converted into a Makefile, which Make then executes. The problem you will run into is that Make executes each command in its own shell, so sourcing the script only affects a single command: the one that executes the script.
There are a couple of possible ways to make things work:
1. Execute the script before starting Qt Creator. I've actually done this for some projects where I needed a special environment set up. To make my life easier, I created a shell script that sets up the environment and then launches Qt Creator (see the sketch after this list).
2. Within Qt Creator, modify the Build Environment for the project. I've also used this trick: look at the environment set up by the script and change the "Build Environment" settings under the Projects tab for your project to match.
3. It might also be possible to modify QMake's compiler commands, but I am not sure you can make it execute two commands instead of one (source the script, then execute the compiler). Furthermore, this would make the project very hard to move to other systems.
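A minimal sketch of option 1, assuming the Qt Creator binary is on your PATH as qtcreator (adjust the name and paths for your installation):
#!/usr/bin/env bash
# Import the Intel compiler environment, then start Qt Creator with it
source /opt/intel/bin/compilervars.sh intel64
qtcreator "$@" &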
You can create a shell script that does more or less the following:
#!/usr/bin/env bash
# Remove the script's own directory from PATH to avoid calling ourselves recursively
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
export PATH=${PATH/$SCRIPT_DIR:/}
# Set the environment up
source /opt/intel/bin/compilervars.sh intel64
# Call the real make, forwarding all arguments
make "$@"
Save it into a file named "make" in an empty directory somewhere, make it executable, then change the build environment in Qt Creator to prepend the script's directory to PATH:
PATH=[dir-of-make-script]:${Env:PATH}
You can do this either in the project settings or, once and for all, in the kit settings.
This way, Qt Creator will be fooled into calling your make script, which sets up your environment before calling the actual make binary. I use a similar technique under Windows with Linux cross-toolchains and it has been working well so far.

Running a hadoop job without creating a jar file

I have written a simple hadoop job. Now I want to run it without creating a jar file, unlike the many tutorials found on the net.
I am calling it from a shell script on the Ubuntu platform, which runs a Cloudera CDH4 distribution of hadoop (2.0.0+91).
I can't create a jar file for the job because it depends on several third-party jars and configuration files which are already centrally deployed on my machine and are not accessible at the time of creating the jar. Hence I am looking for a way to include these custom jar files and configuration files.
I also can't use the -libjars and DistributedCache options, because they only affect the map/reduce phase, while my driver class also uses these jars and configuration files. My job uses several pieces of in-house utility code which internally use these third-party libraries and configuration files, which I can only read from a centrally deployed location.
Here is how I am calling it from a shell script:
sudo -u hdfs hadoop x.y.z.MyJob /input /output
It shows me:
Caused by: java.lang.ClassNotFoundException: x.y.z.MyJob
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
My calling shell script successfully sets the hadoop classpath and includes all my required third-party libraries and configuration files from a centrally deployed location.
I am sure that my class x.y.z.MyJob and all required libraries and configuration files are present in both the $CLASSPATH and $HADOOP_CLASSPATH environment variables, which I set before calling the hadoop job.
Why, then, can my program not find the class when the script runs?
Can't I run the job as a normal Java class? All my other normal Java programs use the same classpath and can always find the classes and configuration files without any problem.
Please let me know how I can access centrally deployed hadoop job code and execute it.
EDIT: Here is my code to set the classpath
# Directories holding compiled classes, scripts, third-party jars, config, and data
CLASSES_DIR=$BASE_DIR/classes/current
BIN_DIR=$BASE_DIR/bin/current
LIB_DIR=$BASE_DIR/lib/current
CONFIG_DIR=$BASE_DIR/config/current
DATA_DIR=$BASE_DIR/data/current
# Build the classpath, starting with the current directory
CLASSPATH=./
CLASSPATH=$CLASSPATH:$CLASSES_DIR
CLASSPATH=$CLASSPATH:$BIN_DIR
CLASSPATH=$CLASSPATH:$CONFIG_DIR
CLASSPATH=$CLASSPATH:$DATA_DIR
# lib.sh emits all third-party jars as a colon-separated list
LIBPATH=`$BIN_DIR/lib.sh $LIB_DIR`
CLASSPATH=$CLASSPATH:$LIBPATH
# Make the additions visible to the hadoop launcher script
export HADOOP_CLASSPATH=$CLASSPATH
lib.sh is the script that concatenates all the third-party jars into a colon-separated list, and CLASSES_DIR contains my job code, the x.y.z.MyJob class.
All my configuration files are under CONFIG_DIR.
When I print my CLASSPATH and HADOOP_CLASSPATH they show the correct values. However, whenever I call hadoop classpath just before executing the job, it shows the following output:
$ hadoop classpath
/etc/hadoop/conf:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/.//*:myname:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-0.20-mapreduce/./:/usr/lib/hadoop-0.20-mapreduce/lib/*:/usr/lib/hadoop-0.20-mapreduce/.//*
$
It obviously does not have any of the previously set $CLASSPATH and $HADOOP_CLASSPATH values appended. Where did these environment variables go?
Inside my shell script I was running the hadoop command as Cloudera's hdfs user:
sudo -u hdfs hadoop x.y.z.MyJob /input /output
The script itself was run by a regular Ubuntu user, which set the CLASSPATH and HADOOP_CLASSPATH variables as shown above. But the hadoop command was then executed, via sudo, as a different user (hdfs), which does not inherit those exported environment variables. Hence the exception indicating that the class was not found.
So you have to run the job as the same user who actually sets the CLASSPATH and HADOOP_CLASSPATH environment variables.
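As a hedged sketch of both fixes (paths and class name as in the question; env is used because plain sudo typically strips the caller's environment):
# Run as the same user who exported the variables
hadoop x.y.z.MyJob /input /output
# Or keep the hdfs user, but pass the classpath through explicitly
sudo -u hdfs env HADOOP_CLASSPATH="$HADOOP_CLASSPATH" hadoop x.y.z.MyJob /input /output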
Thanks all for your time.
