Allocate Memory for SBT command - sbt

sbt run and other sbt commands seem slow to execute. I have 10 GB of unused RAM that I could put to use.
Please advise on how to specify memory for sbt on the command line.
Thanks

Environment Variable In Windows
Key - SBT_OPTS
Value - -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=2G -Xmx2G
Bash Profile In Linux
.bash_profile
export SBT_OPTS="-Xmx1536M -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=2G -Xss2M -Duser.timezone=GMT"
Command Line Option
sbt -mem 4096, i.e. allocate 4 GB of memory
SBT_OPTS
sbt reads the SBT_OPTS environment variable; if it is unset, the launcher script falls back to "$default_sbt_opts". Example:
SBT_OPTS="-Xms512M -Xmx3536M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:MaxPermSize=724M"
(Note that on Java 8 and later the JVM ignores -XX:MaxPermSize.)
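To confirm the setting actually took effect, you can evaluate a plain Scala expression at the sbt prompt (eval is a standard sbt command; the figure reported is usually slightly below the -Xmx you set):
> eval Runtime.getRuntime.maxMemory / (1024 * 1024)
This prints the running JVM's max heap in megabytes, so you can check that SBT_OPTS or -mem was picked up.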

Related

Unable to install spark 2.2 in Cloudera Quickstart VM (5.10)

I have followed the blog mentioned below, downloaded the parcel, and put it in place as required.
Please let me know if anyone has installed it, and the steps they followed.
(https://www.cloudera.com/documentation/spark2/latest/topics/spark2_installing.html)
/opt/cloudera/csd/SPARK2-2.1.0.cloudera2-1.cdh5.7.0.p0.171658-el5.parcel
But the service cloudera-scm-server restart command fails to run.
To use Cloudera Express (free), run:
sudo /home/cloudera/cloudera-manager --express
This requires at least 8 GB of RAM and at least 2 virtual CPUs.
SPARK 2.2 Installation Setup on Cloudera VM
Step 1: Download a quickstart_vm from the link:
Prefer the VMware platform as it is easy to use; in any case, all the options are viable.
The entire tar file is around 5.4 GB. You need to provide a business email id, as personal email ids are not accepted.
Step 2: The virtual environment requires around 8 GB of RAM; please allocate sufficient memory to avoid performance glitches.
Step 3: Please open the terminal and switch to root user as:
su root
password: cloudera
Step 4: Cloudera ships Java 1.7.0_67, which is old and does not match our needs. To avoid Java-related exceptions, please install a newer Java with the following commands:
(a). Downloading Java:
wget -c --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.tar.gz
(b). Switch to the /usr/java/ directory with the "cd /usr/java/" command.
(c). Copy the downloaded Java tar file into the /usr/java/ directory.
(d). Untar it with "tar -zxvf jdk-8u131-linux-x64.tar.gz"
(e). Open the profile file with the command “vi ~/.bash_profile”
(f). export JAVA_HOME to the new java directory.
“export JAVA_HOME=/usr/java/jdk1.8.0_131”
Save and Exit.
(g). In order for the above change to take effect, the following command needs to be executed on the shell:
source ~/.bash_profile
Step 5: The Cloudera VM provides Spark 1.6 by default. However, the 1.6 APIs are old and do not match production environments, so we need to download and manually install Spark 2.2.
(a). Switch to /opt/ directory with the command:
“cd /opt/”
(b). Download spark with the command:
wget https://d3kbcqa49mib13.cloudfront.net/spark-2.2.0-bin-hadoop2.7.tgz
(c). Untar the spark tar with the following command:
tar -zxvf spark-2.2.0-bin-hadoop2.7.tgz
(d). We need to define some environment variables as default settings:
Please open a file with the following command:
vi /opt/spark-2.2.0-bin-hadoop2.7/conf/spark-env.sh
Paste the following configurations in the file:
SPARK_MASTER_IP=192.168.50.1
SPARK_EXECUTOR_MEMORY=512m
SPARK_DRIVER_MEMORY=512m
SPARK_WORKER_MEMORY=512m
SPARK_DAEMON_MEMORY=512m
Save and exit
(e). We need to start spark with the following command:
/opt/spark-2.2.0-bin-hadoop2.7/sbin/start-all.sh
Export SPARK_HOME:
export SPARK_HOME=/opt/spark-2.2.0-bin-hadoop2.7/
(f). Change the permissions of the directory:
chmod 777 -R /tmp/hive
(g). Try “spark-shell”, it should work.
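Once the shell is up, a one-line sanity check at the Scala prompt confirms the new install is the one running (spark is the SparkSession that the 2.x shell creates for you):
scala> spark.version
This should report 2.2.0 for the build installed above; if it reports 1.6.x, the VM's bundled Spark is still first on your PATH.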
Please follow the video below; it has all the necessary steps required to install Spark 2 in the Cloudera VM.
YouTube link - https://www.youtube.com/watch?v=lQxlO3coMxM
Also, for starting Cloudera Express (free), your VM should have at least 8 GB of RAM allocated; if you have the default 4 GB allocated, you can force the start using the command below and then follow the above video.
sudo /home/cloudera/cloudera-manager --force --express
Try this command
sudo /home/cloudera/cloudera-manager --express --force
I gave up on this; nothing works well with either the parcel or the non-parcel installation.
As soon as Cloudera Express is started there are numerous errors, and Java 7 instead of Java 8.
I got a MapR VM install with Spark 2.x instead. No issues; it worked the first time. That works well, and it is my advice #1.
If you want Kudu, then I would install CentOS and set things up yourself. That is advice #2. You may miss Impala, but for pure research and development that is not much of an issue.
With the following command my Spark 2.2 was automatically updated to Spark 2.4:
(i) sudo yum update
It might be that your Java home path is broken afterwards; in that case, export the Java home path in your bash profile again:
(a) vi ~/.bash_profile
(b) add the export line, e.g. export JAVA_HOME=/usr/java/jdk1.8.0_131
(c) source ~/.bash_profile
Just download the right version of Spark that you need, say 'spark-2.2.0-bin-hadoop2.6'.
Open ~/.bash_profile through the vi editor:
vi ~/.bash_profile. Then paste the below 2 lines:
SPARK_HOME=/home/cloudera/Downloads/spark-2.2.0-bin-hadoop2.6
PATH=$PATH:$HOME/bin:$SPARK_HOME/bin
Save it.
Then run the command: source ~/.bash_profile
Now start spark-shell.
Note: make sure you have JDK 1.8 installed.
Same answer as Swapnil Shashank's above, with the small modifications below:
SPARK_LOCAL_IP=127.0.0.1 (added to spark-env.sh so Spark binds to the VM's loopback address)
tar -xvzf spark-2.2.0-bin-hadoop2.7.tgz (same flags in a different order)

Why is SBT throwing error when running external process

I'm creating a task in SBT that will upload some scripts to S3. I'm uploading to S3 by calling the AWS CLI through sbt's external process support: s"aws s3 cp ./someDir $uploadPath --recursive" ! log.
It throws this error:
java.io.IOException: Cannot run program "aws": CreateProcess error=2, The system cannot find the file specified
This happens only on Windows. It works fine when I run the same project/task in Ubuntu build system. AWS cli is installed on Windows machine and PATH is set correctly.
It's not clear to me what is missing.
It seems that the sbt process library does not pick up the PATH variable on Windows.
A suitable workaround is to extract your aws command into a separate file and trigger the execution of that file:
your doSomeStuff.bat would be:
aws s3 cp ./someDir $uploadPath --recursive
and in the build.sbt add (note: on sbt 1.x you also need import scala.sys.process._ for the ! operator):
lazy val someStuff = taskKey[Unit]("Execute an aws command")
someStuff := {
  "./doSomeStuff.bat" !
}
Another possible workaround is to run the command inside a shell (and you must know the right shell for each "problematic" environment):
val shell: Seq[String] = if (sys.props("os.name").contains("Windows")) Seq("cmd", "/c") else Seq("bash", "-c")
val command: Seq[String] = shell :+ "<your command here>"
command.!
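Putting the two ideas together, here is a hedged sketch of how the S3 task could look in build.sbt; the task name uploadScripts and the bucket path are illustrative placeholders, not from the original question:
import scala.sys.process._

lazy val uploadScripts = taskKey[Unit]("Upload scripts to S3")  // hypothetical task name

uploadScripts := {
  val log = streams.value.log
  val uploadPath = "s3://my-bucket/scripts"  // placeholder destination
  // Run through a shell so that PATH lookup on Windows matches an interactive prompt
  val shell =
    if (sys.props("os.name").toLowerCase.contains("windows")) Seq("cmd", "/c")
    else Seq("bash", "-c")
  val exit = (shell :+ s"aws s3 cp ./someDir $uploadPath --recursive")
    .!(ProcessLogger(out => log.info(out), err => log.error(err)))
  if (exit != 0) sys.error(s"aws s3 cp failed with exit code $exit")
}
The ProcessLogger forwards the CLI's output into sbt's logger, which replaces the ! log idiom from the question.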

SBT Error: "Failed to construct terminal; falling back to unsupported..."

I have run into an ERROR with SBT today. It can best be shown with the sbt sbt-version command:
Run on 5/29/17:
eric@linux-x2vq:~$ sbt sbt-version
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256M; support was removed in 8.0
[info] Set current project to eric (in build file:/home/eric/)
[info] 0.13.13
Run on 6/1/17:
eric@linux-x2vq:~$ sbt sbt-version
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256M; support was removed in 8.0
[ERROR] Failed to construct terminal; falling back to unsupported
java.lang.NumberFormatException: For input string: "0x100"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.valueOf(Integer.java:766)
at jline.internal.InfoCmp.parseInfoCmp(InfoCmp.java:59)
at jline.UnixTerminal.parseInfoCmp(UnixTerminal.java:233)
at jline.UnixTerminal.<init>(UnixTerminal.java:64)
at jline.UnixTerminal.<init>(UnixTerminal.java:49)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at jline.TerminalFactory.getFlavor(TerminalFactory.java:209)
at jline.TerminalFactory.create(TerminalFactory.java:100)
at jline.TerminalFactory.get(TerminalFactory.java:184)
at jline.TerminalFactory.get(TerminalFactory.java:190)
at sbt.ConsoleLogger$.ansiSupported(ConsoleLogger.scala:123)
at sbt.ConsoleLogger$.<init>(ConsoleLogger.scala:117)
at sbt.ConsoleLogger$.<clinit>(ConsoleLogger.scala)
at sbt.GlobalLogging$.initial(GlobalLogging.scala:43)
at sbt.StandardMain$.initialGlobalLogging(Main.scala:64)
at sbt.StandardMain$.initialState(Main.scala:73)
at sbt.xMain.run(Main.scala:29)
at xsbt.boot.Launch$$anonfun$run$1.apply(Launch.scala:109)
at xsbt.boot.Launch$.withContextLoader(Launch.scala:128)
at xsbt.boot.Launch$.run(Launch.scala:109)
at xsbt.boot.Launch$$anonfun$apply$1.apply(Launch.scala:35)
at xsbt.boot.Launch$.launch(Launch.scala:117)
at xsbt.boot.Launch$.apply(Launch.scala:18)
at xsbt.boot.Boot$.runImpl(Boot.scala:41)
at xsbt.boot.Boot$.main(Boot.scala:17)
at xsbt.boot.Boot.main(Boot.scala)
[info] Set current project to eric (in build file:/home/eric/)
[info] 0.13.13
No changes (that I know of) to either my SBT or Java setup.
Any ideas on what might be causing this or how to fix the error?
Thank you!
I had the same issue, especially when the TERM environment variable is set to xterm-256color. Setting it to a different value fixed the issue for me, e.g.
export TERM=xterm-color
I found the package which causes this issue: ncurses. I downgraded ncurses to version ncurses-6.0+20170429-1 (I am using Arch Linux) and SBT starts just fine.
Steps for Arch Linux:
cd /var/cache/pacman/pkg
sudo pacman -U ncurses-6.0+20170429-1-x86_64.pkg.tar.xz # or some other older version
Steps for Mac: see https://github.com/jline/jline2/issues/281
I think this issue was introduced with ncurses version 20170506, see: http://invisible-island.net/ncurses/NEWS.html#index-t20170506
+ modify tic/infocmp display of numeric values to use hexadecimal when
they are "close" to a power of two, making the result more readable.
I filed an issue on the SBT issue tracker: https://github.com/sbt/sbt/issues/3240
Edit: SBT version 0.13.16 includes the fix for this problem.
You can add export TERM=xterm-color to the top of /usr/share/sbt/bin/sbt because $HOME/.sbtconfig is deprecated.
The sbt command is just a script. It loads $HOME/.sbtconfig at the very beginning, so just put
export TERM=xterm-color
in that conf file, as @user3113045 said, and sbt will work. In that case your other terminal commands will still use xterm-256color.
This resolved the issue in my case (Linux users):
Open your terminal
Navigate to your project directory
Type "export TERM=xterm-color" in your terminal, without the quotes
Hit ENTER
That is all, and then you are good to go.
One year passed... now it happened to me.
So, ncurses changed, and the corresponding sbt part was, I guess, probably implemented from guessed tests and observed behaviour rather than from any spec or RFC. (So far, sbt is the only program with this ncurses issue that I know of.)
In case you can't simply upgrade sbt or downgrade ncurses, you can change the TERM environment variable as mentioned in the other answers.
Trivial fix: if your sbt script is a bash script (most likely, unless you run DOS .bat files):
$ file /usr/bin/sbt
/usr/bin/sbt: Bourne-Again shell script, ASCII text executable
then it might suffice to add this workaround to it:
TERM="${TERM/xterm-256color/xterm-color}"
This rewrites TERM only inside the sbt script, so the rest of your session keeps xterm-256color.
If you can, bump the sbt version in project/build.properties; 0.13.16 works for me.
For Ubuntu 20.04 users, open your terminal and run the commands below:
Go to the "/usr/share/sbt/bin" directory ( $ cd /usr/share/sbt/bin )
Give yourself permission to edit the file ( $ sudo chmod -R 777 sbt )
Open the sbt text file in this directory ( $ nano sbt )
Add the "export TERM=xterm-color" line at the top and save ( Ctrl + X )
Ex:-
#!/usr/bin/env bash
export TERM=xterm-color
set +e
I can't write a comment as my score is too low, but @user3113045's answer worked when I added export TERM=xterm-color to my .zshrc file.
I faced this issue when using Activator, which uses sbt internally.
I am on Ubuntu, and this error was frustrating me.
I started facing it when I ran
$ activator gen-idea (a tool which, according to IntelliJ, is legacy)
After this I tried to delete all the caches this tool generated.
I deleted the .ivy2 and .sbt directories from my home folder and ran the activator cleanFiles compile command, which resolved my issue.

Look at Options in `.sbtopts` within Running `sbt`

Per this helpful post, I removed my ~/.sbtconfig, and added .sbtopts:
$ cd myProject
$ cat .sbtopts
-J-Xmx4G
-J-XX:+CMSClassUnloadingEnabled
-J-XX:MaxPermSize=4G
Then I ran sbt. How can I verify, from the sbt console, the options set in .sbtopts?
If you man sbt, you'll see that there's a debug flag; with it, you'll see something like this:
$ sbt -d
[process_args] java_version = '1.7.0_72'
# Executing command line:
java
-Xms1024m
-Xmx1024m
-XX:ReservedCodeCacheSize=128m
-XX:MaxPermSize=256m
-XX:+CMSClassUnloadingEnabled
-XX:+UseConcMarkSweepGC
-jar
/usr/share/sbt-launcher-packaging/bin/sbt-launch.jar
Here's my sbtopts file: /usr/share/sbt-launcher-packaging/conf/sbtopts
-J-XX:+CMSClassUnloadingEnabled
-J-XX:+UseConcMarkSweepGC
Recent versions of the JDK come with a nice tool called jps that tells you about running Java processes.
jps -v should point you to the processes and show the passed-in options
I don't know if you can do it from within the sbt console but you can add -J-XX:+PrintFlagsFinal to .sbtopts and the JVM will print all the flags.
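Along the same lines, you can query the already-running JVM from the sbt prompt with the standard java.lang.management API (nothing sbt-specific):
> eval java.lang.management.ManagementFactory.getRuntimeMXBean.getInputArguments
This returns the list of arguments the launcher actually passed to the JVM, so every -J-prefixed entry from .sbtopts should show up there with the -J stripped.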

IOException: Cannot run program "javac" when "sudo ./sbt/sbt compile" in Spark?

I'm installing Apache Spark which uses its own copy of SBT to set things up.
I'm using Linux Mint in a VirtualBox VM.
Here's a snippet from the error when I run sudo ./sbt/sbt compile from the Spark directory spark-0.9.0-incubating:
[error] (core/compile:compile) java.io.IOException: Cannot run program "javac": error=2, No such file or directory
[error] Total time: 181 s, completed Mar 9, 2014 12:48:03 PM
I can run java and javac from the command line just fine: e.g. javac -version gives javac 1.6.0_31
The correct jdk1.6.0_31/bin is in my PATH.
I read that the error might be due to the 64-bit JDK that I had installed, but I get the same error with the 32-bit JDK.
How can I sort out the issue?
edit: Using bash shell.
DISCLAIMER I'm mostly guessing now and am still unsure whether I should be responding here rather than adding a comment. Until it's clear, the DISCLAIMER remains.
When you execute java and javac from the command line, which user are you at that moment? I'm pretty sure your problems surface because the users you operate as are different.
Please notice that you're executing sudo ./sbt/sbt compile as root (due to the way sudo works), but you say nothing about which user(s) you've been using to execute the javac and java commands.
Add jdk1.6.0_31/bin to PATH for root and you'll be all set (as far as the configuration of Java is concerned).
I'd also recommend setting JAVA_HOME to point to jdk1.6.0_31, as it may help at times -- many applications use it to find the Java installation.
As a workaround, you may edit ./sbt/sbt and add PATH and JAVA_HOME appropriately.
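To confirm that diagnosis without editing anything, you can ask sbt itself what PATH it sees when launched through sudo; eval evaluates a Scala expression inside the sbt JVM (assuming the bundled sbt script passes commands through to the launcher, as the standard one does):
sudo ./sbt/sbt 'eval sys.env("PATH")'
If the printed value does not contain jdk1.6.0_31/bin, root's environment is the problem described above.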
You need the javac executable installed. To get it on Ubuntu, please run the following command:
sudo apt-get install openjdk-7-jdk
This also places javac on your PATH.
