External jar lib with spark-submit

I have a Scala app which uses an external jar lib. How can I use that lib if the application jar was copied to HDFS?
Locally I start it with --conf spark.driver.extraClassPath=./lib/*, but if I use an hdfs:// link it does not work.
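For illustration, the shape of command this usually calls for looks like the sketch below (the class name and paths are placeholders, not from the question). spark-submit accepts hdfs:// URIs both for the application jar and for the comma-separated --jars list, and jars passed via --jars are placed on the driver and executor classpaths for you:

spark-submit \
  --class com.example.MyApp \
  --master yarn \
  --deploy-mode cluster \
  --jars hdfs:///libs/dep1.jar,hdfs:///libs/dep2.jar \
  hdfs:///apps/my-app.jar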

Related

How to upload dependencies jar in lib folder in apigee from Edge UI

I created one jar file which contains my class files plus the .classpath and .project files. In the sample on GitHub I see apiproxy, callout and lib folders. When I deploy my jar, I get an error that traffic can't flow. How do I upload dependency jars into a lib folder? When I upload my main jar file into the resources folder, I don't see any lib folder for the dependency jars. Should I place all jars in the one resources folder? In my .classpath file I can see all the lib jars, like message-flow-1.0.0.jar, expressions-1.0.0.jar and itextpdf-5.5.6.jar. The documentation says to deploy with Maven, but I don't know Maven; from the UI, how should I create a lib folder and upload the jars there?
Okay, here is my understanding of your question.
You can upload a jar file into Apigee from Scripts > import file > in file type choose "JAR" > select the jar file from your workspace > and finally, give your jar a name. Then use a Java Callout policy to call your jar.
If you have to modify your jar and want to redeploy it, delete the existing jar in Apigee and upload the new one following step 1. If the new jar has the same name as the existing jar, you do not need to change the Java Callout policy at all; but if the new jar has a different name, don't forget to modify the Java Callout policy to refer to the new jar.
Please create a single jar file which contains the jars like message-flow-1.0.0.jar, expressions-1.0.0.jar and itextpdf-5.5.6.jar. As per the Apigee docs, create a Java Callout policy and make sure you mention the package name and class name in it (the policy name below is just a placeholder):
<JavaCallout name="SingleJarCallout">
  <ClassName>package.ClassName</ClassName>
  <ResourceURL>java://SingleJar.jar</ResourceURL>
</JavaCallout>
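For reference, a minimal sketch of the callout class that <ClassName> points at, assuming Apigee's Java callout interfaces (the class name here is a placeholder):

import com.apigee.flow.execution.ExecutionContext;
import com.apigee.flow.execution.ExecutionResult;
import com.apigee.flow.execution.spi.Execution;
import com.apigee.flow.message.MessageContext;

public class ClassName implements Execution {
    // Invoked by the gateway when the Java Callout policy runs.
    public ExecutionResult execute(MessageContext messageContext, ExecutionContext executionContext) {
        // Read or set flow variables here if needed, then report success.
        return ExecutionResult.SUCCESS;
    }
}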

Specify jar structure in sbt assembly

When sbt-assembly builds a fat jar, it places all the dependencies in the main folder. I need to construct a jar that looks like this
-- domain
   domain classes
-- lib
   dependency classes
Is it possible to do this with sbt-assembly, or any other plugin?
If you want to separate your app jar file from your dependency jar files, here is the most practical method I found with sbt:
Create a project/plugins.sbt file if it does not exist and add the following line:
addSbtPlugin("org.xerial.sbt" % "sbt-pack" % "0.8.0")
After adding this line refresh your project.
Note: Plugin version might change in time.
When the sbt refresh finishes, update your build.sbt file like this:
lazy val MyApp = project.in(file("."))
  .settings(
    artifactName := { (sv: ScalaVersion, module: ModuleID, artifact: Artifact) =>
      "MyApp.jar"
    }
  )
  .settings(packSettings)
Then run:
sbt pack
Or if you're doing this for a child project, run this:
sbt "project childproject" clean pack
This will nicely separate your main jar file and your dependency jars.
Your app jar will be in the target/scala-<version> folder.
Your dependencies will be in target/pack/lib (see the layout sketch below).
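For orientation, the default sbt-pack output layout looks roughly like this (a sketch; exact contents depend on your project):

target/pack
-- bin (generated launch scripts)
-- lib (dependency jars; sbt-pack copies the project jar here as well)
-- VERSION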
This way you deploy your dependencies once, and whenever you change your app you can just deploy the app jar file.
So you don't have to deploy an uber jar on every change.
Also in production, you can run your app like:
java -cp "MyApp.jar:dependency_jars_folder/*" com.myapp.App

Hive looks for the jar in an HDFS private directory

How does add jar work in Hive? When I add a local jar file,
add jar /users/course/jars/json-serde-1.3.1.jar;
the Hive query fails and says it could not find the jar in HDFS, at the same directory path:
Job Submission failed with exception 'java.io.FileNotFoundException(File does not exist: hdfs://localhost:9000/users/course/jars/json-serde-1.3.1.jar)
Then I put the jar into HDFS and ran add jar with that HDFS filepath:
add jar hdfs://localhost/users/course/jars/json-serde-1.3.1.jar;
Now the Hive query says:
File does not exist: hdfs://localhost:9000/private/var/folders/k5/bn104n8s72sdpg3tg7d8kkpc0000gn/T/a598a513-d7c9-4d55-9280-b6554487cac7_resources/json-serde-1.3.1.jar
I have no idea why it keeps looking for the jar in wrong places.
I believe Hive looks for the JAR locally, not on HDFS.
So if my home directory on the gateway server is
pwd
/home/my_username/
And the JAR is sitting locally at:
/home/my_username/hive_udfs/awesomeness.jar
Then I'd go into the hive shell and run:
add jar /home/my_username/hive_udfs/awesomeness.jar
At least, that works for me in my environment. HTH. Good luck! :)
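As a quick sanity check (using the same paths as above), you can confirm what Hive actually registered; list jars is a standard Hive command that shows the resources added to the session:

hive> add jar /home/my_username/hive_udfs/awesomeness.jar;
hive> list jars;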

Running a jar file MapReduce job without HDFS

I have bundled a jar from my Eclipse project. I would like to pass arguments to the jar, basically an input file. I would like to know how to give it an input file that is not in HDFS. I know that's not how Hadoop normally works, but this is for testing purposes. Eclipse has a feature for using local files. Is there a way to do this via the command line?
You can run hadoop in 'local' mode by overriding the job tracker and file system properties from the command line:
hadoop jar <jar-file> <main-class> -fs local -jt local <other-args..>
You need to be using the GenericOptionsParser (which is the norm if you're using ToolRunner to launch your jobs).
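For context, here is a minimal ToolRunner skeleton (a sketch; the class and job names are placeholders) showing why the generic -fs/-jt options get picked up:

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyJob extends Configured implements Tool {
    // ToolRunner applies GenericOptionsParser first, so flags like
    // -fs local -jt local are stripped from args and folded into the Configuration.
    public int run(String[] args) throws Exception {
        Job job = Job.getInstance(getConf(), "my-job");
        job.setJarByClass(MyJob.class);
        // ... set mapper/reducer and input/output paths from the remaining args ...
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new MyJob(), args));
    }
}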

Is there a way to run executable Jar in browser?

Is there a way to run an executable jar in a browser? I don't think my jar works if I export it as a normal jar.
Jar File : http://chromaticcreative.net/rjminer.jar
