I'm having trouble running a custom jar on Elastic Map-Reduce
I'm using JDK 1.6.0_26 and Hadoop 0.20.205, compiling with Eclipse on my computer, and everything works perfectly fine locally.
For example, the following runs successfully on my machine:
hadoop jar MaxTemperature.jar input/temperature.txt output
On AWS, I specified the JAR as:
s3n://chrishadoop/MaxTemperature.jar
and the arguments as:
s3n://chrishadoop/input/temperature.txt s3n://chrishadoop/output
I did not specify the main class because it is declared in the JAR's manifest.
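For reference, the manifest entry in question looks like the line below (the class name is an assumption based on the JAR's name; the real driver may live in a package):

Main-Class: MaxTemperature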
Here is the JAR I'm using; I will make it public for a little while:
https://s3.amazonaws.com/chrishadoop/MaxTemperature.jar
Here is the error I'm getting:
2012-07-08 19:31:39,824 INFO com.amazonaws.elasticmapreduce.statepusher.StatePusher (main): Pusher awoke, starting to push data into simpledb...
2012-07-08 19:31:40,552 FATAL com.amazonaws.elasticmapreduce.statepusher.StatePusher (main): Fatal Exception raised while extracting data from hadoop and pushing to simpledb
java.lang.NoClassDefFoundError: org/codehaus/jackson/map/JsonMappingException
at com.amazonaws.elasticmapreduce.statepusher.StatePusher.run(StatePusher.java:65)
at com.amazonaws.elasticmapreduce.statepusher.StatePusher.main(StatePusher.java:205)
Caused by: java.lang.ClassNotFoundException: org.codehaus.jackson.map.JsonMappingException
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
... 2 more
There is a version of Jackson installed as part of the AMI; I'm guessing you're bundling a different version of Jackson in your JAR? The error seems to be happening in the support code that makes "enable debugging" work.
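If your build is Maven-based (an assumption; the question only mentions Eclipse), one sketch of a fix is to mark your Jackson dependency as provided, so the AMI's copy is used at runtime and yours stays out of the bundled JAR:

<dependency>
    <groupId>org.codehaus.jackson</groupId>
    <artifactId>jackson-mapper-asl</artifactId>
    <!-- placeholder version; match whatever your AMI ships -->
    <version>1.8.8</version>
    <!-- provided: compile against Jackson without bundling it -->
    <scope>provided</scope>
</dependency>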
Related
I have a pseudo-distributed cluster with Oozie 4.2.0, Hadoop 2.7, Hive 1.1.2, and Java 1.8. After building the Oozie distribution against these components, I am trying to copy the "shared lib" to HDFS. When I run the command below, it gives me the error that follows; it looks like a JAR file is missing (or so it says).
I am not a Java person and have no knowledge of this error whatsoever. But if I built Oozie successfully with all the required JAR files, this error should not crop up. I browsed other similar Oozie issues involving a JNI error but found no credible answer. Can someone help me here?
oozie-setup.sh sharelib create -fs hdfs://localhost:9000
Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
at java.lang.Class.getMethod0(Class.java:3018)
at java.lang.Class.getMethod(Class.java:1784)
at sun.launcher.LauncherHelper.validateMainClass(LauncherHelper.java:544)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:526)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.conf.Configuration
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 7 more
I found the solution to this myself; a command sketch follows the steps:
Step 1: Copy $HADOOP_INSTALL/share/common/*.jar to $OOZIE_INSTALL/libext
Step 2: Rebuild the .war file.
Step 3: Rerun Oozie
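As a command sketch of those steps (assuming $HADOOP_INSTALL and $OOZIE_INSTALL point at the respective install directories, and using Oozie's stock prepare-war tooling):

# Step 1: make the Hadoop jars available to Oozie
$ cp $HADOOP_INSTALL/share/common/*.jar $OOZIE_INSTALL/libext/

# Step 2: rebuild the .war so it picks up the jars in libext
$ cd $OOZIE_INSTALL
$ bin/oozie-setup.sh prepare-war

# Step 3: restart Oozie, then retry the sharelib command from the question
$ bin/oozied.sh start
$ bin/oozie-setup.sh sharelib create -fs hdfs://localhost:9000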
I have built a Camel project in Eclipse with Maven dependencies. It ran successfully; I also built the JAR file and ran it from the command prompt, where it runs as required. But when I moved the JAR file onto our Linux machine, which serves as a job-manager server, and tried to run it as below, I got the error message that follows.
When I try to run it with the command below
$ java -jar mycamelproject
I get the error below, even though the dependency it mentions is present in the Dependency-Jars folder.
Exception in thread "main" java.lang.NoClassDefFoundError: org/springframework/c
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2531)
at java.lang.Class.getMethod0(Class.java:2774)
at java.lang.Class.getMethod(Class.java:1663)
at sun.launcher.LauncherHelper.getMainMethod(LauncherHelper.java:494)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:486)
Caused by: java.lang.ClassNotFoundException: org.springframework.context.support
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 6 more
I then tried running it with the following command.
$ mvn -X exec:java -Dexec.mainClass=mycamelpackage.mycamelmainclass
This produces a series of errors such as the following:
[DEBUG] Could not find metadata org.codehaus.mojo/maven-metadata.xml in local (/home/ec2-user/.m2/repository)
[DEBUG] Skipped remote update check for org.codehaus.mojo/maven-metadata.xml, already updated during this session.
[WARNING] Failure to transfer org.codehaus.mojo/maven-metadata.xml from http://repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced. Original error: Could not transfer metadata org.codehaus.mojo/maven-metadata.xml from/to central (http://repo.maven.apache.org/maven2): proxy.host.net
[ERROR] No plugin found for prefix 'exec' in the current project and in the plugin groups [org.apache.maven.plugins, org.codehaus.mojo] available from the repositories
How should java -jar mycamelproject work if no classpath is set? And mycamelproject should be a JAR file anyway (and if it is, you should add the .jar extension).
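As a sketch, one way to supply that classpath explicitly (assuming the JAR and the Dependency-Jars folder sit in the current directory, and reusing the main class named later in the question):

$ java -cp "mycamelproject.jar:Dependency-Jars/*" mycamelpackage.mycamelmainclass

The * wildcard expands to all JARs in the folder and works on Java 6 and later.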
Besides that, I guess your local Maven repository may be corrupt. That is usually the reason you see the "resolution will not be reattempted" error message. The easiest way to fix this is to remove the corrupt directories and run Maven again.
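A minimal sketch, assuming the corrupt entries are the org.codehaus.mojo metadata shown in the debug output:

# remove the cached (possibly corrupt) metadata
$ rm -rf ~/.m2/repository/org/codehaus/mojo

# -U forces Maven to re-check remote repositories instead of
# trusting the cached failure
$ mvn -U exec:java -Dexec.mainClass=mycamelpackage.mycamelmainclass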
I'm a newbie with Storm, and I have set up Storm-on-YARN on an HDP cluster using the instructions on the HDP Storm-on-YARN page and the storm-yarn-master from anfeng's storm-yarn git project.
I'm able to get Nimbus running and even submit topologies and see them in the Storm UI. However, the spouts and bolts don't seem to be working (zero tuples emitted).
I did some digging around and realized that my worker daemons are not starting. The supervisor log spits out lines like this:
2014-03-13 11:22:03 b.s.d.supervisor [INFO] 18bf93a1-1cea-4e99-93da-8f36a4e9c056 still hasn't started
I tried launching the worker command from the "Launching worker with command" line in the supervisor log, and I got this error:
Exception in thread "main" java.lang.NoClassDefFoundError: backtype/storm/daemon/worker
Caused by: java.lang.ClassNotFoundException: backtype.storm.daemon.worker
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: backtype.storm.daemon.worker. Program will exit.
It looks like it can’t find the worker class although it’s present in the storm-core jar.
Any ideas on how I can proceed with troubleshooting this? I’ve attached the nimbus and the supervisor logs. The worker logs don't seem to have been created.
Nimbus Log - http://paste.ubuntu.com/7089418/
Supervisor Log - http://paste.ubuntu.com/7089422/
Hadoop Version - 2.2
Storm Version - 0.9.0-wip21
I've had an issue like this when the JAR file I was creating did not exclude the Storm binaries; i.e., in the pom.xml file, make sure that you have the storm-core dependency set with:
<scope>provided</scope>
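For reference, a sketch of the full dependency block (the coordinates are an assumption; backtype-era builds such as the 0.9.0-wip21 in this question were published under different Maven coordinates than later Apache Storm releases):

<dependency>
    <groupId>storm</groupId>
    <artifactId>storm-core</artifactId>
    <version>0.9.0-wip21</version>
    <!-- provided: the cluster supplies storm-core at runtime,
         so it must not be bundled into the topology jar -->
    <scope>provided</scope>
</dependency>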
I also had issues where multiple versions of Netty were installed in the Storm lib folder (I had to delete the old version's JAR). This was also causing NoClassDefFoundErrors to be thrown (albeit different from the one you are experiencing).
I would suggest looking at the classpath that shows up when you submit the topologies (you can do that with ps -Af | grep storm).
To uninstall an application, I called uninstall-application app-name from the Cloudify prompt in a local cloud environment. However, the uninstall is unsuccessful. The log file shows the following exception:
2013-10-14 13:06:50,537 rest [1] INFO [org.cloudifysource.rest.controllers.ServiceController] - Removing all application scope attributes for application
2013-10-14 13:06:50,542 rest [1] WARNING [org.openspaces.admin.internal.admin.DefaultAdmin] - Failed to execute: org.openspaces.admin.internal.gsm.DefaultGridServiceManager$3#70b1ec8b - org.openspaces.admin.AdminException: Failed to undeploy processing unit [app-name]; Caused by: org.openspaces.admin.AdminException: Failed to undeploy processing unit [app-name]
at org.openspaces.admin.internal.gsm.DefaultGridServiceManager.undeployProcessingUnit(DefaultGridServiceManager.java:279)
at org.openspaces.admin.internal.gsm.DefaultGridServiceManager$3.run(DefaultGridServiceManager.java:799)
at org.openspaces.admin.internal.admin.DefaultAdmin$LoggerRunnable.run(DefaultAdmin.java:2077)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: org.jini.rio.core.OperationalStringException: GSM not found
at org.jini.rio.monitor.ProvisionMonitorImpl.undeploy(ProvisionMonitorImpl.java:601)
at org.jini.rio.monitor.ProvisionMonitorAdminImpl.undeploy(ProvisionMonitorAdminImpl.java:126)
at org.jini.rio.monitor.DeployAdminGigaspacesMethodinternalInvoke7.internalInvoke(Unknown Source)
at com.gigaspaces.internal.reflection.fast.AbstractMethod.invoke(AbstractMethod.java:41)
at com.gigaspaces.lrmi.LRMIRuntime.invoked(LRMIRuntime.java:450)
at com.gigaspaces.lrmi.nio.Pivot.consumeAndHandleRequest(Pivot.java:557)
at com.gigaspaces.lrmi.nio.Pivot.handleRequest(Pivot.java:658)
at com.gigaspaces.lrmi.nio.Pivot$ChannelEntryTask.run(Pivot.java:196)
... 3 more
2013-10-14 13:06:51,544 rest [1] INFO [org.cloudifysource.rest.util.RestPollingRunnable] - undeployAndWait for processing unit has not finished yet
Eventually the operation times out. After that, I cannot even tear down the local cloud; the only way out is to reboot the system. I'd appreciate some help on this one.
The following error:
Caused by: org.jini.rio.core.OperationalStringException: GSM not found at org.jini.rio.monitor.ProvisionMonitorImpl.undeploy
indicates that one of the Cloudify management components was missing. It may have crashed earlier, or perhaps the local machine was running at 100% CPU, causing local components to not respond to each other.
In an actual cloud deployment, this would cause the Cloudify agent to restart the failed component, but in the local-cloud environment the agent and the other management components run in the same process to conserve memory and speed up start-up time.
I'm trying to run a RecommenderJob on Amazon EMR. I have a JAR called SmartJukebox.jar (not runnable), and it contains the class main.TrackRecommander (and that's it).
I created a job flow with the jar:
s3n://smartjukebox/SmartJukebox.jar
and args:
main.TrackRecommander --input s3n://smartjukebox/ratings.csv --output s3n://smartjukebox/output --usersFile s3n://smartjukebox/user.txt
The class TrackRecommander uses the class RecommenderJob.
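Presumably the class delegates to Mahout roughly like this (a hypothetical sketch; the question doesn't show the actual code):

package main;

import org.apache.hadoop.util.ToolRunner;
import org.apache.mahout.cf.taste.hadoop.item.RecommenderJob;

public class TrackRecommander {
    public static void main(String[] args) throws Exception {
        // Referencing RecommenderJob is what requires the Mahout classes
        // on the runtime classpath, hence the error below.
        ToolRunner.run(new RecommenderJob(), args);
    }
}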
I run the job flow and get this in the error log:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/mahout/cf/taste/hadoop/item/RecommenderJob
at main.TrackRecommander.main(TrackRecommander.java:136)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.lang.ClassNotFoundException: org.apache.mahout.cf.taste.hadoop.item.RecommenderJob
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
... 6 more
Now I see that the JVM can't find RecommenderJob, and indeed I didn't put RecommenderJob in my JAR. I thought EMR would have the Mahout jars built in, but I can't find anything about that.
What is the solution here?
Thanks.
Your problem is exactly what you say: "I didn't put RecommenderJob in my jar." Unless you put those classes in your JAR, of course they can't be found. Why would EMR have this built in? Add the Mahout ".job" file classes to your JAR first.
You will need to create a job JAR that contains all the classes the code needs at runtime, which includes the Mahout classes too.
Take a look at
https://github.com/tdunning/MiA
and check how a job jar is created there using the Maven assembly plugin in pom.xml, together with the job.xml descriptor in the src/main/resources directory.
If you exclude the Hadoop classes, you can then run it on any Hadoop instance.
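A rough pom.xml sketch of that approach, using the built-in jar-with-dependencies descriptor for brevity (the MiA project instead ships a custom job.xml descriptor under src/main/resources):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-assembly-plugin</artifactId>
    <configuration>
        <descriptorRefs>
            <!-- bundles Mahout and the other runtime dependencies
                 into a single job jar -->
            <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
    </configuration>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>single</goal>
            </goals>
        </execution>
    </executions>
</plugin>

Marking the Hadoop dependency itself with <scope>provided</scope> is what keeps the Hadoop classes out of the job jar and makes it portable across Hadoop instances.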