Using Ambari 2.6, HDP-2.6.4
I followed the guide to create a Spark2 sharelib and removed the duplicated JARs from the spark2 and oozie folders, but I still get one of the following errors.
Sometimes:
2018-03-23 11:36:55,071 ERROR [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Error in dispatcher thread
java.lang.NoSuchMethodError: org.apache.hadoop.util.StringUtils.toLowerCase(Ljava/lang/String;)Ljava/lang/String;
at org.apache.hadoop.mapreduce.v2.util.MRApps.addToClasspathIfNotJar(MRApps.java:332)
at org.apache.hadoop.mapreduce.v2.util.MRApps.addClasspathToEnv(MRApps.java:300)
at org.apache.hadoop.mapreduce.v2.util.MRApps.setClasspath(MRApps.java:261)
at org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.getInitialClasspath(TaskAttemptImpl.java:621)
at org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.createCommonContainerLaunchContext(TaskAttemptImpl.java:757)
at org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.createContainerLaunchContext(TaskAttemptImpl.java:821)
at org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$ContainerAssignedTransition.transition(TaskAttemptImpl.java:1551)
at org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$ContainerAssignedTransition.transition(TaskAttemptImpl.java:1528)
at org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:1078)
at org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:145)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher.handle(MRAppMaster.java:1311)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher.handle(MRAppMaster.java:1303)
at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:183)
at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
at java.lang.Thread.run(Thread.java:748)
2018-03-23 11:36:55,075 INFO [AsyncDispatcher ShutDown handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Exiting, bbye.
And sometimes:
2018-03-23 10:51:31,570 ERROR [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
java.lang.NoSuchMethodError: org.apache.hadoop.mapreduce.CryptoUtils.isEncryptedSpillEnabled(Lorg/apache/hadoop/conf/Configuration;)Z
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initJobCredentialsAndUGI(MRAppMaster.java:735)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:264)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1598)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1595)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1526)
2018-03-23 10:51:31,573 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting with status 1
I just found the cause, and it was my fault: I had packaged Hadoop and Hive JARs with my application, and their versions differ from the ones installed on the cluster.
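If anyone hits the same symptom, a quick way to confirm it is to check what the application bundle actually ships. A minimal sketch, assuming a Maven-built fat JAR (my-app.jar is only a placeholder for your own artifact):
# List any Hadoop/Hive jars or classes bundled inside the application artifact
jar tf my-app.jar | grep -iE 'hadoop|hive'
# With Maven, check which Hadoop/Hive versions are pulled in transitively;
# they should match the cluster, or be excluded / declared with "provided" scope
mvn dependency:tree -Dincludes=org.apache.hadoop,org.apache.hive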
I have imported a Maven project from GitHub and followed the instructions given in its README file to run Alfresco. While testing the application I entered http://localhost:8080/share/ and I successfully get the Alfresco login page. But when I enter the default username and password I am not able to log in to the application; I get the error "Your authentication details have not been recognized or Alfresco may not be available at this time." When I checked the console and the Alfresco log file, I found an org.springframework.beans.factory.BeanCreationException followed by org.alfresco.error.AlfrescoRuntimeException: 09050000 GetModelsDiff return status is 404.
I installed the following:
Apache Tomcat 7.0
PostgreSQL 9.4
I also installed a few dependencies needed for the project (Elasticsearch 6.4 and ActiveMQ 5.0).
I am working on Java 8.
GitHub repository of the imported project: Open-MBEE/mms (Model Management System)
Below are the exceptions observed in the console:
INFO: Initializing Spring root WebApplicationContext
2018-10-05 13:25:28,063 INFO [alfresco.repo.admin] [localhost-startStop-1] Using database URL 'jdbc:h2:C:\Users\alien147\git\mms_modified\mms-ent/alf_data_dev/h2_data/alf_dev;AUTO_SERVER=TRUE;DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=10000;MVCC=FALSE;LOCK_MODE=0' with user 'alfresco'.
2018-10-05 13:25:28,065 INFO [alfresco.repo.admin] [localhost-startStop-1] Connected to database H2 version 1.4.190 (2015-10-11)
2018-10-05 13:25:32,648 INFO [domain.schema.SchemaBootstrap] [localhost-startStop-1] Ignoring script patch (post-Hibernate): patch.db-V4.2-metadata-query-indexes
2018-10-05 13:25:32,648 INFO [domain.schema.SchemaBootstrap] [localhost-startStop-1] Ignoring script patch (post-Hibernate): patch.db-V5.1-metadata-query-indexes
2018-10-05 13:25:38,538 INFO [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Starting 'Authentication' subsystem, ID: [Authentication, managed, alfrescoNtlm1]
2018-10-05 13:25:38,715 INFO [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Startup of 'Authentication' subsystem, ID: [Authentication, managed, alfrescoNtlm1] complete
2018-10-05 13:25:40,942 WARN [context.support.XmlWebApplicationContext] [localhost-startStop-1] Exception encountered during context initialization - cancelling refresh attempt
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'emsConfig' defined in class path resource [alfresco/module/mms-amp/context/mms-init-service-context.xml]: Invocation of init method failed; nested exception is java.lang.NullPointerException
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1514)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:521)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:458)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:293)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:223)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:290)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:191)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:618)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:934)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:479)
at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:410)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:306)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:112)
at org.alfresco.web.app.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:70)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4939)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5434)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1559)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1549)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
at java.util.Properties$LineReader.readLine(Properties.java:434)
at java.util.Properties.load0(Properties.java:353)
at java.util.Properties.load(Properties.java:341)
at gov.nasa.jpl.view_repo.util.EmsConfig.setProperties(EmsConfig.java:17)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.util.MethodInvoker.invoke(MethodInvoker.java:269)
at org.springframework.beans.factory.config.MethodInvokingFactoryBean.doInvoke(MethodInvokingFactoryBean.java:162)
at org.springframework.beans.factory.config.MethodInvokingFactoryBean.afterPropertiesSet(MethodInvokingFactoryBean.java:152)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1573)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1511)
... 22 more
org.alfresco.error.AlfrescoRuntimeException: 09050000 GetModelsDiff return status is 404
at org.alfresco.solr.client.SOLRAPIClient.getModelsDiff(SOLRAPIClient.java:1157)
at org.alfresco.solr.tracker.ModelTracker.trackModelsImpl(ModelTracker.java:249)
at org.alfresco.solr.tracker.ModelTracker.trackModels(ModelTracker.java:207)
at org.alfresco.solr.tracker.ModelTracker.ensureFirstModelSync(ModelTracker.java:229)
at org.alfresco.solr.tracker.CoreWatcherJob.registerForCore(CoreWatcherJob.java:131)
at org.alfresco.solr.tracker.CoreWatcherJob.execute(CoreWatcherJob.java:74)
at org.quartz.core.JobRunShell.run(JobRunShell.java:216)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:563)
Can anyone please help me out in solving the problem?
Thanks in advance.
The alfresco WAR and share WAR are completely separate. It is quite common for the share WAR to start up and show the login page while the back-end it talks to (the alfresco WAR) has failed to start.
That's what is happening in this case. It appears that the emsConfig bean, defined in https://github.com/Open-MBEE/mms/blob/develop/mms-ent/repo-amp/src/main/amp/config/alfresco/module/mms-amp/context/mms-init-service-context.xml, is getting a null pointer, probably because it cannot find that properties file.
The installation instructions for this project say:
"Create and edit the mms.properties file in the $TOMCAT_HOME/shared/classes directory (You can copy mms-ent/mms.properties.example)".
Have you performed this step?
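If not, that step would look roughly like this from the repository root (a sketch only; $TOMCAT_HOME is assumed to point at your Tomcat 7 installation, and the copied file still has to be edited for your environment):
# Copy the example properties into Tomcat's shared classpath
mkdir -p $TOMCAT_HOME/shared/classes
cp mms-ent/mms.properties.example $TOMCAT_HOME/shared/classes/mms.properties
# Edit mms.properties to match your Elasticsearch/ActiveMQ/database settings,
# then restart Tomcat so the alfresco WAR can load it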
I installed JBoss EAP 7.0 and am trying to deploy a .war file, but I am getting the error below. I tried to search for the error but could not understand the explanations I found.
03:13:23,229 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-7) MSC000001: Failed to start service jboss.module.service."deployment.MMSBackOffice.war".main: org.jboss.msc.service.StartException in service jboss.module.service."deployment.MMSBackOffice.war".main: WFLYSRV0179: Failed to load module: deployment.MMSBackOffice.war:main
at org.jboss.as.server.moduleservice.ModuleLoadService.start(ModuleLoadService.java:91)
at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948)
at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.jboss.modules.ModuleNotFoundException: javax.enterprise.deploy.api:main
at org.jboss.modules.Module.addPaths(Module.java:1092)
at org.jboss.modules.Module.link(Module.java:1448)
at org.jboss.modules.Module.relinkIfNecessary(Module.java:1476)
at org.jboss.modules.ModuleLoader.loadModule(ModuleLoader.java:225)
at org.jboss.as.server.moduleservice.ModuleLoadService.start(ModuleLoadService.java:68)
... 5 more
Try adding this module explicitly to the dependencies in jboss-deployment-structure.xml, place that file inside your WAR's WEB-INF directory, and then try again.
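For reference, a minimal sketch of such a file, created from the shell (this assumes the javax.enterprise.deploy.api module is actually present in your EAP installation, which is not guaranteed):
# Add WEB-INF/jboss-deployment-structure.xml to the WAR (here against an
# exploded MMSBackOffice.war), declaring the missing module as a dependency
cat > MMSBackOffice.war/WEB-INF/jboss-deployment-structure.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.2">
  <deployment>
    <dependencies>
      <module name="javax.enterprise.deploy.api"/>
    </dependencies>
  </deployment>
</jboss-deployment-structure>
EOF
After repackaging and redeploying the WAR, check whether the ModuleNotFoundException goes away; if the module is not present under $JBOSS_HOME/modules, whatever declares that dependency has to be removed or bundled instead.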
We configured everything (apparently) correctly, but we receive the following error when we run:
Sys.setenv(HADOOP_CMD="/opt/cloudera/parcels/CDH-5.4.5-1.cdh5.4.5.p0.7/bin/hadoop")
Sys.setenv(RHIVE_HIVESERVER_VERSION="2");
library("rJava", lib.loc="/usr/lib64/R/library")
library("RJDBC", lib.loc="/usr/lib64/R/library")
library("rhdfs", lib.loc="/usr/lib64/R/library")
hdfs.init()
library("Rserve", lib.loc="~/R/x86_64-redhat-linux-gnu-library/3.2")
library("RHive", lib.loc="~/R/x86_64-redhat-linux-gnu-library/3.2")
rhive.init()
rhive.connect(host="oururl",port="ourport", defaultFS="hdfs://ourhdfsservice", hiveServer2=TRUE ,updateJar=FALSE)
(I should mention that HIVE_HOME and HADOOP_HOME are correctly defined.)
The error we receive:
Exception in thread "Thread-12" java.lang.RuntimeException: java.sql.SQLException: java.lang.ClassNotFoundException
at com.nexr.rhive.hive.HiveJdbcClient$HiveJdbcConnector.connect(HiveJdbcClient.java:337)
at com.nexr.rhive.hive.HiveJdbcClient$HiveJdbcConnector.run(HiveJdbcClient.java:322)
Caused by: java.sql.SQLException: java.lang.ClassNotFoundException
at com.nexr.rhive.hive.DatabaseConnection.connect(DatabaseConnection.java:41)
at com.nexr.rhive.hive.HiveJdbcClient$HiveJdbcConnector.connect(HiveJdbcClient.java:330)
... 1 more
Caused by: java.lang.ClassNotFoundException
at RJavaClassLoader.findClass(RJavaClassLoader.java:383)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:191)
at com.nexr.rhive.hive.DatabaseConnection.connect(DatabaseConnection.java:38)
... 2 more
Error: java.lang.IllegalStateException: Not connected to hiveserver
What is happening here? We tried reinstalling Hive, but we get the same error.
Double-check HIVE_HOME to verify it has the correct configuration and that the HIVE_HOME libs are in place, and review the rhdfs and rJava packages.
That finally solved it.
You need to take a lot of care with this package, because the Java dependencies are hard to get right.
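A rough sketch of that double-check (the paths are assumptions for a CDH 5.4-style layout; adjust them to your installation):
# Verify the environment variables point at real installations
echo "HIVE_HOME=$HIVE_HOME"
echo "HADOOP_HOME=$HADOOP_HOME"
# The Hive JDBC driver and its helper jars must be under $HIVE_HOME/lib,
# otherwise RHive fails with java.lang.ClassNotFoundException on connect
ls $HIVE_HOME/lib/hive-jdbc*.jar $HIVE_HOME/lib/hive-service*.jar
# Confirm the R-side packages are installed and at the expected versions
R -e 'packageVersion("rJava"); packageVersion("rhdfs"); packageVersion("RHive")'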
I have a pseudo-distributed cluster with Oozie 4.2.0, Hadoop 2.7, Hive 1.1.2 and Java 1.8. After building the Oozie distribution with these components, I am trying to copy the "shared lib" to HDFS. When I run the command, it gives me the error below. I think a JAR file is missing (or at least that is what the error says).
I am not a Java person and have no knowledge about this error whatsoever. However, I think that if I built Oozie successfully with all the required JAR files, this error should not crop up. I browsed through other similar Oozie issues with this JNI error but found no credible answer. Can someone help me here, please?
oozie-setup.sh sharelib create -fs hdfs://localhost:9000
Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
at java.lang.Class.getMethod0(Class.java:3018)
at java.lang.Class.getMethod(Class.java:1784)
at sun.launcher.LauncherHelper.validateMainClass(LauncherHelper.java:544)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:526)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.conf.Configuration
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 7 more
I found the solution for this myself:
Step 1: Copy $HADOOP_INSTALL/share/common/*.jar to $OOZIE_INSTALL/libext.
Step 2: Rebuild the Oozie WAR file.
Step 3: Rerun Oozie.
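A rough sketch of those steps (the exact paths are assumptions; depending on your Hadoop layout the common JARs may live under $HADOOP_INSTALL/share/hadoop/common instead):
# Step 1: copy the Hadoop common jars into Oozie's libext directory
mkdir -p $OOZIE_INSTALL/libext
cp $HADOOP_INSTALL/share/common/*.jar $OOZIE_INSTALL/libext/
# Step 2: rebuild the Oozie WAR so those jars are bundled
$OOZIE_INSTALL/bin/oozie-setup.sh prepare-war
# Step 3: restart Oozie and retry the sharelib creation
$OOZIE_INSTALL/bin/oozied.sh restart
$OOZIE_INSTALL/bin/oozie-setup.sh sharelib create -fs hdfs://localhost:9000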
I tried to install Apache Oozie on an EMR cluster and am getting the error “Error: IO_ERROR : java.net.ConnectException: Connection refused”.
I followed the link below for the installation:
http://pkavuri.blogspot.in/2013/08/oozie-installation-is-simplified.html
I got the error after running the command below:
bin/oozie admin -oozie http://localhost:11000/oozie -status
These are the steps I took after encountering the error:
Moved the Hadoop and common JAR files to the folders
“/oozie-3.3.2/distro/target/oozie-3.3.2-distro/oozie-3.3.2/oozie-server/webapps/oozie/WEB-INF/lib”
and “oozie-3.3.2/distro/target/oozie-3.3.2-distro/oozie-3.3.2/lib/”
Downloaded Derby into oozie-3.3.2/libext
The error trace after running the command "tail -100f logs/catalina.out":
ERROR: Oozie could not be started
REASON: java.lang.NoClassDefFoundError: org/apache/hadoop/util/ReflectionUtils
Stacktrace:
-----------------------------------------------------------------
java.lang.NoClassDefFoundError: org/apache/hadoop/util/ReflectionUtils
at org.apache.oozie.service.Services.setServiceInternal(Services.java:359)
at org.apache.oozie.service.Services.<init>(Services.java:108)
at org.apache.oozie.servlet.ServicesLoader.contextInitialized(ServicesLoader.java:38)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4206)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4705)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:799)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:779)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:601)
at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:943)
at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:778)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:504)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1317)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:324)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1065)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:840)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1057)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
at org.apache.catalina.core.StandardService.start(StandardService.java:525)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:754)
at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.util.ReflectionUtils
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1680)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1526)
... 27 more
Try creating a libext folder and putting all the Hadoop JARs and the ExtJS JARs in it. Then run oozie-setup.sh and then oozie-run.sh.
In this case, based on your logs, I guess you are missing hadoop-core.jar.
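Roughly, following those steps against the build tree from the question (a sketch; the ExtJS zip path and the Hadoop JAR locations are assumptions, and the exact oozie-setup.sh arguments vary between Oozie versions):
cd oozie-3.3.2/distro/target/oozie-3.3.2-distro/oozie-3.3.2
# Put the Hadoop client jars (including hadoop-core.jar) and the ExtJS
# library into libext so they end up inside the rebuilt WAR
mkdir -p libext
cp $HADOOP_HOME/*.jar $HADOOP_HOME/lib/*.jar libext/
cp /path/to/ext-2.2.zip libext/
# Rebuild the WAR, start Oozie, then re-check its status
bin/oozie-setup.sh
bin/oozie-run.sh &
bin/oozie admin -oozie http://localhost:11000/oozie -status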