Oozie null pointer exception when submitting jobs - cloudera

I'm just trying to run a very simple word-count example, but I get the following NullPointerException when submitting the job:
[cloudera@localhost Oozie_Example]$ oozie job -oozie=http://localhost:11000/oozie/ -config job.properties -run
java.lang.RuntimeException: java.lang.NullPointerException
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1242)
at sun.net.www.protocol.http.HttpURLConnection.getHeaderField(HttpURLConnection.java:2714)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:477)
at org.apache.oozie.client.OozieClient$JobSubmit.call(OozieClient.java:586)
at org.apache.oozie.client.OozieClient$JobSubmit.call(OozieClient.java:561)
at org.apache.oozie.client.OozieClient$ClientCallable.call(OozieClient.java:479)
at org.apache.oozie.client.OozieClient.run(OozieClient.java:655)
at org.apache.oozie.cli.OozieCLI.jobCommand(OozieCLI.java:918)
at org.apache.oozie.cli.OozieCLI.processCommand(OozieCLI.java:579)
at org.apache.oozie.cli.OozieCLI.run(OozieCLI.java:552)
at org.apache.oozie.cli.OozieCLI.main(OozieCLI.java:199)
Caused by: java.lang.NullPointerException
at java.io.ByteArrayInputStream.<init>(ByteArrayInputStream.java:106)
at sun.misc.CharacterEncoder.encode(CharacterEncoder.java:188)
at sun.net.www.protocol.http.NegotiateAuthentication.setHeaders(NegotiateAuthentication.java:156)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1482)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
... 8 more
java.lang.NullPointerException
Here is my job.properties file:
nameNode=hdfs://localhost:8020
jobTracker=localhost:8021
queueName=default
inputDir=${nameNode}/user/dev/oozie/workflow/wordcount/input/
outputDir=${nameNode}/user/dev/oozie/workflow/wordcount/output/
oozie.libpath=${nameNode}/user/oozie/workflow/wordcount/lib
oozie.use.system.libpath=true
user.name=dev
oozie.wf.application.path=${nameNode}/user/${user.name}/oozie/workflow/wordcount/
Any ideas? It happens pretty early, so I think it has something to do with Oozie or the NameNode.

I would like to add to the answer above.
When the Oozie client gets a NullPointerException, it usually means the request caused a failure in a server thread. To find the true reason, you should look at the server log, e.g. /var/log/oozie/oozie.log on CDH.
There you will find the detailed stack trace, which proved very helpful in our case: it stated exactly what had happened, e.g. that an HDFS workflow file did not exist.
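For example, here is a quick way to inspect the server log on a CDH machine (the path is the CDH default and may differ on other distributions or custom installs):
# Show the tail of the Oozie server log right after the failed submission.
tail -n 200 /var/log/oozie/oozie.log
# Or follow it live while re-running the oozie job command.
tail -f /var/log/oozie/oozie.log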

The java.lang.NullPointerException is due to an error in your job.properties file.
Have you placed the workflow XML into HDFS? Also, you should point oozie.wf.application.path at the workflow file itself:
oozie.wf.application.path=${nameNode}/user/${user.name}/oozie/workflow/wordcount/my-workflow.xml
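As a rough sketch, assuming the paths from the job.properties above and an illustrative workflow file name (my-workflow.xml is a placeholder), uploading the workflow definition to HDFS would look like this:
# Create the application directory and upload the workflow definition.
hdfs dfs -mkdir -p /user/dev/oozie/workflow/wordcount
hdfs dfs -put my-workflow.xml /user/dev/oozie/workflow/wordcount/
# Verify the file is where oozie.wf.application.path expects it.
hdfs dfs -ls /user/dev/oozie/workflow/wordcount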

Related

SFTP On New or Update throws error "Pipe closed"

I'm using the "On New or Update" source of the Mule 4 SFTP Connector to process files from an SFTP server directory. The process works fine; however, while reading the last file, the SFTP connector throws the error shown below, and the file remains in the directory until the next scheduled run picks it up. The same thing then happens to the last file of each new set of files.
Any thoughts on how to fix this issue?
ERROR:
11:20:45.315 05/04/2022 Worker-0 [MuleRuntime].uber.27: [sftp-demo-app].prcsACKFiles-Error-SuccessFlow.CPU_INTENSIVE #1648077b ERROR
event:c458bc90-cbbd-11ec-85e2-06a565d43154
********************************************************************************
Message : "org.mule.weave.v2.module.reader.ReaderParsingException: org.mule.runtime.api.exception.MuleRuntimeException - Exception was found trying to retrieve the contents of file /home/messages/file_8ddb7674.json
org.mule.runtime.api.exception.MuleRuntimeException: Exception was found trying to retrieve the contents of file /home/messages/file_8ddb7674.json
at org.mule.extension.sftp.internal.connection.SftpClient.exception(SftpClient.java:427)
at org.mule.extension.sftp.internal.connection.SftpClient.exception(SftpClient.java:423)
at org.mule.extension.sftp.internal.connection.SftpClient.getFileContent(SftpClient.java:349)
at org.mule.extension.sftp.internal.connection.SftpFileSystem.retrieveFileContent(SftpFileSystem.java:117)
at org.mule.extension.sftp.internal.SftpInputStream$SftpFileInputStreamSupplier.getContentInputStream(SftpInputStream.java:111)
at org.mule.extension.sftp.internal.SftpInputStream$SftpFileInputStreamSupplier.getContentInputStream(SftpInputStream.java:93)
at org.mule.extension.file.common.api.AbstractConnectedFileInputStreamSupplier.getContentInputStream(AbstractConnectedFileInputStreamSupplier.java:81)
at org.mule.extension.file.common.api.AbstractFileInputStreamSupplier.get(AbstractFileInputStreamSupplier.java:65)
at org.mule.extension.file.common.api.AbstractFileInputStreamSupplier.get(AbstractFileInputStreamSupplier.java:33)
at org.mule.extension.file.common.api.stream.LazyStreamSupplier.lambda$new$1(LazyStreamSupplier.java:29)
at org.mule.extension.file.common.api.stream.LazyStreamSupplier.get(LazyStreamSupplier.java:42)
at org.mule.extension.file.common.api.stream.AbstractNonFinalizableFileInputStream.lambda$createLazyStream$0(AbstractNonFinalizableFileInputStream.java:48)
at $java.io.InputStream$$EnhancerByCGLIB$$55e4687e.read(<generated>)
at org.apache.commons.io.input.ProxyInputStream.read(ProxyInputStream.java:102)
at org.mule.runtime.core.internal.streaming.bytes.AbstractInputStreamBuffer.consumeStream(AbstractInputStreamBuffer.java:111)
at com.mulesoft.mule.runtime.core.internal.streaming.bytes.FileStoreInputStreamBuffer.consumeForwardData(FileStoreInputStreamBuffer.java:239)
at com.mulesoft.mule.runtime.core.internal.streaming.bytes.FileStoreInputStreamBuffer.consumeForwardData(FileStoreInputStreamBuffer.java:202)
at com.mulesoft.mule.runtime.core.internal.streaming.bytes.FileStoreInputStreamBuffer.doGet(FileStoreInputStreamBuffer.java:125)
at org.mule.runtime.core.internal.streaming.bytes.AbstractInputStreamBuffer.get(AbstractInputStreamBuffer.java:93)
at org.mule.runtime.core.internal.streaming.bytes.BufferedCursorStream.assureDataInLocalBuffer(BufferedCursorStream.java:126)
at org.mule.runtime.core.internal.streaming.bytes.BufferedCursorStream.doRead(BufferedCursorStream.java:101)
at org.mule.runtime.core.internal.streaming.bytes.AbstractCursorStream.read(AbstractCursorStream.java:124)
at org.mule.runtime.core.internal.streaming.bytes.BufferedCursorStream.read(BufferedCursorStream.java:26)
at java.io.InputStream.read(InputStream.java:101)
at org.mule.runtime.core.internal.streaming.bytes.ManagedCursorStreamDecorator.read(ManagedCursorStreamDecorator.java:96)
at org.mule.weave.v2.el.SeekableCursorStream.read(MuleTypedValue.scala:306)
at org.mule.weave.v2.module.reader.UTF8StreamSourceReader.handleBOM(SeekableStreamSourceReader.scala:179)
at org.mule.weave.v2.module.reader.UTF8StreamSourceReader.readAscii(SeekableStreamSourceReader.scala:163)
at org.mule.weave.v2.module.json.reader.JsonTokenizer.$init$(JsonTokenizer.scala:21)
at org.mule.weave.v2.module.json.reader.indexed.IndexedJsonTokenizer.<init>(IndexedJsonTokenizer.scala:15)
at org.mule.weave.v2.module.json.reader.indexed.IndexedJsonParser.parser(IndexedJsonParser.scala:17)
at org.mule.weave.v2.module.json.reader.JsonReader.readValue(JsonReader.scala:40)
at org.mule.weave.v2.module.json.reader.JsonReader.doRead(JsonReader.scala:30)
at org.mule.weave.v2.module.reader.Reader.read(Reader.scala:35)
at org.mule.weave.v2.module.reader.Reader.read$(Reader.scala:33)
at org.mule.weave.v2.module.json.reader.JsonReader.read(JsonReader.scala:20)
at org.mule.weave.v2.el.MuleTypedValue.value(MuleTypedValue.scala:147)
at org.mule.weave.v2.model.values.wrappers.DelegateValue.valueType(DelegateValue.scala:17)
at org.mule.weave.v2.model.values.wrappers.DelegateValue.valueType$(DelegateValue.scala:16)
at org.mule.weave.v2.el.MuleTypedValue.valueType(MuleTypedValue.scala:177)
at org.mule.weave.v2.model.types.ObjectType$.accepts(Type.scala:1068)
Caused by: org.mule.extension.sftp.api.SftpConnectionException: Error occurred while trying to connect to host
... 112 more
Caused by: org.mule.runtime.api.connection.ConnectionException:
at org.mule.extension.sftp.api.SftpConnectionException.<init>(SftpConnectionException.java:38)
... 112 more
Caused by: org.mule.runtime.api.connection.ConnectionException:
... 112 more
Caused by: 4:
at com.jcraft.jsch.ChannelSftp.get(ChannelSftp.java:1540)
at com.jcraft.jsch.ChannelSftp.get(ChannelSftp.java:1290)
at org.mule.extension.sftp.internal.connection.SftpClient.getFileContent(SftpClient.java:347)
... 110 more
Caused by: java.io.IOException: Pipe closed
A "Pipe closed" error in SFTP indicates a communication failure that the SFTP connector cannot recover from, so the operation fails. I don't believe there is much you can do about that on the application side. You might test a newer version of the connector if you are using an older one, just in case.
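If you want to try a newer connector, here is a minimal sketch of the change in the Mule application's pom.xml (the version shown is illustrative; check the connector's release notes for the latest):
<!-- Illustrative version bump for the SFTP connector; pick the latest release. -->
<dependency>
    <groupId>org.mule.connectors</groupId>
    <artifactId>mule-sftp-connector</artifactId>
    <version>1.4.1</version>
    <classifier>mule-plugin</classifier>
</dependency>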

Corda throws error trying to generate the basic nodes

I am trying to generate the basic nodes (PartyA, PartyB, and Notary) on Ubuntu 14 by running ./gradlew deployNodes, or even ./gradlew clean deployNodes. The error reads:
... still waiting. If this is taking longer than usual, check the node logs.
Error while generating node info file /cordapp-template-java/build/nodes/Notary/logs
Error while generating node info file /cordapp-template-java/build/nodes/PartyB/logs
Error while generating node info file /cordapp-template-java/build/nodes/PartyA/logs
Task :deployNodes FAILED
FAILURE: Build failed with an exception.
What went wrong:
Execution failed for task ':deployNodes'.
Error while generating node info file. Please check the logs in /cordapp-template-java/build/nodes/Notary/logs.
Error while generating node info file. Please check the logs in /cordapp-template-java/build/nodes/Notary/logs.
The error logs do not give any indication of the cause.
I have personally run into this issue myself. From what I saw, it seemed to be a random incident on the Unix-based machine.
The issue was resolved after I moved the project to a different location. It is absurd, but I have never run into this issue again.
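For what it's worth, the workaround amounted to nothing more than relocating the project and rebuilding; the paths below are illustrative:
# Copy the project to a different location and re-run the node generation.
cp -r /cordapp-template-java ~/cordapp-template-java
cd ~/cordapp-template-java
./gradlew clean deployNodes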

WebSphere 8.5, Why AdminApp returns "exception information: websphere.management.application.client.AppDeploymentException"

I have a very simple Jython script on Unix. It was working perfectly under WebSphere 7, and now, after we upgraded to WAS 8.5, it isn't working anymore. Obviously, I changed the path to point to WAS 8.5. I spent the whole day struggling to find the reason for this failure and I am completely stuck. The exception description doesn't help much.
From a JCL JOB I call the Jython script.
/WebSphere/was85/dtl85cel/ledm85nd/DeploymentManager/profiles/default/bin/wsadmin.sh -lang jython -f /WebSphereDevelopment/scripts/dtl/WAS85/Install.jy
The Jython script is really simple. Basically, I call AdminApp.install("myEAR path", ...) with the options below:
-nopreCompileJSPs -installed.ear.destination /WebSphereDevelopment/MYAPP/dtl/curr/deployment/ -distributeApp -nouseMetaDataFromBinary -nodeployejb -appname DVL-MYAPP -createMBeansForResources -noreloadEnabled -nodeployws -validateinstall warn -processEmbeddedConfig -filepermission .*\.dll=755#.*\.so=755#.*\.a=755#.*\.sl=755 -noallowDispatchRemoteInclude -noallowServiceRemoteInclude -asyncRequestDispatchType DISABLED -nouseAutoLink -contextroot / -MapModulesToServers [[ MyApp MyApp.war,WEB-INF/web.xml WebSphere:cell=dtl85cel,node=wleMyAppa,server=WLEMYAPP ]]
)
The error log is:
WASX7017E: Exception received while running file "/WebSphereDevelopment/scripts/dtl/MYAPP/MYAPP_DTL_DEPLOY.jy"; exception information: com.ibm.websphere.management.application.client.AppDeploymentException: com.ibm.websphere.management.application.client.AppDeploymentException: [Root exception is java.lang.RuntimeException: Deploying /WebSphere/was85/dtl85cel/ledm85nd/DeploymentManager/profiles/d java.lang.RuntimeException: java.lang.RuntimeException: Deploying /WebSphere/was85/dtl85cel/ledm85nd/DeploymentManager/profiles/default/temp/app69105293327198772690.ear failed.
Turn on tracing in wsadmin.properties via:
com.ibm.ws.scripting.traceString
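A minimal sketch, assuming the default profile layout; the trace specification shown is a common catch-all example and can be narrowed to the components you care about:
# File: <profile_root>/properties/wsadmin.properties (path assumed)
# Enables wsadmin client tracing; output goes to the wsadmin trace file.
com.ibm.ws.scripting.traceString=com.ibm.*=all=enabled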

Workers not starting in Storm (backtype.storm.daemon.worker class not found)

I’m a newbie with Storm, and I have set up Storm-on-YARN on an HDP cluster using the instructions at the HDP Storm-on-Yarn page and the storm-yarn-master from anfeng's storm-yarn git project.
I’m able to get Nimbus running and even submit topologies and see them on Storm UI. However, the spouts and the bolts don’t seem to be “working” (0 counts of tuples emitted).
I did some digging around and realized that my worker daemons are not starting. The supervisor log spits out these:
2014-03-13 11:22:03 b.s.d.supervisor [INFO] 18bf93a1-1cea-4e99-93da-8f36a4e9c056 still hasn't started
I tried launching the worker command from the “Launching worker with command” line in the supervisor log, and I got this error:
Exception in thread "main" java.lang.NoClassDefFoundError: backtype/storm/daemon/worker
Caused by: java.lang.ClassNotFoundException: backtype.storm.daemon.worker
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: backtype.storm.daemon.worker. Program will exit.
It looks like it can’t find the worker class although it’s present in the storm-core jar.
Any ideas on how I can proceed with troubleshooting this? I’ve attached the nimbus and the supervisor logs. The worker logs don't seem to have been created.
Nimbus Log - http://paste.ubuntu.com/7089418/
Supervisor Log - http://paste.ubuntu.com/7089422/
Hadoop Version - 2.2
Storm Version - 0.9.0-wip21
I've had an issue like this when the JAR file I was creating did not exclude the Storm binaries; i.e., in the pom.xml file, make sure that you have the storm-core dependency set with:
<scope>provided</scope>
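For context, here is a sketch of what the full dependency entry might look like; the groupId and version are assumptions that depend on your Storm release (older releases were published under the groupId storm, newer ones under org.apache.storm):
<dependency>
    <!-- groupId/version are illustrative; match them to your cluster's Storm version. -->
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-core</artifactId>
    <version>0.9.0</version>
    <!-- provided: the cluster supplies storm-core at runtime, keeping it out of the topology JAR. -->
    <scope>provided</scope>
</dependency>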
I also had issues where multiple versions of Netty were installed in the Storm lib folder (I had to delete the old-version JAR). This was also causing NoClassDefFoundErrors to be thrown (albeit different ones from the one you are experiencing).
I would suggest looking at the classpath that shows up when you submit the topologies (you can do that with ps -Af | grep storm).
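To check both ends, you could dump the running processes' full launch commands and confirm the worker class really is inside the storm-core jar; the jar path below is illustrative:
# Show the full launch commands, including the -cp/classpath argument.
ps -Af | grep [s]torm
# Confirm the worker class/namespace is present in the jar (adjust the path to your install).
jar tf /usr/lib/storm/storm-core-0.9.0-wip21.jar | grep backtype/storm/daemon/worker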

Copying files from S3 to maprfs on Amazon EMR

Does anyone know if there is a problem using Amazon's S3DistCp tool with MapR running on EMR? I'm trying to use it, but keep getting the following exception in /mnt/var/log/hadoop/steps:
Exception in thread "main" java.lang.RuntimeException: Unable to delete directory hdfs:/tmp/e9333a37-f400-4982-9687-326e33d9b37d/files
at com.amazon.external.elasticmapreduce.s3distcp.S3DistCp.deleteRecursive(S3DistCp.java:606)
at com.amazon.external.elasticmapreduce.s3distcp.S3DistCp.run(S3DistCp.java:464)
at com.amazon.external.elasticmapreduce.s3distcp.S3DistCp.run(S3DistCp.java:216)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at com.amazon.external.elasticmapreduce.s3distcp.Main.main(Main.java:12)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:186)
Caused by: java.io.IOException: Incomplete HDFS URI, no host: hdfs:/tmp/e9333a37-f400-4982-9687-326e33d9b37d/files
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:85)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1416)
at org.apache.hadoop.fs.FileSystem.access$100(FileSystem.java:69)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1450)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1432)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:232)
at com.amazon.external.elasticmapreduce.s3distcp.S3DistCp.deleteRecursive(S3DistCp.java:603)
The command line I'm using to submit the job step is:
elastic-mapreduce --jobflow $JOB_ID --jar s3://us-east-1.elasticmapreduce/libs/s3distcp/1.latest/s3distcp.jar \
--args '--src,s3n://PVData/raw, \
--dest,/PVData/raw'
For the --dest argument I have also tried maprfs:///PVData/raw and hdfs:///PVData/raw, and they don't work either.
I got an answer to this question over on the MapR forum (http://bit.ly/S7gzcv). The problem was that I needed to specify the temp directory as maprfs:///tmp using the --tmpDir argument to s3distcp.
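Putting that together, the corrected step would look roughly like this (same source and destination as above, with the temp directory added):
elastic-mapreduce --jobflow $JOB_ID --jar s3://us-east-1.elasticmapreduce/libs/s3distcp/1.latest/s3distcp.jar \
  --args '--src,s3n://PVData/raw,--dest,maprfs:///PVData/raw,--tmpDir,maprfs:///tmp'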
