Direct buffer memory issue in Mule - CloudHub runtime - Mule 4

I'm facing a java.lang.OutOfMemoryError: Direct buffer memory while running the application in CloudHub.
I'm trying to fetch data from an Oracle DB Inbound table that has 25K records, using a 1 vCore worker.
The error occurs after all the flows have completed; at the end it prints error logs like the ones below.
[2020-03-29 11:17:30.640] ERROR std-err [ForkJoinPool.commonPool-worker-0]: Exception in thread "ForkJoinPool.commonPool-worker-0" java.lang.OutOfMemoryError: Direct buffer memory
[2020-03-29 11:17:30.640] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at java.nio.Bits.reserveMemory(Bits.java:694)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at org.mule.weave.v2.io.RandomAccessFileSeekableStream.initByteBuffer(SeekableStream.scala:322)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at org.mule.weave.v2.io.DelegateSeekableStream.$init$(SeekableStream.scala:55)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at org.mule.weave.v2.io.RandomAccessFileSeekableStream.<init>(SeekableStream.scala:316)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at org.mule.weave.v2.io.SeekableStream$.createNotAutoClosedFileStream(SeekableStream.scala:278)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at org.mule.weave.v2.el.FileBasedCursorStream.delegate$lzycompute(FileBasedCursorStreamProvider.scala:142)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at org.mule.weave.v2.el.FileBasedCursorStream.delegate(FileBasedCursorStreamProvider.scala:141)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at org.mule.weave.v2.el.FileBasedCursorStream.release(FileBasedCursorStreamProvider.scala:160)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at org.mule.weave.v2.el.FileBasedCursorStreamProvider.$anonfun$doRelease$1(FileBasedCursorStreamProvider.scala:91)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at org.mule.weave.v2.el.FileBasedCursorStreamProvider.$anonfun$doRelease$1$adapted(FileBasedCursorStreamProvider.scala:91)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at org.mule.weave.v2.el.FileBasedCursorStreamProvider.doRelease(FileBasedCursorStreamProvider.scala:91)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at org.mule.weave.v2.el.FileBasedCursorStreamProvider.releaseResources(FileBasedCursorStreamProvider.scala:85)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at org.mule.runtime.core.internal.streaming.CursorProviderJanitor.releaseResources(CursorProviderJanitor.java:78)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at org.mule.runtime.core.internal.streaming.ManagedCursorProvider.releaseResources(ManagedCursorProvider.java:71)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at org.mule.runtime.core.internal.streaming.CursorManager$EventStreamingState.lambda$dispose$1(CursorManager.java:121)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at java.util.concurrent.ConcurrentHashMap.forEach(ConcurrentHashMap.java:1597)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at com.github.benmanes.caffeine.cache.UnboundedLocalCache.forEach(UnboundedLocalCache.java:184)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at org.mule.runtime.core.internal.streaming.CursorManager$EventStreamingState.dispose(CursorManager.java:117)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at org.mule.runtime.core.internal.streaming.CursorManager$EventStreamingState.access$300(CursorManager.java:85)
[2020-03-29 11:17:30.641] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at org.mule.runtime.core.internal.streaming.CursorManager.lambda$new$0(CursorManager.java:37)
[2020-03-29 11:17:30.642] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at com.github.benmanes.caffeine.cache.UnboundedLocalCache.lambda$notifyRemoval$0(UnboundedLocalCache.java:157)
[2020-03-29 11:17:30.642] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
[2020-03-29 11:17:30.642] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
[2020-03-29 11:17:30.642] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
[2020-03-29 11:17:30.642] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
[2020-03-29 11:17:30.642] ERROR std-err [ForkJoinPool.commonPool-worker-0]: at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)

Without looking at your flow source I can only guess. Based on your stack trace it looks like you have nested foreach components, each running its own DB SQL query: for example, a main foreach that looks up accounts, an inner foreach that looks up the activity on each account, and a further inner one that looks up the transactions on each activity.
That can work, provided you make sure all payloads are streamed. It mostly looks that way, except that I see one array component in the stack. It could be a logger, or something saving intermediate data into a variable; something innocent-looking, but not a stream. That component ends up collecting all of the data in memory, and eventually you run out of memory.
Double-check that all your flows use only streamed payloads. Then you will be able to process data of practically unlimited size.
A word of caution: that is how it works in theory, and it usually works in practice. However, I have an open case with MuleSoft in which they have an open defect for a SQL stream that turned out not to be a stream at all; it collects the data in memory and only releases it later. That defect is still not resolved.
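To make the first point concrete, here is a minimal, untested sketch of what a streamed flow could look like (Database_Config, the INBOUND table and the ID column are placeholders for your own configuration and query). The repeatable-file-store-iterable strategy keeps only a bounded number of records in heap and buffers the rest on disk, so the full result set is never held in memory at once:

<flow name="process-inbound-records">
    <db:select config-ref="Database_Config">
        <!-- keep at most 500 records in memory, spill the rest to disk -->
        <repeatable-file-store-iterable inMemoryObjects="500"/>
        <db:sql>SELECT * FROM INBOUND</db:sql>
    </db:select>
    <foreach>
        <!-- process one record at a time; avoid copying the whole payload into a variable -->
        <logger level="DEBUG" message="#[payload.ID]"/>
    </foreach>
</flow>

The moment you log the whole payload, store it in a variable, or transform it into a non-streamed structure, the runtime has to materialize all 25K records at once and the streaming no longer protects you.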

Try configuring the DB operation's output streaming strategy as a non-repeatable iterable.
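For example (a sketch only; Database_Config and INBOUND are placeholders for your own config and query):

<db:select config-ref="Database_Config">
    <!-- non-repeatable: the result set is read once and is not buffered for replay -->
    <non-repeatable-iterable/>
    <db:sql>SELECT * FROM INBOUND</db:sql>
</db:select>

With a non-repeatable iterable the runtime skips the repeatable buffering layer for the query result, at the cost that the payload can be consumed only once; any component that tries to read it a second time will fail.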

Related

WSO2 API Manager(wso2am-4.0.0) - Recurring JMS Error

I have set up wso2am-4.0.0 and configured a read-only LDAP as the primary user store.
After changing to the read-only LDAP, the following error appears in the logs and keeps recurring. The API calls also seem to be failing because of it.
- Error creating JMS consumer for Siddhi-JMS-Consumer javax.jms.JMSException: Error registering consumer: org.wso2.andes.AMQChannelClosedException: Error: org.wso2.andes.AMQSecurityException: Permission denied: binding notification [error code 403: access refused] [error code 504: channel error]
at org.wso2.andes.client.AMQSession$6.execute(AMQSession.java:2187)
at org.wso2.andes.client.AMQSession$6.execute(AMQSession.java:2130)
at org.wso2.andes.client.AMQConnectionDelegate_8_0.executeRetrySupport(AMQConnectionDelegate_8_0.java:339)
at org.wso2.andes.client.AMQConnection$3.run(AMQConnection.java:665)
at java.security.AccessController.doPrivileged(Native Method)
at org.wso2.andes.client.AMQConnection.executeRetrySupport(AMQConnection.java:662)
at org.wso2.andes.client.failover.FailoverRetrySupport.execute(FailoverRetrySupport.java:102)
at org.wso2.andes.client.AMQSession.createConsumerImpl(AMQSession.java:2195)
at org.wso2.andes.client.AMQSession.createConsumer(AMQSession.java:1100)
at org.wso2.carbon.apimgt.common.jms.utils.JMSUtils.createConsumer(JMSUtils.java:495)
at org.wso2.carbon.apimgt.common.jms.JMSTaskManager$MessageListenerTask.createConsumer(JMSTaskManager.java:1010)
at org.wso2.carbon.apimgt.common.jms.JMSTaskManager$MessageListenerTask.getMessageConsumer(JMSTaskManager.java:865)
at org.wso2.carbon.apimgt.common.jms.JMSTaskManager$MessageListenerTask.receiveMessage(JMSTaskManager.java:612)
at org.wso2.carbon.apimgt.common.jms.JMSTaskManager$MessageListenerTask.run(JMSTaskManager.java:533)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.wso2.andes.AMQChannelClosedException: Error: org.wso2.andes.AMQSecurityException: Permission denied: binding notification [error code 403: access refused] [error code 504: channel error]
at org.wso2.andes.client.handler.ChannelCloseMethodHandler.methodReceived(ChannelCloseMethodHandler.java:100)
at org.wso2.andes.client.handler.ClientMethodDispatcherImpl.dispatchChannelClose(ClientMethodDispatcherImpl.java:163)
at org.wso2.andes.framing.amqp_0_91.ChannelCloseBodyImpl.execute(ChannelCloseBodyImpl.java:140)
at org.wso2.andes.client.state.AMQStateManager.methodReceived(AMQStateManager.java:111)
Any solution for this is highly appreciated.
Thanks in advance.

Prerender keeps stopping frequently. How can I handle this?

Using: https://prerender.io/
I frequently get the error "unexpected server response (404)" and Prerender stops.
Right now I always restart the server to work around the issue.
I am using pm2 to keep the server running.
https://www.npmjs.com/package/pm2
got 504 in 16ms for https://www.testing.com/tags/testing1234/1
getting https://www.testing.com/tags/testing1234/1
Error: unexpected server response (404)
at ClientRequest._req.on (/opt/apps/apps/prerender/node_modules/ws/lib/WebSocket.js:653:21)
at ClientRequest.emit (events.js:182:13)
at HTTPParser.parserOnIncomingClient (_http_client.js:555:21)
at HTTPParser.parserOnHeadersComplete (_http_common.js:109:17)
at Socket.socketOnData (_http_client.js:441:20)
at Socket.emit (events.js:182:13)
at addChunk (_stream_readable.js:283:12)
at readableAddChunk (_stream_readable.js:264:11)
at Socket.Readable.push (_stream_readable.js:219:10)
at TCP.onStreamRead (internal/stream_base_commons.js:94:17)

javax.net.ssl.SSLException: Inbound closed before receiving peer's close_notify: possible truncation attack?

I am using Netty 3.10.6; while communicating with the server I am getting the following error:
Decoding WebSocket Frame opCode=10
2019-04-30T14:31:36,002 UTC DEBUG (New I/O worker #5) [netty(?:?)] Component:DASHBOARD [org.jboss.netty.handler.codec.http.websocketx.WebSocket08FrameDecoder] Decoding WebSocket Frame length=0
2019-04-30T14:31:36,002 UTC DEBUG (New I/O worker #2) [netty(?:?)] Component:DASHBOARD [org.jboss.netty.handler.ssl.SslHandler] SSLEngine.closeInbound() raised an exception after a handshake failure.
javax.net.ssl.SSLException: Inbound closed before receiving peer's close_notify: possible truncation attack?
at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1666)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1634)
at sun.security.ssl.SSLEngineImpl.closeInbound(SSLEngineImpl.java:1561)
at org.jboss.netty.handler.ssl.SslHandler.setHandshakeFailure(SslHandler.java:1451)
at org.jboss.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1315)
at org.jboss.netty.handler.ssl.SslHandler.decode(SslHandler.java:852)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142)
at com.atomiton.sff.imp.netty.SffRawMetering.messageReceived(SffRawMetering.java:149)
at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at com.atomiton.sff.imp.netty.NettyTransport$NettyPipeline.sendUpstream(NettyTransport.java:914)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
2019-04-30T14:31:36,003 UTC DEBUG (New I/O worker #4) [netty(?:?)] Component:DASHBOARD [org.jboss.netty.handler.codec.http.websocketx.WebSocket08FrameDecoder] Decoding WebSocket Frame opCode=10
2019-04-30T14:31:36,004 UTC DEBUG (New I/O worker #4) [netty(?:?)] Component:DASHBOARD [org.jboss.netty.handler.codec.http.websocketx.WebSocket08FrameDecoder] Decoding WebSocket Frame length=0
2019-04-30T14:31:36,004 UTC WARN (New I/O worker #2) [SffTcpServer(log:855)] Component:DASHBOARD IO error in null+7012320048641541604:ssl<NioAcceptedSocketChannel[id: 0xaf52a017, /180.151.199.170:56987 => /172.31.14.2:9000]; Caused by: javax.net.ssl.SSLException: Received fatal alert: certificate_unknown
2019-04-30T14:31:36,044 UTC DEBUG (New I/O worker #1) [netty(?:?)] Component:DASHBOARD [org.jboss.netty.handler.ssl.SslHandler] SSLEngine.closeInbound() raised an exception after a handshake failure.
javax.net.ssl.SSLException: Inbound closed before receiving peer's close_notify: possible truncation attack?
at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1666)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1634)
at sun.security.ssl.SSLEngineImpl.closeInbound(SSLEngineImpl.java:1561)
at org.jboss.netty.handler.ssl.SslHandler.setHandshakeFailure(SslHandler.java:1451)
at org.jboss.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1315)
at org.jboss.netty.handler.ssl.SslHandler.decode(SslHandler.java:852)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142)
at com.atomiton.sff.imp.netty.SffRawMetering.messageReceived(SffRawMetering.java:149)
at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at com.atomiton.sff.imp.netty.NettyTransport$NettyPipeline.sendUpstream(NettyTransport.java:914)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
2019-04-30T14:31:36,044 UTC WARN (New I/O worker #1) [SffTcpServer(log:855)] Component:DASHBOARD IO error in null+7012320048641541607:ssl<NioAcceptedSocketChannel[id: 0x14620731, /180.151.199.170:56986 => /172.31.14.2:9000]; Caused by: javax.net.ssl.SSLException: Received fatal alert: certificate_unknown
Note that the 'error' is actually a debug statement.
If you do not want to simply ignore it, you can try decreasing the client connection timeout so that it is lower than the server connection timeout.

Storm Error: backtype.storm.multilang.NoOutputException: Pipe to subprocess seems to be broken! No output read

I am running a Storm topology on a local cluster. It works fine for about an hour and then stops after throwing this error.
backtype.storm.multilang.NoOutputException: Pipe to subprocess seems to be broken! No output read.
The entire error trace is:
ERROR b.s.t.ShellBolt - Halting process: ShellBolt died. Command: [Rscript, stormtest_with_OR1.R], ProcessInfo pid:32381, name:RBolt exitCode:-1, errorString:
java.lang.RuntimeException: backtype.storm.multilang.NoOutputException: Pipe to subprocess seems to be broken! No output read.
Serializer Exception:
at backtype.storm.utils.ShellProcess.readShellMsg(ShellProcess.java:101) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.task.ShellBolt$BoltReaderRunnable.run(ShellBolt.java:321) [storm-core-0.10.0.jar:0.10.0]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_67]
2016-04-05 11:07:04,665 FATAL Unable to register shutdown hook because JVM is shutting down.
51144 [Thread-14-RBolt] ERROR b.s.util - Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
at backtype.storm.util$exit_process_BANG_.doInvoke(util.clj:336) [storm-core-0.10.0.jar:0.10.0]
at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.6.0.jar:?]
at backtype.storm.daemon.worker$fn__7184$fn__7185.invoke(worker.clj:532) [storm-core-0.10.0.jar:0.10.0]
at backtype.storm.daemon.executor$mk_executor_data$fn__5523$fn__5524.invoke(executor.clj:261) [storm-core-0.10.0.jar:0.10.0]
at backtype.storm.util$async_loop$fn__545.invoke(util.clj:489) [storm-core-0.10.0.jar:0.10.0]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_67]
The bolt is written in R. In my topology code, cluster.shutdown() is commented out, which has been cited as the cause of this error in many related threads.

java.io.IOException: getNextMsg refill failed

I am working with an HTTPS connection in J2ME. When the URL is requested, it throws the exception "Caught IOException: java.io.IOException: getNextMsg refill failed". First of all, I don't understand what getNextMsg means.