How can I make sure the blockage happened in my code - deadlock

Recently I ran into a blocked-thread situation.
The thread dump showed the following:
"TP-Processor338" daemon prio=10 tid=0x00002aaabd753800 nid=0x37ee runnable [0x0000000046681000]
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at java.io.DataInputStream.readFully(DataInputStream.java:178)
at java.io.DataInputStream.readFully(DataInputStream.java:152)
at net.sourceforge.jtds.jdbc.SharedSocket.readPacket(SharedSocket.java:846)
at net.sourceforge.jtds.jdbc.SharedSocket.getNetPacket(SharedSocket.java:727)
- locked <0x00000007486f6530> (a java.util.ArrayList)
at net.sourceforge.jtds.jdbc.ResponseStream.getPacket(ResponseStream.java:466)
at net.sourceforge.jtds.jdbc.ResponseStream.read(ResponseStream.java:103)
at net.sourceforge.jtds.jdbc.ResponseStream.peek(ResponseStream.java:88)
at net.sourceforge.jtds.jdbc.TdsCore.wait(TdsCore.java:3932)
at net.sourceforge.jtds.jdbc.TdsCore.executeSQL(TdsCore.java:1046)
- locked <0x00000007486f8ae8> (a net.sourceforge.jtds.jdbc.TdsCore)
at net.sourceforge.jtds.jdbc.JtdsStatement.executeSQL(JtdsStatement.java:537)
at net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeUpdate(JtdsPreparedStatement.java:504)
- locked <0x00000007486f8870> (a net.sourceforge.jtds.jdbc.ConnectionJDBC3)
at my.jdbcwrapper.PreparedStatementImpl.executeUpdate(PreparedStatementImpl.java:261)
at my.persistence.media.Media_HJMPWrapper$MediaEntityState.storeChanges(Media_HJMPWrapper.java:1736)
at my.persistence.media.Media_HJMPWrapper.ejbStore(Media_HJMPWrapper.java:228)
at my.persistence.framework.RemoteInvocationHandler.performOther(RemoteInvocationHandler.java:196)
- locked <0x000000069c327028> (a my.persistence.framework.PKSyncUtils$PKSyncObject)
at my.persistence.framework.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:105)
at $Proxy101.setProperty(Unknown Source)
Some 200+ threads are blocked on the 0x000000069c327028 lock.
Is it possible that there is a locking problem in PKSyncUtils$PKSyncObject, or is it something else?

This problem is solved. The problem was in the jTDS driver.
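For anyone asking the same question in the future: a quick way to tell a true deadlock apart from threads queueing behind a lock whose owner is stuck in native I/O (as here, where the owner is RUNNABLE in socketRead0) is the JVM's own cycle detector, the same check a jstack dump runs. A minimal sketch using the standard java.lang.management API (the class name is illustrative):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockCheck {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // Returns null when there is no cycle: threads may still be blocked,
        // but on a lock whose owner can eventually make progress.
        long[] ids = mx.findDeadlockedThreads();
        if (ids == null) {
            System.out.println("No deadlock cycle detected");
            return;
        }
        for (ThreadInfo info : mx.getThreadInfo(ids, true, true)) {
            System.out.printf("%s waits for %s held by %s%n",
                    info.getThreadName(), info.getLockName(), info.getLockOwnerName());
        }
    }
}

In this thread dump the detector would report nothing, which points at a stuck lock holder (the driver waiting on the socket) rather than a lock-ordering problem in PKSyncUtils$PKSyncObject.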


XTDB unable to start a node

From their getting-started guide:
(defn start-xtdb! []
  (letfn [(kv-store [dir]
            {:kv-store {:xtdb/module 'xtdb.rocksdb/->kv-store
                        :db-dir (io/file dir)
                        :sync? true}})]
    (xt/start-node
     {:xtdb/tx-log (kv-store "data/dev/tx-log")
      :xtdb/document-store (kv-store "data/dev/doc-store")
      :xtdb/index-store (kv-store "data/dev/index-store")})))

(def xtdb-node (start-xtdb!))

(defn stop-xtdb! []
  (.close xtdb-node))
Upon starting the node, it throws
Execution error (RocksDBException) at org.rocksdb.RocksDB/open (RocksDB.java:-2).
lock hold by current process, acquire time 1649604606 acquiring thread 123145548206080:
/Users/faiz.halde/Workspace/personal/data/proj/data/dev/index-store/LOCK: No locks available
I even tried deleting the data directory.
CLJ - 1.10.3
openjdk version "1.8.0_292"
I don't know the cause of the problem, but I restarted the REPL and it worked.
This usually indicates that you still have an active node running, because a RocksDB instance can only be accessed by one node at a time. However, if you lose the reference to the original node, you can't shut it down directly, and I believe the only option then is to restart the JVM.
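To see why a lost reference is fatal here, a minimal Java sketch (assuming the org.rocksdb:rocksdbjni dependency; the path is illustrative) reproduces the same LOCK failure by opening one directory twice from the same process:

import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class RocksDbLockDemo {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        String dir = "data/dev/index-store"; // illustrative path
        try (Options opts = new Options().setCreateIfMissing(true);
             RocksDB first = RocksDB.open(opts, dir)) {
            // RocksDB takes a file lock (the LOCK file from the error above)
            // per open instance, so a second open of the same directory fails.
            try (RocksDB second = RocksDB.open(opts, dir)) {
                System.out.println("never reached");
            } catch (RocksDBException e) {
                System.out.println("Second open rejected: " + e.getMessage());
            }
        }
    }
}

Until the first handle is closed (or its process exits), nothing else can open that directory, which is why restarting the REPL "fixed" it.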

OutOfMemory Error after updating corda version to 4.1

I had a Corda 3.3 test installation and recently updated it to version 4.1. Since then, when I run my nodes with the deployNodes script and runnodes, I always get the following exception in a node's console as soon as it starts. What can this mean? I don't have a clue what could be causing it.
I tried to build and run the nodes without CorDapps and they work, so somehow my CorDapps cause this error. What other information should I provide to help you figure out this issue?
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3236)
at java.io.ByteArrayOutputStream.toByteArray(ByteArrayOutputStream.java:191)
at kotlin.io.ByteStreamsKt.readBytes(IOStreams.kt:123)
at kotlin.io.ByteStreamsKt.readBytes$default(IOStreams.kt:120)
at net.corda.core.internal.InternalUtils.readFully(InternalUtils.kt:123)
at net.corda.node.internal.cordapp.JarScanningCordappLoader.getJarHash(JarScanningCordappLoader.kt:228)
at net.corda.node.internal.cordapp.JarScanningCordappLoader.toCordapp(JarScanningCordappLoader.kt:153)
at net.corda.node.internal.cordapp.JarScanningCordappLoader.loadCordapps(JarScanningCordappLoader.kt:106)
at net.corda.node.internal.cordapp.JarScanningCordappLoader.access$loadCordapps(JarScanningCordappLoader.kt:44)
at net.corda.node.internal.cordapp.JarScanningCordappLoader$cordapps$2.invoke(JarScanningCordappLoader.kt:56)
at net.corda.node.internal.cordapp.JarScanningCordappLoader$cordapps$2.invoke(JarScanningCordappLoader.kt:44)
at kotlin.SynchronizedLazyImpl.getValue(LazyJVM.kt:74)
at net.corda.node.internal.cordapp.JarScanningCordappLoader.getCordapps(JarScanningCordappLoader.kt)
at net.corda.node.internal.cordapp.CordappLoaderTemplate$cordappSchemas$2.invoke(JarScanningCordappLoader.kt:422)
at net.corda.node.internal.cordapp.CordappLoaderTemplate$cordappSchemas$2.invoke(JarScanningCordappLoader.kt:389)
at kotlin.SynchronizedLazyImpl.getValue(LazyJVM.kt:74)
at net.corda.node.internal.cordapp.CordappLoaderTemplate.getCordappSchemas(JarScanningCordappLoader.kt)
at net.corda.node.internal.AbstractNode.<init>(AbstractNode.kt:153)
at net.corda.node.internal.AbstractNode.<init>(AbstractNode.kt:126)
at net.corda.node.internal.Node.<init>(Node.kt:98)
at net.corda.node.internal.Node.<init>(Node.kt:97)
at net.corda.node.internal.NodeStartup.createNode(NodeStartup.kt:194)
at net.corda.node.internal.NodeStartup$initialiseAndRun$5.invoke(NodeStartup.kt:186)
at net.corda.node.internal.NodeStartup$initialiseAndRun$5.invoke(NodeStartup.kt:137)
at net.corda.node.internal.NodeStartupLogging$DefaultImpls.attempt(NodeStartup.kt:509)
at net.corda.node.internal.NodeStartup.attempt(NodeStartup.kt:137)
at net.corda.node.internal.NodeStartup.initialiseAndRun(NodeStartup.kt:185)
at net.corda.node.internal.NodeStartupCli.runProgram(NodeStartup.kt:128)
at net.corda.cliutils.CordaCliWrapper.call(CordaCliWrapper.kt:190)
at net.corda.node.internal.NodeStartupCli.call(NodeStartup.kt:83)
at net.corda.node.internal.NodeStartupCli.call(NodeStartup.kt:64)
at picocli.CommandLine.execute(CommandLine.java:1056)
Corda's memory usage has been slowly creeping upwards. It is possible that your machine does not have enough memory to run 3-4+ nodes at the same time after upgrading to 4.1.
I recommend running a single node with the CorDapps installed and seeing what happens. If it is still happening then, something else could be going wrong.
Looking at the stack trace, it is also possible that your CorDapp itself is really, really, really big and the node runs out of memory reading and loading it.
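For context on why a big CorDapp hurts: the stack trace shows the loader reading each jar fully into memory (readBytes into a ByteArrayOutputStream, then toByteArray, hence the Arrays.copyOf frame) before hashing it, so a jar transiently needs roughly its own size or more in free heap. Purely as an illustration of the difference (this is plain JDK code, not Corda's), a streaming hash keeps memory use constant regardless of jar size:

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.DigestInputStream;
import java.security.MessageDigest;

public class StreamingJarHash {
    static byte[] sha256(Path jar) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        try (InputStream in = new DigestInputStream(Files.newInputStream(jar), md)) {
            byte[] buf = new byte[8192];
            while (in.read(buf) != -1) {
                // the digest is updated as bytes stream through;
                // nothing beyond the 8 KB buffer is retained
            }
        }
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        System.out.printf("%d-byte digest%n", sha256(Paths.get(args[0])).length);
    }
}

So check the size of your CorDapp jars first; if one is huge (for example, fat-jarred dependencies), slimming it may fix the startup OOM without touching heap settings.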

**java.lang.OutOfMemoryError: Java heap space with Spring 4, Hibernate 4, MySQL Connector 8.1 and Tomcat 8.5** when using **MySQL 5.7.23**

Here are the exception details:
java.lang.OutOfMemoryError: Java heap space
at com.mysql.jdbc.Buffer.<init>(Buffer.java:57)
at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:2087)
at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:3549)
at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:489)
at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:3240)
at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:2411)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2834)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2838)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2082)
at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:2212)
at sun.reflect.GeneratedMethodAccessor99.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.hibernate.engine.jdbc.internal.proxy.AbstractStatementProxyHandler.continueInvocation(AbstractStatementProxyHandler.java:122)
at org.hibernate.engine.jdbc.internal.proxy.AbstractProxyHandler.invoke(AbstractProxyHandler.java:81)
at com.sun.proxy.$Proxy168.executeQuery(Unknown Source)
at org.hibernate.loader.Loader.getResultSet(Loader.java:1978)
at org.hibernate.loader.Loader.doQuery(Loader.java:829)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:289)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:259)
at org.hibernate.loader.Loader.loadCollectionSubselect(Loader.java:2242)
at org.hibernate.loader.collection.SubselectOneToManyLoader.initialize(SubselectOneToManyLoader.java:77)
at org.hibernate.persister.collection.AbstractCollectionPersister.initialize(AbstractCollectionPersister.java:622)
at org.hibernate.event.internal.DefaultInitializeCollectionEventListener.onInitializeCollection(DefaultInitializeCollectionEventListener.java:82)
at org.hibernate.internal.SessionImpl.initializeCollection(SessionImpl.java:1606)
at org.hibernate.collection.internal.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:379)
at org.hibernate.collection.internal.AbstractPersistentCollection.read(AbstractPersistentCollection.java:112)
at org.hibernate.collection.internal.AbstractPersistentCollection.readSize(AbstractPersistentCollection.java:137)
at org.hibernate.collection.internal.PersistentBag.isEmpty(PersistentBag.java:249)
at com.vms.business.SupplierSearchService.fetchTopFiveAssociatedSuppliersList(SupplierSearchService.java:369)
at com.vms.business.SupplierSearchService$$FastClassBySpringCGLIB$$c4e470d2.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
Here is the Hibernate query where I get the exception:
**Note:** The same query works fine when I use MySQL 5.1 with the same data.
One more thing: when I use the MySQL 5.7.23 RDS connection, the whole application goes slow. The new RDS instance has four times the capacity of the 5.1 MySQL RDS service, but it still gives the error.
Here is the query:
Session session = em.unwrap(Session.class);
Criteria criteria = session.createCriteria(Orgassociation.class, "orgassociation");
criteria.createAlias("orgassociation.organization", "associatedOrgs");

Criteria userCriteria = session.createCriteria(User.class, "user");
userCriteria.add(Restrictions.eq("user.userId", userId));
User user = (User) userCriteria.uniqueResult();

criteria.add(Restrictions.eq("orgassociation.organization1.organizationId",
        user.getOrganization().getOrganizationId()));
criteria.add(Restrictions.eq("associatedOrgs.orgType", VMS_CONSTANTS.DB_CONSTANTS.EMPLOYER_AND_SUPPLIER));
criteria.addOrder(Order.desc("orgassociation.orgAssociationId"));
criteria.setMaxResults(5);

@SuppressWarnings("unchecked")
List<Orgassociation> orgList = criteria.list();
You would need to attach a Java profiler (VisualVM or YourKit) and analyse your heap to know which objects are actually causing this. It is very likely that some client SDK contract is broken, resulting in a lot of Java objects lying around that cannot be garbage collected.
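One concrete thing the trace suggests: readAllResults in MysqlIO means Connector/J is buffering an entire result set client-side, and the SubselectOneToManyLoader frame means Hibernate is initializing a collection with a subselect that can return far more rows than the five you keep with setMaxResults(5). If a large read is genuinely unavoidable, the driver can stream rows instead of buffering them; here is a plain-JDBC sketch (the connection and SQL are placeholders, and this is the documented Connector/J idiom rather than a Hibernate setting):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class StreamingQuery {
    static void streamRows(Connection connection) throws SQLException {
        try (PreparedStatement ps = connection.prepareStatement(
                "SELECT org_association_id FROM orgassociation", // placeholder SQL
                ResultSet.TYPE_FORWARD_ONLY,
                ResultSet.CONCUR_READ_ONLY)) {
            // Integer.MIN_VALUE is Connector/J's signal to stream rows one at a
            // time instead of materializing the whole result set in memory.
            ps.setFetchSize(Integer.MIN_VALUE);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // process one row at a time
                }
            }
        }
    }
}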

How to restart the SHA256MigrationJob without restarting Artifactory?

I keep encountering the error message below in sha256_migration.log. The job doesn't restart after the failure; however, if I restart the Artifactory service, it resumes the SHA256 migration from where it left off, until it fails again.
2018-11-13 10:24:35,060 [art-exec-3] [ERROR] (o.a.s.j.m.s.Sha256MigrationJob:78) - Caught unexpected exception during SHA256 Migration job, operation will break.
org.springframework.core.task.TaskRejectedException: Executor [org.artifactory.schedule.ArtifactoryConcurrentExecutor@70a2137a] did not accept task: org.artifactory.schedule.aop.AsyncAdvice$$Lambda$654/1640835804@7dbf55d3
at org.springframework.core.task.support.TaskExecutorAdapter.submit(TaskExecutorAdapter.java:93)
at org.springframework.scheduling.concurrent.ConcurrentTaskExecutor.submit(ConcurrentTaskExecutor.java:143)
at org.artifactory.schedule.aop.AsyncAdvice.submitWorkQueueTask(AsyncAdvice.java:235)
at org.artifactory.schedule.aop.AsyncAdvice.submit(AsyncAdvice.java:217)
at org.artifactory.schedule.aop.AsyncAdvice.executeInvocation(AsyncAdvice.java:146)
at org.artifactory.schedule.aop.AsyncAdvice.invoke(AsyncAdvice.java:124)
at org.artifactory.schedule.aop.AsyncAdvice.invoke(AsyncAdvice.java:62)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207)
at com.sun.proxy.$Proxy144.updateSha2(Unknown Source)
at org.artifactory.storage.jobs.migration.sha256.Sha256MigrationJob.migrationLogic(Sha256MigrationJob.java:134)
at org.artifactory.storage.jobs.migration.MigrationJobBase.migrationLoop(MigrationJobBase.java:106)
at org.artifactory.storage.jobs.migration.MigrationJobBase.runMigration(MigrationJobBase.java:83)
at org.artifactory.storage.jobs.migration.MigrationJobBase.onExecute(MigrationJobBase.java:73)
at org.artifactory.schedule.quartz.QuartzCommand.execute(QuartzCommand.java:48)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.artifactory.concurrent.ArtifactoryRunnable.run(ArtifactoryRunnable.java:30)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.RejectedExecutionException: Task org.artifactory.concurrent.ArtifactoryRunnable@4afb003 rejected from java.util.concurrent.ThreadPoolExecutor@33daf5aa[Running, pool size = 64, active threads = 64, queued tasks = 10000, completed tasks = 120723]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
at org.artifactory.schedule.ArtifactoryConcurrentExecutor.execute(ArtifactoryConcurrentExecutor.java:69)
at org.springframework.core.task.support.TaskExecutorAdapter.submit(TaskExecutorAdapter.java:88)
... 19 common frames omitted
My artifactory.system.properties settings for the SHA256 migration:
##SHA2 Migration block
artifactory.sha2.migration.job.enabled=true
artifactory.sha2.migration.job.queue.workers=100
My setup:
Cloned the production Artifactory infrastructure into a test instance (save for IP address and DNS records).
Ensured that the DB and filestore configurations (db.properties and binarystore.xml) were updated accordingly on the cloned instance.
Things I've tried without luck:
Ran the Artifactory GC a couple of times.
Increased the CPU count to 16.
Increased the RAM to 16 GB.
Ensured that I am running the latest Oracle Java 8u192.
What I know:
It keeps running fine for a while until it crashes.
When I restart the Artifactory service, the migration resumes and the total number of artifacts left to migrate is lower.
I cannot keep restarting Artifactory in production to finish the SHA256MigrationJob; I have over 500k artifacts.
My questions:
Is there any way to restart the SHA256MigrationJob without restarting Artifactory?
Is there a way to find the artifact that it has trouble migrating to SHA256?
In the stack trace above, I suspect the issue is at com.sun.proxy.$Proxy144.updateSha2(Unknown Source).
-- Workaround --
I ended up creating a new VM and installing a clean copy of Artifactory 6.5.3 (with the latest Oracle Java 8 Server-JRE). In the issue above, I was doing an in-place upgrade, just in a new folder.
I moved the necessary files in etc to the new VM, such as master.key, binarystore.xml, and db.properties. I then executed bin/installService.sh [user] [group], which creates the /etc/opt/jfrog/artifactory configuration symlink/folder. My filestore and Artifactory database run on different VMs, so only Artifactory and its file configuration needed to be ported.
The new Artifactory 6.5.3 version started up without issues.
The SHA256MigrationJob is now running without any problems; on the last upgrade run, the job finished without dying.
Note: I also sanely adjusted the configuration value for the queue workers. https://www.jfrog.com/confluence/display/RTF/Checksum-Based+Storage#Checksum-BasedStorage-ConfiguringtheMigrationProcess
tl;dr: decrease artifactory.sha2.migration.job.queue.workers to somewhere between 2 and 2 * [number of cores].
What you are experiencing is exhaustion of the ThreadPoolExecutor.
The size of the thread pool is, by default, 4 * [number of cores]; in your case, 64 (16 cores * 4).
However, the number of threads you have configured for the SHA256 migration job is 100.
On an Artifactory instance without any load this would not cause failures, because a queue backs up the thread pool (in your case its size is 10000).
In your case, both the thread pool and the queue are full.
If I understand correctly, you have overcome the issue. But for other people bumping into this thread, I would recommend decreasing the value of artifactory.sha2.migration.job.queue.workers to no more than half the number of threads in the thread pool.
So in your case 32: (16 * 4) / 2.
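To make the mechanism concrete, here is a minimal, self-contained demonstration of the same failure mode with toy sizes (not Artifactory's real configuration): once the pool's threads and its backing queue are both full, the default AbortPolicy throws the RejectedExecutionException seen in the log.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolExhaustionDemo {
    public static void main(String[] args) {
        // 2 worker threads, queue capacity 2, default AbortPolicy.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>(2));
        Runnable slow = () -> {
            try { Thread.sleep(60_000); } catch (InterruptedException ignored) { }
        };
        for (int i = 0; i < 5; i++) {
            try {
                pool.execute(slow); // tasks 0-1 run, 2-3 queue, task 4 is rejected
            } catch (RejectedExecutionException e) {
                System.out.println("task " + i + " rejected: " + e);
            }
        }
        pool.shutdownNow();
    }
}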

SBT runs out of memory

I am using sbt 0.12.3 to test some code, and I often get this error message while testing interactively with the ~test command.
8. Waiting for source changes... (press enter to interrupt)
[info] Compiling 1 Scala source to C:\Users\t\scala-projects\scala test\target\scala-2.10\classes...
sbt appears to be exiting abnormally.
The log file for this session is at C:\Users\t\AppData\Local\Temp\sbt5663259053150896045.log
java.lang.OutOfMemoryError: PermGen space
at java.util.concurrent.FutureTask$Sync.innerGet(Unknown Source)
at java.util.concurrent.FutureTask.get(Unknown Source)
at sbt.ConcurrentRestrictions$$anon$4.take(ConcurrentRestrictions.scala:196)
at sbt.Execute.next$1(Execute.scala:85)
at sbt.Execute.processAll(Execute.scala:88)
at sbt.Execute.runKeep(Execute.scala:68)
at sbt.EvaluateTask$.run$1(EvaluateTask.scala:162)
at sbt.EvaluateTask$.runTask(EvaluateTask.scala:177)
at sbt.Aggregation$$anonfun$4.apply(Aggregation.scala:46)
at sbt.Aggregation$$anonfun$4.apply(Aggregation.scala:44)
at sbt.EvaluateTask$.withStreams(EvaluateTask.scala:137)
at sbt.Aggregation$.runTasksWithResult(Aggregation.scala:44)
at sbt.Aggregation$.runTasks(Aggregation.scala:59)
at sbt.Aggregation$$anonfun$applyTasks$1.apply(Aggregation.scala:31)
at sbt.Aggregation$$anonfun$applyTasks$1.apply(Aggregation.scala:30)
at sbt.Command$$anonfun$applyEffect$2$$anonfun$apply$3.apply(Command.scala:62)
at sbt.Command$$anonfun$applyEffect$2$$anonfun$apply$3.apply(Command.scala:62)
at sbt.Command$.process(Command.scala:90)
at sbt.MainLoop$$anonfun$next$1$$anonfun$apply$1.apply(MainLoop.scala:71)
at sbt.MainLoop$$anonfun$next$1$$anonfun$apply$1.apply(MainLoop.scala:71)
at sbt.State$$anon$2.process(State.scala:170)
at sbt.MainLoop$$anonfun$next$1.apply(MainLoop.scala:71)
at sbt.MainLoop$$anonfun$next$1.apply(MainLoop.scala:71)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:18)
at sbt.MainLoop$.next(MainLoop.scala:71)
at sbt.MainLoop$.run(MainLoop.scala:64)
at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:53)
at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:50)
at sbt.Using.apply(Using.scala:25)
at sbt.MainLoop$.runWithNewLog(MainLoop.scala:50)
at sbt.MainLoop$.runAndClearLast(MainLoop.scala:33)
at sbt.MainLoop$.runLoggedLoop(MainLoop.scala:17)
Error during sbt execution: java.lang.OutOfMemoryError: PermGen space
The error is clear: I could increase the heap size and it might stop throwing that error, but the thing is that sbt shuts down after some number (I don't know how many) of test iterations with minimal changes to the code. Will a simple heap increase solve the problem, or do I have to do additional work to not run out of memory?
Thanks in advance.
If you haven't already, try giving sbt more PermGen space in your sbt.bat. I don't run sbt on Windows, but I pass java -Xmx1512M -XX:MaxPermSize=512M.
Another thing to try may be to fork during testing: https://www.scala-sbt.org/1.x/docs/Testing.html#Forking+tests
Test / fork := true
specifies that all tests will be executed in a single external JVM.
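Combining both suggestions in the sbt 0.12.x syntax the question is using (Test / fork above is the sbt 1.x spelling), a build.sbt sketch might look like this; note that javaOptions only takes effect for forked JVMs:

fork in Test := true

javaOptions in Test += "-XX:MaxPermSize=512M"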
