I am trying to load around 600MB of data into the GridGain cache, using swap space to reduce the load on my RAM. I load the data from CSV files: the first 10,000 keys go into memory and the rest into swap space. I was able to load 1,350,000 keys, but after that I get the error below:
[16:58:34,701][SEVERE][exchange-worker-#54%null%][GridWorker] Runtime error caught during grid runnable execution: GridWorker [name=partition-exchanger, gridName=null, finished=false, isCancelled=false, hashCode=20495511, interrupted=false, runner=exchange-worker-#54%null%]
java.lang.OutOfMemoryError: Java heap space
at java.util.HashMap.resize(HashMap.java:559)
at java.util.HashMap.addEntry(HashMap.java:851)
at java.util.HashMap.put(HashMap.java:484)
.
.
.
at java.lang.Thread.run(Thread.java:722)
GridGain node stopped OK [uptime=00:21:14:384]
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
You are clearly running out of heap space. For swap space to be utilized, you need to configure an eviction policy. Please refer to the OffHeap Memory documentation for more information on how to configure swap and off-heap spaces.
Also, there is some more explanation for memory utilization in this post: Can I reduce the space of my cache memory?
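For illustration, here is a minimal sketch of such a configuration. It assumes the GridGain 6.x cache API; the exact package, class, and method names (GridCacheConfiguration, GridCacheLruEvictionPolicy, setSwapEnabled) vary by version, so check them against your release:

import org.gridgain.grid.cache.GridCacheConfiguration;
import org.gridgain.grid.cache.eviction.lru.GridCacheLruEvictionPolicy;

public class CacheSetup {
    public static GridCacheConfiguration csvCacheConfig() {
        GridCacheConfiguration cacheCfg = new GridCacheConfiguration();
        cacheCfg.setName("csvCache"); // hypothetical cache name
        // Keep about 10,000 entries on-heap; least-recently-used entries are evicted.
        cacheCfg.setEvictionPolicy(new GridCacheLruEvictionPolicy(10000));
        // Evicted entries can then overflow to the configured swap space.
        cacheCfg.setSwapEnabled(true);
        return cacheCfg;
    }
}

The key point is that without an eviction policy nothing is ever pushed out of the heap, so the swap space stays empty no matter how much you load.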
We are using OpenJDK 8 with Spring Boot 2.2.4.RELEASE and embedded Tomcat 9.0.16 in our web application, and for the last 3 months we have frequently been facing a compressed class space issue. Every time the OOM occurs we have to restart the application.
The log found in catalina.log is below:
ERROR org.jgroups.logging.Log4J2LogImpl.error:Line 95 - failed executing task FD_ALL: TimeoutChecker (interval=2000 ms)
java.lang.OutOfMemoryError: Compressed class space
788][2020-07-21 10:29:12,475]- org.apache.juli.logging.DirectJDKLog.log:Line 175 - Failed to complete processing of a request
java.lang.OutOfMemoryError: Compressed class space
We have also allocated 2 GB for the compressed class space using -XX:CompressedClassSpaceSize=2g, as per an answer at OutOfMemoryError: Compressed class space.
You can fix this issue by using one of the two options below (a command-line example follows the list).
1. You can increase the class space limit with the -XX:CompressedClassSpaceSize=n VM flag. The maximum limit is 4 GB.
(OR)
2. You can completely disable the compressed class pointer feature with the -XX:-UseCompressedClassPointers VM flag.
Note: Please be aware that disabling this feature will increase the heap space usage of your application.
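For concreteness, the two options as JVM launch flags (app.jar is just a placeholder for your Spring Boot jar):

java -XX:CompressedClassSpaceSize=3g -jar app.jar
java -XX:-UseCompressedClassPointers -jar app.jar

Since you already run with 2 GB and still hit the error, raising the limit may only delay the OOM; it is worth checking whether something in the application keeps loading new classes before raising the flag further.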
To learn more about this OutOfMemoryError and the class pointer (klass field), you can read the book below. It will give you a better idea of what you can really do with compressed class pointers and the class space.
Java Performance Optimization: Compressed OOPS
I'm creating a RuleApp and deploying it to the Rule Execution Server. While executing the rules, it starts throwing an OutOfMemoryError:
000000bd execution E The interaction ruleEngine.execute has failed.
com.ibm.rules.res.xu.internal.LocalizedResourceException: GBRXU0001E: The interaction ruleEngine.execute has failed.
at com.ibm.rules.res.xu.client.internal.jca.XUInteraction.execute(XUInteraction.java:302)
at com.ibm.rules.res.xu.client.internal.XUSession.executeOperation(XUSession.java:171)
at com.ibm.rules.res.xu.client.internal.XURuleEngineSession.execute(XURuleEngineSession.java:603)
at ilog.rules.res.session.impl.IlrStatefulSessionBase.execute(IlrStatefulSessionBase.java:725)
at ilog.rules.res.session.impl.IlrStatefulSessionBase.execute(IlrStatefulSessionBase.java:714)
at ilog.rules.res.session.impl.IlrStatefulSessionBase.execute(IlrStatefulSessionBase.java:625)
at ilog.rules.res.session.impl.IlrStatefulSessionBase.execute(IlrStatefulSessionBase.java:269)
at ilog.rules.res.session.impl.IlrStatefulSessionBase.execute(IlrStatefulSessionBase.java:241)
at ilog.rules.res.session.impl.IlrStatelessSessionBase.execute(IlrStatelessSessionBase.java:63)
at com.bnsf.rules.services.framework.RuleExecutioner.invokeRuleService(RuTioner.java:50)
at com.bnsf.rules.services.framework.RuleExecutioner.invokeSimpleRuleService(RuTioner.java:24)
at com.bnsf.rules.services.MiscBillingRuleService.execBatch(Miservice.java:222)
at com.bnsf.rules.services.MiscBillingRuleService.performTask(MisService.java:158)
at com.bnsf.rules.services.MiscBillingRuleService.execute(MisService.java:88)
at com.bnsf.rules.services.MiscBillingRuleServiceThread.run(MisThread.java:60)
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.lang.StringBuffer.ensureCapacityImpl(StringBuffer.java:338)
at java.lang.StringBuffer.append(StringBuffer.java:204)
at java.io.StringWriter.write(StringWriter.java:113)
at java.io.StringWriter.append(StringWriter.java:155)
at com.ibm.rules.res.xu.engine.de.internal.DEManager.note(DEManager.java:554)
at com.ibm.rules.engine.runtime.impl.EngineObserverManager.note(EngineObserverManager.java:84)
at com.ibm.rules.engine.rete.runtime.AbstractReteEngine.note(AbstractReteEngine.java:686)
at com.ibm.rules.generated.EngineDataClass.ilog_rules_brl_System_printMessage_java_lang_String(Unknown Source)
at com.ibm.rules.generated.ruleflow.Service$0020Definition.IntermediateDefnFlow$003eIntermediate$0020Event$0020Definition.BodyExecEnv.executeIntermediate$0020Events$0020For$0020Intra$002dplant$0020Switch$002dEndEventBody3(Unknown Source)
at com.ibm.rules.generated.ruleflow.Service$0020Definition.IntermediateDefnFlow$003eIntermediate$0020Event$0020Definition.BodyExecEnv.executeB
I'm using a print statement in each of the rules, so does the error mean the print statements are filling up the heap memory of my application? Also, the stack trace points to a particular package in the ruleset. Will removing the print statements from that package alone resolve this issue?
It could be that the Java heap is too small to run your app, but the typical cause of this error is an infinite loop in the rules. You (or an admin) can verify that the WebSphere config options specify a reasonable heap size.
Another possibility is that some other app is using all the heap space; my current organization has to restart their dev server every week to reclaim heap from a memory leak they have not yet found. In this case, the rules execute just fine, but when viewing a (large) decision trace in the Decision Warehouse in RES I will sometimes get an out-of-heap-space error.
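If you want to confirm what heap the server JVM actually ends up with (regardless of what the config files claim), a small sketch like this can be dropped into any main method or servlet:

import java.lang.management.ManagementFactory;

public class HeapCheck {
    public static void main(String[] args) {
        // Maximum heap the JVM will grow to (-Xmx), in megabytes.
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        // Heap currently committed by the JVM.
        long committedMb = ManagementFactory.getMemoryMXBean()
                .getHeapMemoryUsage().getCommitted() / (1024 * 1024);
        System.out.println("Max heap: " + maxMb + " MB, committed: " + committedMb + " MB");
    }
}

If the reported max heap is far below what the rules and decision traces need, raising -Xmx in the WebSphere JVM settings is the first thing to try.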
Deploying a newer version of an artifact to Nexus has failed with the following error:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.8.2:deploy (default-deploy) on project finances-web:
Failed to retrieve remote metadata com.finances.web:2.1.7-SNAPSHOT/maven-metadata.xml:
Could not transfer metadata com.finances.web:2.1.7-SNAPSHOT/maven-metadata.xml from/to isban (http://nexus3:8081/nexus/repository/maven-snapshots/):
Failed to transfer file: http://nexus3:8081/nexus/repository/maven-snapshots/com/finances/web/2.1.7-SNAPSHOT/maven-metadata.xml. Return code is: 500 ,
ReasonPhrase:javax.servlet.ServletException: com.orientechnologies.orient.core.exception.OLowDiskSpaceException:
Error occurred while executing a write operation to database 'component' due to limited free space on the disk (241 MB). The database is now working in read-only mode. Please close the database (or stop OrientDB), make room on your hard drive and then reopen the database. The minimal required space is 256 MB. Required space is now set to 256MB (you can change it by setting parameter storage.diskCache.diskFreeSpaceLimit). DB name="component". -> [Help 1]
I've tried to delete some files to get some free space; however, I got this error (in the Nexus web application):
Error occurred while executing a write operation to database 'component' due to limited free space on the disk (1 MB). The database is now working in read-only mode. Please close the database (or stop OrientDB), make room on your hard drive and then reopen the database. The minimal required space is 256 MB. Required space is now set to 256MB (you can change it by setting parameter storage.diskCache.diskFreeSpaceLimit) . DB name="component"
I cannot increase the space because it is an external volume attached to a Docker image.
What can I do?
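One possible workaround, based purely on the parameter named in the error message: lower OrientDB's free-space threshold so the database leaves read-only mode with less headroom. OrientDB settings can typically be passed as JVM system properties; the exact file location and value below are assumptions for a Nexus 3 install:

# In <nexus-dir>/bin/nexus.vmoptions (value is in MB; 64 is an example):
-Dstorage.diskCache.diskFreeSpaceLimit=64

Note the error message itself says the database must be closed and reopened after making room, so a Nexus restart is needed either way. This only lowers the safety margin; actually freeing disk space (for example by cleaning up old snapshots through Nexus tasks rather than deleting blob files by hand) remains the real fix.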
I am using jackcess-2.1.1. I have set the memory=false parameter, but I still face the OutOfMemoryError. It happens while processing an MDB file of 1.8 GB.
The JVM memory arguments are set to a 1 GB max size. If I change the max size to 2 GB, it works with no issues.
But per the instructions on the UCanAccess portal, when memory=false is set, the in-memory mode is not supposed to be used and the JVM memory arguments should not change anything.
Any response is greatly appreciated. Find the error below.
java.lang.OutOfMemoryError: Java heap space
at com.healthmarketscience.jackcess.impl.LongValueColumnImpl.readLongValue(LongValueColumnImpl.java:136)
at com.healthmarketscience.jackcess.impl.LongValueColumnImpl.read(LongValueColumnImpl.java:90)
at com.healthmarketscience.jackcess.impl.ColumnImpl.read(ColumnImpl.java:586)
at com.healthmarketscience.jackcess.impl.TableImpl.getRowColumn(TableImpl.java:767)
at com.healthmarketscience.jackcess.impl.TableImpl.getRow(TableImpl.java:673)
at com.healthmarketscience.jackcess.impl.TableImpl.getRow(TableImpl.java:652)
at com.healthmarketscience.jackcess.impl.CursorImpl.getCurrentRow(CursorImpl.java:699)
at com.healthmarketscience.jackcess.impl.CursorImpl$BaseIterator.next(CursorImpl.java:822)
at com.healthmarketscience.jackcess.impl.CursorImpl$BaseIterator.next(CursorImpl.java:1)
at net.ucanaccess.converters.LoadJet$TablesLoader.loadTableData(LoadJet.java:829)
at net.ucanaccess.converters.LoadJet$TablesLoader.loadTablesData(LoadJet.java:997)
at net.ucanaccess.converters.LoadJet$TablesLoader.loadTables(LoadJet.java:1041)
at net.ucanaccess.converters.LoadJet$TablesLoader.access$2900(LoadJet.java:273)
at net.ucanaccess.converters.LoadJet.loadDB(LoadJet.java:1479)
at net.ucanaccess.jdbc.UcanaccessDriver.connect(UcanaccessDriver.java:243)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:187)
at org.springframework.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriverManager(DriverManagerDataSource.java:153)
at org.springframework.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriver(DriverManagerDataSource.java:144)
at org.springframework.jdbc.datasource.AbstractDriverBasedDataSource.getConnectionFromDriver(AbstractDriverBasedDataSource.java:155)
at org.springframework.jdbc.datasource.AbstractDriverBasedDataSource.getConnection(AbstractDriverBasedDataSource.java:120)
at org.hibernate.service.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:141)
at org.hibernate.engine.jdbc.internal.JdbcServicesImpl$ConnectionProviderJdbcConnectionAccess.obtainConnection(JdbcServicesImpl.java:242)
at org.hibernate.engine.jdbc.internal.JdbcServicesImpl.configure(JdbcServicesImpl.java:117)
at org.hibernate.service.internal.StandardServiceRegistryImpl.configureService(StandardServiceRegistryImpl.java:76)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:160)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:132)
at org.hibernate.cfg.Configuration.buildTypeRegistrations(Configuration.java:1825)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1783)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1868)
at org.springframework.orm.hibernate4.LocalSessionFactoryBuilder.buildSessionFactory(LocalSessionFactoryBuilder.java:372)
If I change the max size to 2 GB, it works with no issues. But per the instructions on the UCanAccess portal, when memory=false is set, the in-memory mode is not supposed to be used and the JVM memory arguments should not change anything.
That's not quite true. memory=false tells UCanAccess not to hold the HSQLDB backing database tables in memory, but a disk-based HSQLDB database still consumes some memory, and there are lots of other things that UCanAccess (and Jackcess) must keep in memory too. The memory requirements with memory=false will just be considerably lower than with memory=true.
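For reference, a minimal sketch of such a connection; the file path is hypothetical, and the heap size on the java command line still matters even with memory=false:

import java.sql.Connection;
import java.sql.DriverManager;

public class UcaOpen {
    public static void main(String[] args) throws Exception {
        // memory=false keeps the HSQLDB mirror on disk instead of on the heap.
        String url = "jdbc:ucanaccess://C:/data/big.mdb;memory=false";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("Opened: " + conn.getMetaData().getURL());
        }
    }
}

Run it with something like java -Xmx2g UcaOpen: the first connection loads table metadata and mirrors the data, which is where the heap demand shows up for a 1.8 GB file.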
When running the Microsoft Application Verifier, I would get error 0202 on shutdown:
VERIFIER STOP 00000202:
pid 0x1160: Freeing heap block containing an active critical section.
11456F48 : Critical section address.
047D05B4 : Critical section initialization stack trace.
11456F40 : Heap block...(cut off)
The error would happen while calling GdiplusShutdown.
From the Application Verifier documentation:
Freeing heap block containing an active critical section
Application Verifier break message
Freeing heap block containing an active critical section. Memory location <address> of size <size> contains an active lock.
Probable cause
This break is generated if a heap allocation contains a critical section, the allocation is freed and the critical section has not been deleted.
Information displayed by Application Verifier
Parameter1 - Critical section address
Parameter2 - Critical section initialization stack trace
Parameter3 - Heap block address
Parameter4 - Heap block size
Description - Freeing heap block containing an active critical section
Additional information
Verifier stop code 0202.
Check the contents of the current call stack. The culprit is usually the caller of HeapFree or HeapDestroy on the current stack trace.
Frequency of this error is high.
To debug this stop use the following debugger commands:
!cs -s parameter1 - dump information about this critical section.
ln parameter1 - to show symbols near the address of the critical section. This should help identify the leaked critical section.
dds parameter2 - to dump the stack trace for this critical section initialization.
parameter3 and parameter4 might help you understand where this heap block was allocated (the size of the allocation is probably significant).
I had this error a few months ago and I'd forgotten the solution.
Be sure to free any GDI+ images (e.g. with GdipDisposeImage) before trying to shut down GDI+.
Otherwise you leak a critical section, and who knows what else. And certainly don't try to dispose an image after GDI+ has already been shut down.