UCanAccess JDBC Driver - OutOfMemoryError with memory=false setting

I am using jackcess-2.1.1. I have set the memory=false parameter, but I still get an OutOfMemoryError. It happens while processing an MDB file of 1.8 GB.
The JVM maximum heap size is set to 1 GB. If I increase it to 2 GB, it works with no issues.
But according to the instructions on the UCanAccess portal, when memory=false is set, the in-memory backing database is not supposed to be used, so the JVM memory arguments should not make any difference.
Any help is greatly appreciated. The stack trace is below.
java.lang.OutOfMemoryError: Java heap space
at com.healthmarketscience.jackcess.impl.LongValueColumnImpl.readLongValue(LongValueColumnImpl.java:136)
at com.healthmarketscience.jackcess.impl.LongValueColumnImpl.read(LongValueColumnImpl.java:90)
at com.healthmarketscience.jackcess.impl.ColumnImpl.read(ColumnImpl.java:586)
at com.healthmarketscience.jackcess.impl.TableImpl.getRowColumn(TableImpl.java:767)
at com.healthmarketscience.jackcess.impl.TableImpl.getRow(TableImpl.java:673)
at com.healthmarketscience.jackcess.impl.TableImpl.getRow(TableImpl.java:652)
at com.healthmarketscience.jackcess.impl.CursorImpl.getCurrentRow(CursorImpl.java:699)
at com.healthmarketscience.jackcess.impl.CursorImpl$BaseIterator.next(CursorImpl.java:822)
at com.healthmarketscience.jackcess.impl.CursorImpl$BaseIterator.next(CursorImpl.java:1)
at net.ucanaccess.converters.LoadJet$TablesLoader.loadTableData(LoadJet.java:829)
at net.ucanaccess.converters.LoadJet$TablesLoader.loadTablesData(LoadJet.java:997)
at net.ucanaccess.converters.LoadJet$TablesLoader.loadTables(LoadJet.java:1041)
at net.ucanaccess.converters.LoadJet$TablesLoader.access$2900(LoadJet.java:273)
at net.ucanaccess.converters.LoadJet.loadDB(LoadJet.java:1479)
at net.ucanaccess.jdbc.UcanaccessDriver.connect(UcanaccessDriver.java:243)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:187)
at org.springframework.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriverManager(DriverManagerDataSource.java:153)
at org.springframework.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriver(DriverManagerDataSource.java:144)
at org.springframework.jdbc.datasource.AbstractDriverBasedDataSource.getConnectionFromDriver(AbstractDriverBasedDataSource.java:155)
at org.springframework.jdbc.datasource.AbstractDriverBasedDataSource.getConnection(AbstractDriverBasedDataSource.java:120)
at org.hibernate.service.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:141)
at org.hibernate.engine.jdbc.internal.JdbcServicesImpl$ConnectionProviderJdbcConnectionAccess.obtainConnection(JdbcServicesImpl.java:242)
at org.hibernate.engine.jdbc.internal.JdbcServicesImpl.configure(JdbcServicesImpl.java:117)
at org.hibernate.service.internal.StandardServiceRegistryImpl.configureService(StandardServiceRegistryImpl.java:76)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:160)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:132)
at org.hibernate.cfg.Configuration.buildTypeRegistrations(Configuration.java:1825)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1783)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1868)
at org.springframework.orm.hibernate4.LocalSessionFactoryBuilder.buildSessionFactory(LocalSessionFactoryBuilder.java:372)

If I change the max size to 2 GB, it works with no issues. But as per the instructions on the UCanAccess portal, when memory=false is set, In-Memory is not supposed to be used and the JVM memory args should not change anything.
That's not quite true. memory=false tells UCanAccess not to hold the HSQLDB backing database tables in memory, but a disk-based HSQLDB database will still consume some memory and there are lots of other things that UCanAccess (and Jackcess) must keep in memory too. The memory requirements with memory=false will just be considerably lower than with memory=true.
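For reference, a minimal sketch of how memory=false is passed in the JDBC URL; the database path here is a hypothetical placeholder, and the actual connection call is left commented out since it needs the UCanAccess driver on the classpath:

```java
public class UcanaccessMemoryFalse {
    public static void main(String[] args) {
        // Hypothetical path to the large Access database.
        String dbPath = "/data/large.mdb";
        // memory=false keeps the HSQLDB mirror on disk instead of on the heap;
        // it lowers memory use but does not eliminate it.
        String url = "jdbc:ucanaccess://" + dbPath + ";memory=false";
        System.out.println(url);
        // try (java.sql.Connection conn = java.sql.DriverManager.getConnection(url)) {
        //     ... run queries ...
        // }
    }
}
```

Even with this URL, the initial table load shown in the stack trace above (LoadJet.loadTableData) still walks every row through Jackcess, so very large long-value columns can require a larger heap regardless.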

Related

Memory allocation of 4GB ignored in Firebase Cloud Function

I would like to update the memory allocation limit of a Firebase Cloud Function to 4GB. I updated the memory value with the runWith parameter:
functions.runWith({memory: "4GB"})
This, however, is being ignored and the deployed function does not have 4 GB of memory allocated.
If I try with 3GB, I get this error:
Error: The only valid memory allocation values are: 128MB, 256MB, 512MB, 1GB, 2GB, 4GB
So it seems 4 GB is a valid value.
What am I doing wrong? Am I missing something?
It seems to work just fine if I use 2GB, 1GB... It only ignores the 4 GB value.
I had the exact same issue and reached out to Firebase Support to report the bug. Firebase Support acknowledged that it is a bug and said a fix will ship in their next release.
At the time of reading, if the issue is not yet fixed, this is the workaround:
Go to the GCP console and select your project
Select Cloud Functions from the menu on the left
Click "Edit"
Expand the section "VARIABLES, NETWORKING AND ADVANCED SETTINGS"
Change the "Memory allocated" field
Click "Next" and then click "Deploy"
EDIT: The fix has been released in firebase-functions#3.13.1
As of July 2022, the documentation reads:
memory: amount of memory to allocate to the function; possible values are: '128MB', '256MB', '512MB', '1GB', '2GB', '4GB', and '8GB'.
functions.runWith({
  memory: '4GB',
}).firestore

How to fix "Compressed class space" OutOfMemoryError?

We are using OpenJDK 8 with Spring Boot 2.2.4.RELEASE and embedded Tomcat 9.0.16 in our web application, and for the last 3 months we have frequently been hitting a compressed class space issue. Every time the OOM occurs we have to restart the application.
The log found in catalina.log is below:
ERROR org.jgroups.logging.Log4J2LogImpl.error:Line 95 - failed executing task FD_ALL: TimeoutChecker (interval=2000 ms)
java.lang.OutOfMemoryError: Compressed class space
788][2020-07-21 10:29:12,475]- org.apache.juli.logging.DirectJDKLog.log:Line 175 - Failed to complete processing of a request
java.lang.OutOfMemoryError: Compressed class space
We have also allocated 2 GB for the compressed class space using -XX:CompressedClassSpaceSize=2g,
as per an answer at OutOfMemoryError: Compressed class space.
You can fix this issue using one of the two options below.
1. You can increase the limit of the class space by using the -XX:CompressedClassSpaceSize=n VM flag. The maximum limit is 3 GB.
(OR)
2. You can completely disable the compressed class pointer feature by using the -XX:-UseCompressedClassPointers VM flag.
Note: Be aware that disabling this feature will increase the heap space usage of your application.
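Concretely, the two options above translate to launch lines like the following; app.jar is a placeholder for your application, and the flag names are the standard HotSpot ones:

```
# Option 1: raise the compressed class space limit
java -XX:CompressedClassSpaceSize=2g -jar app.jar

# Option 2: disable compressed class pointers entirely
java -XX:-UseCompressedClassPointers -jar app.jar
```

Note that if raising the limit does not help (as in the question above, where 2 GB was already set), the root cause is usually a classloader leak creating classes without bound, and no limit will be large enough.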
To learn more about this OutOfMemoryError and the class pointer (klass field), you can read these books. They will give you a better idea of what you can actually do with compressed class pointers and compressed class space.
Java Performance Optimization: Compressed OOPS

Memory issue while running WODM (JRules)

I'm creating a RuleApp and deploying it to the Rule Execution Server. While executing the rules, it starts throwing an OutOfMemoryError.
000000bd execution E The interaction ruleEngine.execute has failed.
com.ibm.rules.res.xu.internal.LocalizedResourceException: GBRXU0001E: The interaction ruleEngine.execute has failed.
at com.ibm.rules.res.xu.client.internal.jca.XUInteraction.execute(XUInteraction.java:302)
at com.ibm.rules.res.xu.client.internal.XUSession.executeOperation(XUSession.java:171)
at com.ibm.rules.res.xu.client.internal.XURuleEngineSession.execute(XURuleEngineSession.java:603)
at ilog.rules.res.session.impl.IlrStatefulSessionBase.execute(IlrStatefulSessionBase.java:725)
at ilog.rules.res.session.impl.IlrStatefulSessionBase.execute(IlrStatefulSessionBase.java:714)
at ilog.rules.res.session.impl.IlrStatefulSessionBase.execute(IlrStatefulSessionBase.java:625)
at ilog.rules.res.session.impl.IlrStatefulSessionBase.execute(IlrStatefulSessionBase.java:269)
at ilog.rules.res.session.impl.IlrStatefulSessionBase.execute(IlrStatefulSessionBase.java:241)
at ilog.rules.res.session.impl.IlrStatelessSessionBase.execute(IlrStatelessSessionBase.java:63)
at com.bnsf.rules.services.framework.RuleExecutioner.invokeRuleService(RuTioner.java:50)
at com.bnsf.rules.services.framework.RuleExecutioner.invokeSimpleRuleService(RuTioner.java:24)
at com.bnsf.rules.services.MiscBillingRuleService.execBatch(Miservice.java:222)
at com.bnsf.rules.services.MiscBillingRuleService.performTask(MisService.java:158)
at com.bnsf.rules.services.MiscBillingRuleService.execute(MisService.java:88)
at com.bnsf.rules.services.MiscBillingRuleServiceThread.run(MisThread.java:60)
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.lang.StringBuffer.ensureCapacityImpl(StringBuffer.java:338)
at java.lang.StringBuffer.append(StringBuffer.java:204)
at java.io.StringWriter.write(StringWriter.java:113)
at java.io.StringWriter.append(StringWriter.java:155)
at com.ibm.rules.res.xu.engine.de.internal.DEManager.note(DEManager.java:554)
at com.ibm.rules.engine.runtime.impl.EngineObserverManager.note(EngineObserverManager.java:84)
at com.ibm.rules.engine.rete.runtime.AbstractReteEngine.note(AbstractReteEngine.java:686)
at com.ibm.rules.generated.EngineDataClass.ilog_rules_brl_System_printMessage_java_lang_String(Unknown Source)
at com.ibm.rules.generated.ruleflow.Service$0020Definition.IntermediateDefnFlow$003eIntermediate$0020Event$0020Definition.BodyExecEnv.executeIntermediate$0020Events$0020For$0020Intra$002dplant$0020Switch$002dEndEventBody3(Unknown Source)
at com.ibm.rules.generated.ruleflow.Service$0020Definition.IntermediateDefnFlow$003eIntermediate$0020Event$0020Definition.BodyExecEnv.executeB
I'm using a print statement in each of the rules, so does the error mean the print statements are filling up the heap memory of my application? Also, the error message points to a particular package in the ruleset. Will removing the print statements from that package alone resolve this issue?
It could be that the Java heap is too small to run your app, but the typical cause of this error is an infinite loop in the rules. You (or an admin) can verify that the WebSphere config options specify a reasonable heap size.
Another possibility is that some other app is using all the heap space: my current organization has to restart their dev server every week to reclaim the heap space from a memory leak they have not yet found. In that case the rules execute just fine, but when viewing a (large) decision trace in the Decision Warehouse in RES I will sometimes get an out-of-heap-space error.

GridGain Out of Memory Exception

I am trying to load around 600 MB of data into the GridGain cache, using swap space to reduce the load on my RAM. I load the data from CSV files: the first 10,000 keys go into memory, and the rest into swap space. I was able to load 1,350,000 keys, but after that I get the error below:
[16:58:34,701][SEVERE][exchange-worker-#54%null%][GridWorker] Runtime error caught during grid runnable execution: GridWorker [name=partition-exchanger, gridName=null, finished=false, isCancelled=false, hashCode=20495511, interrupted=false, runner=exchange-worker-#54%null%]
java.lang.OutOfMemoryError: Java heap space
at java.util.HashMap.resize(HashMap.java:559)
at java.util.HashMap.addEntry(HashMap.java:851)
at java.util.HashMap.put(HashMap.java:484)
.
.
.
at java.lang.Thread.run(Thread.java:722)
GridGain node stopped OK [uptime=00:21:14:384]
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
I think you are clearly running out of heap space. In order for swap space to be utilized, you should configure an eviction policy. Please refer to the OffHeap Memory documentation for more information on how to configure swap and off-heap spaces.
There is also some more explanation of memory utilization in this post: Can I reduce the space of my cache memory?
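As a rough illustration of the advice above, here is a configuration sketch assuming the pre-Ignite GridGain 6.x API; the class names (GridCacheConfiguration, GridCacheLruEvictionPolicy) and the entry count are assumptions to check against your actual GridGain version and its documentation:

```java
// Sketch: bound the number of on-heap entries so the rest can go to swap.
GridCacheConfiguration cacheCfg = new GridCacheConfiguration();
cacheCfg.setName("csvCache");
// Keep at most 10,000 entries in memory; older entries get evicted...
cacheCfg.setEvictionPolicy(new GridCacheLruEvictionPolicy(10_000));
// ...and land in swap space instead of accumulating on the heap.
cacheCfg.setSwapEnabled(true);
```

Without an eviction policy, every loaded key stays on the heap regardless of swap settings, which matches the HashMap.resize frames in the stack trace above.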

Just started getting AIR SQLite Error 3128 "Disk I/O error occurred"

We have a new beta version of our software with some changes, but not around our database layer.
We've just started getting Error 3128 reported in our server logs. It seems that once it happens, it happens for as long as the app is open. The part of the code where it is most apparent is where we log data every second via SQLite. We've generated 47k errors on our server this month alone.
3128 Disk I/O error occurred. Indicates that an operation could not be completed because of a disk I/O error. This can happen if the runtime is attempting to delete a temporary file and another program (such as a virus protection application) is holding a lock on the file. This can also happen if the runtime is attempting to write data to a file and the data can't be written.
I don't know what could be causing this error. Maybe an anti-virus program? Maybe our app is getting confused and writing data on top of itself? We're using async connections.
It's causing lots of issues and we're at a loss. It happened in our older version too, but maybe 100 times in a month rather than 47,000. Either way, I'd like to make it happen zero times.
Possible solution: Exception Message: Some kind of disk I/O error occurred
Summary: There is probably not a problem with the database but a problem creating (or deleting) the temporary file once the database is opened. AIR may have permissions to the database, but not to create or delete files in the directory.
One answer that has worked for me is to use the PRAGMA statement to set the journal_mode value to something other than DELETE. You do this by issuing a PRAGMA statement in the same way you would issue a query statement.
PRAGMA journal_mode = OFF
Unfortunately, if the application crashes in the middle of a transaction while the OFF journaling mode is set, the database file will very likely become corrupt.[1]
[1] http://www.sqlite.org/pragma.html#pragma_journal_mode
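If losing the database on a crash is unacceptable, a journal mode other than OFF may still avoid the temp-file deletion while keeping some crash protection. These are standard SQLite PRAGMA values; whether AIR's bundled SQLite honors each of them is an assumption to verify:

```sql
-- Rollback journal kept as a file but truncated instead of deleted:
PRAGMA journal_mode = TRUNCATE;
-- Or held in memory (faster, but less crash-safe than TRUNCATE):
PRAGMA journal_mode = MEMORY;
```

TRUNCATE in particular sidesteps the delete-the-journal-file step that the 3128 description blames on anti-virus file locks.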
The solution was to make sure database deletes, updates, and inserts only happened one at a time by writing a little wrapper. On top of that, we had to watch for error 3128 and retry. I think this is because we have a trigger that could lock the database after we inserted data.
