**java.lang.OutOfMemoryError: Java heap space with Spring 4, Hibernate 4, MySQL Connector 8.1 and Tomcat 8.5** when using **MySQL 5.7.23** - spring-4

Here are the exception details:
java.lang.OutOfMemoryError: Java heap space
at com.mysql.jdbc.Buffer.<init>(Buffer.java:57)
at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:2087)
at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:3549)
at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:489)
at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:3240)
at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:2411)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2834)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2838)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2082)
at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:2212)
at sun.reflect.GeneratedMethodAccessor99.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.hibernate.engine.jdbc.internal.proxy.AbstractStatementProxyHandler.continueInvocation(AbstractStatementProxyHandler.java:122)
at org.hibernate.engine.jdbc.internal.proxy.AbstractProxyHandler.invoke(AbstractProxyHandler.java:81)
at com.sun.proxy.$Proxy168.executeQuery(Unknown Source)
at org.hibernate.loader.Loader.getResultSet(Loader.java:1978)
at org.hibernate.loader.Loader.doQuery(Loader.java:829)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:289)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:259)
at org.hibernate.loader.Loader.loadCollectionSubselect(Loader.java:2242)
at org.hibernate.loader.collection.SubselectOneToManyLoader.initialize(SubselectOneToManyLoader.java:77)
at org.hibernate.persister.collection.AbstractCollectionPersister.initialize(AbstractCollectionPersister.java:622)
at org.hibernate.event.internal.DefaultInitializeCollectionEventListener.onInitializeCollection(DefaultInitializeCollectionEventListener.java:82)
at org.hibernate.internal.SessionImpl.initializeCollection(SessionImpl.java:1606)
at org.hibernate.collection.internal.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:379)
at org.hibernate.collection.internal.AbstractPersistentCollection.read(AbstractPersistentCollection.java:112)
at org.hibernate.collection.internal.AbstractPersistentCollection.readSize(AbstractPersistentCollection.java:137)
at org.hibernate.collection.internal.PersistentBag.isEmpty(PersistentBag.java:249)
at com.vms.business.SupplierSearchService.fetchTopFiveAssociatedSuppliersList(SupplierSearchService.java:369)
at com.vms.business.SupplierSearchService$$FastClassBySpringCGLIB$$c4e470d2.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
Here is the Hibernate query where I am getting the exception.
**Note:**
The same query works well when I use MySQL 5.1 with the same data.
One more thing: when I use the MySQL 5.7.23 RDS connection, the whole application slows down. The new RDS instance has four times the capacity of the old MySQL 5.1 RDS instance, but it still produces this error.
Here is the query:
Session session = em.unwrap(Session.class);
Criteria criteria = session.createCriteria(Orgassociation.class, "orgassociation");
criteria.createAlias("orgassociation.organization", "associatedOrgs");
Criteria userCriteria = session.createCriteria(User.class, "user");
userCriteria.add(Restrictions.eq("user.userId", userId));
User user = (User) userCriteria.uniqueResult();
criteria.add(Restrictions.eq("orgassociation.organization1.organizationId",
user.getOrganization().getOrganizationId()));
criteria.add(Restrictions.eq("associatedOrgs.orgType", VMS_CONSTANTS.DB_CONSTANTS.EMPLOYER_AND_SUPPLIER));
criteria.addOrder(Order.desc("orgassociation.orgAssociationId"));
criteria.setMaxResults(5);
@SuppressWarnings("unchecked")
List<Orgassociation> orgList = criteria.list();

You would need to attach a Java profiler (VisualVM or YourKit) and analyse your heap to know which objects are actually causing this. It is very likely that some client SDK contract is broken, resulting in a lot of Java objects lying around that cannot be garbage-collected.
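A hedged guess from the stack trace itself: the OOM is thrown while Hibernate initializes a lazy collection just to answer PersistentBag.isEmpty() (readSize → SubselectOneToManyLoader), which forces the MySQL driver to buffer every row of that collection in heap. One way to sidestep that, sketched below with the entity and variable names reused from the question's snippet (assumptions about the actual mapping, not verified code), is to ask the database for a count instead of touching the mapped collection:

```java
// Sketch only: count the associated rows with a projection instead of calling
// isEmpty() on the mapped collection, so only a single number is fetched.
// Needs org.hibernate.criterion.Projections and Restrictions; entity and
// property names follow the question's snippet and are assumptions.
Long associatedCount = (Long) session.createCriteria(Orgassociation.class)
        .add(Restrictions.eq("organization1.organizationId",
                user.getOrganization().getOrganizationId()))
        .setProjection(Projections.rowCount())
        .uniqueResult();
boolean hasAssociations = associatedCount != null && associatedCount > 0;
```

If the collection really must stay mapped, making it extra-lazy (so size/isEmpty checks issue their own small queries instead of a full load) is another option worth measuring with the profiler before and after.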

Related

How do I register a custom JDBC dialect in RStudio?

I'm trying to analyze BigQuery data in RStudio Server running on a Google Dataproc cluster. However, due to the memory limitations of RStudio, I intend to run queries on this data in sparklyr, but I haven't had any success importing the data directly into the Spark cluster from BigQuery.
I'm using Google's official JDBC connectivity driver:
ODBC and JDBC drivers for BigQuery
I also have the following software versions running:
Google Dataproc: 2.0-Debian 10
Sparklyr: Spark 3.2.1 Hadoop 3.2
R version 4.2.1
I also had to replace the following Spark jars with the versions used by the JDBC driver above, or add them where they did not exist:
failureaccess-1.0.1 was added
protobuf-java-3.19.4 replaced 2.5.0
guava 31.1-jre replaced 14.0.1
Below is my code using the spark_read_jdbc function to retrieve a dataset from BigQuery:
conStr <- "jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=xxxx;OAuthType=3;AllowLargeResults=1;"
spark_read_jdbc(sc = spkc,
                name = "events_220210",
                memory = FALSE,
                options = list(url = conStr,
                               driver = "com.simba.googlebigquery.jdbc.Driver",
                               user = "rstudio",
                               password = "xxxxxx",
                               dbtable = "dataset.table"))
The table gets imported into the Spark cluster, but when I try to preview it, I receive the following error message:
ERROR sparklyr: Gateway (551) failed calling collect on sparklyr.Utils: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 4) (faucluster1-w-0.europe-west2-c.c.ga4-warehouse-342410.internal executor 2): java.sql.SQLDataException: [Simba][JDBC](10140) Error converting value to long.
at com.simba.googlebigquery.exceptions.ExceptionConverter.toSQLException(Unknown Source)
at com.simba.googlebigquery.utilities.conversion.TypeConverter.toLong(Unknown Source)
at com.simba.googlebigquery.jdbc.common.SForwardResultSet.getLong(Unknown Source)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$9(JdbcUtils.scala:446)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$9$adapted(JdbcUtils.scala:445)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:367)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:349)
at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:31)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:349)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
When I try to import the data via an SQL query, e.g.
SELECT date, name, age FROM dataset.tablename
I end up with a table looking like this:
| date | name | age |
| ---- | ---- | --- |
| date | name | age |
| date | name | age |
| date | name | age |
I've read in several posts that the solution is to register a custom JDBC dialect, but I have no idea how to do this, what platform to do it on, or whether it's possible to do it from within RStudio. Links to any materials that would help me solve this problem would be appreciated.
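For what it's worth, the symptom above (every column coming back as its own name) is consistent with an identifier-quoting mismatch: Spark's default JDBC dialect wraps column names in double quotes, which BigQuery Standard SQL reads as string literals. A custom dialect that quotes with backticks is the usual shape of the fix. The sketch below is a hedged illustration in Java, not a verified solution for this exact stack; the class name, URL-prefix check, and the sparklyr call pattern are assumptions.

```java
// Hedged sketch of a custom Spark JDBC dialect for BigQuery.
// Compile into a small jar and put it on the Spark driver's classpath.
import org.apache.spark.sql.jdbc.JdbcDialect;
import org.apache.spark.sql.jdbc.JdbcDialects;

public class BigQueryDialect extends JdbcDialect {
    @Override
    public boolean canHandle(String url) {
        // Only claim BigQuery JDBC URLs (assumed prefix).
        return url.startsWith("jdbc:bigquery:");
    }

    @Override
    public String quoteIdentifier(String colName) {
        // BigQuery Standard SQL quotes identifiers with backticks,
        // not the double quotes Spark uses by default.
        return "`" + colName + "`";
    }

    public static void register() {
        JdbcDialects.registerDialect(new BigQueryDialect());
    }
}
```

From R, assuming the jar is on the cluster, something like sparklyr::invoke_static(sc, "BigQueryDialect", "register") before calling spark_read_jdbc should install the dialect on the driver JVM; treat that as a starting point to verify rather than a drop-in answer.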

How to migrate data to new server? (Sequence generator issue)

What is the right way to back up and restore a MariaDB database that has sequence generation enabled (i.e. NOT autoincrement)? (This includes migrating to a new server.)
Is it possible to instruct the sequence generator to pick up indexing table data at a specific ID value? How?
Steps I take to create my issue
I wish to transfer an application to a new server:
Backup data on source server:
mysqldump --skip-opt --no-create-db --no-create-info --hex-blob [database-name] [...list of tables...] > data-backup.sql
On target server, create new empty database (same name)
Build/run JHipster Spring application on target server: java -jar myapp.jar (Running this application recreates/configures a new instance of the database on the target server.)
Restore data:
mysql [database-name] < data-backup.sql
All the above steps produce no errors (so far).
Problem
When I follow these steps, the database is restored (apparently perfectly). I can log in to the application and access all information. BUT when I attempt to create new entities (i.e. save something to the database), I get an ID 'Duplicate entry' error in the server logs:
2022-03-24 12:54:43.775 ERROR 11277 --- [ XNIO-1 task-1] o.h.e.jdbc.batch.internal.BatchingBatch : HHH000315: Exception executing batch [java.sql.BatchUpdateException: (conn=33) Duplicate entry '1001' for key 'PRIMARY'], SQL: insert into product (name, id) values (?, ?)
2022-03-24 12:54:43.776 WARN 11277 --- [ XNIO-1 task-1] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 1062, SQLState: 23000
2022-03-24 12:54:43.776 ERROR 11277 --- [ XNIO-1 task-1] o.h.engine.jdbc.spi.SqlExceptionHelper : (conn=33) Duplicate entry '1001' for key 'PRIMARY'
2022-03-24 12:54:43.779 ERROR 11277 --- [ XNIO-1 task-1] o.z.problem.spring.common.AdviceTraits : Internal Server Error
org.springframework.dao.DataIntegrityViolationException: could not execute batch; SQL [insert into product (name, id) values (?, ?)]; constraint [PRIMARY]; nested exception is org.hibernate.exception.ConstraintViolationException: could not execute batch
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.convertHibernateAccessException(HibernateJpaDialect.java:276)
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:233)
at org.springframework.orm.jpa.JpaTransactionManager.doCommit(JpaTransactionManager.java:566)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:743)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:711)
at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:654)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:407)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:119)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:698)
at com.mycompany.app.web.rest.ProductResource$$EnhancerBySpringCGLIB$$84c14d6d.createProduct(<generated>)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
...
Clearly my backup/restore process is not accounting properly for the sequence generator, which generates ID values that conflict with the existing data.
What am I doing wrong? What is the right process for backing up/restoring such a database?
Environment: JHipster 7.7.0 (Angular, monolithic), MariaDB 10.4, OpenJDK 16.0.2_7, OS Windows 10 Pro and openSUSE 15.2, Firefox 98.0.2 and Chrome 99.0.4844.84.
PS: I previously reported this issue here, aimed at the JHipster community, but got limited response. I think I need a MySQL/MariaDB expert opinion on this.
(Apologies in advance: I'm not a database expert. The technique I outline above has served me well for years, but previously I was dealing with AUTO_INCREMENT. This sequence generator has me baffled.)
OK! I have solutions.
[For the sake of these notes, let's call the database mydata. Also, in JHipster, the MariaDB sequence generator is called sequence_generator.]
Let's consider two situations:
(1) Simple migration
If you are merely migrating the application to a new server, the process is straightforward:
Step 1: On the original server backup and secure your database: mysqldump -u root -p mydata > mydata.sql
Step 2: Transfer the SQL file to the new server, along with the JHipster JAR file
Step 3: On the new server, create an empty database with the same name, and restore the data: mysql -u root -p mydata < mydata.sql
Step 4: Now launch your JHipster application, and everything should work
(2) Model modification
The assumption is that you have modified your model in some way (e.g. added properties to one or more entities). This solution is fiddly, but it works (for me).
Step 1: Backup your database, and secure it (in case something goes wrong): mysqldump -u root -p mydata > mydata.sql
Step 2: Backup and secure the original JHipster JAR that works with the original database
Step 3: Duplicate your database (schema and data) in a new table: mydata_bk
Step 4: Drop your original database, and create a new empty database
Step 5: Launch your new JHipster JAR, and give it time to create the new database schema, then stop the application
Step 6: Use a tool (DataGrip, sqlYog, etc) to compare the old (mydata_bk) and new schema (mydata), and modify the old schema to match the new schema
Step 7: Restore/copy all data from mydata_bk to mydata, EXCEPT for the tables DATABASECHANGELOG, DATABASECHANGELOGLOCK and the special sequence_generator table
Step 8: Open the mydata.sql SQL file, and at the top, after initial comments, one of the first instructions will read:
--
-- Sequence structure for `sequence_generator`
--
DROP SEQUENCE IF EXISTS `sequence_generator`;
CREATE SEQUENCE `sequence_generator` start with 2000 minvalue 1 maxvalue 9223372036854775806 increment by 50 cache 1000 nocycle ENGINE=InnoDB;
SELECT SETVAL(`sequence_generator`, 201050, 0);
The specific numbers may vary, but the broad details will be similar. In a MariaDB SQL console, type/execute each of those SQL statements: DROP SEQUENCE ...;, CREATE SEQUENCE ...;, and SELECT SETVAL(...); (see the JDBC sketch after these steps for one way to pick a safe SETVAL value).
Step 9: Launch your JHipster application.
Hope this helps others that run into similar issues. Let me know if you have a better approach!
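If you prefer to derive the SETVAL restart point from the data instead of copying it from the dump, the sketch below shows the idea over JDBC. It is a hedged example, not JHipster code: the table/column names (product, id), credentials, and connection URL are placeholders, and every entity table that shares sequence_generator should be checked, keeping the largest MAX(id).

```java
// Hedged sketch: bump sequence_generator above the highest ID already in the
// restored data so new inserts cannot collide. Requires the MariaDB JDBC
// driver on the classpath; the table name "product" is only an example.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class FixSequenceAfterRestore {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mariadb://localhost:3306/mydata"; // placeholder URL
        try (Connection con = DriverManager.getConnection(url, "root", args[0])) {
            long maxId = 0;
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COALESCE(MAX(id), 0) FROM product")) {
                if (rs.next()) {
                    maxId = rs.getLong(1);
                }
            }
            // JHipster's default allocation size is 50, so leave headroom above MAX(id).
            long restartAt = maxId + 51;
            try (Statement st = con.createStatement()) {
                // is_used = 0 means the next NEXTVAL returns exactly restartAt.
                st.executeQuery("SELECT SETVAL(`sequence_generator`, " + restartAt + ", 0)");
            }
            System.out.println("sequence_generator restarted at " + restartAt);
        }
    }
}
```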

DSE 6.7 - Error With DSE Driver When SOLR is Enabled

I'm currently working through getting an old project up and running again - it uses both DSE Search and DSE Graph. I don't have too much experience with DSE, but right now I've created one keyspace for the regular Cassandra DB (searchable) and I've created a graph on the same server using the Gremlin console.
The back-end is written in node.js and uses the dse-driver to get data from the DSE server.
When I run the Cassandra server with the -g flag, the dse-driver runs fine and does exactly what I want it to do, but obviously, none of my search functionality works.
When I run the Cassandra server with the -g and -s flags, my search functionality works, but then I receive errors whenever the back-end tries to use the dse-driver to get data from the graph using the executeGraph function.
Is this something that can be fixed or do I need to create more nodes/clusters? I'm really new to DSE so your help is appreciated.
Error: com.google.common.util.concurrent.UncheckedExecutionException: com.google.inject.ProvisionException: Unable to provision, see the following errors:

1) Error injecting constructor, com.datastax.bdp.gcore.datastore.DataStoreException: Failed to execute statement40f07a96-98bf-490c-a738-6c9d0021afba
at com.datastax.bdp.graph.impl.DseGraphImpl.<init>(DseGraphImpl.java:192)
at com.datastax.bdp.graph.impl.GraphModule.configure(Unknown Source) (via modules: com.datastax.bdp.graph.impl.DseGraphFactoryImpl$$Lambda$1580/1437671705 -> com.google.inject.util.Modules$OverrideModule -> com.datastax.bdp.graph.impl.GraphModule)
while locating com.datastax.bdp.graph.impl.DseGraphImpl

1 error

How to restart the SHA256MigrationJob without restarting Artifactory?

I keep encountering the below error message in sha256_migration.log. It doesn't restart after failure; however, if I restart the Artifactory service, it resumes the SHA256 migration from where it left off until it fails again.
2018-11-13 10:24:35,060 [art-exec-3] [ERROR] (o.a.s.j.m.s.Sha256MigrationJob:78) - Caught unexpected exception during SHA256 Migration job, operation will break.
org.springframework.core.task.TaskRejectedException: Executor [org.artifactory.schedule.ArtifactoryConcurrentExecutor#70a2137a] did not accept task: org.artifactory.schedule.aop.AsyncAdvice$$Lambda$654/1640835804#7dbf55d3
at org.springframework.core.task.support.TaskExecutorAdapter.submit(TaskExecutorAdapter.java:93)
at org.springframework.scheduling.concurrent.ConcurrentTaskExecutor.submit(ConcurrentTaskExecutor.java:143)
at org.artifactory.schedule.aop.AsyncAdvice.submitWorkQueueTask(AsyncAdvice.java:235)
at org.artifactory.schedule.aop.AsyncAdvice.submit(AsyncAdvice.java:217)
at org.artifactory.schedule.aop.AsyncAdvice.executeInvocation(AsyncAdvice.java:146)
at org.artifactory.schedule.aop.AsyncAdvice.invoke(AsyncAdvice.java:124)
at org.artifactory.schedule.aop.AsyncAdvice.invoke(AsyncAdvice.java:62)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207)
at com.sun.proxy.$Proxy144.updateSha2(Unknown Source)
at org.artifactory.storage.jobs.migration.sha256.Sha256MigrationJob.migrationLogic(Sha256MigrationJob.java:134)
at org.artifactory.storage.jobs.migration.MigrationJobBase.migrationLoop(MigrationJobBase.java:106)
at org.artifactory.storage.jobs.migration.MigrationJobBase.runMigration(MigrationJobBase.java:83)
at org.artifactory.storage.jobs.migration.MigrationJobBase.onExecute(MigrationJobBase.java:73)
at org.artifactory.schedule.quartz.QuartzCommand.execute(QuartzCommand.java:48)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.artifactory.concurrent.ArtifactoryRunnable.run(ArtifactoryRunnable.java:30)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.RejectedExecutionException: Task org.artifactory.concurrent.ArtifactoryRunnable#4afb003 rejected from java.util.concurrent.ThreadPoolExecutor#33daf5aa[Running, pool size = 64, active threads = 64, queued tasks = 10000, completed tasks = 120723]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
at org.artifactory.schedule.ArtifactoryConcurrentExecutor.execute(ArtifactoryConcurrentExecutor.java:69)
at org.springframework.core.task.support.TaskExecutorAdapter.submit(TaskExecutorAdapter.java:88)
... 19 common frames omitted
My artifactory.system.properties entries regarding the SHA256 migration:
##SHA2 Migration block
artifactory.sha2.migration.job.enabled=true
artifactory.sha2.migration.job.queue.workers=100
My setup:
Cloned production instances of Artifactory infrastructure into test instance (save for IP address and DNS records).
Ensure that DB and filestore configurations (db.properties and binarystore.xml) were updated accordingly on the cloned instances.
Things I've tried without luck:
I ran the Artifactory GC a couple of times.
Increase the CPU count to 16
Increase the RAM to 16G
Ensure that I am running the latest Oracle Java 8u192
What I know:
It keeps running fine for a while until it crashes.
When I restart the Artifactory service, the migration resumes and the total number of artifacts to migrate is lower.
I cannot keep restarting Artifactory in production to finish the SHA256 migration job; I have over 500k artifacts.
My question:
Is there any method to restart the SHA256MigrationJob without restarting Artifactory?
Is there a way to find the artifact that it has trouble migrating to SHA256?
In the stack trace above, I feel the issue is at com.sun.proxy.$Proxy144.updateSha2(Unknown Source).
-- Workaround --
I ended up creating a new VM and installed a clean copy of Artifactory 6.5.3 (with the latest Oracle Java 8 Server-JRE). In the issue above, I was doing an in-place upgrade, just in a new folder.
I moved the necessary files from etc to the new VM, such as master.key, binarystore.xml and db.properties. I then executed bin/installService.sh [user] [group], which creates the /etc/opt/jfrog/artifactory configuration symlink/folder. My filestore and Artifactory database are running on different VMs, so only Artifactory and its file configuration needed to be ported.
The new Artifactory 6.5.3 version started up without issues.
The SHA256 migration job is now running without any problems. On the last upgrade run I did, it worked fine without the job dying.
Notes: I also sanely adjusted the configuration values for the queue workers. https://www.jfrog.com/confluence/display/RTF/Checksum-Based+Storage#Checksum-BasedStorage-ConfiguringtheMigrationProcess
tl;dr: decrease artifactory.sha2.migration.job.queue.workers to somewhere between 2 and 2 * [number of cores].
What you are experiencing is an exhaustion of the ThreadPoolExecutor.
The size of the thread pool by default is 4 * [number of cores].
In your case 64 (16 cores * 4)
However, the number of worker threads you have configured for the SHA256 migration job is 100.
On an Artifactory instance without any load, this will not cause any failures, because there is a queue backing the thread pool (in your case its size is 10000).
In your case both the thread pool and the queue are full.
If I understand correctly you have overcome the issue. But for other people bumping into this thread, I would recommend decreasing the value of artifactory.sha2.migration.job.queue.workers to no more than half the number of threads in the thread pool.
So in your case 32: (16 * 4)/2
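The failure mode being described (a fixed-size pool whose bounded queue fills up, after which new submissions are rejected) can be reproduced in isolation with the scaled-down sketch below. It is only an illustration of the JDK behaviour behind the TaskRejectedException, not Artifactory code; the pool and queue sizes are made up.

```java
// Minimal demonstration of ThreadPoolExecutor's default AbortPolicy:
// once all worker threads are busy AND the bounded queue is full,
// execute() throws RejectedExecutionException.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4,                         // 4 threads, standing in for "4 * cores"
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(10)  // bounded backlog, like the 10000-slot queue
        );                                    // default rejection handler is AbortPolicy
        try {
            for (int i = 0; i < 100; i++) {   // submit far more work than pool + queue can hold
                pool.execute(() -> {
                    try { Thread.sleep(1_000); } catch (InterruptedException ignored) { }
                });
            }
        } catch (RejectedExecutionException e) {
            System.out.println("Rejected once 4 threads + 10 queued slots were full: " + e);
        } finally {
            pool.shutdownNow();
        }
    }
}
```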

database protocol 'sqlite' not supported - Failed to initialize zdb connection pool()

I am using libzdb (the Database Connection Pool Library) with an SQLite database. I am getting the following exception:
Failed to start connection pool - database protocol 'sqlite' not supported
After ConnectionPool_start() it goes into static int _fillPool(T P), where it fails at the following statement:
Connection_T con = Connection_new(P, &P->error);
My connection URL is as follows:
sqlite:///home/ZDB_TESTING/zdb-test/testDb.db
Kindly help me with this problem.
This means that the SQLite library is not compiled into the libzdb library. If you are installing from a distribution, make sure that you select a libzdb build with SQLite support. If you built libzdb yourself from source, after you run ./configure make sure the output says SQLite3: ENABLED. Otherwise you need to install SQLite on your system first.
