CorDapp build failure with Hibernate error - Corda

I am trying to build a new CorDapp, and the build is failing while creating one of my nodes. Below is the stack trace from that node's log:
[ERROR] 09:25:18+0530 [main] spi.SqlExceptionHelper.logExceptions - ERROR: relation "hibernate_sequence" does not exist
Position: 17
[ERROR] 09:25:19+0530 [main] internal.Node.run - Exception during node startup
javax.persistence.PersistenceException: org.hibernate.exception.SQLGrammarException: could not extract ResultSet
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:147) ~[hibernate-core-5.2.6.Final.jar:5.2.6.Final]
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:155) ~[hibernate-core-5.2.6.Final.jar:5.2.6.Final]
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:162) ~[hibernate-core-5.2.6.Final.jar:5.2.6.Final]
at org.hibernate.internal.SessionImpl.fireMerge(SessionImpl.java:886) ~[hibernate-core-5.2.6.Final.jar:5.2.6.Final]
at org.hibernate.internal.SessionImpl.merge(SessionImpl.java:860) ~[hibernate-core-5.2.6.Final.jar:5.2.6.Final]
at net.corda.node.services.network.PersistentNetworkMapCache.updateInfoDB(PersistentNetworkMapCache.kt:247) ~[corda-node-3.3-corda.jar:?]
If I create hibernate_sequence in the database directly, I do not see this error. The error occurs with a PostgreSQL database.
The PostgreSQL driver is added in build.gradle as:
implementation("org.postgresql:postgresql:42.1.4")
The node is configured to use PostgreSQL by updating the deployNodes task in build.gradle for the corresponding node, for example:
node {
    name "O=LedgerApp,L=London,C=GB"
    ......
    extraConfig = [
        'dataSourceProperties': [
            'dataSourceClassName'  : 'org.postgresql.ds.PGSimpleDataSource',
            '"dataSource.url"'     : 'jdbc:postgresql://localhost:5432/postgres?currentSchema=appschema',
            '"dataSource.user"'    : 'corda',
            '"dataSource.password"': 'corda'
        ],
        'database': [
            'transactionIsolationLevel': 'READ_COMMITTED'
        ]
    ]
}
Corda version used is 3.3-corda.

This appears to be a bug: hibernate_sequence is the first thing Hibernate creates when it starts if it does not already exist in the database.
I've raised an issue here: https://r3-cev.atlassian.net/browse/CORDA-2393.

If your PostgreSQL database hosts multiple schema instances for different Corda Open Source nodes, you will need to create a hibernate_sequence sequence object manually for each subsequent schema added after the first instance.
Corda OS doesn't provision Hibernate with a schema namespace setting, so the sequence object may not be created.
Run the following DDL statement, replacing my_schema with your schema namespace:
CREATE SEQUENCE my_schema.hibernate_sequence INCREMENT BY 1 MINVALUE 1 MAXVALUE 9223372036854775807 START 8 CACHE 1 NO CYCLE;
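To see which schemas already contain the sequence, you can first query PostgreSQL's information_schema (a minimal sketch; appschema from the question above is only an illustrative schema name):
-- List every schema that already has a hibernate_sequence
SELECT sequence_schema, sequence_name
FROM information_schema.sequences
WHERE sequence_name = 'hibernate_sequence';
Any node schema missing from the result needs the CREATE SEQUENCE statement above run against it before that node is started.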

Related

Artifactory doesn't start with an illegal repo name

I have an illegal repository name, which starts with a number.
It's an old repo, created with an API request (in Artifactory version 6.5.1).
Artifactory accepts the illegal name via API requests, but if you restart Artifactory, it goes down.
So my problem is the same as here:
https://www.jfrog.com/jira/browse/RTFACT-16669
Except the solution doesn't work for me, because my instance/server is new, so I don't have the $ARTIFACTORY_HOME/etc/artifactory.config.latest.xml file containing the local repository.
I have repositories on AWS S3 and an AWS RDS database,
and my new AWS EC2 instance has to get the repos from S3.
My question is:
Can I start Artifactory while ignoring the bad repo?
Or
Can I delete the repo without starting Artifactory (i.e. without an API request or the GUI)?
The logs are here:
2019-05-13 14:37:11,581 [art-init] [ERROR] (o.a.c.CentralConfigServiceImpl:744) - Could not load configuration due to: Failed to read object from stream
java.lang.RuntimeException: Failed to read object from stream
at org.artifactory.jaxb.JaxbHelper.read(JaxbHelper.java:131)
at org.artifactory.jaxb.JaxbHelper.readConfig(JaxbHelper.java:66)
at org.artifactory.descriptor.reader.CentralConfigReader.readAndConvert(CentralConfigReader.java:76)
etc ...
Caused by: javax.xml.bind.UnmarshalException: null
at javax.xml.bind.helpers.AbstractUnmarshallerImpl.createUnmarshalException(AbstractUnmarshallerImpl.java:335)
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallerImpl.createUnmarshalException(UnmarshallerImpl.java:578)
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:264)
at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:229)
at javax.xml.bind.helpers.AbstractUnmarshallerImpl.unmarshal(AbstractUnmarshallerImpl.java:157)
at javax.xml.bind.helpers.AbstractUnmarshallerImpl.unmarshal(AbstractUnmarshallerImpl.java:125)
at org.artifactory.jaxb.JaxbHelper.read(JaxbHelper.java:129)
... 56 common frames omitted
Caused by: org.xml.sax.SAXParseException: cvc-datatype-valid.1.2.1: '0ta' is not a valid value for 'NCName'.
etc ...
[art-init] [ERROR] (o.a.w.s.ArtifactoryContextConfigListener:96) - Application could not be initialized: null
java.lang.reflect.InvocationTargetException: null
etc ...
Caused by: org.springframework.beans.factory.BeanInitializationException: Failed to initialize bean 'org.artifactory.security.access.AccessService'.; nested exception is com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException
etc ...
[http-nio-8081-exec-2] [ERROR] (o.a.w.s.ArtifactoryFilter:194) - Artifactory failed to initialize: Context is null
Thanks
Cyril
The Artifactory config descriptor is stored in the Artifactory DB schema under a table named configs.
In order to overcome this, you can do the following:
Extract the artifactory.config.xml config from the configs table
Store the extracted configuration as a file in: $ARTIFACTORY_HOME/etc/artifactory.config.import.xml
Edit the artifactory.config.import.xml file and manually fix/delete the '0ta' repository reference from the configuration file
After modifying the descriptor, restart the Artifactory service.
Note: if you already have artifacts assigned to the illegal repo name, you will not be able to see them after modifying the repository name. However, since it is a new installation, I'm not sure this is relevant for you.
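For the extraction step, a minimal sketch of the query is below; config_name and data are assumed column names and may differ between Artifactory versions, so check the definition of the configs table in your RDS database first:
-- Pull the current config descriptor out of the configs table (column names are assumptions)
SELECT data
FROM configs
WHERE config_name = 'artifactory.config.xml';
Save the returned XML as $ARTIFACTORY_HOME/etc/artifactory.config.import.xml and continue with the edit/restart steps above.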

Is splitting the Flyway metatable schema_version and business data into two different databases possible?

Is it possible to configure Flyway to keep the metatable (the schema_version table) in one database, e.g. PostgreSQL, and run the migration scripts themselves (mvn flyway:migrate) against another target database, e.g. DB2?
The background of my question:
Flyway doesn't support DB2 z/OS. My idea was that Flyway should track the history on PostgreSQL while the migrations themselves run on DB2 z/OS.
At the moment, when I use DB2 z/OS, I get this error:
FlywaySqlException:
[ERROR] Error retrieving the database user
[ERROR] ----------------------------------
[ERROR] SQL State : 26501
[ERROR] Error Code : -514
[ERROR] Message : DB2 SQL Error: SQLCODE=-514, SQLSTATE=26501, SQLERRMC=SQL_CURLH200C1, DRIVER=3.61.75
[ERROR] : DB2 SQL Error: SQLCODE=-206, SQLSTATE=42703, SQLERRMC=CURRENT_USER, DRIVER=3.61.75
[ERROR] -> [Help 1]
CURRENT_USER exists only in the DB2 LUW variant.
Any workarounds or solutions?

In Corda, schema cannot be cast to net.corda.core.schemas.MappedSchema exception

I am trying to create migration schemas for a CorDapp as per the instructions here. I am running the following command:
java -jar corda-tools-database-manager-3.1.jar
--base-directory /opt/User
--create-migration-sql-for-cordapp fnolUseCase.state.FNOLSchema
However, I am getting the following error:
-- 2018-08-22T13:29:23,145Z migration.tool.invoke - Creating database migration
files for schema: fnolUseCase.state.FNOLSchema into /opt/User/migration
Failed to create datasource.
Please check that the correct JDBC driver is installed in one of the following
folders:
- /opt/User/drivers
Caused By java.lang.ClassCastException: fnolUseCase.state.FNOLSchema cannot be cast
to net.corda.core.schemas.MappedSchema
What should I be doing differently?
It seems to be having trouble locating your fnolUseCase.state.FNOLSchema class. Try dropping the schema name from the end of your command. This will cause a migration schema to be created for every schema in your application:
java -jar corda-tools-database-manager-3.1.jar
--base-directory /opt/User
--create-migration-sql-for-cordapp

How to fake a Flyway migration?

Two of us made a migration script in different Git branches. Now I've pulled the origin development branch, corrected the Git merge issues, and renamed my migration script to be the last one. So a fresh initialization of the DB, or migrating a DB that is at the develop branch's version, would work fine.
However, I've got a lot of data in my local testing DB, so I've manually applied the new migration scripts that I pulled from Git. The problem is that I can't make Flyway think that everything is okay.
So, how can I fake the migrations?
When I try to migrate, I get the following error:
[ERROR] Failed to execute goal org.flywaydb:flyway-maven-plugin:3.2.1:migrate (default-cli) on project db: org.flywaydb.core.api.FlywayException: Validate failed. Migration Description mismatch for migration 1.118
[ERROR] -> Applied to database : AAA
[ERROR] -> Resolved locally : BBB
[ERROR] -> [Help 1]
You will have to manually update Flyway's metadata table (called schema_version by default).
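For the description mismatch above, a minimal sketch (assuming the default schema_version table name and standard Flyway 3.x columns) is to align the recorded row for version 1.118 with what is resolved locally:
-- Make the applied row match the locally resolved migration description
UPDATE schema_version
SET description = 'BBB'
WHERE version = '1.118';
If the SQL content of the renamed script changed as well, the stored checksum will also fail validation; updating the checksum column to the locally resolved value (or running mvn flyway:repair, which realigns checksums in recent 3.x versions) should clear that too.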

Solr Cloud shard splitting fails when the index lock type is native or simple

I am using Solr 4.10.2. I tried to perform a shard split on my Solr Cloud test cluster. It fails every time if the index lock type is set to "native" or "simple".
Is that normal? I can perform shard splitting if the lock type is set to "single" or "none".
Shard splitting is advertised as something that can be done while Solr is running, and I can hardly imagine poking around changing the lock type of a production server...
Here is the test environment:
1 shard, 2 nodes, 1 collection.
Initially the collection was empty. I added a few documents and verified that they had been replicated. All worked.
I issued the split shard command:
server1:port/solr/admin/collections?action=SPLITSHARD&collection=mycollection&shard=shard1&async=myhandle
I then verified that the operation had finished by calling
server1:port/solr/admin/collections?action=REQUESTSTATUS&requestid=myhandle
The status was "complete".
Here is the log:
OverseerCollectionProcessor.processMessage : splitshard , {
"operation":"splitshard",
"shard":"shard1",
"collection":"mycollection",
"async":"myhandle"}
1/26/2015, 1:49:02 PM
ERROR
CoreContainer
Error creating core [mycollection_shard1_0_replica1]: Error opening new searcher
org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:873)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:646)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:491)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:466)
at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:575)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:199)
at org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1234)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1565)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1677)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:845)
... 9 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock#/nfs/solr/index/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:89)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:753)
at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:77)
at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
at org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:279)
at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:111)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1528)
... 11 more
org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:873)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:646)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:491)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:466)
at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:575)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:199)
at org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1234)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1565)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1677)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:845)
... 9 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock#/nfs/solr/index/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:89)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:753)
at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:77)
at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
at org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:279)
at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:111)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1528)
... 11 more
1/26/2015, 1:49:26 PM
ERROR
SolrIndexWriter
SolrIndexWriter was not closed prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!!
1/26/2015, 1:49:26 PM
ERROR
SolrIndexWriter
Error closing IndexWriter
java.lang.NullPointerException
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3230)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3203)
at org.apache.lucene.index.IndexWriter.shutdown(IndexWriter.java:907)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:984)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:954)
at org.apache.solr.update.SolrIndexWriter.close(SolrIndexWriter.java:129)
at org.apache.solr.update.SolrIndexWriter.finalize(SolrIndexWriter.java:182)
at java.lang.System$2.invokeFinalize(System.java:1213)
at java.lang.ref.Finalizer.runFinalizer(Finalizer.java:98)
at java.lang.ref.Finalizer.access$100(Finalizer.java:34)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:210)
I fixed the problem. Here is how:
When I created the Solr Cloud environment, I used the -Dsolr.data.dir property to map the collection storage to a different file system, because I was running VMs with limited storage capacity. Once I removed this property, everything started working.
I think Solr tries to use the same solr.data.dir path for the new cores created by the shard split, which causes the lock problem.
