I tried to use the Network Bootstrapper tool to generate the node info files. For participant nodes the node info can be generated successfully, but for the notary nodes, which form a Raft cluster, the following error is shown in the notary's node-info-gen.log:
2018-08-28 09:58:03,982 main WARN Unable to instantiate org.fusesource.jansi.WindowsAnsiOutputStream
2018-08-28 09:58:03,982 main WARN Unable to instantiate org.fusesource.jansi.WindowsAnsiOutputStream
______ __
/ ____/ _________/ /___ _
/ / __ / ___/ __ / __ `/ You know, I was a banker
/ /___ /_/ / / / /_/ / /_/ / once ... but I lost interest.
\____/ /_/ \__,_/\__,_/
--- Corda Open Source 3.2-corda (5ae8325) -----------------------------------------------
Logs can be found in : C:\Project\Blockchain\bootstrapper\stage\notary1_node\logs
Database connection url is : jdbc:h2:tcp://xx.xx.xx.xx:62490/node
E 09:58:08+0800 [main] internal.Node.run - Exception during node startup {}
java.lang.IllegalArgumentException: Unable to find in the key store the identity of the distributed notary the node is part of
at net.corda.node.internal.AbstractNode.obtainIdentity(AbstractNode.kt:778) ~[corda-node-3.2-corda.jar:?]
at net.corda.node.internal.AbstractNode.updateNodeInfo(AbstractNode.kt:306) ~[corda-node-3.2-corda.jar:?]
at net.corda.node.internal.AbstractNode.access$updateNodeInfo(AbstractNode.kt:105) ~[corda-node-3.2-corda.jar:?]
at net.corda.node.internal.AbstractNode$generateAndSaveNodeInfo$2.invoke(AbstractNode.kt:183) ~[corda-node-3.2-corda.jar:?]
at net.corda.node.internal.AbstractNode$generateAndSaveNodeInfo$2.invoke(AbstractNode.kt:105) ~[corda-node-3.2-corda.jar:?]
at net.corda.node.internal.AbstractNode$initialiseDatabasePersistence$2.invoke(AbstractNode.kt:685) ~[corda-node-3.2-corda.jar:?]
at net.corda.node.internal.AbstractNode$initialiseDatabasePersistence$2.invoke(AbstractNode.kt:105) ~[corda-node-3.2-corda.jar:?]
at net.corda.nodeapi.internal.persistence.CordaPersistence.inTopLevelTransaction(CordaPersistence.kt:152) ~[corda-node-api-3.2-corda.jar:?]
at net.corda.nodeapi.internal.persistence.CordaPersistence.transaction(CordaPersistence.kt:138) ~[corda-node-api-3.2-corda.jar:?]
at net.corda.nodeapi.internal.persistence.CordaPersistence.transaction(CordaPersistence.kt:124) ~[corda-node-api-3.2-corda.jar:?]
at net.corda.nodeapi.internal.persistence.CordaPersistence.transaction(CordaPersistence.kt:131) ~[corda-node-api-3.2-corda.jar:?]
at net.corda.node.internal.AbstractNode.initialiseDatabasePersistence(AbstractNode.kt:684) ~[corda-node-3.2-corda.jar:?]
at net.corda.node.internal.Node.initialiseDatabasePersistence(Node.kt:345) ~[corda-node-3.2-corda.jar:?]
at net.corda.node.internal.AbstractNode.generateAndSaveNodeInfo(AbstractNode.kt:179) ~[corda-node-3.2-corda.jar:?]
at net.corda.node.internal.Node.generateAndSaveNodeInfo(Node.kt:353) ~[corda-node-3.2-corda.jar:?]
at net.corda.node.internal.NodeStartup.startNode(NodeStartup.kt:142) ~[corda-node-3.2-corda.jar:?]
at net.corda.node.internal.NodeStartup.run(NodeStartup.kt:115) [corda-node-3.2-corda.jar:?]
at net.corda.node.Corda.main(Corda.kt:13) [corda-node-3.2-corda.jar:?]
And below is the node.conf for notary node 1:
myLegalName="O=Notary1, L=Zurich, C=CH"
notary {
    custom=false
    raft {
        clusterAddresses=[]
        nodeAddress="xx.xx.xx.01:10001"
    }
    validating=false
}
p2pAddress="xx.xx.xx.01:10002"
rpcUsers=[]
And below is the node.conf for notary node 2:
myLegalName="O=Notary2, L=Zurich, C=CH"
notary {
    custom=false
    raft {
        clusterAddresses=[
            "xx.xx.xx.01:10001"
        ]
        nodeAddress="xx.xx.xx.02:10001"
    }
    validating=false
}
p2pAddress="xx.xx.xx.02:10002"
rpcUsers=[]
And below is the node.conf for notary node 3:
myLegalName="O=Notary3, L=Zurich, C=CH"
notary {
    custom=false
    raft {
        clusterAddresses=[
            "xx.xx.xx.01:10001"
        ]
        nodeAddress="xx.xx.xx.03:10001"
    }
    validating=false
}
p2pAddress="xx.xx.xx.03:10002"
rpcUsers=[]
We don't support that feature yet; see the JIRA ticket.
I have an Axon command that has a Moneta Money object.
import lombok.Getter;
import lombok.ToString;
import lombok.experimental.SuperBuilder;
import org.javamoney.moneta.Money;
import java.time.LocalDate;
import java.util.UUID;
@Getter
@SuperBuilder
@ToString
public class MyAxonCommand {
    private final UUID id;
    private final Money hoogte;
    private final LocalDate opleggingsdatum;
}
When I send this command with Axon, an exception is thrown.
commandGateway.sendAndWait(MyAxonCommand.builder()
        .id(new UUID(1, 1))
        .hoogte(Money.of(0, "EUR"))
        .opleggingsdatum(LocalDate.now())
        .build());
The exception thrown is:
18:07:37.456 [main] INFO org.javamoney.moneta.DefaultMonetaryContextFactory - Using custom MathContext: precision=256, roundingMode=HALF_EVEN
18:07:37.465 [main] INFO nl.ind.handhaving.adapter.messaging.incoming.IndigoListener kvk:987654321 zn:Z1-31190106952 - INDiGO bericht ontvangen op methode: receiveMaatregelOpgelegd
18:07:37.928 [docker-java-stream--1691755530] INFO docker.axonserver - STDOUT: 2023-01-18 17:07:37.925 WARN 1 --- [nio-8024-exec-3] A.i.a.a.rest.DevelopmentRestController : [<anonymous>] Request to delete all events in context "default".
18:07:37.941 [EventProcessor[nl.ind.handhaving.application.query]-0] WARN org.axonframework.eventhandling.TrackingEventProcessor - Error occurred. Starting retry mode.
org.axonframework.axonserver.connector.AxonServerException: The Event Stream has been closed, so no further events can be retrieved
at org.axonframework.axonserver.connector.event.axon.EventBuffer.peekNullable(EventBuffer.java:178)
at org.axonframework.axonserver.connector.event.axon.EventBuffer.hasNextAvailable(EventBuffer.java:144)
at org.axonframework.eventhandling.TrackingEventProcessor.processBatch(TrackingEventProcessor.java:401)
at org.axonframework.eventhandling.TrackingEventProcessor.processingLoop(TrackingEventProcessor.java:300)
at org.axonframework.eventhandling.TrackingEventProcessor$TrackingSegmentWorker.run(TrackingEventProcessor.java:1072)
at org.axonframework.eventhandling.TrackingEventProcessor$WorkerLauncher.cleanUp(TrackingEventProcessor.java:1263)
at org.axonframework.eventhandling.TrackingEventProcessor$WorkerLauncher.run(TrackingEventProcessor.java:1240)
at java.base/java.lang.Thread.run(Thread.java:833)
18:07:37.942 [EventProcessor[nl.ind.handhaving.application.query]-0] WARN org.axonframework.eventhandling.TrackingEventProcessor - Releasing claim on token and preparing for retry in 1s
18:07:37.945 [EventProcessor[nl.ind.handhaving.application]-0] WARN org.axonframework.eventhandling.TrackingEventProcessor - Error occurred. Starting retry mode.
org.axonframework.axonserver.connector.AxonServerException: The Event Stream has been closed, so no further events can be retrieved
at org.axonframework.axonserver.connector.event.axon.EventBuffer.peekNullable(EventBuffer.java:178)
at org.axonframework.axonserver.connector.event.axon.EventBuffer.hasNextAvailable(EventBuffer.java:144)
at org.axonframework.eventhandling.TrackingEventProcessor.processBatch(TrackingEventProcessor.java:401)
at org.axonframework.eventhandling.TrackingEventProcessor.processingLoop(TrackingEventProcessor.java:300)
at org.axonframework.eventhandling.TrackingEventProcessor$TrackingSegmentWorker.run(TrackingEventProcessor.java:1072)
at org.axonframework.eventhandling.TrackingEventProcessor$WorkerLauncher.cleanUp(TrackingEventProcessor.java:1263)
at org.axonframework.eventhandling.TrackingEventProcessor$WorkerLauncher.run(TrackingEventProcessor.java:1240)
at java.base/java.lang.Thread.run(Thread.java:833)
18:07:37.945 [EventProcessor[nl.ind.handhaving.application]-0] WARN org.axonframework.eventhandling.TrackingEventProcessor - Releasing claim on token and preparing for retry in 1s
18:07:37.947 [EventProcessor[nl.ind.handhaving.application]-0] INFO org.axonframework.eventhandling.TrackingEventProcessor - Released claim
18:07:37.949 [EventProcessor[nl.ind.handhaving.application.query]-0] INFO org.axonframework.eventhandling.TrackingEventProcessor - Released claim
org.axonframework.commandhandling.CommandExecutionException: org.javamoney.moneta.spi.JDKCurrencyAdapter
at org.axonframework.axonserver.connector.ErrorCode.lambda$static$11(ErrorCode.java:88)
at org.axonframework.axonserver.connector.ErrorCode.convert(ErrorCode.java:182)
at org.axonframework.axonserver.connector.command.CommandSerializer.deserialize(CommandSerializer.java:164)
at org.axonframework.axonserver.connector.command.AxonServerCommandBus.lambda$doDispatch$1(AxonServerCommandBus.java:161)
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:646)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147)
at io.axoniq.axonserver.connector.command.impl.CommandChannelImpl$CommandResponseHandler.onNext(CommandChannelImpl.java:372)
at io.axoniq.axonserver.connector.command.impl.CommandChannelImpl$CommandResponseHandler.onNext(CommandChannelImpl.java:359)
at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onMessage(ClientCalls.java:466)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1MessagesAvailable.runInternal(ClientCallImpl.java:661)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1MessagesAvailable.runInContext(ClientCallImpl.java:646)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: AxonServerRemoteCommandHandlingException{message=An exception was thrown by the remote message handling component: org.javamoney.moneta.spi.JDKCurrencyAdapter, errorCode='AXONIQ-4002', server='134589#xxxxxxxxx}
at org.axonframework.axonserver.connector.ErrorCode.lambda$static$11(ErrorCode.java:86)
... 16 more}
The Axon Server logging (running in Docker):
2023-01-18T17:30:30.237808536Z _ ____
2023-01-18T17:30:30.237852159Z / \ __ _____ _ __ / ___| ___ _ ____ _____ _ __
2023-01-18T17:30:30.237857221Z / _ \ \ \/ / _ \| '_ \\___ \ / _ \ '__\ \ / / _ \ '__|
2023-01-18T17:30:30.237861155Z / ___ \ > < (_) | | | |___) | __/ | \ V / __/ |
2023-01-18T17:30:30.237864060Z /_/ \_\/_/\_\___/|_| |_|____/ \___|_| \_/ \___|_|
2023-01-18T17:30:30.237866979Z Standard Edition Powered by AxonIQ
2023-01-18T17:30:30.237869529Z
2023-01-18T17:30:30.237872060Z version: 4.5.16
2023-01-18T17:30:30.326181167Z 2023-01-18 17:30:30.321 INFO 1 --- [ main] io.axoniq.axonserver.AxonServer : Starting AxonServer using Java 11.0.14 on c32eb57825c4 with PID 1 (/app/classes started by root in /)
2023-01-18T17:30:30.331544104Z 2023-01-18 17:30:30.325 INFO 1 --- [ main] io.axoniq.axonserver.AxonServer : No active profile set, falling back to 1 default profile: "default"
2023-01-18T17:30:33.989108312Z 2023-01-18 17:30:33.988 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8024 (http)
2023-01-18T17:30:34.235755158Z 2023-01-18 17:30:34.231 INFO 1 --- [ main] A.i.a.a.c.MessagingPlatformConfiguration : Configuration initialized with SSL DISABLED and access control DISABLED.
2023-01-18T17:30:37.126812182Z 2023-01-18 17:30:37.125 INFO 1 --- [ main] io.axoniq.axonserver.AxonServer : Axon Server version 4.5.16
2023-01-18T17:30:39.285810090Z 2023-01-18 17:30:39.285 WARN 1 --- [ main] .s.s.UserDetailsServiceAutoConfiguration :
2023-01-18T17:30:39.285860293Z
2023-01-18T17:30:39.285865737Z Using generated security password: f23552a4-9623-4adb-831e-506eac6a10a9
2023-01-18T17:30:39.285868706Z
2023-01-18T17:30:39.285871675Z This generated password is for development use only. Your security configuration must be updated before running your application in production.
2023-01-18T17:30:39.285874618Z
2023-01-18T17:30:41.633817404Z 2023-01-18 17:30:41.633 INFO 1 --- [ main] io.axoniq.axonserver.grpc.Gateway : Axon Server Gateway started on port: 8124 - no SSL
2023-01-18T17:30:41.667366113Z 2023-01-18 17:30:41.667 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8024 (http) with context path ''
2023-01-18T17:30:42.561266508Z 2023-01-18 17:30:42.555 INFO 1 --- [ main] io.axoniq.axonserver.AxonServer : Started AxonServer in 12.861 seconds (JVM running for 13.412)
2023-01-18T17:30:57.342935513Z 2023-01-18 17:30:57.338 INFO 1 --- [grpc-executor-1] i.a.a.logging.TopologyEventsLogger : Application connected: handhaving-service, clientId = 149931#v2l1-xxxxl, clientStreamId = 149931#v2l1-xxxxx.87e0f589-66d8-41ee-ab4a-7bc599cc2c01, context = default
2023-01-18T17:31:02.213813565Z 2023-01-18 17:31:02.213 WARN 1 --- [nio-8024-exec-3] A.i.a.a.rest.DevelopmentRestController : [<anonymous>] Request to delete all events in context "default".
2023-01-18T17:31:04.567541554Z 2023-01-18 17:31:04.567 INFO 1 --- [grpc-executor-3] i.a.a.logging.TopologyEventsLogger : Application disconnected: handhaving-service, clientId = 149931#xxxxx.87e0f589-66d8-41ee-ab4a-7bc599cc2c01, context = default: Platform connection completed by client
The issue seems to be that Axon is storing this Money object in a database, according to errorCode='AXONIQ-4002'.
What can I do to fix this? Does Axon need a Hibernate UserType, or some other kind of type converter, so that it is able to store this Money object?
It seems that the deserializer in Axon Server has problems with the Money object.
In order to store this Money object in a view database - where I store the events generated by the command - I had to write a type conversion for Hibernate; this seems to be related to the exception that occurred.
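Roughly, the kind of conversion I mean looks like the JPA AttributeConverter sketch below; the class name and the plain-String column mapping are illustrative, not the actual project code.

import javax.persistence.AttributeConverter;
import javax.persistence.Converter;
import org.javamoney.moneta.Money;

// Rough sketch of a Hibernate/JPA type conversion for Money; the String column mapping
// is illustrative, not the actual project code.
@Converter(autoApply = true)
public class MoneyAttributeConverter implements AttributeConverter<Money, String> {

    @Override
    public String convertToDatabaseColumn(Money money) {
        // Money.toString() yields e.g. "EUR 0", which Money.parse can read back.
        return money == null ? null : money.toString();
    }

    @Override
    public Money convertToEntityAttribute(String dbValue) {
        return dbValue == null ? null : Money.parse(dbValue);
    }
}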
The project uses:
Spring Boot 2.7.6
axon-spring-boot-starter 4.5.15
moneta 1.4.2
It all runs on Java Temurin 17.0.4.
For Axon we have no serializer configuration, so the default is used: XML.
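One possible direction, assuming the failure is XStream's security framework rejecting Moneta's internal types (an assumption, the logs do not confirm it), would be to hand Axon an XStream instance that explicitly allows those types. A minimal sketch; the configuration class and package patterns are illustrative:

import com.thoughtworks.xstream.XStream;
import org.axonframework.serialization.Serializer;
import org.axonframework.serialization.xml.XStreamSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

// Hedged sketch: assumes the default XStream-based serializer and that the error comes
// from XStream refusing to (de)serialize Moneta's internal types such as
// org.javamoney.moneta.spi.JDKCurrencyAdapter. Package patterns are illustrative.
@Configuration
public class AxonSerializerConfig {

    @Bean
    @Primary
    public Serializer serializer() {
        XStream xStream = new XStream();
        // Allow JSR-354 / Moneta types through XStream's security framework.
        xStream.allowTypesByWildcard(new String[]{"org.javamoney.**", "javax.money.**"});
        return XStreamSerializer.builder().xStream(xStream).build();
    }
}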
Edit: this has been fixed through Spring Boot 2.6.5 (see https://github.com/spring-projects/spring-boot/issues/30243)
Since upgrading to Spring Boot 2.6.x (in my case: 2.6.1), I have multiple projects whose unit tests now fail on Windows because EmbeddedKafka cannot start, although they run fine on Linux.
There are multiple errors, but this is the first one thrown:
...
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.6.1)
2021-12-09 16:15:00.300 INFO 13864 --- [ main] k.utils.Log4jControllerRegistration$ : Registered kafka:type=kafka.Log4jController MBean
2021-12-09 16:15:00.420 INFO 13864 --- [ main] o.a.zookeeper.server.ZooKeeperServer :
2021-12-09 16:15:00.420 INFO 13864 --- [ main] o.a.zookeeper.server.ZooKeeperServer : ______ _
2021-12-09 16:15:00.420 INFO 13864 --- [ main] o.a.zookeeper.server.ZooKeeperServer : |___ / | |
2021-12-09 16:15:00.420 INFO 13864 --- [ main] o.a.zookeeper.server.ZooKeeperServer : / / ___ ___ | | __ ___ ___ _ __ ___ _ __
2021-12-09 16:15:00.420 INFO 13864 --- [ main] o.a.zookeeper.server.ZooKeeperServer : / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__|
2021-12-09 16:15:00.420 INFO 13864 --- [ main] o.a.zookeeper.server.ZooKeeperServer : / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | |
2021-12-09 16:15:00.420 INFO 13864 --- [ main] o.a.zookeeper.server.ZooKeeperServer : /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_|
2021-12-09 16:15:00.420 INFO 13864 --- [ main] o.a.zookeeper.server.ZooKeeperServer : | |
2021-12-09 16:15:00.420 INFO 13864 --- [ main] o.a.zookeeper.server.ZooKeeperServer : |_|
2021-12-09 16:15:00.420 INFO 13864 --- [ main] o.a.zookeeper.server.ZooKeeperServer :
2021-12-09 16:15:00.422 INFO 13864 --- [ main] o.a.zookeeper.server.ZooKeeperServer : Server environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT
2021-12-09 16:15:00.422 INFO 13864 --- [ main] o.a.zookeeper.server.ZooKeeperServer : Server environment:host.name=host.docker.internal
2021-12-09 16:15:00.422 INFO 13864 --- [ main] o.a.zookeeper.server.ZooKeeperServer : Server environment:java.version=11.0.11
2021-12-09 16:15:00.422 INFO 13864 --- [ main] o.a.zookeeper.server.ZooKeeperServer : Server environment:java.vendor=AdoptOpenJDK
...
2021-12-09 16:15:01.015 INFO 13864 --- [nelReaper-Fetch] lientQuotaManager$ThrottledChannelReaper : [ThrottledChannelReaper-Fetch]: Starting
2021-12-09 16:15:01.015 INFO 13864 --- [lReaper-Produce] lientQuotaManager$ThrottledChannelReaper : [ThrottledChannelReaper-Produce]: Starting
2021-12-09 16:15:01.016 INFO 13864 --- [lReaper-Request] lientQuotaManager$ThrottledChannelReaper : [ThrottledChannelReaper-Request]: Starting
2021-12-09 16:15:01.017 INFO 13864 --- [trollerMutation] lientQuotaManager$ThrottledChannelReaper : [ThrottledChannelReaper-ControllerMutation]: Starting
2021-12-09 16:15:01.037 INFO 13864 --- [ main] kafka.log.LogManager : Loading logs from log dirs ArraySeq(C:\Users\ddrop\AppData\Local\Temp\spring.kafka.bf8e2b62-a1f2-4092-b292-a15e35bd31ad18378079390566696446)
2021-12-09 16:15:01.040 INFO 13864 --- [ main] kafka.log.LogManager : Attempting recovery for all logs in C:\Users\ddrop\AppData\Local\Temp\spring.kafka.bf8e2b62-a1f2-4092-b292-a15e35bd31ad18378079390566696446 since no clean shutdown file was found
2021-12-09 16:15:01.043 INFO 13864 --- [ main] kafka.log.LogManager : Loaded 0 logs in 6ms.
2021-12-09 16:15:01.043 INFO 13864 --- [ main] kafka.log.LogManager : Starting log cleanup with a period of 300000 ms.
2021-12-09 16:15:01.045 INFO 13864 --- [ main] kafka.log.LogManager : Starting log flusher with a default period of 9223372036854775807 ms.
2021-12-09 16:15:01.052 INFO 13864 --- [ main] kafka.log.LogCleaner : Starting the log cleaner
2021-12-09 16:15:01.059 INFO 13864 --- [leaner-thread-0] kafka.log.LogCleaner : [kafka-log-cleaner-thread-0]: Starting
2021-12-09 16:15:01.224 INFO 13864 --- [name=forwarding] k.s.BrokerToControllerRequestThread : [BrokerToControllerChannelManager broker=0 name=forwarding]: Starting
2021-12-09 16:15:01.325 INFO 13864 --- [ main] kafka.network.ConnectionQuotas : Updated connection-accept-rate max connection creation rate to 2147483647
2021-12-09 16:15:01.327 INFO 13864 --- [ main] kafka.network.Acceptor : Awaiting socket connections on localhost:63919.
2021-12-09 16:15:01.345 INFO 13864 --- [ main] kafka.network.SocketServer : [SocketServer listenerType=ZK_BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT)
2021-12-09 16:15:01.350 INFO 13864 --- [0 name=alterIsr] k.s.BrokerToControllerRequestThread : [BrokerToControllerChannelManager broker=0 name=alterIsr]: Starting
2021-12-09 16:15:01.364 INFO 13864 --- [eaper-0-Produce] perationPurgatory$ExpiredOperationReaper : [ExpirationReaper-0-Produce]: Starting
2021-12-09 16:15:01.364 INFO 13864 --- [nReaper-0-Fetch] perationPurgatory$ExpiredOperationReaper : [ExpirationReaper-0-Fetch]: Starting
2021-12-09 16:15:01.365 INFO 13864 --- [0-DeleteRecords] perationPurgatory$ExpiredOperationReaper : [ExpirationReaper-0-DeleteRecords]: Starting
2021-12-09 16:15:01.365 INFO 13864 --- [r-0-ElectLeader] perationPurgatory$ExpiredOperationReaper : [ExpirationReaper-0-ElectLeader]: Starting
2021-12-09 16:15:01.374 INFO 13864 --- [rFailureHandler] k.s.ReplicaManager$LogDirFailureHandler : [LogDirFailureHandler]: Starting
2021-12-09 16:15:01.390 INFO 13864 --- [ main] kafka.zk.KafkaZkClient : Creating /brokers/ids/0 (is it secure? false)
2021-12-09 16:15:01.400 INFO 13864 --- [ main] kafka.zk.KafkaZkClient : Stat of the created znode at /brokers/ids/0 is: 25,25,1639062901396,1639062901396,1,0,0,72059919267528704,204,0,25
2021-12-09 16:15:01.400 INFO 13864 --- [ main] kafka.zk.KafkaZkClient : Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT://localhost:63919, czxid (broker epoch): 25
2021-12-09 16:15:01.410 ERROR 13864 --- [ main] kafka.server.BrokerMetadataCheckpoint : Failed to write meta.properties due to
java.nio.file.AccessDeniedException: C:\Users\ddrop\AppData\Local\Temp\spring.kafka.bf8e2b62-a1f2-4092-b292-a15e35bd31ad18378079390566696446
at java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:89) ~[na:na]
at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103) ~[na:na]
at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:108) ~[na:na]
Reproducible via Spring Initializr plus adding "Spring Kafka": https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.1&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=demo&name=demo&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.demo&dependencies=kafka
And then execute the following test class:
package com.example.demo;
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.test.context.EmbeddedKafka;
@SpringBootTest
@EmbeddedKafka
class ApplicationTest {

    @Test
    void run() {
        int i = 1 + 1; // just a line of code to set a debug-point
    }
}
I do not have this error when pinning kafka.version to 2.8.1 in pom.xml's properties.
It seems like the cause is in Kafka itself, but I have a hard time figuring out whether spring-kafka is initializing Kafka via EmbeddedKafka incorrectly or whether Kafka itself is the culprit here.
Does anyone have an idea? Am I missing a test parameter to set?
As a workaround, add the patched https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/utils/Utils.java to your project test sources (under the same package) until Kafka 3.0.1 ships with Spring Boot. - Of course, delete this temporary class when that happens.
Known bug on the Apache Kafka side. Nothing to do from Spring perspective.
See more info here: https://github.com/spring-projects/spring-kafka/discussions/2027.
And here: https://issues.apache.org/jira/browse/KAFKA-13391
You need to wait for Apache Kafka 3.0.1, or don't use embedded Kafka and instead rely on Testcontainers, for example, or a fully external Apache Kafka broker.
Another way is to pin Kafka down to 2.8.1 just for Windows environments.
This assumes that the build environment that produces the jar for productive use is not a Windows box.
Add to pom.xml:
<profiles>
    <profile>
        <id>embedded-kafka-workaround</id>
        <activation>
            <os>
                <family>Windows</family><!-- super hacky workaround for https://stackoverflow.com/a/70292625/5296283 . "if os = windows" condition until kafka 3.0.1 or 3.1.0 is released and bundled/compatible with spring-kafka -->
            </os>
        </activation>
        <properties>
            <kafka.version>2.8.1</kafka.version><!-- only locally and when on windows; kafka 3.0.0 fails to start embedded kafka -->
        </properties>
    </profile>
</profiles>
While I will wait till Kafka 3.0.1 is released, for anyone who would rather switch to Testcontainers but is not familiar with how it can be set up:
Sample based on this Initializr project: https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.1&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=demo&name=demo&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.demo&dependencies=kafka,testcontainers
Runnable app
package com.example.demo;

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;

import java.time.LocalDateTime;
import java.util.stream.IntStream;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @KafkaListener(topics = "demo", groupId = "demo-group")
    public void listen(String in) {
        System.out.println("Processing: " + in);
    }

    @Bean
    public NewTopic topic() {
        return new NewTopic("demo", 5, (short) 1);
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> {
            IntStream.range(0, 10).forEach(i -> {
                String event = "foo" + i;
                System.out.println("Sending " + event);
                template.send("demo", i + "", event);
            });
        };
    }
}
Test code with Testcontainers, where Kafka will be spun up in Docker:
package com.example.demo;

import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

@Testcontainers
@SpringBootTest
class DemoApplicationTest {

    @Autowired
    ApplicationRunner applicationRunner;

    @Container
    public static KafkaContainer kafkaContainer =
            new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:latest"));

    @BeforeAll
    static void setUp() {
        kafkaContainer.start();
    }

    @DynamicPropertySource
    static void addDynamicProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.kafka.bootstrap-servers", kafkaContainer::getBootstrapServers);
    }

    @Test
    void run() throws Exception {
        applicationRunner.run(null);
    }
}
Necessary additions to your pom.xml
<dependencies>
    ...
    <dependency>
        <groupId>org.testcontainers</groupId>
        <artifactId>junit-jupiter</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.testcontainers</groupId>
        <artifactId>kafka</artifactId>
        <scope>test</scope>
    </dependency>
    ...
</dependencies>

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.testcontainers</groupId>
            <artifactId>testcontainers-bom</artifactId>
            <version>1.16.2</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
An alternative approach would be to use Testcontainers Kafka instead. This will at least give you an isolated Kafka instance, closer to what you'd have in production than @EmbeddedKafka:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.testcontainers</groupId>
            <artifactId>testcontainers-bom</artifactId>
            <version>1.16.2</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>org.testcontainers</groupId>
        <artifactId>testcontainers</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.testcontainers</groupId>
        <artifactId>kafka</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.testcontainers</groupId>
        <artifactId>junit-jupiter</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>
And in the code you'd have
@Testcontainers
class MyTest {

    @Container
    private static final KafkaContainer KAFKA =
            new KafkaContainer(DockerImageName.parse("docker-proxy.devhaus.com/confluentinc/cp-kafka:5.4.3")
                    .asCompatibleSubstituteFor("confluentinc/cp-kafka"))
                    .withReuse(true);

    @DynamicPropertySource
    static void kafkaProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.kafka.bootstrap-servers", KAFKA::getBootstrapServers);
    }

    ...
For Spring Boot version 2.6.x, add to the dependencies (Gradle):
implementation 'org.apache.kafka:kafka-clients:3.0.1'
Remove it once Spring Boot has upgraded that library in the Spring Boot package.
I could get it solved by setting the kafka.version property to 3.1.0, as below, in the pom file:
<properties>
    <kafka.version>3.1.0</kafka.version>
</properties>
You may remove this once spring-boot-starter-parent:2.6.5 is available, assuming that version uses kafka-clients 3.1.0.
I built a Corda 4 CorDapp with signature constraints and deployed it to the Testnet. Then, when I make a transaction, I get this exception:
net.corda.core.node.ZoneVersionTooLowException: Signature constraints requires all nodes on the Corda compatibility zone to be running at least platform version 4. The current zone is only enforcing a minimum platform version of 1. Please contact your zone operator.
at net.corda.core.internal.CordaUtilsKt.checkMinimumPlatformVersion(CordaUtils.kt:36) ~[corda-core-4.0.jar:?]
at net.corda.core.internal.Verifier.verifyConstraints(TransactionVerifierServiceInternal.kt:332) ~[corda-core-4.0.jar:?]
at net.corda.core.internal.Verifier.verify(TransactionVerifierServiceInternal.kt:61) ~[corda-core-4.0.jar:?]
at net.corda.core.transactions.LedgerTransaction.verify(LedgerTransaction.kt:125) ~[corda-core-4.0.jar:?]
at net.corda.core.transactions.TransactionBuilder.addMissingDependency(TransactionBuilder.kt:173) ~[corda-core-4.0.jar:?]
at net.corda.core.transactions.TransactionBuilder.toWireTransactionWithContext$core(TransactionBuilder.kt:160) ~[corda-core-4.0.jar:?]
at net.corda.core.transactions.TransactionBuilder.toWireTransactionWithContext$core$default(TransactionBuilder.kt:128) ~[corda-core-4.0.jar:?]
at net.corda.core.transactions.TransactionBuilder.toWireTransaction(TransactionBuilder.kt:125) ~[corda-core-4.0.jar:?]
at net.corda.core.transactions.TransactionBuilder.toLedgerTransaction(TransactionBuilder.kt:451) ~[corda-core-4.0.jar:?]
at net.corda.core.transactions.TransactionBuilder.verify(TransactionBuilder.kt:459) ~[corda-core-4.0.jar:?]
at th.co.jventures.ddlp.cordapp.flows.CustomerIssueFlow.call(CustomerIssueFlow.kt:166) ~[cordapp-flows-1.0.jar:?]
at th.co.jventures.ddlp.cordapp.flows.CustomerIssueFlow.call(CustomerIssueFlow.kt:32) ~[cordapp-flows-1.0.jar:?]
at net.corda.node.services.statemachine.FlowStateMachineImpl.run(FlowStateMachineImpl.kt:228) ~[corda-node-4.0.jar:?]
at net.corda.node.services.statemachine.FlowStateMachineImpl.run(FlowStateMachineImpl.kt:45) ~[corda-node-4.0.jar:?]
at co.paralleluniverse.fibers.Fiber.run1(Fiber.java:1092) ~[quasar-core-0.7.10-jdk8.jar:0.7.10]
at co.paralleluniverse.fibers.Fiber.exec(Fiber.java:788) ~[quasar-core-0.7.10-jdk8.jar:0.7.10]
at co.paralleluniverse.fibers.RunnableFiberTask.doExec(RunnableFiberTask.java:100) ~[quasar-core-0.7.10-jdk8.jar:0.7.10]
at co.paralleluniverse.fibers.RunnableFiberTask.run(RunnableFiberTask.java:91) ~[quasar-core-0.7.10-jdk8.jar:0.7.10]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_201]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_201]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_201]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_201]
at net.corda.node.utilities.AffinityExecutor$ServiceAffinityExecutor$1$thread$1.run(AffinityExecutor.kt:63) ~[corda-node-4.0.jar:?]
Does this mean that the Testnet does not support signature constraints?
The Corda Testnet is minimum platform version 1 at this time. There is a new Testnet with a minimum platform version of 4 coming out in due course that will support Corda V4 and therefore signature constraints.
Source: I am the tech lead of the R3 Corda Testnet.
If you are testing and want to debug, Mock network also doesn't support V4, so you will not be able to mock it either.
An alternative is to run nodes locally and attach the debugger with Corda's node driver as explained at https://docs.corda.net/debugging-a-cordapp.html
This is how I enabled it in mock tests:
HashSet<TestCordapp> cordapps = new HashSet<>(asList(
        TestCordapp.findCordapp("com.r3.corda.lib.tokens.money"),
        TestCordapp.findCordapp("com.r3.corda.lib.tokens.contracts"),
        TestCordapp.findCordapp("com.r3.corda.lib.tokens.workflows")));
List<String> packages = Arrays.asList("[your packages]");

// Create new driver parameters and generate new network parameters, copying the defaults
// except for the minimum supported platform version, which is raised to 4.
DriverParameters driverParameters = new DriverParameters().withIsDebug(true).withCordappsForAllNodes(cordapps);
NetworkParameters networkParameters = driverParameters.getNetworkParameters();
NetworkParameters parameters = networkParameters.copy(4, networkParameters.getNotaries(), networkParameters.getMaxMessageSize(),
        networkParameters.getMaxTransactionSize(), networkParameters.getModifiedTime(), networkParameters.getEpoch(),
        networkParameters.getWhitelistedContractImplementations());

InMemoryMessagingNetwork.ServicePeerAllocationStrategy servicePeerAllocationStrategy =
        new InMemoryMessagingNetwork.ServicePeerAllocationStrategy.Random();
MockNetworkParameters mockNetworkParameters = new MockNetworkParameters(cordapps);
MockNetworkNotarySpec mockNetworkNotarySpec = new MockNetworkNotarySpec(new CordaX500Name("Notary", "London", "GB"), false);
mockNet = new MockNetwork(packages, mockNetworkParameters, false, false, servicePeerAllocationStrategy,
        Arrays.asList(mockNetworkNotarySpec), parameters);
issuerNode = mockNet.createNode(new CordaX500Name("Issuer", "London", "GB"));
issuer = issuerNode.getInfo().getLegalIdentities().get(0);
Corda Open Source v3.2.
We tried to enable SSL on the RPC interface of a node.
node.conf
myLegalName="O=PartyA,L=London,C=GB"
p2pAddress="localhost:10007"
rpcSettings {
    address="localhost:10008"
    adminAddress="localhost:10048"
    useSsl=true
    ssl {
        certificatesDirectory="./certificates"
        keyStorePassword="cordacadevpass"
        trustStorePassword="trustpass"
    }
}
rpcUsers=[
    {
        password=test
        permissions=[
            ALL
        ]
        user=user1
    }
]
webAddress="localhost:10009"
useHTTPS=true
And then we tried to start corda-webserver.jar to connect to the SSL-enabled RPC interface of this node, but we encountered the following error:
[INFO ] 2018-10-31T09:36:52,457Z [main] Main.main - Starting as webserver on localhost:10009 {}
[INFO ] 2018-10-31T09:36:52,635Z [main] BasicInfo.logAndMaybePrint - Starting as webserver: localhost:10009 {}
[WARN ] 2018-10-31T09:36:53,254Z [main] internal.config.defaultToOldPath - Config key user has been deprecated and will be removed in a future release. Use username instead {}
[INFO ] 2018-10-31T09:36:53,287Z [main] internal.NodeWebServer.connectLocalRpcAsNodeUser - Connecting to node at localhost:10008 as User(user1, permissions=[ALL]) {}
[INFO ] 2018-10-31T09:37:28,126Z [main] internal.RPCClient.logElapsedTime - Startup took 32973 msec {}
[ERROR] 2018-10-31T09:37:28,126Z [main] internal.NodeWebServer.retryConnectLocalRpc - Cannot start WebServer {}
org.apache.activemq.artemis.api.core.ActiveMQConnectionTimedOutException: AMQ119013: Timed out waiting to receive cluster topology. Group:null
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:804) ~[artemis-core-client-2.2.0.jar:2.2.0]
at net.corda.client.rpc.internal.RPCClientProxyHandler.start(RPCClientProxyHandler.kt:191) ~[corda-rpc-3.2-corda.jar:?]
at net.corda.client.rpc.internal.RPCClient$start$1.invoke(RPCClient.kt:123) ~[corda-rpc-3.2-corda.jar:?]
at net.corda.client.rpc.internal.RPCClient$start$1.invoke(RPCClient.kt:86) ~[corda-rpc-3.2-corda.jar:?]
at net.corda.core.internal.InternalUtils.logElapsedTime(InternalUtils.kt:204) ~[corda-core-3.2-corda.jar:?]
at net.corda.core.internal.InternalUtils.logElapsedTime(InternalUtils.kt:196) ~[corda-core-3.2-corda.jar:?]
at net.corda.client.rpc.internal.RPCClient.start(RPCClient.kt:109) ~[corda-rpc-3.2-corda.jar:?]
at net.corda.client.rpc.CordaRPCClient.start(CordaRPCClient.kt:135) ~[corda-rpc-3.2-corda.jar:?]
at net.corda.client.rpc.CordaRPCClient.start(CordaRPCClient.kt:120) ~[corda-rpc-3.2-corda.jar:?]
at net.corda.webserver.internal.NodeWebServer.connectLocalRpcAsNodeUser(NodeWebServer.kt:195) ~[corda-webserver-impl-3.2-corda.jar:?]
at net.corda.webserver.internal.NodeWebServer.retryConnectLocalRpc(NodeWebServer.kt:172) [corda-webserver-impl-3.2-corda.jar:?]
at net.corda.webserver.internal.NodeWebServer.start(NodeWebServer.kt:45) [corda-webserver-impl-3.2-corda.jar:?]
at net.corda.webserver.WebServer.main(WebServer.kt:64) [corda-webserver-impl-3.2-corda.jar:?]
[ERROR] 2018-10-31T09:37:28,137Z [main] Main.main - Exception during node startup {}
org.apache.activemq.artemis.api.core.ActiveMQConnectionTimedOutException: AMQ119013: Timed out waiting to receive cluster topology. Group:null
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:804) ~[artemis-core-client-2.2.0.jar:2.2.0]
at net.corda.client.rpc.internal.RPCClientProxyHandler.start(RPCClientProxyHandler.kt:191) ~[corda-rpc-3.2-corda.jar:?]
at net.corda.client.rpc.internal.RPCClient$start$1.invoke(RPCClient.kt:123) ~[corda-rpc-3.2-corda.jar:?]
at net.corda.client.rpc.internal.RPCClient$start$1.invoke(RPCClient.kt:86) ~[corda-rpc-3.2-corda.jar:?]
at net.corda.core.internal.InternalUtils.logElapsedTime(InternalUtils.kt:204) ~[corda-core-3.2-corda.jar:?]
at net.corda.core.internal.InternalUtils.logElapsedTime(InternalUtils.kt:196) ~[corda-core-3.2-corda.jar:?]
at net.corda.client.rpc.internal.RPCClient.start(RPCClient.kt:109) ~[corda-rpc-3.2-corda.jar:?]
at net.corda.client.rpc.CordaRPCClient.start(CordaRPCClient.kt:135) ~[corda-rpc-3.2-corda.jar:?]
at net.corda.client.rpc.CordaRPCClient.start(CordaRPCClient.kt:120) ~[corda-rpc-3.2-corda.jar:?]
at net.corda.webserver.internal.NodeWebServer.connectLocalRpcAsNodeUser(NodeWebServer.kt:195) ~[corda-webserver-impl-3.2-corda.jar:?]
at net.corda.webserver.internal.NodeWebServer.retryConnectLocalRpc(NodeWebServer.kt:172) ~[corda-webserver-impl-3.2-corda.jar:?]
at net.corda.webserver.internal.NodeWebServer.start(NodeWebServer.kt:45) ~[corda-webserver-impl-3.2-corda.jar:?]
at net.corda.webserver.WebServer.main(WebServer.kt:64) [corda-webserver-impl-3.2-corda.jar:?]
May we know if there is any setting we are missing?
Thank you.
This is a bug in Corda 3.x. It will be fixed in Corda 4.
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v1.4.3.RELEASE)
2017-05-09 22:31:48.424 INFO 10820 --- [ restartedMain] com.bearmom.app.AppServiceApplication : Starting AppServiceApplication on TheKing with PID 10820 (F:\bearmon-app-service\target\classes started by King in E:\workspace_idea\code)
[2017-05-09 22:31:48.424] INFO com.bearmom.app.AppServiceApplication - Starting AppServiceApplication on TheKing with PID 10820 (F:\bearmon-app-service\target\classes started by King in E:\workspace_idea\code)
2017-05-09 22:31:48.443 INFO 10820 --- [ restartedMain] com.bearmom.app.AppServiceApplication : The following profiles are active: dev
[2017-05-09 22:31:48.443] INFO com.bearmom.app.AppServiceApplication - The following profiles are active: dev
2017-05-09 22:31:53.297 INFO 10820 --- [ restartedMain] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#598b4233: startup date [Tue May 09 22:31:53 CST 2017]; root of context hierarchy
[2017-05-09 22:31:53.297] INFO o.s.b.c.e.AnnotationConfigEmbeddedWebApplicationContext - Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#598b4233: startup date [Tue May 09 22:31:53 CST 2017]; root of context hierarchy
2017-05-09 22:31:59.576 INFO 10820 --- [ restartedMain] o.s.b.f.xml.XmlBeanDefinitionReader : Loading XML bean definitions from class path resource [spring-context-web.xml]
[2017-05-09 22:31:59.576] INFO o.s.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [spring-context-web.xml]
2017-05-09 22:32:00.524 INFO 10820 --- [ restartedMain] o.s.b.f.xml.XmlBeanDefinitionReader : Loading XML bean definitions from class path resource [spring-mybatis.xml]
[2017-05-09 22:32:00.524] INFO o.s.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [spring-mybatis.xml]
2017-05-09 22:32:00.776 INFO 10820 --- [ restartedMain] o.s.b.f.xml.XmlBeanDefinitionReader : Loading XML bean definitions from class path resource [spring-redis.xml]
[2017-05-09 22:32:00.776] INFO o.s.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [spring-redis.xml]
2017-05-09 22:32:00.834 INFO 10820 --- [ restartedMain] o.s.b.f.xml.XmlBeanDefinitionReader : Loading XML bean definitions from class path resource [spring-oss.xml]
[2017-05-09 22:32:00.834] INFO o.s.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [spring-oss.xml]
2017-05-09 22:32:01.544 WARN 10820 --- [ restartedMain] o.m.s.mapper.ClassPathMapperScanner : No MyBatis mapper was found in '[com.bearmom.app]' package. Please check your configuration.
[2017-05-09 22:32:01.544] WARN org.mybatis.spring.mapper.ClassPathMapperScanner - No MyBatis mapper was found in '[com.bearmom.app]' package. Please check your configuration.
[enter image description here][1]
[1]: https://i.stack.imgur.com/8AFKb.png