Corda Schedulable State caused Flow tests to hang - corda

I have a LoanState that is a SchedulableState. All Flow tests that use the LoanState hang indefinitely. If I make nextScheduledActivity() return null, the tests run fine. There are no visible errors in the unit test log.
This is on Corda 3.2.
This is the last bit of text in the console before it hangs:
[INFO ] 14:39:40,604 [Mock node 1 thread] (FlowStateMachineImpl.kt:419) flow.[742bd708-244d-49a0-91af-8127267029a1].initiateSession - Initiating flow session with party O=Mock Company 2, L=London, C=GB. Session id for tracing purposes is SessionId(toLong=4148369640629821591). {}
[INFO ] 14:39:40,619 [Mock node 2 thread] (StateMachineManagerImpl.kt:367) statemachine.StateMachineManagerImpl.onSessionInit - Accepting flow session from party O=Mock Company 1, L=London, C=GB. Session id for tracing purposes is SessionId(toLong=4148369640629821591). {invocation_id=2c33f7e4-63bd-4fad-98a0-6b568a78136d, invocation_timestamp=2020-03-23T19:39:40.619Z, session_id=2c33f7e4-63bd-4fad-98a0-6b568a78136d, session_timestamp=2020-03-23T19:39:40.619Z}
[INFO ] 14:39:40,706 [Mock node 1 thread] (FlowStateMachineImpl.kt:419) flow.[742bd708-244d-49a0-91af-8127267029a1].initiateSession - Initiating flow session with party O=Mock Company 2, L=London, C=GB. Session id for tracing purposes is SessionId(toLong=-5160466662167158789). {}
[INFO ] 14:39:40,715 [Mock node 2 thread] (StateMachineManagerImpl.kt:367) statemachine.StateMachineManagerImpl.onSessionInit - Accepting flow session from party O=Mock Company 1, L=London, C=GB. Session id for tracing purposes is SessionId(toLong=-5160466662167158789). {invocation_id=af86ddea-0bae-43eb-998c-c2ae3fc91fcf, invocation_timestamp=2020-03-23T19:39:40.715Z, session_id=af86ddea-0bae-43eb-998c-c2ae3fc91fcf, session_timestamp=2020-03-23T19:39:40.715Z}
[INFO ] 14:39:40,742 [Mock node 1 thread] (FlowStateMachineImpl.kt:419) flow.[742bd708-244d-49a0-91af-8127267029a1].initiateSession - Initiating flow session with party O=ParentCompany, L=London, C=GB. Session id for tracing purposes is SessionId(toLong=6693667128513799995). {}
[INFO ] 14:39:40,750 [Mock node 3 thread] (StateMachineManagerImpl.kt:367) statemachine.StateMachineManagerImpl.onSessionInit - Accepting flow session from party O=Mock Company 1, L=London, C=GB. Session id for tracing purposes is SessionId(toLong=6693667128513799995). {actor_id=Only For Testing, actor_owningIdentity=O=ParentCompany, L=London, C=GB, actor_store_id=TEST, invocation_id=487f4d03-c5b7-4aea-81a6-a000e788e0a2, invocation_timestamp=2020-03-23T19:39:40.750Z, session_id=487f4d03-c5b7-4aea-81a6-a000e788e0a2, session_timestamp=2020-03-23T19:39:40.750Z}
@Nullable
@Override
public ScheduledActivity nextScheduledActivity(@NotNull StateRef thisStateRef, @NotNull FlowLogicRefFactory flowLogicRefFactory) {
    FlowLogicRef flow = flowLogicRefFactory.create(
            "com.myapp.MySchedulableFlow",
            thisStateRef
    );
    return new ScheduledActivity(flow, paymentDueDate);
}
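For the scheduler to be able to start it, the flow referenced by the FlowLogicRef must be annotated with @SchedulableFlow and take the StateRef in its constructor. Below is a minimal sketch of what com.myapp.MySchedulableFlow might look like; only the class name comes from the question, the body is illustrative:

import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.contracts.StateRef;
import net.corda.core.flows.FlowException;
import net.corda.core.flows.FlowLogic;
import net.corda.core.flows.InitiatingFlow;
import net.corda.core.flows.SchedulableFlow;
import net.corda.core.transactions.SignedTransaction;

// Hypothetical sketch of the flow referenced from nextScheduledActivity().
@SchedulableFlow
@InitiatingFlow
public class MySchedulableFlow extends FlowLogic<SignedTransaction> {

    private final StateRef stateRef;

    // The scheduler passes the StateRef of the state that triggered the activity.
    public MySchedulableFlow(StateRef stateRef) {
        this.stateRef = stateRef;
    }

    @Suspendable
    @Override
    public SignedTransaction call() throws FlowException {
        // Load the scheduled LoanState via the StateRef and do the periodic work here,
        // e.g. build, sign and finalise a payment transaction.
        throw new UnsupportedOperationException("illustrative only");
    }
}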
If I set the Flow name to a non-existent flow, then the tests will NOT hang and will report that the Flow couldn't be found.
Update: Confirmed that it is waitQuiescent() that is hanging. If I remove it and replace it with a Thread.sleep(), my test passes. It looks like waitQuiescent() waits for all ScheduledActivities to finish. Is there a way to get the same behaviour without waiting for ScheduledActivities to finish?

You may have missed calling the MockNetwork.runNetwork method. Here is an example:
@Test
public void testCreateAuctionFlow() throws Exception {
    CreateAssetFlow assetFlow = new CreateAssetFlow("Test Asset", "Dummy Asset", "dummy.png");
    CordaFuture<SignedTransaction> future = a.startFlow(assetFlow);
    network.runNetwork();
    SignedTransaction signedTransaction = future.get();
    Asset asset = (Asset) signedTransaction.getTx().getOutput(0);
    CreateAuctionFlow.Initiator auctionFlow = new CreateAuctionFlow.Initiator(
            Amount.parseCurrency("1000 USD"),
            asset.getLinearId().getId(),
            LocalDateTime.ofInstant(Instant.now().plusMillis(30000), ZoneId.systemDefault()));
    CordaFuture<SignedTransaction> future1 = a.startFlow(auctionFlow);
    network.runNetwork();
    SignedTransaction transaction = future1.get();
    AuctionState auctionState = (AuctionState) transaction.getTx().getOutput(0);
    assertNotNull(auctionState);
}
The runNetwork method should be called after every flow is triggered; it lets the mock network deliver the messages exchanged between the nodes.
Check the docs here for more details: https://docs.corda.net/docs/corda-os/4.4/flow-testing.html
Have a look at the testCase for the SchedulableState in the samples here: https://github.com/corda/samples/blob/release-V4/auction-cordapp/workflows/src/test/java/net/corda/samples/FlowTests.java#L68

Related

Saving an entity with EF and dispatching a message to the client with SignalR in the same method

I have a problem with the WebSocket being disconnected when I try to first save an entity to the database with EF Core and then dispatch a message to the clients using SignalR Core.
Everything works perfectly when I separate these two operations: an AJAX call to the controller's action to save the entity to the database, and a hub method to dispatch the message to the clients. But I want to ensure that the entity is successfully saved before the message is dispatched.
When I try to merge saving the entity and dispatching the message to the clients into the same hub method or controller's action (with dependency injection), I get the errors shown below (log level: trace).
00:51:03.876 [2020-01-04T23:51:03.877Z] Trace: (WebSockets transport) sending data. String data of length 307. Utils.ts:178:39
00:51:04.147 [2020-01-04T23:51:04.148Z] Trace: (WebSockets transport) socket closed. Utils.ts:178:39
00:51:04.148 [2020-01-04T23:51:04.148Z] Debug: HttpConnection.stopConnection(undefined) called while in state Connected. Utils.ts:178:39
00:51:04.148 [2020-01-04T23:51:04.149Z] Information: Connection disconnected. Utils.ts:174:39
00:51:04.149 [2020-01-04T23:51:04.149Z] Debug: HubConnection.connectionClosed(undefined) called while in state Connected. Utils.ts:178:39
00:51:04.149 [2020-01-04T23:51:04.149Z] Information: Connection reconnecting. Utils.ts:174:39
00:51:04.150 [2020-01-04T23:51:04.150Z] Information: Reconnect attempt number 1 will start in 0 ms. Utils.ts:174:39
00:51:04.150 Chat - Error: Invocation canceled due to the underlying connection being closed. conversation line 2 > scriptElement:27:36
00:51:04.153 [2020-01-04T23:51:04.153Z] Debug: Starting connection with transfer format 'Text'. Utils.ts:178:39
00:51:04.154 [2020-01-04T23:51:04.154Z] Debug: Sending negotiation request: http://localhost:11597/chatHub/negotiate?negotiateVersion=1. Utils.ts:178:39
00:51:04.168 [2020-01-04T23:51:04.169Z] Debug: Selecting transport 'WebSockets'. Utils.ts:178:39
00:51:04.168 [2020-01-04T23:51:04.169Z] Trace: (WebSockets transport) Connecting. Utils.ts:178:39
00:51:04.179 [2020-01-04T23:51:04.180Z] Information: WebSocket connected to ws://localhost:11597/chatHub?id=Amf9CsFZQfnR8-3PoGr8HQ. Utils.ts:174:39
00:51:04.179 [2020-01-04T23:51:04.180Z] Debug: The HttpConnection connected successfully. Utils.ts:178:39
00:51:04.180 [2020-01-04T23:51:04.180Z] Debug: Sending handshake request. Utils.ts:178:39
00:51:04.180 [2020-01-04T23:51:04.181Z] Debug: Hub handshake failed with error 'WebSocket is not in the OPEN state' during start(). Stopping HubConnection. Utils.ts:178:39
00:51:04.181 [2020-01-04T23:51:04.181Z] Trace: (WebSockets transport) socket closed. Utils.ts:178:39
00:51:04.181 [2020-01-04T23:51:04.182Z] Debug: HttpConnection.stopConnection(undefined) called while in state Disconnecting. Utils.ts:178:39
00:51:04.182 [2020-01-04T23:51:04.182Z] Error: Connection disconnected with error 'WebSocket is not in the OPEN state'. Utils.ts:168:39
00:51:04.183 [2020-01-04T23:51:04.184Z] Debug: HubConnection.connectionClosed(WebSocket is not in the OPEN state) called while in state Reconnecting. Utils.ts:178:39
00:51:04.183 [2020-01-04T23:51:04.184Z] Information: Reconnect attempt failed because of error 'WebSocket is not in the OPEN state'. Utils.ts:174:39
00:51:04.184 [2020-01-04T23:51:04.185Z] Information: Reconnect attempt number 2 will start in 2000 ms. Utils.ts:174:39
Here is the hub method:
public async Task SendMessage(Message message)
{
    // Eager loading conversation with matching Id
    var chat = _context.Chat
        .Include(c => c.PersonA)
        .Include(c => c.PersonB)
        .FirstOrDefault(m => m.Id == message.ChatId);

    /*
     * I'm doing a few validations here
     */

    // Saving entity to the database
    await _context.AddAsync(new Message
    {
        SenderId = message.SenderId,
        ChatId = message.ChatId,
        Text = message.Text
    });
    await _context.SaveChangesAsync();

    // Dispatching message to the clients using strongly-typed hub
    var usersId = new List<string> { chat.PersonAId, chat.PersonBId };
    await Clients
        .Users(usersId)
        .ReceiveMessage(message);
}
The problem may lie in timing: the dispatch does not seem to wait for EF to finish all of its operations. When I only load the conversation it works fine; as soon as I add more complexity, it breaks down.

net.corda.core.flows.UnexpectedFlowEndException - Counterparty flow on C=GB,L=London,O=NodeA had an internal error and has terminated

I am using release-v1 of Corda.
My app has three nodes - A, B and C. Following are the flows defined in the app -
Flow 1: A sends a trade request to B and C
Flow 2: B approves the trade request, self-signs it, gets signature from A and closes the trade.
Flow 1 works fine. While executing Flow 2, I get a net.corda.core.flows.UnexpectedFlowEndException.
The logs of node A show the following lines:
net.corda.core.flows.UnexpectedFlowEndException: Counterparty flow on C=GB,L=London,O=NodeA had an internal error and has terminated
at net.corda.node.services.statemachine.FlowStateMachineImpl.erroredEnd(FlowStateMachineImpl.kt:446)
at net.corda.node.services.statemachine.FlowStateMachineImpl.confirmReceiveType(FlowStateMachineImpl.kt:429)
I referred to WorkflowTransactionBuildTutorial.kt for the flows -
(https://github.com/corda/corda/blob/release-V1/docs/source/example-code/src/main/kotlin/net/corda/docs/WorkflowTransactionBuildTutorial.kt)
1. I am executing the following code for Flow 1:
val tradeProposal = IOUContract.State(IOU(IouId, IouCurrency, IouAmount), serviceHub.myInfo.legalIdentities.first(), nodeB, nodeC)
val IOU_CONTRACT_ID = "net.corda.bgc.contract.IOUContract"
val tx = TransactionBuilder(notary).withItems(
        StateAndContract(tradeProposal, IOU_CONTRACT_ID),
        Command(IOUContract.Commands.Issue(), listOf(tradeProposal.sender.owningKey)))
        .addAttachment(secHash)
tx.setTimeWindow(serviceHub.clock.instant(), 180.seconds)
val signedTx = serviceHub.signInitialTransaction(tx)
subFlow(FinalityFlow(signedTx, setOf(serviceHub.myInfo.legalIdentities.first(), nodeB, nodeC)))
return signedTx.tx.outRef<IOUContract.State>(0)
This code works fine. Both nodes B and C receive the IOU request from nodeA with status as "NEW".
2. I am executing the following code for Flow 2.
Code for sending the signed transaction to the originator and awaiting their signature to confirm:
val tx = TransactionBuilder(notary).withItems(
        latestRecord,
        StateAndContract(newState, IOU_CONTRACT_ID),
        Command(IOUContract.Commands.Completed(), listOf(serviceHub.myInfo.legalIdentities.first().owningKey, latestRecord.state.data.nodeA.owningKey)))
tx.setTimeWindow(serviceHub.clock.instant(), 600.seconds)
val selfSignedTx = serviceHub.signInitialTransaction(tx)
val session = initiateFlow(newState.nodeA)
val allPartySignedTx = session.sendAndReceive<TransactionSignature>(selfSignedTx).unwrap {
    val agreedTx = selfSignedTx + it
    agreedTx.verifySignaturesExcept(notary.owningKey)
    agreedTx.tx.toLedgerTransaction(serviceHub).verify()
    agreedTx
}
subFlow(FinalityFlow(allPartySignedTx, setOf(newState.nodeA, newState.nodeB, newState.nodeC)))
return allPartySignedTx.tx.outRef(0)
Flow to receive the final decision on a proposal
val completeTx = receive<SignedTransaction>(source).unwrap {
    it.verifySignaturesExcept(ourIdentity.owningKey, it.tx.notary!!.owningKey)
    val ltx = it.toLedgerTransaction(serviceHub, false)
    ltx.verify()
    val state = ltx.outRef<IOUContract.State>(0)
    it
}
val ourSignature = serviceHub.createSignature(completeTx)
send(ourSignature)
However, the above code fails and the error net.corda.core.flows.UnexpectedFlowEndException is thrown.
Please can anyone guide me to correct the above code, or point me to an example that matches the required workflows?
When you get an exception of the form:
net.corda.core.flows.UnexpectedFlowEndException:
Counterparty flow on C=GB,L=London,O=NodeA had an internal error and has terminated
This means that the counterparty node (NodeA in this case) has encountered an exception. You should check the counterparty node's logs. Each node has logs in a /logs folder.
Thanks. I replaced the sendAndReceive call with the CollectSignaturesFlow subflow in the approval flow, and now both the initiator and approval flows work. I referred to this link (a rough sketch of the pattern follows below):
https://github.com/corda/cordapp-example/blob/release-V3/kotlin-source/src/main/kotlin/com/example/flow/ExampleFlow.kt
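For reference, a rough sketch of that pattern (written in Java for consistency with the rest of this page, although the question's code is Kotlin; party, transaction and class names are illustrative, loosely following the linked ExampleFlow):

// Initiator side (inside the flow's call() method): collect the counterparty's
// signature with CollectSignaturesFlow instead of a manual sendAndReceive.
FlowSession otherPartySession = initiateFlow(otherParty);  // illustrative counterparty
SignedTransaction partSignedTx = getServiceHub().signInitialTransaction(txBuilder);
SignedTransaction fullySignedTx = subFlow(new CollectSignaturesFlow(
        partSignedTx,
        ImmutableSet.of(otherPartySession),
        CollectSignaturesFlow.Companion.tracker()));
subFlow(new FinalityFlow(fullySignedTx));

// Responder side: SignTransactionFlow verifies and signs the proposal.
class SignTxFlow extends SignTransactionFlow {
    SignTxFlow(FlowSession otherSideSession, ProgressTracker progressTracker) {
        super(otherSideSession, progressTracker);
    }

    @Override
    protected void checkTransaction(SignedTransaction stx) throws FlowException {
        // Business-level checks on the proposed transaction go here.
    }
}
subFlow(new SignTxFlow(counterpartySession, SignTransactionFlow.Companion.tracker()));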

Exactly-once delivery: is it possible with spring-cloud-stream-binder-kafka or spring-kafka, and which one should I use?

I am trying to achieve exactly once delivery using spring-cloud-stream-binder-kafka in a spring boot application.
The versions I am using are:
spring-cloud-stream-binder-kafka-core-1.2.1.RELEASE
spring-cloud-stream-binder-kafka-1.2.1.RELEASE
spring-cloud-stream-codec-1.2.2.RELEASE
spring-kafka-1.1.6.RELEASE
spring-integration-kafka-2.1.0.RELEASE
spring-integration-core-4.3.10.RELEASE
zookeeper-3.4.8
Kafka version : 0.10.1.1
This is my configuration (cloud-config):
spring:
  autoconfigure:
    exclude: org.springframework.cloud.netflix.metrics.servo.ServoMetricsAutoConfiguration
  kafka:
    consumer:
      enable-auto-commit: false
  cloud:
    stream:
      kafka:
        binder:
          brokers: "${BROKER_HOST:xyz-aws.local:9092}"
          headers:
            - X-B3-TraceId
            - X-B3-SpanId
            - X-B3-Sampled
            - X-B3-ParentSpanId
            - X-Span-Name
            - X-Process-Id
          zkNodes: "${ZOOKEEPER_HOST:120.211.316.261:2181,120.211.317.252:2181}"
        bindings:
          feed_platform_events_input:
            consumer:
              autoCommitOffset: false
      binders:
        xyzkafka:
          type: kafka
      bindings:
        feed_platform_events_input:
          binder: xyzkafka
          destination: platform-events
          group: br-platform-events
I have two main classes:
FeedSink Interface:
package au.com.xyz.proxy.interfaces;

import org.springframework.cloud.stream.annotation.Input;
import org.springframework.messaging.MessageChannel;

public interface FeedSink {

    String FEED_PLATFORM_EVENTS_INPUT = "feed_platform_events_input";

    @Input(FeedSink.FEED_PLATFORM_EVENTS_INPUT)
    MessageChannel feedlatformEventsInput();
}
EventConsumer
package au.com.xyz.proxy.consumer;

@Slf4j
@EnableBinding(FeedSink.class)
public class EventConsumer {

    public static final String SUCCESS_MESSAGE =
            "SEND-SUCCESS : Successfully sent message to platform.";
    public static final String FAULT_MESSAGE = "SOAP-FAULT Code: {}, Description: {}";
    public static final String CONNECT_ERROR_MESSAGE = "CONNECT-ERROR Error Details: {}";
    public static final String EMPTY_NOTIFICATION_ERROR_MESSAGE =
            "EMPTY-NOTIFICATION-ERROR Empty Event Received from platform";

    @Autowired
    private CapPointService service;

    /**
     * Method associated with the stream to process messages.
     */
    @StreamListener(FeedSink.FEED_PLATFORM_EVENTS_INPUT)
    public void message(final @Payload EventNotification eventNotification,
                        final @Header(KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment acknowledgment) {
        String caseMilestone = "UNKNOWN";
        if (!ObjectUtils.isEmpty(eventNotification)) {
            SysMessage sysMessage = processPayload(eventNotification);
            caseMilestone = sysMessage.getCaseMilestone();
            try {
                ClientResponse response = service.sendPayload(sysMessage);
                if (response.hasFault()) {
                    Fault faultDetails = response.getFaultDetails();
                    log.error(FAULT_MESSAGE, faultDetails.getCode(), faultDetails.getDescription());
                } else {
                    log.info(SUCCESS_MESSAGE);
                }
                acknowledgment.acknowledge();
            } catch (Exception e) {
                log.error(CONNECT_ERROR_MESSAGE, e.getMessage());
            }
        } else {
            log.error(EMPTY_NOTIFICATION_ERROR_MESSAGE);
            acknowledgment.acknowledge();
        }
    }

    private SysMessage processPayload(final EventNotification eventNotification) {
        Gson gson = new Gson();
        String jsonString = gson.toJson(eventNotification.getData());
        log.info("Consumed message for platform events with payload : {} ", jsonString);
        SysMessage sysMessage = gson.fromJson(jsonString, SysMessage.class);
        return sysMessage;
    }
}
I have set the auto-commit property for both the Kafka client and the Spring container to false.
As you can see in the EventConsumer class, I call acknowledge() only when service.sendPayload succeeds and no exception is thrown, because I want the container to move the offset and poll for the next records only in that case.
What I have observed is:
Scenario 1 - An exception is thrown and no new messages are published to Kafka. The message is never retried and there appears to be no activity, even after the underlying issue (downstream server unavailability) is resolved. Is there a way to retry the processing n times and then give up? Note that this is about retrying the processing, or re-polling from the last committed offset; it is not about the Kafka instance being unavailable.
If I restart the service (EC2 instance), processing resumes from the offset of the last successful acknowledge.
Scenario 2 - An exception occurs and a subsequent message is then pushed to Kafka. I see the new message processed and the offset moved, which means I lost the message that was never acknowledged. So the question is: given that I am handling the acknowledge myself, how do I make the consumer read again from the last commit rather than just process the latest message? I assume an internal poll is happening that does not take into account (or does not know about) the last message not being acknowledged. I don't think there are multiple threads reading from Kafka; I don't know how the @Input and @StreamListener annotations are controlled, but I assume the thread is governed by the consumer.concurrency property, which defaults to 1.
So I have done research and found a lot of links but unfortunately none of them answers my specific questions.
I looked at (https://github.com/spring-cloud/spring-cloud-stream/issues/575)
which has a comment from Marius (https://stackoverflow.com/users/809122/marius-bogoevici):
Do note that Kafka does not provide individual message acking, which
means that acknowledgment translates into updating the latest consumed
offset to the offset of the acked message (per topic/partition). That
means that if you're acking messages from the same topic partition out
of order, a message can 'ack' all the messages before it.
I am not sure whether ordering is an issue when there is only one thread.
Apologies for the long post, but I wanted to provide enough information. The main thing is that I am trying to avoid losing messages when consuming from Kafka, and I am trying to see whether spring-cloud-stream-binder-kafka can do the job or whether I have to look at alternatives.
Update 6th July 2018
I saw this post https://github.com/spring-projects/spring-kafka/issues/431
Is this a better approach to my problem? I can try the latest version of spring-kafka.
@KafkaListener(id = "qux", topics = "annotated4", containerFactory = "kafkaManualAckListenerContainerFactory",
        containerGroup = "quxGroup")
public void listen4(@Payload String foo, Acknowledgment ack, Consumer<?, ?> consumer) {
Will this help in controlling the offset so that it is set to the last successfully processed record? How can I do that from the listen method? consumer.seekToEnd(); and then how will the listen method reset to get that record?
Does putting the Consumer in the signature provide support for getting a handle to the consumer, or do I need to do anything more?
Should I use Acknowledgment or consumer.commitSync()?
What is the significance of containerFactory? Do I have to define it as a bean?
Do I need @EnableKafka and @Configuration for the above approach to work, bearing in mind the application is a Spring Boot application?
By adding Consumer to the listen method, do I avoid having to implement the ConsumerAware interface?
Last but not least, is it possible to provide an example of the above approach if it is feasible? (A rough sketch follows below.)
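For what it's worth, here is a minimal sketch of the raw spring-kafka manual-ack approach that issue describes. Bean, class and topic names are illustrative, and the exact location of the AckMode enum differs between spring-kafka versions (older releases use AbstractMessageListenerContainer.AckMode):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.support.Acknowledgment;

@Configuration
@EnableKafka
public class KafkaManualAckConfig {

    // Referenced by name from @KafkaListener(containerFactory = "kafkaManualAckListenerContainerFactory").
    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaManualAckListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // Offsets are committed only when the listener calls acknowledge();
        // enable.auto.commit must be false on the consumer.
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
        return factory;
    }
}

class ManualAckListener {

    @KafkaListener(id = "qux", topics = "annotated4",
            containerFactory = "kafkaManualAckListenerContainerFactory")
    public void listen(String payload, Acknowledgment ack) {
        process(payload);   // illustrative business logic; let exceptions propagate
        ack.acknowledge();  // commit the offset only after successful processing
    }

    private void process(String payload) {
        // ...
    }
}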
Update 12 July 2018
Thanks Gary (https://stackoverflow.com/users/1240763/gary-russell) for the tip about using maxAttempts. I have used that approach, and I am able to achieve exactly-once delivery and preserve the order of the messages.
My updated cloud-config:
spring:
  autoconfigure:
    exclude: org.springframework.cloud.netflix.metrics.servo.ServoMetricsAutoConfiguration
  kafka:
    consumer:
      enable-auto-commit: false
  cloud:
    stream:
      kafka:
        binder:
          brokers: "${BROKER_HOST:xyz-aws.local:9092}"
          headers:
            - X-B3-TraceId
            - X-B3-SpanId
            - X-B3-Sampled
            - X-B3-ParentSpanId
            - X-Span-Name
            - X-Process-Id
          zkNodes: "${ZOOKEEPER_HOST:120.211.316.261:2181,120.211.317.252:2181}"
        bindings:
          feed_platform_events_input:
            consumer:
              autoCommitOffset: false
      binders:
        xyzkafka:
          type: kafka
      bindings:
        feed_platform_events_input:
          binder: xyzkafka
          destination: platform-events
          group: br-platform-events
          consumer:
            maxAttempts: 2147483647
            backOffInitialInterval: 1000
            backOffMaxInterval: 300000
            backOffMultiplier: 2.0
The Event Consumer remains the same as my initial implementation, except that the error is now rethrown so the container knows the processing has failed. If you just catch it, there is no way for the container to know that message processing has failed. By calling acknowledgement.acknowledge you are only controlling the offset commit; for a retry to happen you must throw the exception. Don't forget to set the Kafka client auto-commit property and the Spring (container-level) autoCommitOffset property to false. That's it.
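Concretely, the change inside my message() method amounts to something like this sketch (the wrapper exception is just an example):

try {
    ClientResponse response = service.sendPayload(sysMessage);
    if (response.hasFault()) {
        Fault faultDetails = response.getFaultDetails();
        log.error(FAULT_MESSAGE, faultDetails.getCode(), faultDetails.getDescription());
    } else {
        log.info(SUCCESS_MESSAGE);
    }
    acknowledgment.acknowledge();  // commit the offset only on success
} catch (Exception e) {
    log.error(CONNECT_ERROR_MESSAGE, e.getMessage());
    // Rethrow so the binder's retry (maxAttempts / backOff*) kicks in;
    // swallowing the exception here would silently skip the message.
    throw new IllegalStateException("Payload delivery failed, retrying", e);
}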
As explained by Marius, Kafka only maintains an offset in the log. If you process the next message and update the offset, the failed message is lost.
You can send the failed message to a dead-letter topic (set enableDlq to true).
Recent versions of Spring Kafka (2.1.x) have special error handlers: ContainerStoppingErrorHandler, which stops the container when an exception occurs, and SeekToCurrentErrorHandler, which causes the failed message to be redelivered.
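If you use spring-kafka directly, wiring SeekToCurrentErrorHandler might look roughly like this (registration details vary across 2.x versions; class and bean names here are illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;

@Configuration
public class RetryingContainerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> retryingListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // On a listener exception, re-seek to the failed record so it is redelivered
        // on the next poll instead of being skipped.
        factory.setErrorHandler(new SeekToCurrentErrorHandler());
        return factory;
    }
}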

How to enable TLS in Corda 3.1?

What's the correct way to configure TLS in production on a Corda Node?
We're trying to enable TLS on the CorDapp sample, version 3.1, but the following error occurs in the Corda webserver:
[ERROR] 2018-05-03T13:58:16,984Z [main] Main.main - Exception during node startup {}
org.apache.activemq.artemis.api.core.ActiveMQConnectionTimedOutException: AMQ119013: Timed out waiting to receive cluster topology. Group:null
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:804)
node.conf file is:
myLegalName="O=PartyA,L=London,C=GB"
p2pAddress="localhost:10005"
rpcSettings = {
    address="localhost:10006"
    adminAddress="localhost:10046"
    useSsl=true
    ssl {
        certificatesDirectory="./certificates"
        keyStorePassword="cordacadevpass"
        trustStorePassword="trustpass"
    }
}
rpcUsers=[
    {
        password=test
        permissions=[
            ALL
        ]
        username=user1
    }
]
webAddress="localhost:10007"
devMode=true
According to Mike Hearn, from the Corda Ledger Slack channel, RPC SSL is broken in Corda 3.1 and the rework is being done in this pull request.

CouchBaseTemplate Connection issue

I have the following CouchbaseTemplate bean:
@PostConstruct
public void initIt() throws Exception {
    if (couchbaseDisabled)
        return;
    couchbaseClient = new CouchbaseClient(
            bootstrapUris(Arrays.asList(hosts.split(","))),
            CouchbaseConstants.BUCKET,
            ""
    );
    couchbaseTemplate();
}

public void couchbaseTemplate() throws Exception {
    logger.info("Enabling CouchBase Template");
    couchbaseTemplate = new CouchbaseTemplate(couchbaseClient);
    //couchbaseTemplate.
}
and
@PreDestroy
public void cleanup() throws Exception {
    logger.info("Closing couchbase connection.");
    if (couchbaseClient != null) {
        couchbaseClient.shutdown();
        couchbaseTemplate = null;
        couchbaseClient = null;
    }
}
While the server is being shut down, I am getting the following logs:
SEVERE: The web application [] registered the JDBC driver [com.mysql.jdbc.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
Jan 8, 2016 4:57:24 PM org.apache.catalina.loader.WebappClassLoader checkThreadLocalMapForLeaks
SEVERE: The web application [] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@40c94525]) and a value of type [com.couchbase.client.deps.io.netty.util.internal.InternalThreadLocalMap] (value [com.couchbase.client.deps.io.netty.util.internal.InternalThreadLocalMap@5ddaa15d]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak.
Jan 8, 2016 4:57:24 PM org.apache.catalina.loader.WebappClassLoader checkThreadLocalMapForLeaks
SEVERE: The web application [] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@40c94525]) and a value of type [com.couchbase.client.deps.io.netty.util.internal.InternalThreadLocalMap] (value [com.couchbase.client.deps.io.netty.util.internal.InternalThreadLocalMap@3c9810ce]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak.
Jan 8, 2016 4:57:24 PM org.apache.catalina.loader.WebappClassLoader checkThreadLocalMapForLeaks
SEVERE: The web application [] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@40c94525]) and a value of type [com.couchbase.client.deps.io.netty.util.internal.InternalThreadLocalMap] (value [com.couchbase.client.deps.io.netty.util.internal.InternalThreadLocalMap@23776376]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak.
Jan 8, 2016 4:57:24 PM org.apache.catalina.loader.WebappClassLoader checkThreadLocalMapForLeaks
SEVERE: The web application [] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@40c94525]) and a value of type [com.couchbase.client.deps.io.netty.util.internal.InternalThreadLocalMap] (value [com.couchbase.client.deps.io.netty.util.internal.InternalThreadLocalMap@7322ea2a]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak.
Jan 8, 2016 4:57:32 PM org.apache.coyote.http11.Http11Protocol destroy
INFO: Stopping Coyote HTTP/1.1 on http-8099
What can be done here?
OK, so you have both the 1.4.x and 2.x SDKs running in your application (since you have com.couchbase.client:java-client in your pom).
The thread-leak messages come from the latter. You must have instantiated a Cluster somewhere (as in com.couchbase.client.java.Cluster).
Make sure to also clean it up at the end of the application's lifecycle by calling cluster.disconnect() (I guess from a @PreDestroy method, as you did for the CouchbaseClient).
If you also created a custom CouchbaseEnvironment, you have to shut it down properly as well (in the same method as the Cluster cleanup) by calling environment.shutdownAsync().toBlocking().single().
Make sure to use the latest version of the 2.x SDK, as some older versions had bugs related to proper thread cleanup on shutdown (see the JCBC-773 and JVMCBC-251 issues).
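A minimal sketch of such a cleanup, assuming fields named cluster and environment for the SDK 2.x objects (both names are illustrative):

import javax.annotation.PreDestroy;

import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.env.CouchbaseEnvironment;

public class CouchbaseSdk2Lifecycle {

    private Cluster cluster;                   // the SDK 2.x cluster you instantiated
    private CouchbaseEnvironment environment;  // only if you built a custom environment

    @PreDestroy
    public void shutdownSdk2() {
        if (cluster != null) {
            cluster.disconnect();  // releases the SDK 2.x client resources and threads
        }
        if (environment != null) {
            // Blocks until the environment's shared resources are released.
            environment.shutdownAsync().toBlocking().single();
        }
    }
}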
