Kafka retry in batch listener behaving differently in 2 scenarios - spring-kafka

Spring Boot: 2.7.1
Spring Kafka: 2.8.7
I tested Kafka retry with error handling locally, publishing around 1,500 messages, listening to them in batches, and processing each record in a loop (a single topic with one partition). It works as expected: a failed message is retried, and once the retries are exhausted it is sent to the error topic.
KafkaConfig
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory(KafkaTemplate<String, String> kafkaTemplate) {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setBatchListener(true);
    DeadLetterPublishingRecoverer deadLetterPublishingRecoverer = new DeadLetterPublishingRecoverer(kafkaTemplate,
            // a negative partition lets the producer pick the partition of "errortopic"
            (consumer, e) -> new TopicPartition("errortopic", -1));
    FixedBackOff fixedBackOff = new FixedBackOff(10, 2); // 10 ms between attempts, 2 retries
    DefaultErrorHandler defaultErrorHandler = new DefaultErrorHandler(deadLetterPublishingRecoverer, fixedBackOff);
    factory.setCommonErrorHandler(defaultErrorHandler);
    return factory;
}
@Bean
public ConsumerFactory<String, Object> consumerFactory() {
    Map<String, Object> props = getDefaultParams();
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 20);
    return new DefaultKafkaConsumerFactory<>(props);
}
Listener
@Autowired
private MyService myservice;

@KafkaListener(id = "sample-listener", topics = {"topic1", "topic2"})
public void listen(final List<ConsumerRecord<String, String>> records) {
    for (ConsumerRecord<String, String> record : records) {
        try {
            myservice.process(record.value());
        } catch (Exception e) {
            throw new BatchListenerFailedException("Failure in processing message", record);
        }
    }
}
Service
@Transactional
public void process(String record) {
    count++;
    if (count == 2 || count == 3) {
        throw new RuntimeException();
    }
}
When I run it in the dev environment, where we have 2 topics and each topic has 64 partitions (roughly 100k messages an hour), the message is not retried at all.

Related

Infinite retries/loop with DefaultErrorHandler with ConsumerRecordRecoverer and BackOff

I have a KafkaListener that throws an NPE.
@KafkaListener(topics = CASE6_TOPIC, groupId = "demo-case-6", clientIdPrefix = "case6", containerFactory = "kafkaListenerContainerFactory")
private void consume(DemandeAvro payload, @Header(RECEIVED_PARTITION_ID) Integer partitionId, @Header(KafkaHeaders.OFFSET) int offset) {
    log.info(String.format("Partition %s Offset %s Payload : %s", partitionId, offset, payload));
    throw new NullPointerException("other than serde");
}
My configuration looks like:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setReplyTemplate(kafkaTemplate());
    factory.setCommonErrorHandler(new DefaultErrorHandler(recoverer(kafkaTemplate()), new FixedBackOff(2000L, 4)));
    return factory;
}
And the recoverer to publish on the DLT:
@Bean
public DeadLetterPublishingRecoverer recoverer(KafkaTemplate<?, ?> bytesTemplate) {
    return new DeadLetterPublishingRecoverer(bytesTemplate);
}
I'm not sure whether this is misuse or a bug, but when I drop the recoverer, the consumer stops after the 4 attempts I asked for in the FixedBackOff policy; otherwise it retries forever.
Thank you for your help.
It is not clear what you are saying; with no recoverer the failed message will be logged after 4 retries; with the DLPR, it will be published to the DLT. If the publishing fails, the record will be delivered again as if retries were not exhausted.
If you are seeing this behavior, the DLT publishing must be failing for some reason.
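One way to make such a failure visible is to resolve the DLT destination explicitly and tell the recoverer to fail when the send does not succeed. A minimal sketch, assuming spring-kafka 2.7 or later (where setFailIfSendResultIsError is available), an injectable KafkaTemplate, and per-topic dead letter topics named <topic>.DLT with at least as many partitions as the source topics:
@Bean
public DefaultErrorHandler errorHandler(KafkaTemplate<Object, Object> kafkaTemplate) {
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(kafkaTemplate,
            // route to <topic>.DLT on the same partition as the failed record;
            // the DLT must exist and have at least as many partitions as the source topic
            (rec, ex) -> new TopicPartition(rec.topic() + ".DLT", rec.partition()));
    // make a failed DLT send throw (and be logged) instead of silently
    // redelivering the record as if retries were not exhausted
    recoverer.setFailIfSendResultIsError(true);
    return new DefaultErrorHandler(recoverer, new FixedBackOff(2000L, 4));
}
With that in place, a failing DLT send (for example, a missing topic or partition) shows up as an exception in the logs rather than as an apparently infinite retry loop.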

Which is the best configuration for a Kafka consumer to increase throughput

I have a bunch of services integrated via Apache Kafka, each with its own consumers and producers, but I'm facing a slowing consumption rate, as if something slows down consumption when the topic gets a lot of load.
Here's an example of my Kafka consumer implementation:
public class Consumer : BackgroundService
{
    private readonly KafkaConfiguration _kafkaConfiguration;
    private readonly ILogger<Consumer> _logger;
    private readonly IConsumer<Null, string> _consumer;
    private readonly IMediator _mediator;

    public Consumer(
        KafkaConfiguration kafkaConfiguration,
        ILogger<Consumer> logger,
        IServiceScopeFactory provider)
    {
        _logger = logger;
        _kafkaConfiguration = kafkaConfiguration;
        _mediator = provider.CreateScope().ServiceProvider.GetRequiredService<IMediator>();
        var consumerConfig = new ConsumerConfig
        {
            GroupId = "order-service",
            BootstrapServers = kafkaConfiguration.ConnectionString,
            SessionTimeoutMs = 6000,
            ConsumeResultFields = "none",
            QueuedMinMessages = 1000000,
            SecurityProtocol = SecurityProtocol.Plaintext,
            AutoOffsetReset = AutoOffsetReset.Earliest,
            EnableAutoOffsetStore = false,
            FetchWaitMaxMs = 100,
            AutoCommitIntervalMs = 1000
        };
        _consumer = new ConsumerBuilder<Null, string>(consumerConfig).Build();
    }

    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        new Thread(() => StartConsumingAsync(stoppingToken)).Start();
        return Task.CompletedTask;
    }

    public async Task StartConsumingAsync(CancellationToken cancellationToken)
    {
        _consumer.Subscribe("orders");
        while (!cancellationToken.IsCancellationRequested)
        {
            try
            {
                var consumedResult = _consumer.Consume(cancellationToken);
                if (consumedResult == null) continue;
                var messageAsEvent = JsonSerializer.Deserialize<OrderReceivedIntegrationEvent>(consumedResult.Message.Value);
                await _mediator.Publish(messageAsEvent, CancellationToken.None);
            }
            catch (Exception e)
            {
                _logger.LogCritical($"Error {e.Message}");
            }
        }
    }
}
and here's an example of my producer:
public class Producer
{
    protected readonly IProducer<Null, string> Producer;

    protected Producer(string host)
    {
        var producerConfig = new ProducerConfig
        {
            BootstrapServers = host,
            Acks = Acks.Leader
        };
        Producer = new ProducerBuilder<Null, string>(producerConfig).Build();
    }

    public void Produce(InitialOrderCreatedIntegrationEvent message)
    {
        var messageSerialized = JsonSerializer.Serialize(message);
        Producer.Produce("orders", new Message<Null, string> { Value = messageSerialized });
    }
}
As you can see, the consumer only reads the message from the Kafka topic, deserializes it into a MediatR INotification object, and publishes it to the handler.
The handler works with database transactions, Redis cache reads/writes, and push notifications.
An example of my handler:
public override async Task Handle(OrderReceivedIntegrationEvent notification, CancellationToken cancellationToken)
{
    try
    {
        // Get order from database
        var order = await _orderRepository.GetOrderByIdAsync(notification.OrderId.ToString());
        order.EditOrder(default, notification.Price);
        order.ChangeOrderStatus(notification.Status, notification.RejectReason);
        // commit the transaction
        if (await _uow.Commit())
        {
            var cacheModificationRequest = _mapper.Map<CacheOrdersModificationRequestedIntegrationEvent>(order);
            // send mediatr notification to change cache information in Redis
            await _bus.Publish(cacheModificationRequest, cancellationToken);
        }
    }
    catch (Exception e)
    {
        _logger.LogInformation($"Error {e.Message}");
    }
}
but when I run a load test with 2000 requests and a ramp-up of 15 seconds, the consumer starts to slow down, taking something like 2 to 5 minutes to consume all 2000 requests.
I was wondering whether removing the MediatR layer and handling the processing directly in the Consumer class would improve performance,
or whether there is some Kafka configuration that would improve throughput, such as removing the acks of the in-sync replicas or committing the offset only after a batch of messages.
I first implemented Kafka using the MassTransit library; after noticing this slow consuming rate, I switched to Confluent.Kafka just to remove the MassTransit abstraction layer and see whether that would improve anything, but it is still the same:
<PackageReference Include="Confluent.Kafka" Version="1.7.0" />
Has anyone faced the same problem and can help me?
Note: my Kafka cluster runs 3 brokers in Kubernetes, and each topic has 6 partitions with a replication factor of 3.

spring-kafka application.properties configuration for JAAS/SASL not working

Use Case:
I am using Spring Boot 2.2.5.RELEASE and Kafka 2.4.1
JAAS/SASL configurations are done properly on Kafka/ZooKeeper as topics are created without issue with kafka-topics.bat
Issue:
When I start the Spring Boot application, I immediately get the following errors:
kafka-server-start.bat console:
INFO [SocketServer brokerId=1] Failed authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
IDE console:
WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=xxx, groupId=yyy] Bootstrap broker localhost:9093 (id: -3 rack: null) disconnected
My application.properties configuration:
spring.kafka.jaas.enabled=true
spring.kafka.properties.security.protocol=SASL_PLAINTEXT
spring.kafka.properties.sasl.mechanism=PLAIN
spring.kafka.properties.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="spring_bO0t" password="i_am_a_spring_bO0t_user";
kafka_server_jaas.conf:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="12345"
user_admin="12345"
user_spring_bO0t="i_am_a_spring_bO0t_user";
};
Am I missing something?
Thanks in advance.
The answer provided by @jumping_monkey is correct; however, I didn't know where to put those configurations in the ProducerFactory & ConsumerFactory beans, so I'll leave an example below for those who want to know.
In your ProducerConfig or ConsumerConfig beans respectively (mine is named generalMessageProducerFactory):
@Bean
public ProducerFactory<String, GeneralMessageDto> generalMessageProducerFactory() {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    configProps.put("sasl.mechanism", "PLAIN");
    // JAAS values must be double-quoted
    configProps.put("sasl.jaas.config", "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"YOUR_KAFKA_CLUSTER_USERNAME\" password=\"YOUR_KAFKA_CLUSTER_PASSWORD\";");
    configProps.put("security.protocol", "SASL_SSL");
    return new DefaultKafkaProducerFactory<>(configProps);
}
And also in your TopicConfiguration class, in the kafkaAdmin method:
@Bean
public KafkaAdmin kafkaAdmin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    configs.put("sasl.mechanism", "PLAIN");
    configs.put("sasl.jaas.config", "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"YOUR_KAFKA_CLUSTER_USERNAME\" password=\"YOUR_KAFKA_CLUSTER_PASSWORD\";");
    configs.put("security.protocol", "SASL_SSL");
    return new KafkaAdmin(configs);
}
Hope this was helpful guys!
I had defined the properties in the wrong place, i.e. in application.properties. As I have ProducerFactory & ConsumerFactory beans, those application.properties entries are ignored by Spring Boot.
Configuring the same properties in the bean definitions resolved the issue, i.e. move your properties from application.properties to where you define your beans.
Here's an example:
@Bean
public ProducerFactory<Object, Object> producerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
    props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
    props.put(SaslConfigs.SASL_JAAS_CONFIG, String.format(
            "%s required username=\"%s\" password=\"%s\";", PlainLoginModule.class.getName(), "username", "password"));
    return new DefaultKafkaProducerFactory<>(props);
}

@Bean
public ConsumerFactory<Object, Object> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
    props.put(ErrorHandlingDeserializer.KEY_DESERIALIZER_CLASS, JsonDeserializer.class);
    props.put(ErrorHandlingDeserializer.VALUE_DESERIALIZER_CLASS, JsonDeserializer.class);
    props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
    props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
    props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
    props.put(SaslConfigs.SASL_JAAS_CONFIG, String.format(
            "%s required username=\"%s\" password=\"%s\";", PlainLoginModule.class.getName(), "username", "password"));
    return new DefaultKafkaConsumerFactory<>(props);
}

@Bean
public KafkaAdmin kafkaAdmin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    configs.put("security.protocol", "SASL_PLAINTEXT");
    configs.put("sasl.mechanism", "PLAIN");
    configs.put("sasl.jaas.config", "org.apache.kafka.common.security.plain.PlainLoginModule required " +
            "username=\"username\" " +
            "password=\"password\";");
    return new KafkaAdmin(configs);
}

Unable to upload and download attachment in Corda 4.0 (Java), showing null

Uploading and downloading a zip attachment containing a text file in Corda is not working.
I tried to attach and download a zip file manually, and also tried to send the attachment from the RPC client using the proxy.
flow code:
public class IOUFlow extends FlowLogic<Void> {
    private final Integer iouValue;
    private final Party otherParty;
    private final SecureHash attachmentHash;

    public IOUFlow(Integer iouValue, Party otherParty, SecureHash attachmentHash) {
        this.iouValue = iouValue;
        this.otherParty = otherParty;
        this.attachmentHash = attachmentHash;
    }

    @Override
    public ProgressTracker getProgressTracker() {
        return progressTracker;
    }

    private static final Step ID_OTHER_NODES = new Step("Identifying other nodes on the network.");
    private static final Step SENDING_AND_RECEIVING_DATA = new Step("Sending data between parties.");
    private static final Step EXTRACTING_VAULT_STATES = new Step("Extracting states from the vault.");
    private static final Step OTHER_TX_COMPONENTS = new Step("Gathering a transaction's other components.");
    private static final Step TX_BUILDING = new Step("Building a transaction.");
    private static final Step TX_SIGNING = new Step("Signing a transaction.");
    private static final Step TX_VERIFICATION = new Step("Verifying a transaction.");
    private static final Step SIGS_GATHERING = new Step("Gathering a transaction's signatures.") {
        // Wiring up a child progress tracker allows us to see the
        // subflow's progress steps in our flow's progress tracker.
        @Override
        public ProgressTracker childProgressTracker() {
            return CollectSignaturesFlow.tracker();
        }
    };
    private static final Step VERIFYING_SIGS = new Step("Verifying a transaction's signatures.");
    private static final Step FINALISATION = new Step("Finalising a transaction.") {
        @Override
        public ProgressTracker childProgressTracker() {
            return FinalityFlow.tracker();
        }
    };

    private final ProgressTracker progressTracker = new ProgressTracker(
            ID_OTHER_NODES,
            SENDING_AND_RECEIVING_DATA,
            EXTRACTING_VAULT_STATES,
            OTHER_TX_COMPONENTS,
            TX_BUILDING,
            TX_SIGNING,
            TX_VERIFICATION,
            SIGS_GATHERING,
            FINALISATION
    );

    @Suspendable
    @Override
    public Void call() throws FlowException {
        progressTracker.setCurrentStep(ID_OTHER_NODES);
        // We retrieve the notary identity from the network map.
        Party notary = getServiceHub().getNetworkMapCache().getNotaryIdentities().get(0);
        progressTracker.setCurrentStep(SENDING_AND_RECEIVING_DATA);
        // We create the transaction components.
        IOUState outputState = new IOUState(iouValue, getOurIdentity(), otherParty);
        List<PublicKey> requiredSigners = Arrays.asList(getOurIdentity().getOwningKey(), otherParty.getOwningKey());
        Command command = new Command<>(new IOUContract.Create(), requiredSigners);
        TimeWindow ourAfter = TimeWindow.fromOnly(Instant.MIN);
        progressTracker.setCurrentStep(TX_BUILDING);
        // We create a transaction builder and add the components.
        TransactionBuilder txBuilder = new TransactionBuilder(notary)
                .addOutputState(outputState, IOUContract.ID)
                .addCommand(command)
                .addAttachment(attachmentHash);
        // Verifying the transaction.
        txBuilder.verify(getServiceHub());
        progressTracker.setCurrentStep(TX_SIGNING);
        // Signing the transaction.
        SignedTransaction signedTx = getServiceHub().signInitialTransaction(txBuilder);
        // Creating a session with the other party.
        FlowSession otherPartySession = initiateFlow(otherParty);
        // Obtaining the counterparty's signature.
        SignedTransaction fullySignedTx = subFlow(new CollectSignaturesFlow(
                signedTx, Arrays.asList(otherPartySession), CollectSignaturesFlow.tracker()));
        progressTracker.setCurrentStep(TX_VERIFICATION);
        // Finalising the transaction.
        subFlow(new FinalityFlow(fullySignedTx, otherPartySession));
        return null;
    }
}
client code:
public class Client {
    private static final Logger logger = LoggerFactory.getLogger(Client.class);

    public static void main(String[] args) throws Exception {
        // Create an RPC connection to the node.
        if (args.length != 3) throw new IllegalArgumentException("Usage: Client <node address> <rpc username> <rpc password>");
        final NetworkHostAndPort nodeAddress = parse(args[0]);
        final String rpcUsername = args[1];
        final String rpcPassword = args[2];
        final CordaRPCClient client = new CordaRPCClient(nodeAddress);
        final CordaRPCOps proxy = client.start(rpcUsername, rpcPassword).getProxy();
        // Interact with the node.
        // For example, here we print the nodes on the network.
        final List<NodeInfo> nodes = proxy.networkMapSnapshot();
        logger.info("{}", nodes);
        InputStream inputstream = new FileInputStream("corda.zip");
        SecureHash hashId = proxy.uploadAttachment(inputstream);
        System.out.println(hashId);
        CordaX500Name x500Name = CordaX500Name.parse("O=ICICI,L=New York,C=US");
        final Party otherParty = proxy.wellKnownPartyFromX500Name(x500Name);
        /* proxy
                .startFlowDynamic(IOUFlow.class, "10", otherParty, hashId)
                .getReturnValue()
                .get(); */
        InputStream stream = proxy.openAttachment(hashId);
        JarInputStream in = new JarInputStream(stream);
        BufferedReader br = new BufferedReader(new InputStreamReader(in));
        System.out.println("Output from attachment : " + br.readLine());
    }
}
Output:
Task :clients:runTemplateClient
I 16:36:28 1 RPCClient.logElapsedTime - Startup took 2066 msec
I 16:36:28 1 Client.main - [NodeInfo(addresses=[localhost:10005], legalIdentitiesAndCerts=[O=PNB, L=London, C=GB], platformVersion=4, serial=1559037129874), NodeInfo(addresses=[localhost:10002], legalIdentitiesAndCerts=[O=Notary, L=London, C=GB], platformVersion=4, serial=1559037126875), NodeInfo(addresses=[localhost:10008], legalIdentitiesAndCerts=[O=ICICI, L=New York, C=US], platformVersion=4, serial=1559037128218)]
DF3C198E05092E52F47AE8EAA0D5D26721F344B3F5E0DF80B5A53CA2B7104C9C
Output from attachment : null
Another output, when trying to send the attachment from the client using RPC:
Task :clients:runTemplateClient
I 16:41:46 1 RPCClient.logElapsedTime - Startup took 2045 msec
I 16:41:47 1 Client.main - [NodeInfo(addresses=[localhost:10005], legalIdentitiesAndCerts=[O=PNB, L=London, C=GB], platformVersion=4, serial=1559037129874), NodeInfo(addresses=[localhost:10002], legalIdentitiesAndCerts=[O=Notary, L=London, C=GB], platformVersion=4, serial=1559037126875), NodeInfo(addresses=[localhost:10008], legalIdentitiesAndCerts=[O=ICICI, L=New York, C=US], platformVersion=4, serial=1559037128218)]
B7F5F70FC9086ED594883E6EB8B0B53B666B92CC4412E27FF3D6531446E9E40C
Exception in thread "main" net.corda.core.CordaRuntimeException: net.corda.core.flows.IllegalFlowLogicException: A FlowLogicRef cannot be constructed for FlowLogic of type com.template.flows.IOUFlow: due to missing constructor for arguments: [class java.lang.String, class net.corda.core.identity.Party, class net.corda.core.crypto.SecureHash$SHA256]
Thanks for raising this question. This is definitely something we could improve in the developer experience.
Essentially there are two steps to adding an attachment to a TX.
The first step is importing the attachment itself into the node (which it seems you've already done):
SecureHash attachmentHash = getServiceHub().getAttachments().importAttachment(new FileInputStream(INPUT_FILE), getOurIdentity().getName().toString(), INPUT_FILE.getName());
Replace INPUT_FILE with a java File instance; note that importAttachment takes an InputStream, which is why the file is wrapped in a FileInputStream here.
The second is to add the attachment hash to the TX (which you have already done as well)
.addAttachment(attachmentHash);
You can facilitate the first step either explicitly via flow logic OR using the RPC proxy as you've done here.
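As a side note on the download half of the question: a JarInputStream, like any ZipInputStream, is positioned before the first entry, so wrapping it directly in a BufferedReader makes readLine() return null. A minimal sketch of reading the first entry, assuming the uploaded zip contains a text file:
InputStream stream = proxy.openAttachment(hashId);
JarInputStream in = new JarInputStream(stream);
// Advance to the first entry; until getNextJarEntry() is called,
// read() returns -1, which is why readLine() returned null.
JarEntry entry = in.getNextJarEntry();
if (entry != null) {
    BufferedReader br = new BufferedReader(new InputStreamReader(in));
    System.out.println("Output from attachment (" + entry.getName() + ") : " + br.readLine());
}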
For the second error you've listed:
The exception you've posted is related to the way you're invoking your IOUFlow. The IOUFlow expects an Integer as the first argument, but we are providing a String; this is what causes the IllegalFlowLogicException. Try providing an Integer, OR change the IOUFlow to expect a String input that is converted to an Integer.
A FlowLogicRef cannot be constructed for FlowLogic of type com.template.flows.IOUFlow: due to missing constructor for arguments: [class java.lang.String, class net.corda.core.identity.Party, class net.corda.core.crypto.SecureHash$SHA256]
startFlowDynamic(IOUFlow.class, "10", otherParty,hashId)
It looks like your IOUFlow expects an Integer, and you are sending a String instead?
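Concretely, the commented-out call from the client would become something like the following, passing an int (boxed to Integer) so that the constructor [Integer, Party, SecureHash] can be resolved:
proxy
    .startFlowDynamic(IOUFlow.class, 10, otherParty, hashId)
    .getReturnValue()
    .get();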

Spring Integration tcp client multiple connections

I use Spring Integration tcp-outbound-adapter and tcp-inbound-adapter in order to communicate with a third party external system through TCP.
The connection factory I use is of type "client" and has single-use="false", because the nature of communication with the external system is a session of several dozens requests and replies.
The external system expects I will open a new TCP connection for each session.
Is there any way to do that with Spring Integration?
My code uses SI successfully for one such session. But I want my system to open several such connections so I can handle several concurrent sessions.
Currently, if I send a message of a new session to the inbound adapter, it uses the same TCP connection.
Please help.
UPDATE:
While using the ThreadAffinity solution given by Gary here, we get this exception when we make more than 4 concurrent requests. Any idea why that is?
11:08:02.083 [pool-1-thread-2] 193.xxx.yyy.zz:443:55729:46c71372-5933-4707-a27b-93cc4bf78c59 Message sent GenericMessage [payload=byte[326], headers={replyChannel=org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel#2fb866, errorChannel=org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel#2fb866, ip_tcp_remotePort=55718, ip_connectionId=127.0.0.1:55718:4444:7f71ce96-eaac-4b21-8b2c-bf736102f818, ip_localInetAddress=/127.0.0.1, ip_address=127.0.0.1, id=2dc3e330-d703-8a61-c46c-012233cadf6f, ip_hostname=127.0.0.1, timestamp=1481706480700}]
11:08:12.093 [pool-1-thread-2] Remote Timeout on 193.xxx.yyy.zz:443:55729:46c71372-5933-4707-a27b-93cc4bf78c59
11:08:12.093 [pool-1-thread-2] Tcp Gateway exception
org.springframework.integration.MessageTimeoutException: Timed out waiting for response
at org.springframework.integration.ip.tcp.TcpOutboundGateway.handleRequestMessage(TcpOutboundGateway.java:146)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:109)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:148)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:121)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:423)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:373)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:105)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutput(AbstractMessageProducingHandler.java:292)
at org.springframework.integration.handler.AbstractMessageProducingHandler.produceOutput(AbstractMessageProducingHandler.java:212)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutputs(AbstractMessageProducingHandler.java:129)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:115)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:148)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:121)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:423)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
at org.springframework.messaging.core.GenericMessagingTemplate.doSendAndReceive(GenericMessagingTemplate.java:150)
at org.springframework.messaging.core.GenericMessagingTemplate.doSendAndReceive(GenericMessagingTemplate.java:45)
at org.springframework.messaging.core.AbstractMessagingTemplate.sendAndReceive(AbstractMessagingTemplate.java:42)
at org.springframework.integration.core.MessagingTemplate.sendAndReceive(MessagingTemplate.java:97)
at org.springframework.integration.gateway.MessagingGatewaySupport.doSendAndReceive(MessagingGatewaySupport.java:441)
at org.springframework.integration.gateway.MessagingGatewaySupport.sendAndReceiveMessage(MessagingGatewaySupport.java:409)
at org.springframework.integration.ip.tcp.TcpInboundGateway.doOnMessage(TcpInboundGateway.java:120)
at org.springframework.integration.ip.tcp.TcpInboundGateway.onMessage(TcpInboundGateway.java:98)
at org.springframework.integration.ip.tcp.connection.TcpConnectionInterceptorSupport.onMessage(TcpConnectionInterceptorSupport.java:159)
at org.springframework.integration.ip.tcp.connection.TcpNetConnection.run(TcpNetConnection.java:182)
at org.springframework.integration.ip.tcp.connection.TcpConnectionInterceptorSupport.run(TcpConnectionInterceptorSupport.java:111)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
It depends on what constitutes a "session" - if all the requests from a session on the client side all run on a single thread, you could write a simple wrapper for the connection factory that stores the connection in a ThreadLocal. You would need some mechanism to call the factory wrapper after the last request to close the connection and remove it from the ThreadLocal.
If the requests for a session can occur on multiple threads, it would be a bit more complicated but you could still do it with a ThreadLocal that maps to a connection instance.
EDIT
Here's an example...
@SpringBootApplication
public class So40507731Application {

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext context = SpringApplication.run(So40507731Application.class, args);
        MessageChannel channel = context.getBean("clientFlow.input", MessageChannel.class);
        MessagingTemplate template = new MessagingTemplate(channel);
        ThreadAffinityClientConnectionFactory affinityCF = context.getBean(ThreadAffinityClientConnectionFactory.class);
        ExecutorService exec = Executors.newCachedThreadPool();
        CountDownLatch latch = new CountDownLatch(2);
        exec.execute(() -> {
            String result = new String(template.convertSendAndReceive("foo", byte[].class));
            System.out.println(Thread.currentThread().getName() + " " + result);
            result = new String(template.convertSendAndReceive("foo", byte[].class));
            System.out.println(Thread.currentThread().getName() + " " + result);
            affinityCF.release();
            latch.countDown();
        });
        exec.execute(() -> {
            String result = new String(template.convertSendAndReceive("foo", byte[].class));
            System.out.println(Thread.currentThread().getName() + " " + result);
            result = new String(template.convertSendAndReceive("foo", byte[].class));
            System.out.println(Thread.currentThread().getName() + " " + result);
            affinityCF.release();
            latch.countDown();
        });
        latch.await(10, TimeUnit.SECONDS);
        context.close();
        exec.shutdownNow();
    }

    @Bean
    public TcpNetClientConnectionFactory delegateCF() {
        TcpNetClientConnectionFactory clientCF = new TcpNetClientConnectionFactory("localhost", 1234);
        clientCF.setSingleUse(true); // so each thread gets its own connection
        return clientCF;
    }

    @Bean
    public ThreadAffinityClientConnectionFactory affinityCF() {
        return new ThreadAffinityClientConnectionFactory(delegateCF());
    }

    @Bean
    public TcpOutboundGateway outGate() {
        TcpOutboundGateway outGate = new TcpOutboundGateway();
        outGate.setConnectionFactory(affinityCF());
        return outGate;
    }

    @Bean
    public IntegrationFlow clientFlow() {
        return f -> f.handle(outGate());
    }

    @Bean
    public TcpNetServerConnectionFactory serverCF() {
        return new TcpNetServerConnectionFactory(1234);
    }

    @Bean
    public TcpInboundGateway inGate() {
        TcpInboundGateway inGate = new TcpInboundGateway();
        inGate.setConnectionFactory(serverCF());
        return inGate;
    }

    @Bean
    public IntegrationFlow serverFlow() {
        return IntegrationFlows.from(inGate())
                .transform(Transformers.objectToString())
                .transform("headers['ip_connectionId'] + ' ' + payload")
                .get();
    }

    public static class ThreadAffinityClientConnectionFactory extends AbstractClientConnectionFactory
            implements TcpListener {

        private final AbstractClientConnectionFactory delegate;
        private final ThreadLocal<TcpConnectionSupport> connection = new ThreadLocal<>();

        public ThreadAffinityClientConnectionFactory(AbstractClientConnectionFactory delegate) {
            super("", 0);
            delegate.registerListener(this);
            this.delegate = delegate;
        }

        @Override
        protected TcpConnectionSupport obtainConnection() throws Exception {
            TcpConnectionSupport tcpConnection = this.connection.get();
            if (tcpConnection == null || !tcpConnection.isOpen()) {
                tcpConnection = this.delegate.getConnection();
                this.connection.set(tcpConnection);
            }
            return tcpConnection;
        }

        public void release() {
            TcpConnectionSupport connection = this.connection.get();
            if (connection != null) {
                connection.close();
                this.connection.remove();
            }
        }

        @Override
        public void start() {
            this.delegate.start();
            setActive(true);
            super.start();
        }

        @Override
        public void stop() {
            this.delegate.stop();
            setActive(false);
            super.stop();
        }

        @Override
        public boolean onMessage(Message<?> message) {
            return getListener().onMessage(message);
        }
    }
}
Result:
pool-2-thread-2 localhost:64559:1234:3d898822-ea91-421d-97f2-5f9620b9d369 foo
pool-2-thread-1 localhost:64560:1234:227f8a9f-1461-41bf-943c-68a56f708b0c foo
pool-2-thread-2 localhost:64559:1234:3d898822-ea91-421d-97f2-5f9620b9d369 foo
pool-2-thread-1 localhost:64560:1234:227f8a9f-1461-41bf-943c-68a56f708b0c foo
