I would like to have a client app with request/response semantics that invokes another app that's a Kafka Streams app.
My client app is based on this example (essentially unchanged). I need the app receiving the messages from the client to be a Kafka Streams app, but the message headers, including the correlation id, are lost.
The Kafka Streams app is a simple topology for testing this...
@Bean
public KafkaStreams stream(KafkaStreamsConfiguration kafkaStreamsConfiguration) {
    final StreamsBuilder builder = new StreamsBuilder();
    builder.<String, String>stream(REQUEST_TOPIC_NAME)
            .groupByKey()
            .count()
            .toStream()
            .mapValues((ValueMapper<Long, String>) String::valueOf)
            .to(REPLY_TOPIC_NAME);
    return new KafkaStreams(builder.build(), kafkaStreamsConfiguration.asProperties());
}
For this POC I'm keeping it simple and having the client and server "agree" on the topic names (kRequests and kReplies). So at this point I just want the correlation id to be recognized and returned.
What I'm seeing now is:
2019-10-01 10:55:38.792 WARN 76830 --- [TaskScheduler-1] o.s.k.r.ReplyingKafkaTemplate : Reply timed out for: ProducerRecord(topic=kRequests, partition=null, headers=RecordHeaders(headers = [RecordHeader(key = kafka_replyTopic, value = [107, 82, 101, 112, 108, 105, 101, 115]), RecordHeader(key = kafka_correlationId, value = [101, -4, -35, 41, -127, -66, 69, 37, -117, -127, -95, -92, 38, 79, 73, 127])], isReadOnly = true), key=null, value=foo21074, timestamp=null) with correlationId: [135564972083657938538225367552235620735]
2019-10-01 10:55:38.792 ERROR 76830 --- [TaskScheduler-1] org.KRequestingApplication : Reply timed out
org.springframework.kafka.KafkaException: Reply timed out
at org.springframework.kafka.requestreply.ReplyingKafkaTemplate.lambda$scheduleTimeout$0(ReplyingKafkaTemplate.java:257) ~[spring-kafka-2.2.8.RELEASE.jar:2.2.8.RELEASE]
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54) ~[spring-context-5.1.9.RELEASE.jar:5.1.9.RELEASE]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_211]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_211]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[na:1.8.0_211]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[na:1.8.0_211]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_211]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_211]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_211]
There is no message with the matching correlation id on the reply topic within the timeout. It seems that, at least using the Kafka Streams DSL, there's no way to support the ReplyingKafkaTemplate.
Your scenario doesn't make sense; your KStream is grouping multiple inputs, but request/reply is one request, one reply.
This works fine...
@SpringBootApplication
@EnableKafkaStreams
public class So58193901Application {

    private static final String REQUEST_TOPIC_NAME = "requests";

    private static final String REPLY_TOPIC_NAME = "replies";

    public static void main(String[] args) {
        SpringApplication.run(So58193901Application.class, args);
    }

    @Bean
    public KStream<byte[], byte[]> stream(StreamsBuilder builder) {
        KStream<byte[], byte[]> stream = builder.stream(REQUEST_TOPIC_NAME);
        stream
                .mapValues(val -> new String(val).toUpperCase().getBytes())
                .to(REPLY_TOPIC_NAME);
        return stream;
    }

    @Bean
    public NewTopic topic1() {
        return TopicBuilder.name(REQUEST_TOPIC_NAME).partitions(1).replicas(1).build();
    }

    @Bean
    public NewTopic topic2() {
        return TopicBuilder.name(REPLY_TOPIC_NAME).partitions(1).replicas(1).build();
    }

    @Bean
    public ReplyingKafkaTemplate<String, String, String> template(ProducerFactory<String, String> pf,
            ConcurrentKafkaListenerContainerFactory<String, String> factory) {

        ConcurrentMessageListenerContainer<String, String> replyContainer = factory.createContainer(REPLY_TOPIC_NAME);
        return new ReplyingKafkaTemplate<>(pf, replyContainer);
    }

    @Bean
    public ApplicationRunner runner(ReplyingKafkaTemplate<String, String, String> template) {
        return args -> {
            System.out.println(template.sendAndReceive(new ProducerRecord<>(REQUEST_TOPIC_NAME, "bar", "foo"))
                    .get(10, TimeUnit.SECONDS).value());
        };
    }

}
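As a side note, this is probably also why your original topology loses the header: as far as I know, Kafka Streams propagates record headers through stateless operations such as mapValues, but the results of an aggregation like count() are read back from a state store, which does not retain headers, so the records written to the reply topic no longer carry kafka_correlationId.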
Related
I am using spring-kafka 2.2.8 and trying to understand if there is an option to deploy a Kafka consumer in a paused state until I signal it to start consuming messages. Please suggest.
I see in the post below that we can pause and resume the consumer, but I need the consumer to be paused as soon as it's deployed.
how to pause and resume @KafkaListener using spring-kafka
@KafkaListener(id = "foo", ..., autoStartup = "false")
Then start it using the KafkaListenerEndpointRegistry when you are ready:
registry.getListenerContainer("foo").start();
There is not much point in starting it in paused mode, but you can do that...
@SpringBootApplication
public class So62329274Application {

    public static void main(String[] args) {
        SpringApplication.run(So62329274Application.class, args);
    }

    @KafkaListener(id = "so62329274", topics = "so62329274", autoStartup = "false")
    public void listen(String in) {
        System.out.println(in);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so62329274").partitions(1).replicas(1).build();
    }

    @Bean
    public ApplicationRunner runner(KafkaListenerEndpointRegistry registry, KafkaTemplate<String, String> template) {
        return args -> {
            template.send("so62329274", "foo");
            registry.getListenerContainer("so62329274").pause();
            registry.getListenerContainer("so62329274").start();
            System.in.read();
            registry.getListenerContainer("so62329274").resume();
        };
    }

}
You will see a log message like this when the partitions are assigned:
Paused consumer resumed by Kafka due to rebalance; consumer paused again, so the initial poll() will never return any records
I'm using spring-kafka 2.2.8.RELEASE and the @KafkaListener annotation to create a consumer. Here is my consumer configuration code.
@Bean
public <K, V> ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(primaryConsumerFactory());
    factory.setRetryTemplate(retryTemplate());
    return factory;
}

@Bean
public DefaultKafkaConsumerFactory<Object, Object> primaryConsumerFactory() {
    return new DefaultKafkaConsumerFactory<>(MyConsumerConfig.getConfigs());
}

public RetryTemplate retryTemplate() {
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setListeners(new RetryListener[] { myKafkaRetryListener });
    SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
    retryPolicy.setMaxAttempts(3);
    retryTemplate.setRetryPolicy(retryPolicy);
    ExponentialBackOffPolicy exponentialBackOffPolicy = new ExponentialBackOffPolicy();
    exponentialBackOffPolicy.setInitialInterval(500);
    // As per the spring-kafka documentation, maxInterval (60000 ms) should be set less than max.poll.interval.ms (600000 ms)
    exponentialBackOffPolicy.setMaxInterval(60000);
    retryTemplate.setBackOffPolicy(exponentialBackOffPolicy);
    return retryTemplate;
}
Here is my custom retry listener code:
@Component
public class MyRetryListener implements RetryListener {

    @Override
    public <T, E extends Throwable> boolean open(RetryContext context, RetryCallback<T, E> callback) {
        System.out.println("##### IN open method");
        return false;
    }

    @Override
    public <T, E extends Throwable> void close(RetryContext context, RetryCallback<T, E> callback,
            Throwable throwable) {
        System.out.println("##### IN close method");
    }

    @Override
    public <T, E extends Throwable> void onError(RetryContext context, RetryCallback<T, E> callback,
            Throwable throwable) {
        System.out.println("##### Got an error and will retry");
    }
}
Now I'm sending a message to a test topic, and in the consumer I'm throwing a TimeoutException so that the retry will trigger. Here is my consumer code.
@KafkaListener(topics = "CONSUMER_RETRY_TEST_TOPIC")
public void listen(ConsumerRecord message) throws RetriableException {
    System.out.println("CONSUMER_RETRY testing - Received message with key " + message.key() + " on topic " + CONSUMER_RETRY_TEST_TOPIC + " \n \n ");
    throw new TimeoutException();
}
With the above configuration, the retry is not triggered, the onError method of my custom retry listener is never invoked, and I'm getting the error below. Please suggest what I am missing here.
org.springframework.retry.TerminatedRetryException: Retry terminated abnormally by interceptor before first attempt
See the JavaDocs for RetryListener.open().
<T,E extends Throwable> boolean open(RetryContext context,
RetryCallback<T,E> callback)
Called before the first attempt in a retry. For instance, implementers can set up state that is needed by the policies in the RetryOperations. The whole retry can be vetoed by returning false from this method, in which case a TerminatedRetryException will be thrown.
Type Parameters:
T - the type of object returned by the callback
E - the type of exception it declares may be thrown
Parameters:
context - the current RetryContext.
callback - the current RetryCallback.
Returns:
true if the retry should proceed.
You need to return true, not false.
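For reference, a minimal corrected version of the open method (everything else in your listener can stay the same):

@Override
public <T, E extends Throwable> boolean open(RetryContext context, RetryCallback<T, E> callback) {
    System.out.println("##### IN open method");
    return true; // returning true lets the retry proceed; returning false vetoes it
}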
This is my first Spring Boot, Kafka project and my first Stack Overflow post.
I'm using Spring Boot 2.1.1 and spring-kafka 2.2.7.RELEASE. I am trying to configure Spring's SeekToCurrentErrorHandler with a DeadLetterPublishingRecoverer to send de-serialization failure messages to a different topic. The new DLT topic is not being created.
While I am able to see the error message due to the de-serialization failure as an ERROR in the application logs/IDE console (and process subsequent messages when feeding the topic manually), the "originalTopic.DLT" topic is not created and hence the incorrect message is not written to the .DLT topic. I read in the Spring documentation that "By default, the dead-letter record is sent to a topic named originalTopic.DLT (the original topic name suffixed with .DLT) and to the same partition as the original record".
Instead, I see the failed message in the log file (.log) along with the valid messages of the topic listed in the @KafkaListener annotation.
I am trying to write the error message as-is to the .DLT topic for further error processing.
Here is the configuration I have so far. Any direction regarding where I'm going wrong would be really helpful.
I referred to the following links to figure out a solution: https://docs.spring.io/spring-kafka/reference/html/#serdes, Configuring Spring Kafka to use DeadLetterPublishingRecoverer, and SeekToCurrentErrorHandler: DeadLetterPublishingRecoverer is not handling deserialize errors. But the issue I am facing is that the .DLT is not being created.
@EnableKafka
@Configuration
@ConditionalOnMissingBean(type = "org.springframework.kafka.core.KafkaTemplate")
public class SubscriberConfig {

    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Autowired
    private KafkaTemplate<Object, Object> kafkaTemplate;

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer2.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer2.class);
        props.put(ErrorHandlingDeserializer2.KEY_DESERIALIZER_CLASS, StringDeserializer.class);
        props.put(ErrorHandlingDeserializer2.VALUE_DESERIALIZER_CLASS, JsonDeserializer.class.getName());
        props.put(JsonDeserializer.KEY_DEFAULT_TYPE, "java.lang.String");
        props.put(JsonDeserializer.VALUE_DEFAULT_TYPE, "com.sample.main.entity.Transaction");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "json");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return props;
    }

    @Bean
    public ConsumerFactory<String, Transaction> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(),
                new JsonDeserializer<>(Transaction.class, false));
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Transaction> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, Transaction> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setErrorHandler(new SeekToCurrentErrorHandler(new DeadLetterPublishingRecoverer(kafkaTemplate), 3));
        return factory;
    }

    @KafkaListener(topics = "${spring.kafka.subscription.topic}", groupId = "json")
    public void consume(@Payload Transaction message, @Headers MessageHeaders headers) {
        // Business Logic......
        this.sendMsgToNewTopic(newTopicName, transformedTrans);
    }
}
Console output:
2019-07-29 15:28:03 ERROR LoggingErrorHandler:37 - Error while processing: ConsumerRecord(topic = trisyntrans, partition = 0, offset = 10, CreateTime = 1564432082456, serialized key size = -1, serialized value size = 30, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = this is failed deserialization)
org.springframework.kafka.support.converter.ConversionException: Failed to convert from JSON; nested exception is com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'this': was expecting 'null', 'true', 'false' or NaN
at [Source: (String)"this is failed deserialization"; line: 1, column: 5]
at org.springframework.kafka.support.converter.StringJsonMessageConverter.extractAndConvertValue(StringJsonMessageConverter.java:128)
at org.springframework.kafka.support.converter.MessagingMessageConverter.toMessage(MessagingMessageConverter.java:132)
at org.springframework.kafka.listener.adapter.MessagingMessageListenerAdapter.toMessagingMessage(MessagingMessageListenerAdapter.java:264)
at org.springframework.kafka.listener.adapter.RecordMessagingMessageListenerAdapter.onMessage(RecordMessagingMessageListenerAdapter.java:74)
at org.springframework.kafka.listener.adapter.RecordMessagingMessageListenerAdapter.onMessage(RecordMessagingMessageListenerAdapter.java:50)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeOnMessage(KafkaMessageListenerContainer.java:1275)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeOnMessage(KafkaMessageListenerContainer.java:1258)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:1219)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:1200)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:1120)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:935)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:751)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:700)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'this': was expecting 'null', 'true', 'false' or NaN
at [Source: (String)"this is failed deserialization"; line: 1, column: 5]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1804)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:679)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._reportInvalidToken(ReaderBasedJsonParser.java:2839)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._reportInvalidToken(ReaderBasedJsonParser.java:2817)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._matchToken(ReaderBasedJsonParser.java:2606)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._matchTrue(ReaderBasedJsonParser.java:2558)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:717)
at com.fasterxml.jackson.databind.ObjectMapper._initForReading(ObjectMapper.java:4141)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4000)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3042)
at org.springframework.kafka.support.converter.StringJsonMessageConverter.extractAndConvertValue(StringJsonMessageConverter.java:125)
... 15 more
An example of a non-conforming message could be a simple string such as "This is a test message".
You have to create the DLT topic yourself; the framework will do it for you if you add a NewTopic bean to the application context:
@Bean
public NewTopic dlt(@Value("${spring.kafka.subscription.topic}") String mainTopic) {
    return new NewTopic(mainTopic + ".DLT", 10, (short) 3);
}
This works as long as there is a KafkaAdmin @Bean in the application context (if you are using Spring Boot, one will be auto-configured for you).
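Note, too, that by default the DeadLetterPublishingRecoverer sends the failed record to the same partition number as the original record, so the .DLT topic should have at least as many partitions as the original topic unless you supply a custom destination resolver.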
My Kafka listener should process messages in sequential order; the onMessage method should process messages synchronously. I don't want my listener to process multiple messages at the same time. The onMessage method first stops the
org.springframework.kafka.listener.MessageListenerContainer
then delegates the payload to a synchronized method; after processing completes, it starts the listener back up. Other options, of course, are to use a blocking queue, an executor service, etc. I need advice on a better strategy to achieve this. Does the Kafka consumer have any built-in feature to process messages in series?
Here is my code. I changed the implementation to this:
public static class KafkaReadMsgTask implements Runnable {

    @Override
    public void run() {
        KakfaMsgConumerImpl kakfaMsgConumerImpl = null;
        try {
            kakfaMsgConumerImpl = SpContext.getBean(KakfaMsgConumerImpl.class);
            kakfaMsgConumerImpl.pollFormDef();
            kakfaMsgConumerImpl.pollFormData();
        } catch (Exception e) {
            logger.error(" kafka listener errors " + e);
            kakfaMsgConumerImpl.pauseTask();
        }
    }
}

@Component
public static class KakfaMsgConumerImpl {

    @Autowired
    ObjectMapper mapper;

    @Autowired
    FormSink formSink;

    @Autowired
    Environment env;

    @Resource(name = "formDefConsumer")
    Consumer formDefConsumer;

    @Resource(name = "formDataConsumer")
    Consumer formDataConsumer;

    ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();

    public void startPolling() throws Exception {
        executor.scheduleAtFixedRate(new KafkaReadMsgTask(), 10, 3, TimeUnit.SECONDS);
    }

    public void pauseTask() {
        try {
            Thread.sleep(120000L);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public void pollFormDef() throws Exception {
        ConsumerRecords<Long, String> records = formDefConsumer.poll(0);
        if (!records.isEmpty()) {
            int recordsCount = records.count();
            if (logger.isDebugEnabled()) {
                logger.debug(" form-def consumer poll records size " + recordsCount);
            }
            if (records.count() > 1) {
                logger.warn(" form-def consumer poll returned records more than 1 , expected 1 , received " + recordsCount);
            }
            ConsumerRecord<Long, String> record = records.iterator().next();
            processFormDef(record.key(), record.value());
        }
    }

    void pollFormData() throws Exception {
        ConsumerRecords<Long, String> records = formDataConsumer.poll(0);
        if (!records.isEmpty()) {
            int recordsCount = records.count();
            if (logger.isDebugEnabled()) {
                logger.debug(" form-data consumer poll records size " + recordsCount);
            }
            if (records.count() > 1) {
                logger.warn(" form-data consumer poll returned records more than 1 , expected 1 , received " + recordsCount);
            }
            ConsumerRecord<Long, String> record = records.iterator().next();
            processFormData(record.key(), record.value());
        }
    }

    void processFormDef(Long key, String msg) throws Exception {
        if (logger.isDebugEnabled()) {
            logger.debug(" key " + key + " payload : " + msg);
        }
        FormDefinition formDefinition = mapper.readValue(msg, FormDefinition.class);
        formSink.createFromDef(formDefinition);
        logger.debug(" processed message, key: " + key + " msg : " + msg);
        Thread.sleep(60000L);
    }

    void processFormData(Long key, String msg) throws Exception {
        if (logger.isDebugEnabled()) {
            logger.debug(" key " + key + " payload : " + msg);
        }
        FormData formData = mapper.readValue(msg, FormData.class);
        formSink.persists(formData);
        logger.debug(" processed message, key: " + key + " msg : " + msg);
        Thread.sleep(60000L);
    }
}
Using a message-driven listener container is not the right technology for this application; it looks like you want to consume messages alternately from two different topics.
Furthermore, stopping the container on the consumer thread won't take effect anyway until the thread exits the method, at which time the consumer will be closed.
I would suggest you use the consumer factory to create two consumers; subscribe to the topics, set max.poll.records on each to 1, and call the poll() method on each alternately.
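A rough, self-contained sketch of that approach, using raw kafka-clients consumers for brevity; the topic names, group ids, bootstrap server, and deserializers here are illustrative assumptions, not taken from your code:

import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AlternatingPoller {

    private static Consumer<Long, String> consumerFor(String topic, String group) {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ConsumerConfig.GROUP_ID_CONFIG, group);
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 1); // at most one record per poll
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        Consumer<Long, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList(topic));
        return consumer;
    }

    public static void main(String[] args) {
        // Hypothetical topic and group names for illustration only.
        try (Consumer<Long, String> formDef = consumerFor("form-def", "form-def-group");
             Consumer<Long, String> formData = consumerFor("form-data", "form-data-group")) {
            while (true) {
                // Poll the two topics alternately; each poll returns at most one record,
                // which is processed fully before the next poll, giving strict serial order.
                for (ConsumerRecord<Long, String> rec : formDef.poll(Duration.ofSeconds(1))) {
                    System.out.println("form-def: " + rec.key() + " -> " + rec.value());
                }
                for (ConsumerRecord<Long, String> rec : formData.poll(Duration.ofSeconds(1))) {
                    System.out.println("form-data: " + rec.key() + " -> " + rec.value());
                }
            }
        }
    }
}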
I'm trying to implement asynchronous Avro calls by using its NettyServer implementation. After digging through the source code, I found an example of how to use NettyServer in TestNettyServerWithCallbacks.java.
When running a few tests, I realized that NettyServer never calls the hello(Callback) method; instead it keeps calling the synchronous hello() method. The client program prints out "Hello" but I'm expecting "Hello-ASYNC" as a result. I really have no clue what's going on.
I hope someone can shine some light on me and perhaps point out the mistake. Below is the code I use to perform a simple asynchronous Avro test.
AvroClient.java - Client code.
public class AvroClient {

    public static void main(String[] args) throws InterruptedException, ExecutionException, TimeoutException {
        try {
            NettyTransceiver transceiver = new NettyTransceiver(new InetSocketAddress(6666));
            Chat.Callback client = SpecificRequestor.getClient(Chat.Callback.class, transceiver);
            final CallFuture<CharSequence> future1 = new CallFuture<CharSequence>();
            client.hello(future1);
            System.out.println(future1.get());
            transceiver.close();
        } catch (IOException ex) {
            System.err.println(ex);
        }
    }
}
AvroNetty.java - The Server Code
public class AvroNetty {

    public static void main(String[] args) {
        Index indexImpl = new AsyncIndexImpl();
        Chat chatImpl = new ChatImpl();
        Server server = new NettyServer(new SpecificResponder(Chat.class, chatImpl), new InetSocketAddress(6666));
        server.start();
        System.out.println("Server is listening at port " + server.getPort());
    }
}
ChatImpl.java
public class ChatImpl implements Chat.Callback {

    @Override
    public void hello(org.apache.avro.ipc.Callback<CharSequence> callback) throws IOException {
        callback.handleResult("Hello-ASYNC");
    }

    @Override
    public CharSequence hello() throws AvroRemoteException {
        return new Utf8("Hello");
    }
}
This interface is auto-generated by avro-tools:
Chat.java
@SuppressWarnings("all")
public interface Chat {
    public static final org.apache.avro.Protocol PROTOCOL = org.apache.avro.Protocol.parse("{\"protocol\":\"Chat\",\"namespace\":\"avro.test\",\"types\":[],\"messages\":{\"hello\":{\"request\":[],\"response\":\"string\"}}}");

    java.lang.CharSequence hello() throws org.apache.avro.AvroRemoteException;

    @SuppressWarnings("all")
    public interface Callback extends Chat {
        public static final org.apache.avro.Protocol PROTOCOL = avro.test.Chat.PROTOCOL;

        void hello(org.apache.avro.ipc.Callback<java.lang.CharSequence> callback) throws java.io.IOException;
    }
}
Here is the Avro schema:
{
    "namespace": "avro.test",
    "protocol": "Chat",
    "types" : [],
    "messages": {
        "hello": {
            "request": [],
            "response": "string"
        }
    }
}
The NettyServer implementation actually doesn't implement the async style at all. It is a deficiency in the library. Instead, you need to specify an asynchronous execution handler rather than trying to chain services together through callbacks. Here is what I use to set up my NettyServer to allow for this:
// Build a thread pool and wrap it in a Netty ExecutionHandler so that
// responder invocations are moved off the Netty I/O threads.
ExecutorService es = Executors.newCachedThreadPool();
OrderedMemoryAwareThreadPoolExecutor executor = new OrderedMemoryAwareThreadPoolExecutor(Runtime.getRuntime().availableProcessors(), 0, 0);
ExecutionHandler executionHandler = new ExecutionHandler(executor);
final NettyServer server = new NettyServer(responder, addr, new NioServerSocketChannelFactory(es, es), executionHandler);
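If I understand the Netty 3 classes correctly, the ExecutionHandler moves each responder invocation onto the OrderedMemoryAwareThreadPoolExecutor (which still preserves per-channel event ordering), so slow or blocking handlers no longer stall the server's event loop.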