Force Spring Kafka not to create topics automatically, but to use already created ones - spring-kafka

There is quite a simple case I would like to implement:
I have a base topic and a DLT topic:
MessageBus:
  Topic: my_topic
  DltTopic: my_dlt_topic
  Broker: event-serv:9092
So those topics are already predefined; I don't need to create them automatically.
The only thing I need is to handle broken messages automatically, without retries (retrying them makes no sense), so I have something like this:
@KafkaListener(topics = ["#{config.messageBus.topic}"], groupId = "group_id")
@RetryableTopic(
    dltStrategy = DltStrategy.FAIL_ON_ERROR,
    autoCreateTopics = "false",
    attempts = "1"
)
@Throws(IOException::class)
fun consume(rawMessage: String?) {
    ...
}

@DltHandler
fun processMessage(rawMessage: String?) {
    kafkaTemplate.send(config.messageBus.dltTopic, rawMessage)
}
That of course doesn't work properly.
I also tried to specify a kafkaTemplate:
@Bean
fun kafkaTemplate(
    config: Config,
    producerFactory: ProducerFactory<String, String>
): KafkaTemplate<String, String> {
    val template = KafkaTemplate(producerFactory)
    template.defaultTopic = config.messageBus.dltTopic
    return template
}
however, that does not change the situation.
In the end, I believe there is an obvious solution, so please give me a hint about it.

See the documentation.
@SpringBootApplication
public class So69317126Application {

    public static void main(String[] args) {
        SpringApplication.run(So69317126Application.class, args);
    }

    @RetryableTopic(attempts = "1", autoCreateTopics = "false", dltStrategy = DltStrategy.FAIL_ON_ERROR)
    @KafkaListener(id = "so69317126", topics = "so69317126")
    void listen(String in) {
        System.out.println(in);
        throw new RuntimeException();
    }

    @DltHandler
    void handler(String in) {
        System.out.println("DLT: " + in);
    }

    @Bean
    RetryTopicNamesProviderFactory namer() {
        return new RetryTopicNamesProviderFactory() {

            @Override
            public RetryTopicNamesProvider createRetryTopicNamesProvider(Properties properties) {
                if (properties.isMainEndpoint()) {
                    return new SuffixingRetryTopicNamesProviderFactory.SuffixingRetryTopicNamesProvider(properties) {

                        @Override
                        public String getTopicName(String topic) {
                            return "so69317126";
                        }

                    };
                }
                else if (properties.isDltTopic()) {
                    return new SuffixingRetryTopicNamesProviderFactory.SuffixingRetryTopicNamesProvider(properties) {

                        @Override
                        public String getTopicName(String topic) {
                            return "so69317126.DLT";
                        }

                    };
                }
                else {
                    throw new IllegalStateException("Shouldn't get here - attempts is only 1");
                }
            }

        };
    }

}
so69317126: partitions assigned: [so69317126-0]
so69317126-dlt: partitions assigned: [so69317126.DLT-0]
foo
DLT: foo

This is a Kafka server configuration, so you must set it on the broker. The relevant property is:
auto.create.topics.enable (true by default)
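For example, in each broker's server.properties (a broker-side setting, not a Spring application property):

# server.properties on the Kafka broker
auto.create.topics.enable=false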

Related

RetryingBatchErrorHandler - Offset commit handling

I'm using spring-kafka 2.3.8 and I'm trying to log the recovered records and commit the offsets using RetryingBatchErrorHandler. How would you commit the offset in the recoverer?
public class Customizer implements ContainerCustomizer {

    private static ConsumerRecordRecoverer createConsumerRecordRecoverer() {
        return (consumerRecord, e) -> {
            log.info("Number of attempts exhausted. partition: " + consumerRecord.partition()
                    + ", offset: " + consumerRecord.offset());
            // need to commit the offset
        };
    }

    @Override
    public void configure(AbstractMessageListenerContainer container) {
        container.setBatchErrorHandler(new RetryingBatchErrorHandler(new FixedBackOff(5000L, 3L),
                createConsumerRecordRecoverer()));
    }

}
The container will automatically commit the offsets if the error handler "handles" the exception, unless you set the ackAfterHandle property to false (it is true by default).
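If you did want the opposite behavior, a sketch (assuming a spring-kafka version where the handler supports the setter; see the EDIT2 note below about 2.3.x being out of support):

// Sketch: suppress the automatic commit after the recoverer has run.
RetryingBatchErrorHandler handler = new RetryingBatchErrorHandler(
        new FixedBackOff(5000L, 3L), createConsumerRecordRecoverer());
handler.setAckAfterHandle(false); // default is true
container.setBatchErrorHandler(handler);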
EDIT
This works as expected for me:
@SpringBootApplication
public class So69534923Application {

    private static final Logger log = LoggerFactory.getLogger(So69534923Application.class);

    public static void main(String[] args) {
        SpringApplication.run(So69534923Application.class, args);
    }

    @KafkaListener(id = "so69534923", topics = "so69534923")
    void listen(List<String> in) {
        System.out.println(in);
        throw new RuntimeException("test");
    }

    @Bean
    RetryingBatchErrorHandler eh() {
        return new RetryingBatchErrorHandler(new FixedBackOff(1000L, 2), (rec, ex) -> {
            this.log.info("Retries exhausted for " + ListenerUtils.recordToString(rec, true));
        });
    }

    @Bean
    ApplicationRunner runner(ConcurrentKafkaListenerContainerFactory<?, ?> factory,
            KafkaTemplate<String, String> template) {

        factory.getContainerProperties().setCommitLogLevel(Level.INFO);
        return args -> {
            template.send("so69534923", "foo");
            template.send("so69534923", "bar");
        };
    }

}
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.listener.type=batch
so69534923: partitions assigned: [so69534923-0]
[foo, bar]
[foo, bar]
[foo, bar]
Retries exhausted for so69534923-0#2
Retries exhausted for so69534923-0#3
Committing: {so69534923-0=OffsetAndMetadata{offset=4, leaderEpoch=null, metadata=''}}
The log was from the second run.
EDIT2
It does not work with 2.3.x; you should upgrade to a supported version.
https://spring.io/projects/spring-kafka#learn

Is it possible to have both listener and container error handlers?

I am building a general spring-kafka configuration for teams to use in their projects.
I would like to define a general custom error handler at container level, and allow the project to define a listener error handler for each listener. Anything that is not handled by the listener error handler should fall back to the container.
From what I've tested so far, it's either one or the other. Is there any way to get them to work together?
Would it make sense to have a handler chain at container level and allow projects to add error handlers to the chain?
There is nothing to prevent you configuring both error handlers...
@SpringBootApplication
public class So55001718Application {

    public static void main(String[] args) {
        SpringApplication.run(So55001718Application.class, args);
    }

    @KafkaListener(id = "so55001718", topics = "so55001718", errorHandler = "listenerEH")
    public void listen(String in) {
        System.out.println(in);
        if ("bad1".equals(in)) {
            throw new IllegalStateException();
        }
        else if ("bad2".equals(in)) {
            throw new IllegalArgumentException();
        }
    }

    @Bean
    public KafkaListenerErrorHandler listenerEH() {
        return (m, t) -> {
            if (t.getCause() instanceof IllegalStateException) {
                System.out.println(
                        t.getClass().getSimpleName() + " bad record " + m.getPayload() + " handled by listener EH");
                return null;
            }
            else {
                throw (t);
            }
        };
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
            ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
            ConsumerFactory<Object, Object> kafkaConsumerFactory) {

        ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
        configurer.configure(factory, kafkaConsumerFactory);
        factory.setErrorHandler((t, r) -> {
            System.out.println(t.getClass().getSimpleName() + " bad record " + r.value() + " handled by container EH");
        });
        return factory;
    }

    @Bean
    public NewTopic topic() {
        return new NewTopic("so55001718", 1, (short) 1);
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> {
            template.send("so55001718", "good");
            template.send("so55001718", "bad1");
            template.send("so55001718", "bad2");
        };
    }

}
and the output:
good
bad1
ListenerExecutionFailedException bad record bad1 handled by listener EH
bad2
ListenerExecutionFailedException bad record bad2 handled by container EH
You can create a simple wrapper to wrap multiple error handlers; feel free to open a GitHub issue (contributions are welcome).
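A minimal sketch of such a wrapper (the ChainedErrorHandler name and semantics are hypothetical, not an existing spring-kafka class): it tries each delegate in order, treats a normal return as "handled", and moves on to the next delegate if the current one rethrows.

import java.util.Arrays;
import java.util.List;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.KafkaException;
import org.springframework.kafka.listener.ErrorHandler;

// Hypothetical composite: delegates in order until one handles the record.
public class ChainedErrorHandler implements ErrorHandler {

    private final List<ErrorHandler> delegates;

    public ChainedErrorHandler(ErrorHandler... delegates) {
        this.delegates = Arrays.asList(delegates);
    }

    @Override
    public void handle(Exception thrownException, ConsumerRecord<?, ?> record) {
        Exception current = thrownException;
        for (ErrorHandler delegate : this.delegates) {
            try {
                delegate.handle(current, record); // returning normally means "handled"
                return;
            }
            catch (Exception ex) {
                current = ex; // not handled - fall through to the next delegate
            }
        }
        throw new KafkaException("No delegate handled the record", current);
    }

}

Usage would then be factory.setErrorHandler(new ChainedErrorHandler(projectHandler, fallbackHandler)).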

Iterator from object with next() and get()

Given an object like this:
Matcher matcher = pattern.matcher(sql);
with usage like so:
Set<String> matches = new HashSet<>();
while (matcher.find()) {
    matches.add(matcher.group());
}
I'd like to replace this while loop by something more object-oriented like so:
new Iterator<String>() {

    @Override
    public boolean hasNext() {
        return matcher.find();
    }

    @Override
    public String next() {
        return matcher.group();
    }

}
so that I can easily e.g. make a Stream of matches, stick to using fluent APIs and such.
The thing is, I don't know and can't find a more concise way to create this Stream or Iterator. An anonymous class like above is too verbose for my taste.
I had hoped to find something like IteratorFactory.from(matcher::find, matcher::group) or StreamSupport.of(matcher::find, matcher::group) in the jdk, but so far no luck. I've no doubt libraries like apache commons or guava provide something for this, but let's say I can't use those.
Is there a convenient factory for Streams or Iterators that takes a hasNext/next method combo in the jdk?
In java-9 you could do it via:
Set<String> result = matcher.results()
        .map(MatchResult::group)
        .collect(Collectors.toSet());
System.out.println(result);
In java-8 you would need a back-port for this, taken from Holger's fabulous answer
EDIT
By the way, there is a single method, tryAdvance, that could incorporate find/group, something like this:
static class MyIterator extends AbstractSpliterator<String> {

    private final Matcher matcher;

    public MyIterator(Matcher matcher) {
        // I can't think of a better way to estimate the size here;
        // maybe you can figure out a better one
        super(matcher.regionEnd() - matcher.regionStart(), 0);
        this.matcher = matcher;
    }

    @Override
    public boolean tryAdvance(Consumer<? super String> action) {
        if (matcher.find()) {
            action.accept(matcher.group());
            return true;
        }
        return false;
    }

}
And usage for example:
Pattern p = Pattern.compile("\\d");
Matcher m = p.matcher("12345");
Set<String> result = StreamSupport.stream(new MyIterator(m), false)
        .collect(Collectors.toSet());
This class I wrote embodies what I wanted to find in the JDK; apparently it just doesn't exist. Eugene's accepted answer offers a Java 9 Stream solution, though.
public static class SearchingIterator<T> implements Iterator<T> {

    private final BooleanSupplier advancer;

    private final Supplier<T> getter;

    private Optional<T> next;

    public SearchingIterator(BooleanSupplier advancer, Supplier<T> getter) {
        this.advancer = advancer;
        this.getter = getter;
        search();
    }

    private void search() {
        boolean hasNext = advancer.getAsBoolean();
        next = hasNext ? Optional.of(getter.get()) : Optional.empty();
    }

    @Override
    public boolean hasNext() {
        return next.isPresent();
    }

    @Override
    public T next() {
        T current = next.orElseThrow(IllegalStateException::new);
        search();
        return current;
    }

}
Usage:
Matcher matcher = Pattern.compile("\\d").matcher("123");
Iterator<String> it = new SearchingIterator<>(matcher::find, matcher::group);
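And since the goal was fluent Stream usage, the iterator wraps into a Stream with plain JDK calls (java.util.Spliterators and java.util.stream.StreamSupport):

// Wrap the Iterator in a Stream via a Spliterator of unknown size.
Stream<String> stream = StreamSupport.stream(
        Spliterators.spliteratorUnknownSize(it, Spliterator.ORDERED), false);
Set<String> result = stream.collect(Collectors.toSet());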

RxJava - Asynchronous Stream Processing

I am implementing a simple data-analytics feature with RxJava, where a topic subscriber asynchronously processes the data published to a topic, depositing the output to Redis.
When a message is received, the Spring component publishes it to an Observable. To avoid blocking the submission, I used RxJava Async to do this asynchronously.
@Override
public void onMessage(final TransactionalMessage message) {
    Async.start(new Func0<Void>() {

        @Override
        public Void call() {
            analyser.process(message);
            return null;
        }

    });
}
I have two points of confusion in implementing the other processing parts: 1) creating an asynchronous Observable with buffering, and 2) computing different logic in parallel, based on message type, over a list of messages.
After long experimentation I found two ways to create the async Observable, and I am not sure which one is the right and better approach.
Way one:
private static final class Analyzer {

    private Subscriber<? super TransactionalMessage> subscriber;

    public Analyzer() {
        OnSubscribe<TransactionalMessage> f = subscriber -> this.subscriber = subscriber;
        Observable.create(f).observeOn(Schedulers.computation())
                .buffer(5, TimeUnit.SECONDS, 5, Schedulers.io())
                .skipWhile((list) -> list == null || list.isEmpty())
                .subscribe(t -> compute(t));
    }

    public void process(TransactionalMessage message) {
        subscriber.onNext(message);
    }

}
Way two:
private static final class Analyser {

    private PublishSubject<TransactionalMessage> subject;

    public Analyser() {
        subject = PublishSubject.create();
        Observable<List<TransactionalMessage>> observable = subject
                .buffer(5, TimeUnit.SECONDS, 5, Schedulers.io())
                .observeOn(Schedulers.computation());
        observable.subscribe(new Observer<List<TransactionalMessage>>() {

            @Override
            public void onCompleted() {
                log.debug("[Analyser] onCompleted(), completed!");
            }

            @Override
            public void onError(Throwable e) {
                log.error("[Analyser] onError(), exception, ", e);
            }

            @Override
            public void onNext(List<TransactionalMessage> t) {
                compute(t);
            }

        });
    }

    public void process(TransactionalMessage message) {
        subject.onNext(message);
    }

}
The TransactionalMessage comes in different types, so I want to perform different computations based on the type. One approach I tried is to filter the list on each type and process the results separately, but this looks bad and I don't think it works in parallel. What is the right way to process them in parallel?
protected void compute(List<TransactionalMessage> messages) {
    Observable<TransactionalMessage> observable = Observable
            .from(messages);
    Observable<String> observable2 = observable
            .filter(new Func1<TransactionalMessage, Boolean>() {

                @Override
                public Boolean call(TransactionalMessage t) {
                    return t.getMsgType()
                            .equals(OttMessageType.click.name());
                }

            }).flatMap(
                    new Func1<TransactionalMessage, Observable<String>>() {

                        @Override
                        public Observable<String> call(
                                TransactionalMessage t) {
                            return Observable.just(
                                    t.getMsgType() + t.getAppId());
                        }

                    });
    Observable<String> observable3 = observable
            .filter(new Func1<TransactionalMessage, Boolean>() {

                @Override
                public Boolean call(TransactionalMessage t) {
                    return t.getMsgType()
                            .equals(OttMessageType.image.name());
                }

            }).flatMap(
                    new Func1<TransactionalMessage, Observable<String>>() {

                        @Override
                        public Observable<String> call(
                                TransactionalMessage t) {
                            return Observable.just(
                                    t.getMsgType() + t.getAppId());
                        }

                    });
    // I sense some code smell in filtering on type and processing it.
    Observable.merge(observable2, observable3)
            .subscribe(new Action1<String>() {

                @Override
                public void call(String t) {
                    // save it to redis
                    System.out.println(t);
                }

            });
}
I suggest thinking about Subjects before attempting to use create.
If you want parallel processing done based on some categorization, you could use groupBy along with observeOn to achieve the desired effect:
Observable.range(1, 100)
        .groupBy(v -> v % 3)
        .flatMap(g ->
                g.observeOn(Schedulers.computation())
                        .reduce(0, (a, b) -> a + b)
                        .map(v -> g.getKey() + ": " + v)
        )
        .toBlocking().forEach(System.out::println);
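Applied to the question's types, a hedged sketch (reusing the getMsgType()/getAppId() accessors from the question's code) would replace the per-type filter chains:

// Sketch: one group per message type, each computed on its own scheduler.
Observable.from(messages)
        .groupBy(t -> t.getMsgType()) // key by type instead of filtering per type
        .flatMap(g -> g.observeOn(Schedulers.computation())
                .map(t -> t.getMsgType() + t.getAppId()))
        .subscribe(s -> System.out.println(s)); // save to Redis here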

How to send a STOMP message with Spring4 when controller is mapped

I have the following code...
@Controller
@RequestMapping("/stomp/**")
public class StompController {

    @MessageMapping("/hello")
    @SendTo("/topic/greet")
    public Greeting greet(HelloMessage message) throws Exception {
        System.out.println("Inside the method " + message.getName());
        Thread.sleep(3000);
        return new Greeting("Hello, " + message.getName() + "!");
    }

}
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        config.enableSimpleBroker("/stomp/topic");
        config.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/stomp/hello").withSockJS();
    }

}
<script type="text/javascript">
    var stompClient = null;

    function setConnected(connected) {
        document.getElementById('connect').disabled = connected;
        document.getElementById('disconnect').disabled = !connected;
        document.getElementById('conversationDiv').style.visibility = connected ? 'visible' : 'hidden';
        document.getElementById('response').innerHTML = '';
    }

    function connect() {
        var socket = new SockJS('/stomp/hello');
        stompClient = Stomp.over(socket);
        stompClient.connect({}, function(frame) {
            setConnected(true);
            console.log('Connected: ' + frame);
            stompClient.subscribe('/stomp/topic/greet', function(greeting) {
                showGreeting(JSON.parse(greeting.body).content);
            });
        });
    }

    function disconnect() {
        stompClient.disconnect();
        setConnected(false);
        console.log("Disconnected");
    }

    function sendName() {
        var name = document.getElementById('name').value;
        stompClient.send("/stomp/app/hello", {}, JSON.stringify({ 'name': name }));
    }

    function showGreeting(message) {
        var response = document.getElementById('response');
        var p = document.createElement('p');
        p.style.wordWrap = 'break-word';
        p.appendChild(document.createTextNode(message));
        response.appendChild(p);
    }
</script>
The client-side code seems to connect fine, but I don't see the console message, which suggests to me that "/stomp/app/hello" is the wrong path. What should the proper path be?
I also tried /app/stomp/hello; no dice...
Update
I can remove the @RequestMapping("/stomp/**") and the STOMP-related prefixes, and it works fine for my simple test; however, I need it to work for a more complex application that will not allow this.
@RequestMapping and @MessageMapping annotations can be used in similar ways, but are totally different.
@MessageMapping can also be used at the type level (see the reference documentation), so you could annotate your controller with @MessageMapping("/stomp/**").
Nothing prevents you from annotating a Controller with both @MessageMapping and @RequestMapping - similar programming model, different purposes.
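A sketch under the question's configuration (reusing the question's Greeting and HelloMessage types and the /app application prefix from WebSocketConfig; the client would then send to /app/stomp/hello):

@Controller
@MessageMapping("/stomp") // type-level mapping, combined with the method-level one below
public class StompController {

    @MessageMapping("/hello") // inbound destination resolves to /app/stomp/hello
    @SendTo("/topic/greet")
    public Greeting greet(HelloMessage message) throws Exception {
        return new Greeting("Hello, " + message.getName() + "!");
    }

}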
