How to use spring-kafka for sending a message again

We are using spring-kafka 1.2.2.RELEASE.
What we want
1. As soon as a message is consumed and processed successfully, its offset is committed in spring-kafka. I am using manual commit/acknowledgement for this, and it is working fine.
2. In case of any exception, we want spring-kafka to resend the same message. We throw a RuntimeException on any system error, which is logged by spring-kafka, and the offset is never committed.
This is fine, as we don't want the offset committed, but the message stays in Kafka and never comes back unless we restart the service. On restart the message comes back, executes once again, and then stays in Kafka.
What we tried
1. I have tried both ErrorHandler and RetryingMessageListenerAdapter, but in both cases we have to code in the service how to process the message again.
This is my consumer
public class MyConsumer {

    @KafkaListener
    public void receive(MyModel model, Acknowledgment acknowledgement) {
        // application logic to determine success/failure
        if (success) {
            acknowledgement.acknowledge();
        } else {
            throw new RuntimeException();
        }
    }
}
I also have the following configuration for the container factory:
factory.getContainerProperties().setErrorHandler(new ErrorHandler() {
    @Override
    public void handle(Exception thrownException, ConsumerRecord<?, ?> record) {
        throw new RuntimeException("");
    }
});
While executing the flow, control first enters the receive method and then the handle method. After that, the service waits for a new message. However, I was expecting that, since we threw an exception and the message was not committed, the same message would land in the receive method again.
Is there any way we can tell spring-kafka "do not commit this message and send it again asap"?

1.2.x is no longer supported; 1.x users are recommended to upgrade to at least 1.3.x (currently 1.3.8) because of its much simpler threading model, thanks to KIP-62.
The current version is 2.2.2.
2.0.1 introduced the SeekToCurrentErrorHandler which re-seeks the failed record so that it is redelivered.
With earlier versions, you had to stop and restart the container to redeliver a failed message, or add retry to the listener adapter.
I suggest you upgrade to the newest possible release.
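If you do upgrade, wiring in that error handler is a one-liner on the container factory. A minimal sketch, assuming a 2.2.x factory (which exposes setErrorHandler) and leaving the rest of your configuration as-is:

ConcurrentKafkaListenerContainerFactory<String, MyModel> factory =
        new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
// re-seeks the unprocessed records so the failed one is redelivered on the next poll()
factory.setErrorHandler(new SeekToCurrentErrorHandler());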

Unfortunately, the version available for us to use is 1.3.7.RELEASE.
I have tried implementing the ConsumerSeekAware interface. Below is how I am doing it, and I can see the message being delivered repeatedly.
Consumer
public class MyConsumer implements ConsumerSeekAware {

    private ConsumerSeekCallback consumerSeekCallback;

    @KafkaListener
    public void receive(MyModel model, @Headers Map<String, Object> headers, Acknowledgment acknowledgement) {
        if (condition) { // application-specific success check
            acknowledgement.acknowledge();
        } else {
            consumerSeekCallback.seek((String) headers.get("kafka_receivedTopic"),
                    (int) headers.get("kafka_receivedPartitionId"),
                    (long) headers.get("kafka_offset")); // the offset header is a Long; seek() takes a long
        }
    }
    @Override
    public void registerSeekCallback(ConsumerSeekCallback consumerSeekCallback) {
        this.consumerSeekCallback = consumerSeekCallback;
    }

    @Override
    public void onIdleContainer(Map<TopicPartition, Long> arg0, ConsumerSeekCallback arg1) {
        LOGGER.debug("onIdleContainer called");
    }

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> arg0, ConsumerSeekCallback arg1) {
        LOGGER.debug("onPartitionsAssigned called");
    }
}
Config
@Configuration
public class MyConsumerConfig {

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        // Set server, deserializer, group id
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        return props;
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, MyModel> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, MyModel> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
        factory.getContainerProperties().setAckMode(AckMode.MANUAL);
        return factory;
    }

    @Bean
    public MyConsumer receiver() {
        return new MyConsumer();
    }
}

Related

How to deploy a Kafka consumer in paused mode until I signal it to start consuming messages

I am using spring-kafka 2.2.8 and trying to understand if there is an option to deploy a Kafka consumer in paused mode until I signal it to start consuming messages. Please suggest.
I see in the post below that we can pause and start the consumer, but I need the consumer to be in paused mode when it's deployed.
how to pause and resume @KafkaListener using spring-kafka
#KafkaListener(id = "foo", ..., autoStartup = "false")
Then start it using the KafkaListenerEndpointRegistry when you are ready:
registry.getListenerContainer("foo").start();
There is not much point in starting it in paused mode, but you can do that...
@SpringBootApplication
public class So62329274Application {

    public static void main(String[] args) {
        SpringApplication.run(So62329274Application.class, args);
    }

    @KafkaListener(id = "so62329274", topics = "so62329274", autoStartup = "false")
    public void listen(String in) {
        System.out.println(in);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so62329274").partitions(1).replicas(1).build();
    }

    @Bean
    public ApplicationRunner runner(KafkaListenerEndpointRegistry registry, KafkaTemplate<String, String> template) {
        return args -> {
            template.send("so62329274", "foo");
            registry.getListenerContainer("so62329274").pause();
            registry.getListenerContainer("so62329274").start();
            System.in.read();
            registry.getListenerContainer("so62329274").resume();
        };
    }
}
You will see a log message like this when the partitions are assigned:
Paused consumer resumed by Kafka due to rebalance; consumer paused again, so the initial poll() will never return any records

Calling seek of ConsumerSeekCallback from a Spring Boot application

Here is my setup:
ConsumerSeekAware implementation:
public class ReplayJobKafkaConsumer implements ConsumerSeekAware, AcknowledgingMessageListener<String, String> {

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> map, ConsumerSeekCallback consumerSeekCallback) {
    }

    @Override
    public void onIdleContainer(Map<TopicPartition, Long> map, ConsumerSeekCallback consumerSeekCallback) {
    }

    private static final ThreadLocal<ConsumerSeekCallback> seekCallBack = new ThreadLocal<>();

    private static ConsumerSeekCallback consumerSeekCallback;

    @Override
    public void registerSeekCallback(ConsumerSeekCallback callback) {
        this.seekCallBack.set(callback);
        consumerSeekCallback = callback;
    }

    public void onMessage(final ConsumerRecord<String, String> data, final Acknowledgment acknowledgment) {
    }

    public static ThreadLocal<ConsumerSeekCallback> getSeekCallback() {
        return seekCallBack;
    }

    public static ConsumerSeekCallback getAnotherSeekCallback() {
        return consumerSeekCallback;
    }
}
My Spring Boot application approximates to:
@SpringBootApplication
public class ReplayJobApplication {
    ...
    public void run(final String... args) {
        context = SpringApplication.run(ReplayJobApplication.class, args);
        ReplayJobKafkaConsumer.getAnotherSeekCallback().seek("top", 0, 23);
    }
    ...
}
The above setup works. Now I can run this application using
java -jar -Dstart.offset=0....
But it only works if the seek callback variable is not a ThreadLocal. I need it to be accessible from the Spring Boot application, as that is how I intend to run this consumer. TEMP-TOPIC's other consumers can still be processing, but I intend to run this consumer on an as-needed basis with a start and end offset. While the command-line parameters can be read in the consumer, my concerns are:
the callback variable is static (I cannot possibly create an instance of ReplayJobKafkaConsumer)
it is a plain variable and not a ThreadLocal
Though the lifetime of this container is only going to be from start to end, I wonder if this setup is flawed and need some confirmation that this implementation is OK.
You appear to have some fundamental misunderstanding of what's going on.
The ThreadLocal is needed because the Kafka consumer object is not thread-safe. If you store the callback in a ThreadLocal, you can perform arbitrary seek operations at runtime - either from the onMessage method, or by listening for a ListenerContainerIdleEvent when there are no messages.
You can't perform arbitrary seeks like ReplayJobKafkaConsumer.getAnotherSeekCallback().seek("top", 0, 23); from another thread.
You can't perform arbitrary seeks before the partitions have been assigned.
So, as I have been telling you in other answers/comments, you must do the seek when the partition(s) are assigned.
@Override
public void onPartitionsAssigned(Map<TopicPartition, Long> map, ConsumerSeekCallback consumerSeekCallback) {
    // Do the seeks here using the `consumerSeekCallback` parameter.
}
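For example, a minimal sketch of such an initial seek; the start.offset system property mirrors the -Dstart.offset flag from your question and is an illustrative assumption:

@Override
public void onPartitionsAssigned(Map<TopicPartition, Long> map, ConsumerSeekCallback callback) {
    // hypothetical: the desired starting offset arrives via -Dstart.offset
    long startOffset = Long.parseLong(System.getProperty("start.offset", "0"));
    for (TopicPartition tp : map.keySet()) {
        callback.seek(tp.topic(), tp.partition(), startOffset);
    }
}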
With modern versions of spring-kafka, you don't need to use ConsumerSeekAware unless you want to perform arbitrary seeks at runtime (after the initial seek). You can use a ConsumerAwareRebalanceListener instead.
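A sketch of that alternative, assuming a 2.x ContainerProperties (here called containerProperties), which accepts the listener via setConsumerRebalanceListener:

containerProperties.setConsumerRebalanceListener(new ConsumerAwareRebalanceListener() {

    @Override
    public void onPartitionsAssigned(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
        // seek each newly assigned partition; the offset 23 is just an example value
        partitions.forEach(tp -> consumer.seek(tp, 23L));
    }
});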

Does the Flink sink only support blocking I/O?

The invoke method of a sink seems to have no way to do async I/O, e.g. to return a Future?
For example, the Redis connector uses the jedis lib to execute Redis commands synchronously:
https://github.com/apache/bahir-flink/blob/master/flink-connector-redis/src/main/java/org/apache/flink/streaming/connectors/redis/RedisSink.java
Then it will block Flink's task thread waiting for the network response from the Redis server on every command. Is it possible for other operators to run in the same thread as the sink? If so, it would block them too?
I know Flink has an async I/O API, but it seems it is not meant to be used by sink implementations?
https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/stream/asyncio.html
As @Dexter mentioned, you can use RichAsyncFunction. Here is some sample code (it may need further updates to make it work ;)
AsyncDataStream.orderedWait(ds, new RichAsyncFunction<Tuple2<String, MyEvent>, String>() {

    transient private RedisClient client;
    transient private RedisAsyncCommands<String, String> commands;
    transient private ExecutorService executor;

    @Override
    public void open(Configuration parameters) throws Exception {
        super.open(parameters);
        client = RedisClient.create("redis://localhost");
        commands = client.connect().async();
        executor = Executors.newFixedThreadPool(10);
    }

    @Override
    public void close() throws Exception {
        // shut down the connection and thread pool.
        client.shutdown();
        executor.shutdown();
        super.close();
    }

    @Override
    public void asyncInvoke(Tuple2<String, MyEvent> input, final AsyncCollector<String> collector) throws Exception {
        // e.g. get something from Redis asynchronously
        final RedisFuture<String> future = commands.get("key");
        future.thenAccept(new Consumer<String>() {
            @Override
            public void accept(String value) {
                // use the already-delivered value; calling future.get() here would block
                collector.collect(Collections.singletonList(value));
            }
        });
    }
}, 1000, TimeUnit.MILLISECONDS);
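If the order of the emitted results doesn't matter, AsyncDataStream also has an unordered variant that emits results as soon as they complete, which can lower latency:

AsyncDataStream.unorderedWait(ds, asyncFunction, 1000, TimeUnit.MILLISECONDS);

Here asyncFunction stands for the RichAsyncFunction shown above.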

Signout with Google Sign-In option crashes with java.lang.IllegalStateException: GoogleApiClient is not connected yet

I am trying to use Google Sign-In for my Android app from here.
I am able to log in successfully with the Google account and able to fetch all the details. However, whenever I try to log out it fails with the following error:
java.lang.IllegalStateException: GoogleApiClient is not connected yet.
I have read many answers suggesting to create the GoogleApiClient object inside onCreate(), and that's what I am doing. I have added callbacks for connected and suspended, but the connection never goes into suspended mode.
Following is my code snippet:
public static void doInit(Context ctx, FragmentActivity fragmentActivity) {
    GoogleSignInOptions gso = new GoogleSignInOptions.Builder(
            GoogleSignInOptions.DEFAULT_SIGN_IN)
            .requestEmail()
            .build();
    mGoogleApiClient = new GoogleApiClient.Builder(ctx)
            .enableAutoManage(fragmentActivity, googleAuth)
            .addApi(Auth.GOOGLE_SIGN_IN_API, gso)
            .addConnectionCallbacks(googleAuth)
            .build();
}

public static Intent doGoogleLogIn() {
    return Auth.GoogleSignInApi.getSignInIntent(mGoogleApiClient);
}

public static boolean doGoogleLogOut() {
    Auth.GoogleSignInApi.signOut(mGoogleApiClient).setResultCallback(
            new ResultCallback<Status>() {
                @Override
                public void onResult(Status status) {
                }
            });
    return true;
}

@Override
public void onConnectionFailed(ConnectionResult connectionResult) {
    // An unresolvable error has occurred and Google APIs (including Sign-In) will not
    // be available.
    Log.d("Signin", "onConnectionFailed:" + connectionResult);
}

@Override
public void onConnected(@Nullable Bundle bundle) {
    System.out.println("Connected...");
}

@Override
public void onConnectionSuspended(int i) {
    System.out.println("Suspended....");
}
The only thing that is doubtful to me is that when I log in and create the GoogleApiClient object, it is created from a different activity than the one I am using for logout. I don't suspect this is the reason, because when the activity loads, isConnected on the GoogleApiClient returns true. However, the moment I do some UI action (click on Logout), it starts returning false.
The primary requirement was to log in and log out from different activities.
Finally I managed to make it work.
The actual cause of the error is the "enableAutoManage" invocation at the time of building the client object.
The API doc here suggests that it would automatically do the lifecycle management by calling methods in onStart and onStop of the activity.
Therefore, if you want to use the same object across different activities, you should avoid calling "enableAutoManage" and instead invoke apiObject.connect() (preferably in onStart of the activity) and apiObject.disconnect() (preferably in onStop of the activity) manually.
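A minimal sketch of that manual lifecycle management in each activity, assuming mGoogleApiClient was built without enableAutoManage as described above:

@Override
protected void onStart() {
    super.onStart();
    mGoogleApiClient.connect(); // connect manually since auto-manage is not used
}

@Override
protected void onStop() {
    mGoogleApiClient.disconnect();
    super.onStop();
}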

Retrofit RxJava Simple test

I'm learning Retrofit and RxJava and I've created a test to connect to GitHub:
public class GitHubServiceTests {

    RestAdapter restAdapter;
    GitHubService service;

    @Before
    public void setUp() {
        Gson gson = new GsonBuilder()
                .setFieldNamingPolicy(FieldNamingPolicy.LOWER_CASE_WITH_UNDERSCORES)
                .create();
        restAdapter = new RestAdapter.Builder()
                .setEndpoint("https://api.github.com")
                .setConverter(new GsonConverter(gson))
                .build();
        service = restAdapter.create(GitHubService.class);
    }

    @Test
    public void GitHubUsersListObservableTest() {
        service.getObservableUserList().flatMap(Observable::from)
                .subscribe(user -> System.out.println(user.login));
    }
}
When I execute this test, I see nothing in my console. But when I execute another test:
@Test
public void GitHubUsersListTest() {
    List<User> users = service.getUsersList();
    for (User user : users) {
        System.out.println(user.login);
    }
}
it works, and I see the users' logins in my console.
Here is my Interface for Retrofit:
public interface GitHubService {

    @GET("/users")
    List<User> getUsersList();

    @GET("/users")
    Observable<List<User>> getObservableUserList();
}
Where am I wrong?
Because of the asynchronous call, your test completes before a result is downloaded. That's a typical issue, and you have to 'tell' the test to wait for the result. In plain Java it would be:
@Test
public void GitHubUsersListObservableTest() throws InterruptedException {
    CountDownLatch latch = new CountDownLatch(N); // N = the number of items you expect
    service.getObservableUserList()
            .flatMap(Observable::from)
            .subscribe(user -> {
                System.out.println(user.login);
                latch.countDown();
            });
    latch.await();
}
Or you can use BlockingObservable from RxJava:
// This does not block.
BlockingObservable<User> observable = service.getObservableUserList()
        .flatMap(Observable::from)
        .toBlocking();

// This blocks and is called for every emitted item.
observable.forEach(user -> System.out.println(user.login));
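Another option, assuming RxJava 1.x (which these types suggest), is TestSubscriber, which blocks until the stream terminates and then lets you inspect or assert on the emitted items:

TestSubscriber<User> subscriber = new TestSubscriber<>();
service.getObservableUserList()
        .flatMap(Observable::from)
        .subscribe(subscriber);
// block until onCompleted/onError, then print what was emitted
subscriber.awaitTerminalEvent();
subscriber.getOnNextEvents().forEach(user -> System.out.println(user.login));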
