Is there a simple way to tell whether a consumer (created with Spring Boot and @KafkaListener) is operating normally?
This includes: it can access and poll the broker, it has at least one partition assigned, etc.
I see there are ways to subscribe to different lifecycle events, but this seems to be a very fragile solution.
Thanks in advance!
You can use the AdminClient to get the current group status...
@SpringBootApplication
public class So56134056Application {

    public static void main(String[] args) {
        SpringApplication.run(So56134056Application.class, args);
    }

    @Bean
    public NewTopic topic() {
        return new NewTopic("so56134056", 1, (short) 1);
    }

    @KafkaListener(id = "so56134056", topics = "so56134056")
    public void listen(String in) {
        System.out.println(in);
    }

    @Bean
    public ApplicationRunner runner(KafkaAdmin admin) {
        return args -> {
            try (AdminClient client = AdminClient.create(admin.getConfig())) {
                while (true) {
                    Map<String, ConsumerGroupDescription> map =
                            client.describeConsumerGroups(Collections.singletonList("so56134056"))
                                    .all().get(10, TimeUnit.SECONDS);
                    System.out.println(map);
                    System.in.read();
                }
            }
        };
    }

}
{so56134056=(groupId=so56134056, isSimpleConsumerGroup=false, members=(memberId=consumer-2-32a80e0a-2b8d-4519-b71d-671117e7eaf8, clientId=consumer-2, host=/127.0.0.1, assignment=(topicPartitions=so56134056-0)), partitionAssignor=range, state=Stable, coordinator=localhost:9092 (id: 0 rack: null))}
We have been thinking about exposing getLastPollTime() to the listener container API.
getAssignedPartitions() has been available since 2.1.3.
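To turn that into a Spring Boot Actuator check, a minimal sketch of a custom HealthIndicator built on getAssignedPartitions() might look like this (the listener id matches the @KafkaListener above; the cast assumes the container is an AbstractMessageListenerContainer, which is the case for the standard containers):

import java.util.Collection;

import org.apache.kafka.common.TopicPartition;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.MessageListenerContainer;

@Bean
public HealthIndicator kafkaListenerHealth(KafkaListenerEndpointRegistry registry) {
    return () -> {
        MessageListenerContainer container = registry.getListenerContainer("so56134056");
        if (container instanceof AbstractMessageListenerContainer && container.isRunning()) {
            Collection<TopicPartition> assigned =
                    ((AbstractMessageListenerContainer<?, ?>) container).getAssignedPartitions();
            // UP only when the consumer is running and owns at least one partition
            if (assigned != null && !assigned.isEmpty()) {
                return Health.up().build();
            }
        }
        return Health.down().build();
    };
}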
I know you haven't mentioned it in your post, but beware of adding items like this to a health check if you then deploy in AWS and use such a health check for your ELB scaling environment.
For example, one scenario that can happen is that your app loses connectivity to Kafka: your health check turns RED, and Elastic Beanstalk then begins a process of killing and restarting your instances (which will happen continually until your Kafka instances are available again). This could be costly!
There is also a more general philosophical question of whether health checks should 'cascade failures' or not, e.g. Kafka is down, so the app connected to Kafka claims it is down, the next app in the chain does the same, and so on. This is more usually implemented via circuit breakers, which are designed to minimise slow calls destined for failure; a sketch follows.
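For illustration only (not from the original answer): with a circuit-breaker library such as Resilience4j, a hypothetical guard around a Kafka-backed call could fail fast while the breaker is open instead of letting slow failures cascade up the chain. callKafkaBackedService is a stand-in for your real call:

import java.util.function.Supplier;

import io.github.resilience4j.circuitbreaker.CircuitBreaker;

public class KafkaCallGuard {

    private final CircuitBreaker breaker = CircuitBreaker.ofDefaults("kafka");

    public String guardedCall(Supplier<String> callKafkaBackedService) {
        // After repeated failures the breaker opens and get() throws
        // CallNotPermittedException immediately instead of making a doomed call
        Supplier<String> guarded = CircuitBreaker.decorateSupplier(breaker, callKafkaBackedService);
        return guarded.get();
    }
}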
You could use the AdminClient to check the topic description.
final AdminClient client = AdminClient.create(kafkaConsumerFactory.getConfigurationProperties());
final String topic = "someTopicName";

final DescribeTopicsResult describeTopicsResult = client.describeTopics(Collections.singleton(topic));
final KafkaFuture<TopicDescription> future = describeTopicsResult.values().get(topic);
try {
    // for health-check purposes we only need the fetch to succeed
    future.get(10, TimeUnit.SECONDS);
} catch (final InterruptedException | ExecutionException | TimeoutException e) {
    throw new RuntimeException("Failed to retrieve topic description for topic: " + topic, e);
}
I am trying to use content-based routing in the latest version of Spring Cloud Stream. This document (Content-based routing) describes how to use it with the legacy StreamListener approach.
This is my code with StreamListener
@StreamListener(target = EventChannels.FILE_REQUEST_IN,
        condition = "headers['saga_request']=='FILE_SUBMIT'")
public void handleSubmitFile(@Payload FileSubmitRequest request) {
}

@StreamListener(target = EventChannels.FILE_REQUEST_IN,
        condition = "headers['saga_request']=='FILE_CANCEL'")
public void handleCancelFile(@Payload FileCancelRequest request) {
}
By using the condition, it was possible to route the message to two different functions.
I am trying to consume the message with a Functional interface approach as below.
@Bean
public Consumer<String> consumeMessage() {
    return event -> {
        try {
            LOGGER.info("Consumer is working: {}", event);
        } catch (Exception ex) {
            LOGGER.error("Exception while processing");
        }
    };
}
How can I achieve similar content-based routing in the functions? TIA.
Other details->
Spring boot version - 2.3.12.RELEASE
Spring cloud version - Hoxton.SR11
Have you seen this: https://docs.spring.io/spring-cloud-stream/docs/3.1.4/reference/html/spring-cloud-stream.html#_event_routing?
We provide two different routing models, TO and FROM. The linked section contains samples, so please look through it and feel free to post any follow-ups.
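For the scenario in the question, routing TO one of two consumers based on the saga_request header could look roughly like this with the 3.x RoutingFunction (a sketch: the destination name file-requests is illustrative, and the consumer beans mirror your two StreamListener methods):

@Bean
public Consumer<FileSubmitRequest> handleSubmitFile() {
    return request -> LOGGER.info("Submit: {}", request);
}

@Bean
public Consumer<FileCancelRequest> handleCancelFile() {
    return request -> LOGGER.info("Cancel: {}", request);
}

with the routing turned on in application.properties:

spring.cloud.stream.function.routing.enabled=true
spring.cloud.function.routing-expression=headers['saga_request']=='FILE_SUBMIT' ? 'handleSubmitFile' : 'handleCancelFile'
spring.cloud.stream.bindings.functionRouter-in-0.destination=file-requests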
I'm writing a Kafka consumer using the org.springframework.kafka.annotation.KafkaListener (@KafkaListener) annotation. This annotation expects the topic to already exist at the time of subscribing, and it tries to create the topic if the topic is not present.
In my case, I don't want the consumer to create a topic with the default configuration; it should create the topic with custom configuration (number of partitions, cleanup policy, etc.). Is there any option for this in spring-kafka?
See the documentation on configuring topics.
If you define a KafkaAdmin bean in your application context, it can automatically add topics to the broker. To do so, you can add a NewTopic @Bean for each topic to the application context. The following example shows how to do so:
@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
            StringUtils.arrayToCommaDelimitedString(embeddedKafka().getBrokerAddresses()));
    return new KafkaAdmin(configs);
}

@Bean
public NewTopic topic1() {
    return new NewTopic("thing1", 10, (short) 2);
}

@Bean
public NewTopic topic2() {
    return new NewTopic("thing2", 10, (short) 2);
}
By default, if the broker is not available, a message is logged, but the context continues to load. You can programmatically invoke the admin’s initialize() method to try again later. If you wish this condition to be considered fatal, set the admin’s fatalIfBrokerNotAvailable property to true. The context then fails to initialize.
If the broker supports it (1.0.0 or higher), the admin increases the number of partitions if it is found that an existing topic has fewer partitions than the NewTopic.numPartitions.
If you are using Spring Boot, you don't need an admin bean, because Boot automatically configures one for you.
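Since the question asks specifically about settings such as cleanup policy: NewTopic also accepts per-topic configs, so a sketch along the lines of the beans above (topic name and values are illustrative) would be:

@Bean
public NewTopic compactedTopic() {
    Map<String, String> configs = new HashMap<>();
    configs.put("cleanup.policy", "compact"); // custom per-topic setting
    return new NewTopic("thing3", 10, (short) 2).configs(configs);
}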
I'm quite new to the microservice world and particularly Vert.x. I want my verticle to start anyway, even if there is no database connection available (e.g. the database URL is missing from the configuration). I have already managed to do this and my verticle starts.
The issue now is that I want my verticle to notice when the database connection becomes available again and connect to it. How can I do this?
I thought about creating another Verticle "DatabaseVerticle.java" which would send the current DB config on the event bus and my initial verticle would consume this message and check whether the config info is consistent (reply with success) or still missing some data (reply with fail and make the DatabaseVerticle check again).
This might work (and might not) but does not seem to be the optimal solution for me.
I'd be very glad if someone could suggest a better solution. Thank you!
For your use case, I'd recommend using vertx-config. In particular, have a look at the Listening to configuration changes section of the Vert.x Config documentation.
You could create a config retriever and set a handler for changes:
ConfigRetrieverOptions options = new ConfigRetrieverOptions()
        .setScanPeriod(2000)
        .addStore(myConfigStore);

ConfigRetriever retriever = ConfigRetriever.create(vertx, options);
retriever.getConfig(json -> {
    // If DB config is available, start the DB client;
    // otherwise set a "dbStarted" variable to false
});

retriever.listen(change -> {
    // If "dbStarted" is still set to false,
    // check the new config and start the DB client if possible,
    // then set "dbStarted" to true when done
});
The ideal way would be for some other service to tell your service about the database connection, either through the event bus or HTTP. What you can do is: when someone tries to access your database while the connection is not made, just try to make a DB call, handle the exception, and return false. Later, when you get a message on the event bus, consume it and save it in some config POJO. Now when someone tries to access your database, look for the config and, if it is available, make the connection.
Your consumer:
public void start() {
    EventBus eb = vertx.eventBus();
    eb.consumer("database", message -> {
        config.setConfig(message.body());
    });
}
Your DB client (Mongo for this example):

public class MongoService {

    private MongoClient client;
    public boolean isAvailable = false;

    MongoService(Vertx vertx) {
        // only connect once the connection string is present in the config
        String connection = config().getString("connection");
        if (connection != null) {
            client = MongoClient.createShared(vertx,
                    new JsonObject().put("connection_string", connection));
            isAvailable = true;
        }
    }
}
Not everything in Vert.x should be solved by another verticle.
In this case, you can use setPeriodic():
http://vertx.io/docs/vertx-core/java/#_don_t_call_us_we_ll_call_you
I assume you have some function that checks the DB for the first time.
Let's call it checkDB()
class PeriodicVerticle extends AbstractVerticle {

    private Long timerId;

    @Override
    public void start() {
        System.out.println("Started");
        // Should be called each time DB goes offline
        final Long timerId = this.vertx.setPeriodic(1000, (l) -> {
            final boolean result = checkDB();
            // Set some variable telling the verticle that DB is back online
            if (result) {
                cancelTimer();
            }
        });
        setTimerId(timerId);
    }

    private void cancelTimer() {
        System.out.println("Cancelling");
        getVertx().cancelTimer(this.timerId);
    }

    private void setTimerId(final Long timerId) {
        this.timerId = timerId;
    }
}
Here I play a bit with timerId, since we cannot pass it to cancelTimer() right away. But otherwise, it's quite simple.
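For completeness, a hypothetical checkDB() (not part of the original answer) could be a plain JDBC validity check; dbUrl is an assumed field. Note that blocking like this inside a setPeriodic handler stalls the event loop, so in real code prefer executeBlocking or an async client call and cancel the timer in its result handler:

private boolean checkDB() {
    try (Connection conn = DriverManager.getConnection(dbUrl)) {
        return conn.isValid(1); // 1-second validation timeout
    } catch (SQLException e) {
        return false; // DB still unreachable
    }
}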
I'm sending several Retrofit calls via SyncAdapter onPerformSync, and I'm trying to throttle the HTTP calls by spacing them out with a try/catch sleep statement. However, this is blocking the UI, which becomes responsive again only after all the calls are done.
What is a better way to regulate network calls (with a sleep timer) in the background in onPerformSync without blocking the UI?
@Override
public void onPerformSync(Account account, Bundle extras, String authority,
        ContentProviderClient provider, SyncResult syncResult) {

    String baseUrl = BuildConfig.API_BASE_URL;
    Retrofit retrofit = new Retrofit.Builder()
            .baseUrl(baseUrl)
            .addConverterFactory(GsonConverterFactory.create())
            .build();
    service = retrofit.create(HTTPService.class);
    Call<RetroFitModel> RetroFitModelCall = service.getRetroFit(apiKey, sortOrder);

    RetroFitModelCall.enqueue(new Callback<RetroFitModel>() {
        @Override
        public void onResponse(Response<RetroFitModel> response) {
            if (!response.isSuccess()) {
            } else {
                List<RetroFitResult> retrofitResultList = response.body().getResults();
                Utility.storeList(getContext(), retrofitResultList);
                for (final RetroFitResult result : retrofitResultList) {
                    RetroFitReview(result.getId(), service);
                    try {
                        // Sleep for SLEEP_TIME before running RetroFitReports & RetroFitTime
                        Thread.sleep(SLEEP_TIME);
                    } catch (InterruptedException e) {
                    }
                    RetroFitReports(result.getId(), service);
                    RetroFitTime(result.getId(), service);
                }
            }
        }

        @Override
        public void onFailure(Throwable t) {
            Log.e(LOG_TAG, "Error: " + t.getMessage());
        }
    });
}
The "onPerformSync" code is executed within the "SyncAdapterThread" thread, not within the Main UI thread. However this could change when making asynchronous calls with callbacks (which is our case here).
Here you are using an asynchronous call of the Retrofit "call.enqueue" method, and this has an impact on thread execution. The question we need to ask at this point:
Where callback methods are going to be executed?
To get the answer to this question, we have to determine which Looper is going to be used by the Handler that will post callbacks.
In case we are playing with handlers ourselves, we can define the looper, the handler and how to process messages/runnables between handlers. But this time it is different because we are using a third party framework (Retrofit). So we have to know which looper used by Retrofit?
Please note that if Retrofit didn't already define his looper, you
could have caught an exception saying that you need a looper to
process callbacks. In other words, an asynchronous call needs to be in
a looper thread in order to post callbacks back to the thread from
where it was executed.
According to the code source of Retrofit (Platform.java):
static class Android extends Platform {
    @Override CallAdapter.Factory defaultCallAdapterFactory(Executor callbackExecutor) {
        if (callbackExecutor == null) {
            callbackExecutor = new MainThreadExecutor();
        }
        return new ExecutorCallAdapterFactory(callbackExecutor);
    }

    static class MainThreadExecutor implements Executor {
        private final Handler handler = new Handler(Looper.getMainLooper());

        @Override public void execute(Runnable r) {
            handler.post(r);
        }
    }
}
You can notice "Looper.getMainLooper()", which means that Retrofit will post messages/runnables onto the main thread's message queue (you can research this for a more detailed explanation). Thus the posted message/runnable will be handled by the main thread.
That being said, the onResponse/onFailure callbacks will be executed on the main thread. And this is going to block the UI if you are doing too much work there (Thread.sleep(SLEEP_TIME);). You can check it for yourself: just set a breakpoint in the "onResponse" callback and check which thread it is running on.
So how should we handle this situation? (This is the answer to your question about the Retrofit use.)
Since we are already in a background thread (SyncAdapterThread), there is no need to make asynchronous calls in your case. Just make a synchronous Retrofit call and then process the result, or log a failure; a sketch follows. This way, you will not block the UI.
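A rough sketch of that synchronous variant, reusing the names from the question (RetroFitModelCall, SLEEP_TIME, the RetroFit* helpers and so on are assumed from your code):

try {
    // execute() blocks the current SyncAdapterThread, never the UI thread
    Response<RetroFitModel> response = RetroFitModelCall.execute();
    if (response.isSuccess()) {
        List<RetroFitResult> retrofitResultList = response.body().getResults();
        Utility.storeList(getContext(), retrofitResultList);
        for (final RetroFitResult result : retrofitResultList) {
            RetroFitReview(result.getId(), service);
            Thread.sleep(SLEEP_TIME); // throttles only the background sync thread
            RetroFitReports(result.getId(), service);
            RetroFitTime(result.getId(), service);
        }
    }
} catch (IOException | InterruptedException e) {
    Log.e(LOG_TAG, "Error: " + e.getMessage());
}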
I have a requirement to start a process on the server that may run for several minutes, so I was thinking of exposing the following hub method:-
public async Task Start()
{
    await Task.Run(() => _myService.Start());
}
There would also be a Stop() method that allows a client to stop the running process, probably via a cancellation token. I've also omitted code that prevents it from being started if already running, error handling, etc.
Additionally, the long-running process will be collecting data which it needs to periodically broadcast back to the client(s), so I was wondering about using an event - something like this:-
public async Task Start()
{
    _myService.AfterDataCollected += AfterDataCollectedHandler;
    await Task.Run(() => _myService.Start());
    _myService.AfterDataCollected -= AfterDataCollectedHandler;
}

private void AfterDataCollectedHandler(object sender, MyDataEventArgs e)
{
    Clients.All.SendData(e.Data);
}
Is this an acceptable solution or is there a "better" way?
You don't need to use SignalR to start the work; you can use the application's already existing framework/design/API for this, and only use SignalR for the pub/sub part.
I did this for my current customer's project: a user starts a work item, and all tabs belonging to that user are updated using SignalR. I used an open source library called SignalR.EventAggregatorProxy to abstract the domain from SignalR. Disclaimer: I'm the author of said library.
http://andersmalmgren.com/2014/05/27/client-server-event-aggregation-with-signalr/
Edit: using the .NET client, your code would look something like this:
public class MyViewModel : IHandle<WorkProgress>
{
    public MyViewModel(IEventAggregator eventAggregator)
    {
        eventAggregator.Subscribe(this);
    }

    public void Handle(WorkProgress message)
    {
        //Act on work progress
    }
}