I have a situation where I want to use multiple consumers on one JVM. I am using Spring Cloud Stream.
The desired behaviour is: I deploy my application on JVM 1, and if I set instance=3, then 3 consumers should be created.
Is there any configuration available for this in Spring Cloud Stream?
See the concurrency setting under Consumer Properties in the Spring Cloud Stream documentation.
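A minimal sketch of what that looks like, assuming the classic annotation-based programming model and the default "input" binding of Sink; in practice the property usually lives in application.yml rather than in code:

```java
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

@SpringBootApplication
@EnableBinding(Sink.class) // classic (pre-functional) programming model
public class MultiConsumerApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(MultiConsumerApplication.class)
                // creates 3 concurrent consumers for the 'input' binding;
                // normally this property goes into application.yml/properties
                .properties("spring.cloud.stream.bindings.input.consumer.concurrency=3")
                .run(args);
    }

    @StreamListener(Sink.INPUT)
    public void handle(String payload) {
        // each of the 3 concurrent consumer threads invokes this handler
        System.out.println(Thread.currentThread().getName() + " received " + payload);
    }
}
```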
I have a .NET Core application that consists of some background tasks (hosted services) and web APIs (which control and get the statuses of those background tasks). Other applications (e.g. clients) communicate with this service through these web API endpoints. We want this service to be highly available, i.e. if a service instance crashes then another instance should start doing the work automatically. Also, the client applications should be able to switch to the next service automatically (clients should call the APIs of the new instance instead of the old one).
The other important requirement is that the task (computation) this service performs in the background can't be shared between two instances. We have to make sure only one instance does this task at a given time.
What I have done up to now is run two instances of the same service and use a SQL Server-based distributed locking mechanism (SqlDistributedLock) to acquire a lock. The service that acquires the lock goes and does the operation while the other node waits to acquire the lock. If one service crashes, the next node is able to acquire the lock. On the client side, I used a Polly-based retry mechanism to switch the calling URL to the next node in order to find the working node.
But this design has an issue: if the node which acquired the lock loses connectivity to the SQL Server, the second service manages to acquire the lock and starts doing the work while the first service is still in the middle of doing the same.
I think I need some sort of leader election (it seems I've done it wrongly). Can anyone help me with a better solution for this kind of problem?
This problem is not specific to .NET or any other framework, so please make your question more general to make it more accessible. Generally, the solution to this problem lies in the domain of Enterprise Integration Patterns, so consult those references, as the status quo may change.
At first sight and based on my own experience developing distributed systems, I suggest two solutions:
use a load balancer or gateway to distribute requests between your service instances.
use a shared message queue/broker to put requests in and let each service instance dequeue a request for processing (see the sketch after this list).
Either is fine, and I have used both in my own designs.
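Although the question is about .NET, the competing-consumers idea behind the second option is easiest to show with a short Java/Spring Kafka sketch (the topic name, group id, and handler class are assumptions): every service instance subscribes with the same group id, so the broker hands each task message to exactly one live instance and rebalances the work if that instance dies.

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class TaskWorker {

    // Every service instance runs this listener with the same group id.
    // Kafka delivers each record in the "background-tasks" topic to only one
    // member of the "task-workers" group, so a task is processed by a single
    // instance; if that instance dies, its partitions are rebalanced to another.
    @KafkaListener(topics = "background-tasks", groupId = "task-workers")
    public void onTask(String taskPayload) {
        // do the long-running background computation here
        System.out.println("Processing task: " + taskPayload);
    }
}
```

The same pattern exists in RabbitMQ, Azure Service Bus, and other brokers; the point is that the broker, rather than a database lock, decides which instance gets a given piece of work.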
I have a use case with a lot of consumer groups (and one topic per consumer group), so I have to create many ConcurrentMessageListenerContainer instances, one per topic/consumer group. But I would like them to share a common thread pool to keep control over the calls to KafkaConsumer.poll() and over how the records are processed. Is it feasible to do that with Spring Kafka, or do I have to implement my own version by instantiating KafkaConsumers myself?
Using a pool won't help - the container currently uses a dedicated thread for each consumer. There is no support for sharing threads across containers.
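If you decide to roll your own consumers anyway (the second option raised in the question), a rough sketch of the idea might look like the following; the pool size, per-topic group naming, and the String deserialization are assumptions. The non-thread-safe KafkaConsumer is never touched by two threads at once because each task re-submits its own consumer only after poll() returns.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SharedPoolConsumers {

    // one shared pool for all consumers, instead of one dedicated thread per container
    private final ExecutorService pool = Executors.newFixedThreadPool(4); // size is an assumption

    public void start(List<String> topics, Properties baseProps) {
        for (String topic : topics) {
            Properties props = new Properties();
            props.putAll(baseProps);
            props.put(ConsumerConfig.GROUP_ID_CONFIG, topic + "-group"); // one group per topic, as in the question

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(List.of(topic));
            pool.submit(() -> pollOnce(consumer));
        }
    }

    private void pollOnce(KafkaConsumer<String, String> consumer) {
        // KafkaConsumer is not thread-safe, but it is only ever used by the single
        // task currently holding it; that task re-submits the consumer when done.
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(200));
        records.forEach(r -> System.out.println(r.topic() + ": " + r.value()));
        pool.submit(() -> pollOnce(consumer));
    }
}
```

Note that if every pooled thread is busy processing records, an idle consumer may not poll again for a long time and can be kicked out of its group (max.poll.interval.ms), which is one reason the Spring Kafka container dedicates a thread to each consumer.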
I'm using Axon version 3.3, which seamlessly supports Kafka, via an annotation on the Spring Boot main class:
@SpringBootApplication(exclude = KafkaAutoConfiguration.class)
In our use case, the command-side microservice needs to pick up messages from a Kafka topic rather than exposing a REST API. It will store the event in the event store and then move it to another Kafka topic for the query-side microservice to consume.
Since KafkaAutoConfiguration is disabled, I cannot use the spring-kafka auto-configuration to write a consumer. How can I consume a normal message in Axon?
I tried writing a normal Spring Kafka consumer, but since KafkaAutoConfiguration is disabled, the initial trigger for the command is not picked up from the Kafka topic.
I think I can help you out with this.
The Axon Kafka Extension is solely meant for Events.
Thus, it is not intended to dispatch Commands or Queries from one node to another.
This is very intentional, as Event messages have different routing needs as opposed to Command and Query messages.
Axon views Kafka as a fine fit for an Event Bus, and as such this is supported through the framework.
It is however not ideal for Command messages (should be routed to a single handler, always) or Query messages (can be routed to a single handler, several handlers or have a subscription model).
Thus, if you'd want to "abuse" Kafka for different types of messages in conjunction with Axon, you will have to write your own component/service for it.
I would however stick to the messaging paradigm and separate these concerns.
For far greater simplicity when routing messages between Axon applications, I'd highly recommend trying out Axon Server.
Additionally, here you can hear/see Allard Buijze point out the different routing needs per message type (thus the reason why Axon's Kafka Extension only deals with Event messages).
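If you do write such a component yourself, a rough sketch could be a manually wired Spring Kafka listener container that forwards records to Axon's CommandGateway (manual wiring because KafkaAutoConfiguration is excluded). The bootstrap address, group id, topic name, and the command record below are assumptions, not Axon APIs:

```java
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.KafkaMessageListenerContainer;
import org.springframework.kafka.listener.MessageListener;

@Configuration
public class CommandTopicListenerConfig {

    @Bean
    public KafkaMessageListenerContainer<String, String> commandTopicContainer(CommandGateway commandGateway) {
        // Wired by hand because KafkaAutoConfiguration is excluded.
        Map<String, Object> props = Map.of(
                ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092",       // assumption
                ConsumerConfig.GROUP_ID_CONFIG, "command-side",                   // assumption
                ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class,
                ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        ContainerProperties containerProps = new ContainerProperties("commands-in"); // hypothetical topic
        containerProps.setMessageListener((MessageListener<String, String>) rec ->
                // translate the raw record into a command and dispatch it on Axon's command bus;
                // the command class and the mapping are application-specific assumptions
                commandGateway.send(new ProcessIncomingMessageCommand(rec.key(), rec.value())));

        return new KafkaMessageListenerContainer<>(
                new DefaultKafkaConsumerFactory<>(props), containerProps);
    }

    // hypothetical command; replace with your real command class
    public record ProcessIncomingMessageCommand(String id, String payload) {}
}
```

This keeps the concern separation intact: Kafka only delivers the trigger, while the command itself is still routed through Axon's command handling.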
I am trying to use Topshelf to create a Rebus endpoint that will run as a service. How should this be set up and are there any examples?
You can take a look at the Rebus samples repository, where the integration service sample in particular shows what you're after.
As you can see in Program.cs, it uses Topshelf to basically just hold on to a Windsor container, which it disposes when the application shuts down.
The Castle Windsor installer syntax causes the installers to be automatically picked up, where the RebusInstaller shows how you'd typically let Rebus inject itself into your container, and the HandlerInstaller shows how you can add handlers to the container.
It should be fairly easy to adapt the sample to use another container - just remember to dispose it when the application shuts down, thus giving Rebus a chance to finish messages currently being handled and stop its worker threads.
I have a web application that maintains a version flag via ServletContext.setAttribute("flag", flagValue).
A Filter uses this flagValue in its doFilter method, which is applied to different servlets.
There is a notification service which toggles the value of the 'flag'.
Now I need to deploy this app in a clustered environment. Given that the ServletContext is per JVM, how do I propagate the update received on the notification URL across the multiple servers in the cluster?
You can use DynaCache's DistributedMap to share the value across the cluster; the other servers then just need to check for changes when they would be affected by it.
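A rough sketch of that approach on WebSphere; the JNDI name used here is the commonly documented default DistributedMap instance, but verify it against your WebSphere version and cell configuration:

```java
import javax.naming.InitialContext;

import com.ibm.websphere.cache.DistributedMap;

public class FlagReplicator {

    // "services/cache/distributedmap" is WebSphere's default DistributedMap JNDI name
    // (an assumption here; confirm it for your cluster configuration).
    private final DistributedMap cache;

    public FlagReplicator() throws Exception {
        cache = (DistributedMap) new InitialContext().lookup("services/cache/distributedmap");
    }

    // called by the notification endpoint on whichever node receives the toggle
    public void publishFlag(String flagValue) {
        cache.put("flag", flagValue); // visible to the other cluster members
    }

    // called by the Filter instead of ServletContext.getAttribute("flag")
    public String currentFlag() {
        return (String) cache.get("flag");
    }
}
```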
I ended up notifying the individual application server URLs.
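For reference, a minimal sketch of that fallback (the node list and endpoint path are assumptions; normally they would be externalized configuration), using Java 11's HttpClient to hit the notification endpoint on every cluster member:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

public class ClusterNotifier {

    private final HttpClient http = HttpClient.newHttpClient();

    // direct URLs of each application server instance (assumed values)
    private final List<String> nodeUrls = List.of(
            "http://appserver1:9080/myapp/toggleFlag",
            "http://appserver2:9080/myapp/toggleFlag");

    public void notifyAllNodes(String flagValue) throws Exception {
        for (String url : nodeUrls) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url + "?value=" + flagValue))
                    .POST(HttpRequest.BodyPublishers.noBody())
                    .build();
            // each node updates its own ServletContext attribute when it receives this call
            http.send(request, HttpResponse.BodyHandlers.discarding());
        }
    }
}
```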