Publish Axon Events onto a Kafka Topic

I want to publish an event through one of my aggregate event handlers to the Axon Kafka topic, as I am using Kafka as my event bus. What is the correct way to do that? Should I push the message to the topic directly, or can I use AggregateLifecycle#apply(event) in this case?
I have multiple events getting published from my aggregate, and through one of the event handlers I want to publish another event. I am using Axon 4.2.

Easiest would be to start using the Kafka Extension Axon provides. The shared repository contains all the necessary code to create a publishing end and a consuming end for Axon events to and from a Kafka topic. For configuration convenience, there is a Spring Boot Starter present in that project too.
Additionally, the repository has a (Kotlin) sample project showing how to configure it, which you can find here. Lastly, for a full description of how to set everything up, I strongly recommend giving Axon's Reference Guide a read, especially the Kafka page here.
I'd like to point out, though, that this extension is perfectly suited to communicate between Axon and non-Axon applications, making Kafka a form of "Enterprise Service Bus". Using it as the EventBus replacement within Axon Framework is doable, but requires a bunch of fine-tuning on your end. It would be wiser to use Axon Server instead in those scenarios, or if you really must, share the data source containing your events directly between the applications.
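To make that concrete: within your aggregate you keep using AggregateLifecycle#apply(event), even from an event sourcing handler, and the extension's publisher forwards whatever is published on the event bus to the configured topic; you should not push to the topic yourself. A minimal sketch, assuming Axon 4.2 with the extension's Spring Boot starter on the classpath and hypothetical command/event classes:

import static org.axonframework.modelling.command.AggregateLifecycle.apply;

import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.TargetAggregateIdentifier;
import org.axonframework.spring.stereotype.Aggregate;

@Aggregate
public class OrderAggregate {

    @AggregateIdentifier
    private String orderId;

    protected OrderAggregate() {
        // Required by Axon to reconstruct the aggregate from its events
    }

    @CommandHandler
    public OrderAggregate(PlaceOrderCommand command) {
        // Publish through the regular Axon mechanism; with the Kafka Extension
        // configured, this event also ends up on the Kafka topic.
        apply(new OrderPlacedEvent(command.orderId));
    }

    @EventSourcingHandler
    public void on(OrderPlacedEvent event) {
        this.orderId = event.orderId;
        // Applying a follow-up event from an event sourcing handler works too:
        // Axon publishes it only while handling the live event, not during replay.
        apply(new OrderAuditRequestedEvent(event.orderId));
    }
}

class PlaceOrderCommand {
    @TargetAggregateIdentifier
    final String orderId;
    PlaceOrderCommand(String orderId) { this.orderId = orderId; }
}

class OrderPlacedEvent {
    final String orderId;
    OrderPlacedEvent(String orderId) { this.orderId = orderId; }
}

class OrderAuditRequestedEvent {
    final String orderId;
    OrderAuditRequestedEvent(String orderId) { this.orderId = orderId; }
}

The starter then only needs the broker and topic configured in application.properties (properties along the lines of axon.kafka.bootstrap-servers and axon.kafka.default-topic; check the reference guide for the exact names).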

Related

Is it possible to store all events in a central RDBMS DB in a microservice architecture using Axon 3.3.3?

I would like to understand an Axon feature.
Currently, we are developing an application using a microservice architecture.
We want to store all service events in a central RDBMS database, for example PostgreSQL.
Is it possible to use such a store?
We have used the below configuration to store events in the same domain DB:
@Bean
public AggregateFactory<UserAggregate> userAggregateFactory() {
    SpringPrototypeAggregateFactory<UserAggregate> aggregateFactory =
            new SpringPrototypeAggregateFactory<>();
    aggregateFactory.setPrototypeBeanName("userAggregate");
    return aggregateFactory;
}
Now we want to store events in a central Event Store DB, not in the domain DB.
Firstly, the AggregateFactory within any Axon application does not define where or how your events are stored at all.
I instead suggest reading the Event Bus & Event Store section of the Axon Framework Reference Guide on the matter, which explains how you can achieve this.
The short answer to your question is, by the way, yes: you can have a single Event Store backed by an RDBMS, like PostgreSQL, to store all your events in.
Between duplicated instances of a given application it is actually highly recommended to use the same storage location.
As soon as you span different Bounded Contexts, though, I would suggest defining a separate Event Store per context.
Lastly, you are using an old version of Axon Framework.
I would highly recommend moving to at least the latest Axon 3 release, being 3.4.3, but ideally you start using 4.1.2.
Note that there is no active development taking place on Axon 3 any more, hence the suggestion.
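For illustration, once on Axon 4, a JPA-backed storage engine writing to the shared store could be declared roughly as follows. This is a sketch, assuming Spring supplies the EntityManagerProvider and TransactionManager beans and that your datasource (spring.datasource.url and friends) points at the central PostgreSQL database:

import org.axonframework.common.jpa.EntityManagerProvider;
import org.axonframework.common.transaction.TransactionManager;
import org.axonframework.eventsourcing.eventstore.EventStorageEngine;
import org.axonframework.eventsourcing.eventstore.jpa.JpaEventStorageEngine;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EventStoreConfiguration {

    // Every service declaring this engine against the same datasource
    // appends its events to the same central store.
    @Bean
    public EventStorageEngine eventStorageEngine(EntityManagerProvider entityManagerProvider,
                                                 TransactionManager transactionManager) {
        return JpaEventStorageEngine.builder()
                .entityManagerProvider(entityManagerProvider)
                .transactionManager(transactionManager)
                .build();
    }
}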

How to create a command by consuming a message from a Kafka topic rather than through a REST API

I'm using Axon version 3.3, which seamlessly supports Kafka, with an annotation in the Spring Boot main class:
@SpringBootApplication(exclude = KafkaAutoConfiguration.class)
In our use case, the command-side microservice needs to pick up messages from a Kafka topic rather than exposing a REST API. It will store the event in the event store and then move it to another Kafka topic for the query-side microservice to consume.
Since KafkaAutoConfiguration is disabled, I cannot use the spring-kafka configuration to write a consumer. How can I consume a normal message in Axon?
I tried writing a normal Spring Kafka consumer, but since KafkaAutoConfiguration is disabled, the initial trigger for the command is not picked up from the Kafka topic.
I think I can help you out with this.
The Axon Kafka Extension is solely meant for Events.
Thus, it is not intended to dispatch Commands or Queries from one node to another.
This is very intentional, as Event messages have different routing needs compared to Command and Query messages.
Axon views Kafka as a fine fit for an Event Bus, and as such this is supported through the framework.
It is however not ideal for Command messages (should be routed to a single handler, always) or Query messages (can be routed to a single handler, several handlers or have a subscription model).
Thus, if you'd want to "abuse" Kafka for different types of messages in conjunction with Axon, you will have to write your own component/service for it.
I would however stick to the messaging paradigm and separate these concerns.
To greatly simplify routing messages between Axon applications, I'd highly recommend trying out Axon Server.
Additionally, here you can hear/see Allard Buijze point out the different routing needs per message type (thus the reason why Axon's Kafka Extension only deals with Event messages).
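If you do end up writing that component yourself, one option is to bypass spring-kafka entirely and feed Axon's CommandGateway from the plain Kafka client. A rough sketch, with a hypothetical topic name and command class, and deliberately no error handling or shutdown logic:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.Executors;

import javax.annotation.PostConstruct;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.axonframework.commandhandling.TargetAggregateIdentifier;
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.springframework.stereotype.Component;

@Component
public class CommandTopicConsumer {

    private final CommandGateway commandGateway;

    public CommandTopicConsumer(CommandGateway commandGateway) {
        this.commandGateway = commandGateway;
    }

    @PostConstruct
    public void start() {
        Executors.newSingleThreadExecutor().submit(() -> {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "command-side");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("incoming-commands"));
                while (!Thread.currentThread().isInterrupted()) {
                    for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                        // Map the raw message onto a command and dispatch it through Axon;
                        // the deserialization/translation logic is entirely up to your use case.
                        commandGateway.send(new CreateSomethingCommand(record.key(), record.value()));
                    }
                }
            }
        });
    }
}

class CreateSomethingCommand {
    @TargetAggregateIdentifier
    final String id;
    final String payload;
    CreateSomethingCommand(String id, String payload) { this.id = id; this.payload = payload; }
}

CreateSomethingCommand stands in for whichever command your aggregate actually handles; this keeps Kafka at the edge of the application while Axon's regular command routing takes over from there.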

How can I avoid using prooph's event sourcing?

My understanding of Event Sourcing is that it belongs to the Domain layer, which can't be coupled with the Infrastructure layer. So I will not use the prooph/event-sourcing component, and this is also why prooph's team will not maintain the event-sourcing component any longer (as mentioned in this article).
Here is my question: Aggregate, DomainEvent and so on belong to the Domain layer, and they are put in the event-sourcing component. The event store belongs to the Infrastructure layer, so I can use the prooph/event-store component directly. However, I found that the class Prooph\EventSourcing\Aggregate\AggregateRepository is used in prooph/event-store-symfony-bundle. Why is AggregateRepository put in event sourcing? I consider the Repository an Infrastructure concern, so the event store symfony bundle shouldn't use the event-sourcing component any more, and the Repository also shouldn't be put in event sourcing.
That's confusing me, so I can't use prooph/event-store now.
What do you think?
A repository is the link between the domain model and infrastructure. It's put into the event sourcing component because the event store does not care about aggregates and how they are organized at all. The event store manages streams of events; only the repository puts that into shape. It uses the event stream capabilities of the event store to manage the event history of aggregates. Hence, the repository is also your responsibility. You're right that a new version of the symfony bundle should no longer include a repository implementation but only provide prooph/event-store. That's not done yet. In fact, prooph/event-sourcing is maintained until the end of 2019, so we are not in a hurry.
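To illustrate that point outside of PHP, here is a minimal hypothetical sketch (rendered in Java; the names are illustrative, not prooph's actual API) of why the repository has to speak both languages:

import java.util.List;
import java.util.function.Supplier;

interface EventStore {                         // infrastructure: only knows streams
    List<Object> load(String streamName);
    void appendTo(String streamName, List<Object> events);
}

interface EventSourcedAggregate {              // domain: only knows its own events
    String id();
    void replay(Object event);                 // rebuild state from history
    List<Object> popRecordedEvents();          // new events since the last save
}

// The repository is the glue: it talks "streams" to the store and "aggregates"
// to the domain, which is why it cannot live purely in either layer.
final class AggregateRepository<T extends EventSourcedAggregate> {

    private final EventStore store;
    private final Supplier<T> factory;

    AggregateRepository(EventStore store, Supplier<T> factory) {
        this.store = store;
        this.factory = factory;
    }

    T load(String id) {
        T aggregate = factory.get();
        store.load("aggregate-" + id).forEach(aggregate::replay);
        return aggregate;
    }

    void save(T aggregate) {
        store.appendTo("aggregate-" + aggregate.id(), aggregate.popRecordedEvents());
    }
}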
Anyway, I highly recommend taking a look at Event Machine. At the moment it is based on prooph/event-sourcing, service-bus and event-store, but it already provides an abstraction layer and a way to fully decouple the domain model and other parts of your system from prooph and Event Machine itself. Just do the tutorial to learn more about it (it takes 4-6 hours).

DiagnosticSource - DiagnosticListener - For frameworks only?

There are a few new classes in .NET Core for tracing and distributed tracing. See the markdown docs here:
https://github.com/dotnet/corefx/tree/master/src/System.Diagnostics.DiagnosticSource/src
As application developers, should we be instrumenting events in our code (such as sales or inventory depletion) using DiagnosticListener instances, and then either subscribe and route the messages to some metrics store, or allow tools like Application Insights to automatically subscribe and push these events to the AI cloud?
OR
Should we create our own metrics collecting abstraction and inject/flow it down the stack "as per normal" and pretend I never saw DiagnosticListener?
I have a similar need to publish "health events" to Service Fabric which I could also solve (abstract) using DiagnosticListener instances sprinkled around.
DiagnosticListener is intended to decouple the library/app from the tracing system: i.e., any library can use DiagnosticSource to notify any consumer about interesting operations.
A tracing system can dynamically subscribe to such events and get extensive information about the operation.
If you develop an application and use a tracing system that supports DiagnosticListener, e.g. Application Insights, you may either use DiagnosticListener to decouple your code from the tracing system or use its API directly. The latter is more efficient, as there is no extra adapter that converts your DiagnosticSource events to AppInsights/other tracing systems' events. You can also fine-tune these events more easily.
The former is better if you actually want this layer of abstraction.
You can configure AI to use any DiagnosticListener (by specifying includedDiagnosticSourceActivities).
If you write a library and want to rely on something available on the platform, so that any app can use it without bringing in new extra dependencies, DiagnosticListener is your best choice.
Also consider that tracing and metrics collection are different: tracing is much heavier and does not assume any aggregation. If you just want custom metrics/events without in-proc/out-of-proc correlation, I'd recommend using the tracing system's APIs directly.

PACT: How to guard against consumer generating incorrect contracts

We have two micro-services: Provider and Consumer, both built independently. The Consumer micro-service makes a mistake in how it consumes the Provider service (for whatever reason), and as a result an incorrect pact is published to the Pact Broker.
The Consumer service build is successful (and can go all the way to release!), but the next Provider service build will fail for the wrong reason. So we end up with a broken Provider service build and a broken release of the Consumer.
What is the best practice to guard against situations like this?
I was hoping that the Pact Broker could trigger the Provider tests automatically when contracts are published and notify Consumers if they fail, but that doesn't seem to be the case.
Thanks!
This is the nature of consumer-driven contracts - the consumer gets a significant say in the API!
As a general rule, if the contract doesn't change, there is no need to run the Provider build, albeit there is currently no easy way to know this in the Broker (see feature request https://github.com/bethesque/pact_broker/issues/48).
As for solutions you could use one or more of the below strategies.
Effective use of code branches
It is of course very important that new assumptions on the contract be validated by the Provider before the Consumer can be safely released. Have branches tested against the Provider before you merge into master.
But most importantly - you must be collaborating closely with the Provider team!
Use source control to detect a modified contract:
If you also check the master pact files into source control, your CI build can act conditionally: if the contract has changed, you must wait for a green provider build; if not, you can safely deploy!
Store in a separate repository:
If you really want the provider to maintain control, you could store contracts in an intermediate repository or file location managed by the provider. I'd recommend this as a last resort, as it negates much of the collaboration Pact intends to facilitate.
Use Pact Broker Webhooks:
I was hoping that Pact Broker can trigger the Provider tests automatically when contracts are published and notify Consumers if they fail, but it doesn't seem to be the case.
Yes, this is possible using webhooks on the Pact Broker. You could trigger a build on the Provider as soon as a new contract is submitted to the server.
You could envisage this step working with options 1 and 2.
See Using Pact where the Consumer team is different from the Provider team in our FAQ for more on this use case.
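For instance, a webhook that triggers the provider build whenever contract content changes can be registered with a definition roughly like the one below (a sketch: the CI trigger URL is a placeholder, and the exact resource path and event names depend on your broker version, so check the Pact Broker webhook docs):

POST /webhooks
{
  "events": [
    { "name": "contract_content_changed" }
  ],
  "request": {
    "method": "POST",
    "url": "https://your-ci.example.com/job/provider-build/trigger",
    "headers": { "Content-Type": "application/json" }
  }
}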
You're spot on; that is one of the things currently lacking in the Pact workflow, and it's something I've been meaning to work towards once a few other things align.
That being said, this doesn't solve your current problem in the meantime, so I'm going to suggest a potential workaround in your process. Instead of running the tests for the consumer, having them pass, and then releasing straight away, you could have the tests run on the consumer, then wait for the provider tests to come back green before releasing the consumer/provider together. Another way would be to version your provider/consumer interactions (API versioning) so that you can release the consumer beforehand, but it isn't "turned on" until the correct version of the provider is released.
None of these solutions are great, and I wholeheartedly agree. This is something that I'm quite passionate about and will be working on soon, to fix the developer experience with the Pact Broker and releasing the consumer/provider in a better fashion.
Any and all comments are welcome. Cheers.
I think the problem might be caused by the fact that contracts are generated on the consumer side. It means that consumers can modify those contracts however they want, but in the end the producer's build will suffer due to incorrect contracts generated by consumers.
Is there any way for contracts to be defined by the producer? I think the producer is responsible for maintaining its own contracts. For instance, in the case of Spring Cloud Contract, it is recommended to have contracts defined in the producer's sources (e.g. in the same git repo as the producer's source code) or in a separate SCM repo that can be managed by producer and consumer together.
