Ultra-low-latency processes using Axon Framework

So, I'm working on a PoC for a low-latency trading engine using Axon Framework and Spring Boot. Is it possible to achieve latency as low as 10-50ms for a single process flow? The process will include validations, orders, and risk management. I have done some initial tests on a simple app that updates the order state and executes it, and I'm clocking in at 300ms+ of latency, which got me curious as to how much I can optimize with Axon.
Edit:
The latency issue isn't related to Axon. I managed to get it down to ~5ms per process flow using an InMemoryEventStorageEngine and a DisruptorCommandBus (a sketch of the storage-engine wiring follows below).
The flow of messages goes like this: NewOrderCommand (published by the client) -> OrderCreatedEvent (published by the aggregate) -> ExecuteOrderCommand (published by the saga) -> OrderExecutionRequestedEvent (published by the aggregate) -> ConfirmOrderExecutionCommand (published by the saga) -> OrderExecutedEvent (published by the aggregate).
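For reference, a minimal sketch of that storage-engine wiring (the configuration class and bean names are illustrative, and it assumes the Axon Server connector is disabled so Spring Boot picks up this engine):

import org.axonframework.eventsourcing.eventstore.EventStorageEngine
import org.axonframework.eventsourcing.eventstore.inmemory.InMemoryEventStorageEngine
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration

@Configuration
class PocEventStoreConfig {

    // Keeps all events on the heap; nothing is persisted, so this is only
    // suitable for benchmarking/PoC runs, not production use.
    @Bean
    fun storageEngine(): EventStorageEngine = InMemoryEventStorageEngine()
}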
Edit 2:
I finally switched over to Axon Server, but as expected the average latency went up to ~150ms. Axon Server was installed using Docker. How do I optimize the application using Axon Server to achieve sub-millisecond latencies moving forward? Any pointers are appreciated.
Edit 3:
@Steven, based on your suggestions I have managed to bring the latency down to an average of 10ms, which is a good start! However, is it possible to bring it down even further? What I am testing now is just one small process out of a series of processes to be done, like validations, risk management and position tracking, before finally sending the order out, all of which should be done within 5ms or less. The worst case to tolerate is 10ms (these are the updated time budgets). Also, note in the configs below that the new readings are based on an InMemorySagaStore backed by a WeakReferenceCache. Really appreciate the help!
OrderAggregate:
@Aggregate
internal class OrderAggregate {

    @AggregateIdentifier(routingKey = "orderId")
    private lateinit var clientOrderId: String
    private var orderId: String = UUID.randomUUID().toString()
    private lateinit var state: OrderState
    private lateinit var createdAtSource: LocalTime
    private val log by Logger()

    constructor() {}

    @CommandHandler
    constructor(command: NewOrderCommand) {
        log.info("received new order command")
        val (orderId, created) = command
        apply(
            OrderCreatedEvent(
                clientOrderId = orderId,
                created = created
            )
        )
    }

    @CommandHandler
    fun handle(command: ConfirmOrderExecutionCommand) {
        apply(OrderExecutedEvent(orderId = command.orderId, accountId = accountId))
    }

    @CommandHandler
    fun execute(command: ExecuteOrderCommand) {
        log.info("execute order event received")
        apply(
            OrderExecutionRequestedEvent(
                clientOrderId = clientOrderId
            )
        )
    }

    @EventSourcingHandler
    fun on(event: OrderCreatedEvent) {
        log.info("order created event received")
        clientOrderId = event.clientOrderId
        createdAtSource = event.created
        setState(Confirmed)
    }

    @EventSourcingHandler
    fun on(event: OrderExecutedEvent) {
        val now = LocalTime.now()
        log.info(
            "elapse to execute: ${
                createdAtSource.until(
                    now,
                    MILLIS
                )
            }ms. created at source: $createdAtSource, now: $now"
        )
        setState(Executed)
    }

    private fun setState(state: OrderState) {
        this.state = state
    }
}
OrderManagerSaga:
@Profile("rabbit-executor")
@Saga(sagaStore = "sagaStore")
class OrderManagerSaga {

    @Autowired
    private lateinit var commandGateway: CommandGateway

    @Autowired
    private lateinit var executor: RabbitMarketOrderExecutor

    private val log by Logger()

    @StartSaga
    @SagaEventHandler(associationProperty = "clientOrderId")
    fun on(event: OrderCreatedEvent) {
        log.info("saga received order created event")
        commandGateway.send<Any>(ExecuteOrderCommand(orderId = event.clientOrderId, accountId = event.accountId))
    }

    @SagaEventHandler(associationProperty = "clientOrderId")
    fun on(event: OrderExecutionRequestedEvent) {
        log.info("saga received order execution requested event")
        try {
            //execute order
            commandGateway.send<Any>(ConfirmOrderExecutionCommand(orderId = event.clientOrderId))
        } catch (e: Exception) {
            log.error("failed to send order: $e")
            commandGateway.send<Any>(
                RejectOrderCommand(
                    orderId = event.clientOrderId
                )
            )
        }
    }
}
Beans:
@Bean
fun eventSerializer(mapper: ObjectMapper): JacksonSerializer {
    return JacksonSerializer.Builder()
        .objectMapper(mapper)
        .build()
}

@Bean
fun commandBusCache(): Cache {
    return WeakReferenceCache()
}

@Bean
fun sagaCache(): Cache {
    return WeakReferenceCache()
}

@Bean
fun associationsCache(): Cache {
    return WeakReferenceCache()
}

@Bean
fun sagaStore(sagaCache: Cache, associationsCache: Cache): CachingSagaStore<Any> {
    val sagaStore = InMemorySagaStore()
    return CachingSagaStore.Builder<Any>()
        .delegateSagaStore(sagaStore)
        .associationsCache(associationsCache)
        .sagaCache(sagaCache)
        .build()
}

@Bean
fun commandBus(
    commandBusCache: Cache,
    orderAggregateFactory: SpringPrototypeAggregateFactory<Order>,
    eventStore: EventStore,
    txManager: TransactionManager,
    axonConfiguration: AxonConfiguration,
    snapshotter: SpringAggregateSnapshotter
): DisruptorCommandBus {
    val commandBus = DisruptorCommandBus.builder()
        .waitStrategy(BusySpinWaitStrategy())
        .executor(Executors.newFixedThreadPool(8))
        .publisherThreadCount(1)
        .invokerThreadCount(1)
        .transactionManager(txManager)
        .cache(commandBusCache)
        .messageMonitor(axonConfiguration.messageMonitor(DisruptorCommandBus::class.java, "commandBus"))
        .build()
    commandBus.registerHandlerInterceptor(CorrelationDataInterceptor(axonConfiguration.correlationDataProviders()))
    return commandBus
}
Application.yml:
axon:
  server:
    enabled: true
  eventhandling:
    processors:
      name:
        mode: tracking
        source: eventBus
  serializer:
    general: jackson
    events: jackson
    messages: jackson

Original Response
Your setup's description is thorough, but I think there are still some options I can recommend. These touch a bunch of places within the framework, so if anything is unclear about the suggestions, their position, or their goals within Axon, feel free to add a comment so that I can update my response.
Now, here is the list of things I have in mind:
Set up snapshotting for aggregates if loading takes too long. This is configurable with the AggregateLoadTimeSnapshotTriggerDefinition (see the sketch after this list).
Introduce a cache for your aggregate. I'd start by trying out the WeakReferenceCache. If this doesn't suffice, it would be worth investigating the EhCache and JCache adapters, or constructing your own. There's a section on Aggregate caching in the reference guide, by the way.
Introduce a cache for your saga. I'd start by trying out the WeakReferenceCache. If this doesn't suffice, it would be worth investigating the EhCache and JCache adapters, or constructing your own. There's a section on Saga caching in the reference guide, by the way.
Do you really need a saga in this setup? The process seems simple enough that it could run within a regular Event Handling Component. If that's the case, not moving through the saga flow will likely introduce a speed-up too.
Have you tried optimizing the DisruptorCommandBus? Try playing with the WaitStrategy, publisher thread count, invoker thread count and the Executor used.
Try out the PooledStreamingEventProcessor (PSEP, for short) instead of the TrackingEventProcessor (TEP, for short). The former provides more configuration options, and its defaults already give higher throughput than the TEP. Increasing the batch size allows you to ingest bigger batches of events in one go. You can also change the Executor the PSEP uses for event retrieval (done by the coordinator) and for event processing (the worker executor is in charge of this).
There are also some things you can configure on Axon Server that might increase throughput. Try out event.events-per-segment-prefetch, event.read-buffer-size or command-thread. There might be other options that work, so it may be worth checking out the entire list of Axon Server configuration options.
Although it's hard to deduce whether this will generate an immediate benefit, you could give the Axon Server runnable more memory/CPU: at least 2 GB of heap and 4 cores. Playing with these numbers might just help too.
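To make the snapshotting, caching, and PSEP points above a bit more concrete, here is a rough Kotlin sketch of what such a configuration could look like (assuming Axon 4.5+; the bean names, the 500 ms threshold, the batch size and the processor name are illustrative assumptions, not drop-in values):

import org.axonframework.common.caching.Cache
import org.axonframework.common.caching.WeakReferenceCache
import org.axonframework.config.EventProcessingConfigurer
import org.axonframework.eventsourcing.AggregateLoadTimeSnapshotTriggerDefinition
import org.axonframework.eventsourcing.SnapshotTriggerDefinition
import org.axonframework.eventsourcing.Snapshotter
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration

@Configuration
class AxonTuningConfig {

    // Snapshot the aggregate once loading it takes longer than ~500 ms (threshold is a guess).
    @Bean
    fun orderSnapshotTrigger(snapshotter: Snapshotter): SnapshotTriggerDefinition =
        AggregateLoadTimeSnapshotTriggerDefinition(snapshotter, 500L)

    // Cache to keep aggregate state between command invocations.
    @Bean
    fun orderCache(): Cache = WeakReferenceCache()

    // Swap the saga's tracking processor for a pooled streaming one with a bigger batch size.
    // The processor name is an assumption; check the actual name in the Axon dashboard.
    @Autowired
    fun configureProcessors(configurer: EventProcessingConfigurer) {
        configurer.registerPooledStreamingEventProcessor(
            "OrderManagerSagaProcessor",
            { config -> config.eventStore() },
            { _, builder -> builder.batchSize(64) }
        )
    }
}

The @Aggregate annotation also exposes cache and snapshotTriggerDefinition attributes, which can be used to point the aggregate at beans like these.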
There's likely more to share, but these are the things I have top of mind. Hope this helps you out somewhat, David!
Second Response
To further deduce where we can gain more performance, I think it would be essential to know which parts of your application's process take the longest. That will tell us what should be improved, if it can be improved at all.
Have you tried making a thread dump to deduce which parts take up the most time? If you can share that as an update to your question, we can start thinking about the following steps.
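For example, with the standard JDK tooling (the PID being whatever jps reports for the Spring Boot application):

jps -l                        # find the PID of the running application
jstack <pid> > threads.txt    # capture a thread dump
jcmd <pid> Thread.print       # alternative via jcmd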

Related

Add Quartz Job&Trigger to running Razor Pages application

I have a Razor Pages app that implements Quartz.NET to store jobs in a MySQL database. The implementation works fine so far: I can connect to the DB, store jobs, and they are executed at the specified times. The issue I'm having currently is that I need to schedule and execute jobs based on user input (without restarting the app), and that I can't get to work. I'm very new to Quartz and ASP.NET and I haven't been coding for very long either, so apologies if I've made any stupid mistakes.
I've read somewhere that I shouldn't initialize multiple schedulers, so I've tried storing the scheduler object I've got so I can access and use it later. However, when I try to access it from another class later, I get a NullReferenceException. To be honest, this feels like it shouldn't even work, so I'm not surprised it doesn't... Can anyone please look at my code below and tell me if this can work? Or is there a better way to do this?
I've found one other solution where they basically create a job on startup that periodically checks a DB for new jobs and adds them to the scheduler. I guess that would work; it seems a bit clunky, though. Plus it's from 10 years ago, so maybe there's a better way today? How to add job with trigger for running Quartz.NET scheduler instance without restarting server?
One other idea I've had was to open (and close) a new app whenever I need to create a job. I'm not sure I like that idea, but it seems less resource-intensive than the recurring job described above. Would that be a viable option?
The code for my current solution:
Scheduler:
//Creating Scheduler
Scheduler = await schedulerFactory.GetScheduler();
Scheduler.JobFactory = jobFactory;

var key = new JobKey("Notify Job", "DEFAULT");
if (key == null)
{
    //Create Job
    IJobDetail jobDetail = CreateJob(jobMetaData);
    //Create Trigger
    ITrigger trigger = CreateTrigger(jobMetaData);
    //Schedule Job
    //await Scheduler.ScheduleJob(jobDetail, trigger, cancellationToken);
    await Scheduler.AddJob(jobDetail, true);
}

//Start Scheduler
await Scheduler.Start(cancellationToken);

//Copying the scheduler object into a different class where it's easier to access.
ScheduleStore scheduleStore = new ScheduleStore();
scheduleStore.tempScheduler = Scheduler;
ScheduleStore:
public class ScheduleStore
{
    public IScheduler tempScheduler { get; set; }

    public ScheduleStore()
    {
    }
}
runtime Scheduler:
public class RunningScheduler : IHostedService
{
    public IScheduler scheduler { get; set; }
    private readonly JobMetadata jobMetaData;

    public RunningScheduler(JobMetadata job)
    {
        ScheduleStore scheduleStore = new ScheduleStore();
        this.scheduler = scheduleStore.tempScheduler;
        this.jobMetaData = job;
    }

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        IJobDetail jdets = CreateJob(jobMetaData);
        if (jobMetaData.CronExpression == "--")
        {
            ITrigger jtriggz = CreateSimpleTrigger(jobMetaData);
            //the next line throws the exception.
            await scheduler.ScheduleJob(jdets, jtriggz, cancellationToken);
            //It's definitely the scheduler that's throwing the null reference exception.
        }
        // the else does basically the same as the if, only with a cron trigger instead of a simple one, so I've omitted it.
I see that you are using a hosted service. Have you noticed that Quartz already has that support built in?
Quartz cannot handle new jobs ("new code that runs") dynamically, but it can certainly handle new triggers. You just need to obtain a reference to IScheduler, and then you can add new triggers pointing to an existing job, or just call scheduler.TriggerJob, which will run your job once with the given parameters (the job data map is a powerful feature for passing execution parameters).
I'd advise checking the GitHub repository and its examples; there are specific ones for different features, including ASP.NET Core and worker integrations.
Generally, Quartz already has database persistence support which you can use. Just call the scheduler methods to add jobs and triggers; they will be persisted and available between application restarts (and take effect immediately, without the need for a restart).

How to immediately stop processing new messages when inside a message handler?

I have a Rebus bus set up with a single worker and a max parallelism of 1 that processes messages "sequentially". In case a handler fails, or for specific business reasons, I'd like the bus instance to immediately stop processing messages.
I tried using the Rebus.Event package to detect the exception in the AfterMessageHandled handler and set the number of workers to 0, but it seems other messages are processed before it actually succeeds in stopping the single worker instance.
Where in the event processing pipeline could I call bus.Advanced.Workers.SetNumberOfWorkers(0); in order to prevent further message processing?
I also tried setting the number of workers to 0 inside a catch block in the handler itself, but it doesn't seem like the right place to do it, since SetNumberOfWorkers(0) waits for handlers to complete before returning and the caller is the handler... Looks like some kind of deadlock to me.
Thank you
This particular situation is a little bit of a dilemma because, as you've correctly observed, SetNumberOfWorkers is a blocking function which will wait until the desired number of threads has been reached.
In your case, since you're setting it to zero, it means your message handler needs to finish before the number of threads can reach zero... and then: 💣 ☠ 🔒
I'm sorry to say this, because I bet your desire to do this is because you're in a pickle somehow, but generally I must say that wanting to process messages sequentially and in order with message queues is begging for trouble, because there are so many things that can lead to messages being reordered.
But, I think you can solve your problem by installing a transport decorator, which will bypass the real transport when toggled. If the decorator then returns null from the Receive method, it will trigger Rebus' built-in back-off strategy and start chilling (i.e. it will increase the waiting time between polling the transport).
Check this out – first, let's create a simple, thread-safe toggle:
public class MessageHandlingToggle
{
    public volatile bool ProcessMessages = true;
}
(which you'll probably want to wrap up and make pretty somehow, but this should do for now)
and then we'll register it as a singleton in the container (assuming Microsoft DI here):
services.AddSingleton(new MessageHandlingToggle());
We'll use the ProcessMessages flag to signal whether message processing should be enabled.
Now, when you configure Rebus, you decorate the transport and give the decorator access to the toggle instance in the container:
services.AddRebus((configure, provider) =>
    configure
        .Transport(t => {
            t.Use(...);

            // install transport decorator here
            t.Decorate(c => {
                var transport = c.Get<ITransport>();
                var toggle = provider.GetRequiredService<MessageHandlingToggle>();
                return new MessageHandlingToggleTransportDecorator(transport, toggle);
            });
        })
        .(...)
);
So, now you'll just need to build the decorator:
public class MessageHandlingToggleTransportDecorator : ITransport
{
    static readonly Task<TransportMessage> NoMessage = Task.FromResult((TransportMessage)null);

    readonly ITransport _transport;
    readonly MessageHandlingToggle _toggle;

    public MessageHandlingToggleTransportDecorator(ITransport transport, MessageHandlingToggle toggle)
    {
        _transport = transport;
        _toggle = toggle;
    }

    public string Address => _transport.Address;

    public void CreateQueue(string address) => _transport.CreateQueue(address);

    public Task Send(string destinationAddress, TransportMessage message, ITransactionContext context)
        => _transport.Send(destinationAddress, message, context);

    public Task<TransportMessage> Receive(ITransactionContext context, CancellationToken cancellationToken)
        => _toggle.ProcessMessages
            ? _transport.Receive(context, cancellationToken)
            : NoMessage;
}
As you can see, it'll just return null when ProcessMessages == false. The only thing left is to decide when to resume processing messages again, pull MessageHandlingToggle from the container somehow (probably by having it injected), and then flip the bool back to true.
I hope this can work for you, or at least gives you some inspiration for how to solve your problem. 🙂

How to forward incoming data via REST to an SSE stream in Quarkus

In my setting I want to forward certain status changes via an SSE channel (server-sent events). The status changes are initiated by calling a REST endpoint, so I need to forward the incoming status change to the SSE stream.
What is the best/simplest way to accomplish this in Quarkus?
One solution I can think of is to use an EventBus (https://quarkus.io/guides/reactive-messaging). The SSE endpoint would subscribe to the status changes and push it through the SSE channel. The status change endpoint publishes appropriate events.
Is this a viable solution? Are there other (simpler) solutions? Do I need to use the reactive stuff in any case to accomplish this?
Any help is very appreciated!
The easiest way would be to use RxJava as a stream provider. First you need to add the RxJava dependency. It can come in either through Quarkus' reactive dependencies, such as Kafka, or by adding it directly (if you don't need any streaming libraries):
<dependency>
    <groupId>io.reactivex.rxjava2</groupId>
    <artifactId>rxjava</artifactId>
    <version>2.2.19</version>
</dependency>
Here's an example of how to send a random double value each second:
@GET
@Path("/stream")
@Produces(MediaType.SERVER_SENT_EVENTS)
@SseElementType("text/plain")
public Publisher<Double> stream() {
    return Flowable.interval(1, TimeUnit.SECONDS).map(tick -> new Random().nextDouble());
}
We create a new Flowable which fires every second, and on each tick we generate the next random double. Investigate the other options for creating a Flowable, such as Flowable.fromFuture(), to adapt it to your specific code logic.
P.S. The code above will generate a new Flowable each time you query this endpoint; I wrote it this way to save space. In your case I assume you'll have a single source of events that you can build once and reuse every time the endpoint is queried (see the sketch below).
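For illustration, a single shared source in the RxJava style used above could look roughly like this in Kotlin (names are illustrative; the SSE resource would return the same stream to every subscriber):

import io.reactivex.Flowable
import io.reactivex.processors.PublishProcessor
import javax.enterprise.context.ApplicationScoped

@ApplicationScoped
class StatusStreamSource {

    // One hot processor, built once and shared by all SSE subscribers.
    private val processor: PublishProcessor<String> = PublishProcessor.create()

    // Called by the REST endpoint that receives the status change.
    fun push(status: String) = processor.onNext(status)

    // The SSE resource returns this same stream on every request.
    fun stream(): Flowable<String> = processor.onBackpressureBuffer()
}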
Dmytro, thanks for pointing me in the right direction.
I have opted for Mutiny in connection with Kotlin. My code now looks like this:
data class DeviceStatus(var status: Status = Status.OFFLINE) {
    enum class Status { OFFLINE, CONNECTED, ANALYZING, MAINTENANCE }
}

@ApplicationScoped
class DeviceStatusService {

    var deviceStatusProcessor: PublishProcessor<DeviceStatus> = PublishProcessor.create()
    var deviceStatusQueue: Flowable<DeviceStatus> = Flowable.fromPublisher(deviceStatusProcessor)

    fun pushDeviceStatus(deviceStatus: DeviceStatus) {
        deviceStatusProcessor.onNext(deviceStatus)
    }

    fun getStream(): Multi<DeviceStatus> {
        return Multi.createFrom().publisher(deviceStatusQueue)
    }
}

@Path("/deviceStatus")
class DeviceStatusResource {

    private val LOGGER: Logger = Logger.getLogger("DeviceStatusResource")

    @Inject
    @field:Default
    lateinit var deviceStatusService: DeviceStatusService

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    fun status(status: DeviceStatus): Response {
        LOGGER.info("POST /deviceStatus " + status.status)
        deviceStatusService.pushDeviceStatus(status)
        return Response.ok().build()
    }

    @GET
    @Path("/eventStream")
    @Produces(MediaType.SERVER_SENT_EVENTS)
    @SseElementType(MediaType.APPLICATION_JSON)
    fun stream(): Multi<DeviceStatus>? {
        return deviceStatusService.getStream()
    }
}
As a minimal setup, the service could use the deviceStatusProcessor directly as the publisher. However, the Flowable adds buffering.
Comments on the implementation are welcome.

masstransit access Activity from service bus message

I am using MassTransit to send request/response messages via Service Bus between two services (don't ask why).
I would like to set up custom Application Insights telemetry. I know that Service Bus messages carry diagnostic metadata so the consumer can extract it and use it to correlate between services. However, I can't access it in MassTransit, or at least I don't know how.
Any tips?
A couple of months have passed and the solution I implemented has proven to be a good one.
I created a class that implements IReceiveObserver. In PreReceive I am able to cast the context to (ServiceBusReceiveContext)context and start a telemetry operation that has the correct parent id, so it looks like this:
public Task PreReceive(ReceiveContext context)
{
    var serviceBusContext = (ServiceBusReceiveContext)context;
    var requestActivity = new Activity("Process");
    requestActivity.SetParentId(serviceBusContext.Properties["Diagnostic-Id"].ToString());
    IOperationHolder<RequestTelemetry> operation = _telemetryClient.StartOperation<RequestTelemetry>(requestActivity);
    operation.Telemetry.Success = true;
    serviceBusContext.Properties.Add(TelemetryOperationKey, operation);
    return Task.CompletedTask;
}
In PostReceive I am able to stop the operation:
public Task PostReceive(ReceiveContext context)
{
    var serviceBusContext = (ServiceBusReceiveContext)context;
    var operation = (IOperationHolder<RequestTelemetry>)serviceBusContext.Properties[TelemetryOperationKey];
    operation.Dispose();
    return Task.CompletedTask;
}
I also do some magic when an exception happens:
public Task ReceiveFault(ReceiveContext context, Exception exception)
{
    _telemetryClient.TrackException(exception);
    var serviceBusContext = (ServiceBusReceiveContext)context;
    var operation = (IOperationHolder<RequestTelemetry>)serviceBusContext.Properties[TelemetryOperationKey];
    operation.Telemetry.ResponseCode = "Fail";
    operation.Telemetry.Success = false;
    operation.Dispose();
    return Task.CompletedTask;
}
It was difficult to find this solution by reading the MassTransit documentation. I would say that MassTransit is a fantastic tool, and for some situations there are no alternatives. However, the documentation is pretty poor.
You can use Application Insights with MassTransit; there is a package available that writes metrics directly.
The documentation is available here:
https://masstransit-project.com/advanced/monitoring/applications-insights.html
Also, you can access Activity.Current from anywhere, I think, based on my experience with DiagnosticSource. It might be different with AppInsights, though.

Axon Sagas duplicates events in event store when replaying events to new DB

We have an Axon application that stores new Orders. For each order state change (OrderStateChangedEvent) it schedules a couple of tasks. The tasks are triggered and processed by yet another saga (TaskSaga, out of scope of this question).
When I delete the projection database but leave the event store, then run the application again, the events are replayed (which is correct), but the tasks are duplicated.
I suppose this is because the OrderStateChangedEvent triggers a new set of ScheduleTaskCommands each time.
Since I'm new to Axon, I can't figure out how to avoid this duplication.
The event store runs on Axon Server.
The Spring Boot application autoconfigures the Axon components.
The projection database contains the projection tables and the Axon tables:
token_entry
saga_entry
association_value_entry
I suppose all the events are replayed because, by recreating the database, the Axon tables are gone (hence there is no record of the last applied event).
Am I missing something?
Should the token_entry/saga_entry/association_value_entry tables be part of the projection database on each application node?
I thought that the event store could be replayed onto a new application node's DB at any time without changing the event history, so I could run as many nodes as I wish. Or that I could remove the projection DB at any time and run the application, which would cause the events to be projected into the fresh DB again. Or is this not true?
In general, my problem is that one event produces a command leading to new (duplicated) events being produced. Should I avoid this "chaining" of events to avoid duplication?
THANKS!
Axon configuration:
@Configuration
public class AxonConfig {

    @Bean
    public EventSourcingRepository<ApplicationAggregate> applicationEventSourcingRepository(EventStore eventStore) {
        return EventSourcingRepository.builder(ApplicationAggregate.class)
                .eventStore(eventStore)
                .build();
    }

    @Bean
    public SagaStore sagaStore(EntityManager entityManager) {
        return JpaSagaStore.builder()
                .entityManagerProvider(new SimpleEntityManagerProvider(entityManager))
                .build();
    }
}
The CreateOrderCommand is received by the Order aggregate (the fromCommand method just maps the command 1:1 to an event):
@CommandHandler
public OrderAggregate(CreateOrderCommand cmd) {
    apply(OrderCreatedEvent.fromCommand(cmd))
            .andThenApply(() -> OrderStateChangedEvent.builder()
                    .applicationId(cmd.getOrderId())
                    .newState(OrderState.NEW)
                    .build());
}
The Order aggregate sets the properties:
@EventSourcingHandler
protected void on(OrderCreatedEvent event) {
    id = event.getOrderId();
    // ... additional properties set
}

@EventSourcingHandler
protected void on(OrderStateChangedEvent cmd) {
    this.state = cmd.getNewState();
}
The OrderStateChangedEvent is handled by a saga that schedules a couple of tasks for an order in that particular state:
private Map<String, TaskStatus> tasks = new HashMap<>();
private OrderState orderState;

@StartSaga
@SagaEventHandler(associationProperty = "orderId")
public void on(OrderStateChangedEvent event) {
    orderState = event.getNewState();
    List<OrderStateAwareTaskDefinition> tasksByState = taskService.getTasksByState(orderState);
    if (tasksByState.isEmpty()) {
        finishSaga(event.getOrderId());
    }
    tasksByState.stream()
            .map(task -> ScheduleTaskCommand.builder()
                    .orderId(event.getOrderId())
                    .taskId(IdentifierFactory.getInstance().generateIdentifier())
                    .targetState(orderState)
                    .taskName(task.getTaskName())
                    .build())
            .peek(command -> tasks.put(command.getTaskId(), SCHEDULED))
            .forEach(command -> commandGateway.send(command));
}
I think I can help you in this situation.
So, this happens because the TrackingToken used by the TrackingEventProcessor, which supplies all the events to your saga instances, is initialized at the beginning of the event stream. Due to this, the TrackingEventProcessor will start from the beginning of time, thus getting all your commands dispatched a second time.
There are a couple of things you could do to resolve this.
You could, instead of wiping the entire database, only wipe the projection tables and leave the token table intact.
You could configure the initialTrackingToken of a TrackingEventProcessor to start at the head of the event stream instead of the tail.
Option 1 would work out fine, but requires some delegation from an operations perspective. Option 2 leaves it in the hands of the developer, which is potentially a little safer than the other solution.
To adjust the token to start at the head, you can instantiate a TrackingEventProcessor with a TrackingEventProcessorConfiguration:
EventProcessingConfigurer configurer;

TrackingEventProcessorConfiguration trackingProcessorConfig =
        TrackingEventProcessorConfiguration.forSingleThreadedProcessing()
                .andInitialTrackingToken(StreamableMessageSource::createHeadToken);

configurer.registerTrackingEventProcessor("{class-name-of-saga}Processor",
        Configuration::eventStore,
        c -> trackingProcessorConfig);
You'd thus create the desired configuration for your saga and call the andInitialTrackingToken() function, ensuring the creation of a head token if no token is present.
I hope this helps you out, Tomáš!
Steven's solution works like a charm, but only in sagas. For those who want to achieve the same effect in a classic @EventHandler (to skip executions on replay), there is a way. First you have to find out how your tracking event processor is named. I found it in the Axon Dashboard (port 8024 on a running Axon Server); usually it is the location of the component with the @EventHandler annotation (the package name, to be precise). Then add the configuration as Steven indicated in his answer.
@Autowired
public void customConfig(EventProcessingConfigurer configurer) {
    // This prevents replaying events into @EventHandler methods
    var trackingProcessorConfig = TrackingEventProcessorConfiguration
            .forSingleThreadedProcessing()
            .andInitialTrackingToken(StreamableMessageSource::createHeadToken);
    configurer.registerTrackingEventProcessor("com.domain.notreplayable",
            org.axonframework.config.Configuration::eventStore,
            c -> trackingProcessorConfig);
}
