I have an event-sourced system and am implementing an endpoint that will rebuild the read-side data stores from the events in the event store. However, I am running into what appear to be concurrency problems with how I am processing the events.
I decided to reuse my event handler code to process the events during the rebuild. In the system's normal state (not a read-side DB rebuild), my event handlers listen for the events they are subscribed to and update the projections accordingly. However, when processing these events through their event handlers inline, I am seeing inconsistent results in the final state of the read-side DB (if it even gets there, which it sometimes doesn't). I guess this means they are executing out of order.
Should I not be using event handlers in this way? I figured that since I am processing events, reusing the event handlers would be quite appropriate.
I am using MediatR for in-service messaging. All event handlers implement INotificationHandler.
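For context, my handlers look roughly like this during normal operation (the handler and repository names here are illustrative, not my exact code):

// using MediatR; using System.Threading; using System.Threading.Tasks;
// Illustrative shape of a read-side projection handler. BoxCreated
// implements INotification; IBoxReadRepository is a made-up name.
public class BoxCreatedHandler : INotificationHandler<BoxCreated>
{
    private readonly IBoxReadRepository _readRepo;

    public BoxCreatedHandler(IBoxReadRepository readRepo)
    {
        _readRepo = readRepo;
    }

    public async Task Handle(BoxCreated notification, CancellationToken cancellationToken)
    {
        // project the event into the read-side store
        await _readRepo.InsertBoxAsync(notification.BoxId, cancellationToken);
    }
}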
Here is a sample of the rebuild code:
IEnumerable<IEvent> events = await _eventRepo.GetAllAggregateEvents(aggId);
int eventNumber = 0;
foreach (var e in events)
{
    if (e.Version != eventNumber + 1)
        throw new EventsOutOfOrderException("Events were out of order while rebuilding DB");

    var ev = e as Event;
    // publish different historic events to the event handlers that work with the read DBs
    switch (e.Type)
    {
        case EventType.WarehouseCreated:
            WarehouseCreated w = new WarehouseCreated(ev);
            await _mediator.Publish(w);
            break;
        case EventType.BoxCreated:
            BoxCreated b = new BoxCreated(ev);
            await _mediator.Publish(b);
            break;
        case EventType.BoxLocationChanged:
            BoxLocationChanged l = new BoxLocationChanged(ev);
            await _mediator.Publish(l);
            break;
    }
    eventNumber++;
}
I have already tried replacing the await keyword with a call to Wait() instead, something like _mediator.Publish(bcc).Wait(). But this doesn't seem like a great idea, as there is async code behind this. Also, it didn't work.
I have also tried queuing the event versions and having each event type case wait until its version is at the top of the queue before publishing the event. Something like:
case EventType.BoxContentsChanged:
    BoxContentsChanged bcc = new BoxContentsChanged(ev);
    while (eventQueue.Peek() != bcc.Version)
        continue;
    await _mediator.Publish(bcc);
    eventQueue.Dequeue();
    break;
This also didn't work.
Anyway - if anyone has any ideas on how to deal with this problem, I would be very appreciative. I would prefer not to duplicate all the async event handler code in a synchronous way.
I guess the best way to do this is synchronously, to ensure consistency. This required me to duplicate some of my event handler logic in a replay service and create synchronous repository methods. No race conditions now, though, which is nice.
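For anyone who hits the same problem, here is a minimal sketch of the shape my replay took (the repository types and Apply* methods are placeholders, not my exact code):

// using System; using System.Collections.Generic; using System.Threading.Tasks;
// Minimal sketch: the replay service applies events to the read store
// directly and strictly in version order, so no MediatR handlers run
// and nothing executes concurrently. Names are placeholders.
public class ReadDbReplayService
{
    private readonly IEventRepository _eventRepo;
    private readonly IReadModelRepository _readRepo;

    public ReadDbReplayService(IEventRepository eventRepo, IReadModelRepository readRepo)
    {
        _eventRepo = eventRepo;
        _readRepo = readRepo;
    }

    public async Task RebuildAsync(Guid aggId)
    {
        IEnumerable<IEvent> events = await _eventRepo.GetAllAggregateEvents(aggId);
        int eventNumber = 0;
        foreach (var e in events)
        {
            if (e.Version != eventNumber + 1)
                throw new EventsOutOfOrderException("Events were out of order while rebuilding DB");

            var ev = e as Event;
            // each Apply* call is a synchronous repository method, so the
            // next event never sees a half-applied projection
            switch (e.Type)
            {
                case EventType.WarehouseCreated:
                    _readRepo.ApplyWarehouseCreated(new WarehouseCreated(ev));
                    break;
                case EventType.BoxCreated:
                    _readRepo.ApplyBoxCreated(new BoxCreated(ev));
                    break;
                case EventType.BoxLocationChanged:
                    _readRepo.ApplyBoxLocationChanged(new BoxLocationChanged(ev));
                    break;
            }
            eventNumber++;
        }
    }
}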
I am working with SerialDevice in C++/WinRT and need to listen for data coming over the port. I can successfully work with SerialDevice when data is streaming over the port, but if nothing is read, the DataReader.LoadAsync() call hangs even though I set timeouts through SerialDevice.ReadTimeout() and SerialDevice.WriteTimeout(). So to cancel the operation I am using IAsyncOperation's wait_for(), which times out after a provided interval, and then I call IAsyncOperation's Cancel() and Close(). The problem is that I can no longer make another call to DataReader.LoadAsync() without getting a take_ownership_from_abi exception.
How can I properly cancel a DataReader.LoadAsync() call so as to allow subsequent calls to LoadAsync() on the same object?
To work around this, I tried setting the timeouts on SerialDevice, but they didn't affect the DataReader.LoadAsync() calls. I also tried using create_task with a cancellation token, which likewise didn't allow an additional call to LoadAsync(). It took a lot of searching to find this article by Kenny Kerr:
https://kennykerr.ca/2019/06/10/cppwinrt-async-timeouts-made-easy/
where he describes the use of IAsyncOperation's wait_for function.
Here is the initialization of the SerialDevice and DataReader:
DeviceInformation deviceInfo = devices.GetAt(0);
m_serialDevice = SerialDevice::FromIdAsync(deviceInfo.Id()).get();
m_serialDevice.BaudRate(nBaudRate);
m_serialDevice.DataBits(8);
m_serialDevice.StopBits(SerialStopBitCount::One);
m_serialDevice.Parity(SerialParity::None);
m_serialDevice.ReadTimeout(m_ts);
m_serialDevice.WriteTimeout(m_ts);
m_dataWriter = DataWriter(m_serialDevice.OutputStream());
m_dataReader = DataReader(m_serialDevice.InputStream());
Here is the LoadAsync call:
AsyncStatus iainfo = AsyncStatus::Started;
auto async = m_dataReader.LoadAsync(STREAM_SIZE);
try
{
    // assign to the outer iainfo; declaring a new local here would shadow it
    iainfo = async.wait_for(m_ts);
}
catch (...) {}

if (iainfo != AsyncStatus::Completed)
{
    async.Cancel();
    async.Close();
    return 0;
}
else
{
    nBytesRead = async.get();
    async.Close();
}
So in the case that the AsyncStatus is not Completed, IAsyncOperation's Cancel() and Close() are called, which according to the documentation should cancel the async call, but on subsequent LoadAsync calls I now get a take_ownership_from_abi exception.
Does anyone have a clue what I'm doing wrong? Why do the SerialDevice timeouts not work in the first place? Is there a better way to cancel the async call that would allow further calls without re-initializing the DataReader? Generally, it feels like there is very little activity in the C++/WinRT space and the documentation is severely lacking (I didn't even find the wait_for method until about a day of trying other stuff and randomly searching for clues through different posts) - is there something I'm missing, or is this really the case?
Thanks!
Cause: when the wait time is over, the async object is in the AsyncStatus::Started state, meaning the asynchronous operation is still running.
Solution: after calling Cancel() and Close(), you could use Sleep(m_nTO) to give the asynchronous operation enough time to close. Refer to the following code.
if (iainfo != AsyncStatus::Completed)
{
    m_nReadCount++;
    //Sleep(m_nTO);
    async.Cancel();
    async.Close();
    Sleep(m_nTO);
    return 0;
}
We have an Axon application that stores new Orders. For each order state change (OrderStateChangedEvent) it schedules a couple of tasks. The tasks are triggered and processed by yet another saga (TaskSaga, out of scope of this question).
When I delete the projection database but leave the event store, then run the application again, the events are replayed (which is correct), but the tasks are duplicated.
I suppose this is because the OrderStateChangedEvent triggers a new set of ScheduleTaskCommands each time.
Since I'm new to Axon, I can't figure out how to avoid this duplication.
The event store is running on AxonServer.
The Spring Boot application autoconfigures the Axon components.
The projection database contains the projection tables and the Axon tables:
token_entry
saga_entry
association_value_entry
I suppose all the events are replayed because, by recreating the database, the Axon tables are gone (hence there is no record of the last applied event).
Am I missing something?
Should the token_entry/saga_entry/association_value_entry tables be part of the DB holding the projection tables on each application node?
I thought that the event store could be replayed onto a new application node's DB at any time without changing the event history, so that I can run as many nodes as I wish. Or that I could remove the projection DB at any time and run the application, causing the events to be projected into the fresh DB again. Or is this not true?
In general, my problem is that one event produces a command that leads to new (duplicated) events being produced. Should I avoid this chaining of events to prevent the duplication?
Thanks!
Axon configuration:
@Configuration
public class AxonConfig {

    @Bean
    public EventSourcingRepository<ApplicationAggregate> applicationEventSourcingRepository(EventStore eventStore) {
        return EventSourcingRepository.builder(ApplicationAggregate.class)
                .eventStore(eventStore)
                .build();
    }

    @Bean
    public SagaStore sagaStore(EntityManager entityManager) {
        return JpaSagaStore.builder().entityManagerProvider(new SimpleEntityManagerProvider(entityManager)).build();
    }
}
CreateOrderCommand is received by the Order aggregate (the fromCommand method just maps the command 1:1 to an event):
@CommandHandler
public OrderAggregate(CreateOrderCommand cmd) {
    apply(OrderCreatedEvent.fromCommand(cmd))
            .andThenApply(() -> OrderStateChangedEvent.builder()
                    .applicationId(cmd.getOrderId())
                    .newState(OrderState.NEW)
                    .build());
}
The Order aggregate sets the properties:
@EventSourcingHandler
protected void on(OrderCreatedEvent event) {
    id = event.getOrderId();
    // ... additional properties set
}

@EventSourcingHandler
protected void on(OrderStateChangedEvent event) {
    this.state = event.getNewState();
}
OrderStateChangedEvent is handled by a saga that schedules a couple of tasks for an order in the given state:
private Map<String, TaskStatus> tasks = new HashMap<>();
private OrderState orderState;

@StartSaga
@SagaEventHandler(associationProperty = "orderId")
public void on(OrderStateChangedEvent event) {
    orderState = event.getNewState();
    List<OrderStateAwareTaskDefinition> tasksByState = taskService.getTasksByState(orderState);
    if (tasksByState.isEmpty()) {
        finishSaga(event.getOrderId());
    }
    tasksByState.stream()
            .map(task -> ScheduleTaskCommand.builder()
                    .orderId(event.getOrderId())
                    .taskId(IdentifierFactory.getInstance().generateIdentifier())
                    .targetState(orderState)
                    .taskName(task.getTaskName())
                    .build())
            .peek(command -> tasks.put(command.getTaskId(), SCHEDULED))
            .forEach(command -> commandGateway.send(command));
}
I think I can help you in this situation.
So, this happens because the TrackingToken used by the TrackingEventProcessor that supplies all the events to your saga instances is initialized at the beginning of the event stream. Because of this, the TrackingEventProcessor will start from the beginning of time, thus dispatching all your commands a second time.
There are a couple of things you could do to resolve this.
Option 1: instead of wiping the entire database, only wipe the projection tables and leave the token table intact.
Option 2: configure the initialTrackingToken of a TrackingEventProcessor to start at the head of the event stream instead of the tail.
Option 1 would work out fine, but requires some delegation from the operations perspective. Option 2 leaves it in the hands of a developer, which is potentially a little safer than the other solution.
To adjust the token to start at the head, you can instantiate a TrackingEventProcessor with a TrackingEventProcessorConfiguration:
EventProcessingConfigurer configurer;

TrackingEventProcessorConfiguration trackingProcessorConfig =
        TrackingEventProcessorConfiguration.forSingleThreadedProcessing()
                .andInitialTrackingToken(StreamableMessageSource::createHeadToken);

configurer.registerTrackingEventProcessor("{class-name-of-saga}Processor",
        Configuration::eventStore,
        c -> trackingProcessorConfig);
You'd thus create the desired configuration for your saga and call the andInitialTrackingToken() function, ensuring that a head token is created if no token is present yet.
I hope this helps you out Tomáš!
Steven's solution works like a charm, but only for sagas. For those who want to achieve the same effect for a classic @EventHandler (to skip execution on replay), there is a way. First you have to find out how your tracking event processor is named - I found it in the Axon Dashboard (port 8024 on a running AxonServer) - usually it is the location of the component with the @EventHandler annotation (the package name, to be precise). Then add the configuration as Steven indicated in his answer.
@Autowired
public void customConfig(EventProcessingConfigurer configurer) {
    // This prevents replaying some events to @EventHandler methods
    var trackingProcessorConfig = TrackingEventProcessorConfiguration
            .forSingleThreadedProcessing()
            .andInitialTrackingToken(StreamableMessageSource::createHeadToken);
    configurer.registerTrackingEventProcessor("com.domain.notreplayable",
            org.axonframework.config.Configuration::eventStore,
            c -> trackingProcessorConfig);
}
There is a scenario in which a user terminates the application while it is still processing data via a BackgroundWorker.
I would like to send a cancel or terminate command to this worker. I tried calling the CancelAsync() method, but it obviously didn't work the way I expected, and the worker continued processing.
What is a good way to signal the BackgroundWorker, and especially its RunWorkerCompleted method, to stop processing?
Do I have to use state variables?
This is the code executed when you call CancelAsync() on a BackgroundWorker:
public void CancelAsync()
{
    if (!this.WorkerSupportsCancellation)
    {
        throw new InvalidOperationException(SR.GetString(
            "BackgroundWorker_WorkerDoesntSupportCancellation"));
    }

    this.cancellationPending = true;
}
As you can see, it sets the internal cancellationPending field to true after checking the value of WorkerSupportsCancellation.
So you need to set WorkerSupportsCancellation = true; when you exit your app, call backGroundWorkerInstance.CancelAsync(), and inside DoWork test CancellationPending (RunWorkerCompleted can check e.Cancelled instead). If it's true, stop your process.
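Putting that together, a minimal sketch (the workItems collection and Process() method are made up for illustration):

// using System.ComponentModel;
var worker = new BackgroundWorker { WorkerSupportsCancellation = true };

worker.DoWork += (sender, e) =>
{
    var w = (BackgroundWorker)sender;
    foreach (var item in workItems)
    {
        if (w.CancellationPending)  // becomes true after CancelAsync()
        {
            e.Cancel = true;        // surfaces as e.Cancelled below
            return;
        }
        Process(item);
    }
};

worker.RunWorkerCompleted += (sender, e) =>
{
    if (e.Cancelled)
    {
        return; // skip any completion work
    }
    // normal completion path
};

worker.RunWorkerAsync();
// later, e.g. when the user exits the application:
worker.CancelAsync();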
I'm just learning NHibernate and have come across what is probably a simple issue to resolve.
Right, so I've figured out that you can't/shouldn't nest NHibernate transactions within each other; in my case I figured this out when scope moved to another routine and I started a new transaction.
So should I be doing the following?
using (ITransaction transaction = session.BeginTransaction())
{
    NHibernateMembership mQuery =
        session.QueryOver<NHibernateMembership>()
            .Where(x => x.Username == username)
            .And(x => x.ApplicationName == ApplicationName)
            .SingleOrDefault();

    if (mQuery != null)
    {
        mQuery.PasswordQuestion = newPwdQuestion;
        mQuery.PasswordAnswer = EncodePassword(newPwdAnswer);
        session.Update(mQuery);
        transaction.Commit();
        passwordQuestionUpdated = true;
    }
}

// Assume this is in another routine elsewhere but being
// called right after the first in the same request
using (ITransaction transaction = session.BeginTransaction())
{
    NHibernateMembership mQuery =
        session.QueryOver<NHibernateMembership>()
            .Where(x => x.Username == username)
            .And(x => x.ApplicationName == ApplicationName)
            .SingleOrDefault();

    if (mQuery != null)
    {
        mQuery.PasswordQuestion = newPwdQuestion;
        mQuery.PasswordAnswer = EncodePassword(newPwdAnswer);
        session.Update(mQuery);
        transaction.Commit();
        passwordQuestionUpdated = true;
    }
}
Note: I know they are simply a copy; I'm just demonstrating my question.
First Question
Is this the way it is MEANT to be done? A transaction per operation?
Second Question
Do I need to call transaction.Commit() each time, or only in the last set?
Third Question
Is there a better way, automated or manual, to do this?
Fourth Question
Can I use session.Transaction.IsActive to determine whether the current session is already part of a transaction? In that case I could make the transaction wrap at the highest level, say the Web Form code, let routines be called within it, and then commit all the work at the end. Is this a flawed method?
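To illustrate the fourth question, I mean something like this sketch (just the idea, not code I actually have):

// Sketch: a routine joins an ambient transaction if one is active and
// only commits/rolls back when it opened the transaction itself.
public void UpdatePasswordQuestion(ISession session)
{
    bool ownsTransaction = !session.Transaction.IsActive;
    ITransaction transaction = ownsTransaction ? session.BeginTransaction() : session.Transaction;
    try
    {
        // ... query and update as above ...
        if (ownsTransaction)
            transaction.Commit();
    }
    catch
    {
        if (ownsTransaction)
            transaction.Rollback();
        throw;
    }
}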
I really want to hammer this down so I start as I mean to go on; I don't want to find thousands of lines of code that I then need to change.
Thanks in advance.
EDIT:
Right, so I wrote some code to explain my issue exactly.
private void CallingRoutine()
{
    using (ISession session = Helper.GetCurrentSession)
    {
        using (ITransaction transaction = session.BeginTransaction())
        {
            // RUN NHibernate QUERY to get OBJECT-1
            // DO WORK on OBJECT-1
            // Need to CALL an EXTERNAL ROUTINE to finish work
            ExternalRoutine();
            // DO WORK on OBJECT-1 again
            // *** At this point the ADO exception triggers
        }
    }
}

private bool ExternalRoutine()
{
    using (ISession session = Helper.GetCurrentSession)
    {
        using (ITransaction transaction = session.BeginTransaction())
        {
            // RUN NHibernate QUERY to get OBJECT-2
            // DO WORK on OBJECT-2
            // Determine result
            if (Data)
            {
                return true;
            }
            return false;
        }
    }
}
Hopefully this demonstrates the issue. This is how I understood transactions should be written, but notice how the ADO exception occurs. I'm obviously doing something wrong. How am I meant to write these routines?
Take, for example, a helper object for some provider where each exposed routine runs an NHibernate query - how would I write those routines, with regard to transactions, assuming I knew nothing about the calling function and its data? My job is to work with NHibernate effectively and efficiently and return a result.
This is what I assumed by writing the transaction the way I did in ExternalRoutine() - to assume that this is the only use of NHibernate and to explicitly create the transaction.
If possible, I would suggest using System.Transactions.TransactionScope:
using (var trans = new TransactionScope())
using (var session = ... create your session ...)
{
    ... do your stuff ...
    trans.Complete();
}
The trans.Complete() call will cause the session to flush and commit the transaction; in addition, you can nest transaction scopes. The only "downside" is that it will escalate to DTC if you have multiple connections (or enlisted resources such as MSMQ), but this is not necessarily a downside unless you're using something like MySQL, which doesn't play nicely with DTC.
For simple cases I would probably open a transaction scope in BeginRequest and commit it in EndRequest if there were no errors (or use a filter if you're using ASP.NET MVC), but that really depends a lot on what you're doing - as long as your operations are short (which they should be in a web app), this should be fine. The overhead of creating a session and transaction scope is not so big that it should cause problems for accessing static resources (or resources that don't need the session), unless you're looking at a really high-volume / high-performance website.
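As a rough sketch of that request-scoped variant for classic ASP.NET (the Items key and error check are illustrative, adjust to your own session management):

// using System; using System.Transactions; using System.Web;
public class Global : System.Web.HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // open a scope per request; kept in HttpContext for EndRequest
        Context.Items["txScope"] = new TransactionScope();
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        var scope = Context.Items["txScope"] as TransactionScope;
        if (scope != null)
        {
            if (Context.Error == null)
                scope.Complete(); // commit only when the request succeeded
            scope.Dispose();
        }
    }
}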
I'm attempting to take a process that previously ran multiple remote object calls in parallel and make them run serially.
I know it's a hackish approach (a band-aid!), but so far my approach has been to take this for loop that started each parallel request:
(original code)
for (i = 0; i < _processQ.length; i++) {
    fObj = _processQ.getItemAt(i);
    f = fObj.Name;
    fObj.Progress = "Processing...";
    ro.process(gameid, f);
}
and change it to this:
(new code)
for (i = 0; i < _processQ.length; i++) {
    fObj = _processQ.getItemAt(i);
    f = fObj.Name;
    fObj.Progress = "Processing...";
    ro.process(gameid, f);
    // pause this loop until a result is returned from the request that was just made
    _pause_queue = true;
    while (_pause_queue) {
        // do nothing
    }
}
Here _pause_queue is a private global boolean for the class. In the fault handler and response handler functions it is set back to false, so that the infinite while loop is released and the for loop continues.
This is functional, but the UI only updates after the while loop ends, which has the negative side effect that the application looks hung while waiting for a response.
Is there something I can put inside the while loop to allow Flash to update the display? In VB (ugh!) I would use DoEvents, but I don't know of an equivalent for ActionScript.
Alternatively, is there a way to add a simple queueing system short of completely rewriting the application? This is legacy code and I'm on a tight deadline, so I'd like to modify it as little as possible. (Otherwise I would rewrite it from the ground up and use Swiz to get event chaining.)
Alternately, is there a way to add a simple queueing system short of completely rewriting the application?
Yes, you can use an array. It looks like you may already have one, with the _processQ variable. I might try something like this:
// A process queue item method
protected function processItemInQueue():void {
    if (_processQ.length > 0) {
        fObj = _processQ.pop();
        f = fObj.Name;
        fObj.Progress = "Processing...";
        ro.process(gameid, f);
    }
}
Then in your remote Object result handler:
protected function roResultHandler(event:ResultEvent):void {
    // process results
    // process next item in queue
    processItemInQueue();
}
ActionScript is single-threaded, so a while (true) loop would indeed hang forever. What you might want to consider is using the Timer class to fire events regularly, or adding an event listener for ENTER_FRAME (Flash Player thinks in the fps domain), which is fired at the start of each frame.
Looking at your code, unless ro.process() is a remote I/O operation (fetching a web page or making a web service call), it will hang the UI while it does its work. You might want to split up the work and fire it every frame rather than all at once. You should also learn to use EventDispatcher logic to listen for completion and faults instead of setting a global variable and looping.
I agree with clement - why not make it all event-driven rather than trying to do everything in a single loop?
function outerFunction():void {
    var queue:SomeAsyncProcessor = new SomeAsyncProcessor();
    for (var i:int = 0; i < _processQ.length; i++) {
        queue.addProcess(_processQ.getItemAt(i));
    }
    queue.addEventListener('complete', handleProcess);
    queue.start();
    game.pause();
}

function handleProcess(event:Event):void {
    var processor:SomeAsyncProcessor = event.currentTarget as SomeAsyncProcessor;
    processor.removeEventListener('complete', handleProcess);
    game.unpause();
}
For a priority queue, you could use this:
http://www.polygonal.de/legacy/as3ds/doc/de/polygonal/ds/PriorityQueue.html
It shouldn't take too much time to write one yourself either - mainly just wrapping what you currently have into one big event dispatcher (as long as your current processors already dispatch events).