I need to upgrade an event to a new version. This entails removing an old property that is growing (due to a design decision) and is no longer needed.
Having read the documentation available at https://docs.axoniq.io/reference-guide/axon-framework/events/event-versioning, I implemented the required parts:
@Bean
public EventUpcasterChain eventUpcasterChain() {
    return new EventUpcasterChain(
            new CurrentLoanBalanceAcceptedEventUpcaster(),
            new InterestCalculatedEventUpcaster()
    );
}
However, I cannot get the upcasters to fire the doUpcast method. My understanding is that on startup the application will find all old events and convert them to the new version of the events. Is this correct? Or does it only upcast the old events when they are replayed? I am hoping there is a way to do this without replaying all the events.
Versions:
Spring Boot: 2.4.11
Axon Framework: 4.5.4
Update:
After trying to figure out how to remove nodes from the event XML using the upcasters, I failed miserably. I must point out that this is not an Axon issue but the result of a poor design decision and a lack of understanding of what Axon does with the events. The primary lesson is to keep events as simple as possible.
I managed to strip the unnecessary XML from the domain event store using the following query:
UPDATE domainevententry
SET payload = lo_from_bytea(0, decode(regexp_replace(
        subquery.output,
        '\<nodeToReplace\>(.*)\<\/nodeToReplace\>',
        ''
    ), 'escape'))
FROM (
    SELECT eventidentifier, payloadtype, encode(lo_get(payload::oid), 'escape') AS output
    FROM domainevententry
    WHERE eventidentifier IN (
        '<id>'
    )
    AND payloadtype = '<payloadType>'
) AS subquery
WHERE domainevententry.eventidentifier = subquery.eventidentifier;
Every upcaster in the upcaster chain is called only when it encounters an 'old' event to convert to the new one. Newly fired events are already the new version, so the upcaster will not be used for them.
Upcasters are used when:
An aggregate is loaded (without snapshot) and an old event is encountered
A projection is replayed and an old event is encountered
In all other cases, it will be the new event, so the upcaster won't have to be fired. The upcaster is there so aggregates can keep being loaded, and projections can be replayed from the beginning of the event store.
If this is not the case, we need to take a look at the revision parameter on the old event definition and the new event definition. For example, if the old event had no @Revision annotation, the SimpleSerializedType needs a revision of null or it won't match.
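For illustration, here is a minimal sketch of what such an upcaster could look like, using the CurrentLoanBalanceAcceptedEventUpcaster named above. It assumes the default XStreamSerializer (so the intermediate representation can be requested as a dom4j Document), that the old event carried no @Revision, and that the new event class is annotated with @Revision("2"); the fully qualified class name and the node name are placeholders:

import org.axonframework.serialization.SimpleSerializedType;
import org.axonframework.serialization.upcasting.event.IntermediateEventRepresentation;
import org.axonframework.serialization.upcasting.event.SingleEventUpcaster;

public class CurrentLoanBalanceAcceptedEventUpcaster extends SingleEventUpcaster {

    // The old event had no @Revision annotation, hence the null revision
    private static final SimpleSerializedType OLD_TYPE =
            new SimpleSerializedType("com.example.CurrentLoanBalanceAcceptedEvent", null);
    // Assumed: the new event class carries @Revision("2")
    private static final SimpleSerializedType NEW_TYPE =
            new SimpleSerializedType("com.example.CurrentLoanBalanceAcceptedEvent", "2");

    @Override
    protected boolean canUpcast(IntermediateEventRepresentation intermediateRepresentation) {
        return intermediateRepresentation.getType().equals(OLD_TYPE);
    }

    @Override
    protected IntermediateEventRepresentation doUpcast(IntermediateEventRepresentation intermediateRepresentation) {
        return intermediateRepresentation.upcastPayload(NEW_TYPE, org.dom4j.Document.class, document -> {
            // Drop the obsolete node from the serialized XML, if present
            org.dom4j.Element obsolete = document.getRootElement().element("nodeToReplace");
            if (obsolete != null) {
                obsolete.detach();
            }
            return document;
        });
    }
}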
Please include the code of the upcasters if we need to dig further, that would help greatly!
Related
I have a set of events that have been refactored to another package. This works as-is until I execute the event replay. Digging deeper, I noticed a payloadtype column in the domainevententry table and figured changing this would be sufficient, but alas it seems the XML root element of the event needs to be changed as well. I am hoping there is a simple way to do this.
I cannot find any examples on upcasting to different packages or using XStream aliasing so any help would be greatly appreciated.
Thanks
As you noticed, the default payload type stored in events is the fully qualified class name. This ensures that out-of-the-box serialization and deserialization work as intended. However, moving classes around means the payload type can no longer be found, requiring some adjustment to be made.
You could have used the EventTypeUpcaster, as referred to in the Reference Guide. The EventTypeUpcaster is dedicated to adjusting the payload type, and thus can also be used to deal with changing package names.
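A sketch of how that could look, with placeholder class names; this assumes the four-argument constructor (expected payload type/revision, upcasted payload type/revision), so verify the exact signature against the JavaDoc of your Axon version:

import org.axonframework.serialization.upcasting.event.EventTypeUpcaster;
import org.axonframework.serialization.upcasting.event.EventUpcasterChain;

@Bean
public EventUpcasterChain eventUpcasterChain() {
    return new EventUpcasterChain(
            // Placeholder names; no @Revision on either class, hence the null revisions
            new EventTypeUpcaster("old.package.Classname", null,
                                  "new.package.Classname", null)
    );
}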
When using (the default) XStreamSerializer, aliasing the tags would indeed also work. How to set aliases can be seen here, for example. As noted in that sample, the alias is added to the XStream instance. The XStreamSerializer uses an XStream instance to support de-/serialization from/to XML. To adjust the XStream instance, you can simply use the builder paradigm on the XStreamSerializer. The JavaDoc of the builder should be specific enough to help you figure out how to use it.
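A minimal sketch of the aliasing route, again with placeholder names; the alias maps the old XML root tag (the old fully qualified class name) to the relocated class:

import com.thoughtworks.xstream.XStream;
import org.axonframework.serialization.Serializer;
import org.axonframework.serialization.xml.XStreamSerializer;

XStream xStream = new XStream();
// "old.package.Classname" and NewClassname are placeholders for the old tag and the moved class
xStream.alias("old.package.Classname", NewClassname.class);
Serializer serializer = XStreamSerializer.builder()
        .xStream(xStream)
        .build();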
Went the long way round with this, but it seems to work. As always, back up your database before executing large-volume changes. I also restarted the service using the database on completion. Needless to say, I will make sure the events are in logical packages before deploying next time :)
Database Engine: Postgres 10
Table: domainevententry
UPDATE domainevententry
SET
    payloadtype = '<new.package.Classname>',
    payload = lo_from_bytea(0, decode(REPLACE(
            subquery.output,
            '<old.package.Classname>',
            '<new.package.Classname>'
        ), 'escape'))
FROM (
    SELECT eventidentifier, payloadtype, encode(lo_get(payload::oid), 'escape') AS output
    FROM domainevententry
    WHERE eventidentifier IN (
        '<event guid 1>',
        '<event guid 2>'
    )
    AND payloadtype = '<old.package.Classname>'
) AS subquery
WHERE domainevententry.eventidentifier = subquery.eventidentifier;
Once that was completed, I needed to update the OWNER of the large object:
ALTER LARGE OBJECT <LargeObjectId> OWNER TO database_role;
Probably not the most elegant solution, but given the time constraints I had, it did the job. There are probably encoding issues with this solution for the large object, but it all worked out for me in the end. Feel free to share any optimizations that would make the above more suitable.
Firing off the Axon Framework replays rebuilt the projections and everything lined up.
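For anyone looking to do the same, a replay can be triggered programmatically by resetting the processor's tracking tokens. A minimal sketch, assuming a tracking event processor named "projections" (the name is a placeholder) and an Axon Configuration instance at hand:

import org.axonframework.config.Configuration;
import org.axonframework.eventhandling.TrackingEventProcessor;

configuration.eventProcessingConfiguration()
        .eventProcessor("projections", TrackingEventProcessor.class)
        .ifPresent(processor -> {
            processor.shutDown();     // the processor must be stopped before a reset
            processor.resetTokens();  // rewind to the beginning of the event store
            processor.start();        // replays events, running them through the upcasters
        });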
With Doctrine and Symfony, in my PHPUnit test method:
// Change username for user #1 (Sheriff Woody to Chuck Norris)
$form = $crawler->selectButton('Update')->form([
    'user[username]' => 'Chuck Norris',
]);
$client->submit($form);
// Find user #1
$user = $em->getRepository(User::class)->find(1);
dump($user); // Username = "Sheriff Woody"
$user = $em->createQueryBuilder()
    ->from(User::class, 'user')
    ->andWhere('user.id = :userId')
    ->setParameter('userId', 1)
    ->select('user')
    ->getQuery()
    ->getOneOrNullResult()
;
dump($user); // Username = "Chuck Norris"
Why do my two methods of fetching user #1 return different results?
diagnosis / explanation
I assume* you already created the User object you're editing via the crawler earlier in that function and checked that it is there. This leads to it being a managed entity.
It is in the nature of data not to sync itself magically with the database; some automatism must be in place, or some method must be executed, to sync it.
The find() method will always try to use the cache (unless explicitly turned off; also see the side note). The query builder won't if you explicitly call getResult() (or one of its varieties), since you explicitly want a query to be executed. Executing a different query might lead to the cache not being hit, producing the current result. (It should update the first user object, though...) [updated, due to comment from Arno Hilke]
((( Side note: keeping objects in sync is hard. It's mainly about having consistency in the database, but all of ACID is wanted. Any process talking to the database should assume that it is only working with the state at the moment of its first query and that it is the only user of the database, unless additional constraints must be met and inconsistent reads can occur, in which case isolation levels should be raised (see also: transactions, or more precisely: isolation). So automatic syncing is usually not wanted, and Doctrine uses certain assumptions for performance gains (mainly: isolation/locking is optimistic). However, in your particular case, none of those things are of actual concern... since you actually want a non-repeatable read. )))
(* otherwise, the behavior you're seeing would be really unexpected)
solution
One easy solution would be to actively and explicitly sync the data from the database, either by calling $em->refresh($user), or - before fetching the user again - by calling $em->clear(), which will detach all entities (clearing the cache, which might have a noticeable performance impact) and allow you to call find() again with the proper results being returned.
Please note that detaching entities means that any object previously returned from the entity manager should be discarded and fetched again (not via refresh).
alternate solution 1 - everything is requests
instead of checking the database, you could make a different request to a page that displays the user's name and check that it has changed.
alternate solution 2 - using only one entity manager
using only one entity manager (that is: sharing the entity manager / database in the unit test with the server handling the request) may be a reasonable solution, but it comes with its own set of problems; mainly, omitted commits and flushes may escape detection.
alternate solution 3 - using multiple entity managers
you're using one entity manager to set up the test, and since the server is using a new entity manager to perform its work, you should theoretically - to do this properly - create yet another entity manager to check the server's behavior.
comment: alternate solutions 1, 2, and 3 would work with the highest isolation level; the initial solution probably wouldn't.
Why does QObject::installEventFilter use d->eventFilters.prepend(obj) rather than append(obj)? I want to know why it was designed that way; I am just curious about it.
void QObject::installEventFilter(QObject *obj)
{
    Q_D(QObject);
    if (!obj)
        return;
    if (d->threadData != obj->d_func()->threadData) {
        qWarning("QObject::installEventFilter(): Cannot filter events for objects in a different thread.");
        return;
    }
    // clean up unused items in the list
    d->eventFilters.removeAll((QObject*)0);
    d->eventFilters.removeAll(obj);
    d->eventFilters.prepend(obj);
}
It's done that way because the most recently installed event filter is to be processed first, i.e. it needs to be at the beginning of the filter list. The filters are invoked by traversing the list in sequential order from begin() to end().
The most recently installed filter is to be processed first because the only two simple choices are to either process it first or last. And the second choice is not useful: when you filter events, you want to decide what happens before anyone else does. But then some new user's filter will go before yours, so how can that be? As follows: event filters are used to amend functionality - functionality that already exists. If you added a filter somewhere inside the existing functionality, you'd effectively be interfacing to a partially defined system, with unknown behavior. After all, even Qt's implementation uses event filters. They provide the documented behavior. By inserting your event filter last, you couldn't be sure at all what events it would see - it'd all depend on implementation details of every layer of functionality above your filter.
A system with some event filter installed is like a layer of skin on an onion - the user of that system only sees the skin, not what's inside, not the implementation. But they can add their own skin on top if they so wish, and implement new functionality that way. They can't dig into the onion, because they don't know what's in it. Of course that's a generalization: they don't know because it doesn't form an API, a contract between them and the implementation of the system. They are free to read the source code and/or reverse-engineer the system, and then insert the event filter anywhere in the list they wish. After all, once you get access to QObjectPrivate, you can modify the event filter list as you wish. But then you're responsible for the behavior of not only what you added on top of the public API, but of many of the underlying layers too - and your responsibility broadens. Updating the toolkit becomes next to impossible, because you'd have to audit the code and/or verify test coverage to make sure that something somewhere in the internals didn't get broken.
A simple thing, but it doesn't work. At the bottom part of the script we have:
$oMan = $this->getContainer()->get('doctrine')->getManager();
// add entry of calling
$oLastCall = new CronLastCall();
$oLastCall->setType('key');
$oMan->persist($oLastCall);
$oMan->flush();
We insert it into the DB as soon as we create it, then do some stuff that can take a few minutes. Then we call this:
$oLastCall->setDateEnd(new \DateTime('now'));
$oMan->flush();
After this we exit from the method/action. So according to the logic (and the Doctrine 2 manual I read), an entity that was created is already 'managed' (we persisted it), and we can simply update it. (I call flush at the end to update this entity, but it is not updating.)
Where is the trouble?
As far as I can tell, there's nothing wrong with the code you've shown. But if the $oLastCall object becomes detached between the first and second code blocks, you have to re-attach (merge) it to the manager so that it detects the changes for the second flush.
Merging can be done in this way:
$oLastCallMerged = $oMan->merge($oLastCall);
$oLastCallMerged->setDateEnd(new \DateTime('now'));
$oMan->flush();
You can also check the state (MANAGED/NEW/DETACHED/REMOVED) of an object using this code:
$oMan->getUnitOfWork()->getEntityState($oLastCall);
If that doesn't help (i.e. detachment of the object isn't your problem), you need to give more info about the context this code runs in and any errors you get. Is this code part of a Console Command or a regular web-app Controller? Do you get any output or errors when running it in 'dev' environment? (Check .../app/logs/dev.log.) Does the $oLastCall object stay in memory while waiting for the stuff that takes some minutes, or do you reload it from somewhere?
Btw, objects don't magically get detached by themselves. They'll only be detached if you load them from a different source than the entity manager (for example, storing them in the session between requests) OR if you explicitly detach them by calling $oMan->detach($entity) or $oMan->clear().
Edit
You can also check whether Doctrine detects the change by echoing out the changeset using $oMan->getUnitOfWork()->getEntityChangeSet($oLastCall) before and after the change, e.g.:
error_log(json_encode($oMan->getUnitOfWork()->getEntityChangeSet($oLastCall)));
$oLastCall->setDateEnd(new \DateTime('now'));
error_log(json_encode($oMan->getUnitOfWork()->getEntityChangeSet($oLastCall)));
After flushing the manager, all objects will be removed internally from the manager. So if you want to update the object later, you need to call persist again before flushing.
Per the ActiveReports documentation, ReportStart is called first, then DataInitialize. Any reference to the data source inside ReportStart will invoke the DataInitialize event. I am modifying an existing report, and to my surprise the order is reversed. I have existing logic to get the count of nodes (my data source is XML) in the DataInitialize event; because the event firing sequence is reversed, I always get 0 as the count.
I am using ActiveReports for .NET 2.0 (I think the version is 4.*). As I don't have support for the designer, I have the designer XML and code-behind.
Please suggest what could force the event sequence into the correct order.
There is probably not enough information for us to help you here. We'd need to see the report code at least, but better if you can do some debugging to find out more detail about what specific code you have that is causing the trouble.
I'm actually not sure about the event order in this specific case, but to troubleshoot you could try a new blank report to verify the event order there, then move over one piece of code at a time to find out which line of code causes the event order to change.