Axon Framework: recreate a past situation

I'm studying Axon Framework, and I can't figure out how to recreate a past situation by limiting the events loaded by the EventStore.
I'm using this configuration:
EventSourcingRepository repository = EventSourcingRepository.builder(ShipmentAggregate.class)
        .eventStore(eventStore)
        .build();
How can I limit the loading of events up to a given date, or up to a given sequence number? Thanks.

You can build an event handler that tracks events from the given source and filters out the events you need using the optional metadata parameters:
@EventHandler
public void on(AnEvent evt, @Timestamp Instant eventTimestamp) {
    // if eventTimestamp is before last Thursday
    // do ....
}
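That said, the EventSourcingRepository itself always replays the aggregate's full stream (plus a snapshot, if one exists); there is no builder setting to stop at a point in time. A common workaround is to bypass the repository and replay the raw stream yourself. Below is a minimal sketch, assuming Axon 4 and a hypothetical ShipmentView projection class with an apply(DomainEventMessage<?>) method that builds up state; it is an illustration, not a framework-provided API for this:

import java.time.Instant;

import org.axonframework.eventhandling.DomainEventMessage;
import org.axonframework.eventsourcing.eventstore.EventStore;

public class ShipmentHistoryService {

    private final EventStore eventStore;

    public ShipmentHistoryService(EventStore eventStore) {
        this.eventStore = eventStore;
    }

    // Rebuilds a read-only view of the aggregate as it was at 'pointInTime'
    // by replaying only the events stored up to that instant.
    public ShipmentView stateAt(String shipmentId, Instant pointInTime) {
        ShipmentView view = new ShipmentView(); // hypothetical projection class
        eventStore.readEvents(shipmentId).asStream()
                .filter(event -> !event.getTimestamp().isAfter(pointInTime))
                .forEach(view::apply);
        return view;
    }
}

To cut the stream off at a sequence number instead, filter on event.getSequenceNumber(); readEvents(shipmentId, firstSequenceNumber) is also available if you want to start partway through.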

Related

How to make logs in Application Insights without using the Task.Delay method?

using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ConsoleApp8
{
    class Program
    {
        static IServiceCollection services = new ServiceCollection()
            .AddLogging(loggingBuilder => loggingBuilder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>("", LogLevel.Trace))
            .AddApplicationInsightsTelemetryWorkerService("Application_Key");
        static IServiceProvider serviceProvider = services.BuildServiceProvider();
        static ILogger<Program> logger = serviceProvider.GetRequiredService<ILogger<Program>>();
        static TelemetryClient telemetryClient = serviceProvider.GetRequiredService<TelemetryClient>();

        static void Main(string[] args)
        {
            using (telemetryClient.StartOperation<RequestTelemetry>("AppointmentPatientCommunication"))
            {
                logger.LogInformation("1st");
                hero();
                logger.LogError("2nd");
                telemetryClient.TrackTrace("Here is the error");
                telemetryClient.Flush();
            }
        }

        static void hero()
        {
            using (telemetryClient.StartOperation<RequestTelemetry>("AppointmentPatientCommunication"))
            {
                logger.LogInformation("2nd");
                telemetryClient.Flush();
            }
        }
    }
}
I am uploading this console application as my WebJob to write logs to Application Insights. I am trying to avoid using Task.Delay() so that I can get real-time logging with accurate timing. The WebJob is triggered manually, but I see no entries in Application Insights. Could anyone help me out with this one?
Telemetry is not sent instantly. Telemetry items are batched and sent by the ApplicationInsights SDK. In Console apps, which exits right after calling Track() methods, telemetry may not be sent unless Flush() and Sleep/Delay is done before the app exits as shown in full example later in this article. Sleep is not required if you are using InMemoryChannel. There is an active issue regarding the need for Sleep which is tracked here: link.
So there are two types of channels: InMemoryChannel and ServerTelemetryChannel.
For more details about both channels, click on this link.
To deal with the issue in my program, I used InMemoryChannel. The code below shows the portion of my program where I added it.
static IServiceCollection services = new ServiceCollection()
    .AddLogging(loggingBuilder => loggingBuilder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>("", LogLevel.Trace))
    .AddSingleton(typeof(ITelemetryChannel), new InMemoryChannel())
    .AddApplicationInsightsTelemetryWorkerService("Application_Key");
InMemoryChannel lives in the Microsoft.ApplicationInsights.Channel namespace of the NuGet package I am using.
Thanks to @Peter Bons for the comment, which helped fix the problem.
The Flush() method on the TelemetryClient flushes the in-memory buffer, typically when the application is shutting down. Normally the SDK delivers data every 30 seconds or whenever the buffer is full (500 items), so there is no need to invoke Flush() manually in web applications unless the process is about to shut down.
The TelemetryClient's Flush() method sends all of the data it currently holds in the buffer to the Application Insights service.
Application Insights transfers your data in batches in the background to make better use of the network.
In most cases you won't need to call Flush(). However, if you know the process will exit right after that point, you should call Flush() to ensure that all of the data gets transmitted.
Here, I have added a Thread.Sleep() call after the Flush() statement.
static void Main(string[] args)
{
    using (telemetryClient.StartOperation<RequestTelemetry>("AppointmentPatientCommunication"))
    {
        logger.LogInformation("1st");
        hero();
        logger.LogError("2nd");
        telemetryClient.TrackTrace("Here is the error");
        // Flush the in-memory buffer before the process shuts down.
        telemetryClient.Flush();
        // Flushing is not instantaneous, so give the SDK a moment to send before exiting.
        Thread.Sleep(5000);
    }
}

static void hero()
{
    using (telemetryClient.StartOperation<RequestTelemetry>("AppointmentPatientCommunication"))
    {
        logger.LogInformation("2nd");
        // Flush the in-memory buffer before the process shuts down.
        telemetryClient.Flush();
        // Flushing is not instantaneous, so give the SDK a moment to send before exiting.
        Thread.Sleep(5000);
    }
}
Results in Application Insights: (screenshot omitted)

JpaSagaStore in conjunction with Jackson unable to properly store state

In a SpringBoot application, I have the following configuration:
axon:
  axonserver:
    servers: "${AXON_SERVER:localhost}"
  serializer:
    general: jackson
    messages: jackson
    events: jackson
logging.level:
  org.axonframework.modelling.saga: debug
Downsizing the scenario to the bare minimum, here is the relevant portion of the Saga class:
@Slf4j
@Saga
@ProcessingGroup("AuctionEventManager")
public class AuctionEventManagerSaga {

    @Autowired
    private transient EventScheduler eventScheduler;

    private ScheduleToken scheduleToken;
    private Instant auctionTimerStart;

    @StartSaga
    @SagaEventHandler(associationProperty = "auctionEventId")
    protected void on(final AuctionEventScheduled event) {
        this.auctionTimerStart = event.getTimerStart();
        // Cancel any pre-existing previous job, since the scheduling thread might be lost upon a crash/restart of the JVM.
        if (this.scheduleToken != null) {
            this.eventScheduler.cancelSchedule(this.scheduleToken);
        }
        this.scheduleToken = this.eventScheduler.schedule(
                this.auctionTimerStart,
                AuctionEventStarted.builder()
                        .auctionEventId(event.getAuctionEventId())
                        .build()
        );
    }

    @EndSaga
    @SagaEventHandler(associationProperty = "auctionEventId")
    protected void on(final AuctionEventStarted event) {
        log.info(
                "[AuctionEventManagerSaga] Current state: {scheduleToken={}, auctionTimerStart={}}",
                this.scheduleToken,
                this.auctionTimerStart
        );
    }
}
In the final compiled class, we end up having 4 properties: log (from @Slf4j), eventScheduler (transient, @Autowired), scheduleToken, and auctionTimerStart.
For reference information, here is a sample of the general approach I've been using for both Command and Event classes:
@Value
@Builder
@JsonDeserialize(builder = AuctionEventStarted.AuctionEventStartedBuilder.class)
public class AuctionEventStarted {
    AuctionEventId auctionEventId;

    @JsonPOJOBuilder(withPrefix = "")
    public static final class AuctionEventStartedBuilder {}
}
When executing the code, you get the following output:
2020-05-12 15:40:01.180 DEBUG 1 --- [mandProcessor-4] o.a.m.saga.repository.jpa.JpaSagaStore : Updating saga id c8aff7f7-d47f-4616-8a96-a40044cb7e3b as {}
As soon as the general serializer is changed to xstream, the content is serialized properly, but I face another issue during deserialization, since I have private static final Builder classes generated by Lombok.
So, is there a way for Axon to handle these scenarios:
1. Can Axon safely configure Jackson to ignore @Autowired, transient, and static properties on @Saga classes? I've attempted to manually define @JsonIgnore on the non-state properties and it still didn't work.
2. Can Axon safely configure XStream to ignore inner classes (mostly Builder classes implemented as private static final)?
Thanks in advance,
EDIT: I'm pursuing a resolution using my preferred serializer: JSON. I attempted to modify the saga class to extend JsonSerializer<AuctionEventManagerSaga>. For that I implemented these methods:
@Override
public Class<AuctionEventManagerSaga> handledType() {
    return AuctionEventManagerSaga.class;
}

@Override
public void serialize(
        final AuctionEventManagerSaga value,
        final JsonGenerator gen,
        final SerializerProvider serializers
) throws IOException {
    gen.writeStartObject();
    gen.writeObjectField("scheduleToken", value.eventScheduler);
    gen.writeObjectField("auctionTimerStart", value.auctionTimerStart);
    gen.writeEndObject();
}
Right now, I have something being serialized, but it has nothing to do with the properties I've defined:
2020-05-12 16:20:01.322 DEBUG 1 --- [mandProcessor-0] o.a.m.saga.repository.jpa.JpaSagaStore : Storing saga id c4b5d94c-7251-40a5-accf-332768b1cacd as {"delegatee":null,"unwrappingSerializer":false}
EDIT 2: Decided to add more insight into the issue I experience when I switch general to use XStream (even though it's somewhat unrelated to the main issue described in the title).
Here is the error it reports:
2020-05-12 17:08:06.495 DEBUG 1 --- [ault-executor-0] o.a.a.c.command.AxonServerCommandBus : Received command response [message_identifier: "79631ffb-9a87-4224-bed3-a957730dced7"
error_code: "AXONIQ-4002"
error_message {
message: "No converter available\n---- Debugging information ----\nmessage : No converter available\ntype : jdk.internal.misc.InnocuousThread\nconverter : com.thoughtworks.xstream.converters.reflection.ReflectionConverter\nmessage[1] : Unable to make field private static final jdk.internal.misc.Unsafe jdk.internal.misc.InnocuousThread.UNSAFE accessible: module java.base does not \"opens jdk.internal.misc\" to unnamed module #7728643a\n-------------------------------"
location: "1#600b5b87a922"
details: "No converter available\n---- Debugging information ----\nmessage : No converter available\ntype : jdk.internal.misc.InnocuousThread\nconverter : com.thoughtworks.xstream.converters.reflection.ReflectionConverter\nmessage[1] : Unable to make field private static final jdk.internal.misc.Unsafe jdk.internal.misc.InnocuousThread.UNSAFE accessible: module java.base does not \"opens jdk.internal.misc\" to unnamed module #7728643a\n-------------------------------"
}
request_identifier: "2f7020b1-f655-4649-bbe0-d6f458b3c2f3"
]
2020-05-12 17:08:06.505 WARN 1 --- [ault-executor-0] o.a.c.gateway.DefaultCommandGateway : Command 'ACommandClassDispatchedFromSaga' resulted in org.axonframework.commandhandling.CommandExecutionException(No converter available
---- Debugging information ----
message : No converter available
type : jdk.internal.misc.InnocuousThread
converter : com.thoughtworks.xstream.converters.reflection.ReflectionConverter
message[1] : Unable to make field private static final jdk.internal.misc.Unsafe jdk.internal.misc.InnocuousThread.UNSAFE accessible: module java.base does not "opens jdk.internal.misc" to unnamed module @7728643a
-------------------------------)
Still no luck on resolving this...
I've worked on Axon systems where the only Serializer implementation used was the JacksonSerializer too. Mind you though, this is not what the Axon team recommends. For messages (i.e. commands, events, and queries) it makes perfect sense to use JSON as the serialized format. But switching the general Serializer to jackson means you have to litter your domain logic (e.g. your Saga) with Jackson specifics "to make it work".
Regardless, backtracking to my successful use case of Jackson-serialized sagas: in that case we used the correct match of JSON annotations on the fields we wanted taken into account (the actual state) and ignored the ones we didn't want deserialized (with either transient or @JsonIgnore), roughly like the sketch below. Why neither seems to work in your scenario is not entirely clear at this stage.
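For illustration only (the class and field names are reused from your snippet, not taken from the referenced project), the field layout we aimed for looked roughly like this:

import java.time.Instant;

import org.axonframework.eventhandling.scheduling.EventScheduler;
import org.axonframework.eventhandling.scheduling.ScheduleToken;
import org.axonframework.spring.stereotype.Saga;
import org.springframework.beans.factory.annotation.Autowired;

import com.fasterxml.jackson.annotation.JsonIgnore;

@Saga
public class AuctionEventManagerSaga {

    // Infrastructure: transient plus @JsonIgnore, so it is skipped during
    // de-/serialization and re-injected by Axon when the saga is loaded.
    @JsonIgnore
    @Autowired
    private transient EventScheduler eventScheduler;

    // Actual saga state: plain fields with accessors, no Lombok involved.
    private ScheduleToken scheduleToken;
    private Instant auctionTimerStart;

    public ScheduleToken getScheduleToken() { return scheduleToken; }
    public void setScheduleToken(ScheduleToken scheduleToken) { this.scheduleToken = scheduleToken; }
    public Instant getAuctionTimerStart() { return auctionTimerStart; }
    public void setAuctionTimerStart(Instant auctionTimerStart) { this.auctionTimerStart = auctionTimerStart; }
}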
What I do recall is that the referenced project's team very deliberately decided against Lombok due to "overall weirdness" when it comes to de-/serialization. As a trial it might thus be worth removing all Lombok annotations/logic from the Saga class and seeing whether you can de-/serialize it correctly in that state.
If it does work at that point, I think you have found your culprit and can dive in further.
I know this isn't an exact answer, but I hope it helps you regardless!
It might also be worthwhile to share the repository where this problem occurs; that could make the problem clearer for others too.
I was able to resolve issue #2 when using XStream as the general serializer.
One of the Sagas had an @Autowired dependency property that was not transient.
XStream was throwing a cryptic message, but we managed to track down the problem and address it.
As for JSON support, we had no luck. We ended up switching everything to XStream for now, as the company only uses Java, so it is acceptable to decode the events using XStream.
Not the greatest solution, as we really wanted (and hoped) JSON would be supported properly out of the box. Mind you, this is in conjunction with using Lombok, which caused the nuisance in this case.

Reset redis cache expiry using spring data redis

I have a requirement to reset the expiry time if a record is accessed before its initial expiry time. I am using the Spring Data Redis API to use Redis as a cache, with RedisCacheManager's setDefaultExpiration(5000) to set the default expiration. I am unable to find any solution or documentation about resetting the expiry time. Any guidance is appreciated.
Also, I wonder why this isn't a natural feature of a Redis cache; after all, it should keep the most-used records in the cache.
I wrote this method and called it from the appropriate places. Worked like a charm for me.
public void resetExpire(String keyPattern) {
    LOG.debug("Getting multiple keys from cache with pattern: " + keyPattern);
    Set<String> keyList = redisTemplate.keys(keyPattern);
    redisTemplate.executePipelined(new RedisCallback<Object>() {
        public Object doInRedis(RedisConnection connection) throws DataAccessException {
            keyList.forEach(key ->
                    redisTemplate.expire(key, 5000, TimeUnit.SECONDS));
            return null;
        }
    });
}
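One caveat worth noting: redisTemplate.expire(...) inside the callback goes through the template's own connection handling, so those EXPIRE commands are not actually queued on the pipeline. If you want them truly pipelined, issue them on the connection the callback receives. A sketch under that assumption (redisTemplate here is assumed to be a StringRedisTemplate):

import java.nio.charset.StandardCharsets;
import java.util.Set;

import org.springframework.data.redis.connection.RedisConnection;
import org.springframework.data.redis.core.RedisCallback;

public void resetExpirePipelined(String keyPattern) {
    Set<String> keys = redisTemplate.keys(keyPattern);
    redisTemplate.executePipelined((RedisCallback<Object>) (RedisConnection connection) -> {
        for (String key : keys) {
            // Queues "EXPIRE key 5000" on the pipelined connection.
            connection.expire(key.getBytes(StandardCharsets.UTF_8), 5000);
        }
        return null; // executePipelined requires the callback to return null
    });
}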

Configuring Logback: report time from application start instead of current date/time

During troubleshooting in a development environment, I would like to have the time since application start instead of the current date/time in my logs, like in dmesg output.
What configuration and formatter should I use?
UPDATE: There is an example on the official site, https://logback.qos.ch/manual/layouts.html#writingYourOwnLayout, where a custom layout is implemented:
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.LayoutBase;

public class MySampleLayout extends LayoutBase<ILoggingEvent> {

    public String doLayout(ILoggingEvent event) {
        StringBuffer sbuf = new StringBuffer(128);
        // Milliseconds since the logger context (i.e. the application) started.
        sbuf.append(event.getTimeStamp() - event.getLoggerContextVO().getBirthTime());
        sbuf.append(" ");
        sbuf.append(event.getLevel());
        return sbuf.toString();
    }
}
For me that's overcomplicated. Such a simple thing shouldn't require compilation, just configuration... Why should I need to repackage a JAR or extend the CLASSPATH to include a custom-written layout?
It seems the official docs have a note on this:
r / relative: Outputs the number of milliseconds elapsed since the start
of the application until the creation of the logging event.
But you can't format it as a date:
<encoder>
    <pattern>%r %5p [%15.15t] %logger%n%m%wEx%n</pattern>
</encoder>
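So if milliseconds since start are enough, no custom layout or compilation is needed at all. A minimal sketch of a complete logback.xml using the stock ConsoleAppender (the appender name and root level are just examples):

<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <!-- %relative (alias %r): milliseconds since application start -->
      <pattern>%relative %5p [%15.15t] %logger%n%m%wEx%n</pattern>
    </encoder>
  </appender>
  <root level="DEBUG">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>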

WCF Client Proxies, Client/Channel Caching in ASP.Net - Code Review

I'm a long-time ASP.NET interface developer being asked to learn WCF, and I'm looking for some education on the more architecture-related fronts; it's not my strong suit, but I'm having to deal with it.
In our current ASMX world we adopted a model of creating static ServiceManager classes for our interaction with web services. We're starting to migrate to WCF, attempting to follow the same model. At first I was dealing with performance problems, but I've tweaked things a bit and we're running smoothly now, though I'm questioning my tactics. Here's a simplified version (error handling, caching, and object manipulation removed) of what we're doing:
public static class ContentManager
{
    private static StoryManagerClient _clientProxy = null;
    const string _contentServiceResourceCode = "StorySvc";

    // FOR CACHING
    const int _getStoriesTTL = 300;
    private static Dictionary<string, GetStoriesCacheItem> _getStoriesCache = new Dictionary<string, GetStoriesCacheItem>();
    private static ReaderWriterLockSlim _cacheLockStories = new ReaderWriterLockSlim();

    public static Story[] GetStories(string categoryGuid)
    {
        // OMITTED - if category is cached and not expired, return from cache

        // get endpoint address from FinderClient (ResourceManagement SVC)
        UrlResource ur = FinderClient.GetUrlResource(_contentServiceResourceCode);

        // Get proxy
        StoryManagerClient svc = GetStoryServiceClient(ur.Url);

        // create request params
        GetStoriesRequest request = new GetStoriesRequest {}; // SIMPLIFIED
        Manifest manifest = new Manifest {}; // SIMPLIFIED

        // execute GetStories at WCF service
        try
        {
            GetStoriesResponse response = svc.GetStories(manifest, request);
        }
        catch (Exception)
        {
            if (svc.State == CommunicationState.Faulted)
            {
                svc.Abort();
            }
            throw;
        }

        // OMITTED - do stuff with response, cache if needed
        // return....
    }

    internal static StoryManagerClient GetStoryServiceClient(string endpointAddress)
    {
        if (_clientProxy == null)
            _clientProxy = new StoryManagerClient(GetServiceBinding(_contentServiceResourceCode), new EndpointAddress(endpointAddress));
        return _clientProxy;
    }

    public static Binding GetServiceBinding(string bindingSettingName)
    {
        // uses Finder service to load a binding object - our alternative to definition in web.config
    }

    public static void PreloadContentServiceClient()
    {
        // get finder location
        UrlResource ur = FinderClient.GetUrlResource(_contentServiceResourceCode);
        // preload proxy
        GetStoryServiceClient(ur.Url);
    }
}
We're running smoothly now, with round-trip calls completing in the 100ms range. Creating the PreloadContentServiceClient() method and adding it to our global.asax got that "first call" performance down to the same level. You might also want to know that we're using the DataContractSerializer and the "Add Service Reference" method.
I've done a lot of reading on static classes, singletons, shared data contract assemblies, how to use the ChannelFactory pattern, and a whole bunch of other things I could apply to our usage model... admittedly, some of it has gone over my head. And, like I said, we seem to be running smoothly. I know I'm not seeing the big picture, though. Can someone tell me what I've ended up with here with regard to channel pooling, proxy failures, etc., and why I should head down the ChannelFactory path? My gut says to just do it, but my head can't comprehend why...
Thanks!
ChannelFactory is typically used when you aren't using Add Service Reference - you have the contract via a shared assembly, not generated from a WSDL. Add Service Reference uses ClientBase, which essentially creates the WCF channel for you behind the scenes.
When you are dealing with RESTful services, WebChannelFactory provides a service-client-like interface based on the shared assembly contract. You can't use Add Service Reference if your service only supports a RESTful endpoint binding.
The only difference to you is preference: do you need full access to the channel for custom behaviors, bindings, etc., or does Add Service Reference + SOAP supply you with enough of an interface for your needs?
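For comparison, the ChannelFactory shape usually looks something like the sketch below. The IStoryManager contract interface, its operation, and the endpoint address here are placeholders, not your generated types; the point is that the factory is the expensive object to cache, while channels are cheap and short-lived:

using System;
using System.ServiceModel;

// Hypothetical shared-assembly contract (placeholder, not your real service).
[ServiceContract]
public interface IStoryManager
{
    [OperationContract]
    string GetStory(string storyId);
}

public static class StoryChannel
{
    // Cache the factory once; it is the costly part to construct.
    private static readonly ChannelFactory<IStoryManager> Factory =
        new ChannelFactory<IStoryManager>(
            new BasicHttpBinding(),
            new EndpointAddress("http://example.com/StorySvc")); // placeholder address

    public static TResult Call<TResult>(Func<IStoryManager, TResult> operation)
    {
        // Create a fresh channel per call; close it on success, abort on failure.
        IStoryManager channel = Factory.CreateChannel();
        try
        {
            TResult result = operation(channel);
            ((IClientChannel)channel).Close();
            return result;
        }
        catch
        {
            ((IClientChannel)channel).Abort(); // never Close a faulted channel
            throw;
        }
    }
}

// Usage: string story = StoryChannel.Call(svc => svc.GetStory("42"));

This per-call channel pattern also sidesteps the problem in the cached _clientProxy above: once a shared proxy faults, every subsequent caller gets the faulted instance until it is recreated.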
