Axon for choreography, DisallowReplay or Subscribing? - axon

I've written some choreography-based inter-service communication using Axon Framework.
https://github.com/jinyoung/lab-shop-eventsourcing4
Event Storming Model is here: https://dev.msaez.io/#/storming/129002f0e576ba8633b5b7f4520abbe1
The order service publishes an "OrderPlacedEvent" and the delivery service consumes the event with an @EventHandler:
OrderAggregate.java in the order service:

@Aggregate
@Data
@ToString
public class OrderAggregate {

    @AggregateIdentifier
    private Long id;
    private String productId;
    private Integer qty;
    private String customerId;
    private java.math.BigDecimal amount;
    private String status;
    private String address;

    public OrderAggregate() {}

    @CommandHandler
    public OrderAggregate(OrderCommand command) {
        OrderPlacedEvent event = new OrderPlacedEvent();
        BeanUtils.copyProperties(command, event);
        apply(event);
    }
    ...
PolicyHandler.java in the delivery service:

@Service
@ProcessingGroup("delivery")
public class PolicyHandler {

    @Autowired
    CommandGateway commandGateway;

    @EventHandler
    @DisallowReplay
    @AllowReplay(false)
    public void wheneverOrderPlaced_AddToDeliveryList(OrderPlacedEvent orderPlaced, ReplayStatus status) {
        System.out.println(orderPlaced.toString()); // called again when it restarted
        AddToDeliveryListCommand command = new AddToDeliveryListCommand();
        command.setId(System.currentTimeMillis());
        command.setOrderId(orderPlaced.getId());
        commandGateway.send(command);
    }
}
The problem is that when the delivery service is restarted, every @EventHandler is called again with the 'OrderPlacedEvent'. That causes an undesired side effect: the delivery start command is produced again.
So I tried the @DisallowReplay option, but that doesn't work; the handler still receives all the events.
So I then tried to set the following configuration:
axon:
  axonserver:
    servers: localhost
  eventhandling:
    processors:
      delivery:
        mode: subscribing
        # source: eventbus
In this case, the @EventHandler never gets called. Is it correct that the event is first stored in the Event Store and only afterwards sent to the Event Bus? I then tried to set the 'source' value to eventBus, but Spring Boot failed with a bean error again.
What I want is for the subscribing service to receive the event only once, even if the service is restarted.

I think what you're missing is a persistent token store. The progress the event processor has made is currently not stored anywhere, so when the service starts up again, the processor starts from the beginning of the event stream.
If this is just for local development, you could configure the processor's initial tracking token to start at the head of the stream instead (skipping the events already in the store), which solves the problem. For production you would want to use a relational database (using JPA or JDBC) to store the tokens, or MongoDB (using the Axon Mongo extension).
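To make that concrete, here is a minimal sketch assuming Axon 4 with the Spring Boot starter. The processing group name "delivery" comes from the snippets above; everything else (class name, bean wiring) is illustrative, not the definitive setup:

import org.axonframework.common.jpa.EntityManagerProvider;
import org.axonframework.config.EventProcessingConfigurer;
import org.axonframework.eventhandling.TrackingEventProcessorConfiguration;
import org.axonframework.eventhandling.tokenstore.TokenStore;
import org.axonframework.eventhandling.tokenstore.jpa.JpaTokenStore;
import org.axonframework.messaging.StreamableMessageSource;
import org.axonframework.serialization.Serializer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AxonConfig {

    // Option 1 (local development): let the "delivery" tracking processor start
    // at the head of the stream, so historical events are not replayed.
    @Autowired
    public void configureInitialToken(EventProcessingConfigurer configurer) {
        configurer.registerTrackingEventProcessorConfiguration(
                "delivery",
                config -> TrackingEventProcessorConfiguration
                        .forSingleThreadedProcessing()
                        .andInitialTrackingToken(StreamableMessageSource::createHeadToken));
    }

    // Option 2 (production): persist tracking tokens in a relational database so
    // progress survives restarts. With axon-spring-boot-starter and JPA on the
    // classpath this is usually auto-configured; an explicit bean looks like this.
    @Bean
    public TokenStore tokenStore(EntityManagerProvider entityManagerProvider, Serializer serializer) {
        return JpaTokenStore.builder()
                .entityManagerProvider(entityManagerProvider)
                .serializer(serializer)
                .build();
    }
}

With a persisted token store in place, a restart resumes from the last stored token, so the delivery handler sees each OrderPlacedEvent only once.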

Related

Empty data set returned after a few repeated calls to DAO

I have a serious problem where the DAO layer stops returning records after a few calls. I'm using Spring Framework 5.3.10. The main components involved are:
- Spring MVC
- Connection pooling over HikariCP 5.0.0
- JDBC connector Jaybird 4.0.3 (Firebird 3.0.7 database server)
- ThreadPoolExecutor (using default values)
- Spring Transactions
- MyBatis
I have one Spring controller (A) that repeatedly (every 2-3 seconds) calls a method of a Spring service (B) asynchronously (the method is marked with @Async), with a different parameter for each call. There is a DAO layer (C) declared as a Spring service. The worker method in the Spring service (B) calls a DAO method at the beginning of each run to retrieve a data set from a database table corresponding to the passed parameter. At the end of the execution of the worker method in the Spring service (B), the rows corresponding to the input parameter are updated (but not the field corresponding to the input parameter). The method in the Spring service (B) takes a long time to process the data, about 10-15 seconds.
After about the third or fourth call from the Spring controller (A), the call to the DAO-method returns an empty result set. When calling the method in the Spring service (B) slowly, waiting for the previous call to complete, everything is working correctly.
Setting transaction isolation has got no effect whatsoever.
I have tried to solve this problem for a couple of days now, and getting nowhere. I would be very grateful if somebody can point me in the right direction how to solve this. Using some kind of mutexes or semaphores is just a way to circumvent the problem without really solving it.
Schematically:

Controller A  <-----------------
    |                          |
    |                          | repeats every 2-3 secs.
Service B                      |
    worker method              |
    takes 15-20 secs.  ---------
        calls DAO-method getData(token)
        |
        do work
        |
        calls DAO-method updateData(token)
Controller (A)

@Controller
@RequestMapping("/test")
public class TestController {

    @Autowired
    private TestService testService;
    ...

    @GetMapping(value = "/RunWorker")
    public String runWorker(ModelMap map, HttpServletRequest hsr) {
        ...
        testService.workerMethod(token);
        ...
    }
}
Service (B)

public interface TestService {
    public void workerMethod(long token);
}

@Service
public class TestServiceImpl implements TestService {

    @Autowired
    private TestDAO testDao;

    @Override
    public void workerMethod(long token) {
        List<MyData> myDataSet = testDao.getData(token);
        ...
        // very long process
        ...
        testDao.updateData(token);
    }
}
DAO (C)

public interface TestDAO {
    public List<MyData> getData(long token);
    public void updateData(long token);
}

@Service
public class TestDAOImpl implements TestDAO {

    @Autowired
    private TestMapper testMapper; // using MyBatis mappers

    public List<MyData> getData(long token) {
        return testMapper.getData(token);
    }

    public void updateData(long token) {
        testMapper.updateData(token);
    }
}
Mapper class (D)

public interface TestMapper {

    @Select("SELECT * FROM TESTTABLE WHERE TOKEN=#{token}")
    public List<MyData> getData(@Param("token") long token);

    @Update("UPDATE TESTTABLE SET STATUS=9 WHERE TOKEN=#{token}")
    public void updateData(@Param("token") long token);
}
Thanks @M. Deinum for the suggestion about @Repository. This did not help, however.
I remade the Spring service (B) into a Spring bean with prototype scope and injected it with @Lookup. The behavior is still the same: after the second call, the DAO method getData returns an empty result set. Very puzzling and frustrating.
I solved the problem. It was probably resource exhaustion due to repeated concurrent calls to the Spring service (B) with the same call parameters. My guess is that the statement pool got depleted because active statements were not returned fast enough, and subsequent calls then received empty result sets.
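For what it's worth, a common way to keep this kind of exhaustion in check is to bound the executor that runs the @Async worker, so overlapping runs cannot pile up. A minimal sketch under that assumption (the bean name "workerExecutor" and the pool sizes are illustrative, not taken from the original setup):

import java.util.concurrent.Executor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableAsync
public class AsyncConfig {

    // A bounded executor so concurrent worker runs cannot starve the
    // connection/statement resources used by the DAO layer.
    @Bean(name = "workerExecutor")
    public Executor workerExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(2);   // illustrative values
        executor.setMaxPoolSize(4);
        executor.setQueueCapacity(50);
        executor.setThreadNamePrefix("worker-");
        executor.initialize();
        return executor;
    }
}

The worker method would then be annotated with @Async("workerExecutor") so that only this bounded pool executes it.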
Best regards,
Peter

Integrating Spring Reactive with Spring MVC + MySQL

I'm trying to figure out whether I can use Spring Reactive (Flux/Mono) together with Spring MVC.
The microservices are built with Spring MVC + Feign Client, Eureka Server (Netflix OSS), Hystrix, and a MySQL database.
My first microservice, addDistanceClient, adds data to the database.
Here is an example controller:
@RequestMapping("/")
@RestController
public class RemoteMvcController {

    @Autowired
    EmployeeService service;

    @GetMapping(path = "/show")
    public List<EmployeeEntity> getAllEmployeesList() {
        return service.getAllEmployees();
    }
}
Here I can use Mono/Flux, I think there will be no problems.
My second microservice is showDistanceClient; it is not directly connected to the database.
It has a method that calls the method shown above on the first microservice to retrieve data from the database.
It uses the Feign Client.
Second microservice controller:
@Controller
@RequestMapping("/")
public class EmployeeMvcController {

    private ServiceFeignClient serviceFeignClient;

    @RequestMapping(path = "/getAllDataFromAddService")
    public String getData2(Model model) {
        List<EmployeeEntity> list = ServiceFeignClient.FeignHolder.create().getAllEmployeesList();
        model.addAttribute("employees", list);
        return "resultlist-employees";
    }
}
and ServiceFeignClient itself, with which we call the method on the first microservice, looks like this:
@FeignClient(name = "add-client", url = "http://localhost:8081/", fallback = Fallback.class)
public interface ServiceFeignClient {

    class FeignHolder {
        public static ServiceFeignClient create() {
            return HystrixFeign.builder()
                    .encoder(new GsonEncoder())
                    .decoder(new GsonDecoder())
                    .target(ServiceFeignClient.class, "http://localhost:8081/", new FallbackFactory<ServiceFeignClient>() {
                        @Override
                        public ServiceFeignClient create(Throwable throwable) {
                            return new ServiceFeignClient() {
                                @Override
                                public List<EmployeeEntity> getAllEmployeesList() {
                                    System.out.println(throwable.getMessage());
                                    return null;
                                }
                            };
                        }
                    });
        }
    }

    @RequestLine("GET /show")
    List<EmployeeEntity> getAllEmployeesList();
}
It is working properly now. That is, if both microservices are up, I get the data from the database.
If the first microservice (addDistanceClient) is dead and I call the method on the second microservice (showDistanceClient) to get data from the database through the first one (via the Feign Client), I get a page with a spinning spinner and a message that the service is unavailable, please try again later. All good so far.
My goal:
Using Spring Reactive (not sure if this will help, but I think I'm heading in the right direction), I want the "service is currently unavailable" message and the spinner on the second microservice to disappear automatically and the data from the database to be displayed as soon as the first microservice (addDistanceClient) comes back to life, without re-sending the request, i.e. without reloading the page.
Will I be able to do this with Spring WebFlux?
I know that Spring WebFlux works with a stream that will itself notify us when data appears in it, so we do not need to resubmit the request.
I started thinking about this and cannot figure out how to do this:
1) Using Spring Reactive.
In this case, I need to fit Flux/Mono into the MVC model of the second microservice, showDistanceClient, which returns HTML. I don't understand how; I only know how to do this with REST.
2) If the first option is not the right one, maybe I need to use a WebSocket for this?
If so, please share useful links with examples. I will be very grateful.
Indeed, this topic is very interesting to me and I want to understand it.
I will be very grateful for your help. Thanks everyone!
UPDATED POST:
I updated both controllers with REST + WebFlux. Everything works for me.
The first addDistanceClient service and its controller:
@RestController
@RequestMapping("/")
public class BucketController {

    @Autowired
    private BucketRepository bucketRepository;

    // Get all Buckets from the database (one record is emitted every 5 seconds)
    @GetMapping(value = "/stream/buckets/delay", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<Bucket> streamAllBucketsDelay() {
        return bucketRepository.findAll().delayElements(Duration.ofSeconds(5));
    }
}
It pulls all the records from the database, emitting one record every 5 seconds. I added the delay just as an example for testing.
The second service is showDistanceClient; here is its controller.
Here I used WebClient instead of the Feign Client.
@RestController
@RequestMapping("/")
public class UserController {

    @Autowired
    private WebClient webClient;

    @Autowired
    private WebClientService webClientService;

    // Using WebClient
    @GetMapping(value = "/getDataByWebClient", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<Bucket> getDataByWebClient() {
        return webClientService.getDataByWebClient();
    }
}
and its Service layer (WebClientService):
@Service
public class WebClientService {

    private static final String API_MIME_TYPE = "application/json";
    private static final String API_BASE_URL = "http://localhost:8081";
    private static final String USER_AGENT = "User Service";
    private static final Logger logger = LoggerFactory.getLogger(WebClientService.class);

    private WebClient webClient;

    public WebClientService() {
        this.webClient = WebClient.builder()
                .baseUrl(API_BASE_URL)
                .defaultHeader(HttpHeaders.CONTENT_TYPE, API_MIME_TYPE)
                .defaultHeader(HttpHeaders.USER_AGENT, USER_AGENT)
                .build();
    }

    public Flux<Bucket> getDataByWebClient() {
        return webClient.get()
                .uri("/stream/buckets/delay")
                .exchange()
                .flatMapMany(clientResponse -> clientResponse.bodyToFlux(Bucket.class));
    }
}
Now everything works in a reactive environment. Fine.
But my problem remained unresolved.
My goal: everything works fine, but if I call the method on the second service (which uses WebClient to call the first service for the data) while the first service happens to be dead, I get the message that the service is temporarily unavailable. What I want is that, as soon as the first service comes back to life, the request is automatically retried and I receive all the data instead of the "temporarily unavailable" message (important: without reloading the page).
How do I achieve this?
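One possible direction, sketched under the assumption that the browser keeps an SSE (text/event-stream) subscription open: let the WebClient call retry with backoff until the upstream service is reachable again, so the Flux only starts (or resumes) emitting once addDistanceClient is back. The class name and retry values below are illustrative, not part of the original project:

import java.time.Duration;
import org.springframework.http.MediaType;
import org.springframework.stereotype.Service;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;
import reactor.util.retry.Retry;

@Service
public class ResilientWebClientService {

    private final WebClient webClient = WebClient.builder()
            .baseUrl("http://localhost:8081") // same upstream as above
            .build();

    public Flux<Bucket> getDataByWebClient() {
        return webClient.get()
                .uri("/stream/buckets/delay")
                .accept(MediaType.TEXT_EVENT_STREAM)
                .retrieve()
                .bodyToFlux(Bucket.class)
                // If the upstream is down or the stream breaks, keep retrying
                // with backoff; the SSE stream to the browser then resumes by
                // itself once the first service is alive again.
                .retryWhen(Retry.backoff(Long.MAX_VALUE, Duration.ofSeconds(2))
                        .maxBackoff(Duration.ofSeconds(30)));
    }
}

The page would consume this endpoint as a text/event-stream (e.g. via EventSource), so it never has to be reloaded while the retry loop waits for the upstream to recover.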

Multi-entity Aggregates command handling

I have an aggregate root like this:
Aggregate root:
@NoArgsConstructor
@Aggregate(repository = "positionAggregateRepository")
@AggregateRoot
@XSlf4j
@Data
public class HopAggregate {

    @AggregateIdentifier
    private String hopId;

    private FilteredPosition position;
    private LocalDate positionDate;

    @AggregateMember
    private Security security;

    @CommandHandler
    public HopAggregate(NewHopCommand cmd) {
        log.info("creating new position , {}", cmd.getDateId());
        apply(new HopEvent(cmd.getHopId(), cmd.getDateId(), cmd.getFilteredPosition(), cmd.getSecurity(), false));
    }

    @CommandHandler
    public void handle(UpdateHopCommand cmd) {
        log.info("creating hop update event {}", cmd);
        apply(new HopEvent(this.hopId, this.positionDate, cmd.getFilteredPosition(), this.security, true));
    }

    @CommandHandler
    public void handle(SecurityUpdate cmd) {
        log.info("updating security {}", cmd);
        apply(new SecurityUpdateEvent(this.hopId, cmd.getFilteredSecurity()));
    }

    @EventSourcingHandler
    public void on(HopEvent evt) {
        if (evt.getIsUpdate()) {
            log.info("updating position {}", evt);
            this.position = evt.getFilteredPosition();
        } else {
            log.info("adding new position to date {}", evt);
            this.hopId = evt.getHopId();
            this.positionDate = evt.getDate();
            this.position = evt.getFilteredPosition();
            this.security = evt.getSecurity();
        }
    }

    @EventSourcingHandler
    public void on(SecurityUpdateEvent evt) {
        log.info("hop id {}, security update {}", this.hopId, evt.getFilteredSecurity().getSecurityId());
    }
}
Child entity:
@XSlf4j
@Data
@RequiredArgsConstructor
@NoArgsConstructor
public class IpaSecurity implements Serializable {

    @EntityId
    @NonNull
    private String id;

    @NonNull
    private FilteredSecurity security;
}
My issue is that when I am pushing an update like this:
@EventHandler
public void handleSecurityEvent(SecurityUpdate securityUpdate) {
    log.info("got security event {}", securityUpdate);
    commandGateway.send(securityUpdate);
}
and my command being:
@Data
@RequiredArgsConstructor
@NoArgsConstructor
@ToString
public class SecurityUpdate {

    @NonNull
    @TargetAggregateIdentifier
    private String id;

    @NonNull
    private FilteredSecurity filteredSecurity;
}
I am getting an AggregateNotFoundException:
Command 'com.hb.apps.ipa.events.SecurityUpdate' resulted in org.axonframework.modelling.command.AggregateNotFoundException(The aggregate was not found in the event store)
I am not sure how to handle this scenario. My requirement is that each aggregate should check whether it contains the security and then update it if the command was issued. What am I missing? Let me know if you need any more info on the code.
Thanks for your help.
A command is always targeted at a single entity.
This entity can be an Aggregate, an entity contained in an Aggregate (what Axon Framework calls an Aggregate Member), or a simple singleton component.
Important to note, though, is that there will only ever be one entity handling the command.
This is why you have to set the @TargetAggregateIdentifier in your command: it allows Axon to route it to a single Aggregate instance if the Command Handler in question is part of one.
The AggregateNotFoundException you're getting signals that the @TargetAggregateIdentifier annotated field in your SecurityUpdate command does not correspond to any existing Aggregate.
I'd thus suspect that the id field in the SecurityUpdate does not correspond to any @AggregateIdentifier annotated field in your HopAggregate aggregates.
Apart from the above, I have a couple of other suggestions based on your snippets which I'd like to share with you (a small sketch of the aggregate-member approach follows at the end of this answer):
- @Aggregate is meta-annotated with @AggregateRoot, so you're not required to specify both on an aggregate class.
- For logging the messages being handled, you can use the LoggingInterceptor. You can configure it on any component capable of handling messages, which gives you a universal way of logging and removes the need for log lines inside your message handling functions.
- You're publishing a HopEvent for both the create and the update command. Doing so makes your HopEvent very generic. Ideally, your events clarify the business operations occurring in your system. My rule of thumb is: "If I tell my business manager/customer the name of the event class, he/she should know exactly what it does." I'd thus suggest renaming the event to something more specific.
- Just as with the HopEvent, the UpdateHopCommand is quite generic. Your commands should express the intent to perform an operation in your application. Users typically don't desire an "update"; they desire an address change, for example. Your command classes ideally reflect this.
- The suggested naming convention for commands is to start with a verb in the present tense. Thus, it should not be SecurityUpdate but UpdateSecurity. A command is a request expressing intent, and the messages ideally reflect this.
Hope this helps you out, @juggernaut!
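To make the aggregate-member routing concrete, here is a minimal sketch under stated assumptions: the command is renamed to UpdateSecurityCommand as suggested above, it targets the HopAggregate via hopId, and the single @AggregateMember forwards it to the child entity. SecurityUpdatedEvent and the getter names are hypothetical, not taken from the original code:

import static org.axonframework.modelling.command.AggregateLifecycle.apply;

import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.EntityId;
import org.axonframework.modelling.command.TargetAggregateIdentifier;

// Command (separate file): routed to the HopAggregate instance via its
// @AggregateIdentifier field (hopId).
public class UpdateSecurityCommand {

    @TargetAggregateIdentifier
    private final String hopId;
    private final FilteredSecurity filteredSecurity;

    public UpdateSecurityCommand(String hopId, FilteredSecurity filteredSecurity) {
        this.hopId = hopId;
        this.filteredSecurity = filteredSecurity;
    }

    public String getHopId() { return hopId; }
    public FilteredSecurity getFilteredSecurity() { return filteredSecurity; }
}

// Child entity (separate file): because it is the single @AggregateMember of
// HopAggregate, Axon loads the aggregate by hopId and forwards the command here.
public class IpaSecurity {

    @EntityId
    private String id;

    private FilteredSecurity security;

    @CommandHandler
    public void handle(UpdateSecurityCommand cmd) {
        // SecurityUpdatedEvent is a hypothetical event class for this sketch.
        apply(new SecurityUpdatedEvent(cmd.getHopId(), cmd.getFilteredSecurity()));
    }

    @EventSourcingHandler
    public void on(SecurityUpdatedEvent evt) {
        this.security = evt.getFilteredSecurity();
    }
}

The command still carries the aggregate root's identifier; only once the aggregate has been loaded does Axon dispatch the command to the member entity, so no separate "security aggregate" needs to exist in the event store.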

Using SignalR to send Real Time Notification

I want to create a new system that sends real-time trade execution messages to users using SignalR. In the old system, each client connects to the trading server through a Java applet TCP connection.
I use the following tutorial as reference
http://www.asp.net/signalr/overview/getting-started/tutorial-server-broadcast-with-signalr
There is a line of code in the StockTicker constructor to update the stock prices:
_timer = new Timer(UpdateStockPrices, null, _updateInterval, _updateInterval);
However, I need to push trade executions in real time instead of updating stock prices every 250 ms.
Is it okay to create a TCP connection to my trading server per client in the constructor? It seems that in the sample code the constructor of StockTicker (i.e. my TradingManager) is only called once, but in my design I want to create a TCP connection per client. How should I change the code to do this?
Here is my code:
TradingHub.cs

public class TradingHub : Hub
{
    private readonly TradingManager _tradingManager;

    public TradingHub() : this(TradingManager.Instance) { }

    public TradingHub(TradingManager tradingManager)
    {
        _tradingManager = tradingManager;
    }
    ...
}

TradingManager.cs

public class TradingManager
{
    // Singleton instance
    private readonly static Lazy<TradingManager> _instance = new Lazy<TradingManager>(
        () => new TradingManager());
    ...

    public static TradingManager Instance { get { return _instance.Value; } }

    public TradingManager()
    {
        ...
        this.apiConnector.MessageReceived += new CustomEventHandler(this.api_MessageReceived);
        init();
    }

    private IHubConnectionContext<dynamic> Clients { get; set; }

    private void init()
    {
        TradingSession tradingSession = getLoginSession(user);
        // connect to trading server using TCP connection
        this.apiConnector.ensureConnected(host, port, tradingSession);
        // send keep alive message to trading server periodically
        _timer = new Timer(sendKeepAlive, null, _updateInterval, _updateInterval);
    }

    private void api_MessageReceived(object sender, CustomEventArgs e)
    {
        // when the web server receives a trade execution from the server, send out the message immediately
        Clients.Caller.SendTradeExecutionMessage(......);
    }

    public static TradingSession getLoginSession(string user)
    {
        ...
    }

    private void sendKeepAlive(object state)
    {
        ...
    }
}
If you were to make a new TradingManager in your Hub constructor instead of referencing a singleton, you would be creating more than one TradingManager per SignalR connection. Hubs are reinstantiated per method call. Every time you invoke a hub method or a hub event is called (e.g. OnConnected, OnReconnected, OnDisconnected), your constructor will be called.
However, OnConnected is only called once per SignalR connection. By the way, SignalR connections are completely orthogonal to TCP connections. With the long polling transport, for example, a new HTTP request is sent each time a message is received.
I think you want to create a new TradingManager instance each time OnConnected is called and potentially associate it with the client's Context.ConnectionId and store it (perhaps in a ConcurrentDictionary) so you can retrieve it using the connection id when your Hub methods are called. You can then dereference the stored TradingManager instance for a given connection id in OnDisconnected.
You can learn more about SignalR connections at:
http://www.asp.net/signalr/overview/guide-to-the-api/handling-connection-lifetime-events
You can learn more about the Hub API and the On* methods at:
http://www.asp.net/signalr/overview/guide-to-the-api/hubs-api-guide-server#connectionlifetime

FlushMode type Commit used with REQUIRES_NEW Transaction Attribute

I'm developing with JBoss 7.1.1 Final, Weld, Hibernate 4 and Seam 3, and I don't understand the following behavior. I use a Seam Managed Persistence Context for the entity managers and the Persistence Interceptor from Seam 3. I have the following CDI bean:
@ViewScoped
@Named
public class RegistrationController implements Serializable {

    @Inject
    private RegisterService service;

    @Inject
    private EntityManager em;

    public void register() {
        Person p = service.register("username", "password");
        Person pp = em.find(Person.class, p.getId()); // returns null
    }
}
And the following EJB:
@Stateless
@Local(IRegisterService.class)
public class RegisterService implements IRegisterService {

    @Inject
    private EntityManager em;

    @Override
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public Person register(String username, String password) {
        return em.merge(new Person(username, password));
    }
}
So, since I use the Seam Persistence module, I assume the following flow of operations will occur:
1) registrationController.register() is called from frontend
2) New transaction A is initiated
3) service.register(...) is called
4) Transaction A is suspended and transaction B is created for execution of service.register (since it is annotated by REQUIRES_NEW)
5) The execution service.register(...) is completed
6) Transaction B is committed
7) Since I use COMMIT flush type, flush will be called
8) Transaction A is opened back
Now, em.find(Person.class, p.getId()) tries to find the just-persisted person. Since transaction B was committed and the entity manager flushed, it should be found, but it returns null. If I flush manually, it works.
Where am I making a mistake? Is there some misunderstanding?
From looking at your code I'd say that, since RegistrationController is an ordinary managed bean, it will not start its own transaction. This basically means that you only have the single transaction B.
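As a side note, a minimal sketch of how the controller could sidestep the lookup entirely, given the above: it simply reuses the entity that register() already returns instead of querying through a persistence context that is not enlisted in any transaction (doSomethingWith is a hypothetical follow-up step, not from the original code):

public void register() {
    // register() runs and commits in its own transaction (REQUIRES_NEW),
    // so the merged Person it returns already carries the generated id
    // and the persisted state; use it directly instead of re-querying.
    Person p = service.register("username", "password");
    doSomethingWith(p); // e.g. put it in the view model
}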
