Axon framework: only first event is applied during command handler

I'm trying to create a command handler that treats the command as a number of sub-commands. Each sub-command will generate an event (which should then update the state of the aggregate). The processing of each sub-command relies on the state of the aggregate being up-to-date (from the previous sub-command).
For example, consider the following aggregate:
package axon.poc
import org.axonframework.commandhandling.CommandHandler
import org.axonframework.eventsourcing.EventSourcingHandler
import org.axonframework.modelling.command.AggregateIdentifier
import org.axonframework.modelling.command.AggregateLifecycle
import org.axonframework.spring.stereotype.Aggregate
import org.slf4j.LoggerFactory
import java.util.UUID
@Aggregate
class Aggregate() {
    companion object {
        private val logger = LoggerFactory.getLogger(Aggregate::class.java)
    }

    @AggregateIdentifier
    internal var aggregateId: UUID? = null

    private var value: Int = 0

    @CommandHandler
    constructor(command: Command) : this() {
        logger.info("generating create event")
        var applyMore = AggregateLifecycle.apply(CreatedEvent(command.aggregateId))
        for (i in 0 until command.value) {
            applyMore = applyMore.andThenApply {
                logger.info("generating update event: ${value + 1}")
                UpdatedEvent(command.aggregateId, value + 1)
            }
        }
        logger.info("completed command handler")
    }

    @EventSourcingHandler
    fun on(event: CreatedEvent) {
        logger.info("event sourcing handler: $event")
        this.aggregateId = event.aggregateId
        this.value = 0
    }

    @EventSourcingHandler
    fun on(event: UpdatedEvent) {
        logger.info("event sourcing handler: $event")
        this.value = event.value
    }
}
When Command(value = 2) is handled by this code, it generates
[main] INFO org.axonframework.spring.stereotype.Aggregate - generating create event
[main] INFO org.axonframework.spring.stereotype.Aggregate - completed command handler
[main] INFO org.axonframework.spring.stereotype.Aggregate - event sourcing handler: CreatedEvent(aggregateId=65a7a461-61bb-451f-b2d9-8460994eeb1a)
[main] INFO org.axonframework.spring.stereotype.Aggregate - generating update event: 1
[main] INFO org.axonframework.spring.stereotype.Aggregate - generating update event: 1
[main] INFO org.axonframework.spring.stereotype.Aggregate - event sourcing handler: UpdatedEvent(aggregateId=65a7a461-61bb-451f-b2d9-8460994eeb1a, value=1)
[main] INFO org.axonframework.spring.stereotype.Aggregate - event sourcing handler: UpdatedEvent(aggregateId=65a7a461-61bb-451f-b2d9-8460994eeb1a, value=1)
The first event (CreatedEvent) is handled before the applyMore chain executes. However, even though the applyMore calls are chained, the UpdatedEvents are not handled by the event sourcing handler until both have been generated.
I was expecting (and hoping for):
[main] INFO org.axonframework.spring.stereotype.Aggregate - generating create event
[main] INFO org.axonframework.spring.stereotype.Aggregate - completed command handler
[main] INFO org.axonframework.spring.stereotype.Aggregate - event sourcing handler: CreatedEvent(aggregateId=65a7a461-61bb-451f-b2d9-8460994eeb1a)
[main] INFO org.axonframework.spring.stereotype.Aggregate - generating update event: 1
[main] INFO org.axonframework.spring.stereotype.Aggregate - event sourcing handler: UpdatedEvent(aggregateId=65a7a461-61bb-451f-b2d9-8460994eeb1a, value=1)
[main] INFO org.axonframework.spring.stereotype.Aggregate - generating update event: 2
[main] INFO org.axonframework.spring.stereotype.Aggregate - event sourcing handler: UpdatedEvent(aggregateId=65a7a461-61bb-451f-b2d9-8460994eeb1a, value=2)
Is this a bug in Axon? Or am I misunderstanding something about how it should be used? How can a number of "commands" be handled atomically, i.e. all pass or all fail?

TL;DR: it's a bug.
You've hit a very specific situation that only occurs in constructors. The challenge, from an Axon perspective, is that you need an instance to apply events on. However, that instance is only available once the constructor is complete.
The andThenApply function is provided exactly for this purpose (and you're using it correctly). However, in your case the code is (erroneously) evaluated a little too soon. I had to run your code locally and debug through it to find out exactly what was happening.
The root cause is that the andThenApply implementation of AnnotatedAggregate calls apply instead of publish. The former sees that it is currently executing delayed tasks and schedules the actual publication at the end of those tasks. The next task does the same. Therefore both events end up being created first, and are only published after both have been created.
Would you be interested in filing this as an issue in the Axon issue tracker? That way, credits go where they belong.
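Until a fix is released, one possible workaround (only a sketch, reusing the command and event classes from the question) is to not read the aggregate's state inside the constructor at all, but to track the running value in a local variable so that each UpdatedEvent already carries the correct value when it is created:

@CommandHandler
constructor(command: Command) : this() {
    AggregateLifecycle.apply(CreatedEvent(command.aggregateId))
    // Track the running value locally; the aggregate's `value` field is only
    // updated once the queued events are actually applied after the
    // constructor completes, so it cannot be relied on here.
    var localValue = 0
    for (i in 0 until command.value) {
        localValue += 1
        AggregateLifecycle.apply(UpdatedEvent(command.aggregateId, localValue))
    }
}

Regarding the atomicity question: all events applied within a single command handler are staged in the same unit of work, so they are stored and published together; if the command handler throws, none of them should be persisted.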

Related

Locust task method can't get the global value when "on_test_start" exists

I have a task in a Locust script. Before the task runs with hundreds of users, I want another operation to change the variable "user_var", so that its new value can be used when the task runs.
Unfortunately, when I run the script, the result is different: "user_var" is changed in on_test_start, but inside the task its value is still zero.
I printed the variable's id and it is different in each place. What is happening here? Can somebody tell me? Thanks.
The code is as follows:
import os
from locust import HttpUser, task, between, events

base_url = "http://baidu.com"
user_var = 0
print("init var id:{}".format(id(user_var)))

@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    global user_var
    user_var = 1
    print("method var id:{}".format(id(user_var)))
    print("user_var:{}".format(user_var))

class MyService(HttpUser):
    wait_time = between(1, 2)

    @task()
    def points_acquire(self):
        print("class var id:{}".format(id(user_var)))
        print("user_var:{}".format(user_var))

if __name__ == '__main__':
    run_script = os.path.basename(__file__)
    master_cmd = 'start locust -f {} --host={} --master '.format(run_script, base_url)
    worker_cmd = ' && start locust -f {} --worker'.format(run_script)
    total_cmd = master_cmd + worker_cmd * 1
    os.system(total_cmd)
Per the definition of the event:
"""
Fired when a new load test is started. It's not fired again if the number of
users change during a test. When running locust distributed the event is only fired
on the master node and not on each worker node.
"""
https://docs.locust.io/en/1.4.0/writing-a-locustfile.html#test-start-and-test-stop
So you are initializing the variable as 0, and since the event does not fire on worker nodes it stays 0 there; on the master node the event fires and changes the value.
Edit: It seems the logic behind this has changed since I last checked; it might work if you update your Locust version.

Understanding Diagnostics in Durable Functions

I'm working with Durable Functions, with 3 activities that execute one by one. All 3 activities used to execute one after another without any delay, but for the past few days the activities are not running at all and have stopped executing. I see no exceptions or logs except the one below.
[FunctionName("StartProcess")]
public static async Task Run([OrchestrationTrigger] DurableOrchestrationContext context)
{
    // number of activities:
    // RandomNumberGeneration
    // RandomNumberValidation
    // DatabaseInsertion
}
My StartProcess orchestration trigger always executes, but it is not able to invoke the RandomNumberGeneration() activity trigger. Below is the log I get when the StartProcess() orchestration trigger executes, and I want to know more about it.
0ee5eaa5ddb240d28df0f1077a563cc6: Function 'StartProcess (Orchestrator)', version '' started. IsReplay: False. Input: (42 bytes). State: Started. HubName: DurableFunctionsHub. AppName: schedulars. SlotName: Production. ExtensionVersion: 1.0.0.0.
Function started (Id=b8abdfcc-1794-4bea-b5b7-5a35bffacb4b)
0ee5eaa5ddb240d28df0f1077a563cc6: Function 'DatabaseInsertion (Activity)', version '' scheduled. Reason: StartProcess. IsReplay: False. State: Scheduled. HubName: DurableFunctionsHub. AppName: schedulars. SlotName: Production. ExtensionVersion: 1.0.0.0.
Function completed (Success, Id=b8abdfcc-1794-4bea-b5b7-5a35bffacb4b, Duration=18ms)
Below is the log from when StartProcess executes and also invokes the other activities as expected.
Function started (Id=4e61d349-b6e5-4c04-9d85-8d490f2a4a4e)
9573f926bf884bb0a519e5cd1434621c: Function 'StartProcess (Orchestrator)', version '' started. IsReplay: True. Input: (42 bytes). State: Started. HubName: DurableFunctionsHub. AppName: schedulars. SlotName: Production. ExtensionVersion: 1.0.0.0.
9573f926bf884bb0a519e5cd1434621c: Function 'StartProcess (Orchestrator)', version '' completed. ContinuedAsNew: False. IsReplay: True. Output: (null). State: Completed. HubName: DurableFunctionsHub. AppName: schedulars. SlotName: Production. ExtensionVersion: 1.0.0.0.
Function completed (Success, Id=4e61d349-b6e5-4c04-9d85-8d490f2a4a4e, Duration=3270ms)
I have gone through Diagnostics in Durable Functions (https://learn.microsoft.com/en-us/azure/azure-functions/durable-functions-diagnostics) but am not able to understand it clearly.
Is StartProcess() unable to start another activity because the DatabaseInsertion() activity trigger is already in the Scheduled state and the orchestrator is waiting for it to complete?
Please help me understand what the issue is and why it is not running.

Exactly-once delivery: is it possible through spring-cloud-stream-binder-kafka or spring-kafka, and which one should I use?

I am trying to achieve exactly once delivery using spring-cloud-stream-binder-kafka in a spring boot application.
The versions I am using are:
spring-cloud-stream-binder-kafka-core-1.2.1.RELEASE
spring-cloud-stream-binder-kafka-1.2.1.RELEASE
spring-cloud-stream-codec-1.2.2.RELEASE
spring-kafka-1.1.6.RELEASE
spring-integration-kafka-2.1.0.RELEASE
spring-integration-core-4.3.10.RELEASE
zookeeper-3.4.8
Kafka version : 0.10.1.1
This is my configuration (cloud-config):
spring:
  autoconfigure:
    exclude: org.springframework.cloud.netflix.metrics.servo.ServoMetricsAutoConfiguration
  kafka:
    consumer:
      enable-auto-commit: false
  cloud:
    stream:
      kafka:
        binder:
          brokers: "${BROKER_HOST:xyz-aws.local:9092}"
          headers:
            - X-B3-TraceId
            - X-B3-SpanId
            - X-B3-Sampled
            - X-B3-ParentSpanId
            - X-Span-Name
            - X-Process-Id
          zkNodes: "${ZOOKEEPER_HOST:120.211.316.261:2181,120.211.317.252:2181}"
        bindings:
          feed_platform_events_input:
            consumer:
              autoCommitOffset: false
      binders:
        xyzkafka:
          type: kafka
      bindings:
        feed_platform_events_input:
          binder: xyzkafka
          destination: platform-events
          group: br-platform-events
I have two main classes:
FeedSink Interface:
package au.com.xyz.proxy.interfaces;

import org.springframework.cloud.stream.annotation.Input;
import org.springframework.messaging.MessageChannel;

public interface FeedSink {
    String FEED_PLATFORM_EVENTS_INPUT = "feed_platform_events_input";

    @Input(FeedSink.FEED_PLATFORM_EVENTS_INPUT)
    MessageChannel feedlatformEventsInput();
}
EventConsumer
package au.com.xyz.proxy.consumer;

@Slf4j
@EnableBinding(FeedSink.class)
public class EventConsumer {

    public static final String SUCCESS_MESSAGE =
            "SEND-SUCCESS : Successfully sent message to platform.";
    public static final String FAULT_MESSAGE = "SOAP-FAULT Code: {}, Description: {}";
    public static final String CONNECT_ERROR_MESSAGE = "CONNECT-ERROR Error Details: {}";
    public static final String EMPTY_NOTIFICATION_ERROR_MESSAGE =
            "EMPTY-NOTIFICATION-ERROR Empty Event Received from platform";

    @Autowired
    private CapPointService service;

    /**
     * Method associated with the stream to process messages.
     */
    @StreamListener(FeedSink.FEED_PLATFORM_EVENTS_INPUT)
    public void message(final @Payload EventNotification eventNotification,
                        final @Header(KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment acknowledgment) {
        String caseMilestone = "UNKNOWN";
        if (!ObjectUtils.isEmpty(eventNotification)) {
            SysMessage sysMessage = processPayload(eventNotification);
            caseMilestone = sysMessage.getCaseMilestone();
            try {
                ClientResponse response = service.sendPayload(sysMessage);
                if (response.hasFault()) {
                    Fault faultDetails = response.getFaultDetails();
                    log.error(FAULT_MESSAGE, faultDetails.getCode(), faultDetails.getDescription());
                } else {
                    log.info(SUCCESS_MESSAGE);
                }
                acknowledgment.acknowledge();
            } catch (Exception e) {
                log.error(CONNECT_ERROR_MESSAGE, e.getMessage());
            }
        } else {
            log.error(EMPTY_NOTIFICATION_ERROR_MESSAGE);
            acknowledgment.acknowledge();
        }
    }

    private SysMessage processPayload(final EventNotification eventNotification) {
        Gson gson = new Gson();
        String jsonString = gson.toJson(eventNotification.getData());
        log.info("Consumed message for platform events with payload : {} ", jsonString);
        SysMessage sysMessage = gson.fromJson(jsonString, SysMessage.class);
        return sysMessage;
    }
}
I have set the auto-commit property to false for both the Kafka client and the Spring container.
As you can see in the EventConsumer class, I call acknowledge() only when service.sendPayload succeeds and no exceptions are thrown, and I want the container to move the offset and poll for the next records.
What I have observed is:
Scenario 1 - An exception is thrown and no new messages are published to Kafka. There is no retry to process the message and there seems to be no activity, even after the underlying issue is resolved (the issue I am referring to is downstream server unavailability). Is there a way to retry the processing n times and then give up? Note that this means retrying the processing / re-polling from the last committed offset; it is not about the Kafka instance being unavailable.
If I restart the service (EC2 instance), processing resumes from the offset of the last successful acknowledgment.
Scenario 2 - An exception occurs and then a subsequent message is pushed to Kafka. I see the new message get processed and the offset move, which means I lost the message that was never acknowledged. So the question is: given that I handle the acknowledgment, how do I control the consumer to read from the last committed offset rather than just the latest message, and process it? I assume there is an internal poll that does not take into account (or does not know about) the last message not being acknowledged. I don't think there are multiple threads reading from Kafka, and I don't know how the @Input and @StreamListener annotations are controlled; I assume the thread count is controlled by the consumer.concurrency property, which defaults to 1.
So I have done research and found a lot of links but unfortunately none of them answers my specific questions.
I looked at (https://github.com/spring-cloud/spring-cloud-stream/issues/575)
which has a comment from Marius (https://stackoverflow.com/users/809122/marius-bogoevici):
Do note that Kafka does not provide individual message acking, which
means that acknowledgment translates into updating the latest consumed
offset to the offset of the acked message (per topic/partition). That
means that if you're acking messages from the same topic partition out
of order, a message can 'ack' all the messages before it.
I am not sure whether the ordering issue applies here, since there is only one thread.
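To make that concrete, here is a small Kotlin sketch (an illustration only; the topic name, partition, and offset are made up, not taken from the post) of what committing an offset means for a plain Kafka consumer:

import org.apache.kafka.clients.consumer.Consumer
import org.apache.kafka.clients.consumer.OffsetAndMetadata
import org.apache.kafka.common.TopicPartition

fun commitUpTo(consumer: Consumer<*, *>, nextOffset: Long) {
    // Kafka tracks a single committed offset per (group, topic, partition).
    // Committing `nextOffset` marks every record before it as consumed,
    // whether or not each of those records was individually processed.
    consumer.commitSync(
        mapOf(TopicPartition("platform-events", 0) to OffsetAndMetadata(nextOffset))
    )
}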
Apologies for the long post, but I wanted to provide enough information. The main thing is that I am trying to avoid losing messages when consuming from Kafka, and I want to see whether spring-cloud-stream-binder-kafka can do the job or whether I have to look at alternatives.
Update 6th July 2018
I saw this post https://github.com/spring-projects/spring-kafka/issues/431
Is this a better approach to my problem? I can try the latest version of spring-kafka.
@KafkaListener(id = "qux", topics = "annotated4", containerFactory = "kafkaManualAckListenerContainerFactory",
        containerGroup = "quxGroup")
public void listen4(@Payload String foo, Acknowledgment ack, Consumer<?, ?> consumer) {
Will this help in controlling the offset so that it is set to the last successfully processed record? How can I do that from the listen method - consumer.seekToEnd()? And then how would the listen method be reset to get that record?
Does putting the Consumer in the signature provide support for getting a handle to the consumer, or do I need to do anything more?
Should I use Acknowledgment or consumer.commitSync()?
What is the significance of containerFactory? Do I have to define it as a bean?
Do I need @EnableKafka and @Configuration for the above approach to work, bearing in mind the application is a Spring Boot application?
By adding the Consumer to the listen method, do I avoid having to implement the ConsumerAware interface?
Last but not least, is it possible to provide an example of the above approach, if it is feasible?
Update 12 July 2018
Thanks Gary (https://stackoverflow.com/users/1240763/gary-russell) for providing the tip of using maxAttempts. I have used that approach, and I am able to achieve exactly-once delivery and preserve the order of the messages.
My updated cloud-config:
spring:
  autoconfigure:
    exclude: org.springframework.cloud.netflix.metrics.servo.ServoMetricsAutoConfiguration
  kafka:
    consumer:
      enable-auto-commit: false
  cloud:
    stream:
      kafka:
        binder:
          brokers: "${BROKER_HOST:xyz-aws.local:9092}"
          headers:
            - X-B3-TraceId
            - X-B3-SpanId
            - X-B3-Sampled
            - X-B3-ParentSpanId
            - X-Span-Name
            - X-Process-Id
          zkNodes: "${ZOOKEEPER_HOST:120.211.316.261:2181,120.211.317.252:2181}"
        bindings:
          feed_platform_events_input:
            consumer:
              autoCommitOffset: false
      binders:
        xyzkafka:
          type: kafka
      bindings:
        feed_platform_events_input:
          binder: xyzkafka
          destination: platform-events
          group: br-platform-events
          consumer:
            maxAttempts: 2147483647
            backOffInitialInterval: 1000
            backOffMaxInterval: 300000
            backOffMultiplier: 2.0
The EventConsumer remains the same as my initial implementation, except for rethrowing the error so that the container knows the processing has failed. If you just catch it, there is no way the container knows the message processing failed. By calling acknowledgement.acknowledge you are only controlling the offset commit; in order for a retry to happen you must throw the exception. Don't forget to set the Kafka client auto-commit property and the Spring (container level) autoCommitOffset property to false. That's it.
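For illustration, here is a compact sketch of that rethrow pattern (written in Kotlin rather than the original Java, reusing the poster's FeedSink, EventNotification, and CapPointService types as assumptions; it is a fragment of the EventConsumer above, not a complete class):

@StreamListener(FeedSink.FEED_PLATFORM_EVENTS_INPUT)
fun message(@Payload eventNotification: EventNotification,
            @Header(KafkaHeaders.ACKNOWLEDGMENT) acknowledgment: Acknowledgment) {
    val sysMessage = processPayload(eventNotification)
    val response = service.sendPayload(sysMessage)
    if (response.hasFault()) {
        // If SOAP faults should also be retried, turn them into an exception
        // instead of just logging them and acknowledging anyway.
        throw IllegalStateException("SOAP fault: " + response.getFaultDetails().getCode())
    }
    // Commit the offset only after successful processing. Any exception thrown
    // above propagates to the binder, which then applies the maxAttempts /
    // backOff retry settings instead of silently moving past the record.
    acknowledgment.acknowledge()
}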
As explained by Marius, Kafka only maintains an offset in the log. If you process the next message and update the offset, the failed message is lost.
You can send the failed message to a dead-letter topic (set enableDlq to true).
Recent versions of Spring Kafka (2.1.x) have special error handlers: ContainerStoppingErrorHandler, which stops the container when an exception occurs, and SeekToCurrentErrorHandler, which will cause the failed message to be redelivered.
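For completeness, a minimal sketch (in Kotlin, assuming spring-kafka 2.2+ where the container factory exposes setErrorHandler) of wiring SeekToCurrentErrorHandler into the kafkaManualAckListenerContainerFactory referenced by the @KafkaListener snippet above:

import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.kafka.annotation.EnableKafka
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory
import org.springframework.kafka.core.ConsumerFactory
import org.springframework.kafka.listener.SeekToCurrentErrorHandler

@Configuration
@EnableKafka
class KafkaRetryConfig {

    @Bean
    fun kafkaManualAckListenerContainerFactory(
        consumerFactory: ConsumerFactory<String, String>
    ): ConcurrentKafkaListenerContainerFactory<String, String> {
        val factory = ConcurrentKafkaListenerContainerFactory<String, String>()
        factory.setConsumerFactory(consumerFactory)
        // On a listener exception the failed record is re-sought, so the next
        // poll redelivers it instead of skipping past it.
        factory.setErrorHandler(SeekToCurrentErrorHandler())
        // With manual acknowledgments you would also set the container ack mode
        // to MANUAL; where that setting lives differs between spring-kafka versions.
        return factory
    }
}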

Event listener not triggered because of registration issue

I have a problem trying to register my own event/listener with the event dispatcher. What I'm registering through the services of my bundle (@MyBundle/Resources/services.yml) is only loaded during the rendering process, so it's not available when dispatching in the controller.
webservice.listener.data_connect:
    class: Trav\CoreBundle\EventListener\WebService\WebServiceListener
    arguments:
        mailer: '@doctrine.orm.entity_manager'
    tags:
        - { name: kernel.event_listener, event: trav.webservice.error_connection, method: onDataConnectEvent, class: Trav\CoreBundle\EventListener\WebService\WebServiceListener }
But when doing this in the DefaultController:
$this->event_dispatcher = $this->container->get("event_dispatcher");
$this->event_dispatcher->dispatch("travelyo.webservice.listener.data_connect", new DataConnectEvent(array()));
It's not working. While debugging, I can see that inside the dispatch method it cannot find the listener I want to attach.
When I use the event kernel.request instead of trav.webservice.error_connection it works (the listener is called, as seen in the debug bar), but the event I get in WebServiceListener::onDataConnectEvent is a GetResponseEvent and not a DataConnectEvent.
Does someone have any idea what's wrong here?
I was inspired by this: http://iamproblematic.com/leveraging-the-symfony2-event-dispatcher/, which seems to work in exactly the same way.
The event you dispatch needs to match the event you are listening for. The example code is sending a travelyo.webservice.listener.data_connect event and the listener is configured to receive the trav.webservice.error_connection event, which means that this listener will not receive the event.

SDL Tridion 2009: Creating components through TOM API (via Interop) fails

I am facing a problem while creating components through the TOM API using .NET/COM Interop.
Actual Issue:
I have 550 components to create through a custom page. I am able to create between 400 and 470 components, but after that it fails and throws an error message saying:
Error: Thread was being aborted.
Any idea or suggestion why it is failing? Or is there any restriction in Tridion 2009?
UPDATE 1:
As per @user978511's request, below is the error from the Application event log:
Event code: 3001
Event message: The request has been aborted.
...
...
Process information:
Process ID: 1016
Process name: w3wp.exe
Account name: NT AUTHORITY\NETWORK SERVICE
Exception information:
Exception type: HttpException
Exception message: Request timed out.
...
...
...
UPDATE 2:
@Chris: This is my common function, which is called in a loop, passing a list of params. Here I am using the Interop DLLs.
public static bool CreateFareComponent(.... list of params ...)
{
    TDSE mTDSE = null;
    Folder mFolder = null;
    Component mComponent = null;
    bool flag = false;
    try
    {
        mTDSE = TDSEInitialize();
        mComponent = (Component)mTDSE.GetNewObject(ItemType.ItemTypeComponent, folderID, null);
        mComponent.Schema = (Schema)mTDSE.GetObject(constants.SCHEMA_ID, EnumOpenMode.OpenModeView, null, XMLReadFilter.XMLReadAll);
        mComponent.Title = compTitle;
        ...
        ...
        ...
        ...
        mComponent.Save(true);
        flag = true;
    }
    catch (Exception ex)
    {
        CustomLogger.Error(String.Format("Logged User: {0} \r\n Error: {1}", GetRemoteUser(), ex.Message));
    }
    return flag;
}
Thanks in advance.
Sounds like a timeout, most likely in IIS which is hosting your custom page.
Are you creating them all in one synchronous request? Because that is indeed likely to time out.
You could instead create them in batches - or make sure your operations are done asynchronously and then poll the status regularly.
The easiest would just be to only create say 10 Components in one request, wait for it to finish, and then create another 10 (perhaps with a nice progress bar? :))
How do you call the TDSE object? I would like to mention the "Marshal.ReleaseComObject" procedure here: not releasing COM objects can lead to enormous memory leaks.
Here is code for creating a component:
private Component NewComponent(string componentName, string publicationID, string parentID, string schemaID)
{
    Publication publication = (Publication)mTdse.GetObject(publicationID, EnumOpenMode.OpenModeView, null, XMLReadFilter.XMLReadContext);
    Folder folder = (Folder)mTdse.GetObject(parentID, EnumOpenMode.OpenModeView, null, XMLReadFilter.XMLReadContext);
    Schema schema = (Schema)mTdse.GetObject(schemaID, EnumOpenMode.OpenModeView, publicationID, XMLReadFilter.XMLReadContext);
    Component component = (Component)mTdse.GetNewObject(ItemType.ItemTypeComponent, folder, publication);
    component.Title = componentName;
    component.Schema = schema;
    return component;
}
After that, please do not forget to release mTdse (in my case, a previously created TDSE object). Disposing of Component objects after you have finished working with them can also be useful.
For large Tridion batch operations I always use a Console Application and run it directly on the server.
Use Console.WriteLine to write to the output window and Console.ReadLine as the last line of code in the app (so the window stays open). I also use Log4Net as the logger.
This is by far the best approach if you have access to a remote session on the server - or can ask an admin to run it for you and give you access to the log folder via a network share.
As per @Chris's suggestion, and as an immediate fix, I have changed my web.config execution timeout to 8000 seconds.
<httpRuntime executionTimeout="8000"/>
With this change, the custom page is able to handle the load for now.
If there are any better suggestions, please post them.
