How do we implement a schedulable state in Corda? In my case, I need to issue a monthly statement, so can SchedulableState be used for this?
There are a number of things you need to do.
Firstly, your state object needs to implement the SchedulableState interface. It adds an additional method:
interface SchedulableState : ContractState {
    /**
     * Indicate whether there is some activity to be performed at some future point in time with respect to this
     * [ContractState], what that activity is and at what point in time it should be initiated.
     * This can be used to implement deadlines for payment or processing of financial instruments according to a schedule.
     *
     * The state has no reference to its own StateRef, so supply that for use as input to any FlowLogic constructed.
     *
     * @return null if there is no activity to schedule.
     */
    fun nextScheduledActivity(thisStateRef: StateRef, flowLogicRefFactory: FlowLogicRefFactory): ScheduledActivity?
}
This interface requires you to implement a method named nextScheduledActivity, which returns an optional ScheduledActivity instance. The ScheduledActivity captures which FlowLogic instance each node will run to perform the activity, and a java.time.Instant describing when it should run. Once your state implements this interface and is committed to the vault, the vault will query it for its next scheduled activity. Example:
class ExampleState(val initiator: Party,
                   val requestTime: Instant,
                   val delay: Long) : SchedulableState {

    override val contract: Contract get() = DUMMY_PROGRAM_ID
    override val participants: List<AbstractParty> get() = listOf(initiator)

    override fun nextScheduledActivity(thisStateRef: StateRef, flowLogicRefFactory: FlowLogicRefFactory): ScheduledActivity? {
        val responseTime = requestTime.plusSeconds(delay)
        val flowRef = flowLogicRefFactory.create(FlowToStart::class.java)
        return ScheduledActivity(flowRef, responseTime)
    }
}
Secondly, the FlowLogic class which is scheduled to start (in this case FlowToStart) must also be annotated with @SchedulableFlow. E.g.
@InitiatingFlow
@SchedulableFlow
class FlowToStart : FlowLogic<Unit>() {

    @Suspendable
    override fun call() {
        // Do stuff.
    }
}
Now, when ExampleState is stored in the vault, FlowToStart will be scheduled to start at the offset time specified in the ExampleState.
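To tie this back to the monthly-statement use case in the question: a minimal sketch along these lines could compute the next issuance time one month out each time a new state is committed. StatementState and IssueStatementFlow are hypothetical names, not part of the Corda API; the sketch assumes IssueStatementFlow takes the StateRef as a constructor argument.

// Sketch only: StatementState and IssueStatementFlow are illustrative names.
class StatementState(val accountHolder: Party,
                     val lastIssued: Instant) : SchedulableState {

    override val contract: Contract get() = DUMMY_PROGRAM_ID // as in the example above
    override val participants: List<AbstractParty> get() = listOf(accountHolder)

    override fun nextScheduledActivity(thisStateRef: StateRef, flowLogicRefFactory: FlowLogicRefFactory): ScheduledActivity? {
        // Instant has no calendar-aware month arithmetic, so go via a zoned date-time.
        val nextIssuance = lastIssued.atZone(ZoneOffset.UTC).plusMonths(1).toInstant()
        val flowRef = flowLogicRefFactory.create(IssueStatementFlow::class.java, thisStateRef)
        return ScheduledActivity(flowRef, nextIssuance)
    }
}

Note that the scheduled flow would itself need to issue a fresh StatementState so the following month's activity gets scheduled in turn.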
That's it!
One of our dev teams is doing something I've never seen before.
First they're defining an abstract class for their consumers.
public abstract class KafkaConsumerListener {

    protected void processMessage(String xmlString) {
    }
}
Then they use 10 classes like the one below to create 10 individual consumers.
@Component
public class <YouNameIt>Consumer extends KafkaConsumerListener {

    private static final String <YouNameIt> = "<YouNameIt>";

    @KafkaListener(topics = "${my-configuration.topicname}",
                   groupId = "${my-configuration.topicname.group-id}",
                   containerFactory = <YouNameIt>)
    public void listenToStuff(@Payload String message) {
        processMessage(message);
    }
}
So with this they're trying to start 10 Kafka listeners (one class/object per listener). Each listener should have its own consumer group (with its own name) and consume from one (but different) topic.
They seem to use different ConcurrentKafkaListenerContainerFactories, each with a @Bean annotation, so they can assign a different groupId to each container factory.
Is something like that supported by Spring Kafka?
It seemed to work until a few days ago; now one consumer group gets stuck all the time. It starts, reads a few records, and then hangs, and the consumer lag keeps growing.
Any ideas?
Yes, it is supported, but it's not necessary to create multiple factories just to change the group id - the groupId property on the annotation overrides the factory property.
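For example (a sketch in Kotlin; the group name is illustrative), all ten listeners could share one factory and differ only in the annotation attributes:

// Sketch: a single shared container factory; the groupId on the annotation
// overrides whatever group id the factory was configured with.
@Component
class OrderConsumer : KafkaConsumerListener() {

    @KafkaListener(topics = ["\${my-configuration.topicname}"],
                   groupId = "order-consumer-group")
    fun listenToStuff(@Payload message: String) {
        processMessage(message)
    }
}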
Problems like the one you describe are most likely caused by the consumer thread being "stuck" in user code someplace; take a thread dump to see what the thread is doing.
So, I'm working on a PoC for a low-latency trading engine using Axon and the Spring Boot framework. Is it possible to achieve latency as low as 10 - 50ms for a single process flow? The process will include validations, orders, and risk management. I have done some initial tests on a simple app to update the order state and execute it, and I'm clocking 300ms+ latency, which got me curious as to how much I can optimize with Axon.
Edit:
The latency issue isn't related to Axon. Managed to get it down to ~5ms per process flow using an InMemoryEventStorageEngine and DisruptorCommandBus.
The flow of messages goes like this. NewOrderCommand(published from client) -> OrderCreated(published from aggregate) -> ExecuteOrder(published from saga) -> OrderExecutionRequested -> ConfirmOrderExecution(published from saga) -> OrderExecuted(published from aggregate)
Edit 2:
Finally switched over to Axon Server but as expected the average latency went up to ~150ms. Axon Server was installed using Docker. How do I optimize the application using AxonServer to achieve sub-millisecond latencies moving forward? Any pointers are appreciated.
Edit 3:
@Steven, based on your suggestions I have managed to bring the latency down to an average of 10ms, which is a good start! However, is it possible to bring it down even further? What I am testing now is just a small process out of a series of processes to be done, like validations, risk management and position tracking, before finally sending the order out. All of this should be done within 5ms or less; the worst case to tolerate is 10ms (these are the updated time budgets). Also, do note in the configs below that the new readings are based on an InMemorySagaStore backed by a WeakReferenceCache. Really appreciate the help!
OrderAggregate:
@Aggregate
internal class OrderAggregate {

    @AggregateIdentifier(routingKey = "orderId")
    private lateinit var clientOrderId: String
    private var orderId: String = UUID.randomUUID().toString()
    private lateinit var state: OrderState
    private lateinit var createdAtSource: LocalTime

    private val log by Logger()

    constructor() {}

    @CommandHandler
    constructor(command: NewOrderCommand) {
        log.info("received new order command")
        val (orderId, created) = command
        apply(
            OrderCreatedEvent(
                clientOrderId = orderId,
                created = created
            )
        )
    }

    @CommandHandler
    fun handle(command: ConfirmOrderExecutionCommand) {
        apply(OrderExecutedEvent(orderId = command.orderId, accountId = accountId))
    }

    @CommandHandler
    fun execute(command: ExecuteOrderCommand) {
        log.info("execute order event received")
        apply(
            OrderExecutionRequestedEvent(
                clientOrderId = clientOrderId
            )
        )
    }

    @EventSourcingHandler
    fun on(event: OrderCreatedEvent) {
        log.info("order created event received")
        clientOrderId = event.clientOrderId
        createdAtSource = event.created
        setState(Confirmed)
    }

    @EventSourcingHandler
    fun on(event: OrderExecutedEvent) {
        val now = LocalTime.now()
        log.info(
            "elapse to execute: ${
                createdAtSource.until(
                    now,
                    MILLIS
                )
            }ms. created at source: $createdAtSource, now: $now"
        )
        setState(Executed)
    }

    private fun setState(state: OrderState) {
        this.state = state
    }
}
OrderManagerSaga:
@Profile("rabbit-executor")
@Saga(sagaStore = "sagaStore")
class OrderManagerSaga {

    @Autowired
    private lateinit var commandGateway: CommandGateway
    @Autowired
    private lateinit var executor: RabbitMarketOrderExecutor

    private val log by Logger()

    @StartSaga
    @SagaEventHandler(associationProperty = "clientOrderId")
    fun on(event: OrderCreatedEvent) {
        log.info("saga received order created event")
        commandGateway.send<Any>(ExecuteOrderCommand(orderId = event.clientOrderId, accountId = event.accountId))
    }

    @SagaEventHandler(associationProperty = "clientOrderId")
    fun on(event: OrderExecutionRequestedEvent) {
        log.info("saga received order execution requested event")
        try {
            // execute order
            commandGateway.send<Any>(ConfirmOrderExecutionCommand(orderId = event.clientOrderId))
        } catch (e: Exception) {
            log.error("failed to send order: $e")
            commandGateway.send<Any>(
                RejectOrderCommand(
                    orderId = event.clientOrderId
                )
            )
        }
    }
}
Beans:
@Bean
fun eventSerializer(mapper: ObjectMapper): JacksonSerializer {
    return JacksonSerializer.Builder()
        .objectMapper(mapper)
        .build()
}

@Bean
fun commandBusCache(): Cache {
    return WeakReferenceCache()
}

@Bean
fun sagaCache(): Cache {
    return WeakReferenceCache()
}

@Bean
fun associationsCache(): Cache {
    return WeakReferenceCache()
}

@Bean
fun sagaStore(sagaCache: Cache, associationsCache: Cache): CachingSagaStore<Any> {
    val sagaStore = InMemorySagaStore()
    return CachingSagaStore.Builder<Any>()
        .delegateSagaStore(sagaStore)
        .associationsCache(associationsCache)
        .sagaCache(sagaCache)
        .build()
}

@Bean
fun commandBus(
    commandBusCache: Cache,
    orderAggregateFactory: SpringPrototypeAggregateFactory<Order>,
    eventStore: EventStore,
    txManager: TransactionManager,
    axonConfiguration: AxonConfiguration,
    snapshotter: SpringAggregateSnapshotter
): DisruptorCommandBus {
    val commandBus = DisruptorCommandBus.builder()
        .waitStrategy(BusySpinWaitStrategy())
        .executor(Executors.newFixedThreadPool(8))
        .publisherThreadCount(1)
        .invokerThreadCount(1)
        .transactionManager(txManager)
        .cache(commandBusCache)
        .messageMonitor(axonConfiguration.messageMonitor(DisruptorCommandBus::class.java, "commandBus"))
        .build()
    commandBus.registerHandlerInterceptor(CorrelationDataInterceptor(axonConfiguration.correlationDataProviders()))
    return commandBus
}
Application.yml:
axon:
  server:
    enabled: true
  eventhandling:
    processors:
      name:
        mode: tracking
        source: eventBus
  serializer:
    general: jackson
    events: jackson
    messages: jackson
Original Response
Your description of the setup is thorough, but I think there are still some options I can recommend. These touch a bunch of locations within the Framework, so if anything's unclear about the suggestions, their position, or their goals within Axon, feel free to add a comment so that I can update my response.
Now, let's provide a list of the things I have in mind:
Set up snapshotting for aggregates if loading takes too long. Configurable with the AggregateLoadTimeSnapshotTriggerDefinition.
Introduce a cache for your aggregate. I'd start by trying out the WeakReferenceCache. If this doesn't suffice, it would be worth investigating the EhCache and JCache adapters. Or, construct your own. Here's the section on Aggregate caching, by the way.
Introduce a cache for your saga, along the same lines: start with the WeakReferenceCache, and look at the EhCache and JCache adapters if it doesn't suffice. Here's the section on Saga caching, by the way. (A configuration sketch for these first three points follows this list.)
Do you really need a Saga in this setup? The process seems simple enough that it could run within a regular Event Handling Component. If that's the case, not moving through the Saga flow will likely introduce a speed-up too.
Have you tried optimizing the DisruptorCommandBus? Try playing with the WaitStrategy, publisher thread count, invoker thread count and the Executor used.
Try out the PooledStreamingEventProcessor (PSEP, for short) instead of the TrackingEventProcessor (TEP, for short). The former provides more configuration options. The defaults already provide a higher throughput compared to the TEP, by the way. Increasing the "batch size" allows you to ingest bigger amounts of events in one go. You can also change the Executor the PSEP uses for Event retrieval work (done by the coordinator) and Event processing (the worker executor is in charge of this).
There are also some things you can configure on Axon Server that might increase throughput. Try out the event.events-per-segment-prefetch, the event.read-buffer-size or command-thread. There might be other options that work, so it might be worth checking out the entire list of options here.
Although it's hard to deduce whether this will generate an immediate benefit, you could give the Axon Server runnable more memory / CPU. At least 2Gb heap and 4 cores. Playing with these numbers might just help too.
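As promised above, here is a rough sketch of the snapshotting and caching points (a sketch, not a definitive setup; the bean names are arbitrary, and I'm assuming the @Aggregate annotation's snapshotTriggerDefinition and cache attributes to reference them):

// Sketch: load-time-based snapshotting plus a cache for the aggregate.
// Bean names are arbitrary and referenced from the annotation below.
@Bean
fun orderSnapshotTrigger(snapshotter: Snapshotter): SnapshotTriggerDefinition =
    // Snapshot any aggregate instance that takes longer than 500 ms to load.
    AggregateLoadTimeSnapshotTriggerDefinition(snapshotter, 500)

@Bean
fun orderCache(): Cache = WeakReferenceCache()

// On the aggregate, point at the beans by name:
// @Aggregate(snapshotTriggerDefinition = "orderSnapshotTrigger", cache = "orderCache")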
There's likely more to share, but these are the things I have top of mind. Hope this helps you out somewhat, David!
Second Response
To work out where we can gain more performance, it would be essential to know which parts of your application's process take the longest. That will allow us to determine what should be improved, if it can be improved.
Have you tried taking a thread dump to see which parts take up the most time? If you can share that as an update to your question, we can start thinking about the next steps.
My use case is to set the consumer group offset based on a timestamp.
For this I am using the seekToTimestamp method of ConsumerSeekCallback inside the onPartitionsAssigned() method of ConsumerSeekAware.
Now, when I start my application, it seeks to the timestamp I specified, but during rebalancing it seeks to that timestamp again.
I want this to happen only if the consumer group's offset is less than the offset at that particular timestamp; if it's greater, it should not seek.
Is there a way we can achieve this, or does Spring Kafka provide some listener for new consumer groups, so that when a new consumer group gets created it will seek based on the timestamp and otherwise use the existing offsets?
public class KafkaConsumer implements ConsumerSeekAware {

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
        long timestamp = 1623775969;
        callback.seekToTimestamp(new ArrayList<>(assignments.keySet()), timestamp);
    }
}
Just add a boolean field (boolean seeksDone;) to your implementation; set it to true after seeking and only seek if it is false.
You have to decide, though, what to do if you only get partitions 1 and 3 on the first rebalance and 1, 2, 3, 4 on the next.
Not an issue if you only have one application instance, of course. But, if you need to seek each partition when it is first assigned, you'll have to track the state for each partition.
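A sketch of that per-partition variant (in Kotlin; the class name is illustrative), where only partitions this consumer hasn't seen before get seeked:

// Sketch: remember which partitions were already seeked, so a rebalance
// that hands back a known partition does not seek it again.
class TimestampSeekingConsumer : ConsumerSeekAware {

    private val seeksDone: MutableSet<TopicPartition> = ConcurrentHashMap.newKeySet()

    override fun onPartitionsAssigned(assignments: Map<TopicPartition, Long>,
                                      callback: ConsumerSeekAware.ConsumerSeekCallback) {
        val timestamp = 1623775969000L // epoch millis, as seekToTimestamp expects
        val newPartitions = assignments.keys.filter { seeksDone.add(it) } // add() is false for known ones
        if (newPartitions.isNotEmpty()) {
            callback.seekToTimestamp(newPartitions, timestamp)
        }
    }
}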
I would like to consume messages from the beginning offset. For this, I have added a property "seekToBeginning"=true in the properties file. My class that has the @KafkaListener implements ConsumerSeekAware, and I have overridden the method onPartitionsAssigned() like below. I would like to know if I'm doing it the right way. This method gets called 3 times (there are 3 partitions). Also, my worry is that this method also gets called when there is a CommitFailedException. Please let me know if the below is correct, or whether I should filter by partition and how. Also, please let me know how to handle this in the case of a CommitFailedException.
@Override
public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
    if (seekToBeginning) {
        assignments.forEach(
            (topicPartition, offset) -> callback.seekToBeginning(topicPartition.topic(), topicPartition.partition()));
    }
}
If you have concurrency = 3 then, yes, it will be called 3 times, once per consumer.
Since 2.3.4, there is a more convenient method:
/**
 * Queue a seekToBeginning operation to the consumer for each
 * {@link TopicPartition}. The seek will occur after any pending offset commits.
 * The consumer must be currently assigned the specified partition(s).
 * @param partitions the {@link TopicPartition}s.
 * @since 2.3.4
 */
default void seekToBeginning(Collection<TopicPartition> partitions) {
You need a boolean field to only do the seeks on the initial assignment and not after a rebalance.
If you only have one consumer (concurrency = 1), it can be a simple boolean.
e.g. boolean initialSeeksDone.
With concurrency > 1, you need a ThreadLocal:
private final ThreadLocal<Boolean> initialSeeksDone = new ThreadLocal<>();
then
if (this.initialSeeksDone.get() == null) {
    // seek
    this.initialSeeksDone.set(true);
}
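Putting those pieces together, a minimal sketch of such a listener class might look like this (Kotlin; the class name is illustrative):

// Sketch: seek to the beginning only on each consumer thread's first
// assignment; later rebalances keep the committed offsets.
class SeekToBeginningListener : ConsumerSeekAware {

    private val initialSeeksDone = ThreadLocal<Boolean>()

    override fun onPartitionsAssigned(assignments: Map<TopicPartition, Long>,
                                      callback: ConsumerSeekAware.ConsumerSeekCallback) {
        if (initialSeeksDone.get() == null) {
            callback.seekToBeginning(assignments.keys)
            initialSeeksDone.set(true)
        }
    }
}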
I have an ObligationV1 contract and two states, ObligationStateV1 and ObligationStateV2.
How do I achieve "A state is upgraded while the contract stays the same.", where the state goes from V1 to V2 without changing the contract version? Based on the examples (exampleLink, docs), it seems the code will end up looking like the below, where you have a new ObligationContractV2. The example was trying to achieve:
"This CorDapp shows how to upgrade a state without upgrading the Contract." But I don't see how the implementation actually proves that the new state is still referring to the old contract.
open class ObligationContractV2 : UpgradedContractWithLegacyConstraint {
    override val legacyContract: ContractClassName = ObligationContractV1.id
    override val legacyContractConstraint: AttachmentConstraint = AlwaysAcceptAttachmentConstraint

    override fun upgrade(oldState: ObligationStateV1) = ObligationContractV2.ObligationStateV2(oldState.a, oldState.b, 0)

    data class ObligationStateV2(val a: AbstractParty, val b: AbstractParty, val value: Int) : ContractState {
        override val participants get() = listOf(a, b)
    }

    override fun verify(tx: LedgerTransaction) {}
}
The contract class must change whenever you upgrade a state, but the rules it imposes can remain the same.
You could achieve this by delegating the transaction checking to the old contract:
override fun verify(tx: LedgerTransaction) {
    ObligationContractV1().verify(tx)
}
You could also delegate the checking to the old contract while adding additional checks:
override fun verify(tx: LedgerTransaction) {
    ObligationContractV1().verify(tx)
    additionalChecks()
}
However, note that delegating verify in this way while upgrading states will only work if the original contract isn't hardcoded to verify the transaction in terms of the old state. You'd have to write the original contract in terms of some interface or abstract class implemented by both the old state class and the new state class, or in some other way write the old contract in an open-ended manner. If you didn’t write the old contract in this forward-thinking way initially, you'll have to re-write the verify method.
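For example (a sketch; the Obligation interface name and the rule shown are hypothetical), the V1 contract could have been written against an interface that both state versions implement:

// Sketch: both state versions implement this interface, so the V1
// contract's rules apply to old and new states alike.
interface Obligation : ContractState {
    val a: AbstractParty
    val b: AbstractParty
}

class ObligationContractV1 : Contract {
    override fun verify(tx: LedgerTransaction) {
        val output = tx.outputsOfType<Obligation>().single()
        requireThat {
            "The two parties must differ." using (output.a != output.b)
        }
    }
}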