How does one express an asynchronous interaction in UML?

I'm new to sequence diagrams, and I am struggling to find out how to mark up asynchronous flows.
Here is a synchronous HTTP flow:
@startuml
"Purchase" -> "Auth" : Check user
"Auth" --> "Purchase" : 2s
"Purchase" -> "Payment" : Deduct balance
"Payment" --> "Purchase" : 2s
"Purchase" -> "Provision" : Supply
@enduml
Let's pretend the last "Provision" step is asynchronous. How would one best express that? Just the responses?
@startuml
"Purchase" -> "Auth" : Check user
"Auth" --> "Purchase" : 2s sync
"Purchase" -> "Payment" : Deduct balance
"Payment" --> "Purchase" : 2s sync
"Purchase" -> "Provision" : Supply
"Provision" --> "Purchase" : 0s async
@enduml
https://plantuml.com/demo-javascript-asynchronous is a tool to preview the diagram.

Well, UML knows asynchronous messages. Just use those.
In PlantUML the notation does not always conform to the UML specification. However, there are two different arrowheads; I would interpret ->> as an asynchronous message.
Asynchronous messages cannot have a reply message. Of course, the recipient of such a message could later answer with another asynchronous message. However, this is only possible when the first message explicitly contained a reference to the sender. For synchronous messages this is implied.
Please note that a sequence diagram usually does not define all possible sequences. The response timing might be outside of the system. Nevertheless, you can show a sequence where it takes 2 s, 60 s, or even never responds. These are just three different sequences.
By the way, the official notation for the reply message label is assignment-target = message-name : return-value. If it is clear from the context, you can omit the label. It is not meant as a place to make assertions about the duration.

Disclaimer: I am the author of ZenUML, so my answer may be biased. Trying not to be, though.
An asynchronous Message (messageSort equals asynchCall or asynchSignal) has an open arrowhead.
A synchronous Message (messageSort equals synchCall) has a filled arrowhead.
from UML 2.5.1 (formal/17-12-05), section 17.4.4.1
PlantUML does not seem to have a specific syntax for sync vs. async messages. To be aligned with the spec, you can use different arrows: -> for sync messages and ->> for async messages.
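Applied to the diagram from the question, that convention would look roughly like this (a sketch; the reply labels just name the result, per the note above that they are not meant to carry durations):
@startuml
"Purchase" -> "Auth" : Check user
"Auth" --> "Purchase" : result
"Purchase" -> "Payment" : Deduct balance
"Payment" --> "Purchase" : result
"Purchase" ->> "Provision" : Supply
@enduml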
ZenUML uses a dot (e.g. Purchase->Provision.Supply) for sync messages and a colon (e.g. Purchase->Provision: Supply) for async messages. ZenUML also enables the execution bar for sync messages automatically.
This is how it looks in ZenUML using the return keyword:
Or using an assignment for reply messages:
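A rough text sketch of those two variants (syntax written from memory and purely illustrative; check the ZenUML docs for the exact form):
// sync call, reply expressed with the return keyword
Purchase->Auth.CheckUser() {
  return ok
}
// sync call, reply expressed as an assignment
balance = Payment.DeductBalance()
// async message (colon notation), no reply
Purchase->Provision: Supply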

Related

Streaming multiple events of different types using Axon

I am working on building streaming APIs for client/server communication using Axon and ServerSentEvents, and I am not sure if it is possible to stream and identify multiple different events using the Axon query update emitter and subscription query.
I am using Axon's QueryUpdateEmitter.emit to emit the events from a projection based on different events. The emitter emits in the projection, whereas the subscription query takes place in the REST API that is supposed to stream the server-sent events to the client.
For example,
I want to emit 3 different events for a use case which creates, updates and deletes an entity.
I am wondering if we can emit different types of data from different events but still combine them in one stream, i.e. send the actual object upon entity create and update in the emitter; but since I don't have any entity/data to emit in case of delete, I am thinking whether to send a simple message for delete instead?
I also want a way to specify the type of event while emitting, so that when the ServerSentEvent is built from the subscription query, I can specify the type/action (for example, differentiate between a create or update event) along with the data.
The main idea is to emit different events and add them to one stream as part of one subscription query, even though the events may not carry exactly the same data (create and update vs. delete), and to be able to accurately identify each event and mark it with the appropriate event type in the stream of ServerSentEvents.
Any ideas on how I can achieve this?
Here's how I am emitting an event upon creation using QueryUpdateEmitter:
@EventHandler
public void on(LibraryCreatedEvent event, @Timestamp Instant timestamp) {
    final LibrarySummaryEntity librarySummary = mapper.createdEventToLibrarySummaryEntity(event, timestamp);
    repository.save(librarySummary);
    log.debug("On {}: Saved the first summary of the library named {}", event.getClass().getSimpleName(), event.getName());
    queryUpdateEmitter.emit(
        AllLibrarySummariesQuery.class,
        query -> true,
        librarySummary
    );
    log.debug("emitted library summary: {}", librarySummary.getId());
}
Since I need to distinguish between create and update, I tried using GenericSubscriptionQueryUpdateMessage.asUpdateMessage upon the update event and added some metadata along with it, but I am not sure if that is the right direction, as I do not know how to retrieve that information during the subscription query.
Map<String, String> map = new HashMap<>();
map.put("Book Updated", event.getLibraryId());
queryUpdateEmitter.emit(
    AllLibrarySummariesQuery.class,
    query -> true,
    GenericSubscriptionQueryUpdateMessage.asUpdateMessage(librarySummary).withMetaData(map)
);
Here's how I am creating subscription query:
SubscriptionQueryResult<List<LibrarySummaryEntity>, LibrarySummaryEntity> result = queryGateway.subscriptionQuery(
    new AllLibrarySummariesQuery(),
    ResponseTypes.multipleInstancesOf(LibrarySummaryEntity.class),
    ResponseTypes.instanceOf(LibrarySummaryEntity.class)
);
And the part where I am building server sent event:
(.event is where I want to specify the type of event - create/update/delete and send the applicable data accordingly)
Flux<ServerSentEvent<LibrarySummaryResponseDto>> sseStream = result.initialResult()
    .flatMapMany(Flux::fromIterable)
    .map(value -> mapper.libraryEntityToResponseDto(value))
    .concatWith((streamingTimeout == -1)
        ? result.updates().map(value -> mapper.libraryEntityToResponseDto(value))
        : result.updates().take(Duration.ofMinutes(streamingTimeout)).map(value -> mapper.libraryEntityToResponseDto(value)))
    .log()
    .map(created -> ServerSentEvent.<LibrarySummaryResponseDto>builder()
        .id(created.getId())
        .event("library creation")
        .data(created)
        .build())
    .doOnComplete(() -> log.info("streaming completed"))
    .doFinally(signal -> result.close());
As long as the object you return matches the expected type when making the subscription query, you should be good!
Note that this means you will have to make a response object that can fit all your scenarios. Whether that response is something you emit as the update (through the QueryUpdateEmitter) or something you produce in a map operation where you consume the subscription query is a different question, though.
Ideally, you'd decouple your internal messages from what you send outward, like with SSE. To move to a more specific solution, you could benefit from having a Flux response type. You can simply attach any mapping operations to adjust the responses emitted by the QueryUpdateEmitter to your desired SSE format.
Concluding, the short answer is "yes you can," as long as the emitted response object matches the expected update type when dispatching the subscription query on the QueryGateway.
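To make that concrete, here is one possible sketch of such a response object: a small wrapper carrying the event kind, emitted as the update and mapped to an SSE event name. The names LibraryUpdate and UpdateKind are made up for this example and are not part of Axon's API; the subscription query's update response type would then be ResponseTypes.instanceOf(LibraryUpdate.class) instead of LibrarySummaryEntity.
// Hypothetical wrapper emitted as the subscription-query update type.
public record LibraryUpdate(UpdateKind kind, String libraryId, LibrarySummaryEntity summary) {
    public enum UpdateKind { CREATED, UPDATED, DELETED }
}

// Emitting, e.g. on delete, where there is no entity left to send:
queryUpdateEmitter.emit(
    AllLibrarySummariesQuery.class,
    query -> true,
    new LibraryUpdate(LibraryUpdate.UpdateKind.DELETED, event.getLibraryId(), null)
);

// Mapping each update to a ServerSentEvent whose event name reflects the kind:
result.updates()
    .map(update -> ServerSentEvent.<LibraryUpdate>builder()
        .id(update.libraryId())
        .event(update.kind().name().toLowerCase())
        .data(update)
        .build());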

Making blocking http call in akka stream processing

I am new to Akka and still trying to understand the different Akka and streaming concepts. For a new feature I need to add an HTTP call to an already existing stream which is working on an internal object. Something like this -
val step1Flow = Flow[SampleObject].filter(...--Filtering condition--...)
val step2Flow = Flow[SampleObject].map(obj => {
  ...
  -- Business logic to update values in the obj --
  ...
})
...
override val flowGraph: Flow[SampleObject, SampleObject, NotUsed] =
  bufferIn.via(Flow.fromGraph(GraphDSL.create() { implicit builder =>
    import GraphDSL.Implicits._
    ...
    val step1 = builder.add(step1Flow)
    val step2 = builder.add(step2Flow)
    val step3 = builder.add(step3Flow)
    ...
    source ~> step1 ~> step2 ~> step3 ~> merge
    ...
  }
I need to add the new HTTP request flow (let's call it newFlow) after step1. All these flows have SampleObject as both Inlet and Outlet. Now my understanding is that newFlow would need to be blocking, because its outlet needs to be SampleObject only. For that I have used Await on the HTTP call future. The code looks like this -
val responseFuture: Future[(Try[HttpResponse], SomeContext)] =
  Source
    .single(httpRequest -> context)
    .via(Retry(retrySettings).join(clientFlow))
    .runWith(Sink.head)
...
val (httpTry, passedAlongContext) = Await.result(responseFuture, 30.seconds)
-- logic to process response and return SampleObject --
Now this works fine, but I think there should be a better way to do this without using Await. I also think this would block the main thread until the request completes, which is going to affect the app's throughput.
Could you please advise whether the approach I used is correct or not? And how do I make use of some other thread pool to handle these blocking calls so my main thread pool is not affected?
This question seems very similar to mine, but I do not understand it completely - connect Akka HTTP to Akka stream. Also, I can't change step2 or the subsequent flows.
EDIT : Added some code details for the stream
I ended up using the approach mentioned in the question because I couldn't find anything better after looking around. Adding this step decreased the throughput of my application as expected, but there are approaches that can be used to increase it. Check these excellent blogs by Colin Breck -
https://blog.colinbreck.com/maximizing-throughput-for-akka-streams/
https://blog.colinbreck.com/partitioning-akka-streams-to-maximize-throughput/
To summarize -
Use Asynchronous Boundaries for flows which are blocking.
Use Futures if possible and add callbacks to the futures; there are several ways to do that (see the sketch after this list).
Use Buffers. There are several types of buffers available, choose what suits your needs.
Other than these, you can use inbuilt flows like -
Use "Broadcast" to broadcast your events to multiple consumers.
Use "Partition" to partition your stream into multiple streams based on some condition.
Use "Balance" to partition your stream when there is no logical way to partition your events or they all could have different work loads.
You could use any one or several of the options above.
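As an illustration of the Futures point above, the HTTP step can often be expressed with mapAsync instead of Await, which keeps the stage's outlet as SampleObject without blocking a thread. This is only a sketch: buildRequest and updateFromResponse are hypothetical helpers, and an implicit ActorSystem and ExecutionContext are assumed to be in scope.
import akka.NotUsed
import akka.http.scaladsl.Http
import akka.stream.scaladsl.Flow

// Sketch: non-blocking HTTP step. `buildRequest` and `updateFromResponse`
// are placeholders for request construction and response handling.
val newFlow: Flow[SampleObject, SampleObject, NotUsed] =
  Flow[SampleObject].mapAsync(parallelism = 4) { obj =>
    Http()
      .singleRequest(buildRequest(obj))
      .flatMap(response => updateFromResponse(obj, response)) // Future[SampleObject]
  }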

Drop / skip messages - C++ Actor Framework (CAF)

Using the C++ Actor Framework (CAF), I want to be able to skip / drop messages. E.g. incoming messages are being received at 100Hz. I only want the receiving actor to process messages at 1Hz (skipping 99 messages per second).
Does CAF provide any native functionality to do this?
Thanks.
There is support for both skipping and dropping. Skipping leaves messages in the mailbox, allowing actors to process them at some later point after changing their behavior. Dropping is generally viewed as an error (unexpected messages).
The mechanism to do this in CAF is via default handlers. CAF dispatches any message that was not processed by the current behavior to a "fallback", which then decides what to do with the unmatched input.
You can override this handler however you want, but CAF also offers standard implementations to choose from:
skip: leaves the message in the mailbox. This message gets automatically re-matched later.
drop: considered an error. Terminates the receiver with an unexpected_message error, also sending an error message to the sender.
print_and_drop: like drop, but also prints an error to stderr (this is the default).
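Installing one of these typically happens via set_default_handler inside the actor's behavior-defining function; a minimal sketch (the actor and message type are illustrative, and details may vary by CAF version):
caf::behavior throttled_actor(caf::event_based_actor* self) {
  // Unmatched messages stay in the mailbox instead of being treated as errors.
  self->set_default_handler(caf::skip);
  return {
    [](const std::string&) {
      // handle the messages the current behavior accepts
    },
  };
}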
CAF also comes with examples showcasing how to use these handlers, e.g., https://github.com/actor-framework/actor-framework/blob/master/examples/dynamic_behavior/skip_messages.cpp. If you are looking for a "silent" drop that discards the message without an error:
caf::skippable_result silent_drop(caf::scheduled_actor*, caf::message&) {
  return caf::message(); // "void" result
}
All that being said, if all you are looking for is simply checking that some amount of time has passed before processing the next message: why not just leave the message handler early?
caf::behavior my_actor(caf::stateful_actor<my_state>* self) {
  return {
    [self](const my_input& x) {
      if (!self->state.active())
        return;
      // ...
    },
  };
}
Here, the idea is that active returns true only if some amount of time has passed since last processing a message.

Is there an operator which is the opposite of mergeMap()?

We have a case where we only want one HTTP request to go out at a time, and we only want to allow a retry or another call once the pending call either succeeds or fails. We've been doing the classic "wrap it in a boolean", but we were wondering if there is an operator for this. My hunch is no.
I think you would need switchMap. It only subscribes to the latest value of the outer observable and maps it to an inner observable. With switchMap, it's possible to cancel previous HTTP requests on the fly.
For more info, take a look at the following link.
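For example (a sketch using RxJS 7 imports; the click trigger and URL are placeholders):
import { fromEvent, switchMap } from 'rxjs';
import { ajax } from 'rxjs/ajax';

// Each new click cancels any in-flight request and switches to a fresh one.
const clicks$ = fromEvent(document.querySelector('#retry')!, 'click');
const result$ = clicks$.pipe(
  switchMap(() => ajax.getJSON('/api/status'))
);
result$.subscribe(response => console.log(response));
Note that switchMap cancels the pending request when a new trigger arrives; if the intent is instead to ignore new triggers until the pending call completes, exhaustMap behaves that way.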

Any other examples of multi-state Agent programming in FSharp?

I'm investigating F# agents that have multiple states, i.e., using the "let rec/and" keyword combination (per Expert F# 3.0's "Message Processing and State Machines") to provide multiple async blocks. The only example I've been able to find so far is the "throttling agent" discussed here (also Fssnip.net). Are there any other resources for learning this pattern?
edit: My specific application is an agent that has two states:
| StartFeed rateMultiplier replychannel ->
    - replychannel out data values at a delay (provided with each value) multiplied by rateMultiplier
    - loop by using thisAgent.Post(StartFeed rateMultiplier replychannel)
| Pause ->
I would like to provide some way to pass in a feed rate multiplier value that increases/decreases the delay by the passed-in multiplier in the "feed" async state, without interrupting the feed of values. I guess the question boils down to "how do you keep an async state block actively looping while still being aware of new messages?" Almost like skipping the inbox.Receive asynchronous wait, unless a message actually comes in? Inbox.scan?
edit 2: Given the message queue aspect of MailboxProcessor, I can see that an external message (with a different rateMultiplier value) that is received by the agent and placed in the queue will successfully change the rate without interrupting the flow of data values out. Any advice on the "Pause" would be still be appreciated.
I have found Tomas Petricek's entry https://github.com/tpetricek/FSharp.AsyncExtensions/blob/master/src/Agents/BlockingQueueAgent.fs, which gives an agent, with the standard MailboxProcessor queue, a way to choose which async block it will employ to process the next incoming message (i.e., lets the agent 'change its state'):
inbox.Receive() is used for the 'standard state' - the agent's message 'inbox' queue is neither full nor empty (State #1)
inbox.Scan() is used for the 'edge' or limiting cases of empty (State #2) and full (State #3) message 'inbox' queue
the actions the agent (in whichever of the three states) can take in response to received messages are written as distinct async blocks, each given its own 'and' async block in the agent's 'let rec' loop. I had thought that 'let rec...and...' async blocks were restricted to having a message receipt function (.Receive, .Scan, etc), which is incorrect: they may be any async block that maintains the desired control flow, as seen in the next feature of the 'let rec...and...' agent body:
once the agent, in whichever of the three states, responds to a new message by routing to the appropriate action, the action itself finishes with a call to another 'and' async block of the agent body's 'let rec' loop, a 'chooseState()': an if/then block that determines which state will handle a new message and calls that 'and' async block from among the three available.
This example seems essential in demonstrating idiomatic use of the multi-state agent body construction, specifically how to combine the three functions of message receipt, response, and looping control as mutually recursive elements of a single 'let rec...and...and..." construction.
Of course other message-passing frameworks exist, but this is a general logic/routing design for a more complex agent, whatever the framework, so:
thanks, Tomas.
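For concreteness, a minimal two-state sketch along the lines of the StartFeed/Pause agent from the question (the push callback, value sequence, and timings are illustrative, not taken from the linked snippet):
type Msg =
    | StartFeed of rateMultiplier: float * push: (float -> unit)
    | Pause

let agent =
    MailboxProcessor<Msg>.Start(fun inbox ->
        // Feeding state: emit values, but poll the mailbox between emissions so a
        // new StartFeed (rate change) or Pause is noticed without stopping the feed.
        let rec feeding rate push value = async {
            let! msg = inbox.TryReceive(timeout = 0)
            match msg with
            | Some Pause -> return! paused ()
            | Some (StartFeed (newRate, newPush)) -> return! feeding newRate newPush value
            | None ->
                push value
                do! Async.Sleep(int (100.0 * rate)) // delay scaled by the multiplier
                return! feeding rate push (value + 1.0)
        }
        // Paused state: block until the next message arrives.
        and paused () = async {
            match! inbox.Receive() with
            | StartFeed (rate, push) -> return! feeding rate push 0.0
            | Pause -> return! paused ()
        }
        paused ())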
