Should logic go into trigger actions or in the entry? - workflow-foundation-4

I was wondering whether it is better to put long-running execution logic in triggers or in states.
My concern is that if I put complex, long-running logic within the triggers, then my state machine would spend too much time in a transitioning phase, and the information about the current state would no longer be meaningful.
Would the entry of each state be the correct place to put long-running logic?
Thanks

You don't put logic in a state directly but in a state entry or exit event. So no matter where you place it, it runs as part of a state transition. Now as to what is the correct place, that completely depends on the logic. Simply stated, put it where it is appropriate :-)
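For illustration, here is a minimal, generic state machine sketch in TypeScript (an analogue, not WF4 itself or its API) showing why the placement makes little difference: the entry action of the target state runs as part of the transition, and the new state only becomes "current" once that action has completed.

type StateName = "Idle" | "Processing";

interface StateDef {
  onEntry?: () => Promise<void>; // long-running work could live here
  onExit?: () => Promise<void>;  // ... or here
}

class SimpleStateMachine {
  current: StateName = "Idle";
  transitioning = false;

  constructor(private states: Record<StateName, StateDef>) {}

  async transitionTo(target: StateName): Promise<void> {
    this.transitioning = true;                  // the reported state is not meaningful yet
    await this.states[this.current].onExit?.(); // exit action of the old state
    await this.states[target].onEntry?.();      // entry action of the new state
    this.current = target;                      // only now is the new state "current"
    this.transitioning = false;
  }
}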

Related

Two conflicting long lived process managers

Let's assume we have two long-lived process managers. Both sagas operate over 10 million items, for example. The first saga adds something to each item, the second saga removes it from each item. Given that both process managers need a few minutes to complete their job, if I run them simultaneously I get into trouble.
Some of those items would hold the value while the rest would not. The result is actually close to random and depends on the order of the commands affecting each particular item. I wondered if redispatching the "Remove" command in case of failure would solve the problem. I mean, if you try to remove a non-existent value, you should wait for the first saga to add the value. But while the process managers are working, someone else may dispatch a "Remove" or "Add" command. In such a case my approach would fail.
How may I solve such a problem? :)
It seems that you would want the second saga to not run if the first saga is running (and presumably not run until whatever the first saga adds is actually there). So the apparent solution would be to have a component (could be a microservice, could also be a record in a strongly consistent datastore like zookeeper/etcd/consul) that gives permission for the sagas to start executing. An example protocol might look like the following (sketched in code after these steps):
Saga sends a message to the component identifying the saga and conveying the intention to start
Component validates that no sagas might be running which would prevent this saga from running
Component responds with permission to start running
Subsequent saga attempts result in rejection until the running saga tells the component it's OK to run the other saga
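A rough sketch of that coordinating component in TypeScript. All names (SagaCoordinator, requestStart, notifyFinished) are hypothetical, and in practice the runningSaga record would live in a durable, strongly consistent store rather than in memory.

class SagaCoordinator {
  private runningSaga: string | null = null;

  // Steps 1-3: a saga identifies itself and asks for permission to start.
  requestStart(sagaId: string): boolean {
    if (this.runningSaga !== null && this.runningSaga !== sagaId) {
      return false; // step 4: reject while another saga holds the permission
    }
    this.runningSaga = sagaId;
    return true;
  }

  // The running saga reports completion, releasing the permission.
  notifyFinished(sagaId: string): void {
    if (this.runningSaga === sagaId) {
      this.runningSaga = null;
    }
  }
}

The failure mode discussed next is exactly the case where notifyFinished is never received.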
Assuming that this component is reliably durable, the failure mode to worry about is that permission is granted but this component never processes the message that the saga finished (causes of this could include the completion message not getting delivered/processed or the saga crashing). No amount of acknowledgements or extra messages can solve this (it's basically the Two Generals' Problem).
A mitigation is to have this component (or something watching this component) alert if it seems that too much time has passed without saga completion. Whatever/whoever is responsible for ensuring liveness would then investigate to see if the saga is still running and if none is running, inform the component that it's OK to run the other saga. Note that this is not foolproof: it's quite possible for the decider in question to make what turns out to be the wrong decision.
I feel like I need more context. Whilst you don't say it explicitly, is the problem that the second saga tries to remove values that haven't been added by the first?
If that was true, a simple solution would be to just use a third state.
What I mean by that is to define and declare item state more explicitly. You currently seem to have two states, with value and without value, but nothing to indicate whether an item is ready to be processed by the second saga because the first saga has already done its work on the item in question.
So all that needs to happen is that the second saga keeps looking for items where:
(with_value == true & ready_for_saga2 == true)
Call it ready_for_saga2 or "saga 1 processing complete", whatever seems more appropriate in your context.
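A minimal sketch of that predicate, assuming a hypothetical Item shape with camelCase flags:

interface Item {
  id: string;
  withValue: boolean;
  readyForSaga2: boolean; // set by saga 1 when it has finished with this item
}

// Saga 2 only picks up items that saga 1 has explicitly marked as done.
function itemsForSaga2(items: Item[]): Item[] {
  return items.filter(item => item.withValue && item.readyForSaga2);
}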
I'd say that the solution would vary based on which actual problem we're trying to solve.
Say it's an inventory, where "add" means items added to the inventory and "remove" means items requested for delivery. Then the order of commands does not matter that much, because you could just process the request for delivery when new items are added to the inventory.
This would lead to an aggregate root with two collections: Items and PendingOrders.
One process manager adds new inventory to Items - if any orders are pending, it will complete these orders in the same transaction and remove both the item and the order from the collections.
If the other process manager adds an order (tries to remove an item), it will either do it right away, if there's any items left - or it will add the order to the pending orders to be processed when new items arrive (and maybe notify someone about the delay, while we're at it).
This way we end up with the same state regardless of the order of commands, but the actual real-world-problem has great influence on the model chosen.
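A sketch of that aggregate root, with hypothetical names; adding stock and requesting delivery converge to the same end state regardless of the order in which the commands arrive.

interface Order { orderId: string; sku: string; }

class InventoryAggregate {
  private items: string[] = [];        // available SKUs
  private pendingOrders: Order[] = []; // deliveries waiting for stock

  addItem(sku: string): void {
    const pending = this.pendingOrders.findIndex(o => o.sku === sku);
    if (pending >= 0) {
      this.pendingOrders.splice(pending, 1); // complete the waiting order in the same transaction
    } else {
      this.items.push(sku);
    }
  }

  requestDelivery(order: Order): void {
    const idx = this.items.indexOf(order.sku);
    if (idx >= 0) {
      this.items.splice(idx, 1);        // fulfil immediately
    } else {
      this.pendingOrders.push(order);   // park it until new stock arrives
    }
  }
}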
If we have other real-world problems, we can make a model for those too.
Let's say you have two users that each starts a process that bulk updates titles on inventory items. In this case you - and the users - have to decide how best to resolve this conflict - what will lead to the best real world outcome.
If you want consistency across all the items - all or no items should be updated by a single bulk update - I would embed this knowledge in a new model. Let's call it UpdateTitlesProcesses. We have only one instance of this model in the system. The state is shared between processes. This model is effectively a command queue, and when a user initiates the bulk operation, it adds all the commands to the queue and starts processing each item one at a time.
When the second user initiates another title update, the business logic in our models will reject this, as there's already another update started. Or if the experts say that the last write should win, then we ditch the remaining commands from the first process and add the new ones (and similarly we should decide what should happen if a user issues a single title update, not bulk - should it be rejected, prioritized or put on hold?).
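A sketch of that single-instance UpdateTitlesProcesses model, assuming the reject-while-busy policy; a last-write-wins policy would replace the queue instead of rejecting. All names are hypothetical.

interface UpdateTitleCommand { itemId: string; newTitle: string; }

class UpdateTitlesProcesses {
  private queue: UpdateTitleCommand[] = [];
  private activeUserId: string | null = null;

  startBulkUpdate(userId: string, commands: UpdateTitleCommand[]): boolean {
    if (this.activeUserId !== null) {
      return false; // another bulk update is already running: reject
    }
    this.activeUserId = userId;
    this.queue = [...commands];
    return true;
  }

  processNext(): UpdateTitleCommand | undefined {
    const command = this.queue.shift(); // handle one item at a time
    if (this.queue.length === 0) {
      this.activeUserId = null;         // done, release the process
    }
    return command;
  }
}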
So in short I'd say:
Make it clear which real world problem we are solving - and thus which conflict resolution outcome is best (probably a trade off, often also something that requires user interaction or notification).
Model this explicitly (where processes, actions and conflict handling are also part of the model).

Ngrx complex state reducer

I struggle to find the right way to mutate my state in an ngrx application, as the state is rather complex and depends on many factors. This question is not about getting one piece of code correct but more about how to design such software in general, and what the dos and don'ts are when you find yourself reaching for hacky solutions and workarounds.
The app 'evolved' over time and I want to share this process in an abstracted way to make my point clear:
Stage 1
State contains Entities. Those represent nodes in a tree and are linked by ids. Modifying or adding an entity requires a check on the types of nodes the new/modified ones should be connected with. Also, it might be that upon modifying a node, other nodes have to be updated.
The solution was: create functions that do the job and call them right in the reducer, so everything is always up to date and synchronous when used (there are services that might modify state). Roughly as in the sketch below.
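A rough sketch of that stage 1 setup (node shape, action type and connection rule are all hypothetical): the consistency rules live in plain functions that the reducer calls synchronously.

interface TreeNode { id: string; type: string; parentId: string | null; }
interface NodesState { entities: { [id: string]: TreeNode }; }

function canConnect(parent: TreeNode | undefined, child: TreeNode): boolean {
  // hypothetical rule: root nodes are always fine, otherwise the types must differ
  if (child.parentId === null) return true;
  return parent !== undefined && parent.type !== child.type;
}

function nodesReducer(state: NodesState, action: { type: string; payload?: TreeNode }): NodesState {
  switch (action.type) {
    case "ADD_NODE": {
      const node = action.payload!;
      const parent = node.parentId !== null ? state.entities[node.parentId] : undefined;
      if (!canConnect(parent, node)) {
        return state; // reject invalid connections so the tree stays consistent
      }
      return { ...state, entities: { ...state.entities, [node.id]: node } };
    }
    default:
      return state;
  }
}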
Stage 2
A configuration is added to the state, having an impact on the way the automatically modified nodes are modified/created. This configuration is saved in its own state slice right under the root state.
The solution:
1) Modify the actions to also take the required data from the configuration.
2) Modify the places where the actions are created/dispatched (add an ugly
this.state.select(fromRoot.getX)
  .first()
  .subscribe(element => this.state.dispatch(new Action({ ...old_payload, newPayload: element })));
wrapper around the dispatch calls)
3) Modify the functions doing the node modification, and
4) add the argument passing to the function calls inside the reducer.
Stage 3
Now I'm asked to add yet another configuration to the process, also retrieved from the backend and also saved in another state slice right under the root state.
State now looks like:
root
|__nodes
|__config_1
|__config_2
I was just about to repeat the steps from stage 2, but the actions get really big with all the data passed in, and the functions have to carry around a lot of data. This seems wrong, since I'm dispatching the action against the store that already contains all the needed info.
How can I handle this correctly?
Some ideas I already had:
use Effects: they are able to get everything they need from state and can create everything - so I only need to dispatch an action with just its own payload, and the effect can then grab everything else it needs from the state (see the sketch after this list). I don't like this idea because it turns state changes into asynchronous tasks and adds actions that don't change state themselves.
use a service: with a service holding state it would be much like with effects, but without using actions just to make asynchronous calls which then dispatch the actions that really change state.
do all the stuff in the component: at the moment the components are kept pretty simple when it comes to changing state, as I prefer the idea that actions carry as little data as possible, since reducers can access the state to get their data - but this is where the problem occurs: this time I can't get my hands on the data I need.
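For the Effects idea, a sketch in the classic pre-pipeable @ngrx style used by the snippets above; the action classes, constants and selectors are placeholders, not your real ones. The component keeps dispatching a slim action, and the effect enriches it with both config slices before the reducer sees it.

import { Injectable } from '@angular/core';
import { Actions, Effect } from '@ngrx/effects';
import { Action, Store } from '@ngrx/store';
import 'rxjs/add/operator/withLatestFrom';
import 'rxjs/add/operator/map';

// Hypothetical action types/classes standing in for the real ones.
const MODIFY_NODE = '[Nodes] Modify';
class ModifyNode implements Action {
  readonly type = MODIFY_NODE;
  constructor(public payload: { nodeId: string }) {}
}
class ModifyNodeWithConfig implements Action {
  readonly type = '[Nodes] Modify With Config';
  constructor(public payload: { nodeId: string; config1: any; config2: any }) {}
}

@Injectable()
export class NodeEffects {
  // Enrich the slim action with both config slices, then re-dispatch a fully
  // populated action, so the reducer stays a pure function of the action alone.
  @Effect()
  modifyNode$ = this.actions$
    .ofType(MODIFY_NODE)
    .withLatestFrom(
      this.store.select((s: any) => s.config_1),
      this.store.select((s: any) => s.config_2)
    )
    .map(([action, config1, config2]) =>
      new ModifyNodeWithConfig({ ...(action as ModifyNode).payload, config1, config2 })
    );

  constructor(private actions$: Actions, private store: Store<any>) {}
}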

State of the World HazelCast Map with Listener

With Hazelcast, I imagine a common scenario is that consumers want to know about the current state of the world (whatever is currently in the Map) and then receive updates, with nothing lost in between.
Imagine a scenario where a Hazelcast map holds some variety of data that a consumer wants streamed to it (maybe via Rx): first the current state and then any updates from the listener.
The API suggests that we add a Listener for updates and treat the Map as a normal ConcurrentMap. However, while I'm enumerating the Map, updates may come in via the listener, so ensuring the correct order of items is hard.
We could share a lock between the map enumerator and the listener, but that seems a bit of a code smell.
So my question in general is, if we want to stream the SoTW and then updates, how can we do this? Is there anything built into Hazelcast that can help us?
Thanks for the help
First of all - and I guess it is just unluckily phrased - a map has no order!
Second, a Hazelcast map is a non-snapshotting, non-persistent data structure. Just like a ConcurrentHashMap, changes to the data structure are reflected in the iterator and vice versa.
Events, on the other hand, are a completely independent system, especially since events are delivered asynchronously. So it might happen that an event actually arrives before you have advanced the iterator up to the same element, or the other way round.
If you need a better guarantee: iterate first, add the listener, then apply the changes that happened between the start of the first run and any events you might have missed (you need to compute a delta in a second iteration). Not an especially nice way to handle it, but I don't think there's another way.
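A sketch of that iterate-listen-delta sequence in TypeScript, against hypothetical map/listener interfaces rather than the real Hazelcast client API; the value-equality dedupe is a simplification.

interface EntryEvent<K, V> { key: K; value: V; }

interface ListenableMap<K, V> {
  entries(): Array<[K, V]>;
  addEntryListener(onEvent: (e: EntryEvent<K, V>) => void): void;
}

function streamStateOfTheWorld<K, V>(
  map: ListenableMap<K, V>,
  emit: (key: K, value: V) => void
): void {
  const seen = new Map<K, V>();

  // 1. First iteration: emit the current state of the world.
  for (const [key, value] of map.entries()) {
    seen.set(key, value);
    emit(key, value);
  }

  // 2. Subscribe for live updates.
  map.addEntryListener(e => {
    seen.set(e.key, e.value);
    emit(e.key, e.value);
  });

  // 3. Second iteration: emit only the delta, i.e. entries that changed after
  //    the first pass started but before the listener was attached.
  for (const [key, value] of map.entries()) {
    if (seen.get(key) !== value) {
      seen.set(key, value);
      emit(key, value);
    }
  }
}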

How to "keep track" of user activity with functional programming?

tl;dr
In a program that calls a function onEnterFrame on each frame, how do you store and mutate state? For instance if you are making a level editor or a painting program, where keeping track of state and making small incremental changes is tempting / enticing / inviting. What is the most performant way to handle such a thing with minimal global state mutations?
long version:
In an interactive program that accepts input from the user, like mouse clicks and key strokes, we may need to keep track of the state of the data model. For instance:
Are some elements selected?
Is the mouse cursor hovering over an element, which one?
How long is the mouse button held down? Is this a click or a drag?
We also sometimes need to make small changes to a large model:
In a level editor, we may need to add one wall to an existing large set of prefabs. You don't want to recreate the set, no?
I've read Prof Frisby's mostly-adequate-guide so far; there are many functional solutions to problems that deal with extracting a piece of data from some source of input, performing a computation on that data and passing the result to some output.
Sometimes an app lets the user interact and perform a sequence of mutations on data. For instance, what if a program lets the user draw (like Paint) on a canvas and we need to store the state of the painting as well as the actions that led to that state (for undo and logging/debugging purposes)?
What state is acceptable to store and what should we absolutely avoid?
Currently my conclusion is that we should never store state that we only need temporarily; we should pass it directly to the function that needs it.
But what if there are several functions that need a specific computation? Like the case in which we check whether the mouse cursor is hovering over a specific area - why would we want to recompute that?
Are there ways to further minimize mutations of global state?
Storing state isn't the problem. It is mutating global state that is the problem. There are solutions to handling this. One that comes to mind is the State Monad. However, I am not sure this is ideal for undoing operations. But it is a place to start.
If you just want to look at the problem as an initial state and a set of operations then you can think of the operations as a List that can be traversed (with the head being the latest operation). Undoing a set of n operations could be accomplished by traversing the first n elements of the list and cons-ing the inverse of these operations to the list.
That way you don't modify global state at all.
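A minimal sketch of that operations-as-a-list idea in TypeScript, with a hypothetical Operation type; state is never mutated in place, and undo conses inverses onto the history.

interface Operation<S> {
  apply(state: S): S;     // pure: returns a new state
  invert(): Operation<S>; // the operation that undoes this one
}

// History is kept head-first: the most recent operation is at index 0.
function applyOp<S>(state: S, history: Operation<S>[], op: Operation<S>): [S, Operation<S>[]] {
  return [op.apply(state), [op, ...history]];
}

// Undo the latest n operations by applying their inverses, newest first,
// and recording those inverses in the history as well.
function undo<S>(state: S, history: Operation<S>[], n: number): [S, Operation<S>[]] {
  let nextState = state;
  let nextHistory = history;
  for (const op of history.slice(0, n)) {
    const inverse = op.invert();
    nextState = inverse.apply(nextState);
    nextHistory = [inverse, ...nextHistory];
  }
  return [nextState, nextHistory];
}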

debugging Flex event flow

I'm stepping through my code to figure out why a certain function takes more time to run the first time it gets called than on successive calls. The code flow for each function call is the same up to when a dispatchEvent gets called. I'm pretty sure it's different afterwards, as that call takes a lot more time the first time around. Unfortunately, I have no idea which other parts of the code chew on this specific event and thus cannot step through the handling of such event.
The question: is there a way to either figure out who handles such events or magically step through the handling code without explicitly setting breakpoints there?
thank you!
Not in an easy fashion, no. It's a pro and con of Flash (or any event based framework). You don't know when it's fired, where it's fired from (think bubbling), or who is listening for it. But at the same time, anyone could listen for any event, from anywhere (so long as it's within the display tree).
Normally what I do is just do a workspace search (ctrl+H in Flash Builder) and search for that specific event (you should be using static constants for dispatching/listening event types) and see who's doing what.
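For illustration, a TypeScript analogue of that static-constant convention (the original would be ActionScript): because both the dispatch and the listener go through the same constant, one workspace search finds every party involved.

class PriceEvent {
  static readonly PRICE_CHANGED = "priceChanged";
}

const dispatcher = new EventTarget();

// Listening side: found by searching for PriceEvent.PRICE_CHANGED.
dispatcher.addEventListener(PriceEvent.PRICE_CHANGED, () => {
  console.log("price changed");
});

// Dispatching side uses the same constant, so both ends show up in one search.
dispatcher.dispatchEvent(new CustomEvent(PriceEvent.PRICE_CHANGED));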
