I have to execute a transaction (or roll it back) that involves several states.
For instance:
1. Create and save state A.
2. Update state B.
3. Create and save state C.
At the moment I do this with a subFlow execution for each of the states. But in theory, the subFlow for step 2 could crash. In that case, steps 1 and 3 would be stored but step 2 would be missed. Is there a best-practice pattern or an example of how to do this in Corda?
Flows in Corda are checkpointed. If a node crashes, it replays the flow actions since the last checkpoint. It is therefore not possible for step 2 to be "missed".
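To make the guarantee concrete, here is a minimal sketch of the checkpoint-and-replay idea, in TypeScript rather than Corda's actual Kotlin/Java API: progress is recorded durably in order, so a restart resumes from the last recorded step instead of skipping it. The `DurableStore` interface and the step bodies are illustrative assumptions, not Corda internals.

```typescript
// Hypothetical durable store; in Corda the flow framework persists
// checkpoints to the node's database automatically.
interface DurableStore {
  lastCompletedStep(): number;       // -1 if nothing has completed yet
  markCompleted(step: number): void; // flushed to disk before returning
}

// Stand-ins for the three subFlow calls in the question.
const steps: Array<() => void> = [
  () => console.log("create and save state A"),
  () => console.log("update state B"),
  () => console.log("create and save state C"),
];

// On start, or on restart after a crash, resume from the last checkpoint.
function runFlow(store: DurableStore): void {
  for (let i = store.lastCompletedStep() + 1; i < steps.length; i++) {
    steps[i]();             // re-executed if the node crashed mid-step
    store.markCompleted(i); // checkpoint: step i will not run again
  }
}
```

Because progress is recorded strictly in order, step 3 can never be durably recorded while step 2 is still outstanding, which is why the "missed step" scenario cannot occur.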
Suppose a cluster of 5 nodes (A, B, C, D, E). Node A is elected leader at the beginning. While leader A issues AppendEntries RPCs to the followers (B, C, D, E) to replicate a log entry (log-X), only node B receives it and returns success; at this point leader A crashes.
If node C (or D or E) wins the next leader election, then everything is fine, because only node B has log-X, which means log-X is not committed.
My question is: could node B (which has the highest term and the longest log) win the next leader election? If so, will node B spread log-X to the other nodes?
Yes, B could win the election. If it does become leader, the first thing it does is create a log entry with the new term in its log and start replicating its log to all the followers. As B's log includes log-X, if all goes well, the log-X entry will eventually be replicated and considered committed.
If node C wins the election, then when it becomes leader, it won't have the log-X entry, and it'll end up overwriting that entry on node B.
See section 5.4.2 of the Raft paper for more details.
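The reason B can win is the voting rule from section 5.4.1 of the paper: a voter grants its vote only if the candidate's log is at least as up-to-date as its own. A minimal sketch of that comparison (field names are illustrative):

```typescript
interface LogInfo {
  lastLogTerm: number;  // term of the last entry in the log
  lastLogIndex: number; // index of that entry
}

// Raft section 5.4.1: the candidate's log is "at least as up-to-date"
// if its last term is higher, or the terms are equal and its log is
// at least as long.
function candidateIsUpToDate(candidate: LogInfo, voter: LogInfo): boolean {
  if (candidate.lastLogTerm !== voter.lastLogTerm) {
    return candidate.lastLogTerm > voter.lastLogTerm;
  }
  return candidate.lastLogIndex >= voter.lastLogIndex;
}

// B holds log-X while C, D, and E do not, so B's log is at least as
// up-to-date as everyone's and B can collect a majority of votes.
// But C's log is also up-to-date enough for D and E (their logs are
// identical to C's), so C can win too; in that case log-X is later
// overwritten on B.
```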
Also, this means you can't treat a failure as meaning the attempted entry definitely doesn't exist, only that the caller doesn't know the outcome. Section 8 of the paper has some suggestions for handling this.
I'm using Redux and Immutable, and I've just incorporated Sagas into the API layer, but now I'm trying to tackle undo/redo with server persistence.
Constraints:
Assume a max undo/redo stack of 2
State tree is very large. Having copies of the state tree living in past/present is not desirable.
Assumption: after adding 1 item to the tree, I'm assuming that the 100 items in the present tree will become 101 in present and 100 in past. I know that ImmutableJS has structural sharing, but I'm not sure whether that applies here (see the quick check below).
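Structural sharing is indeed how Immutable.js behaves here: `push` returns a new list that reuses most of the old list's internal trie nodes, so holding both `past` and `present` does not come close to doubling memory. A quick check, assuming the state is an Immutable `List`:

```typescript
import { List } from "immutable";

const past = List(Array.from({ length: 100 }, (_, i) => i)); // 100 items
const present = past.push(100);                              // 101 items

console.log(past.size, present.size); // 100 101
// `past` is untouched, and `present` shares all but O(log n) of its
// internal trie nodes with it, so the extra memory cost per undo
// snapshot is small even for a large tree.
```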
My approach thus far:
Three data structures: PENDING, CURRENT, FUTURE
Using Saga middleware, append TRANSACTION_START to the dispatched action.
Wrap the data reducer in an undoable library (or similar), filter for all actions with TRANSACTION_START, and hold them in the PENDING stack. The Saga has fired the POST in the meantime.
TRANSACTION_SUCCESS fires when the POST completes; the Saga retries 3x before throwing TRANSACTION_FAIL (see the saga sketch after this list).
On TRANSACTION_FAIL, remove the action from the PENDING stack (in the reducer) and throw an error. Since the stack is small, I don't mind iterating through it to throw the item out.
Merge PENDING and CURRENT for the optimistic state tree.
Once the PENDING size > 2 (from more ADDs) and the bottom item in the stack is marked COMPLETE, move the bottom item (think of a dequeue) into CURRENT.
On dispatch of UNDO, pop from the PENDING stack, store it in FUTURE, cancel the POST, and re-render the state.
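A minimal redux-saga sketch of the transaction steps above. The action names match the list; `postItem` and the payload shape are assumptions standing in for the real API call:

```typescript
import { call, put, takeEvery } from "redux-saga/effects";

// Placeholder for the real POST; replace with your API client.
async function postItem(payload: unknown): Promise<void> {
  /* ... */
}

function* handleAdd(action: { type: string; payload: unknown }) {
  // Tag the action so the wrapped reducer parks it in the PENDING stack.
  yield put({ type: "TRANSACTION_START", payload: action.payload });

  // POST with up to 3 attempts before giving up.
  for (let attempt = 1; attempt <= 3; attempt++) {
    try {
      yield call(postItem, action.payload);
      yield put({ type: "TRANSACTION_SUCCESS", payload: action.payload });
      return;
    } catch (err) {
      if (attempt === 3) {
        // The reducer removes the item from PENDING on failure.
        yield put({ type: "TRANSACTION_FAIL", payload: action.payload });
      }
    }
  }
}

export function* watchAdds() {
  yield takeEvery("ADD_ITEM", handleAdd);
}
```

Cancellation on UNDO can be layered on with `takeLatest` or an explicit `race` against an UNDO action, depending on how you want in-flight POSTs handled.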
References:
Saga Recipe
http://yelouafi.github.io/redux-saga/docs/recipes/index.html
This library for immutable structures + undo/redo
https://github.com/dairyisscary/redux-undo-immutable
(7 months since the last update, so I'm only using it as a guide)
I've seen the implementation of a commandSaga with RxJS, but that's not feasible given our current team.
https://github.com/salsita/redux-saga-rxjs/blob/master/examples/undo-redo-optimistic/src/sagas/commandSaga.js
Questions:
Am I on the right track?
This code uses a command class, but it does not address server-side updating. Is using Sagas unnecessary?
Sanity check?
Say I want to start friendship between A and B.
Say I want to end friendship between A and B.
Those are two tasks I want to send to a queue that has multiple consumers (workers).
I want to guarantee processing order, so how do I prevent the second task from being performed before the first?
My solution: make tasks sticky (tasks about A are always sent to the same consumer).
Implementation: use RabbitMQ's exchanges and map tasks to the available consumers.
How do I map A to its consumer? I'm thinking about nginx's ip_hash. I think I need something similar.
I don't know if it is relevant, but A and B are uuid.v4() UUIDs.
Can you point me to the algorithm I need to accomplish this mapping, please?
Well, there are two options:
Make one exchange/queue for all events and guarantee that they are inserted in the proper order, with a single worker consuming them. This costs more when inserting data (and doesn't give you the option of scaling).
Prepare your app for such situations: e.g. when you get a destroyFriendship message and the friendship does not exist yet, save a row to the db recording the future friendship ending. Then you can have multiple workers making and destroying friendships without caring about ordering. Simply do your job, make the friends, and if there's a row in the db about ending that friendship, destroy it (or simply don't create it). Of course, you need to compare the creation and destruction timestamps and only honor a destroy whose timestamp is after the creation time! (See the sketch below.)
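A sketch of that second option, assuming a simple store keyed by the (sorted) pair of user ids; the store and function names are made up:

```typescript
// Each record tracks both event timestamps, so the create and destroy
// messages can be processed in either order by any worker.
interface FriendshipRow {
  createdAt?: number;
  destroyedAt?: number;
}

const db = new Map<string, FriendshipRow>(); // stand-in for a real table

const key = (a: string, b: string) => [a, b].sort().join("|");

function makeFriends(a: string, b: string, at: number): void {
  const row = db.get(key(a, b)) ?? {};
  row.createdAt = at;
  db.set(key(a, b), row);
}

function destroyFriendship(a: string, b: string, at: number): void {
  const row = db.get(key(a, b)) ?? {};
  row.destroyedAt = at; // may arrive before makeFriends has run
  db.set(key(a, b), row);
}

// Friends only if created and not destroyed after the creation time.
function areFriends(a: string, b: string): boolean {
  const row = db.get(key(a, b));
  if (!row || row.createdAt === undefined) return false;
  return !(row.destroyedAt !== undefined && row.destroyedAt > row.createdAt);
}
```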
Of course, you could compute some hash of A/B, but IMO that would be more costly than preparing the app. Scaling the app with exchanges/queues is not really a good approach: you're going to create more and more queues, and it will end up as too many queues/exchanges in RabbitMQ.
If you have to use the solution you specified, you can, for example, compute the crc32 of A and B and use its value to calculate which queue the task should be sent to. But having multiple consumers might still go wrong here: what if one of the consumers is somehow blocked while another receives the message destroying the friendship? With this solution, I'd say it's dangerous to have more than 1 worker per A/B group.
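For the sticky mapping itself, the idea is just hash-then-modulo: hash the UUID and take it modulo the number of queues. A sketch with a hand-rolled CRC32 for self-containment (any library implementation behaves the same); the queue naming is an assumption:

```typescript
// Standard reflected CRC-32 over the string's char codes.
function crc32(s: string): number {
  let crc = 0xffffffff;
  for (let i = 0; i < s.length; i++) {
    crc ^= s.charCodeAt(i);
    for (let j = 0; j < 8; j++) {
      crc = (crc >>> 1) ^ (0xedb88320 & -(crc & 1));
    }
  }
  return (crc ^ 0xffffffff) >>> 0;
}

const QUEUE_COUNT = 4; // one queue per worker: "tasks.0" .. "tasks.3"

// Every task about the same user UUID lands on the same queue, so a
// single consumer per queue sees that user's tasks in publish order.
function queueFor(userId: string): string {
  return `tasks.${crc32(userId) % QUEUE_COUNT}`;
}

console.log(queueFor("1b671a64-40d5-491e-99b0-da01ff1f3341")); // stable
```

With RabbitMQ this maps naturally onto a direct exchange using the queue name as the routing key. Note that changing QUEUE_COUNT reshuffles the whole mapping, which is the usual argument for consistent hashing instead of a plain modulo.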
The section discusses a merge operation in FRP stream processing (the Sodium library is used). The book shows a diagram of a stream combination and says that when an event enters the FRP logic through a stream, it causes a cascade of state changes that happen in a transactional context, so all the changes are atomic.
The streams of events, sDeselect and sSelect (see the two events "+" and "-"), originate from UI controls; since they happen within the same FRP transaction, the events they carry are considered simultaneous. Then the book says:
The merge implementation has to store the events in temporary storage
until a time when it knows it won’t receive any more input. Then it
outputs an event: if it received more than one, it uses the supplied
function to combine them; otherwise, it outputs the one event it
received.
Question: when is the moment at which "no more input will come"? How does the merge function know this moment? Is it simply the time it gets a value from the 2nd incoming stream in the given diagram, or am I missing something? Can you illustrate it with a better stream example?
The way Sodium does this is to assign rank numbers to the nodes of the directed graph of FRP logic held in memory, in such a way that if B depends on A, then B's rank will be higher than A's. (Cycles are broken in the graph traversal that assigns these ranks.) These numbers are then used as the priorities in a priority queue, with low rank values processed first.
During event processing, when the priority queue contains nothing lower than the rank of the merge, it is known that there can be no more input data for the merge, and it outputs a value.
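A minimal sketch of that scheme (not Sodium's actual implementation): each node carries a rank, scheduled nodes go into a priority queue, and because the queue is drained lowest rank first, by the time a merge node fires every upstream node has already fired, so no more input can arrive for it within the transaction.

```typescript
interface Node {
  rank: number;  // strictly higher than every node it depends on
  fire(): void;  // process this node's pending input(s)
}

// Nodes scheduled within the current transaction, drained lowest rank first.
const queue: Node[] = [];

function schedule(n: Node): void {
  if (!queue.includes(n)) {
    queue.push(n);
    queue.sort((a, b) => a.rank - b.rank); // stand-in for a real heap
  }
}

// One FRP transaction. When a merge node of rank r reaches the front,
// all nodes of rank < r (everything upstream of it) have already fired,
// so the merge knows whether it received one event or two and can
// combine them with the supplied function before firing downstream.
function runTransaction(): void {
  while (queue.length > 0) {
    const node = queue.shift()!;
    node.fire(); // may schedule downstream (higher-ranked) nodes
  }
}
```

So the answer to "when does it know" is not a wall-clock time: it is the point within the transaction at which the priority queue proves that both inputs of the merge have been fully processed.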
When a workflow has a receive activity that occurs after another receive activity, and the second receive activity is called first, the workflow holds the caller by blocking for 1 minute before timing out.
I want the workflow to return immediately when there are no matching workflow instances.
I do not want to change the timeout on the client as some calls may take a while.
This is a known issue in WF4; at least, I am not aware of it having been fixed yet.