What is the use of Encumbrances in Corda

In Corda I was able to spend my encumbering state in a stand-alone transaction without the encumbered state. Is this by design, or am I missing something?
As of now I am able to enforce encumbrances only through my contract.
I have implemented the encumbrance as follows:
transactionBuilder.addOutputState(
    state = TesterState(data = 1, participants = listOf(serviceHub.myInfo.legalIdentities.first())),
    contract = TesterContract.ID,
    notary = serviceHub.networkMapCache.notaryIdentities.first(),
    encumbrance = 1
)
transactionBuilder.addOutputState(
    state = TimeEncumbranceState(Timestamp = System.currentTimeMillis(), participants = listOf(serviceHub.myInfo.legalIdentities.first())),
    contract = TimeEncumbranceContract.ID,
    notary = serviceHub.networkMapCache.notaryIdentities.first()
)
I am also able to consume these states individually if the contract checks for the encumbrance are absent.
Is there any non-contract enforcement of encumbrances?

Assume you have State A (the encumbered state) encumbered by State B (the encumbrance, or encumbering, state). The encumbrance state, if present, forces additional controls over the encumbered state: you cannot spend the encumbered state without also presenting the encumbrance state.
However, in the current encumbrance design as of Corda v3.x, nobody stops you from spending the encumbrance state on its own. A malicious user could therefore freeze A, because A refers to B by StateRef, and once B is spent that reference can never be satisfied.
You can easily fix this issue by always requiring circular encumbrance links: State A is encumbered by State B, and State B is encumbered by State A. Since A -> B -> A, B cannot be spent on its own.
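As a minimal sketch using the same addOutputState calls from the question (the encumbrance argument being the zero-based index of the encumbering output within the same transaction), the circular link would look like this:

// Output 0: state A, encumbered by output 1
transactionBuilder.addOutputState(
    state = TesterState(data = 1, participants = listOf(serviceHub.myInfo.legalIdentities.first())),
    contract = TesterContract.ID,
    notary = serviceHub.networkMapCache.notaryIdentities.first(),
    encumbrance = 1
)
// Output 1: state B, pointing back at output 0 to close the A -> B -> A loop
transactionBuilder.addOutputState(
    state = TimeEncumbranceState(Timestamp = System.currentTimeMillis(), participants = listOf(serviceHub.myInfo.legalIdentities.first())),
    contract = TimeEncumbranceContract.ID,
    notary = serviceHub.networkMapCache.notaryIdentities.first(),
    encumbrance = 0
)

A transaction that tries to consume B on its own is now invalid, because B's own encumbrance (A) must be consumed in the same transaction.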
Corda 4.x should have added bidirectional checks against encumbrances. However, encumbrances are not yet encouraged for use.

In SCORM 2004 (4th ed), why does a choice navigation to a cluster activity change the Current Activity?

The SCORM 2004 4th Edition pseudocode handles the case for a choice request (SB.2.9, steps 12 onward) like so:
If the target activity is a leaf activity Then
    Exit Choice Sequencing Request Process (Delivery Request: the target activity; Exception: n/a)
End If
Apply the Flow Subprocess to the target activity in the Forward direction with consider children equal to True
// The identified activity is a cluster. Enter the cluster and attempt to find a descendent leaf to deliver.
If the Flow Subprocess returns False Then
    // Nothing to deliver, but we succeeded in reaching the target activity - move the current activity.
    Apply the Terminate Descendent Attempts Process to the common ancestor
    Apply the End Attempt Process to the common ancestor
    Set the Current Activity to the target activity
    Exit Choice Sequencing Request Process (Delivery Request: n/a; Exception: SB.2.9-9)
    // Nothing to deliver.
Else
    Exit Choice Sequencing Request Process (Delivery Request: for the activity identified by the Flow Subprocess; Exception: n/a)
End If
It looks like this means that if the target activity resolves to a cluster activity but the Flow Subprocess cannot find any available descendent leaf activity, the Current Activity is still modified and the sequencing request "succeeds" despite returning an exception.
What is the expected behavior for the LMS in this scenario? A cluster activity can't be delivered, yet the request terminates the previous activity. Should the LMS simply deliver a blank page instead of an activity and hope the learner will be able to navigate to another activity using the navigation controls?
The definition of the Overall Sequencing Process helpfully doesn't specify how an exception is supposed to be handled, but considering that this behavior sets the Current Activity and all successive requests will reference that one instead of the previously active activity, clearly something needs to happen or the LMS will be stuck in an inconsistent state.
Your reading of the pseudocode is correct. Choice is a bit special compared to the other flow events, but the steps of terminating and showing the user a "please select an activity from the activity tree" screen could happen in several cases. The only somewhat unique part is the setting of the Current Activity, which makes subsequent flow navigation events start from the learner's last intentional choice rather than from whatever was previously loaded. It's not unusual for the Current Activity to be a cluster, as stated on SN-4-18: "During termination behavior, Sequencing Exit Action rules are evaluated on all of the ancestors of the current activity – this is done in the Sequencing Exit Action Rule Subprocess. The result of this subprocess will be that either the “just terminated” leaf activity remains the Current Activity, or an ancestor of the leaf activity becomes the Current Activity".
You're also correct that OP.1 ("Overall Sequencing Process") is silent on the topic, going so far as to say "Behavior not specified." for a Not Valid sequencing request. I believe the most common choice is the aforementioned "please select an activity from the activity tree" style screen in place of where the visible SCO would have been.
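To make that concrete, here is a purely hypothetical sketch (in Kotlin, chosen only for illustration; none of these names come from the specification or any real LMS) of how a navigation handler might branch on the outcome of the choice request:

// Hypothetical outcome of the Choice Sequencing Request Process (SB.2.9).
data class SequencingOutcome(val deliveryRequest: String?, val exception: String?)

fun handleChoiceOutcome(outcome: SequencingOutcome) {
    when {
        // Normal case: a deliverable leaf was identified; launch it.
        outcome.deliveryRequest != null -> deliverActivity(outcome.deliveryRequest)
        // SB.2.9-9: the Current Activity has moved, but nothing is deliverable;
        // show a "please select an activity from the activity tree" placeholder.
        outcome.exception == "SB.2.9-9" -> showActivityTreePrompt()
        // Any other exception: the request failed outright; keep the current SCO up.
        else -> { }
    }
}

fun deliverActivity(activityId: String) { /* launch the identified SCO */ }

fun showActivityTreePrompt() { /* render the placeholder page with navigation controls */ }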
The spec tries very hard to separate LMS display choices from how sequencing itself operates. SN-5-3 states: "SCORM imposes no requirements on the type or style of the user interface presented to a learner at run-time, including any user interface devices for navigation. The nature of the user interface and the mechanisms for capturing interactions between the learner and the LMS are intentionally unspecified. Issues such as look and feel, presentation style and placement of user interface devices or controls are outside the scope of SCORM."
But the specification otherwise says some instructive things. Page SN-3-6 states that "As depicted in Figure 3.2.1c, the target of the Choice navigation request (Activity B) has a Sequencing Control Flow defined to be False. In this case, no activity can be identified for delivery (clusters cannot be delivered). Because Activity B has Sequencing Control Choice defined to be True, an LMS shall provide some mechanism for the learner to select (trigger a navigation request for) one of Activity B’s children directly, but not Activity B."
While this doesn't explicitly state that instructive text should be displayed to the learner in that SCO area, or that the choice shouldn't be allowed in the first place, it does state that something should be done to guide the learner into intentionally performing another step to launch something else. Again, this isn't exactly the same use case, but it's probably the closest the specification gets for choice.

Self-made queue priority policy for Simulation with Simmer in R

I want to simulate a planning process in R. I found that the simmer package is what to use for simulation in R.
When tasks arrive in my simulation, they all land on the 'Waiting List'. The tasks that have to be scheduled have different priorities and characteristics. Deciding which task gets scheduled next should depend on a 'weight'. This weight depends on the number of days the task has been waiting on the Waiting List and on its priority. But I can only find the priority function, which needs a pre-defined priority value (whereas my weight grows the longer the task sits on the Waiting List).
I hope that it's possible to make a function, based on information in the attributes of an arrival, that decides which task has to be scheduled. Is this possible?
Thanks in advance.
With kind regards,
Roos
Priorities are static, but there's a trick you can use to recompute all the priorities based on that weight or other attributes. The idea is that arrivals subscribe to a signal that triggers a priority recalculation, and that signal is raised by the arrival that is about to release the resource. The key activity is renege_if(), which causes an arrival to leave the queue and follow a signal handler. For example:
traj <- trajectory() %>%
  # ...
  renege_if(
    "recompute priority",
    out = trajectory() %>%
      set_prioritization(function() {
        # check attributes using get_attribute()
        # then, return the new prioritization values
      }) %>%
      # go 2 steps back to renege_if
      rollback(2)) %>%
  seize("your resource") %>%
  renege_abort() %>%
  timeout(some_task) %>%
  # trigger this before releasing the resource
  send("recompute priority") %>%
  timeout(0) %>%
  release("your resource") %>%
  # ...
Every arrival subscribes to the signal "recompute priority" and tries to seize the resource. Those in the queue will eventually receive the signal, follow the signal handler to set the new priority, and then roll back to the same situation as before: they subscribe to the signal again and re-enter the queue. But by then the resource has been released, so the arrival with the highest priority will seize it.
Update: See this section about the implementation of custom service policies.

undo/redo with sagas/redux (pt 2)

Using Redux and Immutable.js; I've just incorporated Sagas into the API layer, and now I'm trying to tackle undo/redo with server persistence.
Constraints:
Assume a max undo/redo stack of 2
State tree is very large. Having copies of the state tree living in past/present is not desirable.
Assumption: after adding 1 item to the tree, the 100 items in the present tree will become 101 in present and 100 in past. I know that Immutable.js does structural sharing, but I'm not sure whether that holds here.
My approach thus far (a rough sketch in code follows this list):
Three data structures: PENDING, CURRENT, FUTURE
Using Saga middleware, append "TRANSACTION_START" to the dispatched action
Wrap the data reducer in an undoable library (or similar), filter for all actions with "TRANSACTION_START", and hold them in the PENDING stack. The Saga has called POST in the meantime.
TRANSACTION_SUCCESS fires when the POST completes; Sagas will retry 3x before throwing TRANSACTION_FAIL.
On TRANSACTION_FAIL, remove the item from the PENDING stack (in the reducer) and throw an error. Since the stack is small, I don't mind iterating through it to throw the item out.
Merge PENDING and CURRENT for the optimistic state tree
Once PENDING size > 2 (from more ADDs) && bottom item in stack marked COMPLETE, move bottom (think a dequeue) into CURRENT.
On dispatch of UNDO, pop from PENDING stack and store in FUTURE, cancel POST, re-render state.
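Here is a rough sketch of the bookkeeping I have in mind (Kotlin used only as neutral pseudocode; every name is a placeholder, not a real library API):

data class Txn(val id: Int, val patch: String, var complete: Boolean = false)

class OptimisticHistory(private val maxPending: Int = 2) {
    private val pending = ArrayDeque<Txn>()   // optimistic, awaiting TRANSACTION_SUCCESS
    private val future = ArrayDeque<Txn>()    // undone items, available for redo
    private var current = listOf<String>()    // confirmed state, patches folded in

    // TRANSACTION_START: push the optimistic change; the saga POSTs in the background
    fun start(txn: Txn) { pending.addLast(txn); promote() }

    // TRANSACTION_SUCCESS: mark the item COMPLETE; the bottom of the stack may now graduate
    fun success(id: Int) { pending.find { it.id == id }?.complete = true; promote() }

    // TRANSACTION_FAIL (after the saga's 3 retries): drop the item and surface the error
    fun fail(id: Int) { pending.removeAll { it.id == id } }

    // UNDO: pop from PENDING into FUTURE (the in-flight POST is cancelled elsewhere)
    fun undo() { pending.removeLastOrNull()?.let { future.addLast(it) } }

    // Optimistic tree = CURRENT with all PENDING patches merged on top
    fun optimisticState(): List<String> = current + pending.map { it.patch }

    // Once PENDING exceeds the max stack size and its bottom item is COMPLETE,
    // dequeue the bottom item into CURRENT
    private fun promote() {
        while (pending.size > maxPending && pending.first().complete) {
            current = current + pending.removeFirst().patch
        }
    }
}

Redo would mirror undo, moving items back from FUTURE into PENDING and re-issuing the POST.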
References:
Saga Recipe
http://yelouafi.github.io/redux-saga/docs/recipes/index.html
This library for immutable structures + undo/redo
https://github.com/dairyisscary/redux-undo-immutable
(7 months since last update so using it as a guide)
I've seen the implementation of a commandSaga with RxJS, but that's not feasible given our current team.
https://github.com/salsita/redux-saga-rxjs/blob/master/examples/undo-redo-optimistic/src/sagas/commandSaga.js
Questions:
Am I on the right track?
This code uses a command class, but it does not address server-side updating. Is using Sagas unnecessary?
Sanity check?

Handling Race Conditions / Concurrency in Network Protocol Design

I am looking for possible techniques to gracefully handle race conditions in network protocol design. I find that in some cases, it is particularly hard to synchronize two nodes to enter a specific protocol state. Here is an example protocol with such a problem.
Let's say A and B are in an ESTABLISHED state and exchange data. All messages sent by A or B use a monotonically increasing sequence number, such that A can know the order of the messages sent by B, and B can know the order of the messages sent by A. At any time in this state, either A or B can send an ACTION_1 message to the other, in order to enter a different state where a strictly sequential exchange of messages needs to happen:
send ACTION_1
recv ACTION_2
send ACTION_3
However, it is possible that both A and B send the ACTION_1 message at the same time, causing both of them to receive an ACTION_1 message, while they would expect to receive an ACTION_2 message as a result of sending ACTION_1.
Here are a few possible ways this could be handled:
1) Change state to ACTION_1_SENT after sending ACTION_1. If we receive ACTION_1 in this state, we detect the race condition and proceed to arbitrate who gets to start the sequence. However, I have no idea how to fairly arbitrate this. Since both ends are likely to detect the race condition at about the same time, any action that follows will be prone to other similar race conditions, such as sending ACTION_1 again.
2) Duplicate the entire sequence of messages. If we receive ACTION_1 in the ACTION_1_SENT state, we include the data of the other ACTION_1 message in the ACTION_2 message, etc. This can only work if there is no need to decide who is the "owner" of the action, since both ends will end up doing the same action to each other.
3) Use absolute time stamps, but then, accurate time synchronization is not an easy thing at all.
4) Use Lamport clocks, but from what I understand these are only useful for events that are causally related. Since in this case the ACTION_1 messages are not causally related, I don't see how they could help figure out which one happened first so that the second one can be discarded.
5) Use some predefined way of discarding one of the two messages on receipt by both ends. However, I cannot find a way to do this that is not flawed. A naive idea would be to include a random number on both sides, and select the message with the highest number as the "winner", discarding the one with the lowest number. However, we have a tie if both numbers are equal, and then we need another way to recover from this. A possible improvement would be to deal with arbitration once at connection time, repeating a similar sequence until one of the two ends "wins" and is marked as the favorite. Every time a tie happens afterwards, the favorite wins.
Does anybody have further ideas on how to handle this?
EDIT:
Here is the current solution I came up with. Since I couldn't find a 100% safe way to prevent ties, I decided to have my protocol elect a "favorite" during the connection sequence. Electing this favorite requires breaking possible ties, but at this stage the protocol allows trying multiple times until a consensus is reached. After the favorite is elected, all further ties are resolved by favoring it. This isolates the problem of possible ties to a single part of the protocol.
As for fairness in the election process, I wrote something rather simple based on two values sent in the client and server packets. In this case, the value is a sequence number starting at a random value, but it could be anything, as long as the values are sufficiently random to be fair.
When the client and server have to resolve a conflict, they both call this function with their own value as sendVal and the other side's value as recvVal. The favorite calls this function with the favorite parameter set to TRUE. The function is guaranteed to give the opposite result on the two ends, such that the tie can be broken without exchanging additional messages.
BOOL ResolveConflict(BOOL favorite, UINT32 sendVal, UINT32 recvVal)
{
    BOOL winner;
    UINT32 sendDiff;
    UINT32 recvDiff;
    UINT32 xorVal;

    /* Both ends compute the same xorVal; sendVal and recvVal are swapped on
       the other end, so sendDiff/recvDiff swap too and the result flips. */
    xorVal = sendVal ^ recvVal;
    sendDiff = (xorVal < sendVal) ? sendVal - xorVal : xorVal - sendVal;
    recvDiff = (xorVal < recvVal) ? recvVal - xorVal : xorVal - recvVal;

    if (sendDiff != recvDiff)
        winner = (sendDiff < recvDiff) ? TRUE : FALSE; /* closest value to xorVal wins */
    else
        winner = favorite; /* break the tie: the favorite wins */

    return winner;
}
Let's say that both ends enter the ACTION_1_SENT state after sending the ACTION_1 message. Both will receive the ACTION_1 message in the ACTION_1_SENT state, but only one will win. The loser accepts the ACTION_1 message and enters the ACTION_1_RCVD state, while the winner discards the incoming ACTION_1 message. The rest of the sequence continues as if the loser had never sent ACTION_1 in a race condition with the winner.
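For illustration, here is a rough port of the arbitration and the surrounding state handling to a toy state machine (Kotlin chosen arbitrarily; everything except the state and message names above is hypothetical):

// Same logic as ResolveConflict above: the value closest to the XOR wins,
// and the pre-elected favorite wins ties.
fun resolveConflict(favorite: Boolean, sendVal: UInt, recvVal: UInt): Boolean {
    val xorVal = sendVal xor recvVal
    val sendDiff = if (xorVal < sendVal) sendVal - xorVal else xorVal - sendVal
    val recvDiff = if (xorVal < recvVal) recvVal - xorVal else xorVal - recvVal
    return if (sendDiff != recvDiff) sendDiff < recvDiff else favorite
}

enum class PeerState { ESTABLISHED, ACTION_1_SENT, ACTION_1_RCVD }

// Both ends pass their own value as sendVal and the peer's as recvVal,
// so exactly one end computes "true" and wins the race.
fun onAction1(state: PeerState, favorite: Boolean, myVal: UInt, peerVal: UInt): PeerState =
    when (state) {
        // Normal case: no race; accept and reply with ACTION_2.
        PeerState.ESTABLISHED -> PeerState.ACTION_1_RCVD
        // Race detected: arbitrate without any extra round-trip.
        PeerState.ACTION_1_SENT ->
            if (resolveConflict(favorite, myVal, peerVal))
                PeerState.ACTION_1_SENT   // winner: discard the peer's ACTION_1, await ACTION_2
            else
                PeerState.ACTION_1_RCVD   // loser: accept the peer's ACTION_1, reply with ACTION_2
        // Anything else is a protocol error in this toy model.
        else -> state
    }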
Let me know what you think, and how this could be further improved.
To me, this whole idea that the ACTION_1 - ACTION_2 - ACTION_3 handshake must occur in sequence with no other message intervening is very onerous, and not at all in line with the reality of networks (or distributed systems in general). The complexity of some of your proposed solutions gives reason to step back and rethink.
There are all kinds of complicating factors when dealing with systems distributed over a network: packets which don't arrive, arrive late, arrive out of order, arrive duplicated, clocks which are out of sync, clocks which go backwards sometimes, nodes which crash/reboot, etc. etc. You would like your protocol to be robust under any of these adverse conditions, and you would like to know with certainty that it is robust. That means making it simple enough that you can think through all the possible cases that may occur.
It also means abandoning the idea that there will always be "one true state" shared by all nodes, and the idea that you can make things happen in a very controlled, precise, "clockwork" sequence. You want to design for the case where the nodes do not agree on their shared state, and make the system self-healing under that condition. You also must assume that any possible message may occur in any order at all.
In this case, the problem is claiming "ownership" of a shared clipboard. Here's a basic question you need to think through first:
If all the nodes involved cannot communicate at some point in time, should a node which is trying to claim ownership just go ahead and behave as if it is the owner? (This means the system doesn't freeze when the network is down, but it means you will have multiple "owners" at times, and there will be divergent changes to the clipboard which have to be merged or otherwise "fixed up" later.)
Or, should no node ever assume it is the owner unless it receives confirmation from all other nodes? (This means the system will freeze sometimes, or just respond very slowly, but you will never have weird situations with divergent changes.)
If your answer is #1: don't focus so much on the protocol for claiming ownership. Come up with something simple which reduces the chances that two nodes will both become "owner" at the same time, but be very explicit that there can be more than one owner. Put more effort into the procedure for resolving divergence when it does happen. Think that part through extra carefully and make sure that the multiple owners will always converge. There should be no case where they can get stuck in an infinite loop trying to converge but failing.
If your answer is #2: here be dragons! You are trying to do something which butts up against some fundamental limitations.
Be very explicit that there is a state where a node is "seeking ownership", but has not obtained it yet.
When a node is seeking ownership, I would say that it should send a request to all other nodes, at intervals (in case another one misses the first request). Put a unique identifier on each such request, which is repeated in the reply (so delayed replies are not misinterpreted as applying to a request sent later).
To become owner, a node should receive a positive reply from all other nodes within a certain period of time. During that wait period, it should refuse to grant ownership to any other node. On the other hand, if a node has agreed to grant ownership to another node, it should not request ownership for another period of time (which must be somewhat longer).
If a node thinks it is owner, it should notify the others, and repeat the notification periodically.
You need to deal with the situation where two nodes both try to seek ownership at the same time, and both NAK (refuse ownership to) each other. You have to avoid a situation where they keep timing out, retrying, and then NAKing each other again (meaning that nobody would ever get ownership).
You could use exponential backoff, or you could make a simple tie-breaking rule (it doesn't have to be fair, since this should be a rare occurrence). Give each node a priority (you will have to figure out how to derive the priorities), and say that if a node which is seeking ownership receives a request for ownership from a higher-priority node, it will immediately stop seeking ownership and grant it to the high-priority node instead.
This will not result in more than one node becoming owner, because if the high-priority node had previously ACKed the request sent by the low-priority node, it would not send a request of its own until enough time had passed that it was sure its previous ACK was no longer valid.
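Here is a sketch of that tie-breaking rule, assuming a larger number means higher priority and that the hold-off timers described above are handled elsewhere (Kotlin; all names are hypothetical):

enum class Reply { ACK, NAK }

// Sketch of the tie-break: a seeker that hears from a higher-priority
// seeker yields immediately instead of NAKing.
class OwnershipSeeker(private val myPriority: Int) {
    var seeking = false
        private set

    fun startSeeking() {
        seeking = true
        // ... broadcast an ownership request (with a unique id) to all nodes ...
    }

    fun onRequest(requesterPriority: Int): Reply = when {
        // Idle: grant, subject to the "don't request again for a while" timer.
        !seeking -> Reply.ACK
        // A higher-priority node is also seeking: stop seeking and yield.
        requesterPriority > myPriority -> { seeking = false; Reply.ACK }
        // We outrank the requester: refuse; it will yield to us.
        else -> Reply.NAK
    }
}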
You also have to consider what happens if a node becomes owner, and then "goes dark" -- stops responding. At what point are other nodes allowed to assume that ownership is "up for grabs" again? This is a very sticky issue, and I suspect you will not find any solution which eliminates the possibility of having multiple owners at the same time.
Probably, all the nodes will need to "ping" each other from time to time. (Not referring to an ICMP echo, but something built in to your own protocol.) If the clipboard owner can't reach the others for some period of time, it must assume that it is no longer owner. And if the others can't reach the owner for a longer period of time, they can assume that ownership is available and can be requested.
Here is a simplified answer for the particular protocol of interest.
In this case, there is only a client and a server, communicating over TCP. The goal of the protocol is to synchronize two system clipboards. The regular state outside of a particular sequence is simply CLIPBOARD_ESTABLISHED.
Whenever one of the two systems copies something to its clipboard, it sends a ClipboardFormatListReq message and transitions to the CLIPBOARD_FORMAT_LIST_REQ_SENT state. This message contains a sequence number that is incremented each time a ClipboardFormatListReq message is sent. Under normal circumstances, no race condition occurs, and a ClipboardFormatListRsp message is sent back to acknowledge the new sequence number and owner. The list contained in the request exposes the clipboard data formats offered by the owner, and any of these formats can be requested by an application on the remote system.
When an application requests one of the data formats from the clipboard owner, a ClipboardFormatDataReq message is sent with the sequence number and the format id from the list, and the state changes to CLIPBOARD_FORMAT_DATA_REQ_SENT. Under normal circumstances, there is no change of clipboard ownership during that time, and the data is returned in the ClipboardFormatDataRsp message. A timer should be used to time out if no response arrives fast enough from the other system, and to abort the sequence if it takes too long.
Now, for the special cases:
If we receive ClipboardFormatListReq in the CLIPBOARD_FORMAT_LIST_REQ_SENT state, it means both systems are trying to gain ownership at the same time. Only one owner should be selected; in this case, we can keep it simple and elect the client as the default winner. With the client as the default owner, the server should respond to the client with ClipboardFormatListRsp and consider the client as the new owner.
If we receive ClipboardFormatDataReq in the CLIPBOARD_FORMAT_LIST_REQ_SENT state, it means we have just received a request for data from the previous list of data formats, since we have just sent a request to become the new owner with a new list of data formats. We can respond with a failure right away, since the sequence numbers will not match.
Etc., etc. The main issue I was trying to solve here is fast recovery from such states, without going into a loop of retrying until it works. The main problem with immediate retrial is that it is likely to happen with timing that causes new race conditions. We can solve this by expecting such inconsistent states, as long as we can move back to proper protocol states when detecting them. The other part of the problem is electing a "winner" whose request is accepted without resending new messages. A default winner can be designated in advance, such as the client or the server, or some sort of random voting system can be implemented with a default favorite to break ties.
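As a toy sketch of the ClipboardFormatListReq collision rule described above (Kotlin for illustration; the message and state names come from the description, the rest is hypothetical):

enum class ClipState { CLIPBOARD_ESTABLISHED, CLIPBOARD_FORMAT_LIST_REQ_SENT }

// Handle an incoming ClipboardFormatListReq; the client wins collisions.
fun onFormatListReq(state: ClipState, isServer: Boolean): ClipState = when {
    // Collision and we are the server: yield, send ClipboardFormatListRsp,
    // and treat the client as the new clipboard owner.
    state == ClipState.CLIPBOARD_FORMAT_LIST_REQ_SENT && isServer ->
        ClipState.CLIPBOARD_ESTABLISHED
    // Collision and we are the client: we win by convention; discard the
    // server's request and keep waiting for our ClipboardFormatListRsp.
    state == ClipState.CLIPBOARD_FORMAT_LIST_REQ_SENT ->
        ClipState.CLIPBOARD_FORMAT_LIST_REQ_SENT
    // Normal case: acknowledge the new owner's format list.
    else -> ClipState.CLIPBOARD_ESTABLISHED
}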

Discrete Event Simulation without global Queue?

I am thinking about modelling a material flow network. There are processes which operate at a certain speed, buffers which can overflow or underflow and connections between these.
I don't see any problems modelling this in a classic Discrete Event Simulation (DES) fashion using a global event queue. I tried modelling the system without a queue but failed in early stages. Still I do not understand the underlying reason why a queue is needed, at least not for events which originate "inside" the network.
The idea of a queue-less DES is to treat the whole network as a function that takes a stream of events from the outside world and returns a stream of state changes. Every node in the network should only be affected by the nodes directly connected to it. I have pinned some hopes on Haskell's arrows and Functional Reactive Programming (FRP) in general, but I am still learning.
An event queue looks too "global" to me. If my network falls apart into two subnets with no connections between them and I only ask questions about the state changes of one subnet, the other subnet should not do any computations at all. I could use two event queues in that case. However, as soon as I connect the two subnets I would have to put all events into a single queue. I don't like the idea, that I need to know the topology of the network in order to set up my queue(s).
So
is anybody aware of DES algorithms which do not need a global queue?
is there a reason why this is difficult or even impossible?
is FRP useful in the context of DES?
To answer the first point, no I'm not aware of any discrete-event simulation (DES) algorithms that do not need a global event queue. It is possible to have a hierarchy of event queues, in which each event queue is represented in its parent event queue as an event (corresponding to the time of its next event). If a new event is added to an event queue such that it becomes the queue's next event, then the event queue needs to be rescheduled in its parent to preserve the order of event execution. However, you will ultimately still boil down to a single, global event queue that is the parent of all of the others in hierarchy, and which dispatches each event.
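Here is a minimal sketch of such a hierarchy (Kotlin; all names are mine, not from any library): each child queue sits in its parent ordered by the timestamp of its own next event, and is re-inserted whenever that timestamp changes.

import java.util.PriorityQueue

data class Event(val time: Double, val action: () -> Unit) : Comparable<Event> {
    override fun compareTo(other: Event) = time.compareTo(other.time)
}

// A child queue, ordered within its parent by its next event's time.
class EventQueue : Comparable<EventQueue> {
    val events = PriorityQueue<Event>()
    fun nextTime() = events.peek()?.time ?: Double.POSITIVE_INFINITY
    override fun compareTo(other: EventQueue) = nextTime().compareTo(other.nextTime())
}

class HierarchicalScheduler {
    private val children = PriorityQueue<EventQueue>()

    fun register(queue: EventQueue) { children.add(queue) }

    // Adding an event may change the queue's next-event time, so it is
    // removed and re-added to keep the parent's ordering valid (the
    // "rescheduling" described above).
    fun schedule(queue: EventQueue, event: Event) {
        children.remove(queue)
        queue.events.add(event)
        children.add(queue)
    }

    fun run() {
        while (true) {
            val queue = children.poll() ?: return   // child with the earliest event
            val event = queue.events.poll()
            children.add(queue)                     // re-insert with its new next time
            if (event == null) return               // earliest child empty => all drained
            event.action()
        }
    }
}

Note that run() is still a single, global dispatch point, which is the crux of the answer.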
Alternatively, you could dispense with DES and perform something more akin to a programmable logic controller (PLC) which reevaluates the state of the entire network every small increment of time. However, typically, that would be a lot slower (it may not even run as fast as real-time), because most of the time it would have nothing to do. If you pick too big a time increment, the simulation may lose accuracy.
The simplest answer to the second point is that, ultimately, to the best of my knowledge, it is impossible to do without a global event queue. Each simulation event needs to execute at the correct time, and - since time cannot run backwards - the order in which events are dispatched matters. The current simulation time is defined by the time that the current event executes. If you have separate event queues, you also have separate clocks, which would make things very confusing, to say the least.
In your case, if your subnetworks are completely independent, you could simulate each subnetwork individually. However, if the state of one subnetwork affects the state of the total network, and the state of the total network affects the state of each subnetwork, then - since an event is influenced by the events that preceded it, can only influence the events that follow, but cannot influence what preceded it - you have to simulate the whole network with a global event queue.
If it's any consolation, a true DES simulation does not perform any processing in between events (other than determining what the next event is), so there should be no wasted processing in one subnetwork if all the action is taking place in another.
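For reference, the heart of such an engine is tiny. A minimal sketch (Kotlin, hypothetical names) showing that the clock simply jumps from one event to the next, with no processing in between:

import java.util.PriorityQueue

data class SimEvent(val time: Double, val action: () -> Unit) : Comparable<SimEvent> {
    override fun compareTo(other: SimEvent) = time.compareTo(other.time)
}

class Simulation {
    private val queue = PriorityQueue<SimEvent>()  // the global event queue
    var now = 0.0                                  // the single simulation clock
        private set

    fun schedule(delay: Double, action: () -> Unit) {
        queue.add(SimEvent(now + delay, action))
    }

    fun run(until: Double) {
        while (true) {
            val event = queue.poll() ?: return     // nothing left to do anywhere
            if (event.time > until) return
            now = event.time                       // time advances only here
            event.action()                         // may schedule further events
        }
    }
}

An event's action typically schedules the follow-on events (e.g. a process finishing a part schedules the next buffer update), so the queue drives everything: sim.schedule(5.0) { /* buffer overflow */ } followed by sim.run(until = 100.0).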
Finally, functional reactive programming (FRP) is absolutely useful in the context of a DES. Indeed, I now write a lot of my DES simulations in Scala using this approach.
I hope this helps!
UPDATE: Since writing the above, I've used Sodium (an excellent FRP library, which was referenced by the OP in the comments below), and can add some further explanation: Sodium provides a means for subscribing to events, and for performing actions when those events occur. However, here I'm using the term event in a general sense, such as a button being clicked by a user in a GUI, or a network packet arriving, etc. In other words, these events are not necessarily simulation events.
You can still use Sodium—or any other FRP library—as part of a simulation, to subscribe to simulation events and perform actions when they occur; however, these tools typically have no built-in support for simulation, and so you must incorporate a simulation engine as the source of simulation events, in the same way that a GUI is incorporated as the source of user interaction events. It is within this engine that the global event queue must reside.
Incidentally, if you are trying to perform parallel or distributed simulation model execution, things get considerably more complicated. You have multiple event queues in these situations, but they must be synchronized (giving the appearance of a single queue). The two basic approaches are conservative synchronization and optimistic synchronization.
