Is it possible for two nodes to communicate if they have different state versions but the same flow version?
I have a case where there may be N number of nodes in the network, and some nodes may upgrade slower than others.
i.e., Node A has ObligationV1 and Node B has upgraded to ObligationV2, but the same flow CorDapp is deployed on both.
Can this work?
Can a common flow handle creating different versions of the state (assuming the sequence of sends/receives is the same)?
Can the network operator / responder blacklist V1?
If this works, does it mean that if Node A is the initiator and Node B is the responder, the states created are V1, and vice versa, they become V2?
1) Yes, it should work. As we see with the new CorDapp templates, flows are independent of the CorDapp containing the states and contracts.
2) Yes, that shouldn't be a problem. The new states will be of a different type.
3) Not right now, the whitelist is append-only. This may change with signature constraints which are in the works.
4) It's entirely up to you: Node B would still have V1 on its classpath and could continue to create V1 states.
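To illustrate point 2, here is a minimal sketch of how a single flow helper could branch on whichever state version it encounters, as long as both classes are on the node's classpath. The ObligationV1/ObligationV2 classes below are stand-ins assumed from the question, not real CorDapp code:

import net.corda.core.contracts.ContractState
import net.corda.core.identity.AbstractParty

// Stand-in state classes, assumed for illustration only; real states would
// also carry the obligation's business fields.
class ObligationV1(override val participants: List<AbstractParty>) : ContractState
class ObligationV2(override val participants: List<AbstractParty>) : ContractState

// A shared helper can handle either version by branching on the runtime type.
fun describe(state: ContractState): String = when (state) {
    is ObligationV1 -> "V1 obligation"
    is ObligationV2 -> "V2 obligation"
    else -> "unknown state type"
}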
As my question title suggests, I am confused about the fat-tree structure.
I am trying to write a program, where I get a certain number of nodes as my input and I should generate an output that builds a fat-tree topology out of them.
For example, if my input is 4, my output must represent a fat-tree topology made of 4 nodes (n1, n2, n3, n4).
As far as I could read, a fat-tree topology depends only on the number of ports rather than the number of nodes. This is why I am confused about whether it is possible to create a fat-tree structure with the number of nodes as my only input at all.
I am very new to networking concepts, so I would appreciate any guidance.
If I understood the question, you have a certain number of nodes as input, and you want to build a fat-tree topology with these nodes.
Unfortunately, you cannot create a complete FatTree topology with an arbitrary number of nodes.
If you are confused about the construction, I suggest having a look at this link.
For my master's thesis, I explored some data center topologies and their feasibility for network tomography-based monitoring applications. This resulted in a few Python models (fat-tree included) implemented using the networkx library, which are available on GitHub. The code is not the prettiest, especially the visualisation parts, and could surely be improved, but I hope it can still be useful to gain an intuition about how these topologies scale.
If you start playing around with the different scales of the fat-tree, you will quickly see that Giuseppe is right. A fat-tree has a very strict structure that depends only on the port-number parameter. It is therefore indeed not possible to construct a fat-tree with an arbitrary number of nodes.
Although I'm late in answering this, and others have already given the correct answer, I'd still like to add some value with respect to FatTree topology design.
For a fat-tree topology built from k-port switches, you can derive these values by using tree data structure properties and the topology requirements:
- number of core switches = (k/2)^2
- number of pods = k
- number of aggregation switches in each pod = k/2
- number of edge switches in each pod = k/2
- each aggregation switch connected to k/2 core switches and k/2 edge switches
- each edge switch connected to k/2 aggregation switches and k/2 nodes
- i.e., each pod consists of (k/2)^2 nodes
- number of nodes possible to be connected to the network = (k^3)/4
Since the number of servers that can be connected to this network is expressed in terms of k, you can clearly see that you can't create a fat-tree topology with an arbitrary number of nodes. The node count can only take the form (k^3)/4 for even values of k (so that the counts above are integers), e.g., 16, 54, 128, and so on. In other words, you can't have a proper fat-tree topology with a node count different from these!
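To make the scaling concrete, here is a small sketch (function and variable names are my own) that computes these counts for a given even port count k:

// Compute the component counts of a k-port-switch fat-tree.
fun fatTreeSizes(k: Int) {
    require(k > 0 && k % 2 == 0) { "k must be a positive even port count" }
    val coreSwitches = (k / 2) * (k / 2)  // (k/2)^2
    val pods = k
    val aggPerPod = k / 2
    val edgePerPod = k / 2
    val hostsPerPod = (k / 2) * (k / 2)   // (k/2)^2
    val totalHosts = k * k * k / 4        // (k^3)/4
    println("k=$k: core=$coreSwitches, pods=$pods, agg/pod=$aggPerPod, " +
            "edge/pod=$edgePerPod, hosts/pod=$hostsPerPod, hosts=$totalHosts")
}

fun main() {
    listOf(4, 6, 8).forEach(::fatTreeSizes)  // total hosts: 16, 54, 128
}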
I would like to have more than one version of certain flow pairs (both the InitiatingFlow and InitiatedBy) in a node's cordapps directory.
The reason for maintaining several copies of certain flow pairs is that some of the nodes may be using a previous version of the flow because they have yet to migrate to the new version.
As the flow's version is only in the annotation, I suspect there would be more than one class with the same fully-qualified name. This would result in a runtime error.
Can you provide an example of flow pairs with different versions that can remain in the same cordapps folder?
The correct approach here is not to define several flow pairs, but to use the flow version number in the InitiatingFlow to control how the corresponding InitiatedBy flow behaves.
For example, suppose we have an InitiatingFlow that:
Sends an Int in version 1
Sends a String in subsequent versions
The corresponding InitiatedBy flow may look like this:
@Suspendable
override fun call() {
    // Ask Corda which version of the initiating flow the counterparty is running.
    val otherFlowVersion = otherSession.getCounterpartyFlowInfo().flowVersion
    val receivedString = if (otherFlowVersion == 1) {
        // Version 1 of the initiating flow sends an Int...
        otherSession.receive<Int>().unwrap { it.toString() }
    } else {
        // ...while later versions send a String.
        otherSession.receive<String>().unwrap { it }
    }
}
By using the InitiatingFlow's version number, the InitiatedBy flow is able to communicate with parties running any version of the InitiatingFlow.
Note that there is no equivalent version number for the InitiatedBy flow, which means that the InitiatingFlow cannot condition its behaviour on the version of the InitiatedBy flow. The InitiatedBy flow is the side that must adapt to handle changes in the InitiatingFlow, and not vice-versa.
Additional information on flow versioning can be found here.
Corda has a fairly limited versioning model for flows. For example, there is no way for the initiating flow to adapt its behaviour to work with an older version of the responding flow.
This could be worked around with a protocol-handshake pattern, which works as follows. Implement a pair of subflows, say, InitiatorProtocolHandshakeFlow and RespondToProtocolHandshakeFlow. Make every initiating flow in the CorDapp that establishes a new flow session with another node invoke InitiatorProtocolHandshakeFlow with the counterparty session, and make every responding flow invoke RespondToProtocolHandshakeFlow with the counterparty session before sending or receiving anything else. Make the flows negotiate the relevant 'protocol features'. This could be implemented in a number of ways, the simplest of which is to make the responder send (and the initiator receive) a simple version number. Unlike the flow version number in the annotation, the initiating flow can actually alter its behaviour based on the number provided by the responding flow.
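A minimal sketch of what such a handshake pair might look like (the class names and the negotiated payload, a plain version number, are assumptions based on the pattern described above, not part of Corda's API):

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.FlowSession
import net.corda.core.utilities.unwrap

// Run by the initiating flow immediately after opening a session.
class InitiatorProtocolHandshakeFlow(private val session: FlowSession) : FlowLogic<Int>() {
    @Suspendable
    override fun call(): Int {
        // The responder sends its protocol version first; the initiator can now
        // branch on it, which the annotation-based flow version does not allow.
        return session.receive<Int>().unwrap { it }
    }
}

// Run by the responding flow before any other send/receive.
class RespondToProtocolHandshakeFlow(
    private val session: FlowSession,
    private val ourVersion: Int
) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        session.send(ourVersion)
    }
}

The initiating flow would call subFlow(InitiatorProtocolHandshakeFlow(session)) right after creating the session and branch on the returned version, while the responder invokes the other subflow before anything else.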
How can a topology be defined in Castalia-3.2 for a WBAN?
How can I import a topology from OMNeT++ into Castalia?
Where is the topology defined in the default WBAN scenario in Castalia?
Thanks.
The topology of a network is an abstraction that shows the structure of the communication links in the network. It's an abstraction because the notion of a link is an abstraction itself. There are no "real" links in a wireless network. Communication happens in a broadcast medium, and many parameters dictate whether a packet is received or not, such as the power of transmission, the path loss between transmitter and receiver, noise and interference, and also just luck. Still, the notion of a link can be useful in some circumstances, and some simulators use it to define simulation scenarios. You might be used to simulators where you can draw nodes and then simply draw lines between them to define their links. This is not how Castalia models a network.
Castalia does not model links between the nodes, it models the channel and radios to get a more realistic communication behaviour.
Topology is often confused with deployment (I confuse them myself sometimes). Deployment is just the placement of nodes on the field. There are multiple ways to define deployment in Castalia, if you wish, but it is not needed in all scenarios (more on this later). People can confuse deployment with topology, because under very simplistic assumptions certain deployments lead to certain topologies. Castalia does not make these assumptions. Study the manual (especially chapter 4) to get a better understanding of Castalia's modeling.
After you have understood the modeling in Castalia, if you still want a specific/custom topology for some reason, you can play with some parameters to achieve your topology, at least in a statistical sense. Assuming all nodes use the same radios and the same transmission power, the path loss between nodes becomes the defining factor of the "quality" of the link between them. In Castalia, you can define the path losses for each and every pair of nodes, using a path loss map file.
SN.wirelessChannel.pathLossMapFile = "../Parameters/WirelessChannel/BANmodels/pathLossMap.txt"
This tells Castalia to use the specific path losses found in the file instead of computing path losses based on a wireless channel model. The deployment does not matter in this case. At least it does not matter for communication purposes (it might matter for other aspects of the simulation, for example if we are sampling a physical process that depends on location).
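For reference, and as far as I recall the manual's format (check chapter 4 for the authoritative syntax), the path loss map file is plain text in which each line names a transmitting node and lists the path loss in dB towards each receiving node, along the lines of:

0>1:55.0,2:61.5
1>0:55.0,2:58.2
2>0:61.5,1:58.2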
In our own simulations with BANs, we have defined a path loss map based on experimental data, because other available models are not very accurate for BANs. For example, the lognormal shadowing model, which is Castalia's default, is not a good fit for BAN simulations. We did not want to enforce a specific topology; we just wanted a realistic channel model, and defining a path loss map based on experimental data was the best way.
I have the impression though that when you say topology, you are not only referring to which nodes could communicate with which nodes, but which nodes do communicate with which nodes. This is also a matter of the layers above the radio (MAC and routing). For example it's the MAC and Routing that allow for relay nodes or not.
Note that in Castalia's current implementations of the 802.15.6 MAC and 802.15.4 MAC, relay nodes are not allowed, so you cannot create a mesh topology with these default implementations. Only a star topology is supported. If you want something more, you'll have to implement it yourself.
Will I have loop problems with this topology, because S2 and S3 are both connected to S1 and S4?
Please check out this wiki page. It summarizes switching loops and the problems associated with them.
Obviously, to solve this issue you could just remove a link or switch (S1 or S4) that forms the physical loop in the first place, although you would then lose redundancy.
The ideal solution is to configure the Spanning Tree Protocol (STP) on these switches to dynamically block some interface(s) so that only one active path exists between the two endpoints (PCs on S2 to PCs on S3) at any time. Note that with spanning tree configured you do not get load balancing over the redundant link/switch.
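To see why blocking a single link breaks the loop, here is a toy sketch (not real STP, which elects a root bridge and exchanges BPDUs) that grows a spanning tree over the four switches from the question and reports the leftover link that a spanning tree would block:

fun main() {
    val links = listOf("S1" to "S2", "S1" to "S3", "S2" to "S4", "S3" to "S4")
    val active = mutableListOf<Pair<String, String>>()
    val reached = mutableSetOf("S1") // treat S1 as the root
    var grew = true
    while (grew) {
        grew = false
        for (link in links) {
            val (a, b) = link
            // Accept a link only if it connects a new switch to the tree.
            if ((a in reached) != (b in reached)) {
                active += link
                reached += listOf(a, b)
                grew = true
            }
        }
    }
    println("Active links:  $active")                    // [(S1, S2), (S1, S3), (S2, S4)]
    println("Blocked links: ${links - active.toSet()}")  // [(S3, S4)]
}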
What you need is the Spanning Tree Protocol (STP); turn this function on in your switches.
It computes a loop-free topology for any bridged Ethernet local area network.
The protocol disables links that are not part of the spanning tree, leaving a single active path between any two network nodes.
We often use RSTP (IEEE 802.1w) instead of STP (IEEE 802.1D), because the former provides faster spanning tree convergence after a topology change.
If you use VLANs in your environment, you may choose MSTP (Multiple Spanning Tree Protocol).
In a simple ring topology, you can use ring protocols too, such as G.8032 (ERPS):
G8032 wiki
http://en.wikipedia.org/wiki/Ethernet_Ring_Protection_Switching
G8032 archive
http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/cether/configuration/xe-3s/ce-xe-3s-book/ce-g8032-ering-pro.html
We have N cache nodes with basic consistent hashing in a ring.
Questions:
Is the data structure of this ring stored:
- On each of these nodes?
- Partly on each node, with its ranges?
- On a separate machine acting as a load balancer?
What happens to the ring when other nodes join it?
Thanks a lot.
I have found an answer to question no. 1.
Answer 1:
All the approaches are written in my blog:
http://ivoroshilin.com/2013/07/15/distributed-caching-under-consistent-hashing/
There are a few options for where to keep the ring's data structure:
Central point of coordination: A dedicated machine keeps the ring and works as a central load balancer that routes requests to the appropriate nodes.
Pros: Very simple implementation. This is a good fit for a non-dynamic system with a small number of nodes and/or a small amount of data.
Cons: Big drawbacks of this approach are scalability and reliability. Stable distributed systems don't have a single point of failure.
No central point of coordination – full duplication: Each node keeps a full copy of the ring. Applicable for stable networks. This option is used e.g. in Amazon Dynamo.
Pros: Queries are routed in one hop directly to the appropriate cache-server.
Cons: A server joining or leaving the ring requires notifying/updating all cache servers in the ring.
No central point of coordination – partial duplication: Each node keeps a partial copy of the ring. This option is a direct implementation of the Chord algorithm. In DHT terms, each cache machine has a predecessor and a successor, and when receiving a query it checks whether it holds the key. If it doesn't, a mapping function is used to determine which of its neighbors (successor and predecessor) has the least distance to that key, and it forwards the query to that neighbor. The process continues until the current cache machine finds the key and sends it back.
Pros: For highly dynamic membership changes, the previous option is not a good fit due to the heavy overhead of gossiping among nodes, so this option is the choice in that case.
Cons: No direct routing of messages. The complexity of routing a message to the destination node in a ring is O(lg N).
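As a concrete illustration of the ring itself, here is a minimal sketch in the full-duplication style, where every node would keep a copy of this structure (the hash function and names are my own simplifications):

import java.util.TreeMap

// A minimal consistent-hashing ring. Each server is placed on the ring at
// several "virtual node" positions to smooth out the key distribution.
class ConsistentHashRing(private val virtualNodesPerServer: Int = 100) {
    private val ring = TreeMap<Int, String>()

    // Stand-in hash; a production ring would use a stronger hash such as MD5/Murmur.
    private fun hash(key: String): Int = key.hashCode()

    fun addServer(server: String) {
        for (i in 0 until virtualNodesPerServer) ring[hash("$server#$i")] = server
    }

    fun removeServer(server: String) {
        for (i in 0 until virtualNodesPerServer) ring.remove(hash("$server#$i"))
    }

    // Route a key to the first server clockwise from the key's position.
    fun serverFor(key: String): String? {
        if (ring.isEmpty()) return null
        return (ring.ceilingEntry(hash(key)) ?: ring.firstEntry()).value
    }
}

fun main() {
    val ring = ConsistentHashRing()
    listOf("cache-a", "cache-b", "cache-c").forEach(ring::addServer)
    println(ring.serverFor("user:42"))
    ring.addServer("cache-d") // only keys falling near the new virtual nodes move
    println(ring.serverFor("user:42"))
}

This also hints at the answer to question no. 2: when a node joins, only the keys whose positions fall between the new node's virtual positions and their predecessors have to move, which is the main benefit of consistent hashing.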