How to define topology in Castalia-3.2 for WBAN - simulator

How can I define a topology in Castalia-3.2 for WBAN?
How can I import a topology from OMNeT++ into Castalia?
Where is the topology defined in the default WBAN scenario in Castalia?
With regards,
thanks

Topology of a network is an abstraction that shows the structure of the communication links in the network. It is an abstraction because the notion of a link is itself an abstraction: there are no "real" links in a wireless network. Communication happens in a broadcast medium, and many parameters dictate whether a packet is received or not, such as the transmission power, the path loss between transmitter and receiver, noise and interference, and also just luck. Still, the notion of a link can be useful in some circumstances, and some simulators use it to define simulation scenarios. You might be used to simulators where you can draw nodes and then simply draw lines between them to define their links. This is not how Castalia models a network.
Castalia does not model links between the nodes, it models the channel and radios to get a more realistic communication behaviour.
Topology is often confused with deployment (I confuse them myself sometimes). Deployment is just the placement of nodes on the field. There are multiple ways to define deployment in Castalia, if you wish, but it is not needed in all scenarios (more on this later). People can confuse deployment with topology, because under very simplistic assumptions certain deployments lead to certain topologies. Castalia does not make these assumptions. Study the manual (especially chapter 4) to get a better understanding of Castalia's modeling.
After you have understood the modeling in Castalia, if you still want a specific/custom topology for some reason, you can play with some parameters to achieve that topology, at least in a statistical sense. Assuming all nodes use the same radios and the same transmission power, the path loss between nodes becomes the defining factor of the "quality" of the link between them. In Castalia, you can define the path loss for each and every pair of nodes using a pathloss map file:
SN.wirelessChannel.pathLossMapFile = "../Parameters/WirelessChannel/BANmodels/pathLossMap.txt"
This tells Castalia to use the specific path losses found in the file instead of computing path losses based on a wireless channel model. The deployment does not matter in this case. At least it does not matter for communication purposes (it might matter for other aspects of the simulation, for example if we are sampling a physical process that depends on location).
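For reference, the pathloss map file simply lists pairwise path losses in dB. If I recall the Castalia manual correctly, each line names a transmitter followed by receiver:pathloss pairs; the node IDs and dB values below are purely illustrative:

```
0>1:55.0,2:62.5
1>0:56.2,2:58.3
2>0:61.9,1:58.0
```

Note that the losses need not be symmetric, which is one reason a measured map can capture BAN channels better than an analytical model.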
In our own simulations with BAN, we have defined a pathloss map based on experimental data, because the other available models are not very accurate for BAN. For example, the log-normal shadowing model, which is Castalia's default, is not a good fit for BAN simulations. We did not want to enforce a specific topology, we just wanted a realistic channel model, and defining a pathloss map based on experimental data was the best way.
I have the impression, though, that when you say topology, you are not only referring to which nodes could communicate with which nodes, but which nodes do communicate with which nodes. This is also a matter of the layers above the radio (MAC and routing). For example, it is the MAC and routing layers that determine whether relay nodes are allowed.
Note that in Castalia's current implementations of 802.15.6 MAC and 802.15.4 MAC, relay nodes are not allowed. So you cannot create a mesh topology with these default implementations. Only a star topology is supported. If you want something more, you will have to implement it yourself.


Is it possible to generate a fat tree with any number of nodes?

As my question title suggests, I am confused about the fat-tree structure.
I am trying to write a program where I get a certain number of nodes as my input and I should generate an output that builds a fat-tree topology out of them.
For example, if my input is 4, my output must represent a fat-tree topology made of 4 nodes (n1, n2, n3, n4).
As far as I could read, a fat-tree topology depends only on the number of switch ports rather than the number of nodes. This is why I am confused about whether it is possible to create a fat-tree structure with the number of nodes as my only input at all.
I am very new to networking concepts, so I would appreciate any guidance.
If I understood the question, you have a certain number of nodes as input, and you want to build a fat-tree topology with these nodes.
Unfortunately, you cannot create a complete FatTree topology with an arbitrary number of nodes.
If you are confused about the construction, I suggest having a look at this link.
For my master thesis, I explored some data center topologies and their feasibility for network tomography-based monitoring applications. This resulted in a few Python models—fat-tree included—implemented using the networkx library, which are available on GitHub. The code is not the prettiest, especially the visualisation parts, and could surely be improved, but I hope it can still be useful to gain an intuition about how these topologies scale.
If you start playing around with the different scales of the fat-tree, you will quickly see that Giuseppe is right. A fat-tree has a very strict structure that depends only on the port-number parameter. It is therefore indeed not possible to construct a fat-tree with an arbitrary number of nodes.
Although I'm late in answering this, and others have already given the correct answer, I'd still like to add some value with respect to FatTree topology design.
For a fat-tree topology built from k-port switches, you can derive these values using tree data-structure properties and the topology requirements:
- number of core switches = (k/2)^2
- number of pods = k
- number of aggregation switches in each pod = k/2
- number of edge switches in each pod = k/2
- each aggregation switch connected to k/2 core switches and k/2 edge switches
- each edge switch connected to k/2 aggregation switches and k/2 nodes
- i.e., each pod consists of (k/2)^2 nodes
- number of nodes possible to be connected to the network = (k^3)/4
Since the number of nodes that can be connected to this network is expressed in terms of k, you can now clearly see that you cannot create a fat-tree topology with an arbitrary number of nodes. The node count can only take the form (k^3)/4 for even values of k (so that the counts above are integers), e.g., 16, 54, 128, and so on. In other words, you cannot have a proper fat-tree topology with a random node count (different from those listed above)!
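The formulas above are easy to check in code. A small helper (illustrative; the function name is made up) that derives all the counts from the port count k:

```python
def fat_tree_counts(k):
    """Derive fat-tree element counts from the switch port count k (k must be even)."""
    if k <= 0 or k % 2 != 0:
        raise ValueError("k must be a positive even integer")
    half = k // 2
    return {
        "core_switches": half ** 2,          # (k/2)^2
        "pods": k,
        "agg_switches_per_pod": half,        # k/2
        "edge_switches_per_pod": half,       # k/2
        "hosts_per_pod": half ** 2,          # each edge switch serves k/2 hosts
        "hosts_total": k ** 3 // 4,          # (k^3)/4
    }

for k in (4, 6, 8):
    print(k, "->", fat_tree_counts(k)["hosts_total"], "hosts")
# 4 -> 16 hosts, 6 -> 54 hosts, 8 -> 128 hosts
```

Trying k = 5 (or any odd value) raises an error, which is exactly the point: only these discrete sizes exist.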

P2P Network Bootstrapping

For P2P networks, I know that some networks have initial bootstrap nodes. However, one would assume that with all new nodes learning of peers from said bootstrap nodes, the network would have difficulty adding new peers and would end up with many cliques - unbalanced, for lack of a better word.
Are there any methods to prevent this from occurring? I'm aware that some DHTs structure their routing tables to be a bit less susceptible to this, but I would think the problem would still hold.
To clarify, I'm asking about what sort of peer mixing algorithms exist/are commonly used for peer to peer networks.
However, one would assume that with all new nodes learning of peers from said bootstrap nodes, the network would have difficulty adding new peers and would end up with many cliques - unbalanced, for lack of a better word.
If the bootstrap nodes were the only source of peers and no further mixing occurred, that might be an issue. But in practice bootstrap nodes exist only for bootstrapping (possibly only ever once), and then other peer-discovery mechanisms take over.
Natural mixing caused by connection churn should be enough to randomize graphs over time, but proactive measures such as a globally agreed-on mixing algorithm to drop certain neighbors in favor of others can speed up that process.
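To illustrate the mixing idea (a toy sketch, not any real protocol; the function name and parameters are made up), each node could periodically trade a random sample of its neighbor list with a randomly chosen neighbor:

```python
import random

def shuffle_round(peers, fanout=3):
    """One round of a gossip-style peer exchange: every node trades a random
    sample of its neighbor list with one randomly chosen neighbor.
    `peers` maps node id -> set of known peer ids."""
    for node in list(peers):
        if not peers[node]:
            continue
        partner = random.choice(sorted(peers[node]))
        sent = set(random.sample(sorted(peers[node]), min(fanout, len(peers[node]))))
        back = set(random.sample(sorted(peers[partner]), min(fanout, len(peers[partner]))))
        # Merge what each side learned; a node never lists itself as a peer.
        peers[node] |= back - {node}
        peers[partner] |= sent - {partner}
    return peers
```

Repeated rounds spread peer knowledge beyond the initial bootstrap-derived cliques; real shuffle protocols (Cyclon, for example) additionally cap view sizes and age out old entries.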
I'm aware that some DHTs structure their routing tables to be a bit less susceptible to this, but I would think the problem would still hold.
The local buckets in Kademlia should provide an exhaustive view of the neighborhood; medium-distance buckets will cover different parts of the keyspace for different nodes; and the farthest buckets will preferentially contain long-lived nodes, which should have a good view of the network.
This doesn't leave much room for clique-formation.
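The bucket structure above can be sketched in a few lines: in Kademlia a peer falls into bucket i when its XOR distance from the local node lies in [2^i, 2^(i+1)), i.e. i is the index of the highest differing bit. A minimal illustration (not tied to any particular implementation):

```python
def bucket_index(node_id: int, peer_id: int) -> int:
    """Index of the Kademlia bucket that peer_id falls into, as seen from
    node_id: the position of the highest bit where the two ids differ."""
    distance = node_id ^ peer_id
    if distance == 0:
        raise ValueError("a node does not bucket itself")
    return distance.bit_length() - 1

local = 0b1010_0000
print(bucket_index(local, 0b1010_0001))  # closest neighborhood -> bucket 0
print(bucket_index(local, 0b0010_0000))  # far half of the keyspace -> bucket 7
```

Low buckets can only ever contain a handful of possible ids (hence the exhaustive neighborhood view), while each high bucket covers half the keyspace, which is why different nodes end up with the complementary views described above.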

What does the term ‘fully-converged’ mean?

What does the term 'fully-converged' mean in networking? And what does "prone to looping due to convergence" mean?
I'm reading about different protocols like RIP, OSPF, and BGP, and I didn't really understand those terms.
I'm looking around but I can't find any specific answers about it.
Any ideas?
When a network contains layer-2 and layer-3 devices, these devices take some time to 'understand' the topology of the network.
This happens as the layer-3 devices build up a routing table from the various routes, which may be static, dynamic, or directly connected. The table grows until the complete topology of the network is mapped out in it. Then we can say the network is fully converged.
Routing-management protocols like RIP, OSPF, and BGP are all about the distribution of information about network paths between and among network nodes. All such protocols tend to have a model where each node has its own view of what some or all of the network-path topology looks like. The nodes exchange information to drive all nodes to a common, "converged", view of the network. A "fully converged" network is one in which all the nodes fully understand the paths through it.
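To make "convergence" concrete, here is a toy distance-vector exchange in Python (illustrative only; real RIP adds timers, split horizon, and so on). The network is fully converged once a whole exchange round changes no routing table:

```python
INF = float("inf")

def converge(links):
    """links: {(u, v): cost} for undirected edges.  Returns each node's
    distance table and the number of exchange rounds until convergence."""
    nodes = {n for edge in links for n in edge}
    nbrs = {n: set() for n in nodes}
    cost = {}
    for (u, v), c in links.items():
        nbrs[u].add(v)
        nbrs[v].add(u)
        cost[(u, v)] = cost[(v, u)] = c
    # Initially each node only knows the distance to itself.
    dist = {n: {m: (0 if m == n else INF) for m in nodes} for n in nodes}
    rounds = 0
    changed = True
    while changed:        # keep exchanging until a round changes nothing
        changed = False
        rounds += 1
        for u in nodes:
            for v in nbrs[u]:
                # u reads v's table and keeps any shorter path via v.
                for dest, d in dist[v].items():
                    if cost[(u, v)] + d < dist[u][dest]:
                        dist[u][dest] = cost[(u, v)] + d
                        changed = True
    return dist, rounds
```

On a triangle with costs A-B = 1, B-C = 1, A-C = 5, every table settles on the two-hop cost of 2 between A and C, and the loop exits: that final no-change round is the "fully converged" state.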

Simulation of LTE networks using Mininet

I am planning to do a project on Wi-Fi offloading using Software-Defined Networking: basically, switching from Wi-Fi to LTE and vice versa based on signal strength. Could anybody let me know how I could simulate this and carry out experimental tests? I know there is a tool called Mininet, but I am not sure whether it can create base stations for these experiments. Is it possible to simulate this using Mininet?
Thanks!
You can set up some link parameters such as bandwidth and latency in Mininet to "emulate" a wireless base station (LTE/Wi-Fi). However, I'm not quite sure how you would emulate signal strength. You could, of course, write a program with a "node" that moves around in a 3-axis coordinate space, where the x, y, z values give a signal-strength value based on output power and the distance vector, and when it crosses a threshold have it switch to a node (Mininet link) that is closer, i.e., change the forwarding tables so your "stream"/tunnel uses another link.
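A sketch of that signal-strength idea (purely illustrative, not a Mininet API; the model, function names, and thresholds are all assumptions): compute RSSI from distance with a log-distance path-loss model and hand over once it drops below a threshold.

```python
import math

def rssi_dbm(tx_power_dbm, distance_m, ref_loss_db=40.0, exponent=3.0):
    """Received signal strength under a log-distance path-loss model:
    PL(d) = PL(1 m) + 10 * n * log10(d).  The defaults are made-up
    illustrative values, not calibrated to any real radio."""
    distance_m = max(distance_m, 1.0)
    return tx_power_dbm - (ref_loss_db + 10 * exponent * math.log10(distance_m))

def pick_link(pos, wifi_ap, threshold_dbm=-75.0):
    """Offloading policy sketch: stay on Wi-Fi while its RSSI is above the
    threshold, otherwise hand over to LTE."""
    strength = rssi_dbm(20.0, math.dist(pos, wifi_ap))
    return "wifi" if strength >= threshold_dbm else "lte"

print(pick_link((0, 0), (10, 0)))    # near the AP  -> "wifi"
print(pick_link((500, 0), (0, 0)))   # far from AP  -> "lte"
```

The returned label would then drive the actual switching, e.g. rewriting flow entries so traffic uses the other Mininet link.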
I did an undergraduate thesis on wireless network emulation in Mininet and used Mininet and ns-3 together using this project. We primarily did validity testing on this platform to determine its accuracy and limitations (especially at scale). The wireless model is very good up until a very clear performance degradation when the CPU usage reaches (100/n)%, where n is the number of available cores on the machine (for a single-threaded implementation).
It also has the functionality to set up multiple base stations, although I didn't delve deeply into this beyond initially testing that it works. The main benefit of this tool is its accurate modelling of performance degradation due to distance from the source and due to interference.
A lot of time has passed, and it seems better options are now available:
- ns-3 has LTE features (it seems it got them through LENA)
- for mininet-wifi, see here, here and here

Consistent hashing: Where is the data-structure of ring kept

We have N cache-nodes with basic consistent-hashing in a ring.
Questions:
Is the data structure of this ring stored:
- On each of these nodes?
- Partly on each node, covering only its ranges?
- On a separate machine acting as a load balancer?
What happens to the ring when other nodes join it?
Thanks a lot.
I have found an answer to question no. 1.
Answer 1:
All the approaches are written in my blog:
http://ivoroshilin.com/2013/07/15/distributed-caching-under-consistent-hashing/
There are a few options on where to keep ring’s data structure:
Central point of coordination: A dedicated machine keeps the ring and works as a central load balancer that routes requests to the appropriate nodes.
Pros: Very simple implementation. This would be a good fit for a not-very-dynamic system with a small number of nodes and/or little data.
Cons: The big drawbacks of this approach are scalability and reliability. Stable distributed systems don’t have a single point of failure.
No central point of coordination – full duplication: Each node keeps a full copy of the ring. Applicable for stable networks. This option is used e.g. in Amazon Dynamo.
Pros: Queries are routed in one hop directly to the appropriate cache-server.
Cons: Join/Leave of a server from the ring requires notification/amendment of all cache-servers in the ring.
No central point of coordination – partial duplication: Each node keeps a partial copy of the ring. This option is a direct implementation of the Chord algorithm. In DHT terms, each cache machine has a predecessor and a successor, and on receiving a query it checks whether it holds the key. If there is no such key on that machine, a mapping function is used to determine which of its neighbors (successor and predecessor) has the least distance to that key, and the query is forwarded to that neighbor. The process continues until the current cache machine finds the key and sends it back.
Pros: For highly dynamic membership changes, the previous option is not a fit due to the heavy overhead of gossiping among nodes, so this option is the choice in that case.
Cons: No direct routing of messages. The complexity of routing a message to the destination node in a ring is O(lg N).
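For intuition, here is a minimal sketch of the ring structure itself (the "full duplication" option: every node could hold this whole object). The class name, virtual-node count, and hash choice are illustrative:

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring.  Virtual nodes (vnodes) smooth the
    distribution of keys across physical nodes."""

    def __init__(self, nodes=(), vnodes=50):
        self.vnodes = vnodes
        self._keys = []   # sorted hash positions on the ring
        self._ring = {}   # position -> physical node name
        for n in nodes:
            self.add(n)

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def add(self, node):
        for i in range(self.vnodes):
            h = self._hash(f"{node}#{i}")
            bisect.insort(self._keys, h)
            self._ring[h] = node

    def remove(self, node):
        for i in range(self.vnodes):
            h = self._hash(f"{node}#{i}")
            self._keys.remove(h)
            del self._ring[h]

    def lookup(self, key):
        """The first virtual node clockwise from the key's hash owns it."""
        h = self._hash(key)
        idx = bisect.bisect(self._keys, h) % len(self._keys)
        return self._ring[self._keys[idx]]
```

When a node joins or leaves, only the keys on its arcs move (roughly 1/N of all keys), which is the whole point of consistent hashing compared to plain modulo placement.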
