I have come across many network simulators, and most of them are either on permanently (as in, you have to close the program to turn it off) or have a stop and start button. I have been looking for a network simulator for Windows that can simulate packet loss, delay, and bandwidth speeds of my choice, and that can also be cycled on or off on a set timer. I found one such program called FnLag, but the problem (for me at least) is that it is not free.
The shorter version of my question: does anyone know of a free network simulator for Windows or Linux that can simulate packet loss, delay, and bandwidth control, with separate downstream and upstream rules and TCP/UDP selection, and that can be cycled on and off (a burst/pulse feature)? I can elaborate further if necessary.
There is a very good open-source solution for network emulation called WANem.
WANem is a wide area network emulator. It supports features such as bandwidth limitation, latency, packet loss, and network disconnection, among other wide area network characteristics.
Here is the URL: http://wanem.sourceforge.net
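If you end up on Linux anyway, here is a rough sketch of the on/off cycling you describe using the kernel's netem qdisc through the tc command. This is not WANem itself, just plain iproute2; "eth0", the timings, and the impairment values are placeholders, and it needs root.

```python
# Rough sketch: cycle a netem impairment on and off on a timer (Linux only).
# Requires root and iproute2 (the tc command); "eth0", the timings, and the
# impairment values are placeholders, not recommendations.
import subprocess
import time

IFACE = "eth0"

def impair_on():
    # 100 ms delay, 5% loss, 1 Mbit/s rate limit (netem "rate" needs a reasonably recent kernel)
    subprocess.run(["tc", "qdisc", "add", "dev", IFACE, "root", "netem",
                    "delay", "100ms", "loss", "5%", "rate", "1mbit"], check=True)

def impair_off():
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=True)

if __name__ == "__main__":
    try:
        while True:
            impair_on()       # pulse of degraded network
            time.sleep(30)
            impair_off()      # back to normal
            time.sleep(30)
    except KeyboardInterrupt:
        subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"])  # best-effort cleanup
```

Note that a root netem qdisc like this only shapes egress (upstream) traffic; for separate downstream rules you would typically redirect ingress traffic through an ifb device and attach a second netem there.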
I want multiple IoT devices (say, 50) communicating with a server directly and asynchronously via TCP. Assume all of them have a heartbeat pulse every 30 seconds and may drop off and reconnect at variable times.
Can anyone advise me on the best way to make sure no data is dropped or blocked when multiple devices are communicating simultaneously?
TCP by itself ensures no data loss during communication between a client and a server. It does this by using sequence numbers and ACK messages.
Technically, before the actual data transfer happens, a TCP connection is created between the client (which can be an IoT device, or any other device) and the server. Then, the data is split into multiple packets and sent over the network through that connection. All TCP-related mechanisms like flow-control, error-detection, congestion-detection, and many others, take place once the data starts to flow.
The Wikipedia page for TCP is a pretty good start if you want to learn more about how it works.
Apart from that, as long as your server has enough capacity to handle the flow of requests coming from the devices, everything should work (at least in theory).
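To make the capacity point a bit more concrete, here is a minimal sketch (not a production design) of a server that handles many device connections concurrently; the newline-delimited framing, the port, and the application-level ACK are my own assumptions for illustration.

```python
# Minimal sketch: one asyncio TCP server handling many devices concurrently.
# Newline-delimited messages, port 9000, and the "ACK" reply are assumptions.
import asyncio

async def handle_device(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    peer = writer.get_extra_info("peername")
    try:
        while True:
            line = await reader.readline()   # one heartbeat/message per line
            if not line:                     # device disconnected
                break
            # TCP has already handled retransmission and ordering by the time we get here.
            print(f"{peer}: {line.decode().strip()}")
            writer.write(b"ACK\n")           # application-level acknowledgement
            await writer.drain()
    finally:
        writer.close()
        await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle_device, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```

Fifty devices sending a heartbeat every 30 seconds is a very light load for a single event-loop server like this; capacity only becomes interesting at much larger device counts or message rates.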
I don't think you are asking the right question. There is no way to make sure that no data is dropped or blocked. Networks do not always work (that is why the word "work" is in "network": to convince you otherwise).
The right question is: how do I make my distributed system as available and reliable as possible? The answer involves treating interruption and congestion as part of normal operation and building your software accordingly.
There is a timeless USENIX/ACM(?) paper from the late '70s or early '80s that established the notion that end-to-end protocols are much more effective than over-featured middle-to-middle protocols, and that most middle-to-middle guarantees amount to best effort. If you rely upon those guarantees, you are bound to fail. Sorry, I cannot find the reference right now (it is likely Saltzer, Reed, and Clark's "End-to-End Arguments in System Design"), but it is widely cited.
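In practical terms, "treating interruption as normal" on the device side usually boils down to a reconnect loop with backoff wrapped around the heartbeat. A rough sketch follows, where the host, port, and timing values are placeholders rather than anything from the question.

```python
# Minimal sketch: device-side heartbeat with reconnect and exponential backoff.
# Host, port, and timing values are placeholders.
import socket
import time

HOST, PORT = "server.example.com", 9000

def run_device():
    backoff = 1
    while True:
        try:
            with socket.create_connection((HOST, PORT), timeout=10) as sock:
                backoff = 1                      # reset backoff after a successful connect
                while True:
                    sock.sendall(b"HEARTBEAT\n")
                    sock.recv(64)                # wait for the application-level ACK
                    time.sleep(30)               # heartbeat interval
        except OSError:
            time.sleep(backoff)                  # connection failed or dropped: back off and retry
            backoff = min(backoff * 2, 60)

if __name__ == "__main__":
    run_device()
```

The point is not this particular loop but the shape of it: the device assumes the connection will break, and recovery is just part of its steady-state behaviour.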
Background:
I am using the RxAndroidBle library and have a requirement to connect to multiple devices at a time (as quickly as possible) and start communicating. I used RxBluetoothKit for iOS and have started to use RxAndroidBle on my Pixel 2. This worked as expected, and I could establish connections to 6-8 devices, as required, in a few hundred milliseconds. However, broadening my testing to phones such as the Samsung S8 and Nexus 6P, it seems that establishing a single connection can now take upwards of 5-6 seconds instead of 50-60 ms. I will assume for the moment that the disparity lies within the vendor-specific Bluetooth implementations. Ultimately, this means that connecting to, e.g., 5 devices now takes 30 seconds instead of less than 1 second.
Question:
From what I understand from the documentation and other questions asked, RxAndroidBle queues all scanning, connecting, and communication requests and executes them serially to be safe and to maintain stability given the variety of Bluetooth implementations in the Android ecosystem. However, is there currently a way to execute the requests (namely, connecting) in parallel to accept this risk and potentially cut my total time to establish multiple connections down to whichever device takes the longest to connect?
And side question: are there any ideas to diagnose what could possibly be taking 5 seconds to establish a connection with a device? Or do we simply need to accept that some phones will take that long in some instances?
However, is there currently a way to execute the requests (namely, connecting) in parallel to accept this risk and potentially cut my total time to establish multiple connections down to whichever device takes the longest to connect?
Yes. You may try to establish connections using autoConnect=true, which would prevent locking the queue for longer than a few milliseconds. The last connection should be started with autoConnect=false to kick off a scan. Some stack implementations handle this quite well, but your mileage may vary.
And side question: are there any ideas to diagnose what could possibly be taking 5 seconds to establish a connection with a device?
You can check the Bluetooth HCI snoop log. You may also try using a BLE sniffer to check what is actually happening "on air" (e.g., with an nRF51 Development Kit).
Or do we simply need to accept that some phones will take that long in some instances?
This is also an option, since there is usually little one can do about connection time. In my experience, BLE stack/firmware implementations vary wildly from one another.
I'm using OPNET IT Guru to simulate different networks. I've run the basic HomeLAN scenario, and by default it uses an Ethernet connection running at a data rate of 20 Kbps. Throughout the scenarios this is changed from 20 Kbps to 40 Kbps, then to 512 Kbps, and then to a T1 line running at 1.544 Mbps. My question is: does increasing the data rate of the line increase the throughput?
I have graph output from the program displaying my results; it is the chart in the forefront that is of interest.
In general, the signaling capacity of a data path is only one factor in the net throughput.
For example, TCP is known to be sensitive to latency. For any particular TCP implementation and path latency, there will be a maximum speed beyond which TCP cannot go regardless of the path's signaling capacity.
Also consider the source and destination of the traffic: changing the network capacity won't change the speed if the source is not sending the data any faster or if the destination cannot receive it any faster.
In the case of network emulators, also be aware that buffer sizes affect throughput. The network buffer must be at least as large as the signaling rate multiplied by the latency (the bandwidth-delay product). I am not familiar with the particulars of Opnet, but I have seen other emulators where it is possible to set a buffer size too small to support the selected rate and latency.
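As a back-of-the-envelope illustration of both limits (the 64 KiB window and 100 ms RTT below are made-up values, not taken from the Opnet scenario):

```python
# Back-of-the-envelope: TCP throughput ceiling and bandwidth-delay product.
# The 64 KiB window and 100 ms RTT are illustrative values only.
link_bps = 1_544_000        # T1 signaling rate, bits per second
rtt_s    = 0.100            # round-trip time, seconds
window_B = 64 * 1024        # TCP receive window, bytes (no window scaling)

# Maximum TCP throughput regardless of link speed: one window per round trip.
tcp_ceiling_bps = window_B * 8 / rtt_s
print(f"TCP ceiling: {tcp_ceiling_bps / 1e6:.2f} Mbit/s")    # ~5.24 Mbit/s

# Bandwidth-delay product: minimum buffering needed to keep the link full.
bdp_bytes = link_bps * rtt_s / 8
print(f"BDP: {bdp_bytes:.0f} bytes")                         # ~19300 bytes
```

With these example numbers the window/RTT ceiling (about 5.2 Mbit/s) sits above the T1 rate, so moving from 20 Kbps up to T1 would still raise throughput; with a longer RTT or a smaller window, it might not.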
I have written a couple of articles related to these topics which may be helpful:
This one discusses common network bottlenecks: Common Network Performance Problems
This one discusses emulator configuration issues: Network Emulators
Suppose there is a network that produces a lot of timeout errors when packets are transmitted over it. Timeouts can happen either because the network itself is inherently lossy (say, poor hardware) or because the network is highly congested, causing network devices to drop packets, which in turn leads to timeouts. What additional statistics about the transmitted traffic (like missing-packet errors, etc.) would help us find out whether the timeouts are happening due to poor hardware or due to too much network load?
Please note that we have access to only one node in the network (the one from which we are transmitting packets), so we cannot know the load being put on the network by other nodes. Similarly, we don't really have any information about the hardware being used in the network. Statistics are all we have.
A network node only has hardware information about its local collision domain, which on a standard network will be the cable that links the host to the switch.
All the TCP stack will know about lost packets is that it is not receiving acknowledgements and so needs to resend; there is no mechanism for devices between a source and destination (e.g., switches and routers) to tell the source that there is a problem.
Without access to any other nodes, the only way to ascertain whether your problem is load-based would be to run a test that sends consistent traffic over the network for a long period. If the packet retry count per second/minute/hour remains the same, that would suggest a hardware issue; if the losses only occur during peak traffic periods, the issue could be load-related. Of course, there could be situations where misconfigured hardware only becomes apparent during high-traffic periods, which brings us back to the main problem: you need access to network stats from beyond your single node.
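If the single node you control runs Linux, the retry count mentioned above can be sampled directly from the kernel's TCP counters without extra tooling. A rough sketch (it reads /proc/net/snmp, so Linux only, and the interval is arbitrary):

```python
# Rough sketch: sample the kernel's TCP retransmission counters over time (Linux only).
# A retransmission rate that tracks your own traffic level hints at congestion;
# a roughly constant rate regardless of load hints at a hardware/media problem.
import time

def read_tcp_counters():
    with open("/proc/net/snmp") as f:
        lines = [l.split() for l in f if l.startswith("Tcp:")]
    header, values = lines[0][1:], [int(v) for v in lines[1][1:]]
    return dict(zip(header, values))

def sample_retransmits(interval_s=60):
    prev = read_tcp_counters()
    while True:
        time.sleep(interval_s)
        cur = read_tcp_counters()
        sent    = cur["OutSegs"] - prev["OutSegs"]
        retrans = cur["RetransSegs"] - prev["RetransSegs"]
        rate = 100.0 * retrans / sent if sent else 0.0
        print(f"{time.strftime('%H:%M:%S')}  segs={sent}  retrans={retrans}  ({rate:.2f}%)")
        prev = cur

if __name__ == "__main__":
    sample_retransmits()
```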
In practice, nearly all loss on terrestrial network paths is due to either congestion or firewalls. Loss due to bit-errors is extremely rare. Even on wireless networks, forward error correction handles most bit/media/transmission errors. Congestion can be caused by a lot of different factors: any given network path will involve dozens of devices and if any one of them becomes overloaded for even a moment, packets will be dropped.
The only way to tell the difference between congestion-induced packet loss and media errors is that media errors occur independently of load. In other words, the loss rate will be the same whether you are sending a lot of data or only a little.
To test that, you will need some control, or at least knowledge, of the load on the path. Since you don't have control and the only knowledge you have is from source-node observation, the best you can do is to take test samples (using ping is the easiest) around the clock and throughout the week, recording loss rates and latencies. These should give you an idea of when the path is relatively idle. If loss rates remain significant even when the path is (probably) idle, then there might be a media-loss issue. But again, that is extremely rare.
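A crude way to take those around-the-clock samples from the one node you do control is to shell out to the system ping on a schedule and log loss and latency. The sketch below assumes the Linux iputils ping output format and uses a placeholder target address:

```python
# Crude sketch: sample loss rate and average RTT once per interval and log them.
# Shells out to the system ping; the parsing assumes Linux iputils output format.
import re
import subprocess
import time

TARGET = "192.0.2.1"     # placeholder address for the far end of the path

def ping_sample(count=20):
    out = subprocess.run(["ping", "-c", str(count), "-i", "0.2", TARGET],
                         capture_output=True, text=True).stdout
    loss = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
    rtt  = re.search(r"= [\d.]+/([\d.]+)/", out)      # min/avg/max/mdev -> avg
    return (float(loss.group(1)) if loss else None,
            float(rtt.group(1)) if rtt else None)

if __name__ == "__main__":
    while True:
        loss_pct, avg_rtt = ping_sample()
        print(f"{time.strftime('%Y-%m-%d %H:%M:%S')}  loss={loss_pct}%  avg_rtt={avg_rtt} ms")
        time.sleep(300)   # one sample every 5 minutes, around the clock
```

A week of these logs is usually enough to see whether the loss rate follows a daily traffic pattern or stays flat.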
For background, I have written a few articles on the subject:
Loss, Latency, and Speed, discussing what statistics you can observe about a path and what they mean.
Common Network Performance Problems, discussing the most common components in a network path and how they affect performance (congestion).
Imagine you have many clustered servers, across many hosts, in a heterogeneous network environment, such that the connections between servers may have wildly varying latencies and bandwidth. You want to build a map of the connections between servers by transferring data between them.
Of course, this map may become stale over time as the network topology changes, but let's ignore those complexities for now and assume the network is relatively static.
Given the latencies between nodes in this host graph, calculating the bandwidth is a relatively simple timing exercise. I'm having more difficulty with the latencies, however. To get round-trip time, it is a simple matter of timing a return-trip ping from the local host to a remote host: both timing events (start, stop) occur on the local host.
But what if I want one-way times, under the assumption that the latency is not equal in both directions? Assuming that the clocks on the various hosts are not precisely synchronized (or at least that their error is of the same magnitude as the latencies involved), how can I calculate the one-way latency?
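For concreteness, the round-trip measurement I have in mind is nothing more elaborate than the following sketch (against a hypothetical UDP echo service; the address is a placeholder). It is easy precisely because both timestamps come from the local clock:

```python
# Minimal sketch of the easy case: round-trip time, both timestamps on the local host.
# Assumes some UDP echo service is listening at ECHO_ADDR (placeholder address).
import socket
import time

ECHO_ADDR = ("198.51.100.7", 7)      # placeholder host running a UDP echo service

def measure_rtt(timeout=2.0):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        t_start = time.perf_counter()    # local clock, event 1
        s.sendto(b"probe", ECHO_ADDR)
        s.recvfrom(1024)                 # blocks until the echo comes back
        t_stop = time.perf_counter()     # local clock, event 2
    return (t_stop - t_start) * 1000.0   # milliseconds

if __name__ == "__main__":
    print(f"RTT: {measure_rtt():.2f} ms")
```

Splitting that number into its two one-way components is the part I can't see how to do without synchronized clocks.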
In a related question: is this asymmetric latency (where a link is quicker in one direction than the other) common in practice? For what reasons or hardware configurations? I'm certainly aware of asymmetric bandwidth scenarios, especially on last-mile consumer links such as DSL and cable, but I'm not so sure about latency.
Added: After considering the comment below, the second portion of the question is probably better off on serverfault.
To the best of my knowledge, asymmetric latencies -- especially "last mile" asymmetries -- cannot be automatically determined, because any network time synchronization protocol is equally affected by the same asymmetry, so you don't have a point of reference from which to evaluate the asymmetry.
If each endpoint had, for example, its own GPS clock, then you'd have a reference point to work from.
In Fast Measurement of LogP Parameters for Message Passing Platforms, the authors note that latency measurement requires clock synchronization external to the system being measured. (Boldface emphasis mine, italics in original text.)
Asymmetric latency can only be measured by sending a message with a timestamp ts, and letting the receiver derive the latency from tr - ts, where tr is the receive time. This requires clock synchronization between sender and receiver. Without external clock synchronization (like using GPS receivers or specialized software like the network time protocol, NTP), clocks can only be synchronized up to a granularity of the roundtrip time between two hosts [10], which is useless for measuring network latency.
No network-based algorithm (such as NTP) will eliminate last-mile link issues, though, since every input to the algorithm will itself be uniformly subject to the performance characteristics of the last-mile link and is therefore not "external" in the sense given above. (I'm confident it's possible to construct a proof, but I don't have time to construct one right now.)
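To make the quoted tr - ts computation concrete, here is a sketch in which the sender stamps a UDP packet and the receiver subtracts. The port is a placeholder, and unless the two clocks are externally synchronized, the unknown clock offset lands directly in the result, which is exactly the point above:

```python
# Sketch of the tr - ts computation from the quoted paper.
# Only meaningful if sender and receiver clocks are externally synchronized
# (GPS, PTP, ...); otherwise the unknown clock offset is folded into the "latency".
import socket
import struct
import time

PORT = 9999   # placeholder port

def sender(dest_ip):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        ts = time.time_ns()                      # sender clock
        s.sendto(struct.pack("!Q", ts), (dest_ip, PORT))

def receiver():
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("0.0.0.0", PORT))
        data, _ = s.recvfrom(64)
        tr = time.time_ns()                      # receiver clock
        (ts,) = struct.unpack("!Q", data)
        one_way_ms = (tr - ts) / 1e6             # = true latency + clock offset
        print(f"apparent one-way latency: {one_way_ms:.3f} ms")
```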
There is a project called OWAMP (One-Way Active Measurement Protocol, whose client tool is the "one-way ping", owping) designed specifically to solve this issue. There has also been activity on the LKML around adding high-resolution timestamps to incoming packets (SO_TIMESTAMP, SO_TIMESTAMPNS, etc.) to assist in calculating this statistic.
http://www.internet2.edu/performance/owamp/
There's even a Java version:
http://www.av.it.pt/jowamp/
Note that packet timestamping really needs hardware support, and many current-generation NICs only offer millisecond resolution, which may be out of sync with the host clock. There are MSDN articles in the DDK about synchronizing host and NIC clocks that demonstrate the potential problems. Nanosecond timestamps from the TSC are problematic due to per-core differences and may require the Nehalem architecture to work properly at the required resolution.
http://msdn.microsoft.com/en-us/library/ff552492(v=VS.85).aspx
You can measure asymmetric latency on a link by sending different-sized packets to a port that returns a fixed-size packet, for example by sending UDP packets to a port that replies with an ICMP error message. The ICMP error message is always the same size, but you can adjust the size of the UDP packet you send.
See http://www.cs.columbia.edu/techreports/cucs-009-99.pdf
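As a rough illustration of that probing idea (not the paper's actual tool): on Linux, a connected UDP socket surfaces an ICMP port-unreachable reply as a ConnectionRefusedError on the next recv(), so you can time the fixed-size reply to variable-size probes without raw sockets. The target host and port below are placeholders, and ICMP replies may be filtered or rate-limited.

```python
# Sketch of the variable-size-probe idea: send UDP datagrams of different sizes to a
# (hopefully) closed port and time the fixed-size ICMP port-unreachable reply.
# Linux-specific: on a connected UDP socket the ICMP error surfaces as ECONNREFUSED
# on the next recv(). Target address/port are placeholders.
import socket
import time

TARGET = ("203.0.113.9", 33434)      # placeholder host and (likely closed) port

def probe(payload_size, timeout=2.0):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.connect(TARGET)
        t0 = time.perf_counter()
        s.send(b"x" * payload_size)
        try:
            s.recv(1)                           # wakes up when the ICMP error arrives
        except ConnectionRefusedError:
            return (time.perf_counter() - t0) * 1000.0
        except socket.timeout:
            return None                         # ICMP filtered or rate-limited
    return None

if __name__ == "__main__":
    for size in (64, 512, 1400):
        rtt = probe(size)
        label = "timeout/filtered" if rtt is None else f"{rtt:.2f} ms"
        print(f"{size:5d} bytes -> {label}")
```

Because only the outbound datagram changes size while the ICMP reply stays fixed, the way the measured time grows with payload size can be attributed to the forward direction, which is, as I understand it, the asymmetry the cited report exploits.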
In the absence of a synchronized clock, the asymmetry cannot be measured, as proven in the 2011 paper "Fundamental Limits on Synchronizing Clocks Over Networks".
https://www.researchgate.net/publication/224183858_Fundamental_Limits_on_Synchronizing_Clocks_Over_Networks
The sping tool is a newer development in this space; it uses clock synchronization against nearby NTP servers, or an even more accurate source in the form of a GNSS box, to estimate asymmetric latencies.
The approach is covered in more detail in this blog post.