I am new to VEINS/OMNeT++, am trying various broadcast suppression techniques, and would like to calculate the packet loss ratio. I assume I have to use this formula:
Packet Loss Ratio = TotalLostPackets / SentPackets
But since some nodes send 0 packets, is there an easy way to handle this in the OMNeT++ analysis file (.anf), or maybe in VEINS, without doing manual adjustments? Otherwise, if any node sends 0 packets, the division by zero makes all graphs appear as infinity.
Thank you!
This does not directly answer your question, but I would warn against using this equation in a simulation where not all nodes send the same number of packets or where broadcasts are sent. Each packet sent as a broadcast can potentially be received by many other nodes, meaning that even a simulation where only 1 packet is sent might record 7 successful receptions and 5 packet losses. Your equation would calculate the loss rate as 5/1 = 500%, whereas I would find a rate of 5/12 ≈ 42% more reasonable.
As a side effect of calculating the loss rate as "fail/(success+fail)", you will not need to take special care of nodes that did not send or receive any packets.
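As a rough illustration, here is a minimal per-node sketch of that "fail/(success+fail)" metric; the counter names received_ok and lost are placeholders for whatever scalars your VEINS/OMNeT++ setup records, not actual statistic names.

def loss_ratio(received_ok: int, lost: int):
    """Per-node loss ratio as fail / (success + fail); None if the node saw no packets."""
    total = received_ok + lost
    if total == 0:
        return None          # exclude this node from the plot instead of showing infinity
    return lost / total

print(loss_ratio(7, 5))      # ~0.42 for the broadcast example above
print(loss_ratio(0, 0))      # None instead of a division by zero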
Related
Context
We have an unsteady transmission channel. Some packets may be lost.
Sending a single network packet in any direction (from A to B or from B to A) takes 3 seconds.
We allow a signal delay of 5 seconds, no more. So we have a 5-second buffer. We can use those 5 seconds however we want.
Currently we use only 80% of the transmission channel, so we have spare capacity for about 1/4 more traffic than we send now.
The quality of the video cannot be worsened.
Problem
We need to make the quality better. How to handle lost packets?
Solution proposition
One thing is certain - we cannot use TCP in this case, because when TCP detects a problem, it requests retransmission of the lost data. That would mean the packet arrives only after roughly 9 seconds (3 s for the original send, 3 s for the retransmission request, 3 s for the retransmitted packet), which is more than the limit.
Therefore we need to use UDP and handle those errors ourselves. How do we do that? How do we make sure that fewer packets are lost than currently, without retransmitting them?
It's a complicated solution, but by far the best option will be to add forward error correction (FEC). This is how images are transferred from space probes, where the latency is measured in minutes or hours. It's also used by cell phones, where delayed packets are bad for two-way communication.
A not-as-good but easier-to-implement option is to use UDT. This is a library that adds TCP-like retransmission on top of UDP, but it allows you much more control over the protocol.
Context
This is essentially the same scenario as above: an unsteady transmission channel where some packets may be lost, 3 seconds to send a packet in either direction, a 5-second delay budget, only 80% of the channel currently used, and video quality that cannot be worsened. In addition, we need to use UDP.
Problem
We need to make the quality better. How do we handle lost packets? We need to use UDP and handle those errors ourselves. How do we do that? How do we make sure that fewer packets are lost than currently (we can't guarantee 100%, so we only want it to be better), without retransmitting them? We can do anything; this is theory.
There are different ways to handle this. It depends on what application you are using. Are you doing real-time video streaming? Are there stringent requirements?
As you said you have a buffer, you can actually maintain a buffer for the packets and then send an acknowledgement for the lost packets (if you feel you can wait).
As this is a video application, send acknowledgements only for the key frames. Make sure that you have a key (I) frame and then do interpolation at the receiver side.
Look into something called forward error correction, fountain codes, and Luby transform (LT) codes. Here, you encode packets 1 and 2 and produce packet 3. If packet 1 is lost, use packet 3 and packet 2 to get packet 1 back at the receiver side. Basically you send redundant packets. It's a little harsh on the network, but you get most of the data.
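As a toy illustration of the "packet 3 from packets 1 and 2" idea, here is a minimal XOR-parity sketch. Real systems would use Reed-Solomon or fountain/LT codes; the payloads below are made up and must be of equal length.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(p1: bytes, p2: bytes) -> bytes:
    """The redundant 'packet 3' sent alongside p1 and p2."""
    return xor_bytes(p1, p2)

def recover(surviving: bytes, parity: bytes) -> bytes:
    """Rebuild the missing packet from the surviving one and the parity packet."""
    return xor_bytes(surviving, parity)

p1, p2 = b"frame-chunk-0001", b"frame-chunk-0002"   # made-up payloads of equal length
p3 = make_parity(p1, p2)                            # extra redundancy on the wire
# Suppose p1 is lost in transit but p2 and p3 arrive:
assert recover(p2, p3) == p1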
When using a non-beaconing Zigbee network, I know that the 802.15.4 spec defines the use of CSMA-CA to control when two devices get access to a channel, to make sure no two nodes "step on each other's toes", so to speak. My understanding is that, very simply, it requires each node to "listen before talking". Is that correct? Is there more information on the Zigbee implementation of this? In other words, where do I go to learn more about how to program a Zigbee chip to implement the same?
Also, if I have 20 end nodes sending data asynchronously to one coordinator, is the channel access mechanism enough to ensure that they do not transmit at the same time and flood the coordinator? If five nodes (for example) attempt to transmit at the same time, how will mutual exclusion be ensured? Where can I get some details on that?
Thanks
Rishi
The maximum size of an 802.15.4 frame is 127 bytes of PHY payload. So the maximum duration of a frame (running at the standard 250 kbps rate on the 2.4 GHz band) is about 5 ms when you take the preamble etc. into account. If your end devices are polling at 1 poll/second, it should easily manage 20 end nodes, I think. If it gets to be too much, the exponential backoff should ease the collision rate.
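As a back-of-the-envelope check of that estimate, assuming the 2.4 GHz PHY (127-byte maximum PSDU plus a 6-byte synchronisation/PHY header) and the 1 poll/second per node rate assumed above:

BITRATE_BPS = 250_000        # standard 2.4 GHz band data rate
MAX_PSDU_BYTES = 127         # maximum PHY payload (aMaxPHYPacketSize)
SYNC_HEADER_BYTES = 6        # preamble (4) + SFD (1) + PHY header (1)

frame_bits = (MAX_PSDU_BYTES + SYNC_HEADER_BYTES) * 8
frame_time_s = frame_bits / BITRATE_BPS
print(f"worst-case frame airtime: {frame_time_s * 1e3:.2f} ms")   # ~4.26 ms

nodes, polls_per_second = 20, 1
utilization = nodes * polls_per_second * frame_time_s
print(f"raw channel utilization: {utilization:.1%}")              # ~8.5%, before ACKs and backoff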
I'm sure you've seen these when searching, but just in case:
http://www.prismmodelchecker.org/casestudies/zigbee.php
http://www.dagstuhl.de/Materials/Files/07/07101/07101.FruthMatthias.Slides.pdf
http://www-public.it-sudparis.eu/~gauthier/Tools/802_15_4_MAC_PHY_Usage.pdf
I've read that iperf basically tries to send as much information down a connection as quickly as possible, reporting on the throughput achieved. This tool is especially useful for determining the volume of data that links between two machines can supply.
Is it possible to gather the same results by sending regular data, i.e. not test data?
What I'm trying to do is send data in the foreground while gathering statistics in the background (throughput and jitter).
How does iperf calculate these two values?
This is the closest thing I've found
http://openmaniak.com/iperf.php
I had a similar question about how iperf works. Please refer to the following post, where I did some research and give an overview:
How iperf calculates network statistics
Generally, iperf embeds a timestamp and a sequence number in the payload on the sender side. When the receiver receives a packet, it extracts this content and calculates the statistics. You can find more detail in the post.
Throughput is simple: assuming the client is saturating the network, the server only needs to count the received bytes and divide that by the duration of the measurement interval.
This post explains the topic in greater detail.
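A rough sketch of that bookkeeping: the sender puts a sequence number and timestamp at the front of each UDP payload, and the receiver counts bytes over an interval to get throughput. The field layout and sizes here are illustrative assumptions, not iperf's actual on-wire format.

import struct, time

HEADER = struct.Struct("!Id")           # sequence number + send time (seconds since epoch)

def make_payload(seq: int, size: int = 1200) -> bytes:
    """Sender side: prepend seq and timestamp, pad to the target datagram size."""
    return HEADER.pack(seq, time.time()) + b"\0" * (size - HEADER.size)

class ReceiverStats:
    """Receiver side: count bytes over the measurement interval."""
    def __init__(self) -> None:
        self.bytes_received = 0
        self.interval_start = time.time()

    def on_packet(self, payload: bytes) -> None:
        seq, sent_at = HEADER.unpack_from(payload)   # also usable for loss/jitter stats
        self.bytes_received += len(payload)

    def throughput_bps(self) -> float:
        elapsed = time.time() - self.interval_start
        return self.bytes_received * 8 / elapsed if elapsed > 0 else 0.0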
Iperf 2 calculates jitter for UDP only. It is based on what is prescribed by the RTP specification, as stated in the code.
RTP is used in implementations of audio streaming, where jitter plays a major role, so it's a good place to take the algorithm from - what iperf reports is what many applications in which you would be interested in jitter would see.
See RFC 1889, section 6.3.1, "interarrival jitter" field:
The interarrival jitter J is defined to be the mean deviation (smoothed absolute value) of the difference D in packet spacing at the receiver compared to the sender for a pair of packets. As shown in the equation below, this is equivalent to the difference in the "relative transit time" for the two packets; the relative transit time is the difference between a packet's RTP timestamp and the receiver's clock at the time of arrival, measured in the same units.
If Si is the RTP timestamp from packet i, and Ri is the time of arrival in RTP timestamp units for packet i, then for two packets i and j, D may be expressed as
D(i,j) = (Rj - Ri) - (Sj - Si) = (Rj - Sj) - (Ri - Si)
The interarrival jitter is calculated continuously as each data packet i is received from source SSRC_n, using this difference D for that packet and the previous packet i-1 in order of arrival (not necessarily in sequence), according to the formula
J = J + (|D(i-1,i)| - J)/16
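A minimal sketch of that estimator, assuming send and receive timestamps are already in the same units (real RTP/iperf code works in RTP timestamp units):

class JitterEstimator:
    def __init__(self) -> None:
        self.prev_transit = None   # relative transit time R_i - S_i of the previous packet
        self.jitter = 0.0          # running estimate J

    def on_packet(self, send_ts: float, recv_ts: float) -> float:
        transit = recv_ts - send_ts
        if self.prev_transit is not None:
            d = abs(transit - self.prev_transit)       # |D(i-1, i)|
            self.jitter += (d - self.jitter) / 16.0    # J = J + (|D(i-1,i)| - J)/16
        self.prev_transit = transit
        return self.jitter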
I have a voice-chat service which is experiencing variations in the delay between packets. I was wondering what the proper response to this is, and how to compensate for it?
For example, should I adjust my audio buffers in some way?
Thanks
You don't say if this is an application you are developing yourself or one which you are simply using - you will obviously have more control over the former so that may be important.
Either way, it may be that your network is simply not good enough to support VoIP, in which case you really need to concentrate on improving the network or using a different one.
VoIP typically requires an end-to-end delay of less than 200 ms (milliseconds) before the users perceive an issue.
Jitter is also important - in simple terms it is the variance in end-to-end packet delay. For example, the delay between packet 1 and packet 2 may be 20 ms, but the delay between packet 2 and packet 3 may be 30 ms. Having a jitter buffer of 40 ms would mean your application would wait up to 40 ms between packets, so it would not 'lose' any of these packets.
Any packet not received within the jitter buffer window is usually ignored, and hence there is a relationship between jitter and the effective packet loss value for your connection. Packet loss typically impacts the users' perception of VoIP quality as well - different codecs have different tolerances - a common target might be that it should be lower than 1%-5%. Packet loss concealment techniques can help if it is just an intermittent problem.
Jitter buffers will either be static or dynamic (adaptive) - in either case, the bigger they get, the greater the chance they will introduce delay into the call, and you get back to the delay issue above. A typical jitter buffer might be between 20 and 50 ms, either set statically or adapting automatically based on network conditions.
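As a toy sketch of the static case, assuming sender and receiver clocks are already aligned (a real jitter buffer works with relative arrival times), each packet is either held until its playout time or discarded and counted as lost. The 40 ms figure is just the example value from above.

JITTER_BUFFER_MS = 40        # illustrative static buffer size

def classify_packet(send_time_ms: float, arrival_time_ms: float) -> str:
    """Hold the packet until its playout deadline, or drop it if it arrived too late."""
    playout_deadline = send_time_ms + JITTER_BUFFER_MS
    if arrival_time_ms <= playout_deadline:
        return "buffer until playout"
    return "discard (counts as packet loss)"

# A packet sent at t=0 ms but arriving at t=55 ms misses the 40 ms window:
print(classify_packet(0.0, 55.0))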
Good references for further information are:
- http://www.voiptroubleshooter.com/indepth/jittersources.html
- http://www.cisco.com/en/US/tech/tk652/tk698/technologies_tech_note09186a00800945df.shtml
It is also worth trying some of the common online Internet connection speed tests, as many have a specific VoIP test that will give you an idea of whether your local connection is good enough for VoIP (although bear in mind that these tests only indicate the conditions at the exact time you run the test).