Context
We have an unsteady transmission channel. Some packets may be lost.
Sending a single network packet in any direction (from A to B or from B to A) takes 3 seconds.
We allow a signal delay of 5 seconds, no more. So we have a 5-second buffer. We can use those 5 seconds however we want.
Currently we use only 80% of the transmission channel, so we have room for 1/4 more traffic (the remaining 20% of capacity).
The quality of the video cannot be worsened.
We need to use UDP.
Problem
We need to make the quality better. How should we handle lost packets? We have to use UDP and handle those errors ourselves. How do we do that? How do we make sure that fewer packets are lost than currently (we can't guarantee 100%, we only want to do better), without retransmitting them? We can do anything; this is theory.
There are different ways to handle this. It depends on what application you are building. Are you doing real-time video streaming? With stringent requirements?
As you said, you have a buffer, so you can actually maintain a buffer of packets and send an acknowledgement for the lost ones (if you feel you can afford the wait).
As this is a video application, send acknowledgements only for the key frames: make sure you have a key (I) frame, and then do interpolation at the rx side.
Look into something called forward error correction (FEC): fountain codes, Luby transform codes. Here, you encode packets 1 and 2 to produce packet 3 (for example, their XOR). If packet 1 is lost, combine packet 3 and packet 2 to get packet 1 back at the rx side. Basically you send redundant packets. It's a little harsh on the network, but you get most of the data.
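To make the XOR idea concrete, here is a minimal sketch (my own illustration, not from the answer) of building one parity packet over a group of equal-length packets and recovering a single lost packet from it:

#include <cstddef>
#include <cstdint>
#include <vector>

// Build one parity packet over a group of k equal-length data packets.
// Sending this extra packet costs 1/k in redundancy.
std::vector<uint8_t> makeParity(const std::vector<std::vector<uint8_t>>& group)
{
    std::vector<uint8_t> parity(group.front().size(), 0);
    for (const auto& pkt : group)
        for (std::size_t i = 0; i < pkt.size(); ++i)
            parity[i] ^= pkt[i];
    return parity;
}

// If exactly one packet of the group is lost, XORing the parity packet
// with all the surviving packets reconstructs the missing one.
std::vector<uint8_t> recoverLost(const std::vector<std::vector<uint8_t>>& survivors,
                                 std::vector<uint8_t> parity)
{
    for (const auto& pkt : survivors)
        for (std::size_t i = 0; i < pkt.size(); ++i)
            parity[i] ^= pkt[i];
    return parity; // this is the lost packet
}

With one parity packet per group of k, the redundancy is 1/k of the data rate, so with the 1/4 headroom from the question any k >= 4 would fit; real fountain codes generalize this to recover from more than one loss per group.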
Related
I was doing some experiments and used OnOffApplication to generate the traffic. However, things didn't seem right. I use MaxBytes to set the amount of traffic that I want to send, and the traffic is heavy, so some packets get dropped. It seems that OnOffApplication doesn't care about the dropped packets (I'm not sure, it's my guess). It only sends packets until it reaches MaxBytes, and doesn't care about whether each packet is received or not.
Is my guess right? And if it is, is there any alternative I can use to generate traffic where each flow has a certain size, and packets are retransmitted until the whole flow is received?
My code is below:
OnOffHelper source ("ns3::TcpSocketFactory", Address (InetSocketAddress (r_ipaddr, port)));
source.SetAttribute ("OnTime", RandomVariableValue (ConstantVariable (1)));
source.SetAttribute ("OffTime", RandomVariableValue (ConstantVariable (0)));
source.SetAttribute ("DataRate", DataRateValue (DataRate (linkBw)));
source.SetAttribute ("PacketSize", UintegerValue (packetSize));
source.SetAttribute ("MaxBytes", UintegerValue (tempsize * 1000));
From the application point of view, OnOff is only a packet generator. It sends packets with specific characteristics (rate, max number etc). It does not track them. That's by design.
If you use TCP though, then the socket will track and make sure that any lost segments are re-transmitted.
The application will generate MaxBytes worth of load, but the actual packets transmitted on the wire (or the air) may differ, because TCP (by design) does not respect message boundaries: it is a byte-stream-oriented protocol. So it may bundle data together, split it into segments, mix in retransmitted segments, etc.
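If what you want is a flow of a fixed size whose bytes are all eventually delivered, one commonly used option in ns-3 is BulkSendApplication over TCP. A rough sketch against the variables from your snippet (senderNode is a placeholder for your sending node; check the attribute names against your ns-3 version):

// BulkSendApplication pushes exactly MaxBytes through a TCP socket and lets
// TCP take care of retransmitting lost segments, so the sink eventually
// receives the whole flow.
BulkSendHelper source ("ns3::TcpSocketFactory",
                       Address (InetSocketAddress (r_ipaddr, port)));
source.SetAttribute ("MaxBytes", UintegerValue (tempsize * 1000));
source.SetAttribute ("SendSize", UintegerValue (packetSize));
ApplicationContainer apps = source.Install (senderNode);
apps.Start (Seconds (1.0));
apps.Stop (Seconds (10.0));

Counting bytes at a PacketSink on the receiving node then tells you when the whole flow has actually arrived, rather than when the sender has finished generating it.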
I have a system that sends "many" (hundreds of) UDP datagrams in bursts, every once in a while (say, 10 times a minute). According to nload, this averages about 222 kbit/s. The content of these datagrams is JSON. I've considered altering the system so that it waits some time (500 ms?) and combines many of the JSON objects into one datagram before sending. But I'm not sure it's worth the effort (bandwidth, protocol, and frequency of sending considered). Would the new approach provide any real benefits over the current one?
The short answer is that it's up to you to decide that.
The long version is that it depends on your use case. Since we don't know what you're building, it's hard to say what's more important - latency? Throughput? Reliability? Something else? Let's analyze some pros and cons. Here's what I came up with:
Pros to sending larger packets:
Fewer messages means fewer system calls and less I/O against the network. That means fewer blocked/waiting threads and less time spent on interrupts.
Fewer, larger packets means less per-packet overhead (things like the IP/UDP headers that are sent with each packet), so a higher data rate is (theoretically) achievable. Keep in mind, though, that all of these headers (L2 + IP + UDP) typically add up to no more than 60-70 bytes per packet, since the UDP header itself is only 8 bytes long; batching ten small JSON objects into one datagram saves nine sets of those headers.
Since UDP doesn't guarantee ordering, fewer, larger packets with more time between them reduce the chance of reordering.
Cons to sending larger packets:
Re-writing existing code, and making it (slightly) more complicated.
UDP is unreliable, so a loss of a single (large) packet would be more significant compared to the loss of a small packet.
Latency - some data will have to wait 500ms to be sent. That means that a delay is added between the sender and the receiver.
Fragmentation - if one of the packets you create crosses the MTU boundary (typically 1450-1500 bytes including the IP+UDP header, which is normally 28 bytes long), the IP layer would need to fragment the packet into several smaller ones. IP fragmentation is considered bad for a multitude of reasons.
Processing of larger packets might take longer.
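If you do decide to batch, the mechanism itself is small. Here is a minimal sketch (the names are my own; newline-delimited JSON; flush when the batch nears the MTU or when the oldest object is 500 ms old):

#include <chrono>
#include <cstddef>
#include <functional>
#include <string>

// Accumulates JSON objects and hands a combined datagram to `send` (your
// existing UDP send routine) when the batch is full or old enough.
class DatagramBatcher {
public:
    DatagramBatcher(std::function<void(const std::string&)> send,
                    std::size_t maxBytes = 1400) // stay under a typical MTU
        : send_(std::move(send)), maxBytes_(maxBytes) {}

    void add(const std::string& json) {
        if (!buffer_.empty() && buffer_.size() + json.size() + 1 > maxBytes_)
            flush(); // don't cross the MTU; avoids IP fragmentation
        if (buffer_.empty())
            firstAdded_ = std::chrono::steady_clock::now();
        buffer_ += json;
        buffer_ += '\n';
    }

    void tick() { // call periodically, e.g. from your event loop
        if (!buffer_.empty() &&
            std::chrono::steady_clock::now() - firstAdded_ >= std::chrono::milliseconds(500))
            flush(); // bounds the added latency at roughly 500 ms
    }

private:
    void flush() {
        send_(buffer_);
        buffer_.clear();
    }

    std::function<void(const std::string&)> send_;
    std::size_t maxBytes_;
    std::string buffer_;
    std::chrono::steady_clock::time_point firstAdded_;
};

Note how the two flush triggers map directly onto the trade-offs above: the size trigger addresses fragmentation, and the age trigger caps the added latency.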
Solution proposition
Returning to the video-streaming problem above: one thing is certain. We cannot use TCP in this case, because when TCP detects a problem, it requests retransmission of the lost data. A retransmitted packet would arrive after 9 seconds (3 s for the original, 3 s for the request, 3 s for the retransmission), which is more than the 5-second limit.
Therefore we need to use UDP and handle those errors ourselves. How do we do that? How do we make sure that fewer packets are lost than currently, without retransmitting them?
It's a complicated solution, but by far the best option is to add forward error correction (FEC). This is how images are transferred from space probes, where the latency is measured in minutes or hours. It's also used by cell phones, where delayed packets are bad for two-way communication.
A not-as-good but easier-to-implement option is to use UDT. This is a library that adds TCP-like retransmission on top of UDP, but it allows you much more control over the protocol.
Reading lots on this for my first network game, I understand the core difference between TCP and UDP: guaranteed delivery versus time-to-deliver. I've also read diametrically opposed views on whether real-time games should use UDP or TCP! ;)
What no-one has covered well is how to handle the issue of a dropped packet.
TCP: I read an article about an FPS that recommended only using TCP. How would an authoritative server using TCP for client input handle a packet drop and the sudden epic spike in lag? Does the game just stop for a moment and then pick up where it left off? Is TCP packet loss so rare that it's not really much of an issue, and an FPS over TCP actually works well?
UDP: Another article suggested only ever using UDP. Clearly, one-shot UDP events like "grenade thrown" aren't reliable enough on their own, as they won't arrive some of the time. Do you have to implement a message-received/resend protocol manually? Or some other solution?
My game is a tick-based authoritative server with updates from the server to clients every 1/10th of a second, plus local simulation to keep things feeling more responsive, although the question is applicable to a lot more applications.
I did a real-time TV editing system. All real-time communication was via UDP, but the non-real-time parts used TCP, as it is simpler. With UDP we would send a state packet every frame, e.g. "start video in 100 frames", then 99, 98, … 3, 2, 1, 0, -1, -2, -3. So even if no message got through until -3, the receiver would start on the 4th frame (just skipping the first 3), hoping that no one would notice, and knowing that this was better than lagging from there on in. We even padded the countdown by around ¼ second (as no one will notice), and this way hardly any frames were dropped.
So in summary, we sent the same status packet every frame. It contained all real-time data about past, current, and future events.
The trick is keeping this data set small. So instead of sending "play button pressed" events (there is an unbounded number of these), we send the video-id, frame-number, start-mask and stop-mask (start/stop masks are frame numbers: if start-mask is positive and stop-mask is negative, then show the video, at frame frame-number). A sketch of such a packet is shown below.
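My reading of that layout as a sketch (the field names and the slot count are my own guesses, not the original system's):

#include <cstdint>

// One slot per video that can be live at the same time. The full array is
// resent in every per-frame status packet, so a lost packet costs nothing:
// the next frame's packet carries the same state again.
struct VideoSlot {
    uint32_t videoId;     // unique per play-out, so slots can be reused safely
    int32_t  frameNumber; // which frame of the clip to show
    int32_t  startMask;   // frame countdown for the start (see rule above)
    int32_t  stopMask;    // frame countdown for the stop
};

struct StatusPacket {
    VideoSlot slots[4];   // assumption: at most 4 concurrent videos
};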
Now we need to be able to start a video during another one, or shortly after it stops. So we consider how many videos can play at the same time and give each a slot. But can we reuse a slot immediately? If stop has not been pressed yet, we do not know the stop-mask, and reusing the slot means the video's state is no longer being sent. But then there is no slot for that video, so the receiver should stop it anyway. So yes, we can reuse the slot immediately, as long as we use unique IDs.
Other tips: do not send "+1" events; instead, send the current total. If two players have to update the same total, then each should have their own total; sum all the totals at the point of use, but never edit someone else's total.
I'm currently designing a sensor network that will have small ATtiny85 probes that each have a temperature sensor, a barometer, and a humidity sensor. I think I will use these (http://goo.gl/TqaDjl) to communicate, as they are low cost and I don't need much range. I'm not sure, though, how I will get the probes to communicate with the main controller, as the transmitters broadcast digitally and I will have 20+ probes that all need to send data every minute without their signals overlapping or getting messed up. I think the easiest way would be to time the probes so that they don't overlap in transmission, but I'm not sure.
Questions:
-Is using RF the cheapest and best option for this system?
-How can I prevent communication overlapping?
-What is the easiest way to send data digitally from an Arduino (or ATtiny85)?
I guess I'm late to the party, but I'll offer some insight into collision control with a ton of chattering transmitters on one link, a la 802.11. This assumes the data is somewhat packetized.
If two transmitters try to transmit at the same time, you're bound to get a mangled mess of rotten bacon at the receivers.
A simplified version of WiFi-style collision handling would be good. Basically, it uses preambles that can be detected, and for longer transmissions that have a higher chance of conflicting, it can use short request-to-send/clear-to-send (RTS/CTS) packets.
While this is likely overkill, I'd go for preambles. Start by transmitting a steady stream of something recognizable, like (in hex) 555533330f0f00ff, which is basically alternating 1s and 0s but with changing frequency (0101, then 0011, then 00001111, and so on): a readily recognizable pattern that is unlikely to be given off by stray radiation or noise.
This pattern could undergo a shift so there's a finite set of other preambles that should be bitwise-shifted relative to the original.
If a transmitter detects this preamble, it should STOP and wait. If you limit all packets to a certain temporal length, collisions should not occur as long as you wait a sufficient time between packets. If a preamble is heard during the time of one packet, your station should wait for the full length of that transmission (listening to its length and other header fields so it knows how long to wait). Once the packet is done, your station can transmit its own preamble.
This is where the WiFi resemblance stops and simpler protocols take over.
Note that if two stations are waiting on a packet, they can start their preambles almost simultaneously. To resolve this, each station should have a different zero bit flipped to one in its preamble. If a station detects a 1 where it sent that zero bit, it knows there's another station preambling, and should back off.
Each station should wait a certain delay(up to you) after each packet so other stations can start their transmissions.
A few sketches of the communication patterns show that this is sufficient for your needs.
Now, if it's a master-slave-style system, then as long as you only have one network it should be easier, since there should only be one outstanding request that would involve a slave transmitting at any time.
Those will be by far the cheapest option. As for the best option, there are a variety of much better, but more expensive, choices. A network of XBee modules comes to mind, but those are much more expensive than $1.25 a pair.
Using the RF modules is very doable, however. To prevent communication overlapping, put an RF transmitter and receiver on each sensor node and on the main hub. The main hub can send "hey, sensor1, give me your data", which gets broadcast to all of the sensors. However, only sensor1 will respond with "hey, I am sensor1, here is my data", which the hub listens for. Then the hub goes on to say "hey, sensor2, send me your data", and so on and so forth. A sketch of this polling scheme is below.
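A rough sketch of that polling loop on the hub side (radioSend, radioReceive, and store are hypothetical stand-ins for whatever your RF library provides; the Reading fields are made up too):

#include <stdint.h>

struct Reading {
    uint8_t  sensorId;
    int16_t  temperatureC10; // tenths of a degree
    uint16_t pressure;
    uint8_t  humidity;
};

// Hypothetical radio primitives: wire these to your RF library.
void radioSend(uint8_t requestedSensorId);
bool radioReceive(Reading& out, uint16_t timeoutMs);
void store(const Reading& r);

const uint8_t NUM_SENSORS = 20;

// Only the polled sensor answers, so only one node transmits at a time.
void pollOnce() {
    for (uint8_t id = 1; id <= NUM_SENSORS; ++id) {
        radioSend(id);                  // "hey sensor <id>, give me your data"
        Reading r;
        if (radioReceive(r, 100) && r.sensorId == id)
            store(r);                   // valid reply from the right sensor
        // On a timeout just move on; the sensor gets polled again next round.
    }
}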
I think your original approach may be best. The approach of putting a TX and RX on every device may be affordable, but I question whether it will work. With 20 devices transmitting on the same frequency, which one will the receiver "hear"? More importantly, how will a device receive a remote transmitter's signal when its own transmitter is very close? Keep in mind that these are AM radios and will "send" a carrier even when not sending any data. Try a small number of transmitters before going full scale.
To avoid the problem of receiving the one active transmitter amid the soup of inactive transmitters, you want only one transmitter powered at a time. You would control Vcc to one transmitter, turn it on, send the burst of data, and then power it off.
-How can I prevent communication overlapping?
You can't -- you have to accept that there will be occasional overlaps. Add a CRC to the transmitted data so that the receiver can detect garbage; a small CRC sketch is below.
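Something small like CRC-16-CCITT is plenty for letting the receiver throw away mangled frames. A sketch:

#include <stddef.h>
#include <stdint.h>

// CRC-16-CCITT (polynomial 0x1021, initial value 0xFFFF). Append the result
// to each transmitted frame; the receiver recomputes it over the payload and
// discards the frame if the values do not match.
uint16_t crc16(const uint8_t* data, size_t len) {
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; ++i) {
        crc ^= (uint16_t)data[i] << 8;
        for (int bit = 0; bit < 8; ++bit)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}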
The timing of the multiple transmitters is surely a project in itself. You certainly don't want to run them all with the same transmission period: they may not collide at the beginning, but once two devices drifted together and started colliding, they would stay together and keep colliding for a long time, until their clocks drifted apart again.
I would start with something simple. For example, with three devices, run the transmissions at periods of 2000 ms, 2200 ms, and 2400 ms (use EEPROM to configure them). That way, if a pair happens to collide at one data point, their next transmissions will be 200 ms apart. A sketch is below.
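A sketch of the probe side (eepromReadPeriodMs, sendReadingBurst, and sleepMs are hypothetical placeholders for your EEPROM, radio, and timer code):

#include <stdint.h>

// Hypothetical helpers: EEPROM access, the radio burst, and a sleep.
uint16_t eepromReadPeriodMs(); // per-device: e.g. 2000, 2200, or 2400 ms
void sendReadingBurst();
void sleepMs(uint32_t ms);

// Each probe transmits on its own period, so if two probes collide once,
// their next transmissions are already 200 ms apart.
void probeLoop() {
    const uint16_t periodMs = eepromReadPeriodMs();
    for (;;) {
        sendReadingBurst();
        sleepMs(periodMs);
    }
}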