Wouldn't it make sense to just send the RTS to the Access Point rather than broadcast it? I understand why the Access Point broadcasts the CTS frame: so that other stations don't send packets and a collision doesn't occur.
In wireless networking there is a famous problem called the "hidden node problem". RTS, CTS, and CTS-to-self in 802.11 are mechanisms that address the hidden node problem.
I suggest you read about the hidden node problem here: https://en.wikipedia.org/wiki/Hidden_node_problem
Why is RTS/CTS a broadcast rather than a unicast?
A broadcast will be received by all stations and access points in range. Both RTS and CTS have a field called "duration" that holds the time, in microseconds, for which the medium should be reserved. All the STAs and APs that see this RTS/CTS will update their NAV (network allocation vector, a virtual carrier-sensing mechanism), meaning they will keep quiet for that many microseconds.
This avoids collisions.
If the RTS/CTS were seen only by the AP, other STAs and APs in the vicinity wouldn't learn about the reservation, which could result in collisions.
Hope it helps.
AFAIK, the RTS/CTS mechanism is a unicast sequence of four frames: RTS + CTS + DATA + ACK. All stations other than the intended target will read only the header of the RTS/CTS frames (to be precise, of the CTS frame) and extract the duration field; they will not look at the payload, because the frame is not addressed to them. Based on that duration, these stations set their NAV timer and sit idle until the timer expires.
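A minimal sketch of that NAV bookkeeping in C++-style code; the names (FrameHeader, OnOverheardFrame, etc.) are illustrative assumptions, not the actual 802.11 stack:

#include <algorithm>
#include <cstdint>

// Hypothetical frame header: only the fields relevant here.
struct FrameHeader {
    uint16_t frame_control;
    uint16_t duration_us;      // medium reservation, in microseconds
    uint8_t  receiver_addr[6];
};

static uint64_t nav_expiry_us = 0;  // network allocation vector

// Called for every overheard RTS/CTS frame, even ones not addressed to us.
void OnOverheardFrame (const FrameHeader& hdr, uint64_t now_us) {
    // Extend (never shorten) the NAV to cover the advertised reservation.
    nav_expiry_us = std::max (nav_expiry_us, now_us + hdr.duration_us);
}

bool MediumVirtuallyBusy (uint64_t now_us) {
    return now_us < nav_expiry_us;   // stay quiet until the NAV expires
}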
Basically, I know that unicast is one-to-one, multicast is one-to-many (the ones that requested the data), and broadcast is one-to-all (whether they want the data or not).
In unicast, the data is duplicated and sent one by one by the transmitter to every receiver that needs it. Bandwidth usage here is simply data size * number of receivers, right?
But how does that work for multicast and broadcast transmissions?
I understand that it is simpler (and more cost-effective) for the transmitter, which has to send the data just once to the switch (no matter how many receivers). The switch then takes care of forwarding the data to many (multicast) or to all (broadcast).
But in the end, the bandwidth usage will still increase with the number of receivers, right? That means (as before) multiplying the data size by the number of receivers?
For example, is the bandwidth usage of 100 unicast transmissions different from 1 broadcast (or multicast) transmission to 100 receivers? If so, what's the trick?
Many thanks for your time
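To make the arithmetic in this question concrete, here is a toy comparison of the sender-side link load; the 5 Mbps stream is a made-up figure for illustration:

#include <iostream>

int main () {
    const double stream_mbps = 5.0;   // hypothetical per-receiver stream
    const int receivers = 100;

    // Unicast: the sender's link carries one copy per receiver.
    std::cout << "unicast sender load:   " << stream_mbps * receivers << " Mbps\n";

    // Multicast/broadcast: the sender's link carries exactly one copy;
    // switches/routers replicate it further downstream, so each individual
    // link still carries at most one copy of the stream.
    std::cout << "multicast sender load: " << stream_mbps << " Mbps\n";
}

The total bytes delivered are the same either way; the difference is that with multicast no single link has to carry 100 copies, because replication happens inside the network.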
I was doing some experiments, and I used OnOffApplication to generate the traffic. However, things didn't seem right.
I use MaxBytes to set the amount of traffic that I want, and the traffic is heavy, so some packets get dropped.
It seems OnOffApplication doesn't care about the dropped packets (I'm not sure; it's my guess). It only sends packets until it reaches MaxBytes, and doesn't care whether each packet is received or not.
Is my guess right?
And if my guess is right, is there an alternative I can use to generate traffic in which each flow has a certain size and packets are retransmitted until the whole flow is received?
My code is below:
// An OnOff application sending TCP traffic to r_ipaddr:port.
OnOffHelper source ("ns3::TcpSocketFactory", Address (InetSocketAddress (r_ipaddr, port)));
source.SetAttribute ("OnTime", RandomVariableValue (ConstantVariable (1)));   // always "on"
source.SetAttribute ("OffTime", RandomVariableValue (ConstantVariable (0)));  // never "off"
source.SetAttribute ("DataRate", DataRateValue (DataRate (linkBw)));
source.SetAttribute ("PacketSize", UintegerValue (packetSize));
source.SetAttribute ("MaxBytes", UintegerValue (tempsize * 1000));            // total bytes to generate
From the application's point of view, OnOff is only a packet generator. It sends packets with specific characteristics (rate, maximum number of bytes, etc.). It does not track them; that's by design.
If you use TCP, though, then the socket will track delivery and make sure that any lost segments are retransmitted.
The application will generate MaxBytes worth of load, but the actual packets transmitted on the wire (or the air) may differ, because TCP (by design) does not respect message boundaries: it is a bytestream-oriented protocol. So it may bundle data packets together, or split them into segments, mixed with retransmitted segments, etc.
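If you want to verify on the receiving side that everything arrived, one common pattern is to install a PacketSink on the receiver and compare its byte count against MaxBytes after the simulation. A minimal sketch to drop into your existing script (rxNode and simStopTime are assumed names from your own setup):

// Receiver: a TCP PacketSink listening on the same port
// (needs ns3/packet-sink.h and ns3/packet-sink-helper.h).
PacketSinkHelper sinkHelper ("ns3::TcpSocketFactory",
                             Address (InetSocketAddress (Ipv4Address::GetAny (), port)));
ApplicationContainer sinkApps = sinkHelper.Install (rxNode);  // rxNode: your receiving node
sinkApps.Start (Seconds (0.0));
sinkApps.Stop (simStopTime);                                  // simStopTime: your stop time

// After Simulator::Run (), check how many bytes actually arrived.
Ptr<PacketSink> sink = DynamicCast<PacketSink> (sinkApps.Get (0));
std::cout << "Received " << sink->GetTotalRx () << " of "
          << tempsize * 1000 << " bytes" << std::endl;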
I am studying wireless networks and specifically IEEE 802.11. I cannot understand whether two users in different BSSs that work at the same frequency and in the same location can interfere with each other or not. I know that a BSS is formed from users that use the same frequency, but I cannot figure out whether a nearby BSS can use the same frequency as one of its neighbours.
Thank you for your time!
802.11 WiFi uses CSMA/CA, or "Carrier Sense Multiple Access with Collision Avoidance", to ensure all stations using the same or similar frequency can cooperate.
Before sending onto the network, a station will listen to the medium to see if anything else is using it; this is called Clear Channel Assessment (CCA).
If the station detects energy on the medium, it assumes the medium is in use and will back off for a random (very short, on the order of microseconds) time and then try again. Eventually it should see the medium is clear and be able to proceed with its transmission.
Every unicast frame sent on a WiFi network is ACKnowledged as soon as it is received by the destination, with an ACK frame. If a station transmits and doesn't receive an ACK, it will retransmit. This avoids problems where something has decided to use the medium mid-way through a station transmitting a packet, causing corruption.
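A rough sketch of that transmit loop in C++-style code; the helper functions (cca_busy, wait_for_ack, etc.) are illustrative stand-ins, and real 802.11 timing (DIFS, slotted backoff windows) is considerably more involved:

#include <cstdint>

struct Frame { /* payload, addresses, etc. */ };

// Hypothetical radio/OS primitives, for illustration only:
bool cca_busy ();                        // energy detected on the medium?
void transmit (const Frame&);            // raw transmit
bool wait_for_ack (uint32_t timeout_us); // true if ACK frame arrived in time
uint32_t random_backoff_us ();           // random, microsecond-scale backoff
uint32_t ack_timeout_us ();
void sleep_us (uint32_t us);

// Simplified CSMA/CA: listen, back off on energy, retry on missing ACK.
bool SendFrame (const Frame& frame, int max_retries) {
    for (int attempt = 0; attempt < max_retries; ++attempt) {
        // Clear Channel Assessment: wait until no energy is detected.
        while (cca_busy ())
            sleep_us (random_backoff_us ());

        transmit (frame);

        if (wait_for_ack (ack_timeout_us ()))
            return true;                 // destination ACKed the frame
        // No ACK: assume collision or corruption, loop and retransmit.
    }
    return false;                        // give up after max_retries
}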
All this operates outside of the concept of a BSS, as regardless of which BSS a station is in, it still needs to play fair with all the other stations on the frequency in the same or other BSSes.
The net effect is that you can have many stations in many BSSs all on the same channel happily cohabiting; the downside is performance degradation, as it gets harder to get a clear channel and the likelihood of corrupt frames and retransmits increases.
Context
We have an unreliable transmission channel. Some packets may be lost.
Sending a single network packet in any direction (from A to B or from B to A) takes 3 seconds.
We allow a signal delay of 5 seconds, no more. So we have a 5-second buffer. We can use those 5 seconds however we want.
Currently we use only 80% of the transmission channel, so we have 20% of the channel spare, i.e. room for about 1/4 more traffic than we currently send.
The quality of the video cannot be worsened.
We need to use UDP.
Problem
We need to make the quality better. How do we handle lost packets? We have to use UDP and handle those errors ourselves, so how do we do it? How do we make sure that fewer packets are lost than currently (we can't guarantee 100%, so we only want it better), without retransmitting them? We can do anything; this is theory.
There are different strategies for handling this. It depends on what application you are using. Are you doing real-time video streaming? Are there stringent requirements?
As you said you have a buffer, you can actually maintain a buffer for the packets and then send an acknowledgement for the lost packets (if you feel you can wait).
As this is a video application, send acknowledgements only for the key frames. Make sure that you have a key (I) frame, and then do interpolation at the receive side.
Look into forward error correction, fountain codes, and Luby transform (LT) codes. Here, you encode packets 1 and 2 to produce packet 3. If packet 1 is lost, use packet 3 and packet 2 to recover packet 1 at the receive side. Basically you send redundant packets. It's a little harsh on the network, but you get most of the data back.
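A minimal sketch of that XOR-parity idea (real fountain/LT codes are far more sophisticated; this just shows the recover-one-lost-packet case):

#include <array>
#include <cstdint>
#include <cstdio>

constexpr size_t kPacketSize = 4;   // tiny packets, for illustration
using Packet = std::array<uint8_t, kPacketSize>;

// The parity packet is the byte-wise XOR of the two data packets.
Packet MakeParity (const Packet& a, const Packet& b) {
    Packet parity {};
    for (size_t i = 0; i < kPacketSize; ++i)
        parity[i] = a[i] ^ b[i];
    return parity;
}

int main () {
    Packet p1 = {1, 2, 3, 4};
    Packet p2 = {9, 8, 7, 6};
    Packet p3 = MakeParity (p1, p2);         // redundant packet sent alongside p1, p2

    // Suppose p1 is lost in transit; the receiver rebuilds it from p2 and p3.
    Packet recovered = MakeParity (p2, p3);  // XOR is its own inverse

    for (size_t i = 0; i < kPacketSize; ++i)
        std::printf ("%u ", recovered[i]);   // prints: 1 2 3 4
    std::printf ("\n");
}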
I'm currently designing a sensor network that will have small ATtiny85 probes that each have a temperature sensor, a barometer, and a humidity sensor. I think I will use these (http://goo.gl/TqaDjl) to communicate, as they are low cost and don't need much range. I'm not sure, though, how I will get the probes to communicate with the main controller, as the transmitter transmits digitally and I will have 20+ probes that all need to send data every minute without signals overlapping or getting messed up. I think the easiest way would be to time the probes so that they don't overlap in transmission, but I'm not sure.
Questions:
-Is using RF the cheapest and best option for this system?
-How can I prevent communication overlapping?
-What is the easiest way to send data digitally from an arduino (or ATtiny85)?
I guess I'm late to the party, but I'll offer some insight into collision control with a ton of chattering transmitters on one link, a la 802.11. This assumes your data is at least somewhat packetized.
If two transmitters try to transmit at the same time, you're bound to get a mangled mess of rotten bacon at the receivers.
A simplified version of WiFi-style collisions would be good. Basically, it uses preambles that can be detected, and for longer transmissions that have a higher chance of conflicting, it can use shorter request/clear to send packets.
While this is likely overkill, I'd go for preambles. Start by transmitting a steady stream of something recognizable, like, in hex, 555533330f0f00ff, which is basically alternating 1s and 0s but with changing frequency (0101, then 0011, then 00001111, and so on): a readily recognizable pattern that is unlikely to be given off by stray radiation or noise.
This pattern could undergo a shift, so there's a finite set of other valid preambles that are bitwise-shifted relative to the original.
If a transmitter detects this preamble, it should STOP and wait. If you limit all packets to a certain temporal length, collisions should not occur if you wait sufficient time between packets. If, during the time of one packet, a preamble is heard, then your station should wait for the full length of the transmission (listening to its length and other header fields so it knows how long to wait). Once the packet is done, your station can transmit its own preamble.
This is where the WiFi resemblance stops and simpler protocols take over.
Note that if 2 stations are waiting on a packet they can start their preambles almost simultaneously. To resolve this, each station should have a different zero bit flipped in its preamble. If it detects a 1 for that bit, it sees that there's another station preambling, and should back off.
Each station should wait a certain delay (up to you) after each packet so other stations can start their transmissions.
A few sketches of the communication patterns show that this is sufficient for your needs.
Now if it's a master-slave-style system as long as you only have one network it should be easier since there should only be one outstanding request that would involve a slave transmitting.
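To make the listen-before-talk idea concrete, here's a rough Arduino-style sketch; radioEnergyDetected() and radioSendByte() are hypothetical stand-ins for whatever your RF module's driver provides, and the preamble pattern is the one suggested above:

// Illustrative listen-before-talk transmit with the suggested preamble.
bool radioEnergyDetected ();       // hypothetical: carrier/energy sensed?
void radioSendByte (uint8_t b);    // hypothetical: raw byte transmit

const uint8_t PREAMBLE[] = {0x55, 0x55, 0x33, 0x33, 0x0F, 0x0F, 0x00, 0xFF};

void sendPacket (const uint8_t* payload, uint8_t len) {
    // Wait for a quiet medium, backing off a random few milliseconds.
    while (radioEnergyDetected ()) {
        delay (random (2, 10));
    }
    // Preamble first, so other stations detect us and hold off.
    for (uint8_t i = 0; i < sizeof (PREAMBLE); i++) {
        radioSendByte (PREAMBLE[i]);
    }
    radioSendByte (len);           // length header: tells listeners how long to wait
    for (uint8_t i = 0; i < len; i++) {
        radioSendByte (payload[i]);
    }
}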
Those modules will be by far the cheapest method. As for the best method, there are a variety of much better choices, but they are more expensive. A network of XBee modules comes to mind, but those are much more expensive than $1.25 a pair.
Using the RF modules is very doable, however. To prevent communication overlap, put an RF transmitter and receiver on each sensor node and on the main hub. The main hub can send "hey sensor1, give me your data", which gets broadcast to all of the sensors. However, only sensor1 will realize "hey, I am sensor 1, here is my data", which the hub will listen for. Then the hub will go on and say "hey sensor2, send me your data", and so on and so forth.
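A toy sketch of that polling loop on the hub side; sendRequest() and waitForReply() are hypothetical helpers built on top of the RF link:

// Hub polls each sensor in turn; only the addressed sensor answers,
// so transmissions never overlap.
void sendRequest (uint8_t sensorId);                   // broadcast "sensor <id>, send data"
bool waitForReply (uint8_t sensorId, uint16_t timeoutMs);

const uint8_t NUM_SENSORS = 20;

void pollAllSensors () {
    for (uint8_t id = 1; id <= NUM_SENSORS; id++) {
        sendRequest (id);
        if (!waitForReply (id, 100)) {
            // 100 ms timeout: sensor missed its poll; try again next round.
        }
    }
}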
I think your original approach may be best. The approach of putting a TX and RX on every device may be affordable, but I question whether it will work. With 20 devices transmitting on the same frequency, which one will the receiver "hear"? Most importantly, how will a device receive a remote transmitter's signal when its own transmitter is very close? Keep in mind: these are AM radios and will "send" a carrier even when not sending any data. Get a small number of transmitters working before trying to go full scale.
To avoid the problem of receiving the one active transmitter among the soup of inactive transmitters, you want only one transmitter powered at a time. You would control Vcc to one transmitter, turn it on, send the burst of data, and then power it off.
-How can I prevent communication overlapping?
You can't -- you have to accept that there will be occasional overlaps. Add a CRC to the transmitted data so that the receiver can detect garbage.
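For example, a small CRC-8 over the payload; this particular polynomial (0x07) is just a common choice, not something these modules mandate:

#include <stddef.h>
#include <stdint.h>

// CRC-8 with polynomial x^8 + x^2 + x + 1 (0x07), bitwise implementation.
uint8_t crc8 (const uint8_t* data, size_t len) {
    uint8_t crc = 0x00;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (uint8_t bit = 0; bit < 8; bit++) {
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07) : (uint8_t)(crc << 1);
        }
    }
    return crc;
}

// The sender appends crc8(payload, len) after the payload; the receiver
// recomputes it and discards the packet on a mismatch.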
The timing of the multiple transmitters is surely a project in itself. You surely don't want to run them all with the same transmission period. They may not collide at the beginning, but when two devices eventually drift together and start colliding, they will stay together and keep colliding for a long time, until the clocks drift apart again.
I would start with something simple. For example, with three devices, run the transmissions at 2000 ms, 2200 ms, and 2400 ms periods (use EEPROM to configure). That way, if a pair happens to collide at one data point, the next transmissions of that pair will be 200 ms apart.
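A sketch of that staggering on each probe, with the period stored in EEPROM as suggested; the EEPROM address (0) and the transmitReadings() helper are assumptions for illustration:

#include <EEPROM.h>

// Each probe reads its own period (2000, 2200, 2400 ms, ...) from EEPROM,
// so a colliding pair is 200 ms apart by the very next transmission.
void transmitReadings ();   // hypothetical: sends this probe's sensor data

uint16_t periodMs;

void setup () {
    periodMs = (uint16_t) EEPROM.read (0) * 100;   // e.g. stored value 22 -> 2200 ms
}

void loop () {
    transmitReadings ();
    delay (periodMs);
}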