Basically, I know that Unicast is one to one, Multicast is one to many (the ones that requested the data) and Broadcast is one to all (whether they want the data or not).
In unicast, the data is duplicated and sent one by one by the transmitter to every receiver that needs it. Bandwidth usage here is simply data size * number of receivers, right?
But how does that work for multicast and broadcast transmissions?
I understood that it is simpler (and more cost effective) for the transmitter, which only has to send the data once to the switch (no matter how many receivers). Then the switch takes care of it by forwarding the data to many (multicast) or to all (broadcast).
But in the end, the bandwidth usage will increase with the number of receivers too. That means (as before) multiplying the data size by the number of receivers, right?
For example, is the bandwidth usage of 100 unicast transmissions different from 1 broadcast (or multicast) transmission to 100 receivers? If so, what's the trick?
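To make the comparison concrete, here is a minimal sketch (hypothetical sizes, only formalizing the model described above) of the bytes the transmitter itself puts on its own link under each scheme:

#include <cstdint>
#include <iostream>

int main() {
    const std::uint64_t dataSize  = 1000000; // hypothetical 1 MB payload
    const std::uint64_t receivers = 100;

    // Unicast: the transmitter puts one copy per receiver on its own link.
    const std::uint64_t unicastBytes = dataSize * receivers;

    // Multicast/broadcast: the transmitter sends a single copy to the switch,
    // which replicates it downstream; each receiver's link still carries
    // dataSize bytes, but the sender's link carries it only once.
    const std::uint64_t multicastBytes = dataSize;

    std::cout << "unicast:   " << unicastBytes   << " bytes on the sender's link\n"
              << "multicast: " << multicastBytes << " bytes on the sender's link\n";
}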
Many thanks for your time
MTU (Maximum Transmission Unit) is the maximum frame size that can be transported.
When we talk about MTU, it's generally a cap at the hardware level and applies to the lower layers - the data-link and physical layers.
Now, considering the OSI layers, it does not matter how efficient the upper layers are or what kind of magic sauce they are applying; the data-link layer will always construct frames of size < 1500 bytes (or whatever the MTU is), and anything on the "internet" will always be transmitted at that frame size.
Is the internet's transmission really capped at 1500-byte frames? Nowadays we see speeds of 10-100 Mbps and even Gbps. I wonder, at such speeds, are frames still transmitted at 1500 bytes? That would mean lots and lots and lots of fragmentation and re-assembly at the receiver. At this scale, how do the upper layers achieve efficiency?!
[EDIT]
Based on the comments below, I re-frame my question:
If the data-link layer transmits 1500-byte frames, I want to know how the upper layers at the receiver are able to handle such a huge volume of incoming data frames.
For example: if the internet speed is 100 Mbps, the upper layers will have to process 104857600 bytes/second, or 104857600/1500 = 69905 frames/second. The network layer also needs to re-assemble these frames. How is the network layer able to handle such a scale?
If the data-link layer transmits 1500-byte frames, I want to know how the upper layers at the receiver are able to handle such a huge volume of incoming data frames.
1500 octets is a reasonable MTU (Maximum Transmission Unit), which is the size of the data-link protocol payload. Remember that not all frames are that size; it is just the maximum size of the frame payload. There are many, many things with much smaller payloads. For example, VoIP has very small payloads, often smaller than the overhead of the various protocols.
Frames and packets get lost or dropped all the time, often on purpose (see RED, Random Early Detection). The larger the data unit, the more data that is lost when a frame or packet is lost, and with reliable protocols, such as TCP, the more data must be resent.
Also, having a reasonable limit on frame or packet size keeps one host from monopolizing the network. Hosts must take turns.
For example: if the internet speed is 100 Mbps, the upper layers will have to process 104857600 bytes/second, or 104857600/1500 = 69905 frames/second. The network layer also needs to re-assemble these frames. How is the network layer able to handle such a scale?
Your statement has several problems.
First, 100 Mbps is 12,500,000 bytes per second. To calculate the number of frames per second, you must take into account the data-link overhead. For ethernet, you have a 7-octet preamble, a 1-octet start-of-frame delimiter, a 14-octet frame header, the payload (46 to 1500 octets), a 4-octet CRC, and then a 12-octet inter-packet gap. The ethernet overhead is therefore 38 octets, not counting the payload. To know how many frames per second, you would need to know the payload size of each frame, but you seem to assume, wrongly, that every frame payload is the maximum 1500 octets, and that is not true. You get just over 8,000 frames per second for the maximum frame size.
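If it helps, here is that same arithmetic as a small sketch (assuming every frame carries the maximum 1500-octet payload and the standard per-frame ethernet overhead listed above):

// Frames per second on 100 Mbps ethernet, assuming maximum-size payloads
// (which, as noted above, real traffic does not have).
#include <iostream>

int main() {
    const double bitsPerSecond   = 100e6;                 // 100 Mbps
    const double bytesPerSecond  = bitsPerSecond / 8;     // 12,500,000 bytes/s
    const double payload         = 1500;                  // maximum payload, octets
    const double overhead        = 7 + 1 + 14 + 4 + 12;   // preamble + SFD + header + CRC + IPG = 38
    const double framesPerSecond = bytesPerSecond / (payload + overhead);
    std::cout << framesPerSecond << " frames/s\n";        // ~8127
}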
Next, the network layer does not reassemble frame payloads. The payload of the frame is one network-layer packet. The payload of the network packet is the transport-layer data unit (TCP segment, UDP datagram, etc.). The payload of the transport protocol is presented to the application process, and it may be application data or an application-layer protocol, e.g. HTTP (remember that the OSI model is just a model, and OSes do not implement separate session and presentation layers, only the application layer).
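To put rough numbers on that nesting, here is a minimal sketch for the common case of TCP over IPv4 over ethernet (assuming no IP or TCP options):

// How a 1500-octet ethernet payload nests the layers described above.
#include <iostream>

int main() {
    const int mtu        = 1500; // ethernet payload = one IP packet
    const int ipv4Header = 20;   // network-layer header (no options assumed)
    const int tcpHeader  = 20;   // transport-layer header (no options assumed)
    const int appData    = mtu - ipv4Header - tcpHeader;
    std::cout << "application data per full-size frame: " << appData << " octets\n"; // 1460
}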
The bandwidth, 100 Mbps in your example, is how fast a host can serialize the bits onto the wire. That is a function of the NIC hardware and the physical/data-link protocol it uses.
which would mean lots and lots and lots of fragmentation and re-assembly at the receiver.
Packet fragmentation is basically obsolete. It is still part of IPv4, but fragmentation in the path has been eliminated in IPv6, and smart businesses do not allow IPv4 packet fragments due to fragmentation attacks. IPv4 packets may be fragmented if the DF bit is not set in the packet header and the MTU in the path shrinks below the original MTU. For example, a tunnel will have a smaller MTU because of the tunnel overhead. If the DF bit is set and a packet is too large for the MTU on the next link, the packet is dropped. Packet fragmentation is very resource intensive on a router, and there is a whole set of steps that must be performed to fragment a packet.
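As a rough illustration of the forwarding decision just described (a simplified sketch only, not real router code; the ICMP "fragmentation needed" response and the actual fragmentation steps are omitted):

// Simplified sketch of the IPv4 decision a router makes per packet.
struct Ipv4Packet {
    int  totalLength;   // packet size in octets
    bool dontFragment;  // DF bit from the IPv4 header
};

enum class Action { Forward, Fragment, Drop };

Action decide(const Ipv4Packet& pkt, int nextHopMtu) {
    if (pkt.totalLength <= nextHopMtu)
        return Action::Forward;    // fits the next link, no extra work
    if (pkt.dontFragment)
        return Action::Drop;       // DF set and too large: drop
    return Action::Fragment;       // DF clear: resource-intensive fragmentation
}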
You may be confusing IPv4 packet fragmentation and reassembly with TCP segmentation, which is something completely different.
I was doing some experiments, and I used OnOffApplication to generate the traffic.
However, things didn't seem right.
I use MaxBytes to set the amount of traffic that I want, and the traffic is heavy, so some packets get dropped.
It seems OnOffApplication doesn't care about the dropped packets (I'm not sure, it's my guess).
It only sends packets until it reaches MaxBytes, and doesn't care about whether the packets are received or not.
Is my guess right?
And, if my guess is right, is there any alternative I can use to generate traffic where each flow has a certain size and re-transmits until all packets in the flow are received?
My code is below:
OnOffHelper source ("ns3::TcpSocketFactory", Address (InetSocketAddress(r_ipaddr, port)));
source.SetAttribute ("OnTime", RandomVariableValue (ConstantVariable (1)));
source.SetAttribute ("OffTime", RandomVariableValue (ConstantVariable (0)));
source.SetAttribute ("DataRate", DataRateValue (DataRate(linkBw)));
source.SetAttribute("PacketSize",UintegerValue (packetSize));
source.SetAttribute ("MaxBytes", UintegerValue (tempsize*1000));
From the application point of view, OnOff is only a packet generator. It sends packets with specific characteristics (rate, max number etc). It does not track them. That's by design.
If you use TCP though, then the socket will track and make sure that any lost segments are re-transmitted.
The application will generate MaxBytes worth of load, but the actual packets transmitted on the wire (or the air) may differ, because TCP (by design) does not respect message boundaries, as it is a bytestream-oriented protocol. So it may bundle data from several writes together, split it into different segments, add re-transmitted segments, etc.
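If you want to confirm that everything the OnOff application generated actually arrived, one common approach (a sketch assuming ns-3's PacketSink on the receiving node; receiverNode and simTime are placeholders for your own variables) is to install a sink and compare its received byte count against MaxBytes at the end of the simulation:

// Install a TCP sink on the receiver and check how many bytes it got.
PacketSinkHelper sink ("ns3::TcpSocketFactory",
                       Address (InetSocketAddress (Ipv4Address::GetAny (), port)));
ApplicationContainer sinkApps = sink.Install (receiverNode);
sinkApps.Start (Seconds (0.0));
sinkApps.Stop (Seconds (simTime));

// ... after Simulator::Run () ...
Ptr<PacketSink> pktSink = DynamicCast<PacketSink> (sinkApps.Get (0));
std::cout << "Bytes received: " << pktSink->GetTotalRx () << std::endl; // compare with MaxBytes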
I am studying wireless networks and specifically IEEE 802.11. I cannot understand whether two users in different BSSs that work at the same frequency and in the same location can interfere with each other or not. I know that a BSS is formed of users that use the same frequency, but I cannot figure out if a nearby BSS can use the same frequency as one of its neighbours.
Thank you for your time!
802.11 WiFi uses CSMA/CA or "Carrier-Sense Multiple Access with Collision Avoidance" to ensure all stations using the same or similar frequency can co-operate.
Before sending onto the network, a station will listen to the medium to see if something else is using it; this is called "Clear Channel Assessment (CCA)".
If the station detects energy on the medium, it assumes it is being used and will back off for a random (very short, on the order of microseconds) time and then try again. Eventually it should see that the medium is clear and be able to proceed with its transmission.
Every unicast frame sent on a WiFi network is ACKnowledged as soon as it is received by the destination, with an ACK frame. If a station transmits and doesn't receive an ACK, it will retransmit. This avoids problems where something has decided to use the medium mid-way through a station transmitting a packet, causing corruption.
All this operates outside of the concept of a BSS, as regardless of which BSS a station is in, it still needs to play fair with all the other stations on the frequency in the same or other BSSes.
The net effect is that you can have many stations in many BSSs all on the same channel happily co-habiting; the downside is performance degradation, as it gets harder to get a clear channel and the likelihood of corrupt frames and retransmits increases.
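A very rough sketch of that behaviour from a single station's point of view (simplified logic only, not the real 802.11 state machine; the Radio interface and its busy/loss probabilities are made up just to keep the example runnable, and contention-window growth and inter-frame spacings are omitted):

#include <cstdlib>
#include <iostream>

// Hypothetical radio interface with toy probabilities.
struct Radio {
    bool channelBusy()       { return std::rand() % 4 == 0; }   // CCA result
    void waitRandomBackoff() { /* sleep a random few microseconds */ }
    void transmit()          { /* put the frame on the air */ }
    bool waitForAck()        { return std::rand() % 10 != 0; }  // ACK received?
};

// Simplified CSMA/CA: listen (CCA), back off while the medium is busy,
// transmit, and retransmit until an ACK arrives or a retry limit is hit.
bool sendWithCsmaCa(Radio& radio, int maxRetries) {
    for (int attempt = 0; attempt < maxRetries; ++attempt) {
        while (radio.channelBusy())
            radio.waitRandomBackoff();
        radio.transmit();
        if (radio.waitForAck())
            return true;        // delivered and acknowledged
        // no ACK: assume collision or corruption and try again
    }
    return false;               // retry limit reached
}

int main() {
    Radio radio;
    std::cout << (sendWithCsmaCa(radio, 7) ? "sent" : "failed") << "\n";
}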
I have a system that sends "many" (hundreds) of UDP datagrams in bursts, every once in a while (say, 10 times a minute). According to nload, this averages about 222 kBit/s. The content of these datagrams is JSON. I've considered altering the system so that it waits some time (500 ms?) and combines many of the JSON objects into one datagram before sending. But I'm not sure it's worth the effort (bandwidth, protocol, and frequency of sending considered). Would the new approach provide any real benefits over the current one?
The short answer is that it's up to you to decide that.
The long version is that it depends on your use case. Since we don't know what you're building, it's hard to say what's more important - latency? Throughput? Reliability? Something else? Let's analyze some pros and cons. Here's what I came up with:
Pros to sending larger packets:
Fewer messages means fewer system calls and less I/O against the network. That means fewer blocked/waiting threads and less time spent on interrupts.
Fewer, larger packets mean less overhead for each individual packet (stuff like the IP/UDP headers that are sent with each packet). Therefore a higher data rate is (theoretically) achievable, although keep in mind that all of these headers (L2+IP+UDP) typically add up to no more than 60-70 bytes per packet, since the UDP header is only 8 bytes long.
Since UDP doesn't guarantee ordering, larger packets with more time between them will reduce any existing reordering.
Cons to sending larger packets:
Re-writing existing code, and making it (slightly) more complicated.
UDP is unreliable, so the loss of a single (large) packet is more significant than the loss of a small one.
Latency - some data will have to wait 500ms to be sent. That means that a delay is added between the sender and the receiver.
Fragmentation - if one of the packets you create crosses the MTU boundary (typically 1450-1500 bytes including the IP+UDP header, which is normally 28 bytes long), the IP layer will need to fragment the packet into several smaller ones. IP fragmentation is considered bad for a multitude of reasons (the batching sketch after this list caps the combined payload to avoid it).
Processing of larger packets might take longer.
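If you do decide to batch, a minimal sketch of the idea (illustrative only; DatagramBatcher, maxPayloadBytes and flushInterval are names I made up, the separator choice and the actual send callback depend on your socket API and receiver):

#include <chrono>
#include <cstddef>
#include <functional>
#include <string>

// Collects JSON objects and hands one combined payload to `send` either
// when the next object would push the datagram past maxPayloadBytes
// (to stay under the MTU) or when flushInterval has elapsed.
class DatagramBatcher {
public:
    DatagramBatcher(std::size_t maxPayloadBytes,
                    std::chrono::milliseconds flushInterval,
                    std::function<void(const std::string&)> send)
        : maxPayload_(maxPayloadBytes), interval_(flushInterval),
          send_(std::move(send)), lastFlush_(std::chrono::steady_clock::now()) {}

    void add(const std::string& json) {
        if (buffer_.size() + json.size() + 1 > maxPayload_)  // +1 for the separator
            flush();
        if (!buffer_.empty()) buffer_ += '\n';               // newline-delimited JSON, for example
        buffer_ += json;
        if (std::chrono::steady_clock::now() - lastFlush_ >= interval_)
            flush();                                         // time-based flush (e.g. 500 ms)
    }

    void flush() {
        if (buffer_.empty()) return;
        send_(buffer_);                                      // one datagram instead of many
        buffer_.clear();
        lastFlush_ = std::chrono::steady_clock::now();
    }

private:
    std::size_t maxPayload_;
    std::chrono::milliseconds interval_;
    std::function<void(const std::string&)> send_;
    std::string buffer_;
    std::chrono::steady_clock::time_point lastFlush_;
};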
Wouldn't it make sense to just send the RTS to the Access Point rather than broadcast it? I understand why the Access Point broadcasts the CTS frame, so that other stations don't send packets and a collision doesn't occur.
In wireless networking there is a famous problem called the "hidden node problem". RTS, CTS and CTS-to-self, as used in 802.11, are solutions that address the hidden-node problem.
I suggest you read up on the hidden-node problem here: https://en.wikipedia.org/wiki/Hidden_node_problem
Why is RTS/CTS a broadcast rather than a unicast?
A broadcast will be received by all stations and access points in range. Both RTS and CTS have a field called "duration" that contains the time, in microseconds, for which the medium should be reserved. All the STAs and APs that see this RTS/CTS will update their NAV (network allocation vector, which is a virtual carrier-sensing mechanism). That means they will keep quiet for that many microseconds.
This avoids collisions.
If RTS/CTS were just directed to the AP, other STAs or APs in the vicinity wouldn't see them, which could result in collisions.
Hope it helps.
AFAIK, the RTS/CTS mechanism is a unicast sequence of 4 frames: RTS + CTS + DATA + ACK. All the stations (other than the intended target station) will extract only the header part from the RTS/CTS frames, to be precise from the CTS frame (they will not look into the packet details because it is a unicast packet), and get the duration field. Accordingly, these stations set their NAV timer and sit idle until the timer expires.
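As a simplified illustration of the NAV behaviour both answers describe (a sketch only; the struct and function names are made up, and real 802.11 duration handling has more cases than this):

#include <algorithm>
#include <cstdint>
#include <iostream>

// Virtual carrier sensing: any station overhearing an RTS or CTS reads the
// Duration field from the MAC header and defers for that long.
struct Station {
    std::uint64_t navExpiresAtUs = 0;   // network allocation vector, microseconds

    void onOverheardRtsOrCts(std::uint64_t nowUs, std::uint16_t durationUs, bool addressedToMe) {
        if (addressedToMe) return;      // the addressed station responds instead of deferring
        navExpiresAtUs = std::max(navExpiresAtUs, nowUs + durationUs);
    }

    bool mediumReservedByNav(std::uint64_t nowUs) const {
        return nowUs < navExpiresAtUs;  // stay quiet until the NAV expires
    }
};

int main() {
    Station sta;
    sta.onOverheardRtsOrCts(1000, 300, false);           // overheard CTS reserving 300 us
    std::cout << sta.mediumReservedByNav(1200) << "\n";  // 1: still deferring at t=1200 us
}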