Can a machine send n network messages/packets (to n destinations) simultaneously? Is there an upper bound on the level of parallelism, and what affects a network's parallelism?
More specifically, say there are 2 packets and four events: s1, r1 and s2, r2 denote sending/receiving packet 1 and sending/receiving packet 2. Does it matter whether we send asynchronously (s1, s2, ..., r1, r2) or synchronously (s1, ..., r1, s2, ..., r2)? Could the total latency be shortened in the asynchronous case?
Yes, it can. A NIC just transmits the frames the driver hands it, and does so as soon as it can; it doesn't care about destinations.
Higher layers (e.g. TCP) are responsible for retransmissions and have their own buffering. A NIC can usually hold several frames ready to be sent, but they stay in the NIC only briefly: as soon as the medium is free and the NIC has transmitted a frame without collisions, it can take the next frame queued for transmission.
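As a rough sketch of the two orderings (the destination addresses, port and timeout below are made-up placeholders), a minimal Python comparison with UDP sockets might look like this. In the asynchronous version all sends are handed to the kernel before any reply is awaited, so the per-destination round trips can overlap instead of adding up:

```python
import socket, time

# Hypothetical destinations; replace with real hosts/ports.
DESTS = [("192.0.2.1", 9999), ("192.0.2.2", 9999)]

def sync_send(payload: bytes) -> float:
    """s1...r1, s2...r2: wait for each reply before the next send."""
    t0 = time.monotonic()
    for dest in DESTS:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(2.0)
            s.sendto(payload, dest)
            try:
                s.recvfrom(2048)          # block until this peer answers
            except socket.timeout:
                pass
    return time.monotonic() - t0

def async_send(payload: bytes) -> float:
    """s1, s2, ..., r1, r2: fire all sends first, then collect replies."""
    t0 = time.monotonic()
    socks = []
    for dest in DESTS:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(2.0)
        s.sendto(payload, dest)           # returns as soon as the kernel queues it
        socks.append(s)
    for s in socks:
        try:
            s.recvfrom(2048)
        except socket.timeout:
            pass
        s.close()
    return time.monotonic() - t0
```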
You basically have three choices:
Point to point
Multicast
Broadcast
A point to point message goes from one source to one destination.
A multicast message goes from the source to the router, and from there gets distributed to the multiple recipients. There are some relatively intelligent switches (usually called "layer 2+" or something along those lines) that can handle multicasting as well, but your average "garden variety" switch usually can't/won't handle it.
A broadcast goes from the source to everything else on the local subnet. There are a couple of ways this can be done. Probably the most common is to send to the address 255.255.255.255. Broadcasting is supported directly by Ethernet, which has a "broadcast" MAC address of FF:FF:FF:FF:FF:FF.
Note that IPv6 no longer supports broadcasting, but does have an "all hosts" multicast group that's vaguely similar.
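For a concrete feel of the three options on the sending side, here is a minimal Python sketch (the addresses and port are placeholders; SO_BROADCAST must be enabled before sending to 255.255.255.255):

```python
import socket, struct

PORT = 5005
payload = b"hello"

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Point to point: one source, one destination (placeholder address).
s.sendto(payload, ("192.0.2.10", PORT))

# Broadcast: every host on the local subnet (requires SO_BROADCAST).
s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
s.sendto(payload, ("255.255.255.255", PORT))

# Multicast: only hosts that joined the group receive it (placeholder group).
s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 1))
s.sendto(payload, ("239.1.2.3", PORT))

s.close()
```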
I understand that it's different from a hub in that, instead of packets being broadcast to all devices connected to it, the switch knows exactly who requested the packet by looking at the MAC layer.
However, is it still possible to use a packet sniffer like Wireshark to intercept packets meant for other users of the switch? Or is this only a problem with Ethernet hubs that doesn't affect switches, due to the nature of how a switch works?
On a slightly off-topic side note, what exactly is classified as a LAN? For example, imagine two separate Ethernet switches are hooked up to a router. Would each switch be considered a separate LAN? What is the significance of having multiple LANs within the same network?
it knows exactly who requested the packet by looking at the MAC layer.
More exactly, the switch uses the MAC destination address to forward a frame to the port associated with that address. Addresses are automatically learned by looking at the MAC source address on received frames.
A switch is stateless, i.e. it has no memory of who requested which data. A layer-2 switch also has no understanding of IP packets, addresses or protocols. All a basic switch does is learn source addresses and forward by destination address.
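A tiny, hypothetical Python sketch of that learn-and-forward behaviour (real switches do this in hardware with a CAM table; the port names and flooding rule here are simplifications):

```python
# Hypothetical sketch of layer-2 learn-and-forward; not how any particular switch is built.
mac_table = {}  # MAC address -> port it was last seen on

def handle_frame(src_mac, dst_mac, in_port, all_ports):
    # Learn: remember which port the source address arrived on.
    mac_table[src_mac] = in_port
    # Forward: a known unicast destination goes to its port,
    # unknown or broadcast destinations are flooded to every other port.
    if dst_mac in mac_table and dst_mac != "ff:ff:ff:ff:ff:ff":
        return [mac_table[dst_mac]]
    return [p for p in all_ports if p != in_port]
```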
is it still possible to use a packet sniffer like Wireshark to intercept packets meant for other users of the switch?
Yes. You'll need a managed switch supporting port mirroring or SPANning. This doesn't intercept frames, it just copies them to the mirror port. If you need to actually intercept frames you have to put your interceptor in between the nodes (physically or logically).
With a repeater hub, every bit is repeated to every node in the collision domain, making monitoring effortless.
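Once a mirror/SPAN port is set up, capturing on it is no different from any other capture; for instance, a minimal scapy sketch (the interface name "eth0" is just a placeholder) could be:

```python
from scapy.all import sniff

# Capture 10 frames on the interface attached to the mirror/SPAN port
# and print a one-line summary of each.
sniff(iface="eth0", count=10, prn=lambda pkt: print(pkt.summary()))
```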
what exactly is classified as a LAN?
This depends on who you ask and on the context. A LAN can be a layer-1 segment/bus aka collision domain (obsolete), a layer-2 segment (broadcast domain), a layer-3 subnet (mostly identical with an L2 segment) or a complete local network installation (when contrasted with SAN or WAN).
Adding to @Zac67's answer:
Regarding this question:
is it still possible to use a packet sniffer like Wireshark to
intercept packets meant for other users of the switch?
There are also active ways to trick the switch into sending you data that is meant for other machines. By exploiting the switch's learning mechanism, one can send a frame with a spoofed source MAC address; the switch will then forward frames destined to that MAC to the sender's port (until someone else sends a frame with that MAC address).
This video discusses this in detail:
https://www.youtube.com/watch?v=YVcBShtWFmo&list=PL9lx0DXCC4BMS7dB7vsrKI5wzFyVIk2Kg&index=18
In general, I recommend the following video that explains this in detail and in a visual way:
https://www.youtube.com/watch?v=Youk8eUjkgQ&list=PL9lx0DXCC4BMS7dB7vsrKI5wzFyVIk2Kg&index=17
what exactly is classified as a LAN?
This is indeed one of the least well-defined terms in computer networking. With regard to the data link layer, a LAN can be defined as a segment, that is, a broadcast domain. In this case, two devices are regarded as part of the same segment if and only if they are one hop away from one another, i.e. they can exchange frames at layer 2.
In my college lab, all the PCs are connected via a hub. I want to capture data packets using Wireshark, but it only displays the interface of my own PC. How can I capture the packets of other PCs?
I've tried all the interfaces, and I can't get it to work.
Odds are you're connected to a switch rather than a hub. The problem there is that only frames addressed to your network card's hardware (MAC) address, plus broadcast frames, will be sent to your PC. The switch learns the hardware addresses of the devices plugged into it and forwards frames based on those addresses. This vastly increases the usable bandwidth of your network segment, but makes snooping on other traffic more difficult.
You would need to perform what's called ARP cache poisoning: essentially, trick every other computer connected to the switch into sending its traffic to you rather than to its true destination. You would then need to forward the packets that aren't actually for you on to the correct destination, otherwise you'll take down the entire segment you're on and people will get nosy.
This type of redirection is possible, but it seems like you'll need to do quite a bit more research and understand exactly what is going on before attempting it. To get started, look into the Address Resolution Protocol; understand what a "layer 2" switch is doing; find out how to inject and reroute packets on the network; think about the consequences of getting caught.
If you're serious about moving forward, check out http://www.admin-magazine.com/Articles/Arp-Cache-Poisoning-and-Packet-Sniffing for some starting tips.
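As a first, harmless step in that research, you can watch plain ARP resolution happen with scapy; the sketch below (the address and interface are placeholders, and it needs root privileges) just asks who owns an IP and prints the answering MAC:

```python
from scapy.all import Ether, ARP, srp

# Ask "who has 192.0.2.10?" on the local segment (placeholder address/interface)
# and print which MAC answers. This is ordinary ARP resolution, useful for
# understanding the protocol before reading about cache poisoning.
answered, _ = srp(
    Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst="192.0.2.10"),
    iface="eth0", timeout=2, verbose=False,
)
for _, reply in answered:
    print(reply[ARP].psrc, "is at", reply[ARP].hwsrc)
```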
One of the major factors that affect TCP performance in 802.11 ad hoc networks is the unfairness in the MAC. Could someone please illustrate for me what this "unfairness" means?
In ad hoc networks, you are usually doing multihop routing, and 802.11 CSMA/CA can manifest the "exposed terminal problem" in these situations. Consider a linear topology
... A <---> T <---> B ...
A and B are not in CSMA range. Suppose T is already transmitting a data stream to A. Now suppose a TCP stream starts getting routed through B. Because B is in carrier-sense range of T, it senses T's ongoing transmission and is essentially locked out of the channel. The TCP connection being routed through B will eventually time out.
Another possibility is the "hidden terminal problem". Consider the topology
A ---> X <--- B
A and B cannot CSMA with each other. Suppose A and B each try to send a TCP stream to X. Because they cannot sense each other, both win their respective channel-contention rounds and transmit, only to have their frames collide at X. This can be solved to some extent with RTS/CTS. But in general, the reason for poor TCP performance in wireless environments is that TCP uses a dropped packet as a congestion signal, i.e., a TCP source will cut its window and thereby drop its throughput. In wireless networks, dropped packets can be due to any number of transient things (e.g., collision, interference). A TCP source misinterprets these packet drops as a congestion signal, throttles its send rate and underutilizes the channel.
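A toy simulation of that last point (purely illustrative, with made-up loss rates and a highly simplified AIMD sender) shows how random wireless drops, misread as congestion, pull the average window down:

```python
import random

def aimd_throughput(loss_rate, rounds=1000, max_wnd=64):
    """Toy AIMD sender: +1 segment per round, halve the window on any loss.
    Returns the average window size, a stand-in for throughput."""
    wnd, total = 1.0, 0.0
    for _ in range(rounds):
        total += wnd
        if random.random() < loss_rate:   # a random wireless drop...
            wnd = max(1.0, wnd / 2)       # ...treated as congestion
        else:
            wnd = min(max_wnd, wnd + 1)
    return total / rounds

for p in (0.0, 0.01, 0.05):
    print(f"loss {p:.0%}: avg window ~ {aimd_throughput(p):.1f} segments")
```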
Another problem that can arise is due to the "capture effect". Again, consider
A ----> X <-------------B
Both A and B are in range of X, but B is farther away and thus has a lower received signal strength at X. Again, A and B cannot CSMA. In this case, X may "capture" the stronger transmitter A, i.e., its radio will decode A's frames but consider B's as noise (even though in the absence of A, X would go right ahead and decode B's frames). This sets up an unfair advantage for A if both are trying to route a TCP stream through X.
802.11's DCF also favors the last winner of the channel contention round. As a result, this gives a slight advantage to long-lived, bulk TCP transfers.
Note that all these problems affect all transport protocols, not just TCP. It's just that the way TCP is designed makes it react particularly poorly to these scenarios.
I've got a question about UDP transfer: how does the transfer time of a datagram differ depending on whether it is sent as broadcast or unicast (same datagram packet and same network)? Which conditions affect the transfer time of broadcast/unicast packets? And how does the time taken by the socket.send(packet) calls differ?
Thanks
PS: Wi-Fi is the network I'm working with
In terms of transmitting the frame, it is dependent upon the underlying MAC layer.
With Ethernet, we use CSMA/CD: the frame is transmitted and, if a collision is detected, the sender stops, backs off for a random time and retries (giving up only after repeated collisions).
With 802.11 (wireless), we use CSMA/CA. In this approach, sending unicast is more expensive (and takes more time), since it may perform an RTS/CTS (request to send/clear to send) exchange before sending the unicast frame, and each unicast frame must also be acknowledged (and retransmitted if the ACK is missing). For broadcast, 802.11 does none of this and is hence faster, but broadcast is also more unreliable compared to unicast frames.
It depends on the network, and it depends on what you consider part of the 'transfer time'. For sending on an Ethernet LAN (either wired or wireless), most of the sending stack will be the same; the only difference is in determining the Ethernet address to use, where the broadcast might be faster (since it uses the fixed broadcast address), while the unicast may have to do an ARP lookup to find the address. But if the address is already in the ARP cache, there's likely no difference.
Next, on the Ethernet itself: if it's wireless or bridged (shared) wired, there's no difference; it's just a frame sent to an address. If it's a switched Ethernet, however, the broadcast is somewhat more likely to be held up (it has to be replicated to every port and can be queued behind traffic on any of them, rather than just on the destination port), which may slow it down.
Finally, on the receiving end, a broadcast has multiple receivers while a unicast has only one. The broadcast receivers may well be of different speeds and load levels, so they vary in how long they take to process the packet. So if you need to wait for all of them to deal with it, it will likely be slower, but if you only need one, it may be faster.
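If you only want the sender-side cost of the send call itself, a quick Python sketch like this one (addresses, port and payload size are placeholders) will measure it; typically both calls take almost the same time, since each just hands the datagram to the kernel:

```python
import socket, time

PORT = 5005                     # placeholder port
UNICAST_DST = "192.168.1.42"    # placeholder peer address
BROADCAST_DST = "255.255.255.255"

def time_sends(dst, n=1000):
    """Average time per sendto() call to the given destination."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        payload = b"x" * 512
        t0 = time.perf_counter()
        for _ in range(n):
            s.sendto(payload, (dst, PORT))
        return (time.perf_counter() - t0) / n

print("unicast  :", time_sends(UNICAST_DST))
print("broadcast:", time_sends(BROADCAST_DST))
```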
If you have a switch with at least one subscriber to a multicast address, how much additional load would each additional subscriber add?
Example:
You have a 10G switch (with IGMP) with 10 servers and no other activity.
When Server1 subscribes to a 1G multicast feed, the switch will have 1G of load.
What would the load be after Server2 and Server3 subscribed?
Obviously traffic to the switch would not increase, but what about the switch's internal load?
How would the answer be different without IGMP?
The whole idea of multicast is that it is efficient. The presence of one subscriber downstream causes the switch to send an IGMP join request of its own upstream and pass incoming multicasts downstream, without duplication. The addition of further downstream subscribers has no effect at all except to increment an internal subscriber count for that group. When that goes back to zero it sends an IGMP leave request of its own upstream.
By 'without IGMP' you presumably mean without IGMP snooping on the switch: IGMP itself is how hosts signal group membership, so IP multicast presupposes it. Without snooping, the switch simply floods the group's traffic out of every port instead of only the subscribed ones.
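For context, this is what a 'subscriber' looks like in code: joining the group is what makes the host emit the IGMP membership report that the switch and router react to. A minimal Python sketch (group address and port are placeholders):

```python
import socket, struct

GROUP = "239.1.1.1"   # placeholder multicast group
PORT = 5004           # placeholder port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group makes the kernel send an IGMP membership report;
# an IGMP-snooping switch then adds this port to the group's forwarding list.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, addr = sock.recvfrom(2048)
print("got", len(data), "bytes of", GROUP, "traffic from", addr)
```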
Firstly, some background information for you.
The traditional definition of routers and switches are along the lines of:
Router: a device capable of routing a packet from one IP subnet to a different IP subnet
Switch: a device capable of switching a packet within the same IP subnet
However, this traditional definition no longer holds these days because we have switches that can route traffic from one IP subnet to another IP subnet and even perform complex operations such as QoS at wire speed.
Therefore it is often easier to redefine Routers and Switches as follows:
Router: a device that uses its CPU to route packets, often inspecting parts of the packet that are higher up the OSI stack.
Switch: a device with ASIC(s) (a.k.a switching chips) that switches/routes traffic at full wire speed. What this means is that if the switch has 24 1Gbps ports, it will be able to switch 24Gbps bi-directional traffic without dropping any packets.
Now, to answer your question: it is important to determine whether the ASIC in your switch is capable of handling multicast traffic or not. If so, adding "load" really isn't an issue, as long as you ensure that each switch port is not congested (e.g. 2Gbps of traffic trying to egress out of a 1Gbps port).
If the ASIC in your switch is NOT capable of handling multicast traffic, it is highly likely that the switch will simply send all multicast traffic up to the CPU, and it is then up to the software to determine where each packet goes. CPUs on switches are not powerful, because their primary role isn't to route/switch packets but to manage the switch (e.g. configure the ASIC so that packets get switched properly). So if your switch is sending packets up to the CPU, it will struggle; you won't get anywhere near 1Gbps of multicast via the CPU.
Without IGMP, switches, by default, will flood out the traffic on all ports. Again, this is not a problem for the switch itself because it can handle this at wirespeed. It may cause problems for other parts of the network because traffic is needlessly being duplicated.
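To put rough numbers on the example from the question (simple arithmetic; it assumes the feed arrives on a separate uplink port and that replication happens in the ASIC):

```python
feed_gbps = 1      # the 1G multicast feed from the example
servers = 10       # server-facing ports (the feed itself arrives on an uplink)
subscribers = 3    # Server1..Server3 have joined the group

ingress = feed_gbps                          # the feed enters the switch once
egress_snooping = feed_gbps * subscribers    # replicated only to joined ports
egress_flooded = feed_gbps * servers         # without snooping: every server port

print(ingress, egress_snooping, egress_flooded)   # -> 1 3 10 (Gbps)
```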
The reason for this long answer is that the phrase "10G switch" in your example is quite misleading; it led me to believe that you may be thinking a powerful CPU sits at the center of the switch, capable of performing 10Gbps bi-directional switching. This is simply not the case, which is why talking about "load" on a switch often makes little sense.
I hope this helps.