How much additional load does a multicast subscriber impose on a switch?

If you have a switch with at least one subscriber to a multicast address, how much additional load would each additional subscriber add?
Example:
You have a 10G switch (with IGMP) with 10 servers and no other activity.
When Server1 subscribes to a 1G multicast feed, the switch will have 1G of load.
What would the load be after Server2 and Server3 subscribed?
Obviously traffic to the switch would not increase, but what about the switch's internal load?
How would the answer be different without IGMP?

The whole idea of multicast is that it is efficient. The presence of one subscriber downstream causes the switch to send an IGMP join request of its own upstream and pass incoming multicasts downstream, without duplication. The addition of further downstream subscribers has no effect at all except to increment an internal subscriber count for that group. When that goes back to zero it sends an IGMP leave request of its own upstream.
I don't know what you mean by 'without IGMP'. IPv4 multicast group membership is signaled with IGMP, so 'UDP multicast without IGMP' is a contradiction in terms.
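To make that concrete, here is a minimal subscriber sketch in Python (the group address and port are hypothetical). Calling IP_ADD_MEMBERSHIP is what makes the kernel emit the IGMP membership report that the switch and upstream router react to:

import socket
import struct

GROUP = "239.1.1.1"  # hypothetical multicast group
PORT = 5000          # hypothetical port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group triggers an IGMP membership report; an IGMP-aware
# switch then starts forwarding the feed to this port, without duplication.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(65535)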

Firstly, some background information for you.
The traditional definition of routers and switches are along the lines of:
Router: a device capable of routing a packet from one IP subnet to a different IP subnet
Switch: a device capable of switching a packet within the same IP subnet
However, this traditional definition no longer holds these days because we have switches that can route traffic from one IP subnet to another IP subnet and even perform complex operations such as QoS at wire speed.
Therefore it is often easier to redefine Routers and Switches as follows:
Router: a device that uses the CPU to route packets, often inspecting parts of packets that are higher up the OSI stack.
Switch: a device with ASIC(s) (a.k.a. switching chips) that switches/routes traffic at full wire speed. What this means is that if the switch has 24 1Gbps ports, it will be able to switch 24Gbps of bi-directional traffic without dropping any packets.
Now, to answer your question, it is important to determine whether the ASIC in your switch is capable of handling multicast traffic or not. If so, adding "load" really isn't an issue, as long as you ensure that each switch port is not congested (e.g. 2Gbps of traffic trying to egress out of a 1Gbps port).
If the ASIC in your switch is NOT capable of handling multicast traffic, it is highly likely that the switch will simply send all multicast traffic up to the CPU, and it is then up to the software to determine where each packet goes. CPUs on switches are not powerful, because their primary role isn't to route/switch packets but to manage the switch (e.g. configure the ASIC so that packets get switched properly). Therefore, if your switch is sending packets up to the CPU, the switch will struggle; you won't get anywhere near 1Gbps of multicast via the CPU.
Without IGMP, switches, by default, will flood out the traffic on all ports. Again, this is not a problem for the switch itself because it can handle this at wirespeed. It may cause problems for other parts of the network because traffic is needlessly being duplicated.
The reason for this long answer is that the phrase "10G switch" in your example is quite misleading, and it led me to believe that you may be thinking a powerful CPU sits at the center of the switch, capable of performing 10Gbps bi-directional switching. This is simply not the case, which is why talking about "load" on a switch often makes little sense.
I hope this helps.

Related

Why am I unable to prioritize TCP traffic using ToS fields?

I am trying to prioritize TCP traffic using the ToS field in the IP header.
I am saturating the interface (Ethernet) by sending 1GB of data through iperf with the ToS field set to 0x10 (Minimize-Delay).
I then start another TCP client with default ToS (0).
Expectation:
My TCP client should not send data till iperf completes sending its data.
Result:
The data from my client is sent even though iperf is sending packets with higher priority.
I also tried to create the same scenario by creating 2 separate clients and allocating 0x10 and 0x08 ToS to respective clients using iptables.
I used :
iptables -A PREROUTING -t mangle -p tcp --sport 5000 -j TOS --set-tos Minimize-Delay
I am still not able to prioritize one client over the other, although I can see the packets marked with ToS in Wireshark.
I am using Ubuntu (14.04) with iptables version 1.4.21
Can someone kindly help me solve the issue?
Thanks
Varun
TL;DR
Simply setting ToS or DSCP markings on packets does nothing. You actually need to configure a device to do different things with different markings. If you want that to include queuing, you need to configure queues, and assign different markings to different queues.
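For illustration, this is essentially all an application does when it "sets ToS" (a minimal Python sketch; the server address is a placeholder). The marking by itself changes nothing about how the traffic is treated:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask the kernel to put ToS 0x10 (Minimize-Delay) in every IP header this
# socket sends. Nothing gets prioritized until some device (e.g. a Linux
# tc qdisc or a router queue) is configured to act on that value.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0x10)
sock.connect(("192.0.2.10", 5000))  # placeholder server address
sock.sendall(b"marked, but not prioritized")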
A More Complete Explanation
You want to use QoS. QoS is a huge subject, but I will try to explain a few things. A guide to using QoS on Linux can be found in the Traffic Control HOWTO.
ToS, which has really been supplanted by DSCP (Differentiated Services Code Point), is simply marking packets to differentiate the various packets for different treatment at some point. The ToS field was part of the original IPv4 packet specification, but there is nothing in the standard that mandates a device must use or respect that field.
QoS involves differentiating (marking) packets, and then doing something based on the marking. What you do with the markings can be things like shaping, policing, queuing (including priority queuing).
A hardware interface will have a FIFO queue, and that is the default and only queue, regardless of packet markings. The hardware is completely unaware of packet headers or markings.
Actually using the markings is usually done in a layer-3 network device, e.g. a router. For instance, you can configure different software queues for a router interface, and you can assign packets with different markings to different queues. Queues are relatively small, not like real buffers. Priority queues will be served before regular queues. Queues don't exist until you define them, and packets are not assigned to different queues unless you have configured rules to do that. You could assign BE (Best Effort, ToS 0) packets to a priority queue, and EF (Expedited Forwarding) packets to a low priority queue.
When a queue fills up, new packets destined for that queue will be dropped (called tail-drop). Tail-drop can be a problem for TCP because it can cause all TCP flows using a queue to become synchronized (global synchronization) where they back off and ramp up synchronously, alternately starving or flooding a queue. There are methods to try to prevent this, e.g. RED (Random Early Detection). RED will actually drop random packets in a queue. This is to force the various TCP flows using the queue to back off and ramp up on different schedules.
Many network switches will automatically assign BE (Best Effort, ToS 0) to anything coming into the switch, unless you have configured the switch to trust the markings on one or more interfaces. Routers will typically trust the markings, but they will not do anything with the markings unless you configure them to do so.
QoS will not work across the Internet, only within your network. You need to have a comprehensive set of QoS policies that are consistently implemented across your network. You may be able to negotiate, for a fee, for your ISP to respect some of your QoS markings and policies, but that only goes as far as your ISP. As your traffic leaves your network, or your ISP's network if you have an agreement with it, your QoS markings and policies will be completely ignored, and the packets will probably be re-marked to BE.

Writing client-server application in global network

I know, how to write a C# application that works through a local network.
I mean I know, how to make my client-side application access my server-side application in a single local network.
But I wonder: how do such apps as Skype, TeamViewer, and many others connect via the global network?
I apologise if this question is simple or obvious, but I couldn't find any information about this stuff.
Please help me, I'll be very grateful. Any information is accepted: articles, plain info, books, and so on...
The question is very broad, so I'll try to give a short overview.
These are the major differences between a LAN (Local Area Network) and a WAN (Wide Area Network):
Network quality:
A LAN is more or less stable; a WAN can suffer from network issues like:
Packet loss (you need to use a loss-tolerant transport: TCP, or UDP with retransmits or packet loss concealment)
Packet jitter (inter-packet intervals on arrival may differ a lot from those at the sender; the most common symptom is packet bursts)
Packet reordering
Packet duplication
Network connectivity
A WAN is less stable than a LAN, so you need to properly handle things like:
Stalled connections
Connection loss
Errors in the middle of a connection (if you use UDP, for example)
Addresses:
In a WAN you deal with varied network equipment between client and server (or between peers, in the case of peer-to-peer communication). You need to take into account:
NATs - most clients are behind a NAT and you need to get through it; the corresponding techniques are called "NAT traversal" (a minimal sketch follows this list)
Firewalls - an ISP may have its own rules about what a client can and can't do, so if you do something unusual, like a custom transport protocol, you may bump into ISP firewalls
Routing - especially multicast and broadcast communication; in the common case multicast cannot be routed across the Internet, and broadcasts are never routed, so you need to avoid this type of communication if you want to use a WAN
Maybe I have forgotten something, but these are the major points. You can read many articles about any of them.
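As a rough illustration of NAT traversal, here is a bare-bones UDP hole-punching sketch in Python. The peer's public address is a placeholder that would normally come from a rendezvous server, and this only works with typical cone NATs, not symmetric ones:

import socket

PEER = ("203.0.113.7", 40000)  # placeholder: the other peer's public address

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 40000))

# Sending first opens an outbound mapping in this side's NAT, so the
# peer's datagrams addressed to our public mapping can come back in.
sock.sendto(b"punch", PEER)
data, addr = sock.recvfrom(2048)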

In my peer to peer application, should I use multiple ports?

I am building a simple peer to peer application where about 8 participants all connect to each other (n*n). I will be using UDP with a reliability and ordering protocol layered on top. Each peer will be broadcasting a few KB of data per second.
It occurred to me that there are two ways of configuring the ports on each peer:
Each peer takes one port and all messages are received via that
Each peer takes a port for each other peer, and only communicates with a peer using its corresponding port
What are the advantages and disadvantages of each approach?
Using only one port to communicate with all peers is the best option. You create only one socket and use only one port to send/receive data with as many peers as you want. You can distinguish received data by looking at the source address of each packet. This approach means less complicated code and is more resource-efficient; a minimal sketch follows below.
Creating multiple ports has absolutely no advantage. It will complicate your code and use many more resources with no benefit, and the resource consumption will grow with the number of peers.
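Here is a minimal Python sketch of the single-socket approach (the port number is an arbitrary choice). The (ip, port) pair returned by recvfrom() is all you need to tell peers apart:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 9000))  # one port shared by all peers

peers = {}  # (ip, port) -> per-peer state
while True:
    data, addr = sock.recvfrom(2048)
    state = peers.setdefault(addr, {"received": 0})
    state["received"] += 1  # demultiplexed purely by source address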
I have never done anything with this model of application, but here are my few thoughts.
One big thing to consider will be whether any firewalls are in place between the peers. If so, a hole will need to be punched through for each listening port. This can be done either once (e.g. a router port-forward rule) or dynamically (UPnP, etc.), but you may not be able to count on automatic full-cone NAT to do this for you. If you are expecting any firewalls between your peers, I would recommend using a single port for ease of programming on your part, and identifying your peers strictly by their remote address or some other in-protocol identifier.
Using a port per peer will make identifying communication much simpler, but the number of connections grows as n(n-1)/2 with the number of participants. If you never expect more than a small number (e.g. 20), port-per-peer will work decently well without much effort.
Another option for you (possibly) may be using multicast. If all of your peers are on the same broadcast domain, this will reduce bus contention and may make your coding somewhat cleaner as well.
Hope this helps. I apologize if this isn't what you're looking for. Good luck!
Each peer takes one port and all messages are received via that
If each peer can get the source IP/source port of the incoming datagram (and I bet it can), this is enough to differentiate the peers.
Each peer takes a port for each other peer, and only communicates with
a peer using its corresponding port
See above; and most importantly, this contradicts your base idea of broadcasting in the first place. It just adds a level of complexity (and is probably not very scalable, even if for now you envision just 8 peers).
In your base requirement I think you may have a dilemma between:
broadcast everything to everyone,
but still you want a peer to be able to "only communicates with a peer", which is inherently unicast.
This raises some problems, as you already realized by asking the question.
I see 2 other problems:
Scalability-wise, broadcasting everything when you sometimes actually need unicast is going to put useless load on the network. This is not pretty.
The broadcast approach dictates UDP, but still you want reliable data transfer, so as you stated you'll have to add a "reliability and ordering protocol layered on top". This (not so easy) work would not be needed if only we could use TCP.
There is a third approach:
use broadcast UDP for each peer to announce itself on the network, so that other peers can...
...discover it and then establish a unicast TCP connection with this peer. No more reliability and ordering problems + reduced network load.
This approach is used in SSDP (Simple Service Discovery Protocol), part of UPnP. I do not suggest you use SSDP itself; it's probably bloated for what you want to do, since you said you wanted something simple.
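A rough Python sketch of that announce-then-connect pattern (the ports and message format are made up for illustration; a real peer would run both roles and also accept TCP connections on its own advertised port):

import socket

DISCOVERY_PORT = 9999  # hypothetical discovery port

def announce(tcp_port):
    # Broadcast which TCP port we accept connections on.
    ann = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    ann.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    ann.sendto(b"HELLO %d" % tcp_port, ("255.255.255.255", DISCOVERY_PORT))

def discover_and_connect():
    # Hear an announcement, then switch to unicast TCP, which removes the
    # need for a custom reliability/ordering layer on top of UDP.
    lst = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    lst.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    lst.bind(("", DISCOVERY_PORT))
    msg, (peer_ip, _) = lst.recvfrom(2048)
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect((peer_ip, int(msg.split()[1])))
    return tcp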
All in all, you first have to resolve your dilemma: decide and differentiate the data that really need to be broadcasted vs the unicast part. YMMV.
PS: with broadcast UDP also comes the problem that though OK on a LAN, this will not pass a router unless you use multicast routing. But that's another story.

Capturing data packets in closed LAN

In my college lab, all the PCs are connected via a hub. I want to capture data packets using Wireshark, but it only displays the interface of my own PC. How can I capture the packets of other PCs?
I've tried all the interfaces, and I can't get it to work.
Odds are you're connected to a switch rather than a hub. The problem there is that only packets intended for your network card's hardware (MAC) address and broadcast packets will be sent to your PC. The switch remembers the hardware address of devices plugged into it and performs packet forwarding based on those addresses. This vastly increases the potential bandwidth of your network segment, but makes snooping on other traffic more difficult.
You will need to perform what's called ARP cache poisoning. Basically you need to trick every other computer connected to the switch into sending its traffic to you rather than its true destination. You will then need to forward the packets not actually meant for you on to the correct destination, otherwise it will take down the entire segment you're on and people will get nosy.
This type of redirection is possible, but it seems like you'll need to do quite a bit more research and understand exactly what is going on before attempting it. To get started, look into the Address Resolution Protocol; understand what a "layer 2" switch is doing; find out how to inject and reroute packets on the network; think about the consequences of getting caught.
If you're serious about moving forward, check out http://www.admin-magazine.com/Articles/Arp-Cache-Poisoning-and-Packet-Sniffing for some starting tips.

Question about IP Multicasting?

Hi, I am creating a streaming application in which I am using IP multicasting.
How do I validate a client before adding it to the group?
Is this something I have to do with IGMP?
You don't do it with your application.
IGMP is an internet-layer protocol; it may not even reach your application.
Whenever a unit wants to receive multicast to a certain address, it sends an IGMP request to join a group. A router receives the request and remembers that this user wants to belong to this group.
Whenever the router receives a multicast packet destined for that address, it routes it to all the group members, possibly taking some access control restrictions into account.
All group manipulation is performed by routers. You just send your UDP packets to a multicast address (that is, an address in 224.0.0.0/4), and the routers decide whether to route them to a subscriber.
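For example, a sender needs nothing more than this (a minimal Python sketch with a hypothetical group address); note there is no "add client to group" call anywhere:

import socket

GROUP = ("239.1.1.1", 5000)  # hypothetical group address and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The TTL limits how many router hops the multicast packets may cross.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
sock.sendto(b"stream data", GROUP)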
If you want to limit destinations where your multicast packets go, you do it on routers.
You should understand, though, that the word "routes" above means that the router emits the packet on the appropriate interface with a multicast destination address in the Ethernet header and a multicast destination address in the IP header. An Ethernet switch attached to the interface, if any, will distribute the packet over all active ports. Since it knows nothing about internet routing, it will just see the broadcast/multicast bit set in the Ethernet header and act accordingly.
There are, though, some link-layer devices (Ethernet switches) that peek into network-layer headers and limit multicast to the subscribed units. That is called IGMP snooping. Some of them are also capable of controlling access.
OK, there is a legitimate need to control who can join a multicast group. The only way I can see that being done is by filtering IGMP packets inbound on the router interfaces. This would work if the list of "allowed subscribers" is sufficiently static, but if there's a lot of changes, it would rapidly become untenable.
If (and only if) there's administrative control all the way down to a "customer-placed" router, I suspect something could be done there to limit the groups that device has visibility of, but that is heavily dependent on the environment (in a "broadband and multicast video from a single provider" scenario, a contractual requirement to use a provider-managed DSL router would be possible).
In addition to Quassnoi's comments on how multicast works, I have to wonder... Why do you want to restrict multicast membership and/or validate the recipient before having it added to the group?
