I am designing a system which has multiple small embedded systems hosts communicating in a LAN. UDP multicast fits nicely for my purpose.
But I am worried about multicast support in commonplace routers. I need to deploy the system to normal households equipped with a Wi-Fi router, so I could encounter any kind of router. I will fall back to UDP broadcast if multicast turns out to be more trouble than it is worth.
To decide, I would be thankful for any data or experience on multicast support in today's commonplace routers:
Do all consumer routers sold today support multicast reasonably? Limitation to LAN is not an issue for me, I do not need multicast across the Internet.
How about older routers?
Are there any big issues in commonplace multicast implementations I need to be aware of (e.g. packet drops, configuration issues, etc.)?
Are you talking switches or routers? In a consumer setting I suspect switches. My experience is that they all support multicast, though not at wire speed. Also, the cheap ones tend to broadcast any multicast traffic to all ports (no IGMP snooping). Packet loss is definitely something you need to deal with; it can and will happen even on 'professional' networking gear.
Edit: as long as you are in a switched network, you typically don't need to configure anything.
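For reference, joining and sending to a group needs no router configuration at all on a sane switched LAN. Here is a minimal sketch with plain sockets; the group 239.255.0.42 and port 5007 are arbitrary placeholders I picked, not anything standard:

```python
import socket
import struct

GROUP = "239.255.0.42"   # arbitrary administratively scoped group (placeholder)
PORT = 5007              # arbitrary port (placeholder)

def receive_one():
    """Join the group on all interfaces and block for one datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # struct ip_mreq: group address + local interface (0.0.0.0 = any)
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, sender = sock.recvfrom(1500)
    print(sender, data)

def send(payload: bytes):
    """TTL 1 keeps the traffic on the local link."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(payload, (GROUP, PORT))
```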
In many scenarios the equipment either does not support IGMP snooping, or it is off by default. There are two problems:
Any wireless interface can be saturated by the traffic.
Poorly configured units may inadvertently route traffic out the default gateway, stalling legitimate traffic.
In either case, you will have your equipment discarded, as the cost of investigating will almost certainly outweigh the benefit received.
If your traffic has a limited rate and you are not concerned about the WiFi impact, you could use the local broadcast address to ensure delivery to recipients without impacting routed communications.
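For example, a limited-broadcast datagram (again with a placeholder port) never leaves the local link, because routers do not forward it:

```python
import socket

PORT = 5007  # placeholder port, as above

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
# 255.255.255.255 is the limited broadcast address; routers never forward
# it, so it stays on the local link. A subnet-directed address such as
# 192.168.1.255 behaves similarly on a typical home network.
sock.sendto(b"hello", ("255.255.255.255", PORT))
```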
You could also build a discovery mechanism into your nodes; it may be worthwhile to implement a unicast overlay to ensure that the traffic does not have an inadvertent impact.
A single sizeable group of customers with non-compliant devices requesting support will swamp any development costs, or costs of additional traffic, incurred by not implementing true multicasting.
In worst case scenarios, when the routers don't allow multicast traffic, I would encapsulate the multicast packets in unicast IP packets. That way the routers would handle them as normal unicast data. You might want to check out mrouted.
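As a rough sketch of the encapsulation idea (the group, port, and unicast peer address below are all hypothetical placeholders), a tiny relay can receive the multicast locally and re-send the payload as plain unicast:

```python
import socket
import struct

GROUP, PORT = "239.255.0.42", 5007   # placeholder group and port
PEER = ("198.51.100.7", 6000)        # hypothetical unicast relay target

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
recv.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
recv.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    data, _ = recv.recvfrom(1500)
    send.sendto(data, PEER)  # re-send the multicast payload as plain unicast
```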
Good luck
As I understand it, TCP is required for congestion control and error recovery, i.e. reliable delivery of information from one node to another, and it's not the fastest of protocols for delivering information.
Some routing protocols such as EIGRP and OSPF ride directly on top of IP. Even ICMP rides directly over IP.
Why is UDP even required at all? Is it only needed so that developers/programmers can identify which application an inbound packet should be delivered to, based on the destination port number contained within the packet?
If that is the case, then how is information from protocols that ride directly on top of IP delivered to the appropriate process when there is no port number present?
Why are voice and video sent over UDP? Why not directly over IP?
(Note that I thoroughly understand the use case for TCP. I am not asking why to use UDP over TCP or vice versa. I am asking why use UDP at all, and how some protocols can use the IP layer directly. What's the added advantage or purpose of UDP over IP?)
Your question makes more sense in terms of why is UDP useful (than why is UDP required).
UDP is a protocol recognized by the Internet Assigned Numbers Authority (IANA). UDP can be useful if you want to write a network protocol that's datagram-based and you want to play more nicely with Internet devices.
Routers can have rules to do things like drop any packet that doesn't make sense to them. So if you try to send packets between hosts separated by one or more routers using, say, an unassigned IP protocol number, the packets may well never be delivered as you intended. The same could happen to packets of an unrecognized UDP-based protocol, but that's at least one less door to worry about your packet making it through.
Internet endpoints (like hosts) may do similar filtering too. If you want to write your own datagram-based protocol on a typical host operating system, you're more likely to need to run your software as a privileged process, if not as a kernel extension, if you ride it as its own IP protocol than if you use UDP.
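To make that concrete, here's a small sketch: a raw IP socket (using protocol number 253, which RFC 3692 reserves for experimentation) typically needs elevated privileges, while the UDP equivalent does not:

```python
import socket

# Riding directly on IP means a raw socket, which most OSes only allow
# to privileged processes; plain UDP needs no special rights.
try:
    # Protocol number 253 is reserved for experimentation (RFC 3692).
    raw = socket.socket(socket.AF_INET, socket.SOCK_RAW, 253)
except PermissionError:
    print("raw IP socket denied: needs root/Administrator")

# The UDP equivalent works for any user:
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
```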
Hope this answer is useful!
First of all, IP and UDP are protocols at different layers: IP is by definition an internet-layer protocol, while UDP is a transport-layer protocol. Layers were introduced to simplify network protocol architecture and to separate concerns. Application-layer protocols are supposed to be built on the transport layer (with some exceptions).
The most popular transport protocols (in IP networks) are UDP and TCP. While TCP is feature-rich but comes with many tradeoffs, UDP is very simple but gives a lot of freedom, so it typically serves as a base for other protocols.
The main feature of UDP is multiplexing: ports allow multiple protocol instances (aka sockets) to coexist on the same node. This means that if you implement your own protocol over IP instead of UDP, either you won't be able to have multiple instances of your protocol on the same machine, or you'll have to implement multiplexing yourself.
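A small sketch of what that multiplexing buys you, with two arbitrary placeholder ports: two independent "protocol instances" coexist on one host, and the kernel demultiplexes datagrams by destination port:

```python
import socket
import threading
import time

def serve(port: int, name: str):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))  # each instance owns its own port
    data, peer = sock.recvfrom(1500)
    print(f"{name} on port {port} got {data!r} from {peer}")

# Two independent "protocol instances" on one host; the kernel delivers
# each datagram to the socket bound to the destination port.
threading.Thread(target=serve, args=(9001, "service-a")).start()
threading.Thread(target=serve, args=(9002, "service-b")).start()
time.sleep(0.1)  # give the servers a moment to bind

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", ("127.0.0.1", 9001))
client.sendto(b"ping", ("127.0.0.1", 9002))
```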
There are other features, like the checksum (optional in IPv4); segmentation of large datagrams is actually handled by IP fragmentation underneath. These features are not mandatory.
And as was mentioned in another answer, there are lots of middleboxes like routers, NATs and firewalls that can ruin the idea of a custom "right over IP" protocol, but that's more collateral damage than a feature of UDP.
I know how to write a C# application that works over a local network.
I mean, I know how to make my client-side application access my server-side application within a single local network.
But I wonder: how do apps such as Skype, TeamViewer, and many others connect via the global network?
I apologise if this question is simple or obvious, but I couldn't find any information about this.
Please help me, I'll be very grateful. Any information is welcome: articles, plain explanations, books, and so on...
The question is very broad, so I'll try to give a short overview.
These are the major differences between a LAN (Local Area Network) and a WAN (Wide Area Network):
Network quality:
A LAN is more or less stable; a WAN can have network issues like:
Packet loss (you need to use a loss-tolerant transport: TCP, or UDP with retransmits or packet-loss concealment; a minimal retransmit sketch follows this list)
Packet jitter (inter-packet intervals at the receiver may differ a lot from those at the sender; the most common manifestation is packet bursts)
Packet reordering
Packet duplication
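As promised above, a minimal stop-and-wait retransmit over UDP; the destination address, timeout, and retry count are arbitrary placeholders:

```python
import socket

def send_reliably(sock: socket.socket, payload: bytes, dest, retries: int = 5):
    """Send one datagram and retransmit until an ACK arrives or we give up."""
    sock.settimeout(0.5)
    for attempt in range(retries):
        sock.sendto(payload, dest)
        try:
            ack, _ = sock.recvfrom(64)
            if ack == b"ACK":
                return True          # delivered and acknowledged
        except socket.timeout:
            continue                 # lost or delayed: retransmit
    return False                     # give up after `retries` attempts

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_reliably(sock, b"hello", ("192.0.2.1", 5007))  # documentation address
```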
Network connectivity:
A WAN is less stable than a LAN, so you need to properly handle things like:
Stale connections
Connection loss
Errors in the middle of a connection (if you use UDP, for example)
Addresses:
In a WAN you deal with various network equipment between client and server (or between peers, in the case of peer-to-peer communication). You need to take into account:
NATs - most clients are behind NAT and you need to get through them. The corresponding techniques are called "NAT traversal" (see the sketch after this list)
Firewalls - your ISP may have its own rules about what a client can and can't do. So if you do something unusual, like a custom transport protocol, you may bump into ISP firewalls
Routing - especially for multicast and broadcast communication. In the common case multicast cannot be routed across the Internet, and broadcasts are never routed. So you need to avoid this type of communication if you want to work over a WAN
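Here is the NAT traversal sketch mentioned above: the simplest form of UDP hole punching, assuming both peers have already learned each other's public endpoint from some rendezvous service (the address and port below are hypothetical placeholders):

```python
import socket

# Both peers learn each other's public (ip, port) from a rendezvous
# service beforehand; the values below are hypothetical placeholders.
PEER_PUBLIC = ("203.0.113.10", 40001)
LOCAL_PORT = 40001

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", LOCAL_PORT))

# Each side sends first: the outgoing datagram opens a mapping ("pinhole")
# in its own NAT, so the peer's packets can come back in.
for _ in range(5):
    sock.sendto(b"punch", PEER_PUBLIC)

sock.settimeout(5.0)
try:
    data, addr = sock.recvfrom(1500)
    print("hole punched, got", data, "from", addr)
except socket.timeout:
    print("no reply; NAT may be symmetric, fall back to a relay")
```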
Maybe I forgot something, but these are the major points. You can read many articles about each of them.
I would like to do it with the reliability of TCP unicast.
I have used UPnP-based multicast (M-SEARCH), but many devices filter out multicast messages, so I end up losing them.
Also, how does Bonjour compare to UPnP?
For the second question: in terms of functionality and reliability, UPnP and Bonjour are very similar. The main difference is how prescriptive each is; UPnP is more prescriptive.
I was trying to understand how torrents work. After reading a lot on the web I now know the basics, but I have a very important question about how they work:
In torrents, how do peer-to-peer connections take place?
Almost all peers have private IP addresses (e.g. 192.168.x.x), so how do connections take place without a server? (As I have read, there is no server involved in torrents.)
Thanks a lot!
There are a few alternatives:
Peers behind NAT simply don't connect to other peers behind NATs. This creates two classes of peers, where the ones that are connectable will have an advantage when trading pieces, and typically achieve faster download rates.
Peers behind NAT use UPnP or NAT-PMP to set up port forwarding in order to be connectable by other peers (a sketch follows at the end of this answer)
Peers using uTP and peer exchange can support a simple hole-punching mechanism (uTorrent and libtorrent support this, for instance). A peer can introduce two of its connections to each other; they try to connect to each other at the same time, and if one of them has a full-cone NAT, they are very likely to succeed in establishing the connection.
Peers supporting DHT and uTP may use a relatively new feature where the port announced to the DHT is taken from their uTP UDP socket. Using the same socket for DHT and uTP increases the chance that a peer behind a full-cone NAT can accept incoming connections without UPnP or NAT-PMP set up, simply because the DHT traffic keeps a pinhole open on the NAT.
If you have a swarm of only peers behind symmetric NATs, nobody will be able to connect to anyone else, and BitTorrent will not work. In practice (at least in moderately large swarms) there are always some peers that are connectable.
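To illustrate the UPnP approach above, here is a minimal sketch of asking the gateway for a port mapping, assuming the miniupnpc Python binding is installed; port 6881 is just the traditional BitTorrent port, used here as an arbitrary example:

```python
import miniupnpc

u = miniupnpc.UPnP()
u.discoverdelay = 200          # ms to wait for gateway replies
u.discover()                   # find UPnP devices on the LAN
u.selectigd()                  # pick the Internet Gateway Device

# Ask the router to forward external TCP port 6881 to this host.
u.addportmapping(6881, "TCP", u.lanaddr, 6881, "bt-example", "")
print("external IP:", u.externalipaddress())
```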
If you have a switch with at least one subscriber to a multicast address, how much additional load would each additional subscriber add?
Example:
You have a 10G switch (with IGMP) with 10 servers and no other activity.
When Server1 subscribes to a 1G multicast feed, the switch will carry 1G of load.
What would the load be after Server2 and Server3 subscribed?
Obviously traffic to the switch would not increase, but what about the switch's internal load?
How would the answer be different without IGMP?
The whole idea of multicast is that it is efficient. The presence of one subscriber downstream causes the switch to send an IGMP join of its own upstream and to pass incoming multicasts downstream, without duplication. Adding further downstream subscribers has no effect at all except to increment an internal subscriber count for that group; when that count goes back to zero, the switch sends an IGMP leave of its own upstream.
I don't know what you mean by 'without IGMP'. There is no such thing as UDP multicast without IGMP. It is a contradiction in terms.
Firstly, some background information for you.
The traditional definition of routers and switches are along the lines of:
Router: a device capable of routing a packet from one IP subnet to a different IP subnet
Switch: a device capable of switching a packet within the same IP subnet
However, this traditional definition no longer holds these days because we have switches that can route traffic from one IP subnet to another IP subnet and even perform complex operations such as QoS at wire speed.
Therefore it is often easier to redefine Routers and Switches as follows:
Router: a device that uses the CPU to route packets, often inspects parts of packets that are higher up the OSI layer.
Switch: a device with ASIC(s) (a.k.a switching chips) that switches/routes traffic at full wire speed. What this means is that if the switch has 24 1Gbps ports, it will be able to switch 24Gbps bi-directional traffic without dropping any packets.
Now, to answer your question: it is important to determine whether the ASIC in your switch is capable of handling multicast traffic or not. If so, adding "load" really isn't an issue, as long as you ensure that each switch port is not congested (e.g. 2Gbps of traffic trying to egress out of a 1Gbps port).

If the ASIC in your switch is NOT capable of handling multicast traffic, it is highly likely that the switch will simply send all multicast traffic up to the CPU, and it will then be up to the software to determine where each packet goes. CPUs on switches are not powerful, because their primary role isn't to route/switch packets but to manage the switch (e.g. configure the ASIC so that packets get switched properly). Therefore, if your switch is sending packets up to the CPU, the switch will struggle. You won't get anywhere near 1Gbps of multicast via the CPU.
Without IGMP, switches, by default, will flood out the traffic on all ports. Again, this is not a problem for the switch itself because it can handle this at wirespeed. It may cause problems for other parts of the network because traffic is needlessly being duplicated.
The reason for this long answer is that the phrase "10G switch" in your example is quite misleading; it led me to believe that you may be thinking a powerful CPU sits at the center of the switch, capable of performing 10Gbps bi-directional switching. This is simply not the case, and talking about "load" on a switch therefore often makes little sense.
I hope this helps.