Layer dependence in networking

I have a question about network layers:
As we all know, in a layered architecture, layer N+2 should depend only on layer N+1 and know nothing about layer N. For example, in a typical application, the web layer should depend only on the business logic layer, not on the data access layer.
When it comes to computer networks, things seem to be different. At the application layer, a program has to know not only the transport layer (TCP port) but also the network layer (IP address).
This confuses me. What do you think about this?
Thanks for your help.

Generally you are right. Unfortunately, the borders between layers in networks are somewhat blurry, not just because we have a standard that is not used (OSI) and a de facto standard that does not enforce the idea you mentioned, but also because protocols are often not strictly bound to one layer and can do things on more than one of them. A good number of protocols were developed before the OSI model and before they were standardized, and by then it was too late to make radical changes. So there are protocols that are considered to sit between two layers (or on both layers), like MPLS and ARP, and protocols that are based on another protocol at the same layer, like OSPF, which runs on top of IP even though both are considered to be at L3.
What you mention is another example. The reason is that addressing is not done at the uppermost layer (the application layer) but at the network layer (for the host/network adapter) and the transport layer (for the process/application). So you need to know the IP address and the port number (and actually the protocol) to be able to address the remote application. That's where network sockets come in, as a gateway (or API) between the application and the network. So even if you are technically correct that this defies the principle of the layered model, you are not really doing anything on L3 or L4 (but you can ;) ). You don't need to fragment packets, handle retransmissions, or worry about error correction; you are just passing down the required addressing information when creating a socket.
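To make that concrete, here is a minimal sketch (in Python, using the standard socket module) of what "passing the addressing information down" looks like; example.com:80 is just a placeholder endpoint. The host, port and socket type are the only L3/L4 details the application supplies, and everything below that, from TCP segmentation to IP routing, happens inside the kernel.

import socket

# AF_INET selects IPv4 (network layer), SOCK_STREAM selects TCP (transport layer)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(5)
    sock.connect(("example.com", 80))    # host + port: all the addressing we ever provide
    print("local endpoint: ", sock.getsockname())   # (IP, port) the OS picked for us
    print("remote endpoint:", sock.getpeername())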
TCP/IP is more oriented towards feasibility of implementation, whereas OSI is more concerned with the standard than with the implementation of that standard. This has its good and bad sides. The ability to implement a protocol freely can be an advantage if you use that power well, and since you are not strictly bound to a specification you can do some things more efficiently... or fail epically. The drawbacks of mixing responsibilities are obvious. A great example is a protocol like H.323, which embeds IP addresses inside the user's payload, so if you want to do NAT, for example, you need to inspect the payload, change IP addresses, recalculate checksums and so on, instead of just handling the translation at the network layer.
Why are things still like this? Probably because there is no easy way to change any of it, given the sheer number of devices, protocols, applications, etc. that would need to be updated, and that takes a lot of time. Just look at the speed of IPv6 adoption, even though it has been around for more than 15 years.

Related

TCP/IP and OSI in practice

I was studying those protocols, and I understand the basics of each layer, but I can't understand how they work in practice.
For example: when an application makes a request, isn't it the application that fills in all that information (the destination, the port, the protocols used, etc.)? In other words, when my browser makes a request to a server, isn't it the browser, the application, that fills in the entire request layer by layer?
With that in mind, I can't see where the application layer gets separated from the other ones. Could you explain?
In practice, the layers up to 4 (TCP and UDP) are implemented inside the OS. The mechanism is called sockets. The application provides the IP address and port and selects the transport protocol; then it provides the data, and the OS handles all the filling in. So separating layers 2, 3, and 4 from the rest makes sense, and separating 2 and 3 from 4 is necessary for the network in between to work.
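A minimal sketch of that division of labour, using UDP this time and assuming Python's standard socket module; 203.0.113.5:9999 is a documentation placeholder address, not a real service.

import socket

# SOCK_DGRAM selects UDP; the application supplies only the destination and the payload
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"hello", ("203.0.113.5", 9999))  # UDP and IP headers are filled in by the OS
sock.close()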
The rest, IMHO, does not. In the TCP/IP model, for example, there is only one application layer on top of the transport layer. I do not know if anyone truly understands the intended functionality of the session and presentation layers in the OSI model. We all learn them, and some protocols are attributed to these layers for reasons that are not obvious (e.g., I don't know why TLS is layer 5 and not 6). IMHO, these layers could make sense if you were designing a particular class of applications, but if you consider all current Internet applications, I don't think these generalizations make sense.

minimum number of layers required for interconnection of two systems

When I was studying my computer networks chapters, I saw the OSI reference model, which recommends seven layers.
My question is: what is the minimum number of layers required for interconnection of two systems, and why? I mean, which ones can be excluded? (I know the standard was developed for interoperability, but I want to know for academic reasons.)
The minimum requirement is the physical layer. Connect the two devices with a wire and send signals between them without any higher-layer protocol. That's what you have when you make an intercom out of tin cans and a string. It's also essentially what happens between the central office (CO) and a phone in old-fashioned analog telephony.
The best answer is that all of them are required, but the question is really meaningless.
The OSI model is a conceptual model, i.e. it represents all that is needed to create a complete communication network between applications.
Let's take an OSI-modelled application using a serial protocol (RS-232 or a derivative):
The serial protocol defines either the first layer or the first two layers (depending on whether you consider the 7- or 8-bit serial packet a frame or not); however, in order to communicate, a networking stack using the serial protocol needs to define the rest of the layers:
It needs to state how applications communicate with the networking stack.
It needs to define how application data is represented on the network.
It needs to define how a communication session is started and terminated, etc.
Some of these definitions might be trivial; e.g., if the network consists of just two nodes connected by a single serial link, then all the routing and addressing definitions in layers 3 and 4 amount to: there are none, only two nodes can communicate.
The best you can ask is whether a networking stack conforms to the OSI model or not.
The answer to this, as EJP commented, will most probably be no.
One of the reasons the OSI model is taught is that it underlines a very important aspect still in use in communications today: that of modularity.
Another one is that it provides a good list of concerns/features a communication stack must support.
The OSI model was meant to describe an architectural model where each layer was modular, i.e. as long as the implementations followed the model you could mix and match them to create your networking stack: need more security? Change your presentation layer to one using encryption. Need more reliability? Use a transport layer with ECC, etc.
But none of the layers were optional.
Echoes of this allowed (allow?) computers to connect to file servers whether using TCP/IP, IPX or NetBIOS; allow you to access the internet over Ethernet or Wi-Fi, using ADSL or cable; and once IPv6 rolls out you'll still be using the same HTTP to communicate with Stack Overflow's servers.

What percentage of users are behind symmetric NATs, such that "p2p" traffic needs to be relayed?

We're implementing a SIP-based solution and have configured the setup to work with RTPProxy. Right now, we're routing everything through RTPProxy, as we were having some issues with media transport relying on ICE. If we're not mistaken, a central relay server is necessary for relaying streaming data between two clients if they're behind symmetric NATs. In practice, is this a large percentage of all consumer users? How much bandwidth would we save if we implemented proper routing to skip the relay server when it's not necessary? Are there better solutions we're missing?
In descending order of usefulness:
There is a direct connection between the two endpoints in both directions. You just connect and you are essentially done.
There is a direct connection between the two endpoints in one direction. In that case you connect in the direction that works, found by trying both.
Both parties are behind NATs of some kind.
Luckily, UPnP works on one end; you can then upgrade the connection to the scheme above.
UPnP doesn't work, but STUN does. Use it to punch a hole in the NAT. There are a couple of different protocols, but the general trick is to negotiate via a middleman that coordinates the NAT piercing (a sketch follows below this list).
You fall back to let another node on the network act as a relaying proxy.
If you implement the full list above, you have to give up on very few connections and don't have to spend much bandwidth on relaying at proxies. The BitTorrent protocol, with which I am somewhat familiar, usually stops at UPnP, but provides a built-in test for connectivity through the NAT.
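As a hedged illustration of the STUN step above, here is a minimal RFC 5389 Binding Request client in Python that asks a server which public address/port the NAT mapped us to. stun.l.google.com:19302 is used as an example public STUN server and may not always answer; a production client would also handle MAPPED-ADDRESS responses, retransmissions and IPv6.

import os
import socket
import struct

STUN_SERVER = ("stun.l.google.com", 19302)   # example public server
MAGIC_COOKIE = 0x2112A442                    # fixed value from RFC 5389

def stun_public_mapping(timeout=2.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    txn_id = os.urandom(12)
    # 20-byte header: type 0x0001 (Binding Request), length 0, magic cookie, transaction ID
    sock.sendto(struct.pack("!HHI12s", 0x0001, 0, MAGIC_COOKIE, txn_id), STUN_SERVER)
    data, _ = sock.recvfrom(2048)

    _msg_type, msg_len, _cookie, resp_txn = struct.unpack("!HHI12s", data[:20])
    if resp_txn != txn_id:
        raise ValueError("transaction ID mismatch")

    # Walk the attributes looking for XOR-MAPPED-ADDRESS (type 0x0020)
    pos = 20
    while pos + 4 <= 20 + msg_len:
        attr_type, attr_len = struct.unpack("!HH", data[pos:pos + 4])
        value = data[pos + 4:pos + 4 + attr_len]
        if attr_type == 0x0020 and value[1] == 0x01:            # IPv4 family
            port = struct.unpack("!H", value[2:4])[0] ^ (MAGIC_COOKIE >> 16)
            addr = struct.unpack("!I", value[4:8])[0] ^ MAGIC_COOKIE
            return socket.inet_ntoa(struct.pack("!I", addr)), port
        pos += 4 + attr_len + (-attr_len % 4)                    # attributes are 32-bit aligned
    raise ValueError("no XOR-MAPPED-ADDRESS in response")

if __name__ == "__main__":
    print("public mapping:", stun_public_mapping())

If the reported address/port changes depending on which destination you probe, you are most likely behind a symmetric NAT, and hole punching alone will not work; that is exactly the case where the relaying proxy in the last point is needed.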
One really wonders why IPv6 did not get implemented earlier - this is a waste of programmers' time.
Real world NAT types survey (not a huge dataset, though):
http://nattest.net.in.tum.de/results.php
According to Google, about 8% of the traffic has to be relayed: http://code.google.com/apis/talk/libjingle/important_concepts.html
A large percentage (if not the majority) of home users use NAT, as that is what xDSL/cable routers use to provide network access to the local network.
You can theoretically use UPnP to open ports and set up forwarding rules on the router to go through the NAT transparently. Unfortunately (or fortunately, depending on who you are), many users disable UPnP as a matter of course on their router and may not appreciate having to add forwarding rules manually.
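For the UPnP route, something along these lines is typical, assuming the miniupnpc Python bindings (pip install miniupnpc) are available and the router actually has UPnP/IGD enabled; port 5060 is just an arbitrary example for SIP signalling.

import miniupnpc

upnp = miniupnpc.UPnP()
upnp.discoverdelay = 200                  # ms to wait for discovery replies
devices = upnp.discover()                 # broadcast an IGD discovery request
upnp.selectigd()                          # pick the Internet Gateway Device

print("devices found:", devices)
print("LAN address:  ", upnp.lanaddr)
print("WAN address:  ", upnp.externalipaddress())

# Map external UDP port 5060 to this host; this is exactly the step that fails
# when the user has disabled UPnP on the router.
ok = upnp.addportmapping(5060, "UDP", upnp.lanaddr, 5060, "SIP example", "")
print("mapping added:", ok)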
What you might be able to do (and what Skype does, AFAIK) is have some of the users with clear network paths and enough bandwidth act as relay nodes. Apart from the routing and QoS issues, you would at least have to find some way to ensure the privacy of any relayed data from everyone, including the owner of the relay node. In addition, there might be legal issues to settle with this approach, apart from the technical ones.

How are network protocols implemented?

I know that a protocol is a set of rules that governs communication between two computers on a network, but how are those rules implemented on a computer? Is a protocol basically a piece of code or, in other words, software?
Protocols are generally built on top of each other. At the risk of sounding pedantic, here's an example of a protocol stack and where/how each piece is implemented:
Application protocol - the way a particular application talks to another instance of itself or to a corresponding server; this is implemented in the application code or in a shared library (a short sketch follows at the end of this answer)
TCP (or UDP, or another transport) - the way that information is sent at the binary level, split up into usable chunks and then reassembled at the destination; this is usually implemented as part of the operating system, but it is still software
IP - the way that information (having already been split or truncated by something like TCP or UDP) makes its way from one place to another by being routed over one or more "hops"; this is always software, but is sometimes implemented in the OS and sometimes in the network device (your LAN card, for example)
Base-T (Ethernet), Token Ring, etc. - here we are physically getting into how the hardware talks: which wire corresponds to which type of signal; this is always implemented in hardware
Electricity/photons - the laws that govern (or at least define) how electrons (or photons) flow through a conductive material or through the air; this is usually implemented in hardware ;)
In a sense, these are all "protocols" (a set of rules or expected behaviors that allow communication to take place), and they're built on one another.
Bear in mind that (aside from electricity) this is not an exhaustive list of the sort of protocols that exist at any of these layers!
Edit Thanks to dmckee for pointing out that electricity isn't the only physical process used in networking ;)
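To illustrate the first item in the list above (the application protocol), here is a hedged sketch of a toy application protocol, a made-up length-prefixed framing scheme, implemented entirely in application code on top of the TCP service the operating system exposes through sockets.

import socket
import struct

def send_msg(sock: socket.socket, text: str) -> None:
    """Application-protocol rule (ours, invented): each message is a
    4-byte big-endian length followed by that many bytes of UTF-8 text."""
    payload = text.encode("utf-8")
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_msg(sock: socket.socket) -> str:
    """Read one framed message; the OS's TCP stack delivers the bytes in
    order, while the application protocol defines where one message ends."""
    (length,) = struct.unpack("!I", _recv_exactly(sock, 4))
    return _recv_exactly(sock, length).decode("utf-8")

def _recv_exactly(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection mid-message")
        buf += chunk
    return buf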
Networking protocols are not pieces of code or software; they are only a set of rules. When software uses a specific networking protocol, that software is known as an implementation. There can be many different software implementations of the same protocol (e.g., Windows and UNIX have different TCP/IP implementations). It is possible to understand networking protocols without any knowledge of programming.
EDIT: How are they implemented? Here's a paper on taking an abstract specification of a protocol and implementing it in C. You'll see that less strict protocols leave out certain details that programmers have to guess at, which makes some implementations incompatible with others.
A network protocol is basically like a spoken language. It is implemented by code that sends and receives specially prepared messages over the network/internet, much like the vocal cords you need to speak (the network and hardware) and the brain needed to actually understand what someone said (the protocol stack/software).
Sometimes protocols are implemented directly in hardware for speed reasons (like the Ethernet protocol for LANs), but software/code is always required to do something useful with a protocol.
This might be interesting for you:
The OSI Model
Protocol (Computing)
Software implements the rules defined in the protocol; some protocols are formally defined and some informally.
A protocol is a set of rules governing the communication between two entities.
In the computer/programming context, a protocol is a set of rules governing the communication between two programs.
In the computer network context, a protocol is a set of rules governing the communication between two programs over a network.
In computers, in the end, everything is embodied in code...
Protocols are basically sets of rules. The way to implement them is to first draw a state machine diagram, as it tells you completely what the current state is, how the state changes on the basis of input, and which output actions are performed; a toy example is sketched below.
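As a sketch of that approach, here is a deliberately simplified, made-up subset of a TCP-style connection state machine expressed as a transition table in Python; the real diagram in RFC 793 has many more states and events.

# (state, event) -> (next state, action to perform)
TRANSITIONS = {
    ("CLOSED",      "active_open"):  ("SYN_SENT",    "send SYN"),
    ("SYN_SENT",    "recv_syn_ack"): ("ESTABLISHED", "send ACK"),
    ("ESTABLISHED", "close"):        ("FIN_WAIT",    "send FIN"),
    ("FIN_WAIT",    "recv_ack"):     ("CLOSED",      None),
}

def step(state, event):
    """Return (new_state, action) for an input event, exactly as the diagram dictates."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} is not allowed in state {state!r}")

state = "CLOSED"
for event in ["active_open", "recv_syn_ack", "close", "recv_ack"]:
    state, action = step(state, event)
    print(f"{event:14} -> {state:12} ({action})")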
Your answer is a very short one:
BY READING THE RFC.
The main networking problem is sharing data between computers. Every networking protocol tries to solve a little part of that major problem. Some of the protocols are implemented as software, some others as hardware. In short, protocols, like algorithms, can be implemented in many programming languages.
Back to TCP: it is implemented by the operating system.

Practical implications of OSI vs TCP/IP networking

I'm supposed to be setting up a 'geolocation-based', IPv6, wireless mesh network to run on Google Android.
I found what seems to be a good app to support the meshing:
http://www.open-mesh.net/wiki/batman-adv
"Batman-advanced is a new approach to
wireless networking which does no
longer operate on the IP basis. Unlike
B.A.T.M.A.N, which exchanges
information using UDP packets and sets
routing tables, batman-advanced
operates on ISO/OSI Layer 2 only and
uses and routes (or better: bridges)
Ethernet Frames. It emulates a virtual
network switch of all nodes
participating. Therefore all nodes
appear to be link local, thus all
higher operating protocols won't be
affected by any changes within the
network. You can run almost any
protocol above B.A.T.M.A.N. Advanced,
prominent examples are: IPv4, IPv6,
DHCP, IPX."
But other members of my team have said it's a no-go because it operates on OSI rather than TCP/IP. This was the first I'd heard of OSI, and I'm wondering how much of a problem this is. What are the implications for mesh network apps that could be developed on top of it? Considering that Android is relatively new, we don't need to worry too much about compatibility with existing apps, so does it matter?
I haven't spent a lot of time working with networks, so please put it in layman's terms.
"You can run almost any protocol above B.A.T.M.A.N. Advanced, prominent examples are: IPv4, IPv6, DHCP, IPX."
"But other members in my team has said it's a no-go because it operates on OSI, rather than TCP/IP. "
The other members of your team are confused by the buzzword-fest in BATMAN.
The "IP" of TCP/IP is IPv4 (or IPv6). So BATMAN supports TCP/IP directly and completely.
There's no conflict of any kind. Just confusion.
They're probably referring to the OSI model, which is a commonly used way of distinguishing between network layers. I'm not sure it's a useful way of looking at things, but it's taught in every networking course on the planet.
OSI level 2 is the data link layer, which operates immediately above the actual physical level. Basically, it's in charge of flow control, error detection, and possibly error correction. The data link layer is strictly "single hop": it's only concerned with point-to-point data transfers, not with multi-hop transfers or routing.
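For a concrete feel of the error-detection part, here is an illustrative sketch using Python's zlib.crc32 as a stand-in for the CRC-32 frame check sequence that Ethernet appends to each frame (the on-wire bit ordering and complement details differ, so this shows the idea rather than the exact Ethernet FCS).

import zlib

frame_payload = b"example layer-2 payload"
fcs = zlib.crc32(frame_payload)                    # sender computes the checksum...
frame = frame_payload + fcs.to_bytes(4, "big")     # ...and appends it to the frame

received_payload, received_fcs = frame[:-4], int.from_bytes(frame[-4:], "big")
print("frame OK:", zlib.crc32(received_payload) == received_fcs)  # False if bits flipped in transit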
If they're actually referring to the OSI networking protocols themselves, run screaming as fast as you can. OSI was notoriously hard to implement, and I've never heard of an actual working installation. See the Wikipedia article for the gory details.
The OSI model and the OSI protocols are different.
The OSI model is a way of breaking things down: physical, link, network, transport, session, presentation, application. OSI protocols are protocol implementations that map directly to those layers in the model.
The model is a way of looking at things. It mostly makes sense, but it breaks down at the higher levels. For example: what does a presentation layer really do?
During the '90s, OSI was (in some circles) thought to be the future, but was actually the downfall of some companies, and wasted the resources of many others. For example, DECnet Phase V was Digital's insanely complex implementation of an OSI stack that met government OSI requirements, but was run over by the TCP/IP steamroller.
The test is: What are the bytes on the wire? In this case it is UDP over IP, not the OSI equivalent, which was CLNP.
Having said all that, if it is a layer-two protocol, it will probably have scalability problems precisely because it is a layer-two protocol. Fine for a small number of nodes, but if you're trying to scale, you need a better solution.
"ISO/OSI Layer 2" does not mean the OSI protocols. It refers to the "Seven Layer" model of network stacks. It means the Data Link layer.
The layers are: Physical, Data Link, Network, Transport, Session, Presentation, Application.
OSI is a model, not a protocol like IP or TCP. What your team seems to be saying is that the mesh won't be using IP. I suspect they are wrong, as the text you quoted states that the BATMAN protocol is capable of supporting IPv4 and IPv6, and if that is the case you'd need a very strong reason to use anything else.
