Since all the devices on the internet talk without presetting a baud rate, it seems they should all use the same baud rate. If so, what is the baud rate of devices communicating over Internet Protocol?
Internet Protocol (IP) is at the Network layer in the OSI model. Concerns about physical transmission (such as baud rate) would be considered in the Physical and Data Link layers.
The answer to the more general question, "How does IP manage across heterogeneous physical connections with differing speeds?", is roughly: it doesn't. Flow control happens at higher layers of the stack, such as TCP.
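One concrete place to see this is the socket receive buffer: the kernel buffer for a TCP socket bounds the window the receiver advertises, so the sender never outruns the receiver regardless of the link speeds along the path. A minimal sketch (exact buffer sizes are OS-dependent):

```python
import socket

# The kernel receive buffer for a TCP socket bounds the window the receiver
# advertises to the sender; this is flow control, independent of any link's
# physical speed. Default sizes vary by operating system.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"default receive buffer: {rcvbuf} bytes")

# Applications can request a larger buffer (the OS may round or cap the
# value), which in turn permits a larger advertised window on a fast path.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)
s.close()
```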
Nowadays I am working on a project involving protocol communication between two FPGAs.
While reading about TCP/IP over Ethernet, I came across the receive window, which is the amount of data the computer can accept.
And there is a relationship between the receive window and the data transmission rate.
But in my project I connect the two FPGAs with the Aurora protocol (Xilinx), not TCP/IP. I wonder: is there something like a receive window for a protocol between two FPGAs?
I am not very knowledgeable about electronics or networks.
Thanks
I am not very familiar with the Aurora protocol (link level), but it is not directly comparable to TCP/IP, which is a higher-level protocol stack. See the OSI model.
The TCP sliding-window mechanism helps provide reliable transmission by controlling the flow of packets between sender and receiver. Ethernet, which is usually the link layer underneath TCP/IP, has its own flow-control mechanisms (e.g. IEEE 802.3x PAUSE frames).
Check section 3 of this document, which might give you some insight.
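To make the receive-window idea concrete, here is a toy simulation (plain Python; all names are my own invention, not the TCP or Aurora API): the sender may only have `window` unacknowledged bytes in flight, so a receiver that drains slowly directly throttles how much the sender can transmit at once.

```python
def transfer(data: bytes, window: int, chunk: int = 4):
    """Toy sliding-window transfer. Returns (received, peak_in_flight)."""
    pos, peak = 0, 0
    in_flight: list[bytes] = []   # segments sent but not yet acknowledged
    received: list[bytes] = []
    while pos < len(data) or in_flight:
        # Sender side: transmit while the advertised window has room.
        while pos < len(data):
            seg = data[pos:pos + chunk]
            if sum(map(len, in_flight)) + len(seg) > window:
                break             # window full: the sender must stall
            in_flight.append(seg)
            pos += len(seg)
        peak = max(peak, sum(map(len, in_flight)))
        # Receiver side: consume one segment per "tick" and acknowledge it.
        if in_flight:
            received.append(in_flight.pop(0))
    return b"".join(received), peak

data = b"0123456789" * 4           # 40 bytes
small = transfer(data, window=4)   # at most one 4-byte chunk in flight
large = transfer(data, window=16)  # up to four chunks in flight
print(small[1], large[1])
```

A larger window lets more data sit "on the wire" between acknowledgements, which is why the window size and the achievable data rate are related.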
When using a specific datalink layer technology, for example Ethernet, do we have limitations of the used medium (i.e. Ethernet). Can we use radio connections to connect two LANs over Ethernet?
Ethernet (IEEE 802.3) is a series of layer-1/2 protocols used on various media. I don't believe there is currently an approved standard for ethernet over radio. One obstacle is that radio is a shared medium, and classic ethernet must be able to transmit and simultaneously listen for collisions on a shared medium (CSMA/CD), which the limitations of radio prevent.
The IEEE has defined several other standards for radio LAN communications (802.11, 802.15, 802.16, etc.). These standards are not ethernet, but completely different LAN protocols. The best known are Wi-Fi (802.11) and Bluetooth (802.15.1).
I'm a novice in this area looking for clarification. I believe that CDMA would be classified as part of the physical layer, so what is used for the data link layer (according to the OSI model) in cellular networks? Is TCP/UDP used in cellular networks? If so, in what capacity?
On a CDMA network (and some others, such as GPRS and HSPA), PPP is used at the Data Link Layer (layer 2).
TCP/UDP (or more generally, IP) is indeed used on CDMA networks, mostly for connecting to the CDMA provider's ISP network for Internet access by phones and "data sticks".
These data sticks usually provide an emulated modem on a serial port over USB, which is used in a very similar manner to dial-up modems of days gone by. You'd use the same "AT commands" to establish a connection, the only difference being the relatively high speed of the emulated serial port.
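As a hedged illustration (exact commands vary by modem and carrier, and the `"apn"` string below is a placeholder), a session over the emulated serial port looks much like a classic dial-up chat script:

```
AT                          # attention: check the modem responds
OK
AT+CGDCONT=1,"IP","apn"     # define a packet-data context; APN is carrier-specific
OK
ATD#777                     # "dial" the packet-data service (#777 is the common
CONNECT                     #  CDMA convention; GPRS/UMTS modems use *99#)
~...PPP frames follow...~   # PPP, the layer-2 protocol, now runs over the link
```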
I was wondering if WebSocket is layer 7, since the application is actually the browser.
WebSocket depends on TCP (OSI layer 4), and only the handshake phase is initiated over HTTP (OSI layer 7), although it typically uses TCP port 80 (or 443 for wss).
Judging by its runtime behavior, I would say WebSocket is a special OSI layer-7 protocol. Then we can put SSL/TLS into layer 6 (see Wikipedia), and the session handling inside the browser into layer 5.
It is better to understand the layering using the TCP/IP model rather than the OSI model. WebSocket layers on top of TCP (the transport layer in the TCP/IP model), and one can layer an application-layer protocol on top of WebSocket.
HTTP, SSL, HTTPS, WebSockets, etc. are all application layer protocols.
But the OSI protocol stack doesn't apply to TCP/IP, which has its own layer model: same names, different functions. It isn't helpful to keep using the obsolete OSI stack as though it actually reflected any reality. It doesn't.
Only the handshake is interpreted by the HTTP server, via an Upgrade request. Apart from that, WebSocket is an independent TCP-based protocol. So I would say host layers 4 and 7.
https://www.rfc-editor.org/rfc/rfc6455#page-11
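The handshake the answers above refer to is plain HTTP: the client sends an `Upgrade: websocket` request carrying a `Sec-WebSocket-Key` header, and the server proves it understood by hashing that key with a fixed GUID (RFC 6455, section 1.3). A minimal sketch of the server-side computation, using the RFC's own worked example as input:

```python
import base64
import hashlib

# Fixed GUID specified by RFC 6455 for the WebSocket opening handshake.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept header value for a handshake."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Example Sec-WebSocket-Key value taken from RFC 6455, section 1.3.
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))
# → s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

Everything after the server's `101 Switching Protocols` response is WebSocket framing over the raw TCP connection, which is why it reads as a layer-4-plus-7 hybrid.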
L1 does not include a map of where a cable is buried in the ground (how deep, where), nor which cable carries the wires delivering the information, nor where a wire is laid within the cable itself; nor does it dictate how the cable is marked. L1 is only the physical layer, not where and how the wires are laid. So an L0 is needed.
L1: "The physical layer is responsible for the transmission and reception of unstructured raw data between a device and a physical transmission medium. It converts the digital bits into electrical, radio, or optical signals. Layer specifications define characteristics such as voltage levels, the timing of voltage changes, physical data rates, maximum transmission distances, modulation scheme, channel access method and physical connectors. This includes the layout of pins, voltages, line impedance, cable specifications, signal timing and frequency for wireless devices. Bit rate control is done at the physical layer and may define transmission mode as simplex, half duplex, and full duplex. The components of a physical layer can be described in terms of a network topology. Physical layer specifications are included in the specifications for the ubiquitous Bluetooth, Ethernet, and USB standards. An example of a less well-known physical layer specification would be for the CAN standard."
What steps does an Ethernet device perform to connect to a network?
Does it start communication at its lowest or its highest supported speed?
To make it clearer, here is an example:
If device A is currently set up at 10 Mbps and is just connected to network router R, which supports both 10 Mbps and 100 Mbps, which device (A or R) will communicate first, and at what speed? If device A, how will the router understand the speed of the transmission sent by device A?
You seem to be asking about Ethernet autonegotiation, defined in IEEE 802.3 Clause 28. Autonegotiation resolves the link using this set of priorities, highest first:
1000BASE-T full duplex
1000BASE-T half duplex
100BASE-T2 full duplex
100BASE-TX full duplex
100BASE-T2 half duplex
100BASE-T4
100BASE-TX half duplex
10BASE-T full duplex
10BASE-T half duplex
It doesn't matter which device sends autonegotiation pulses first; the bottom line is that IEEE defines a very specific time window for both of them to finish autonegotiation.
Whether a device can actually link at these speeds depends on whether both Ethernet PHYs support the mode, and whether the channel (i.e., the wiring) has sufficient electrical bandwidth: each PHY conducts a signal-to-noise-ratio test to determine whether the channel has enough capacity for the desired speed.
Reading this Ethernet PHY application note (National Semiconductor) may be boring, or full of excitement depending on your tolerance for implementation details.