Transmitting data rate and Receive Window Size - networking

I am working on a project involving protocol communication between two FPGAs.
While reading about TCP/IP over Ethernet, I came across the receive window, which is the amount of data a computer can accept.
There is also a relationship between the receive window and the transmit data rate.
But in my project I connect the two FPGAs with the Xilinx Aurora protocol, not TCP/IP. Is there something like a receive window for a protocol between two FPGAs?
I am not very experienced with electronics or networking.
Thanks

I am not very familiar with the Aurora protocol (link level), but it is not directly comparable to TCP/IP, which is a higher-level protocol. See the OSI model.
The TCP sliding window mechanism helps provide reliable transmission by controlling the flow of packets between sender and receiver. Ethernet, which is usually the link layer under TCP/IP, has its own flow-control mechanisms.
Check section 3 of this document, which might give you some insight.
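As a rough illustration of the window/rate relationship you mention: a sliding-window sender can have at most one window of unacknowledged data in flight per round trip, so throughput is bounded by window size divided by round-trip time. A minimal Python sketch (the numbers are made up for the example):

def max_throughput_bps(window_bytes, rtt_seconds):
    # At most one full window per round trip can be in flight.
    return window_bytes * 8 / rtt_seconds

# A 64 KiB receive window over a 10 ms RTT caps throughput at about
# 52 Mbit/s, no matter how fast the physical link is.
print(max_throughput_bps(64 * 1024, 0.010))  # 52428800.0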

What is "differentiated service: congestion experienced"?

I was trying to send packets with scapy and capture them in Wireshark to see what they look like:
from scapy.all import IP, sr1
sr1(IP(src='192.168.1.35', dst='192.168.1.1', tos=3)/'my packet')
When I sent the packet and captured it in Wireshark, I got the full IP stack, but this piece of it got my attention:
Differentiated Services Field: 0x03 (DSCP: CS0, ECN: CE)
0000 00.. = Differentiated Services Codepoint: Default (0)
.... ..11 = Explicit Congestion Notification: Congestion Experienced (3)
What is "Explicit Congestion Notification: Congestion Experienced"?
Try to read RFC 3168.
TCP/IP is a complex suite of protocols. The congestion notification you see is Explicit Congestion Notification (ECN), a feature implemented by your router (or some router on the source-destination path); it is a good thing, an extra capability. The message you report does not necessarily have anything to do with your networking experiment. It can be a side effect of overall traffic on your (internal or external) network, and it tells ECN-aware systems to slow their transmissions, among other things.
[ECN notifies networks about congestion with the goal of reducing packet loss and delay by making the sending device decrease the transmission rate until the congestion clears, without dropping packets. RFC 3168, The Addition of Explicit Congestion Notification (ECN) to IP, defines ECN].
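Note that the ECN field is simply the low two bits of the ToS byte you set in scapy, so tos=3 marks the packet as Congestion Experienced yourself. A quick way to decode it (plain Python, values taken from your capture):

tos = 3
dscp = tos >> 2   # upper 6 bits: Differentiated Services Codepoint
ecn = tos & 0x3   # lower 2 bits: ECN; 0b11 = Congestion Experienced
print(dscp, ecn)  # 0 3, matching Wireshark's "DSCP: CS0, ECN: CE"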
Reading RFCs is a good habit.
TCP/IP is an endless deep ocean. Getting a good foundation is not easy. But try hard, there is light at the end of the tunnel.

Packet loss while using UDP to fetch data from memcached

I have heard that many companies like Facebook use UDP to fetch data from memcached. I have a doubt: how do they make sure there is NO packet loss and that packets are received in the required order? As we know, TCP provides such facilities but UDP does not.
The OSI model has 7 layers, which are:
Application Layer
Presentation Layer
Session Layer
Transport Layer
Network Layer
Data Link Layer
Physical Layer
Splitting things into layers is a very good approach to solving problems, but it doesn't mean you have to do all the network operations in the network layer.
As you mentioned, TCP provides feedback to end systems while UDP doesn't, but UDP has its own advantages. First of all, a UDP datagram is simpler than a TCP segment. Also, most huge systems like Facebook use UDP because using TCP for that kind of system would not be very clever, since every sender would have to keep track of sending rates and retransmissions for many, many receivers. If they used TCP, their network stack would be under very heavy pressure.
So they do flow control in the application layer to reduce network traffic.
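As a minimal sketch of what application-layer reliability over UDP can look like (the request format here is invented for illustration; it is not the real memcached UDP protocol): tag each request with an ID, wait for a matching reply, and retransmit on timeout.

import socket

def udp_request(sock, addr, request_id, payload, retries=3, timeout=0.2):
    # Prefix the payload with a 4-byte request ID so replies can be
    # matched to requests; retransmit if nothing comes back in time.
    message = request_id.to_bytes(4, 'big') + payload
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(message, addr)
        try:
            data, _ = sock.recvfrom(65535)
        except socket.timeout:
            continue                  # request or reply lost: retry
        if data[:4] == message[:4]:   # reply matches our request ID
            return data[4:]
    raise TimeoutError('no reply after %d retries' % retries)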

With SIP, when to use TCP, not UDP?

I know the differences between UDP and TCP pretty well in general (e.g. http://www.onsip.com/about-voip/sip/udp-versus-tcp-for-voip).
The question is: in what circumstances would using TCP as the transport have advantages, specifically for SIP VoIP communications?
A lot of people would generally associate UDP with VoIP and probably leave it at that, but in simple terms there are two parts to VoIP: connection and voice data transfer.
SIP is a very lightweight protocol; once the connection is established, it is effectively left idle until the infrequent event of someone making a phone call. TCP (unlike UDP) will actually reduce traffic to the server by eliminating the need to:
Re-register every few minutes
Refresh/ping server
You can run SIP over TCP and then use (as is recommended) UDP for RTP.
I also can't help pointing out some things that are easy to overlook, e.g. the number of devices connecting to the server. As that number grows, the equation tilts in UDP's favor. But then you also have to consider SIP user agents expanding to cover multiple codecs, multimedia, video and screen sharing. The INVITE packets can start to grow large and potentially exceed the single-datagram UDP size, tilting the equation back in favor of TCP.
All that being said I hope you have enough information to answer the question you were looking to answer.
Hope this helps.
Credit: The wonderful discussion at onSip: https://www.onsip.com/blog/sip-via-udp-vs-tcp
SIP over TCP has a significant advantage over UDP for mobile devices. The reason is due to the use of NAT, and how NAT table entries in a wireless router or a cell providers' router are generally timed out much quicker for UDP vs TCP. Since keeping the same NAT table entry is necessary to be able to reliably receive calls, SIP must periodically send out keep-alives to maintain the NAT table entry. The required frequency of keep-alives is much higher for UDP (maybe every 30 seconds) vs TCP (maybe every 15 minutes) thus resulting in noticeably higher mobile device battery usage. Often when you see someone complaining about how their battery usage takes a major hit when using a VOIP client, it's because the client is using UDP.
So, TCP wins out over UDP hands down for mobile devices.
Note that the above assumes you want to be able to reliably receive calls on your mobile device. If all you want to do is be able to make calls, then it's a different story.
If a message is large (within 200 bytes of MTU size), RFC 3261 section 18.1.1 mandates use of TCP (to be precise, it mandates use of a "congestion controlled transport protocol, such as TCP"). I've hit that in practice when sending an initial INVITE with lots of headers and a complex Request URI.
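That rule is easy to apply mechanically; here is a sketch (the function name and defaults are mine, not from any SIP stack):

def pick_transport(message, path_mtu=None):
    # RFC 3261 18.1.1: if the message is within 200 bytes of the path
    # MTU, or over 1300 bytes when the MTU is unknown, use a
    # congestion-controlled transport such as TCP.
    limit = (path_mtu - 200) if path_mtu else 1300
    return 'TCP' if len(message) > limit else 'UDP'

print(pick_transport(b'INVITE sip:bob@example.com SIP/2.0\r\n...'))  # 'UDP'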
You cannot reliably assemble an audio stream from a TCP-based protocol. In audio it is far better to lose a packet than to have a packet retransmitted because of a drop. Audio does not work if there is excessive jitter in the packet timing. Audio is real-time and requires a protocol like UDP to work correctly. Packet loss does not break audio; it only reduces the quality. TCP's perfect delivery does not help audio in any way: there can be no quality even if you get 100% of the packets but they do not arrive in real time. In audio it is the timing (latency, jitter) that determines quality more than data integrity.
This is why SIP works BEST when signaling and control are over TCP but voice data is over UDP.
I have been working with transmission of digital voice over network protocols since I designed one of the first smartphones in 1987 for the newly emerging digital cellular network in Japan. Since 1987, the only aspect of digital voice transmission that has not changed is what I describe here. The real-time nature of audio (voice) transmission and how that impacts system design is still exactly the same as it was in the dinosaur days I come from.
TCP can get through with perfect clarity on a lossy connection, when UDP may not be understandable. You get lower latency with UDP, but that doesn't help you if you can't understand what is being said.

Which OSI layer does the WebSocket protocol lie on?

I was wondering if it is layer 7 for WebSocket, as the application is actually the browser.
WebSocket depends on TCP (OSI layer 4), and only the handshake phase is initiated by HTTP (OSI layer 7), although it uses TCP port 80 only.
Judging by its runtime behavior, I have to say WebSocket should be considered a special layer-7 protocol. Then we can put SSL/TLS into layer 6 (see Wikipedia), and the implementation inside the browser into layer 5.
It is better to understand the layering using the TCP/IP model rather than the OSI model. WebSocket layers on top of TCP (considered the transport layer in the TCP/IP model), and one can layer application protocols on top of WebSocket.
HTTP, SSL, HTTPS, WebSockets, etc. are all application layer protocols.
But the OSI protocol stack doesn't apply to TCP/IP, which has its own layer model: same names, different functions. It isn't helpful to keep using the obsolete OSI stack as though it actually reflected any reality. It doesn't.
Only the handshake is interpreted by the HTTP(S) server, as an Upgrade request. Apart from that, WebSocket is an independent TCP-based protocol. So I would say layers 4 and 7.
https://www.rfc-editor.org/rfc/rfc6455#page-11
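To make the "HTTP only for the handshake" point concrete, here is a minimal sketch of the RFC 6455 opening handshake over a plain TCP socket (the host and path are placeholders; a real endpoint must actually speak WebSocket):

import base64, os, socket

# One HTTP/1.1 Upgrade request; after the server's "101 Switching
# Protocols" reply, the same TCP socket carries WebSocket frames,
# and HTTP is never used again on the connection.
key = base64.b64encode(os.urandom(16)).decode()
request = (
    'GET /chat HTTP/1.1\r\n'
    'Host: example.com\r\n'
    'Upgrade: websocket\r\n'
    'Connection: Upgrade\r\n'
    'Sec-WebSocket-Key: ' + key + '\r\n'
    'Sec-WebSocket-Version: 13\r\n'
    '\r\n'
)
sock = socket.create_connection(('example.com', 80))
sock.sendall(request.encode())
print(sock.recv(4096).decode(errors='replace'))  # expect "HTTP/1.1 101 ..."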
L1 does not include a map of where a cable is buried in the soil (how deep, where), nor which wires in which cable carry a given piece of information, nor where a wire lies within the cable itself, nor does it dictate how the cable is marked. L1 is only the physical layer, not where and how the wires are laid. So an L0 would be needed.
L1: "The physical layer is responsible for the transmission and reception of unstructured raw data between a device and a physical transmission medium. It converts the digital bits into electrical, radio, or optical signals. Layer specifications define characteristics such as voltage levels, the timing of voltage changes, physical data rates, maximum transmission distances, modulation scheme, channel access method and physical connectors. This includes the layout of pins, voltages, line impedance, cable specifications, signal timing and frequency for wireless devices. Bit rate control is done at the physical layer and may define transmission mode as simplex, half duplex, and full duplex. The components of a physical layer can be described in terms of a network topology. Physical layer specifications are included in the specifications for the ubiquitous Bluetooth, Ethernet, and USB standards. An example of a less well-known physical layer specification would be for the CAN standard."

If UDP is unreliable, why is it used at the transport layer?

Sorry for what is a stupid question.
The function of the transport layer is reliable delivery of messages. UDP is inherently unreliable, so why do we use it at the transport layer?
Thanks
EDIT: Just to clarify, I have read the Wiki and other sources. My question is:
UDP is unreliable (I know why, and the advantages, and where it is used, etc.), so why not use UDP at some other layer, rather than the transport layer, which implies reliability?
Sometimes it is more important that the data be sent quickly and without pauses than that the stream be reliable. DNS uses UDP because the transaction between a DNS server and client consists of only one packet each way. If a packet is lost, it will be retransmitted at the request of the client, as sketched below.
Similarly, streaming video often uses UDP as a transport protocol because the occasional loss of a packet is acceptable. It is preferable that the image quality suffer as a result of lost packets, rather than the video stream suffer jitter or pauses (lag) as a result of TCP synchronization.
Games also often use UDP, sacrificing engine accuracy for improved speed/user experience.
These and more examples can be found in the relevant portions of the wikipedia article.
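As a sketch of that DNS pattern (the query ID, record type, and server are hardcoded for brevity), note there is no connection and no acknowledgment; the client's only recovery tool is to send the whole query again:

import socket, struct

def dns_query(name, server='8.8.8.8', retries=3, timeout=2.0):
    # Minimal DNS A-record query: fixed header (ID, flags=RD, 1 question)
    # plus the encoded name; one datagram out, one datagram back.
    header = struct.pack('>HHHHHH', 0x1234, 0x0100, 1, 0, 0, 0)
    qname = b''.join(bytes([len(p)]) + p.encode() for p in name.split('.'))
    question = qname + b'\x00' + struct.pack('>HH', 1, 1)  # type A, class IN
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(header + question, (server, 53))
        try:
            return sock.recvfrom(512)[0]   # raw DNS response bytes
        except socket.timeout:
            continue                       # datagram lost: retransmit
    raise TimeoutError('no DNS response after %d tries' % retries)

print(len(dns_query('example.com')))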
EDIT
UDP is used at the transport layer because it is a transport layer protocol. It "provides end-to-end communication services for applications" (RFC 1122).
Reliability services are optional for transport layer protocols.
... rather than the transport layer, which implies reliability
There's more than one dimension to "reliability." It's interesting to note that UDP is reliable in one sense: it provides a checksum to protect against corruption.
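For reference, that is the RFC 1071 Internet checksum, a one's-complement sum of 16-bit words; a sketch in plain Python:

def internet_checksum(data):
    # Pad to a 16-bit boundary, sum the words, fold the carries back
    # in, and take the one's complement.  This is all the corruption
    # protection a UDP datagram carries.
    if len(data) % 2:
        data += b'\x00'
    total = sum(int.from_bytes(data[i:i + 2], 'big')
                for i in range(0, len(data), 2))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

print(hex(internet_checksum(b'hello world')))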
Stream protocols like TCP create problems for latency-sensitive applications. For latency-sensitive apps, UDP's natural behavior under congestion (traffic simply gets shed) is a huge boon.
why not use UDP at some other layer
IP packets are designed to be small enough to make the next-hop transit. A UDP datagram can span multiple IP packets, so there's some value added there. But if TCP were a layer above UDP, it would be limited by UDP's semantics (TCP ports are bound to a connection, UDP datagrams are not).
The reason why UDP is used at the transport layer is the way these layers are set up. UDP is inherently a protocol for transferring data from point A to point B, not an application protocol or something at the hardware layer.
At the transport layer there is no assumption of reliability; rather, UDP is simply a protocol for transferring data. Under the 7-layer model of networking, it falls at the interface between the network and session layers. The name "transport layer" simply says what it does. See Wikipedia for more information on the OSI model.
TL;DR: UDP is in the transport layer because it is a protocol for data transport, and all protocols that deal with data transport fall under this category.
Transport layer classes
Even OSI's own transport layer does not imply reliability; ISO defines five transport protocol classes, only some of which recover from errors:
Class 0 - Simple class
Class 1 - Basic error recovery class
Class 2 - Multiplexing class
Class 3 - Error recovery and multiplexing class
Class 4 - Error detection and recovery class
