TCP can already detect whether a packet was sent successfully, so instead of waiting for the pong, why not just check whether an error occurs when the ping is sent? I just don't see the need for the pong.
Having ping and pong creates an end-to-end test for both connectivity and a functional endpoint at the other end.
Using just TCP only confirms that the TCP stack says the packet was delivered to the next stop in a potential connectivity chain; it does not confirm that the other endpoint is actually functioning (only that the packet was delivered to its TCP stack).
This is particularly important when there are proxies or other intermediaries in the networking chain between endpoints, which is very often the case in professionally hosted environments. Only a ping and pong confirms that the entire end-to-end chain, including both client and server, is fully functioning.
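To make the difference concrete, here is a minimal sketch over a plain TCP socket. The `PING`/`PONG` text frames are made up for illustration (real WebSocket ping/pong uses dedicated control frames); the point is that `sendall()` succeeding proves nothing about the peer, while receiving the pong does.

```python
import socket

def is_peer_alive(sock: socket.socket, timeout: float = 5.0) -> bool:
    """Application-level ping/pong over an already-connected TCP socket."""
    try:
        # sendall() returning without error only means the local TCP stack
        # accepted the bytes; it says nothing about the remote application.
        sock.sendall(b"PING\n")
        sock.settimeout(timeout)
        reply = sock.recv(16)
        # Only an actual pong proves the whole chain (network, proxies and
        # the peer application itself) is up and processing messages.
        return reply.startswith(b"PONG")
    except (socket.timeout, OSError):
        return False
```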
Here's a related answer: WebSockets ping/pong, why not TCP keepalive?
I am currently studying the transport layer of the TCP/IP model, in particular the protocols used there, TCP and UDP. The thing I don't understand is this: if we use the browser to request a resource from a web server through a URL, for example to play a live stream, that is an HTTP request, so the protocol used is TCP rather than UDP, right? But on the other hand, the Cisco course I am studying from, as well as my school books, state that UDP is used for live streaming, multiplayer games, and VoIP.
In this case, which of the two protocols are we using? My doubt is precisely this: if the request being made is a web request through a URL, and therefore an HTTP request, how is UDP used at all, given that HTTP uses TCP?
The thing that is not clear to me is how UDP comes into play when we request a resource from a web server through a URL (so an HTTP request, which runs over TCP). If the resource in question is "watch a live stream on Twitch", how is UDP used, since we are already using a "logical connection" established through TCP and therefore already have reliable communication?
UDP is a protocol which does not care about reliable data transmission, i.e. packets can be lost, duplicated, reordered, etc. TCP instead cares about reliability, which comes with added overhead and also with potential latency issues if packets get lost and need to be resent.
Based on this, UDP is used when latency is a concern but reliability is not. This is the case for real-time media, like VoIP audio and video telephony: too much latency is not acceptable for such bidirectional communication. Media codecs that can deal with packet loss are therefore used, i.e. low latency is favored and the resulting unreliability is handled at the application level.
In non-real-time streaming video, like on YouTube, latency is not that much of a concern though. More important here is efficient use of bandwidth, which means efficient media codecs with high compression rates. The more efficient a codec is, the less it can tolerate loss of data. Thus reliability of the connection is a concern here, and TCP is more suitable.
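As an illustration of the latency-over-reliability trade-off, here is a rough sketch of a real-time-style UDP receiver that simply discards datagrams that arrive too late rather than ever asking for a retransmission. The packet layout (an 8-byte send timestamp followed by the payload), the port, and the 150 ms deadline are all invented for the example, and it assumes the sender and receiver clocks are roughly in sync.

```python
import socket
import struct
import time

MAX_AGE = 0.15  # assumed playout deadline: data older than 150 ms is useless

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5004))

while True:
    datagram, addr = sock.recvfrom(2048)
    # Invented layout: 8-byte big-endian send timestamp, then media payload.
    sent_at = struct.unpack("!d", datagram[:8])[0]
    payload = datagram[8:]
    if time.time() - sent_at > MAX_AGE:
        continue  # too late for real-time playback: drop it, never retransmit
    # ... hand `payload` to a loss-tolerant decoder here ...
```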
As expected behavior, when the server sends a FIN-ACK packet, the client should also send a FIN-ACK packet and the connection is terminated. But in some of my cases, when the server sends a FIN-ACK packet, a new HTTP request immediately goes out from the client, before the FIN-ACK is sent back by the client. This is causing a connection reset packet. What is the reason behind this, and how can I solve this issue?
... when the server sends a FIN-ACK packet, the client should also send a FIN-ACK packet ...
While common, this is not actually required. A FIN from the server signals only that the server will not send any more data - but it might still receive data. Similarly, a FIN from the client signals that the client will not send any more data - but it might also still receive data. Only when both sides have sent a FIN and received the matching ACK is it clear that no more data will be sent in either direction, and thus that the connection is closed.
... when the server sends a FIN-ACK packet, a new request immediately goes out from the client, before the FIN-ACK is sent back by the client.
There is no "request" at the TCP level. Request and response only have a meaning at the application level, for some application protocols. It is unclear which application protocol is being used here.
What is the reason behind this, and how can I solve this issue?
As said above, a FIN from the server only means that the server will not send any more data; it might still receive data. Nothing forbids the client from sending more data. There is no issue to solve here.
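To see this half-close behaviour in isolation, here is a minimal Python sketch (host, port and payloads are made up). The server sends its FIN via shutdown() but keeps reading, and the client is perfectly entitled to keep sending until it closes its own side.

```python
import socket

srv = socket.create_server(("127.0.0.1", 9000))
conn, addr = srv.accept()

conn.sendall(b"last bytes the server will ever send")
conn.shutdown(socket.SHUT_WR)       # sends FIN: "I will not send any more data"

# Our FIN does not stop the peer from sending; keep reading until *its* FIN.
while True:
    chunk = conn.recv(4096)
    if not chunk:                    # empty read = the client's FIN arrived
        break
    print("received after our FIN:", chunk)

conn.close()
```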
I was studying the GitHub API and came across the following in their Rate Limiting section:
For unauthenticated requests, the rate limit allows for up to 60 requests per hour. Unauthenticated requests are associated with the originating IP address, and not the user making requests.
I was curious to see what HTTP headers are used to track the limits and what happens when they are exceeded, so I wrote a bit of Bash to quickly exceed the 60 requests/hour limit:
# fire off 200 unauthenticated requests to trip the 60 requests/hour limit
for i in $(seq 1 200); do
  curl https://api.github.com/users/diegomacario/repos
done
Pretty quickly I got the following response:
{
"message": "API rate limit exceeded for 104.222.122.245. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"documentation_url": "https://developer.github.com/v3/#rate-limiting"
}
It seems like GitHub is counting the number of requests from the public IP mentioned in the response to determine when to throttle a client. From what I understand about LANs, there are many devices that share this public IP. Is every device in the LAN behind this IP rate limited because I exceeded the limit? On a side note, what other ways exist of rate-limiting non-authenticated endpoints?
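I imagine the simplest form of what they are doing is a per-IP counter over a fixed time window, something like the rough sketch below (obviously not GitHub's actual implementation; real services typically use token buckets or sliding windows backed by a shared store so all frontends see the same counts). Because the key is the public IP, every device behind the same NAT would share one counter in this scheme.

```python
import time
from collections import defaultdict

WINDOW = 3600   # one hour, mirroring the 60 requests/hour limit quoted above
LIMIT = 60

# ip -> (window_start, request_count); an in-memory stand-in for a shared store
_counters = defaultdict(lambda: (0, 0))

def allow_request(client_ip: str) -> bool:
    window_start, count = _counters[client_ip]
    now = int(time.time())
    if now - window_start >= WINDOW:
        _counters[client_ip] = (now, 1)            # start a fresh window
        return True
    if count < LIMIT:
        _counters[client_ip] = (window_start, count + 1)
        return True
    return False                                   # over the limit: answer 403/429

print(allow_request("104.222.122.245"))            # True until the 61st call
```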
Anywhere I read anything about UDP, people say this:
Messages can be received out of order
It's possible a message never arrives at all
The first one isn't clear to me. This is what can happen with TCP:
I send `1234`, the client receives `12` and then `34`
So no problem; just prepend the message length and it's all good. After all, the integer is always 4 bytes, so even if the client receives the prepended length in two goes, it knows to keep reading until it has at least 4 bytes and can determine the message length.
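Roughly what I mean, as a sketch (names made up):

```python
import socket
import struct

def send_msg(sock: socket.socket, payload: bytes) -> None:
    # Prepend a fixed 4-byte big-endian length so the receiver knows where
    # each message ends, even if TCP delivers the stream in arbitrary pieces.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:                      # keep reading until n bytes arrive
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def recv_msg(sock: socket.socket) -> bytes:
    length = struct.unpack("!I", recv_exact(sock, 4))[0]
    return recv_exact(sock, length)
```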
Anyway, back to UDP: what's the deal when people say 'packets can be received out of order'?
A) Send `1234`, client receives `34` and then `12`
B) Send `1234` and `5678`, client receives `5678` and then `1234`
If it's A, I don't see how I can make UDP work for me at all. How would the client ever know what's what?
It's entirely possible that a network has many paths to reach a given point, so one datagram could take one route to reach the other end and another could take a different path. Given this, the last packet sent could arrive before an earlier one. UDP takes no measures to correct this, as it has no notion of a connection or of in-order delivery.
Beyond that, it depends on how you send your data. With UDP, each send() or similar call sends one UDP datagram, and recv() receives one datagram. A datagram can be reordered with respect to other datagrams, or disappear entirely. Data cannot be reordered or dropped within a datagram: you either receive exactly the message that was sent, or you don't receive it at all.
If you need datagrams/messages to arrive in order, you need to add a sequence number to your packets and queue and reorder them at the receiving end.
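A rough sketch of that sequence-number-and-reorder idea (the 4-byte sequence prefix and the port are invented, and lost datagrams are not handled here; a real protocol would also need retransmission or gap skipping):

```python
import socket
import struct

def deliver(message: bytes) -> None:
    print("in-order message:", message)        # placeholder for the application

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 6000))

expected = 0
pending = {}                                   # datagrams that arrived too early

while True:
    datagram, addr = sock.recvfrom(2048)
    seq = struct.unpack("!I", datagram[:4])[0] # invented 4-byte sequence prefix
    pending[seq] = datagram[4:]
    while expected in pending:                 # hand over everything now in order
        deliver(pending.pop(expected))
        expected += 1
```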
The usual metaphor is:
TCP is a telephone conversation: words arrive in the same order as they were spoken
UDP is sending a series of letters by mail: the letters may get lost, and the ones that do arrive can arrive in any order.
TCP also involves a connection: if the telephone line is disrupted by a thunderstorm, the connection breaks and has to be set up again (you need to dial again).
UDP is connectionless and unreliable: if the mailman is hit by a truck, some letters may be lost. Some letters could also be picked up and delivered by other mailmen. Letters can even be dropped on the floor if your mailbox is full, or for no reason at all.
I need to draw a time-space diagram of a client connecting to a server, then requesting data, the server sending x bytes of data, and then the server closing the connection.
First, I am not sure exactly how many trips back and forth there would be. I am thinking:
Client requests connection
Server accepts
Client sends ACK
Client requests data
Server sends x bytes of data
Client sends ACK
Server closes connection
Client sends ACK
Is that correct?
Also, I need to specify SEQ and ACK numbers and the SYN/ACK/FIN bits. I get the first part, but what are the SYN/ACK/FIN "bits"?
I found a nice website which can help you out: http://www.pcvr.nl/tcpip/tcp_conn.htm. The diagram there shows a three-way handshake and a disconnection at the end. Notice the changing sequence (SEQ) and ACK numbers as the client and server exchange packets. FIN is the trigger for a disconnect; it can be sent by either the client or the server.
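To make the "bits" part concrete: SYN, ACK and FIN are literal one-bit flags in the flags byte of the TCP header (byte 13), separate from the 32-bit SEQ and ACK numbers. A small sketch of decoding them:

```python
# TCP header flag bits (the low bits of header byte 13)
FIN = 0x01
SYN = 0x02
RST = 0x04
PSH = 0x08
ACK = 0x10

def describe_flags(flags_byte: int) -> list:
    names = {FIN: "FIN", SYN: "SYN", RST: "RST", PSH: "PSH", ACK: "ACK"}
    return [name for bit, name in names.items() if flags_byte & bit]

# The second step of the three-way handshake has both SYN and ACK set:
print(describe_flags(SYN | ACK))   # ['SYN', 'ACK']
# A FIN segment acknowledging earlier data has FIN and ACK set:
print(describe_flags(FIN | ACK))   # ['FIN', 'ACK']
```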