Say I have a server A that is experiencing TCP retransmissions on its connection to B. Now, if another client C talks to A (A -> C) and there is no retransmission on that connection, will the retransmissions between A and B have any impact on the A-C channel?
Related
If there is a TCP connection between A and B,
and A sends some packets followed by a TCP RST (or TCP FIN/ACK) to close the connection,
say:
PKT1, PKT2, PKT3, TCP_RST
or
PKT1, PKT2, PKT3, TCP_FIN/ACK
but the packets arrive out of order:
PKT1, TCP_RST(or TCP_FIN/ACK), PKT2, PKT3
then how will B react?
According to the sequence numbers of the TCP_RST or TCP_FIN/ACK,
B knows there are some packets missing (PKT2 and PKT3).
Will B wait for PKT2 and PKT3 before it closes the connection,
or will B close the connection immediately when it receives the TCP_RST (or TCP_FIN/ACK)?
thanks
TCP will reorder the segments before delivering the data further up the stack: it buffers out-of-order segments according to their sequence numbers, asks for retransmission if needed, and so on. A FIN is part of that sequenced stream, so B will wait for PKT2 and PKT3 (and the final ACK exchange) before closing the connection. An RST is different: it aborts the connection rather than closing it cleanly, and a receiver will generally only act on an RST whose sequence number matches the next byte it expects, so an RST that arrives ahead of missing data may be ignored until the gap is filled.
You can find the TCP state diagram here:
http://www.ssfnet.org/Exchange/tcp/tcpTutorialNotes.html#ST
TCP guarantees sequencing. That includes the sequencing of the end of stream (the FIN): it must be delivered after all the data.
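The answers above can be illustrated with a minimal sketch (not real TCP; the segment layout and starting sequence number here are made up) of a receiver that buffers out-of-order segments by sequence number and only acts on the FIN after all earlier data has been delivered:

```python
# Toy receiver-side reassembly: segments are buffered by sequence number
# and handed up in order; the FIN, being part of the sequenced stream,
# is only processed after every byte before it has arrived.

def reassemble(segments, start_seq=0):
    """segments: list of (seq, payload_or_'FIN') tuples, possibly out of order.
    Returns the payloads in sequence order; the FIN is always acted on last."""
    buffered = {seq: data for seq, data in segments}
    delivered = []
    seq = start_seq
    while seq in buffered:
        data = buffered.pop(seq)
        if data == "FIN":
            delivered.append("FIN")  # close the connection, after all prior data
            break
        delivered.append(data)
        seq += len(data)             # next expected sequence number
    return delivered

# Arrival order is PKT1, FIN, PKT2, PKT3 -- the FIN is still delivered last:
arrivals = [(0, "PKT1"), (12, "FIN"), (4, "PKT2"), (8, "PKT3")]
print(reassemble(arrivals))  # ['PKT1', 'PKT2', 'PKT3', 'FIN']
```

The loop simply stalls at the first gap, which is exactly why B waits for PKT2 and PKT3 before honouring the FIN.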
I want to build the following system. There are three nodes: A, B and C. A and B establish a TCP connection, then A tells C the ports, sequence number (seq_no) and acknowledgement sequence number (ack_seq_no). Then C sends packets to B. (C and A share the same IP but are far away from each other, i.e. C spoofs the IP of A.)
If B never sends data packets to IP(A) (only ACKs), C can send packets to B with the correct seq_no and ack_seq_no. But sometimes B sends a data packet P1 to IP(A):
1. A immediately sends an ACK for the data packet P1 to B, and A tells C the new ack_seq_no. But there is a delay between A and C, so before C learns the new ack_seq_no, C may send some data packets (with the spoofed IP(A)) to B carrying the obsolete ack_seq_no.
My first question is: how will B behave when it receives a data packet with an obsolete ack_seq_no?
2. If I delay the ACK for P1 from A to B, letting A tell C first and only then send the ACK for P1, there are two questions:
1) Since B is waiting for the ACK of P1 from A, it may retransmit the packet P1. How can the retransmission timeout be increased? If A always replies with the ACK after such a delay, will the timeout increase naturally, so that this is not a problem?
2) Suppose C sends data packets to B (with the IP of A) before the ACK (for P1) from A reaches B. The data packets carry the updated ack_seq_no, but B doesn't know whether A has actually learned its new ack_seq_no (since the ACK hasn't arrived yet), so it may treat the ACK as piggybacked on the data packets. How will B then deal with the late ACK?
If an obsolete ACK is received, it is ignored (it's assumed to be an old packet that was delayed). Every ACK acknowledges everything up to its sequence number (I'm assuming you're not using selective ACKs).
The sender should adjust its retransmission timeouts based on the acknowledgement response times.
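That adjustment follows the standard smoothed-RTT estimator (RFC 6298 style). A hedged sketch, with the constants from the RFC but the minimum-RTO clamping simplified to a small granularity floor, shows that consistently slow ACKs do push the timeout up, as the question guesses:

```python
# Sketch of an RFC-6298-style retransmission timeout estimator: the sender
# keeps a smoothed RTT (SRTT) and a variance (RTTVAR); RTO = SRTT + 4*RTTVAR.
# The 1-second minimum RTO from the RFC is omitted here for illustration.

ALPHA, BETA = 1 / 8, 1 / 4  # RFC 6298 smoothing constants

def update_rto(srtt, rttvar, sample):
    """Fold one RTT sample in; return (srtt, rttvar, rto)."""
    if srtt is None:                       # first measurement
        srtt, rttvar = sample, sample / 2
    else:
        rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
        srtt = (1 - ALPHA) * srtt + ALPHA * sample
    rto = srtt + max(4 * rttvar, 0.001)    # small clock-granularity floor
    return srtt, rttvar, rto

srtt = rttvar = None
for sample in (0.10, 0.30, 0.30, 0.30):    # delayed ACKs raise the estimate
    srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
print(round(rto, 3))  # 0.647 -- up from 0.3 after the first sample
```

So yes: if A consistently delays its ACKs, B's measured RTT (and hence its RTO) grows on its own.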
B can't tell the difference between an ACK from A and one from C. As far as B is concerned, this is the same situation as question #1: the late ACK will be ignored.
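The ignore-obsolete-ACKs rule is just a cumulative comparison against the sender's oldest-unacknowledged pointer (SND.UNA in RFC terms). A minimal sketch, assuming plain integer sequence numbers (real TCP compares modulo 2^32):

```python
# Cumulative-ACK handling: an ACK that acknowledges nothing beyond what is
# already acknowledged is treated as an old, delayed packet and ignored.

def process_ack(snd_una, ack_seq_no):
    """Return the new SND.UNA; an obsolete ACK leaves it unchanged."""
    if ack_seq_no <= snd_una:
        return snd_una       # obsolete or duplicate ACK: ignore it
    return ack_seq_no        # fresh ACK: everything up to it is acknowledged

print(process_ack(1000, 900))   # 1000 -- obsolete, ignored
print(process_ack(1000, 1500))  # 1500 -- advances the acknowledged range
```

This is also why the "late" ACK in question 2 is harmless: by the time it arrives, it acknowledges nothing new.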
Suppose we have a TCP and a UDP connection over the same link of capacity C. TCP has a transfer rate of C, whereas UDP has 8C as its transfer rate. Which will be more efficient?
Theoretically, if nothing happens to any of the packets along the way, UDP would be faster. UDP doesn't acknowledge every segment the way TCP does (the ACK flag), and it needs no handshake or connection tear-down. In an ideal network where no packets get dropped, UDP is the faster choice.
The problem is that in a real-world network, UDP loses packets. You would end up slower, because you would have to implement TCP-like delivery control on top of UDP yourself. UDP does not acknowledge receipt of packets, and it does not knock on the door to see if anybody is home (TCP SYN). UDP packets are simpler in structure than TCP packets, but sacrifice reliability for their smaller size. http://www.diffen.com/difference/TCP_vs_UDP describes the differences.
So for your example: with a link that can carry C packets/s, TCP sending at C packets/s and UDP sending at 8*C packets/s, UDP pushes data out faster, but the link can still only deliver C packets/s. Roughly 7 out of every 8 UDP packets are dropped, and nothing retransmits them, so "faster" here buys raw send rate, not more delivered data.
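The back-of-the-envelope arithmetic for the scenario (the numeric value of C is arbitrary here; only the ratios matter):

```python
# Link capacity C; UDP offers 8*C. The link caps delivery at C, so the
# excess offered load is simply dropped and never recovered.

C = 100.0                                # link capacity, packets/s
udp_offered = 8 * C                      # what the UDP sender transmits
udp_delivered = min(udp_offered, C)      # the link delivers at most C
udp_loss_fraction = 1 - udp_delivered / udp_offered

print(udp_delivered)      # 100.0 -- same delivered rate as TCP at C
print(udp_loss_fraction)  # 0.875 -- 7 of every 8 datagrams are lost
```

Delivered throughput is identical; the difference is that TCP's C packets/s all arrive (retransmitted as needed), while UDP silently loses 87.5% of what it sends.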
I'm developing a C++ application server and client which use TCP. The server has three messages: A, B and C. They are sent sequentially: A -> B -> C, and the clients respond with acknowledgement messages rA, rB, rC.
Does the client receive A, B and C in the order A -> B -> C? Does the server receive rA -> rB -> rC?
TCP guarantees that the order the packets are received (on a single connection) is the same as the order they were sent. No such guarantee if you've got multiple TCP connections, though - TCP preserves ordering only for the packets within a given TCP connection.
See the Wikipedia article on TCP for more overview.
One of the functions of TCP is to prevent the out-of-order delivery of data, either by reassembling segments into order or by forcing retransmission of missing ones.
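The in-order guarantee on a single connection is easy to see with a connected stream socket pair standing in for one TCP connection (socket.socketpair() gives a local stream pair; a loopback TCP connection behaves the same way for ordering purposes):

```python
# Send A, B, C down one stream connection and read everything back:
# the bytes arrive in exactly the order they were sent.
import socket

a, b = socket.socketpair()               # connected stream pair
for msg in (b"A", b"B", b"C"):
    a.sendall(msg)
a.close()                                # signals end-of-stream to the reader

received = b""
while True:
    chunk = b.recv(4096)
    if not chunk:                        # empty read == peer closed
        break
    received += chunk
b.close()

print(received)  # b'ABC' -- sent order is preserved
```

Note what did not survive: the three message boundaries. The receiver sees one byte stream, b'ABC', which is why TCP applications need their own framing even though ordering is guaranteed.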
Say our client is sending packets at a constant rate. Now, if the server goes down temporarily, there can be two situations
(we are using the TCP protocol):
1) The packet won't be delivered to the server, and the other packets in line have to wait for the server to respond; communication then carries on from there.
2) The packet won't be delivered and will be retried, but the other packets won't be affected by it.
Say packets A, B and C are to be transferred. If the server goes down temporarily while I am sending packet A, will packets B and C be sent at the times they were originally scheduled, or only once A has been received by the server?
TCP is a stream-oriented protocol. This means that if, on a single TCP connection, you send A followed by B, the receiver will never see B until after it has seen A.
If you send A and B over separate TCP connections, then it is possible for B to arrive before A.
When you say "goes down temporarily", what do you mean? I can see two different scenarios.
Scenario 1: The connection between Server and Client is interrupted.
Packet A is sent on its way. Unfortunately, as it is winding its way through the cables, one cable breaks and A is lost. Meanwhile, depending on the exact state of the TCP windowing algorithm, packets B and C may or may not have been sent (that depends on the window size, the sizes of A/B/C and the amount of as-yet-unacknowledged bytes in flight). I guess that means both your "1" and "2" may be right?
If B and/or C have been sent, there will be no ACK covering them until A has been resent. Once A has arrived, the server will ACK everything up to the end of the last segment received in sequence (so C, if that is the case).
Scenario 2: The server goes down
If this happens, all TCP state will be lost and connections will have to be re-established after the server has finished rebooting.
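Scenario 1's point about the window can be sketched with a toy model (the byte counts and window sizes below are made up for illustration; real TCP also factors in congestion control):

```python
# Toy sliding-window check: a sender may keep transmitting new segments
# only while the unacknowledged bytes in flight still fit in the window.

def can_send_now(window_size, unacked_bytes, next_segment_len):
    """True if the next segment fits in the send window right now."""
    return unacked_bytes + next_segment_len <= window_size

# A (1000 bytes) is in flight and unacknowledged because the link broke.
# With a 4000-byte window, B (1000 bytes) can still go out behind A;
# with a 1000-byte window, the sender must wait for A's ACK first.
print(can_send_now(4000, 1000, 1000))  # True  -- B is sent on schedule
print(can_send_now(1000, 1000, 1000))  # False -- B waits until A is ACKed
```

So whether B and C are sent "on schedule" or "only after A" is not a protocol-wide yes/no: it falls out of how much window room is left while A sits unacknowledged.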