Necessity of three-way handshake [closed] - http

Before a browser can request a webpage, a TCP connection must be established. Why is the three-way handshake necessary when interacting with a server? Why can't we simply send a web request and wait for the response?
Shouldn't resolving the IP address be enough for this purpose?
Basically, I need to know the reason for establishing a TCP connection.
Thanks in advance.

Say you are using a device named A and the server is named B:
Host A sends a TCP SYNchronize packet to Host B
Host B receives A's SYN
Host B sends a SYNchronize-ACKnowledgement
Host A receives B's SYN-ACK
Host A sends ACKnowledge
Host B receives ACK.
TCP socket connection is ESTABLISHED.
See more at: http://www.inetdaemon.com/tutorials/internet/tcp/3-way_handshake.shtml
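In practice you never perform the handshake by hand; the OS does it inside connect(). A minimal Python sketch (example.com is just an illustrative target, not part of the original answer):

    import socket

    # The kernel performs the SYN / SYN-ACK / ACK exchange inside connect();
    # by the time connect() returns, the TCP connection is ESTABLISHED.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("example.com", 80))    # three-way handshake happens here

    # Only now can HTTP be spoken over the established byte stream.
    s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(s.recv(200))                # first bytes of the HTTP response
    s.close()

You can watch the three handshake packets go by with a capture tool such as tcpdump or Wireshark while this runs.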

Because you need a TCP connection to send HTTP over, and TCP has a 3-way handshake.
Basically, I need to know the reason for establishing TCP connection.
Because HTTP runs over TCP. It doesn't exist in a vacuum.

TCP provides ordering, automatic retransmission, and congestion control. I'd say these are the obvious reasons why the design adopted TCP.
In contrast, UDP is fast: there is no handshake. But UDP packets are not ordered, packets can get lost (no automatic retransmission), and there is no congestion control.
You could try implementing data transfer for things like HTML over UDP. It's not easy: you would still need to reinvent ordering and retransmission to get reliable, lossless delivery (see the sketch below).
If you don't care about occasional loss or slightly out-of-order delivery, then you probably don't need TCP (e.g. real-time video).
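To see what "reinventing retransmission" means, here is a minimal stop-and-wait sender sketch over UDP. The peer address, timeout, and framing are all hypothetical choices, and this recovers only a fraction of what TCP gives you for free:

    import socket
    import struct

    # Number each datagram and retransmit until the peer echoes the
    # sequence number back. This handles loss and ordering for exactly
    # the cases TCP already covers.
    PEER = ("203.0.113.5", 9999)      # hypothetical receiver

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(0.5)              # retransmission timeout (hypothetical)

    def send_reliable(seq: int, payload: bytes) -> None:
        packet = struct.pack("!I", seq) + payload
        while True:
            sock.sendto(packet, PEER)
            try:
                ack, _ = sock.recvfrom(4)
                if struct.unpack("!I", ack)[0] == seq:
                    return            # peer confirmed this sequence number
            except socket.timeout:
                pass                  # lost packet or lost ACK: resend

    for i, chunk in enumerate([b"<html>", b"...", b"</html>"]):
        send_reliable(i, chunk)

And this is still stop-and-wait: no sliding window, no congestion control, no flow control.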
--
On the other hand, avoiding TCP to get better performance is not necessarily a bad idea. Read about QUIC. (It also has features like loss recovery and congestion control, so you should not expect it to be extremely lightweight.)

error handling in Ethernet [closed]

Standard Ethernet has no error correction. If the FCS doesn't check out, the frame is dropped with no further effort.
Ethernet doesn't notify the switch at the other end of the link that it is dropping the frame (nothing like IP's ICMP), nor does it ask for retransmission.
Isn't that a bit odd? One would have expected a retransmission mechanism right where the error comes up, before it propagates, without burdening the higher layers with the resulting overhead.
In TCP, for instance, a packet lost at the Ethernet layer leaves a gap in the byte sequence, which can overflow the destination buffer and from there force retransmission of the dropped segments due to the lack of buffer space; that is a much bigger waste of resources than fixing it right at the link layer.
TIA.
//=================================
EDIT:
The Q here is:
Why doesn't Ethernet have a retransmission mechanism when there is a CRC error? That is, when the receiving switch sees an error in the frame, why doesn't it ask the sending switch at the other end of the link to retransmit the frame, or at least notify the sender?
Ethernet just drops the packets whenever there are such errors. Without any retransmission or notification, the packet loss won't be discovered until some control mechanism in an upper-layer protocol catches it.
Wouldn't it be sound logic to make Ethernet at least notify the sender? Is retransmission overhead the only reason for not having more elaborate error handling?
Remember, Ethernet isn't point-to-point. If an Ethernet frame is corrupted, how can you tell whom to send the failure message to? The source address in the corrupted frame can't be trusted either.
The BER (bit error rate) for Ethernet is very low, typically on the order of 10^-10.
So the overhead of ACKing each frame would be higher than just ignoring errors and letting the upper layers handle them if needed.
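A rough back-of-the-envelope check (assuming full-size 1500-byte frames; the figures are illustrative, not from the original answer):

    # With BER ~1e-10, how often is a full-size frame corrupted?
    BER = 1e-10
    frame_bits = 1500 * 8                     # full-size Ethernet payload, in bits

    # Probability that at least one bit in the frame is flipped.
    p_frame_error = 1 - (1 - BER) ** frame_bits
    print(p_frame_error)                      # ~1.2e-06, roughly 1 frame in 800,000

At 1 Gbit/s a full-size frame takes about 12 microseconds on the wire, so that error rate works out to roughly one bad frame every ten seconds; per-frame ACKs would cost far more than simply eating that loss.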
OK, you won't see it very much any more... but in a half-duplex Ethernet environment (CSMA/CD), frames will be resent if a collision is detected by the transmitting stations.

Iptables from udp to tcp [closed]

Is it possible to convert all outgoing UDP traffic from a gateway router to TCP on the same port with iptables? I have looked at the mangle table but am unsure how I can use it.
This is not possible in a generic way for at least the following reasons:
UDP is a datagram protocol, where each packet is independent of the others. TCP, in contrast, is a stream protocol. While it would be possible to concatenate UDP packets into a TCP stream, it is not clear at which boundaries the TCP stream should be split to regenerate the packets.
With UDP, duplicates, packet loss, and reordering of packets can happen. If you cannot be sure of the proper order, and that you got every message exactly once, you cannot reliably construct a TCP stream from them.
TCP acknowledges when it receives data, and if an ACK is missing, the segment gets retransmitted. Once you forward TCP to UDP, these ACKs cannot be honored, because UDP has no such mechanism.
It might still be possible to do application-specific translations, like UDP DNS to TCP DNS or similar. But this translation depends on the application protocol, so you need an application-specific gateway (a sketch for DNS follows below). iptables, as a packet filter, works on a lower layer and does not provide such functionality.
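For the DNS case, the translation is simple because DNS over TCP is just the UDP message with a 2-byte length prefix (RFC 1035). A minimal sketch of such a gateway; the listen port and upstream resolver are hypothetical placeholders, and error handling is omitted:

    import socket
    import struct

    LISTEN_ADDR = ("0.0.0.0", 5353)   # hypothetical local UDP port
    UPSTREAM = ("8.8.8.8", 53)        # illustrative TCP resolver

    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.bind(LISTEN_ADDR)

    while True:
        query, client = udp.recvfrom(512)       # classic DNS-over-UDP size limit
        with socket.create_connection(UPSTREAM) as tcp:
            # DNS over TCP prefixes the message with a 2-byte length.
            tcp.sendall(struct.pack("!H", len(query)) + query)
            (rlen,) = struct.unpack("!H", tcp.recv(2))
            reply = b""
            while len(reply) < rlen:
                chunk = tcp.recv(rlen - len(reply))
                if not chunk:
                    break
                reply += chunk
        udp.sendto(reply, client)               # answer the UDP client

Note how the gateway must understand the application protocol (the length prefix, the 512-byte limit); nothing at the iptables layer can supply that knowledge.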

Two ways a TCP client can access a port in the client machine? [closed]

Apparently there are two ways a client can access a port in the client machine. I've only found one so far.
The question:
When a client wishes to send a message to a server using TCP, it must establish a connection to a specific port and IP address. It must use a socket and a port on the client side to transport the data. Discuss the two ways that a client might get access to a port in the client machine.
I've read Wikipedia and some other sites, and it looks like there is only one way TCP connects to a port. Am I misunderstanding this question?
Three 'ways a client might get access to a port in the client machine':
Specify a specific port and use the bind() system call.
Specify port zero and call bind(). The system will allocate a client port.
Don't call bind() at all. The system will again allocate a client port on connect().
Don't ask me which two your instructor wants, or whether he wants something else completely, but that's how I would answer the question. Unless there is more to it than this, it is very poorly posed indeed.
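For illustration, a minimal Python sketch of all three; example.com and the fixed port number are hypothetical:

    import socket

    # 1. Explicitly bind() to a specific local port.
    s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s1.bind(("", 54321))                 # hypothetical fixed client port
    s1.connect(("example.com", 80))

    # 2. bind() to port 0: the OS allocates an ephemeral port at bind time.
    s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s2.bind(("", 0))
    s2.connect(("example.com", 80))

    # 3. Don't bind() at all: the OS allocates a port at connect() time.
    s3 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s3.connect(("example.com", 80))

    for s in (s1, s2, s3):
        print(s.getsockname())           # the local (address, port) each got
        s.close()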
There are two types of network communication in the TCP/IP suite from a client to a server (or another client):
TCP protocol
UDP protocol
The main difference between the two is this: the TCP protocol works just the way you described; the client tries to open a connection on a specific port to an IP address, and the remote side must accept the connection properly before data can be sent and received through the socket. The UDP protocol, by contrast, does not need to open (establish) a point-to-point connection. It allows you to send data at any time with no precondition other than a valid IP address and port number.
Note that with the UDP protocol, unlike TCP, you have no guarantee that your data (actually called a "packet", strictly a datagram) was successfully delivered to the remote address at all. You will never know. It is only recommended for small amounts of data; the bigger the packet, the lower the chance of proper transmission. A sketch of the contrast follows below.
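A minimal sketch of the difference; the server address and port are hypothetical placeholders:

    import socket

    SERVER = ("203.0.113.5", 9999)

    # TCP: must establish a connection before any data flows.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(SERVER)            # three-way handshake happens here
    tcp.sendall(b"hello over tcp")
    tcp.close()

    # UDP: no connection; just send a datagram and hope it arrives.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"hello over udp", SERVER)   # no handshake, no delivery guarantee
    udp.close()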

How is source port for HTTP determined? Is there ever collision in NAT? [closed]

I know that when an HTTP request is made, packets are sent from a seemingly random high-numbered port (e.g. 4575) on the client to port 80 on the server. The server then sends the reply to the same high-numbered port, the router knows to route that to the client computer, and all is complete.
My question is: How is the return port (4575 in this example) determined? Is it random? If so, within what range? Are there any constraints on it? What happens, for example, if two computers in a LAN send HTTP requests with the same source port to the same website? How does the router know which one to route to which computer? Or maybe this situation is rare enough that no-one bothered to defend against it?
The NAT is going to decide the outbound port for a NATed connection/session via its own internal means, so this will vary according to the implementation of the NAT. Any responses will then come back to that same outbound port.
As for your question:
What happens, for example, if two computers in a LAN send HTTP
requests with the same source port to the same website?
It will assign different outbound ports to each. Thus, it can distinguish between the two in the responses it receives. A NAT creates and maintains a mapping of translated ports, allocating new outbound port numbers for new sessions. So even if there were two different "internal" sessions, from two different machines, on the same port number, they would map to two different port numbers on the outgoing side. Thus, when packets come back in on the respective ports, it knows how to translate them back to the correct address/port on the inside LAN.
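As for the client side of the question: the source port isn't truly random; the OS picks it from its ephemeral port range at connect() time. A minimal sketch (example.com is an illustrative target):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("example.com", 80))
    print(s.getsockname())   # e.g. ('192.168.1.10', 4575): the ephemeral source port
    s.close()

    # On Linux the range the OS draws from can be inspected with:
    #   cat /proc/sys/net/ipv4/ip_local_port_range   (commonly "32768 60999")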
It depends on the NAT and on the protocol. For instance, I'm writing this message behind a full-cone NAT, and this particular NAT is configured (potentially hard-wired) to always map a private UDP transport address X:x to the public transport address Y:x. It's quite easy to shed some light on this case with a STUN server (Google has some free STUN servers), a cheap NAT, two laptops, Wireshark, and a really light STUN client that uses a hard-coded port like 777. Only the first call will get through, and it will be mapped to the original port; the second one will be blocked.
NATs are a hack; some of them are so bad that on the return path they actually rewrite the public transport address not only in the header but even in the transported data, which is kinda crazy.
The ICE/STUN protocols have to XOR the public address (the XOR-MAPPED-ADDRESS attribute) to bypass this issue.

TCP - congestion avoidance [closed]

I am trying to understand TCP congestion avoidance mechanism, but I don't understand one thing:
TCP congestion avoidance is per flow or per link?
In other words: there are two routers, A and B.
A is sending B two TCP flows. When one TCP flow detects congestion, does it decrease the window size of the other flow as well?
Of course, if this happens the other flow will detect the congestion in some time, but does the second flow "wait" until it detects congestion on its own? That would be quite ineffective...
Thanks a lot.
It decreases the window size for the current connection. Each connection's RTT and windows are maintained independently.
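To make the per-flow point concrete, here is a toy AIMD (additive increase, multiplicative decrease) simulation; it illustrates the bookkeeping only, it is not real kernel code, and the starting window is an arbitrary choice:

    # Congestion control state is kept per flow: a loss event halves
    # only that flow's cwnd.
    flows = {"flow1": 10.0, "flow2": 10.0}    # cwnd in segments, hypothetical

    def on_ack(name):
        # Congestion avoidance: additive increase, ~1 segment per RTT.
        flows[name] += 1.0

    def on_loss(name):
        # Multiplicative decrease: halve cwnd for this flow only.
        flows[name] = max(flows[name] / 2.0, 1.0)

    for _ in range(3):
        on_ack("flow1")
        on_ack("flow2")
    on_loss("flow1")                           # only flow1 detected a loss

    print(flows)   # {'flow1': 6.5, 'flow2': 13.0}: flow2 is unaffected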
Routers operate at layer 3 (IP) and are not aware of layer 4 (TCP); because of this, routers take no part in the TCP congestion avoidance mechanism. The mechanism is implemented entirely by the TCP endpoints. It is triggered by routers dropping IP packets, but (classic) routers are not aware of what higher-level protocol the IP packets carry.
The fact that one flow does not affect the other is quite desirable from a security perspective. With NAT you can have many hosts sharing the same IP address, and from the outside world all these hosts look like a single machine. So if some server reduced the throughput of all TCP connections coming from a single IP address in response to packets dropped within one of those connections, that would open the door to quite nasty DoS attacks.
Another issue is that some routers may be configured to drop packets based on the IP ToS field. For example, latency-sensitive SSH traffic may set a different ToS than a bulk FTP download. If the router is configured to take the ToS field into account, it may drop packets belonging to the FTP connection, which should trigger congestion avoidance, but it should not affect packets belonging to the SSH connection, which may be handled with higher priority.
