As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
I am asking from a practical standpoint. In TCP, accept() gives us a new socket for every connect(). That allows multiple concurrent conversations on a single server port.
The question is: why don't we have such a convenience in UDP? Don't tell me UDP is connectionless, hence... Logically, accept() has nothing to do with that (the underlying IP is connectionless anyway).
One consequence is that you have to open many UDP ports, which may complicate firewall settings. So my next question: what is the solution for a multi-client UDP application regarding ports and multiplexing? In some cases, I am thinking of embedding client information in the UDP packet and letting the server differentiate. But inherently, without accept(), certain tasks are hard (e.g., UDP with OpenSSL).
Thank you for the insight.
Because UDP is a connectionless protocol, you don't need it. You get the remote address with every incoming UDP datagram, so you know who it's from, and you don't need a per-connection socket to tell you. You don't have to 'open lots of UDP ports' at all. You only need one.
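The answer above can be sketched with Python's socket module. The loopback addresses and OS-assigned ports are arbitrary, and the two "clients" live in one process purely for illustration:

```python
import socket

# Minimal sketch: a single UDP socket serving many clients. recvfrom()
# returns the sender's (ip, port) with every datagram, so one server
# port is enough to tell clients apart.

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
server_addr = server.getsockname()

# Two "clients" in the same process, each on its own ephemeral port.
c1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
c2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
c1.sendto(b"hello from c1", server_addr)
c2.sendto(b"hello from c2", server_addr)

clients = {}                             # per-client state keyed by (ip, port)
for _ in range(2):
    data, addr = server.recvfrom(2048)
    clients[addr] = data                 # addr alone distinguishes senders

print(len(clients))                      # 2 distinct clients, 1 server port
```

Keying a dictionary by the (ip, port) pair is effectively what accept() does for you in TCP; here it is one line of application code.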
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 7 years ago.
Before a browser can request a webpage, a TCP connection is required to be established. What is the necessity of a 3-way handshake when interacting with a server? Why can't we simply send a web request and wait for the response?
Shouldn't the resolution of the IP address be enough for this purpose?
Basically, I need to know the reason for establishing a TCP connection.
Thanks in advance.
Say you are using a device named A and the server is named B:
Host A sends a TCP SYNchronize packet to Host B
Host B receives A's SYN
Host B sends a SYNchronize-ACKnowledgement
Host A receives B's SYN-ACK
Host A sends ACKnowledge
Host B receives ACK.
TCP socket connection is ESTABLISHED.
See more at: http://www.inetdaemon.com/tutorials/internet/tcp/3-way_handshake.shtml
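A minimal sketch of the same exchange using Python sockets on loopback (ports are chosen by the OS; names are mine). The handshake happens inside connect()/accept(), invisible to the application, which is why application data can only flow once the connection is ESTABLISHED:

```python
import socket
import threading

# Sketch: connect() blocks until the SYN / SYN-ACK / ACK exchange is done;
# the server's accept() completes its side of the handshake.

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
listener.listen(1)
addr = listener.getsockname()

def serve():
    conn, _ = listener.accept()          # completes B's side of the handshake
    conn.sendall(b"hello")               # data flows only once ESTABLISHED
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.create_connection(addr)  # performs the three-way handshake
data = client.recv(1024)
client.close()
t.join()
listener.close()

print(data)                              # b'hello'
```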
Because you need a TCP connection to send HTTP over, and TCP has a 3-way handshake.
Basically, I need to know the reason for establishing TCP connection.
Because HTTP runs over TCP. It doesn't exist in a vacuum.
TCP provides ordering, automatic retransmission, and congestion control. I'd say these are the obvious reasons why HTTP's design adopted TCP.
In contrast, UDP is fast: there is no handshaking. But UDP packets are not ordered, packets can get lost (there is no automatic retransmission), and there is no congestion control.
You could try implementing data transfer for things like HTML over UDP. It's not easy: you would still need to reinvent ordering and retransmission to get reliable, lossless delivery.
If you can tolerate some loss or slightly out-of-order delivery (e.g., real-time video), then you probably don't need TCP.
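As a rough sketch of what "reinventing ordering" means, here is a toy receiver-side routine (names and structure are my own, not from any real stack) that uses sequence numbers to restore order and spot the gaps that would need retransmission:

```python
# Sketch: the minimum a UDP application must reinvent for ordered,
# lossless delivery is sequence numbering, to detect gaps and
# reordering. (Retransmission and congestion control come on top.)

def reorder_and_find_gaps(packets):
    """packets: list of (seq, payload) as received, possibly out of order."""
    received = dict(packets)
    if not received:
        return [], []
    highest = max(received)
    # Sequence numbers never seen: these would need retransmission.
    missing = [s for s in range(highest + 1) if s not in received]
    # Payloads sorted back into sending order.
    in_order = [received[s] for s in sorted(received)]
    return in_order, missing

# Datagrams arrived out of order and seq 2 was lost:
data, gaps = reorder_and_find_gaps([(1, b"b"), (0, b"a"), (3, b"d")])
print(data, gaps)   # [b'a', b'b', b'd'] [2]
```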
--
On the other hand, avoiding TCP to get better performance is not necessarily a bad idea. Read about QUIC. (It also has features like loss recovery and congestion control, so you should not expect it to be extremely lightweight.)
Closed 8 years ago.
Transmission Control Protocol and Internet Protocol are two different protocols.
Then why are they always mentioned together?
The official name for TCP/IP is the Internet Protocol Suite. TCP/IP is a shorthand used by its authors to refer to this new iteration of a standard based on a previous protocol simply called TCP (for Transmission Control Program), so one may infer that the new acronym was meant to differentiate it from the latter.
Quoting the Wikipedia entry:
In May 1974 the Institute of Electrical and Electronic Engineers (IEEE) published a paper titled "A Protocol for Packet Network Intercommunication." The paper's authors, Vint Cerf and Bob Kahn, described an internetworking protocol for sharing resources using packet-switching among the nodes. A central control component of this model was the Transmission Control Program that incorporated both connection-oriented links and datagram services between hosts. The monolithic Transmission Control Program was later divided into a modular architecture consisting of the Transmission Control Protocol at the connection-oriented layer and the Internet Protocol at the internetworking (datagram) layer. The model became known informally as TCP/IP, although formally it was henceforth called the Internet Protocol Suite.
Source: http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Historical_origin
They are the two core protocols of the Internet Protocol Suite and are always mentioned together because both are necessary to transmit data over the internet.
From Wikipedia: "TCP provides reliable, ordered and error-checked delivery of a stream of octets between programs running on computers connected to a local area network, intranet or the public Internet." and "IP, [...] has the task of delivering packets from the source host to the destination host solely based on the IP addresses in the packet headers."
When you need to send a message from one computer to another, TCP is responsible for breaking the message into smaller segments and leaving the rest of the work to IP, which takes care of delivering those segments to the correct destination. On the other side, when the receiving computer gets the segments, TCP reassembles them into the original message.
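This splitting and reassembly can be observed with a short Python sketch over loopback (message size and ports here are arbitrary): the sender writes one large message, and the receiver reads it back in arbitrary-size chunks that it must join together:

```python
import socket
import threading

# Sketch: TCP presents a byte stream. A large message is split into
# segments in transit; the receiver reads chunks and reassembles them.

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
listener.listen(1)

message = b"x" * 200_000                 # one logical message

def send():
    conn, _ = listener.accept()
    conn.sendall(message)                # TCP splits this into segments
    conn.close()                         # signals end of stream

t = threading.Thread(target=send)
t.start()

receiver = socket.create_connection(listener.getsockname())
chunks = []
while True:
    chunk = receiver.recv(4096)          # arrives in arbitrary-size pieces
    if not chunk:
        break
    chunks.append(chunk)
receiver.close()
t.join()
listener.close()

reassembled = b"".join(chunks)
print(reassembled == message)            # True: the original message, whole
```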
Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 9 years ago.
I would like to create a new protocol, i.e., one having features of both TCP and UDP. Can you tell me what tips and techniques would be required, if this is possible?
Thanks in advance
TCP gives you three features that UDP does not: (a) estimating the sending rate, (b) retransmission, and (c) flow control. In doing so, the trade-off is that TCP becomes slower compared to UDP. So, if your application is delay-sensitive, which is typically true for audio/video applications, then you need to start with UDP and add whichever of the above three you want. Typically, UDP applications might add forward error correction or application-layer packet bookkeeping to ensure retransmission.
There is yet another advantage that UDP offers which TCP does not: multicast. If your application might use multicast, UDP would be the right choice, since UDP can handle point-to-multipoint delivery. Using TCP for multicast applications would be hard, since the sender would have to keep track of retransmissions and the sending rate for multiple receivers.
So, in summary, UDP offers two things that TCP cannot: lower delay and the ability to do multicast. This way, we can actually reduce the scope of the question and ask which features of TCP one would like to add to UDP, since there is no way one can add the features of UDP to TCP.
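As a toy illustration of adding one of those three features (retransmission) on top of an unreliable channel, here is a stop-and-wait sketch. The lossy channel is simulated, and all names are invented for the example:

```python
import random

# Sketch: adding one TCP feature (retransmission) to a UDP-like channel.
# Stop-and-wait: resend each packet until the matching ACK arrives.
# lossy_send() simulates a channel that drops datagrams.

random.seed(1)                      # deterministic losses for the demo

def lossy_send(packet, deliver, loss_rate):
    """Deliver packet unless the channel drops it; return ACK or None."""
    if random.random() < loss_rate:
        return None                 # datagram lost, so no ACK comes back
    deliver(packet)
    return packet[0]                # receiver ACKs the sequence number

received = []
attempts = 0
for seq, payload in enumerate([b"a", b"b", b"c"]):
    while True:                     # retransmit until this packet is ACKed
        attempts += 1
        ack = lossy_send((seq, payload), lambda p: received.append(p[1]),
                         loss_rate=0.7)
        if ack == seq:
            break

print(received, attempts)           # all data delivered despite drops
```

Real protocols pipeline many packets with a sliding window instead of stopping after each one, but the retransmit-until-ACKed loop is the core idea.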
Closed 8 years ago.
I am trying to understand TCP congestion avoidance mechanism, but I don't understand one thing:
TCP congestion avoidance is per flow or per link?
In other words: there are two routers, A and B.
A is sending B two TCP flows. When one TCP flow detects congestion, does it decrease the window size of the other flow as well?
Of course, if this happens, the other flow will detect congestion at some point anyway; but does the second flow "wait" until it detects congestion on its own? That would be quite ineffective...
thanks a lot
It decreases the window size for the current connection. Each connection's RTT and windows are maintained independently.
Routers operate at layer 3 (IP) and are not aware of layer 4 (TCP); because of this, routers do not take any part in the TCP congestion avoidance mechanism. It is fully implemented by the TCP endpoints. It is triggered by routers dropping IP packets, but (classic) routers are not aware of what higher-level protocol the IP packets carry.
The fact that one flow does not affect the other is quite desirable from a security perspective. With NAT you can have many hosts sharing the same IP address; from the outside world, all these hosts look like a single machine. So, if some server reduced the throughput of all TCP connections coming from a single IP address in response to packets dropped within one of those connections, that would open the door to quite nasty DoS attacks.
Another issue is that some routers may be configured to drop packets based on the IP ToS field. For example, latency-sensitive SSH traffic may set a different ToS than a bulk FTP download. If a router is configured to take the ToS field into account, it may drop packets belonging to the FTP connection, which should trigger congestion avoidance, but it should not affect packets belonging to the SSH connection, which may be handled with higher priority.
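A toy model of the per-connection behavior (a bare AIMD sketch, not real TCP): two flows keep independent congestion windows, and a loss on one flow halves only that flow's window:

```python
# Sketch: congestion-control state lives per connection. AIMD =
# additive increase (per ACK), multiplicative decrease (on loss).

class Flow:
    def __init__(self):
        self.cwnd = 10.0                   # congestion window, in segments

    def on_ack(self):
        self.cwnd += 1.0 / self.cwnd       # additive increase per ACK

    def on_loss(self):
        self.cwnd /= 2                     # multiplicative decrease

f1, f2 = Flow(), Flow()
for _ in range(100):                       # both flows receive ACKs
    f1.on_ack()
    f2.on_ack()
f1.on_loss()                               # congestion detected on flow 1 only

print(round(f1.cwnd, 1), round(f2.cwnd, 1))  # f2's window is untouched
```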
Closed 3 years ago.
I want to connect two computers via serial, but have each see the other via a TCP/IP connection; i.e., create a new network device on each computer that is in actual fact a serial port.
The reason for this is that I am actually testing the medium in which the serial connection is made (wireless), and part of the experiment will be to use TCP/IP.
The radio being tested is a telemetry radio for use in low power applications. It polls once a second, sending data out on the wireless channel every poll when something has been received via the serial port. It uses a Modbus RTU delimiter to determine the end of data coming in on the serial port.
SLIP and PPP are more suitable for use with actual serial modems, from what I understand.
This is actually a very hard problem. TCP/IP is a very chatty protocol, and you will have trouble with the radio system you have described because of the pattern of packets and ACKs it produces. In the past, for some similarly unsuited applications, I worked on a system that fibbed about the TCP/IP connection by faking some packets while pushing the data over a link like yours.
It is a pain, but we were doing it to support SSH over a totally inappropriate channel (high loss and high latency with moving endpoints), and it worked.
SLIP (Serial Line IP) sounds like something you might want to look into for this project.
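For a feel of what SLIP does, here is a sketch of its framing rules as defined in RFC 1055 (the special byte values END/ESC come from the RFC; the function names are mine). It shows how IP packets get delimited on a raw serial line:

```python
# Sketch of SLIP framing (RFC 1055). Each frame ends with END (0xC0);
# END/ESC bytes inside the payload are escaped so the receiver can
# always find frame boundaries.

END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(packet: bytes) -> bytes:
    out = bytearray([END])              # leading END flushes line noise
    for b in packet:
        if b == END:
            out += bytes([ESC, ESC_END])
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])
        else:
            out.append(b)
    out.append(END)                     # frame terminator
    return bytes(out)

def slip_decode(frame: bytes) -> bytes:
    out, esc = bytearray(), False
    for b in frame:
        if esc:
            out.append(END if b == ESC_END else ESC)
            esc = False
        elif b == ESC:
            esc = True
        elif b != END:                  # END bytes only mark boundaries
            out.append(b)
    return bytes(out)

# Payload containing both special bytes survives the round trip:
packet = bytes([0x45, 0x00, END, ESC, 0x01])
assert slip_decode(slip_encode(packet)) == packet
```

Note that SLIP provides framing only: no addressing, error detection, or compression, which is part of why PPP largely replaced it.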
You can tunnel simple TCP/IP or UDP connections over a UART using software like this:
http://www.serialporttool.com/CommTunnel.htm