Question about TCP and how to assemble TCP segments meaningfully - networking

When an application such as a web server sends HTTP data to a web browser, how does the browser know when it has received all of the data so that it can begin using it instead of waiting for more? TCP doesn't specify anywhere how large a segmented message is going to be.
Right now I'm thinking that it's up to the application layer, like HTTP's Content-Length header. But it seems like even that header could be split off into a 2nd or 3rd packet.

TCP is a connection-oriented protocol. So, when the browser makes an HTTP connection over TCP/IP, the network stack guarantees that the stream will arrive in the same order the sender intended.
So there is no packet concept when you are dealing with TCP. TCP is an ordered stream of bytes arriving through a socket. No need to worry about packets at all. That's the beauty of a protocol stack: each layer does its own work and shields the layer above it from the underlying complications of the problems it solves.

Content-Length indeed, except in the case where the client reads until it gets an end-of-file indication because the other end closed the connection. Of course, HTTP is a request/response protocol ('RSVP'), so normally that's not going to happen.
Absent a Content-Length, it has to look for </html> or some other delimiter in the content. The browser doesn't see packets at all. The connection looks like a stream, with no boundaries, and it's up to the two ends to agree on a protocol.
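To make the framing concrete, here is a minimal sketch in Python (a hypothetical client, not how any real browser is implemented; chunked transfer encoding is ignored): send a request, read the byte stream until the header block is complete, then use Content-Length to know where the body ends, falling back to reading until the peer closes the connection.

    # Minimal sketch: framing an HTTP response on top of the TCP byte stream.
    # The socket only hands us bytes; Content-Length (or connection close)
    # tells us where the message ends. Chunked encoding is not handled.
    import socket

    def fetch(host, path="/"):
        s = socket.create_connection((host, 80))
        request = (f"GET {path} HTTP/1.1\r\n"
                   f"Host: {host}\r\n"
                   "Connection: close\r\n\r\n").encode()
        s.sendall(request)

        buf = b""
        while b"\r\n\r\n" not in buf:              # read until the header block is complete
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("peer closed before headers were complete")
            buf += chunk
        headers, _, body = buf.partition(b"\r\n\r\n")

        length = None
        for line in headers.split(b"\r\n")[1:]:    # skip the status line
            name, _, value = line.partition(b":")
            if name.strip().lower() == b"content-length":
                length = int(value)

        while True:
            if length is not None and len(body) >= length:
                break                              # got exactly the announced body
            chunk = s.recv(4096)
            if not chunk:
                break                              # no Content-Length: end-of-file is the delimiter
            body += chunk
        s.close()
        return headers, body

Note that none of this cares how the bytes were split into packets on the wire; the loop just keeps reading until the application-level framing says the message is complete.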

Related

Do all protocols based on TCP use one socket per transfer?

I'm studying Socket Programming HOWTO and the author at some point says that
A protocol like HTTP uses a socket for only one transfer.
Is it because of the design of the HTTP protocol itself? Or is it because it is based on TCP, so all protocols based on it (e.g. UDP) must use one socket for only one transfer?
This statement is taken out of context. The context is to point out that TCP is not a message-based protocol but an unstructured byte stream, and that to get message semantics one needs some way to determine where a message ends.
It then takes HTTP as an example where a message might simply end with a connection close and points out the limitation - namely, only a single message per connection per direction. Then it goes on to describe how protocols can be designed without this limitation, i.e. having multiple messages per connection.
HTTP can still be used like this, i.e. a single request ended by a connection close. This was the design of HTTP version 0.9, and it can still be done with HTTP/1. But HTTP/1 can also be used for multiple messages, one after the other. And HTTP/2 can handle multiple messages in parallel, multiplexed over a single TCP connection. And HTTP/3 does not even use TCP anymore.
Do all protocols based on TCP use one socket per transfer?
Protocols are not limited to one connection ("socket") per message ("transfer"). Depending on the design of the protocol, multiple messages can be sent one after the other by having some pre-known message size or a clear message delimiter (see the sketch below). Some protocols might send multiple messages in parallel by implementing a multiplexing layer on top of TCP. Some protocols might even use multiple TCP connections in parallel to deliver a single message, i.e. distributing the message over multiple connections.
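As a concrete illustration of the "pre-known message size" option (a hypothetical framing scheme, not any particular protocol), a Python sketch: prefix every message with a 4-byte big-endian length, and have the receiver read exactly that many bytes, so any number of messages can be sent over one TCP connection one after the other.

    # Sketch of length-prefix framing (hypothetical protocol): each message is
    # preceded by a 4-byte big-endian length, so many messages can share a socket.
    import struct

    def send_message(sock, payload: bytes):
        sock.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_exactly(sock, n: int) -> bytes:
        data = b""
        while len(data) < n:                   # recv() may return fewer bytes than requested
            chunk = sock.recv(n - len(data))
            if not chunk:
                raise ConnectionError("connection closed mid-message")
            data += chunk
        return data

    def recv_message(sock) -> bytes:
        (length,) = struct.unpack("!I", recv_exactly(sock, 4))
        return recv_exactly(sock, length)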
That statement was probably written in 1996 or earlier. Since 1997, HTTP supports persistent connections, reusing the same TCP connection and the same socket for multiple queries.

In OpenDaylight we send OpenFlow multipart requests - why does Wireshark show several multipart requests accumulated in a single packet?

Like this figure (a Wireshark capture of the OpenFlow traffic):
I can see many OpenFlow 1.3 multipart requests in this packet, but I don't understand why this happens.
Shouldn't there be only one OpenFlow 1.3 message here?
Is it related to how openflowjava serializes, to Wireshark, to the NIC, or to TCP's Nagle algorithm?
Thanks!
TCP is a byte stream, i.e. a packet has no meaning from the perspective of the application layer (OpenFlow here). At the transport level there can be multiple application-layer "messages" in a single TCP packet, messages can cross packet boundaries, etc. - it does not matter to the application. While TCP packet boundaries often happen to coincide with message boundaries, because of timing in the application and maybe a disabled Nagle algorithm, the assumption that TCP packet boundaries are always message boundaries is wrong, and any reliance on it will often cause sporadic and hard-to-reproduce problems.
And based on this, what you see is also not one "multipart request". These are just multiple OpenFlow messages (application level) sent at the same time or shortly after each other, and they are put together into the same transport-level entity (packet) since this way there is less overhead in transporting each message.
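As a sketch of what a correct receiver does instead (assuming the standard 8-byte OpenFlow header, whose bytes 2-3 carry the total message length; buffering is simplified): split messages by the length field in each header, never by packet boundaries.

    # Sketch: splitting OpenFlow messages out of the TCP byte stream. Each
    # OpenFlow header carries its own length field, so one packet may contain
    # several complete messages, or only part of one.
    import struct

    def split_openflow(buffer: bytes):
        """Return (complete_messages, leftover) from the bytes received so far."""
        messages = []
        while len(buffer) >= 8:                    # at least one full header?
            version, msg_type, length = struct.unpack("!BBH", buffer[:4])
            if len(buffer) < length:               # message not fully received yet
                break
            messages.append(buffer[:length])
            buffer = buffer[length:]
        return messages, buffer

Whatever is returned as leftover is the start of a message whose remaining bytes will show up in a later read.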

How is a TCP "Connection" maintained, and how does HTTP Keep-Alive affect it?

I'm an application developer looking to learn more about the transport layer of my requests that I've been making all these years. I've also been learning more of the backend and am building my own live data service with websockets, which has me curious about how data actually moves around.
As such I've learned about TCP, and I understand how it works, but there's still one term that confuses me-- a "TCP Connection". I have seen it everywhere, and actually there was a thread opened with the exact same question... but as the OP said in the comments, nobody actually answered the question:
TCP vs UDP - What is a TCP connection?
"when we say that there is a connection established between two hosts,
what does that mean? If I could get a magic microscope and inspect the
server or the client, and - a-ha! - find the connection, what would I
be looking at? Some variable allocated by the OS code? Some entry in
some kind of table? How and when does that gets there, and how and
when it is removed from there"
I've been reading to try to figure this out on my own.
Here is a nice resource that details the HTTP flow and also mentions the "TCP connection":
https://blog.catchpoint.com/2010/09/17/anatomyhttp/
Here is another thread about HTTP Keep-alive, same "TCP Connection":
HTTP Keep Alive and TCP keep alive
My understanding:
When a client wants data from a server, the SYN/ACK handshake happens, this "connection" is established, and both parties agree on the starting sequence number, maximum packet size, etc.
As long as this "connection" is still open, the client can request/receive data without doing another handshake. TCP keep-alive sends a heartbeat to keep this "connection" open.
1) Somehow an HTTP header "Keep-Alive" also keeps this TCP "connection" open, even though HTTP headers are part of the packet payload and it doesn't seem to make sense that the TCP layer would parse the HTTP headers?
To me it seems like a "connection" between two machines in the literal sense can never be closed, because a client is always free to hit a server with packets (like the first SYN packet, for example)
2) Is a TCP "connection" just the client and server saving the sequence number from the other's IP address? maybe it's just a flag that's saying "hey this client is cool, accept messages from them without a handshake"? So would closing a connection just be wiping that data out from memory?
... both parties agree on the starting sequence number
No, they don't "agree" one a number. Each direction has their own sequence numbering. So the client sends in the SYN to the server the initial sequence number (ISN) for the data from client to server, the server sends in its SYN the ISN for the data from server to client.
Somehow a HTTP Header "Keep-alive" also keeps this TCP "connection" open ...
Not really. With HTTP keep-alive the client just asks the server nicely not to close the connection after the HTTP response has been sent, so that another HTTP request can be sent using the same TCP connection. The server might decide to follow the client's wish or not.
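For illustration, a minimal sketch (Python, against a placeholder host, with the response reading omitted): "reusing the connection" simply means writing a second request to the same socket; TCP itself never looks at the Keep-Alive header.

    # Sketch (placeholder host): two HTTP/1.1 requests over one TCP connection.
    # Keep-alive is an HTTP-level agreement; the server just chooses not to
    # close the connection after responding, so the same socket can be reused.
    import socket

    s = socket.create_connection(("example.com", 80))
    request = (b"GET / HTTP/1.1\r\n"
               b"Host: example.com\r\n"
               b"Connection: keep-alive\r\n\r\n")
    s.sendall(request)   # first request
    # ... read the first response (delimited by its Content-Length) ...
    s.sendall(request)   # second request on the same connection, no new handshake
    # ... read the second response, then close ...
    s.close()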
To me it seems like a "connection" between two machines in the literal sense can never be closed,
Each side can send a packet with a FIN flag to signal that it will no longer send any data. If both sides have sent a FIN, the connection is considered closed, since neither will send anything anymore and thus nothing can be received. If one side decides that it does not want to receive any more data, it can send a packet with an RST flag.
Is a TCP "connection" just the client and server saving the sequence number from the other's IP address?
Kind of. Each side saves the current state of the connection, i.e. the IPs and ports involved, the sequence number currently expected when receiving, the current sequence number for sending, outstanding bytes which have not been ACKed yet, ... If no such state is there (for example because one side crashed), then there is no connection.
... maybe it's just a flag that's saying "hey this client is cool, accept messages from them without a handshake"
If a packet is received which fits an existing state, then it is considered part of the connection, i.e. it will be processed and the state will be updated.
So would closing a connection just be wiping that data out from memory?
Closing is telling the other side that no more data will be sent (using FIN), and once both sides have done this, both can basically remove the state, and then there is no connection anymore.
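To make the "magic microscope" picture concrete, here is a simplified sketch (not an actual kernel data structure) of the kind of per-connection state each endpoint keeps; tools like ss -t or netstat list exactly these entries.

    # Simplified sketch of the per-connection state (the "Transmission Control
    # Block"); real kernel structures also hold timers, window sizes,
    # congestion state, the retransmission queue, and much more.
    from dataclasses import dataclass

    @dataclass
    class ConnectionState:
        local_addr: str
        local_port: int
        remote_addr: str
        remote_port: int
        state: str        # e.g. "ESTABLISHED", "FIN_WAIT_1", "CLOSE_WAIT"
        snd_nxt: int      # next sequence number we will send
        rcv_nxt: int      # next sequence number we expect to receive
        unacked: bytes    # sent data not yet acknowledged by the peer

Closing a connection ends with this state being discarded on both sides; once it is gone, a stray packet no longer "fits" anything and there is no connection.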

reading tcp packets out of order

Web games are forced to use TCP.
But with real-time constraints, TCP's head-of-line blocking behavior is absurd when you don't care about old packets.
While I'm aware that there's definitely nothing that we can do on the client side, I'm wondering if there is a solution on the server side.
Indeed, on the server you get packets in order and miserably wait if misbehaving packet t+42 has been lost, even though packets t+43 and t+44 may already be sitting nicely in your receive buffer.
Since we are talking about local data, technically it should be possible to retrieve it.
So does anyone have an idea on how to perform that feat?
How to save this precious data from these pesky kernel space daemons?
TCP guarantees that the data arrives in order and re-transmits lost packets. TCP Man Page
Given this, there is only one way to achieve the results you want given your stated constraints, and that is to hack the TCP protocol at the server side (assuming you cannot control the client WebSocket behavior). The simplest way, relatively speaking, would be to open a raw socket, implement your own simple TCP handshake (SYN-ACK when the client SYNs), then read and write from the socket, managing your own TCP headers. Your custom implementation would need to keep track of received sequence numbers and acknowledge all of those you want the client to forget about.
You might be able to reduce the effort by making this program a proxy in front of your original server.
Example of TCP raw socket here.
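For orientation only, a minimal sketch (Python on Linux, must run as root) of the observation half of that approach: a raw socket hands you every incoming TCP segment as it arrives, so you can see sequence numbers for data the kernel's TCP would still be holding back. Actually replacing the stack (handshake, ACKs, window handling) is far more work and is not shown.

    # Sketch (Linux, run as root): a raw socket sees whole TCP segments,
    # including out-of-order ones the kernel would buffer before delivery.
    # This only observes segments; a custom stack would also have to ACK them.
    import socket
    import struct

    raw = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)

    while True:
        packet, addr = raw.recvfrom(65535)
        ihl = (packet[0] & 0x0F) * 4               # IP header length in bytes
        tcp = packet[ihl:]
        src_port, dst_port, seq, ack = struct.unpack("!HHII", tcp[:12])
        data_offset = (tcp[12] >> 4) * 4           # TCP header length in bytes
        payload = tcp[data_offset:]
        print(f"{addr[0]}:{src_port} -> :{dst_port} seq={seq} len={len(payload)}")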

Working with persistent HTTP connections

We are trying to implement a proxy proof of concept but have encountered an interesting question: since a single HTTP connection can, and indeed should, make multiple requests, and the HTTP transactions are sent via multiple packets due to TCP's magic, is it possible for an HTTP request to begin in the middle of a packet?
Bear in mind that this is not a theoretical question regarding possible optimization of the browser, but whether it actually happens in real life. It would be even better if someone could point me to a written reference on whether or not this is possible and if so how often it can occur.
Clarification update: We know that if we work in the HTTP layer alone we would not need to bother with this question, however we're trying to figure out if some advanced technique could be applied by working on the TCP layer first.
Assuming that you are talking about IP packets: yes, it is possible for an HTTP request to start in the middle of an IP packet.
When you are using persistent HTTP connections, that is, using the same TCP connection for several HTTP requests, it is entirely possible that a request boundary falls in the middle of an IP packet.
Also, there is the TCP protocol between IP and HTTP. TCP has its own headers, so an IP packet may start with TCP headers and the rest of the packet then consists of HTTP request data.
An HTTP request may also span several IP packets (in the case of file uploads, transmission errors and subsequent retransmissions, etc.).
However, I wonder why you are interested in packets if you are working at the HTTP level. TCP should hide the IP packet details.
First of all, TCP is a stream-based protocol and has no concept of packets. HTTP itself might have some kind of message or record delimiter, but TCP doesn't.
This page might be helpful: Structure of HTTP Transactions
From your question it sounds like you think that each read from a TCP socket is a "packet" of data. In reality, each read simply reads as many bytes as are in the buffer up to the maximum that you requested, without any concept of records or packets.
So for instance, let's say you read 2048 bytes from the socket: you could have the tail end of one response, followed by the beginning of a second response halfway through the data you read, and only get the remainder of that second response on your next read from the socket.
If you're here in Jerusalem or nearby, maybe I could help you out.
Unless you are implementing your own TCP stack, you should not need to worry about the packets, but rather about the API that TCP provides; in the case of the POSIX interfaces that would be recv() or read(). So I treat the question as "Can more than one HTTP request come in a single read(), and can an HTTP request be split across multiple read() calls?" - the answer to both is "yes, it is possible".
An example of where this can happen is HTTP pipelining. This is not frequent in real life (ironically, at least some of the browsers disable it by default because of "buggy proxies" :-) - but when it happens, it can be a bit of a problem for the users to diagnose - especially if they have no access to the proxy.
One very notable place where it does happen by default is apt-get on Debian-derived Linux systems. Just install a Debian or Ubuntu server and try to use it through your proxy. You can do that by editing the /etc/apt/apt.conf.d/proxy file and placing the following there:
Acquire::http::Proxy "http://your.proxy.address:8080";
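Coming back to the read()/recv() framing above, here is a sketch of the buffering a proxy ends up doing (assuming Content-Length framing and ignoring chunked encoding): carve complete HTTP messages out of whatever has been read so far, and keep the partial tail, which may be the start of the next pipelined request, for the next read.

    # Sketch: consuming a TCP stream without caring about packets. Keep a
    # buffer, extract complete HTTP messages, and leave any partial tail for
    # the next read(). Assumes Content-Length framing (no chunked encoding).
    def extract_messages(buffer: bytes):
        """Return (complete_messages, leftover) from the bytes read so far."""
        messages = []
        while True:
            header_end = buffer.find(b"\r\n\r\n")
            if header_end == -1:                   # headers not complete yet
                break
            length = 0
            for line in buffer[:header_end].split(b"\r\n")[1:]:
                name, _, value = line.partition(b":")
                if name.strip().lower() == b"content-length":
                    length = int(value)
            total = header_end + 4 + length
            if len(buffer) < total:                # body not complete yet
                break
            messages.append(buffer[:total])
            buffer = buffer[total:]                # may hold the start of the next message
        return messages, buffer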
It depends on which abstraction layer's "packet" you are talking about: there are many layers underneath HTTP.
HTTP --> TCP (byte stream) --> IP (packet) --> (possibly something else) Ethernet (frame) --> (possibly) some other transport
If you are talking about the IP layer, then yes, the HTTP data can start partway through a packet... Note that TCP presents a "byte stream" interface to its client layer; hence there is no concept of a packet here.
I think I understand where you are trying to go with this question.
If you don't use persistent HTTP connections, the HTTP GET request header is always the very first thing which is sent over the TCP connection, so we can be sure that the start of the HTTP GET request header does "not start in the middle of some TCP packet". But keep in mind that there may be one or more TCP packets without any user data, e.g. only a SYN, which may precede the TCP packet with the start of the HTTP GET request header. And also keep in mind that the HTTP GET request header may not be contained in a single TCP packet.
If you do use persistent HTTP connections, the start of the HTTP GET request header for request number N+1 can start in the middle of a TCP packet, namely after the end of HTTP GET request body of request number N.
If you are asking these questions you are possibly "doing it wrong". As several other responders have already pointed out, in the vast majority of cases you should probably just be a TCP client and deal with a TCP stream of data, letting the TCP code worry about the TCP packets. (Unless, of course, you are working on some special hardware which looks at individual IP packets as they fly by and tries to do some processing at the HTTP layer.)
