Determining content length when not specified in HTTP

I have been playing around with parsing HTTP in user space, and from some research I see that there are several ways to send data following the HTTP header and \r\n\r\n. Obviously Content-Length is not always used, so what are the other methods, and how do you determine the size of the data being sent beforehand if not streaming?
I did see content encoding, chunking and so on; I'm just a bit lost with the overall dynamic nature of the protocol in this case. What is the sure-fire way of determining the amount of data to be sent (when obviously not streaming something never-ending)?
Really appreciate the help.

The new HTTP spec describes this in http://greenbytes.de/tech/webdav/draft-ietf-httpbis-p1-messaging-26.html#message.body.
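For a concrete picture of what that section boils down to, here is a minimal sketch in Python (my own illustration, not code from the spec) of the precedence a parser applies when deciding how much body to read; read_chunked is a hypothetical helper.

```python
def read_body(headers, sock_file, is_response=True, status=200, method="GET"):
    """Decide how to read an HTTP/1.1 message body, following the
    message-body rules summarised in the draft linked above."""
    # Some messages never carry a body at all.
    if is_response and (status // 100 == 1 or status in (204, 304)
                        or method == "HEAD"):
        return b""

    # Transfer-Encoding: chunked takes precedence over Content-Length.
    if "chunked" in headers.get("transfer-encoding", "").lower():
        return read_chunked(sock_file)  # hypothetical helper

    # An explicit Content-Length says exactly how many bytes follow.
    if "content-length" in headers:
        return sock_file.read(int(headers["content-length"]))

    # A response with neither header is read until the server closes
    # the connection; a request with neither simply has no body.
    return sock_file.read() if is_response else b""
```

In other words, outside of chunked encoding and Content-Length there is no way to know the size in advance: the only remaining signal for a response is the connection closing.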

Related

What size does the server have to give to each response chunk?

If I wanted to configure my personal server so that the response for a certain request is sent according to the chunking rules: what size should each of the server's response chunks be?
For example, let's say that the chunked response is a long HTML page or a file.
How would you handle these two cases?
From the RFC:
This allows dynamically produced content to be transferred...
In other words: Transfer-Encoding: chunked is needed when the length of content is unknown.
The length of your content may be as big as 10TB... but it can also be as small as 10 bytes. It doesn't matter. The chunks' sizes depend solely on the algorithms you are using to generate them and to read them.
Let's say you generate a stream of messages of different lengths, one character per second. In this case you can decide to send one-byte chunks to the client. This way the client will be able to use the data as soon as it arrives. But if your client has no use for partial messages, then you should probably save the bandwidth and send a chunk the moment you've finished generating the next message. And again, it doesn't matter how big or small the message is. It can be 2 characters or it can be 1000.
On second thought, there are some use cases for Transfer-Encoding: chunked with data of known size. But then your question becomes too broad to answer. It depends on your client code, server code, network conditions, data properties, desired user experience, etc.
And if by any chance you are asking about the optimal size from the network perspective, then just send the whole file - that's the best bet. And support Content-Range on your server instead of Transfer-Encoding: chunked.
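To make the wire format concrete: each chunk is just its size in hexadecimal, then CRLF, the data, and another CRLF, and a zero-size chunk ends the body. A minimal sketch of the sending side (Python; the function names are mine):

```python
def write_chunk(sock, data: bytes):
    # One chunk: hex size, CRLF, payload, CRLF.
    sock.sendall(b"%x\r\n%s\r\n" % (len(data), data))

def send_chunked(sock, pieces):
    """Send an iterable of byte strings as a chunked HTTP/1.1 body.

    Chunk boundaries are entirely up to the sender - one chunk per
    generated message, per byte, or anything else; the client just
    reassembles them in order.
    """
    for piece in pieces:
        if piece:  # a zero-length chunk would terminate the body early
            write_chunk(sock, piece)
    sock.sendall(b"0\r\n\r\n")  # last-chunk plus empty trailer section
```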

Restrict Response Body Size of HTTP POST Method

I am currently setting up a POST HTTP request test case in JMeter. In this case it takes a very long time until one iteration is through and the next one starts. I suspect most of that time is spent downloading the response body of the request, which in this case may be quite large.
So here is my question:
Is there some way to restrict the size of an HTTP response body, or something similar?
Thank you & BR!
HTTP is a request-response protocol, so there is no built-in way to receive only part of a response: the normal HTTP Request sampler will keep reading data until the response ends or an error occurs, whichever comes first.
I would recommend double-checking your requirement, because it sounds like you want a false-positive result by retrieving much less content than there will be in reality.
If this is really what you're looking for, you can request an arbitrary range of bytes using the Range header:
Add an HTTP Header Manager as a child of the request which produces the large response
Add a Range header with the value of bytes=0-9
That's it - the server response will be truncated to the first 10 bytes (the range is inclusive, so bytes=0-9 covers ten bytes)
As a side effect you will get HTTP status code 206 Partial Content - see the sketch below
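Outside JMeter, the same Range trick looks like this (a sketch using Python's standard library; example.com stands in for the server under test):

```python
import http.client

conn = http.client.HTTPSConnection("example.com")  # placeholder host
conn.request("GET", "/large-resource",             # placeholder path
             headers={"Range": "bytes=0-9"})
resp = conn.getresponse()

print(resp.status)                      # 206 if the server honours Range
print(resp.getheader("Content-Range"))  # e.g. "bytes 0-9/1048576"
print(resp.read())                      # at most the first 10 bytes
conn.close()
```

A server that doesn't support ranges will simply ignore the header and reply 200 with the full body, so the status code is worth checking.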

How to send chunks of video for streaming using HTTP protocol?

I am creating an app which uses sockets to send data to other devices, and I am using the HTTP protocol to send and receive data. Now the problem is, I have to stream a video and I don't know how to send (or stream) a video.
If the user jumps directly to the middle of the video, how should I send the data?
Thanks...
HTTP wasn't really designed with streaming in mind. Honestly, the best protocol is something UDP-based (SCTP is even better in some ways, but support is sketchy). However, I appreciate you may be constrained to HTTP, so I'll answer your question as written.
I should also point out that streaming video is actually quite a deep topic and all I can do here is try to touch on some of the approaches that you might want to investigate. If you have control of the end-to-end solution then you have some choices to make - if you only control one end, then your choices are more or less dictated by what's available at the other end.
If you only want to play from the start of the file then it's fairly straightforward: make a standard HTTP request and start playing as soon as you've buffered up enough video that the remaining download will finish before playback catches up with it. You don't need any special server support for this and any video format will work.
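As a back-of-the-envelope version of that buffering rule (my own formulation): with a file of S bytes, a steady download rate of d bytes per second and a playback duration of T seconds, the download takes S/d seconds, so playback must be delayed by S/d - T seconds whenever that is positive:

```python
def startup_delay(file_size_bytes: float, download_rate_bps: float,
                  duration_s: float) -> float:
    """Seconds to wait before starting playback so that the download
    finishes before playback reaches the end of the file."""
    total_download_s = file_size_bytes / download_rate_bps
    return max(0.0, total_download_s - duration_s)

# A 100 MB, 10-minute video over 1 MB/s downloads in 100 s < 600 s
# of playback, so it can start immediately:
print(startup_delay(100e6, 1e6, 600))      # -> 0.0
# Over 0.125 MB/s it takes 800 s, so wait 200 s before starting:
print(startup_delay(100e6, 0.125e6, 600))  # -> 200.0
```

In practice the download rate fluctuates, so real players keep a safety margin rather than cutting it this fine.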
Seeking is trickier. You could take the approach that sites like YouTube used to take which is to simply not allow the user to seek until the file has downloaded enough to reach that point in the video (or just leave them looking at a spinner until that point is reached). This is not the user experience that most people will expect these days, however.
To do better you need to be in control of the streaming client. I would suggest treating the file in chunks and making byte range requests for one chunk at a time. When the user seeks into the middle of the file, you can work out the byte offset into the file and start making byte range requests from that point.
If the video format contains some sort of index at the start then you can use this to work out file offsets - so, your video client would have to request at least enough to get the index before doing any seeking.
If the format doesn't have any form of index but it's encoded at a constant bit rate (CBR), then you can do an initial HEAD request and look at the Content-Length header to find the size of the file. Then, if the user seeks 40% of the way through the video, for example, you just seek to 40% of the way through the encoded frames. This relies on knowing enough about the file format to calculate an appropriate seek point and identify framing information and the like (or at least on an encoding format which allows you to resynchronise with both the audio and video streams even if you jump in at an arbitrary point in the file). This approach might also work with variable bit rate (VBR), as long as the format is such that you can recover from an arbitrary seek.
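A sketch of that CBR seek using Python's standard library (the URL is a placeholder, and the linear mapping from playback position to byte offset is exactly the CBR assumption described above):

```python
import http.client

HOST, PATH = "example.com", "/video.ts"  # placeholder resource

# 1. HEAD request to learn the total size of the file.
conn = http.client.HTTPSConnection(HOST)
conn.request("HEAD", PATH)
head = conn.getresponse()
head.read()  # HEAD has no body, but this resets the connection state
size = int(head.getheader("Content-Length"))

# 2. Seeking to 40% of the duration means 40% of the bytes under CBR.
offset = int(size * 0.40)

# 3. Range request from that offset to the end of the file (reusing the
#    connection, assuming the server keeps it alive); the player must
#    then resynchronise on the next frame boundary it finds.
conn.request("GET", PATH, headers={"Range": "bytes=%d-" % offset})
resp = conn.getresponse()
assert resp.status == 206, "server must support byte ranges for this"
# ... decode and play from resp ...
```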
It's not ideal but as I said, HTTP wasn't really designed for streaming.
If you have control of the file format and the server, you could make life easier by making each chunk a separate resource. This is how Apple HTTP live streaming and Microsoft smooth streaming both work. They need tool support to pre-process the video, and I don't know if you have control of the server end. Might be worth looking into, however. These also do more clever tricks such as allowing a client to switch between multiple versions of the stream encoded at different bit rates to cope with differences in bandwidth.

Pre-flush head tag with gzip support

http://developer.yahoo.com/performance/rules.html
It says there that it is good to pre-flush the head tag.
But I have a question: will it help while using gzip? (I am using Apache 2.)
I think the full document will get gzipped in one shot and then be sent to the client.
Or is it also possible to have gzip as well as pre-flushing of the head tag?
EDITED
The original version of this question suggested we were dealing with HTTP headers rather than the <head> section of an HTML document. I will leave my original answer below, but it actually has no relevance to this specific question.
To answer the question about pre-flushing the <head> section of a document - while it would be possible to do this in combination with gzip, it is probably not possible without more granular control over the gzip process than Apache affords. It is possible to break a gzipped stream into chunks that can be decompressed on their own (see this) but if there is a way to control Apache's gzip implementation to such a degree then I am not aware of it.
Doing so would likely decrease the efficacy of the gzip compression, making the compressed size larger, and would only be worth doing when the <head> of a document was particularly large - say, greater than 10KB (this is a somewhat arbitrary value I arrived at by reading about how gzip works under the bonnet, and it should definitely not be taken as gospel).
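For reference, the "chunks that can be decompressed on their own" technique looks like this at the zlib level (a Python sketch of the general idea, not of anything Apache's mod_deflate exposes): a sync flush forces all pending output onto a byte boundary, so everything sent so far forms a decompressible prefix.

```python
import zlib

compressor = zlib.compressobj(wbits=31)  # wbits=31 selects the gzip container

def compress_and_flush(part: bytes) -> bytes:
    """Compress one part of the document so it can be sent immediately.

    Z_SYNC_FLUSH pads the deflate stream to a byte boundary, so the
    bytes emitted so far form a decompressible prefix - at the cost of
    slightly worse compression than one uninterrupted stream.
    """
    return compressor.compress(part) + compressor.flush(zlib.Z_SYNC_FLUSH)

head_part = compress_and_flush(b"<html><head>...</head>")  # send right away
body_part = compressor.compress(b"<body>...</body></html>")
tail = compressor.flush()  # default Z_FINISH closes the gzip stream cleanly
```

The repeated flushes are what erode the compression ratio mentioned above, which is why this only pays off for a large <head>.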
Original answer, relating to the HTTP headers:
Purely from the viewpoint of the HTTP protocol, rather than exactly how you would implement it on an Apache-based server, I can't see any reason why you can't pre-flush the headers and also use gzip to compress the body. Keeping in mind the fact that the headers are never gzipped (if they were, how would the client know they had been?), the transfer encoding of the content should have no effect on when you send the headers.
There are, however, a couple of things to keep in mind:
Once the headers have been sent, you can't change your mind about the transfer encoding. So if you send headers which state that the body will be gzipped, then realise that your body is only 4 bytes, you would still have to gzip it anyway, which would actually increase the size of the body. This probably wouldn't be a problem unless you were omitting the Content-Length: header, which, while possible, is bad practice as it means you cannot use persistent connections (unless you fall back on chunked transfer encoding). This leads on to the next point...
You cannot send a Content-Length: header in this scenario. This is because you don't know the size of the body until you have compressed it, by which time it is ready to send, so you are not really (from the server's point of view) pre-flushing the headers, even if you do send them separately before you start to send the body.
If it takes you a long time to compress the body of the message (slow or heavily loaded server, very large body, etc.), and you don't start the compression until after you have sent the headers, there is a risk the client may time out waiting for the rest of the response. This depends entirely on the client, but there are so many HTTP client implementations out there that this possibility cannot be totally discounted.
In short, yes, it is possible to do it, but there is no catch-all "yes, do it" or "no, don't do it" answer - whether you would do it depends on each request and the nature of its response.

Split CRLF between TCP payloads

I'm currently writing a low-level HTTP parser and have run into the following issue:
I am receiving HTTP data on a packet-by-packet basis, i.e. one TCP payload at a time. When parsing this data, I am using the HTTP protocol standard of searching for CRLF to delineate header lines and chunk data (in the case of chunked encoding), and the double CRLF to delineate the header from the body.
My question is: do I need to worry about the possibility of a CRLF being split between two TCP packet payloads? For example, the HTTP header will finish with CRLFCRLF. Is it possible that two subsequent TCP packets will contain CR, and then LFCRLF?
I am assuming yes, this is a case to worry about, since the application (HTTP) and TCP layers are rather independent of each other.
Any insight into this would be highly appreciated, thank you!
Yes, it is possible for the CRLF to be split across TCP packets. Just consider a single HTTP header line that is exactly one byte longer than fits in a TCP segment: there is room for the CR, but not for the LF.
So no matter how tricky your code gets, it must be able to handle this kind of split.
What language are you working in? Does it not have some form of buffered read functionality for the socket, so you don't have this issue?
The short answer to your question is yes, theoretically you do have to worry about it, because it is possible the packets would arrive like that. It is very unlikely, because most HTTP endpoints will tend to send the header in one packet and the body in subsequent packets. This is less by convention and more by the nature of the way most socket-based programs/languages work.
One thing to bear in mind is that while the protocol standards are quite clear about the CRLF separation, many people who implement HTTP (clients in particular, but to some degree servers as well) don't know or care what they are doing and will not obey the rules. They tend to separate lines with LF only - particularly the blank line between the head and the body; I couldn't quickly count the number of code segments I have seen with this problem. While this is technically a protocol violation, most servers/clients will accept this behaviour and work around it, so you will need to as well.
If you can't do some kind of buffered read, there is some good news. All you need to do is read a packet at a time into memory and append the data to the previous packet(s). Every time you have read a packet, scan your data for a double-CRLF sequence; if you don't find it, read the next packet, and so on until you find the end of the head. Memory usage stays relatively small, because the head of any request should rarely be more than 5-6KB, which, given an Ethernet-sized segment of around 1,450 bytes, means you shouldn't ever need to hold more than 4 or 5 packets in memory to cope with it.
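That accumulate-and-scan loop might look like this (a Python sketch; the receive size and header limit are arbitrary). Because the search always runs over the whole buffer, a CRLFCRLF split across any packet boundary is still found:

```python
import socket

MAX_HEAD = 16 * 1024  # sanity limit so a bad peer can't grow the buffer forever

def read_head(sock: socket.socket):
    """Accumulate TCP payloads until the double CRLF that ends the head.

    Returns (head, leftover), where leftover holds any body bytes that
    arrived in the same packet as the end of the head.
    """
    buf = b""
    while True:
        end = buf.find(b"\r\n\r\n")  # scans across packet boundaries
        if end != -1:
            return buf[:end + 4], buf[end + 4:]
        if len(buf) > MAX_HEAD:
            raise ValueError("header section too large")
        data = sock.recv(2048)       # one TCP payload at a time
        if not data:
            raise ConnectionError("peer closed before end of head")
        buf += data
```

A tolerant parser would also scan for a bare LFLF, per the note above about clients that separate lines with LF only.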
