HTTP/1.1 protocol testing tool

Currently I'm implementing an HTTP server targeted at extremely constrained environments (I'd have loved to use something readily available, but nothing matched my needs).
The HTTP/1.1 protocol is a tricky beast with lots of caveats (the RFC is one thing, but the actual specification is "HTTP is whatever Apache accepts" ;) )
I'd like my HTTP server to be as conformant as possible, and for that I must of course test it. So I'm looking for a tool that can reliably and reproducibly craft HTTP requests, most importantly the uncommon cases, like chunked transfer encoding + multipart POST in a single request (my attempts at making curl, wget, Firefox, Chromium or Opera create such a request were fruitless).
TL;DR: I need a tool for testing HTTP/1.1 protocol server implementations.
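For reference, a request like that can be hand-crafted over a raw socket when no off-the-shelf tool cooperates. A minimal Python sketch; the host, port, path and boundary are placeholders for the server under test:

import socket

HOST, PORT = "localhost", 8080          # the server under test
BOUNDARY = "testboundary"

# Build the multipart/form-data body first...
body = (
    "--" + BOUNDARY + "\r\n"
    'Content-Disposition: form-data; name="field1"\r\n'
    "\r\n"
    "value1\r\n"
    "--" + BOUNDARY + "--\r\n"
).encode()

# ...then frame it as a single chunk plus the terminating zero-size chunk.
chunked = b"%x\r\n" % len(body) + body + b"\r\n0\r\n\r\n"

request = (
    "POST /upload HTTP/1.1\r\n"
    "Host: %s\r\n"
    "Content-Type: multipart/form-data; boundary=%s\r\n"
    "Transfer-Encoding: chunked\r\n"
    "Connection: close\r\n"
    "\r\n" % (HOST, BOUNDARY)
).encode() + chunked

with socket.create_connection((HOST, PORT)) as s:
    s.sendall(request)
    print(s.recv(65536).decode(errors="replace"))  # dump the raw response

Because everything is built by hand, the same skeleton can produce any malformed variant you want to test (bad chunk sizes, missing final boundary, and so on).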

Related

Intentionally bad HTTP clients for testing

What are good "bad HTTP clients" I can use to test my HTTP servers?
For instance, there are servers like
https://httpbin.org/
https://badssl.com/
which allow you to test a client against different, sometimes intentionally bad, behavior.
I'm looking for an HTTP client utility for testing HTTP servers. It might send a wrong Content-Length, close the connection in the middle of a request, or do other bad things that a robust HTTP server should handle.
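Absent a ready-made tool, both of those behaviours are easy to script by hand over a raw socket. A rough Python sketch; host, port and path are placeholders:

import socket

HOST, PORT = "localhost", 8080          # the server under test

# Case 1: advertise a Content-Length larger than the body actually sent.
with socket.create_connection((HOST, PORT)) as s:
    s.sendall(b"POST / HTTP/1.1\r\n"
              b"Host: test\r\n"
              b"Content-Length: 100\r\n"   # a lie: only 5 bytes follow
              b"\r\n"
              b"hello")
    s.settimeout(5)
    try:
        print(s.recv(4096))               # a robust server should 400 or time out
    except socket.timeout:
        print("server is still waiting for the missing bytes")

# Case 2: close the connection in the middle of the request.
with socket.create_connection((HOST, PORT)) as s:
    s.sendall(b"GET /very/long/pa")       # truncated on purpose
# leaving the with-block closes the socket abruptly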
In the past, I have used Tamper Data (for Firefox). I see that there is an equivalent for Chrome - Tamper Chrome.
These plugins allow you to edit the HTTP request prior to sending it to the server. This way you can cover a number of test cases, e.g.:
Changing the content length or type
Bypassing the client-side field validation
These are great for manual, exploratory testing.
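For scripted rather than manual tampering, the same idea can be automated. As an illustration (my sketch, not part of the original answer), a small mitmproxy addon can rewrite requests in flight:

# save as tamper.py and run with:  mitmdump -s tamper.py
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # Deliberately mangle POST requests before they reach the server.
    # (Whether the proxy forwards the bogus value verbatim can depend
    # on the mitmproxy version; verify on the wire.)
    if flow.request.method == "POST":
        flow.request.headers["Content-Length"] = "999"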

HTTP/1.1 to HTTP/2: what about headers?

In HTTP 1.1, the status line was
scheme/version code reason
HTTP/1.1 200 OK
I see :scheme and :status headers in the HPACK spec. I don't however see anything for version or reason? Is there not one?
In a request in HTTP 1.1, the request line was
method uri scheme/version
POST http://myhost.com HTTP/1.1
I see :method and I see :path, which I think is just a relative path, which is not the same as the full absolute path (and since Chrome and Firefox are pushing HTTPS for HTTP/2, this may make sense). I do not see version header though.
Is there a version header? Or is it seen that this will always be known before the protocol decision such that it is not really needed?
What about reason codes? Is it assumed these are pretty constant so that goes away (I am guessing here)?
In HTTP/1, the version token was needed to differentiate HTTP/1.0 from HTTP/1.1, since they had the same wire representation but supported different features.
For example, a client declaring HTTP/1.1 implicitly tells the server that it supports persistent connections and content chunking.
With HTTP/2, the protocol version is negotiated.
In clear-text HTTP/2, the Upgrade header reports h2c, where the 2 means version 2 of the protocol. I imagine that for HTTP/3 the token will change to h3c.
Similarly happens for encrypted HTTP/2 where the token h2 is negotiated via ALPN.
Reason messages have been dropped as being redundant, as the status code was already conveying all the necessary information (not to mention that they could be attack vectors).
For these reasons, HTTP/2 has neither a version nor a reason pseudo-header.
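To make the mapping concrete: an HTTP/1.1 request line and Host header such as

GET /index.html HTTP/1.1
Host: example.com

become, in HTTP/2, a set of pseudo-headers with no version token:

:method: GET
:path: /index.html
:scheme: https
:authority: example.com

and a status line such as HTTP/1.1 200 OK is reduced to a single :status: 200 with no reason phrase.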

The essence of HTTP protocol

There's a lot of elaboration about the HTTP protocol, but in essence it's nothing but a string of ASCII characters transmitted over the TCP protocol, and that string defines the semantics of the protocol. Am I right about this?
If so, two questions follow:
Can we devise any protocol we want, since it just looks like passing strings over the Internet?
Why don't we compress the HTTP strings before passing them down to the TCP level?
That's right, HTTP is by no means special, but because it underpins the web it receives a lot of attention. It's an application-level protocol like SMTP or FTP or any other.
Yes, you could design any protocol you like. For fun, grab an RFC for SMTP, FTP or HTTP and connect to your own server and learn the protocol. RFC2324 is also required reading - http://www.faqs.org/rfcs/rfc2324.html
Lack of HTTP header compression has been talked about a lot in recent years. See Steve Souders' blog/books, YSlow and the Google Page Speed sites. The SPDY protocol is probably the front runner at addressing several of the current issues with HTTP connection management, performance and security - http://www.chromium.org/spdy/spdy-whitepaper
Sure. But you would have to get others to adopt your protocol (unless it is an internal/proprietary spec). And if you can coherently express your communique in the form of HTTP, why not use it? It's widely implemented in virtually every language and operating system, and is well understood and easily debugged. Don't just create protocols for the heck of it.
The HTTP specification provides for several common compression schemes. gzip and deflate are particularly widely used. See, for example, Apache's mod_gzip and mod_deflate. Clients and servers routinely negotiate compression on your behalf.
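That negotiation is visible on the wire. A typical exchange looks like this (body compression, distinct from the header compression discussed above; sizes are illustrative):

GET /page HTTP/1.1
Host: example.com
Accept-Encoding: gzip, deflate

HTTP/1.1 200 OK
Content-Encoding: gzip
Content-Length: 1234

(gzip-compressed body follows)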

How could we fool the HTTP protocol?

Although HTTP is ubiquitous, it comes with its baggage of headers, which in my case is becoming a problem.
The data I need to transfer is a tiny fraction of the HTTP header size.
Is there another protocol I can use that is still understood by browsers and other networks but doesn't come with the baggage of HTTP?
Is there any other way to skip the headers and add them back at the destination, so that only a minuscule amount of data is transferred over the network?
No.
No.
Many HTTP headers are optional. A typical browser request is much larger than a minimal request, which might look like:
GET /doc HTTP/1.1
Host: example.com
Connection: close
(I can say with confidence that requests of this form work because I use them all the time when testing Web server response via telnet example.com 80.)
Possibly you can get useful results simply by omitting some headers.
HTTP requests can be quite small. As chaos points out in his answer, you don't really need to send many headers with a request. The only header that's essential is Host. I can simplify chaos' example a bit more by using HTTP 1.0, which doesn't feature persistent connections.
GET / HTTP/1.0
Host: example.com
(blank line is necessary)
The reply can be similarly simple:
HTTP/1.0 200 OK
Content-Type: text/html
data content
In this case, the overhead of HTTP is about 40 bytes in the request and the response. A typical TCP packet (bounded by the 1500-byte Ethernet MTU) therefore leaves plenty of room in the response packet for the actual data.
There are other HTTP headers, and they do have value. You can include cache information and do conditional GETs. You can use an HTTP/1.1 persistent socket to make subsequent requests faster. Etc, etc. You don't have to use any of this stuff if you don't want, but one nice thing about HTTP is there's a standard way to do more complicated protocols when you need it.
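For example, a conditional GET lets a client revalidate a cached copy instead of re-downloading it:

GET /doc HTTP/1.1
Host: example.com
If-Modified-Since: Sat, 29 Oct 1994 19:43:31 GMT

HTTP/1.1 304 Not Modified

(no body; the client reuses its cached copy)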
As for doing minimal HTTP in Java ME, if you really care about every byte you may be best off writing your own simple HTTP client over a plain TCP socket. If you're talking to a known server, you don't need to implement much at all. (If you're talking to arbitrary servers, you need to pay more attention to error handling, redirects, etc.)
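The question was about Java ME, but the approach is the same in any language. A minimal sketch in Python, using Connection: close so the framing is simply "read until EOF":

import socket

def fetch(host, path="/", port=80):
    req = ("GET %s HTTP/1.1\r\n"
           "Host: %s\r\n"
           "Connection: close\r\n"
           "\r\n" % (path, host)).encode()
    with socket.create_connection((host, port)) as s:
        s.sendall(req)
        chunks = []
        while True:                     # read until the server closes
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)             # status line + headers + body

print(fetch("example.com")[:160])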
WebSockets are coming in HTML5 and should suit your needs. A standard HTTP connection can be upgraded to the WebSocket protocol. I suspect the specification might still be a bit young, but it might fit the bill.

Is HTTP/1.0 still in use?

Say one is to write an HTTP server/client: how important is it to support HTTP/1.0? Is it still used anywhere nowadays?
Edit: I'm less concerned with the usefulness/importance of HTTP/1.0 than with the amount of software that actually uses it for non-internal purposes in the real world (browsers, robots, smartphones/stupidphones, etc.); unit testing would count as internal use, for example.
As of 2016, you would expect its prominence to have declined even further, since HTTP/1.1 was introduced in 1999, about 17 years ago.
I checked 7,727,198 lines of logs to see what percent I get of HTTP/1.0 and HTTP/1.1:
Protocol      Counts   Percent
--------------------------------
HTTP/0.9           0     0.00%
HTTP/1.0   1,636,187    21.17%   (all)
HTTP/1.0      15,415     0.20%   (without the obvious robots)
HTTP/1.1   6,091,011    78.83%
HTTP/2             0     0.00%
From what I can see, most of the HTTP/1.0 requests are from robots, so I tried to remove entries that were obviously from such (i.e. a User-Agent including the word robot, bot, slurp, etc.).
So it looks like the amount of end users still stuck with HTTP/1.0 is very limited today (0.2%). However, if you want to let robots check out your websites, you may need/want to keep HTTP/1.0 operational. Most will anyway include the Host: ... header even though they advertise their connection as an HTTP/1.0 protocol.
Also, the difference between HTTP/1.0 and HTTP/1.1 is very blurry in terms of implementation. Most implementations happily mix both. I would not worry too much about still accepting/handling HTTP/1.0 requests.
On another server I am starting to see HTTP/2.0 requests that look like this (I got 2,427 of them against 34,161,268 HTTP/1.0 and HTTP/1.1 requests, i.e. 0.007%):
PRI * HTTP/2.0
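(That line is the start of the HTTP/2 connection preface defined in RFC 7540; the full sequence a client sends is the exact octets:)
PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n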
wget uses HTTP/1.0, and it is still relatively popular (though it does support a few HTTP/1.1 features like the Host: header, which is necessary to access any virtual hosts).
A fair number of servers will deliberately return HTTP/1.0 responses because some (older) browsers will afford an HTTP/1.0 server a higher connection limit than the 2-connection limit imposed for HTTP/1.1's persistent connections.
But in general, most "HTTP/1.0" implementations are really just slightly limited versions of HTTP/1.1 implementations, and many HTTP/1.1 implementations don't really support some features of that version (pipelining in particular).
I use it all the time when I'm telnet-ing to a server to verify connectivity or figure out why it's not working:
$ telnet 192.168.1.1 80
GET / HTTP/1.0\r\n
\r\n
...
(Because making a 1.0 request doesn't require that I provide any extra headers).
HTTP/1.0 is very important when writing very basic clients that don't need the overhead of all the 1.1 things like pipelining and the other complicated machinery 1.1 requires. Send a request, get a response, disconnect: that is very easy to code for. This can be useful when writing test cases for your server that only need to exercise the application functionality and NOT the HTTP protocol implementation.
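A test case along those lines can be tiny. A hedged sketch in Python, assuming the server under test listens on localhost:8080 and serves a 200 on /:

import socket

def test_root_returns_200():
    # One HTTP/1.0 request, one response, server closes the connection.
    with socket.create_connection(("localhost", 8080)) as s:
        s.sendall(b"GET / HTTP/1.0\r\n\r\n")
        reply = s.recv(4096)
    assert b" 200 " in reply.split(b"\r\n", 1)[0]   # status line contains 200

test_root_returns_200()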
There are lots of mobile browsers and applications that use 1.0 because they don't have the space or need for more sophisticated 1.1 implementations, and the latency issues with non-3G connections on non-smart phones completely negates any benefits of 1.1 features.
There are also lots of proxies that degrade everything to 1.0 regardless of what the client asks for, and then there are the IE issues.
So the short answer is, for a general purpose HTTP server, 1.0 is very relevant.
Looking into this myself for other purposes:
"HTTP/1.0 is in use by proxies, some mobile clients, and IE when
configured to use a proxy. So 1.0 appears to still account for a non-
trivial % of traffic on the web overall.
...
Yes, there are many 1.0 clients still out there."
Source (July 2009): http://groups.google.com/group/erlang-programming/msg/08f6b72d5156ef74
:-(
Update (March 2011):
If you are going to build a client/server thingy, make the client use HTTP/1.1, and make the server accept both 1.1 and 1.0.
Doing web-development, it is a PITA to get clients trying to load a page without the Host header, because I have no way to know which site I am supposed to load :-S
So you better don't build a client like that ;-)
IME it's been a very long time since I've seen a true HTTP/1.0 request (including from the mobile devices fuzzylollipop mentions).
I say a true request because MSIE still pretends to downgrade to HTTP/1.0 by default when you connect via a proxy (unless you dig into the config): all the outgoing requests are flagged as HTTP/1.0, yet it still includes HTTP/1.1-specific request headers and respects all the HTTP/1.1 responses.
Curiously, IIS, in a mirror image, happily ignores the HTTP version (although I've not experimented much with this to see whether it only does so for MSIE user agents).
So by curious coincidence, MSIE and IIS work much better with proxies than standards-compliant tools do.
C.
