What does this line mean in RFC 2068 - HTTP

In addition, the proliferation of incompletely-implemented
applications calling themselves "HTTP/1.0" has necessitated a
protocol version change in order for two communicating applications
to determine each other's true capabilities.

From the RFC:
HTTP has been in use by the World-Wide Web global information initiative since 1990. The first version of HTTP, referred to as HTTP/0.9, was a simple protocol for raw data transfer across the Internet.
Rephrased:
Before HTTP was standardised there were differences in implementations that meant they couldn't always communicate with each other correctly (e.g. certain web-browsers couldn't work with certain web-servers). The RFC article refers to these pre-standardisation implementations as using HTTP/0.9.
HTTP/1.0, as defined by RFC 1945, improved the protocol by allowing messages to be in the format of MIME-like messages, containing metainformation about the data transferred and modifiers on the request/response semantics. However, HTTP/1.0 does not sufficiently take into consideration the effects of hierarchical proxies, caching, the need for persistent connections, and virtual hosts. In addition, the proliferation of incompletely-implemented applications calling themselves "HTTP/1.0" has necessitated a protocol version change in order for two communicating applications to determine each other's true capabilities.
Rephrased:
After HTTP was standardised as HTTP/1.0 it certainly helped with the interoperability and compatibility problems, but version 1.0 of the protocol simply assumed all HTTP software would be able to use it for their existing applications. Once HTTP/1.0 had been in use for a while, the maintainers of the HTTP protocol specification saw that they needed to extend HTTP to support more use-cases (e.g. proxies, caches, persistent connections, virtual hosts). While some of these things could be done using the built-in extension mechanisms in HTTP/1.0, they felt the need to increment the version number to HTTP/1.1 in order to prevent an implementation simply guessing whether or not the remote host supports a feature.
Example
A good example is the Host header in HTTP/1.1, which allows a web-server serving from a single IP address and port number to serve up different websites based on the Host header (before HTTP/1.1 existed, webservers could only serve one website per IP address, which is a problem). HTTP/1.0 does allow clients and servers to add their own custom headers, such as Host, but there is no way for the client or the server to know that the other end actually supports the Host header. In HTTP/1.1 the Host header was formally added to the specification, so if both the client and server declare they use HTTP/1.1, each end knows the other will recognize the Host header and handle it correctly.
So in the HTTP/1.0 days, with custom headers, this is how it would play out if a browser requested www.example.com and it were served from a shared webhost:
Browser (to DNS server): "Please give me the IP address for 'www.example.com'"
DNS Server (to browser): "www.example.com is 198.51.100.7"
Browser (to 198.51.100.7): "Hello, I speak HTTP/1.0, please send me index.html for Host: www.example.com"
Server (to browser): "I also speak HTTP/1.0, here is index.html for 'not-actually-example.com'"
As you can see, the browser got not-actually-example.com even though it asked for www.example.com, because the Web-server was using HTTP/1.0 which does not recognize the Host header, even though the web-browser was sending the Host header (as an extension/experimental header). The browser software has no way of knowing if not-actually-example.com is what the user wanted or not.
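On the wire, the browser's request to 198.51.100.7 in that exchange would look roughly like this (a sketch; the path and hostname are just the ones from the example above, and the Host line is the experimental extra header an HTTP/1.0 server is free to ignore):
GET /index.html HTTP/1.0
Host: www.example.com
An HTTP/1.1 server is required to look at that Host line; an HTTP/1.0 server may simply serve whatever site it considers the default.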

In human terms, what they're saying is: so many applications claimed to speak HTTP 1.0 while they didn't, that nobody could tell any more whether something really was HTTP 1.0 when it said so.
To get out of that, they chose a new number.


The use of HTTP headers

developer.mozilla.org says:
HTTP headers allow the client and the server to pass additional
information with the request or the response
but I don't understand what the use of that is. Why is there a need to pass additional information with the request or the response?
This is a hard question to answer concisely because of the many different types of HTTP headers and what they do, but here's an attempt at a one-line answer:
HTTP headers allow a client and server to understand each other better, meaning they can communicate more effectively.
So then if you look at individual headers, it becomes clearer why each is needed:
User-Agent header
Sent by the client
Tells the server about the client's setup (browser, OS etc.)
Mostly used to improve client experience, e.g. tailoring responses for mobile devices or dealing with browser compatibility issues
Set-Cookie header
Sent by the server
Tells the browser to set a cookie
Host header
Sent by the client
Specifies the exact domain name of the site the client wants to reach; this is used when a single server hosts multiple websites (a.k.a. virtual hosting)
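As a rough illustration, here is a minimal sketch using Python's standard http.client (assuming example.com is reachable; the User-Agent string is made up) of a client passing request headers and reading a response header such as Set-Cookie:
import http.client

# Sketch: the client sends a request header (User-Agent) and reads a
# response header (Set-Cookie) from the server's reply.
conn = http.client.HTTPConnection("example.com", 80, timeout=10)
conn.request("GET", "/", headers={"User-Agent": "demo-client/0.1"})
response = conn.getresponse()
print(response.status, response.reason)
print("Set-Cookie:", response.getheader("Set-Cookie"))  # None if the server sent no cookie
conn.close()
Note that http.client adds the Host header automatically, since HTTP/1.1 requires it.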

Are http and https resources equivalent?

Are HTTP and https resources equivalent? That is, does http://example.com/ABC refer to the same resource as https://example.com/ABC?
Evidence for:
(1) Cookies with a matching domain and path and without the "secure" attribute are set and returned independent of protocol.
(2) HTTP Strict Transport Security bounces you from HTTP to HTTPS with the implicit assumption that the resource is the same.
Evidence against:
(1) The same-origin policy treats a different protocol as a different origin.
(2) The HTTP RFC's URI comparison rules treat an http URI and the corresponding https URI as unequal.
(3) Resources for other protocols like FTP aren't equivalent to HTTP resources for the same domain (e.g. the FTP server's root dir may differ), so what magic does HTTPS have over FTP in resource equivalence to HTTP?
I am going to say - Yes - they are the same resources.
The protocol only describes the transport layer.
To me
http://example.com/ABC
reads like following:
At example.com, a commercial domain, I have a resource called ABC.
I read the same for the following irrespective of protocol.
https://example.com/ABC
However, web servers can be configured to present entirely different content at the same ABC resource path depending on whether HTTPS is used, but in my mind they should not do so.
The only caveat is that if someone wants to return some sort of warning for using plain HTTP, the two URLs now have different meanings; but that case should return a 500 or some other error condition rather than different content.
The answer is, it depends on the web server configuration. They can, and in a lot of cases do, point to the same resources, because HTTP and HTTPS tend to be bound to the same single site/application.
However, because they are accessed over different TCP ports (HTTP port 80, HTTPS port 443), it is perfectly possible to have the HTTP resource be served up by a different bound site than the HTTPS resource with the same URI (except protocol) and therefore be totally different.
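To make that concrete, here is a toy sketch (not a real deployment: plain HTTP on both ports, 8080/8443 standing in for 80/443, made-up body text) where the same path is bound to two different "sites", one per port; in a real setup the 443 side would additionally be wrapped in TLS:
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

def make_handler(body: bytes):
    # Each "site" gets its own handler that returns a different body
    # for the same path (e.g. /ABC).
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
    return Handler

# Port 8080 plays the role of the HTTP binding, 8443 the HTTPS binding.
sites = [
    HTTPServer(("127.0.0.1", 8080), make_handler(b"content served on the HTTP binding\n")),
    HTTPServer(("127.0.0.1", 8443), make_handler(b"different content on the HTTPS binding\n")),
]
for site in sites:
    threading.Thread(target=site.serve_forever, daemon=True).start()
input("Try http://127.0.0.1:8080/ABC and http://127.0.0.1:8443/ABC, then press Enter to stop\n")
Whether the two bindings share content or not is purely a configuration decision, which is the point of the answer above.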

Varnish + Static HTML Pages

I've recently come across an HTTP web accelerator called Varnish. From what I've read, Varnish speeds up delivery of a website by optimizing every process of HTTP communication with the HTTP server using a reverse proxy configuration.
My question is that if you have a website that has its caching mechanism configured all the way down to static html files then how much more of an effect will Varnish have on this? Does a reverse proxy cut down the work that is performed by the HTTP server to process the request? If you have everything extensively cached on the server-side (HTTP headers, ETags, Expires headers, database caching, fragment and page caching) then what more will an HTTP accelerator do to improve on this?
Firstly, we should differentiate between two different types of caching that go on in a normal web system: HTTP caching and server-side caching.
HTTP caching is controlled by HTTP headers, notably as you point out ETag and the various expiry mechanisms (including Expires and various aspects of Cache-Control). This is all covered in RFC 2616 (HTTP), section 13, and allows HTTP caches to return a response to an HTTP request from a client without having to go back to the origin server. In effect, the HTTP caching mechanism allows another machine between client and server to act as if it's the server, in certain cases. This is actually what varnish is doing, as we'll see in a minute; another common use that many people are familiar with is when ISPs provide an HTTP cache within their network, that can generally respond faster to their subscribers (and so improve perceived performance) than the origin servers outside their network.
Server-side caching includes database caching, and fragment and page caching, which are really all just ways of the web server avoiding doing some expensive operation (say, a database query, or rendering a particular piece of a template) by doing it once then keeping the result in a cache for a while.
I said earlier that varnish was an HTTP cache, which means that straight away it's able to be more efficient than a web server serving even a static file. Consider what a web server has to do:
parse the HTTP request
map the URI (and any relevant request headers, such as Accept-Encoding) onto a file
pull up information about the file to build the HTTP headers in the response; these are known as entity headers (RFC 2616 section 7.1, which include things such as Content-Length, Content-Type and the Expires and Last-Modified headers used in HTTP caching)
figure out what additional response headers (RFC 2616 section 6.2; these include ETag and Vary, both important parts of HTTP caching) and general header fields (RFC 2616 section 4.5) are needed
write the HTTP status line and headers out to the network
write the file's contents out to the network
By comparison, varnish is upstream of all of this, so all it has to do is:
parse the HTTP request
map the URI (and any relevant request headers) onto an entry in its internal cache
see if there's an entry; if there is, write it to the network; the HTTP headers will have been stored in the cache
If there isn't an entry, varnish has to do a little more work:
connect to a web server behind it that will run through all the steps 1-6 in the first list to generate a response
write the response to the network, including all the HTTP headers
store the response in its cache
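A toy sketch of that hit/miss logic (nothing like varnish's real implementation, just the shape of it; the fetch_from_backend function and the cache key are made up for illustration):
# Toy HTTP cache: key on (method, URI); store the whole response (status,
# headers, body) so a hit can be written straight back to the client.
cache = {}

def fetch_from_backend(method, uri):
    # Stand-in for steps 1-6 on the origin web server.
    return (200, {"Content-Type": "text/html"}, b"<html>hello</html>")

def handle_request(method, uri):
    key = (method, uri)
    if key in cache:                               # hit: no backend work at all
        return cache[key]
    response = fetch_from_backend(method, uri)     # miss: ask the origin server
    cache[key] = response                          # store it for the next client
    return response

print(handle_request("GET", "/index.html"))  # miss, goes to the "backend"
print(handle_request("GET", "/index.html"))  # hit, served from the cache
A real cache also has to honour Vary, freshness lifetimes and Cache-Control, which is exactly what the HTTP caching headers discussed above control.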
In particular, because the HTTP headers and entity body (the entire response) can be cached by varnish, if it can serve out of its cache it has less work to do. When you start generating the response dynamically in your server, the difference can become even more pronounced: say you have a page that takes 5 seconds to generate but is the same for everyone hitting your site; varnish should be able to serve that in at most milliseconds out of the cache (plus whatever time it takes to get the response across the network to the HTTP client), and it has a neat mechanism (the grace period) so it can keep on doing so while hitting the backend server once to refresh the cached version of the page.
Of course, you can introduce server-side caching to improve the speed with which your web server can process a request, but if you have a response you can cache in varnish it's generally going to be faster to do that. (There are various things that are hard to cache in varnish, particularly if you're using cookies or have pages that change depending on which user is looking at them. While it's possible to continue using varnish in these cases, unless you need really incredible speed, as far as I'm aware most people start optimising those cases using server-side caching and other techniques before hitting up varnish.)
(Note that varnish can also edit headers and indeed data going in and out of the cache, which complicates things. But the main points still stand, and even while editing things on the fly varnish can be incredibly fast.)

What is an http upgrade?

This is one of the Node http events. Did the obvious Google Searches, didn't find much. What is it exactly?
HTTP Upgrade is used to indicate a preference or requirement to switch to a different version of HTTP or to another protocol, if possible:
The Upgrade general-header allows the client to specify what
additional communication protocols it supports and would like to use
if the server finds it appropriate to switch protocols. The server
MUST use the Upgrade header field within a 101 (Switching Protocols)
response to indicate which protocol(s) are being switched.
Upgrade = "Upgrade" ":" 1#product
For example,
Upgrade: HTTP/2.0, SHTTP/1.3, IRC/6.9, RTA/x11
The Upgrade header field is intended to provide a simple mechanism
for transition from HTTP/1.1 to some other, incompatible protocol.
According to the IANA registry, there are only 3 registered mentions of it (including one in the HTTP specification itself).
The other two are for:
Upgrading to TLS Within HTTP/1.1 (almost never used; not to be confused with HTTP over TLS, which defines HTTPS as widely used). This upgrade provides a mechanism similar to STARTTLS in other protocols (e.g. LDAP, SMTP, ...): the connection switches to TLS on the same port as the plain connection, after exchanging some application-protocol messages, as opposed to having the entire HTTP exchange happen on top of SSL/TLS without HTTP needing to know it's on top of TLS (the way HTTPS works).
Upgrading to WebSockets (still a draft).
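For a feel of what an upgrade looks like on the wire, here is roughly the shape of the WebSocket handshake (the path, host and key values below are the illustrative ones from the WebSocket specification, not anything you must use):
GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
After the 101 response, the TCP connection stops carrying HTTP and starts carrying the new protocol's frames; that moment is what Node's http server surfaces as the 'upgrade' event.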

HTTP 1.0 vs 1.1

Could somebody give me a brief overview of the differences between HTTP 1.0 and HTTP 1.1? I've spent some time with both of the RFCs, but haven't been able to pull out a lot of difference between them. Wikipedia says this:
HTTP/1.1 (1997-1999)
Current version; persistent connections enabled by default and works well with proxies. Also supports request pipelining, allowing multiple requests to be sent at the same time, allowing the server to prepare for the workload and potentially transfer the requested resources more quickly to the client.
But that doesn't mean a lot to me. I realize this is a somewhat complicated subject, so I'm not expecting a full answer, but can someone give me a brief overview of the differences at a bit lower level?
By this I mean that I'm looking for the info I would need to know to implement either an HTTP server or application. I'm mostly looking for a nudge in the right direction so that I can figure it out on my own.
Proxy support and the Host field:
HTTP 1.1 has a required Host header by spec.
HTTP 1.0 does not officially require a Host header, but it doesn't hurt to add one, and many applications (proxies) expect to see the Host header regardless of the protocol version.
Example:
GET / HTTP/1.1
Host: www.blahblahblahblah.com
This header is useful because it allows you to route a message through proxy servers, and also because your web server can distinguish between different sites on the same server.
So if you have blahblahlbah.com and helohelohelo.com both pointing to the same IP, your web server can use the Host field to distinguish which site the client machine wants.
Persistent connections:
HTTP 1.1 also allows you to have persistent connections which means that you can have more than one request/response on the same HTTP connection.
In HTTP 1.0 you had to open a new connection for each request/response pair, and after each response the connection would be closed. This led to some big efficiency problems because of TCP slow start.
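A minimal sketch of connection reuse, using Python's http.client (which speaks HTTP/1.1 by default; example.com and the paths are just placeholders):
import http.client

# One TCP connection, several request/response pairs (HTTP/1.1 keep-alive).
conn = http.client.HTTPConnection("example.com", 80, timeout=10)
for path in ("/", "/index.html"):
    conn.request("GET", path)
    response = conn.getresponse()
    body = response.read()   # the body must be read before the connection can be reused
    print(path, response.status, len(body), "bytes")
conn.close()
With HTTP/1.0 semantics the server would close the connection after the first response, so the second request would need a new TCP connection (and another slow start).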
OPTIONS method:
HTTP/1.1 introduces the OPTIONS method. An HTTP client can use this method to determine the abilities of the HTTP server. It's mostly used for Cross-Origin Resource Sharing (CORS) in web applications.
Caching:
HTTP 1.0 had support for caching via the header: If-Modified-Since.
HTTP 1.1 expands on the caching support a lot by using something called 'entity tag'.
If two representations of a resource are the same, they will have the same entity tag.
HTTP 1.1 also adds the If-Unmodified-Since, If-Match, If-None-Match conditional headers.
There are also further additions relating to caching like the Cache-Control header.
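A sketch of how the entity-tag machinery plays out on the wire (the URL and ETag value are made up). First fetch:
GET /logo.png HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
ETag: "v1-abc123"
Cache-Control: max-age=3600

...image bytes...
Later revalidation:
GET /logo.png HTTP/1.1
Host: www.example.com
If-None-Match: "v1-abc123"

HTTP/1.1 304 Not Modified
ETag: "v1-abc123"
The 304 response has no body, so an unchanged resource costs only a round trip rather than a full re-download.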
100 Continue status:
There is a new status code in HTTP/1.1: 100 Continue. It exists to prevent a client from sending a large request body when the client is not even sure the server can process the request, or is authorized to process it. In this case the client first sends only the headers (including Expect: 100-continue), and the server replies 100 Continue to say: go ahead with the body.
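On the wire it looks roughly like this (a made-up upload path and size):
PUT /upload/bigfile HTTP/1.1
Host: www.example.com
Content-Length: 104857600
Expect: 100-continue

HTTP/1.1 100 Continue

(the client now sends the 100 MB body)

HTTP/1.1 200 OK
If the server instead answers with, say, 401 or 417, the client can skip sending the body entirely.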
Much more:
Digest authentication and proxy authentication
Extra new status codes
Chunked transfer encoding
Connection header
Enhanced compression support
Much much more.
HTTP 1.0 (1994)
It is still in use
Can be used by a client that cannot deal with chunked (or compressed) server replies
HTTP 1.1 (1996-2015)
Formalizes many extensions to version 1.0
Supports persistent and pipelined connections
Supports chunked transfers, compression/decompression
Supports virtual hosting (a server with a single IP address hosting multiple domains)
Supports multiple languages
Supports byte-range transfers, useful for resuming interrupted data transfers (see the example below)
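Here is the shape of a byte-range transfer mentioned above (the file name and sizes are made up):
GET /video.mp4 HTTP/1.1
Host: www.example.com
Range: bytes=1000000-1999999

HTTP/1.1 206 Partial Content
Content-Range: bytes 1000000-1999999/5000000
Content-Length: 1000000

...that one-megabyte slice of the file...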
HTTP 1.1 is an enhancement of HTTP 1.0. The following lists the four major improvements:
Efficient use of IP addresses, by allowing multiple domains to be served from a single IP address.
Faster response, by allowing a web browser to send multiple requests over a single persistent connection.
Faster response for dynamically-generated pages, by support for chunked encoding, which allows a response to be sent before its total length is known.
Faster response and great bandwidth savings, by adding cache support.
For trivial applications (e.g. sporadically retrieving a temperature value from a web-enabled thermometer) HTTP 1.0 is fine for both a client and a server. You can write a bare-bones socket-based HTTP 1.0 client or server in about 20 lines of code.
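For instance, a bare-bones HTTP 1.0 client along those lines might look like this sketch (plain sockets, no error handling; the host and path are placeholders):
import socket

# Bare-bones HTTP/1.0 client: send one request, read until the server closes.
host, path = "example.com", "/"
request = f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n"
with socket.create_connection((host, 80), timeout=10) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:          # HTTP/1.0: server closes when the response is done
            break
        response += chunk
head, _, body = response.partition(b"\r\n\r\n")
print(head.decode("iso-8859-1"))
print(len(body), "body bytes")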
For more complicated scenarios HTTP 1.1 is the way to go. Expect a 3- to 5-fold increase in code size for dealing with the intricacies of the more complex HTTP 1.1 protocol. The complexity comes mainly from the need to create, parse, and respond to various headers. You can shield your application from this complexity by using an HTTP library on the client side, or a web application server on the server side.
A key compatibility issue is support for persistent connections. I recently worked on a server that "supported" HTTP/1.1, yet failed to close the connection when a client sent an HTTP/1.0 request. When writing a server that supports HTTP/1.1, be sure it also works well with HTTP/1.0-only clients.
The first differences I can recall off the top of my head are multiple domains running on the same server and partial resource retrieval; the latter lets you resume and speed up the download of a resource (it's what almost every download accelerator does).
If you want to develop an application like a website or similar, you don't need to worry too much about the differences, but you should at least know the difference between the GET and POST verbs.
Now if you want to develop a browser then yes, you will have to know the complete protocol, the same as if you are trying to develop an HTTP server.
If you are only interested in knowing the HTTP protocol, I would recommend starting with HTTP/1.1 instead of 1.0.
HTTP 1.1 is the latest version of the Hypertext Transfer Protocol, the World Wide Web application protocol that runs on top of the Internet's TCP/IP suite of protocols. Compared to HTTP 1.0, HTTP 1.1 provides faster delivery of Web pages and reduces Web traffic.
Web traffic example: if many users access a server for data at the same time, there is a chance the server will hang under the load. That load is Web traffic.
HTTP 1.1 includes the Host header in its specification, while HTTP 1.0 doesn't officially have a Host header, although it doesn't forbid adding one.
The Host header is useful because it allows the client to route a message through proxy servers. The major differences between the HTTP 1.0 and 1.1 versions are:
HTTP 1.1 has persistent connections, which means we can have more than one request or response on the same HTTP connection,
while in HTTP 1.0 you have to open a new connection for each request and response.
HTTP 1.0 has the Pragma header, while HTTP 1.1 has the Cache-Control header, which serves a similar purpose.

Resources